Instruction: Does a general surgery clerkship influence student perceptions of surgeons and surgical careers? Abstracts: abstract_id: PUBMED:12947312 Does a general surgery clerkship influence student perceptions of surgeons and surgical careers? Background: Student perceptions of surgeons and surgical careers may influence their decision to pursue a surgical career. We evaluated the impact of a general surgery clerkship on medical student perceptions of surgeons and surgical careers and of the clerkship on student interest in surgical careers. Methods: We conducted voluntary pre- and postclerkship surveys of third-year medical students who were enrolled in their required general surgery clerkship and used 5-point Likert scale rankings to capture agreement with declarative statements (1 = strongly agree; 5 = strongly disagree). T-tests were used to evaluate both paired and individual items. Results: Students indicated that their surgical clerkship improved their opinion of surgeons (2.47; P <.001). Several perceptions changed after the clerkship. First, students agreed more strongly that surgeons were compassionate physicians (ranking, 2.87 vs 2.53; P =.003) and that patients respected surgeons (ranking, 1.84 vs 1.62; P =.026). Acknowledgment of career satisfaction by students increased (ranking, 2.57 vs 2.22; P =.008). Students more strongly disagreed that "surgeons [were] respectful of other physicians" (ranking, 3.29 vs 3.62; P =.009). Interest in surgical careers did not change significantly during the clerkship (ranking, 2.83 vs 2.68; P =.218). Conclusions: Medical student perceptions of surgeons and surgical careers generally improve during the surgical clerkship. However, student impressions of surgeons' collegial behavior and commitment to teaching deteriorate significantly during the surgical clerkship. abstract_id: PUBMED:37857141 Surgical Residents as Clerkship Scholars May Improve Student Perception of the Surgery Clerkship. Introduction: The surgical clerkship is a formative experience in the medical school curriculum and can leave a lasting impression on students' perception of surgery. Given the historical negative stereotypes of surgeons, the clerkship represents an opportunity to impact students in a meaningful way. Methods: Our institution developed a program in which research residents can serve as junior clerkship coordinators and educators; working closely with medical students on their surgery clerkship. At the end of their clerkship, students were administered a survey with Likert-scale and free text responses regarding satisfaction with the rotation, lectures, feedback, and value of the clerkship. Student survey results were compared before (2015-2016) and after (2017-2019) the implementation of the scholar program with nonparametric statistical analysis and qualitative text analysis. Results: A total of 413 students responded to the survey with no significant difference in response rate by term (P = 0.88). We found no statistical difference with respect to overall course perception (92.3% versus 91.2%, P = 0.84), but a statistically significant difference was noted for the clarity of the provided written clerkship materials (80.3% versus 91.3%, P = 0.02) and usefulness of the feedback (57.5% versus 78.7%, P = 0.01). Qualitative analysis demonstrated an overall positive shift in perception of the clerkship, improvement in the course materials, and organization. 
Conclusions: The scholar program was overall well received by the students with improvements in certain aspects of the clerkship: organization, feedback, and course materials. This program represents a potential strategy to improve certain portions of the medical school clerkship experience. abstract_id: PUBMED:33278797 The Influence of an Acting or Subintern on Third-Year Medical School Surgery Clerkship Students. Background: Previous reports demonstrated a positive relationship between the surgical clerkship and student likelihood of pursuing a surgical career, but no studies have examined the influence a peer has on comfort during a surgical clerkship. We hypothesized that a fourth-year acting intern (AI) would positively impact third-year medical students' experience during their surgical clerkship. Methods: All third-year medical students at our institution who completed their surgical clerkship in 2019 were surveyed regarding the preclerkship and postclerkship perceptions. Results: Of the 110 students surveyed, 52 responded (47.3% response rate), and 25 students (48.1%) reported having an AI during their clerkship rotation, and 27 did not (51.9%). Presence of an AI had no significant effect on the postclerkship perception of surgery, likelihood of pursuing general surgery, or comfort in the OR. Analysis of all responses demonstrated the surgery clerkship had no significant impact on students' perception of surgery or likelihood of pursuing general surgery but did statistically increase students' comfort in the OR. Conclusions: The results of this study suggest that AI presence did not significantly influence a student's clerkship experience or comfort in the OR. Further studies are needed to determine what, if any effect, an AI could have on third-year clerkship students. abstract_id: PUBMED:37980827 A Mentorship-Based Surgery Clerkship: A Prospective Study Investigating Student Satisfaction. Background: Interest in general surgery has steadily decreased among medical students due to negative perceptions of surgeons, a lack of meaningful clerkship involvement, and inadequate mentorship. We implemented a novel mentorship-based surgery clerkship (MBSC) in which each student was matched with a resident mentor with the goals of enhancing student learning experience, meaningfulness, and interest in surgery. We hypothesized that students participating in the MBSC would report increased confidence in surgical competencies, exposure to surgical faculty, and positive perception of surgery, with no detriment to clerkship grades. Methods: Mentors were instructed to provide the following when asked by the student: (1) weekly feedback; (2) personalized goals; (3) daily cases; (4) specific videos; (5) presentation subjects; (6) operating room skills coaching. A 5-point Likert Scale survey was distributed to the students pre and post clerkship, and median differences in Likert Scale Score pre and post mentorship were compared between mentored and control groups using the unpaired Wilcoxon's test. This was a two-arm, nonrandomized trial comparing traditional curriculum with the mentored program. Results: The total sample size was n = 84. When comparing mentored to control, Wilcoxon's analysis showed greater post clerkship increases in confidence in operating room etiquette (P = 0.03), participating in rounds (P = 0.02), and suturing (P < 0.01). There were greater increases in perceived surgeon compassion (P = 0.04), respectfulness (P < 0.01), and teaching ability (P < 0.01). 
Median scores for meaningfulness overall (P = 0.01) and as measured as a feeling of positively impacting a patient (P = 0.02) were also increased when comparing mentored to control. More students were encouraged by a surgeon to pursue surgery (P = 0.01) and consider a surgery career themselves (P = 0.02). Conclusions: An MBSC increases meaningfulness, confidence, skills, and exposure in various surgical competencies. Compared to nonmentored students, MBSC students have more positive perceptions of surgeons and are more likely to pursue surgery. abstract_id: PUBMED:35078627 A multisite study of medical student perspectives on the core surgical clerkship. Background: The surgical clerkship is the primary surgical learning experience for medical students. This study aims to understand student perspectives on the surgery clerkship both before and after the core surgical rotation. Methods: Medical students at 4 academic hospitals completed pre and postclerkship surveys that included open-ended questions regarding (1) student learning goals and concerns and (2) how surgical clerkship learning could be enhanced. Thematic analysis was performed, and interrater reliability was calculated. Results: Ninety-one percent of students completed both a pre and postclerkship survey (n = 162 of 179), generating 320 preclerkship and 270 postclerkship responses. Mean kappa coefficients were 0.83 and 0.82 for pre and postclerkship primary themes, respectively. Thematic analysis identified 5 broad themes: (1) core learning expectations, (2) understanding surgical careers, culture, and work, (3) inhabiting the role of a surgeon, (4) inclusion in the surgical team, and (5) the unique role of the medical student on clinical clerkships. Based on these themes, we propose a learner-centered model of a successful surgical clerkship that satisfies discrete student learning goals and career objectives while ameliorating the challenges of high-stakes clinical surgical environments such as the operating room. Conclusion: Understanding student perspectives on the surgery clerkship, including preclerkship motivations and concerns and postclerkship reflections on surgical learning, revealed potential targets of intervention to improve the surgery clerkship. Future investigation may elucidate whether the proposed model of the elements of a successful surgery clerkship facilitates improvement of the surgical learning environment and enhances surgical learning. abstract_id: PUBMED:24680575 Impact of a third-year surgical apprenticeship model: perceptions and attitudes compared with the traditional medical student clerkship experience. Background: Current literature suggests that medical students may have negative misconceptions of a surgical career partly due to the traditional hierarchical structure of the surgical clerkship. We hypothesized that a novel medical student apprenticeship would result in positive changes in perceptions of both surgeons and surgical careers. Study Design: In the 2011 academic year, third-year medical students were offered a 2-week apprenticeship elective, in addition to the standard 8-week surgical clerkship. Unlike a traditional service, students apprenticed directly with participating faculty on a one-on-one basis. At the clerkship's end, students received a structured questionnaire assessing perceptions and attitudes toward surgeons and a career in surgery. Subjects responded anonymously using a 5-point Likert scale.
A Wilcoxon Rank-Sum was performed comparing students who participated vs those who did not participate in the apprenticeship. Results: There was a 99% survey response (105 of 106). Of those, 50 (48%) participated in the apprenticeship. Apprenticeship students were more likely to view surgeons as content (p < 0.001), well-balanced (p < 0.01), respectful (p = 0.01), and as role models (p < 0.005). Apprenticeship students were also more likely to participate in the operating room (p < 0.05) and in patient management (p < 0.05). There was no difference in an interest to pursue a surgical career between groups both before and on completion of the clerkship. Conclusions: Students participating in a surgical apprenticeship had a more positive view of surgeons and the field of surgery compared with students not participating. An apprenticeship model enhances the surgical clerkship experience and improves medical student perceptions of surgery as a career. abstract_id: PUBMED:25092269 Can I cut it? Medical students' perceptions of surgeons and surgical careers. Background: Recent years have seen a significant drop in applications to surgical residencies. Existing research has yet to explain how medical students make career decisions. This qualitative study explores students' perceptions of surgery and surgeons, and the influence of stereotypes on career decisions. Methods: Exploratory questionnaires captured students' perceptions of surgeons and surgery. Questionnaire data informed individual interviews, exploring students' perceptions in depth. Rigorous qualitative interrogation of interviews identified emergent themes from which a cohesive analysis was synthesized. Results: Respondents held uniform stereotypes of surgeons as self-confident and intimidating; surgery was competitive, masculine, and required sacrifice. To succeed in surgery, students felt they must fit these stereotypes, excluding those unwilling, or who felt unable, to conform. Deviating from the stereotypes required displaying such characteristics to a level exceptional even for surgery; consequently, surgery was neither an attractive nor realistic career option. Conclusions: Strong stereotypes of surgery deterred students from a surgical career. As a field, surgery must actively engage medical students to encourage participation and dispel negative stereotypes that are damaging recruitment into surgery. abstract_id: PUBMED:23481029 International surgical clerkship rotation: perceptions and academic performance. Background: Concerns about international training experiences in medical school curricula include the effect on student learning. We studied the educational effect of an international elective integrated into a traditional third-year (M3) surgical clerkship. Methods: A 1-week surgical elective in Haiti was available to M3 students during the conventional 8-week surgical clerkship each year for the 4 academic years 2008 to 2011. The authors collected student and surgeon perceptions of the elective using a mixed-methods web-based survey. Statistical analysis compared the academic performance of participating M3s relative to nonparticipating peers. Results: Twenty-eight (100%) students (41 trip weeks) and 3 (75%) surgeons responded. Twenty-five (89%) students believed the elective provided appropriate clinical training. Surgeon responses were consistent with students' reported perceptions. Strengths included unique clinical experiences and close interactions with faculty. 
Criticisms included recurring overwhelming clinical responsibilities and lack of local provider involvement. Academic performance of participants versus nonparticipants in the same clerkship term were statistically insignificant. Conclusions: This study demonstrates the feasibility of integrating global health experiences into traditional medical student clinical curricula. The effects on less tangible attributes such as leadership skills, fostering teamwork, and cultural competency require future investigation. abstract_id: PUBMED:32408030 Identification of Specific Educational Targets to Improve the Student Surgical Clerkship Experience. Background: This study describes the relationship between medical student perception of surgery, frequency of positive surgery clerkship activities, and overall surgical clerkship experience. Methods: Medical students at four academic hospitals completed pre- and post-clerkship surveys assessing 1) surgery clerkship activities/experiences and 2) perceptions of surgery during the 2017-2018 academic year. Results: Ninety-one percent of students completed both a pre- and post-clerkship survey (n = 162 of 179). Student perception of surgery significantly improved across the clerkship overall (P < 0.0001) and for 7 of 21 specific items. Eighty-six percent of students agreed that the clerkship was a meaningful experience. Sixty-six percent agreed that the operating room was a positive learning environment. Multivariable logistic regression identified one-on-one mentoring from a resident (OR [95% CI] = 2.12 [1.11-4.04], P = 0.02) and establishing a meaningful relationship with a surgical patient (OR = 2.21 [1.12-4.37], P = 0.02) as activities predictive of student agreement that the surgical clerkship was meaningful. Making an incision (OR = 2.92 [1.54-5.56], P = 0.001) and assisting in dissection (OR = 1.67 [1.03-2.69], P = 0.035) were predictive of student agreement that the operating room was a positive learning environment. Positive student perception of surgery before the clerkship was associated with increased frequency of positive clerkship activities including operative involvement (r = 0.26, P = 0.001) and relationships with surgical attendings (r = 0.20, P = 0.01), residents (r = 0.41, P < 0.0001), and patients (r = 0.24, P = 0.003). Conclusions: Interventions to improve surgery clerkship quality should target enhancing student relationships with residents and surgical patients as well as providing opportunity for student operative involvement beyond just suturing. In addition, fostering positive perceptions of surgery in the preclinical period may increase meaningfulness and experience with the later surgery clerkship. abstract_id: PUBMED:38177035 Medical Student Perceptions of Academic Surgery: Rose-Colored Glasses or Jaded Prism? Objective: Stereotypes of surgeons are pervasive and play a role in medical students' decisions about pursuing a surgical career. This study aimed to determine: (1) how medical students' perceptions of surgery and surgeons changed following exposure to surgery during clerkship rotations; and (2) if gender and racial/ethnic identification played a role in this process. Design, Setting, And Participants: In this mixed-method study, clerkship students at one U.S. medical school were asked to anonymously contribute words and phrases that they associated with surgery to an online "word cloud" at the beginning and end of their 12-week surgery clerkship. 
In addition, an end-of-year, anonymous survey of their perceptions was administered and analyzed using a Grounded Theory approach. Results: Of 154 students invited to complete the online survey, analysis of 24 completed surveys suggested that students believe surgical culture to be toxic, with unfriendly attitudes, strict hierarchy, and lack of work-life balance. Analysis of 678 Word Cloud responses, however, indicated that the frequency of complimentary responses increased following surgery clerkships (25% vs 36%; z = -3.26; p = 0.001), while the proportion of responses describing surgery/surgeons as male-dominated, egotistical, and scary decreased (5% vs 1%, z = 2.86, p = 0.004; 9% vs 4%, z = 2.78, p = 0.005; 3% vs 0.3%, z = 2.56, p = 0.011, respectively). The association between surgeons and being White disappeared entirely. Female students were more likely than male students to state that their perceptions did not change following exposure (40% vs 0%; z = 2.19; p = 0.029). Conclusions: With exposure to surgery, students' preconceived notions may be positively influenced. However, students continue to hold negative perceptions, and this effect may be stratified by gender identification. Institutions should work to address these perceptions in pre-clerkship years to attract a more diverse pool of future surgeons. Answer: Yes, a general surgery clerkship does influence student perceptions of surgeons and surgical careers. Studies have shown that medical student perceptions of surgeons and surgical careers generally improve during the surgical clerkship. For instance, students indicated that their surgical clerkship improved their opinion of surgeons, with increased agreement that surgeons were compassionate physicians and that patients respected surgeons. However, student impressions of surgeons' collegial behavior and commitment to teaching deteriorated significantly during the surgical clerkship, and interest in surgical careers did not change significantly (PUBMED:12947312). Other studies have implemented different strategies to enhance the clerkship experience and potentially influence student perceptions. For example, a program where surgical residents served as junior clerkship coordinators and educators was associated with improvements in certain aspects of the clerkship, such as organization, feedback, and course materials (PUBMED:37857141). A mentorship-based surgery clerkship showed that students participating in the program reported increased confidence in surgical competencies, exposure to surgical faculty, and positive perception of surgery, with more students encouraged to pursue surgery (PUBMED:37980827). Furthermore, a surgical apprenticeship model resulted in students having a more positive view of surgeons and the field of surgery compared to those not participating in the apprenticeship (PUBMED:24680575). Additionally, a study on medical student perspectives on the core surgical clerkship suggested that understanding student perspectives, including preclerkship motivations and concerns and postclerkship reflections on surgical learning, could reveal potential targets of intervention to improve the surgery clerkship (PUBMED:35078627). In summary, the general surgery clerkship has a significant impact on medical student perceptions of surgeons and surgical careers, with various interventions showing potential to improve these perceptions and possibly influence career choices.
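The studies summarized above compare pre- and post-clerkship Likert ratings using paired tests (t-tests in PUBMED:12947312, Wilcoxon tests in PUBMED:24680575 and PUBMED:37980827). As a rough, hypothetical illustration of that type of analysis only, the Python sketch below runs a paired t-test on made-up ratings; the numbers and the item wording are assumptions for illustration and are not drawn from any of the cited studies.

```python
# Hedged sketch: paired comparison of pre- vs post-clerkship Likert ratings.
# All data below are fabricated for illustration; they are not study data.
import numpy as np
from scipy import stats

# Hypothetical 1-5 ratings for an item such as "surgeons are compassionate physicians"
# (1 = strongly agree, 5 = strongly disagree, following the scale in PUBMED:12947312).
pre  = np.array([3, 3, 2, 4, 3, 2, 3, 4, 3, 2, 4, 3, 3, 2, 4, 3, 2, 3, 4, 3])
post = np.array([2, 3, 2, 3, 2, 2, 3, 3, 2, 2, 3, 2, 3, 2, 3, 2, 2, 3, 3, 2])

t_stat, p_value = stats.ttest_rel(pre, post)  # paired t-test on pre/post ratings
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")

# stats.wilcoxon(pre, post) would be the nonparametric alternative some of the
# cited studies used for ordinal Likert data.
```

On this 1-5 scale a lower post-clerkship mean corresponds to stronger agreement after the rotation, which is the direction of change the first abstract reports.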
Instruction: Is burnout separable from depression in cluster analysis? Abstracts: abstract_id: PUBMED:25527209 Is burnout separable from depression in cluster analysis? A longitudinal study. Purpose: Whether burnout and depression represent distinct pathologies is unclear. The aim of this study was to examine whether burnout and depressive symptoms manifest themselves separately from each other or are so closely intertwined as to reflect the same phenomenon. Methods: A two-wave longitudinal study involving 627 French schoolteachers (73% female) was conducted. Burnout was assessed with the Maslach Burnout Inventory and depression with the 9-item depression module of the Patient Health Questionnaire. Results: Burnout and depressive symptoms clustered both at baseline and follow-up. Cluster membership at time 1 (T1) predicted cases of burnout and depression at time 2 (T2), controlling for gender, age, length of employment, lifetime history of depression, and antidepressant intake. Changes in burnout and depressive symptoms from T1 to T2 were found to overlap. Teachers with increasing burnout experienced increases in depression and teachers with decreasing burnout experienced decreases in depression. In addition, emotional exhaustion, the core of burnout, was more strongly associated with depression than with depersonalization, the second dimension of burnout, underlining an inconsistency in the conceptualization of the burnout syndrome. Conclusions: Our results are consistent with recent findings showing qualitative and quantitative symptom overlap of burnout with depression. The close interconnection of burnout and depression questions the relevance of a nosological distinction between the two entities. Emotional exhaustion and depersonalization, the two main dimensions of burnout, may be better conceptualized as depressive responses to adverse occupational environments than as components of a separate entity. abstract_id: PUBMED:26869983 Subgroup Analysis in Burnout: Relations Between Fatigue, Anxiety, and Depression. Several authors have suggested that burned-out patients do not form a homogeneous group and that subgroups should be considered. The identification of these subgroups may contribute to a better understanding of the burnout construct and lead to more specific therapeutic interventions. Subgroup analysis may also help clarify whether burnout is a distinct entity and whether subgroups of burnout overlap with other disorders such as depression and chronic fatigue syndrome. In a group of 113 clinically diagnosed burned-out patients, levels of fatigue, depression, and anxiety were assessed. In order to identify possible subgroups, we performed a two-step cluster analysis. The analysis revealed two clusters that differed from one another in terms of symptom severity on the three aforementioned measures. Depression appeared to be the strongest predictor of group membership. These results are considered in the light of the scientific debate on whether burnout can be distinguished from depression and whether burnout subtyping is useful. Finally, implications for clinical practice and future research are discussed. abstract_id: PUBMED:25183615 Depression and burnout. Burnout is not a psychiatric diagnosis but a concept from occupational psychology concerning the impact of working conditions on the psyche of affected persons. Burnout can be a mediator for clinical depression. Severe burnout mostly fulfills the criteria for an adjustment disorder.
The treating physician is expected to clarify the causes of burnout. In addition to causes related to working conditions, the role of certain personality traits should be elucidated. Psychological/psychotherapeutic approaches focus primarily on stress reduction. Empirical evidence for these approaches is rather weak. abstract_id: PUBMED:31446386 Depression, anxiety, and guilt in mothers with burnout of preschool and school-aged children: Insight from a cluster analysis. Introduction: The primary objective of this study is to identify the typology of exhausted mothers based on depressive, anxiety, and guilt symptoms and, secondarily, to evaluate whether these profiles differ on parental stress, environmental stress, and parental burnout. Methods: Five hundred and fifty French-speaking mothers suffering from exhaustion completed several questionnaires assessing parental stress, general perceived stress, maternal guilt, and depressive and anxious symptomatology. Results from two hundred and ten mothers with burnout symptoms were explored (mean age [SD] = 33.69 [5.1] years). In order to identify possible subgroups, a hierarchical cluster analysis was performed. Results: Three profiles were highlighted from the sample (Wilks' λ = 0.156, p < .0001): the first was characterized by high feelings of guilt (29.52%; n = 62), the second showed a dominance of anxiety symptoms and high feelings of guilt (36.19%; n = 76), and the fourth presented low levels of depression, anxiety, and feelings of guilt (23.3%, n = 49). Discussion: These findings suggest different profiles of burned-out mothers that vary according to severity of anxiety, depression, and guilt. This suggests that parental burnout, depression, and anxiety are distinct disorders. This study also demonstrates the aggravating effect of guilt in the experience of burnout. Additional research is warranted as it is important to more carefully consider appropriate intervention strategies given the differing experiences of burned-out mothers. abstract_id: PUBMED:33261831 Explanatory style and burnout at the workplace: A cluster analysis. In the workplace, burnout appears as an important consequence of stressful events experienced by an individual. But how should it be defined, and what reality does this concept represent? The benefits of optimism, and particularly of an optimistic explanatory style, have also been demonstrated for a range of other variables such as health, performance, and resilience. If an optimistic explanatory style is an important variable, what are the relationships that can be observed with burnout? The purpose of this study is to explore the links between explanatory styles and burnout, assuming that individuals fall into different profiles according to their style, whether genuinely optimistic, pessimistic, or neither. Through a cluster analysis, this study revealed 1) the existence of three different profiles of explanatory styles, ranging from optimistic to pessimistic, and 2) that levels of burnout could indeed be associated with these profiles. abstract_id: PUBMED:10782425 Burnout in nursing staff: is there a relationship between depression and burnout? Objective: There is evidence that burnout may be a clinical entity with pathological stress reaction features related to an inability to find pleasure in work. The purpose of this study was to investigate the relationship between burnout and depression.
Methods: The study took place in the general hospital AHEPA of Thessaloniki. All members of the nursing staff (in total 368 subjects) took part. The protocol was self-reported and anonymous, in order to obtain as valid data as possible and included the Maslach Burnout Inventory (MBI) to assess the level of burnout, the Eysenck Personality Questionnaire (EPQ) to assess personality traits, and the Zung Self-Rating Depression Scale to assess depressive symptomatology. Results: The analysis revealed a weak but significant relationship between burnout and depression. Conclusions: Depression is a pervasive disorder that affects almost every aspect of the patient's life. On the contrary, burnout is, by definition, a syndrome restricted to the patient's professional environment. However, it seems that there may be two distinct types of burnout syndromes, of which the one comprising the majority of nurses has little or no common features with depression. The second type consists of individuals with a predisposition to develop burnout. The latter is characterized by more severe symptomatology, phenotypic similarity to depression and presumably common etiological mechanisms. abstract_id: PUBMED:29139312 Burnout and depression: Causal attributions and construct overlap. Burnout has been commonly regarded as a job-induced syndrome. In this 468-participant study (67% female; mean age: 46.48), we examined the extent to which individuals with burnout and depressive symptoms attribute these symptoms to their job. Fewer than half (44%) of the individuals with burnout symptoms viewed their job as the main cause of these symptoms. The proportion of participants ascribing their depressive symptoms to work was similar (39%). Results from correlation and cluster analyses were indicative of burnout-depression overlap. Our findings suggest that burnout may not be a specifically job-induced syndrome and further question the validity of the burnout construct. abstract_id: PUBMED:35413739 Burnout or Depression? - Field Experience from a University Outpatient Clinic Objective: With the increasing presence of burnout symptoms in clinical and social everyday life, it is necessary to closely examine patients with symptoms of burnout and to classify their respective symptoms in the spectrum of mental disorders. Methods: The sample includes 83 patients who presented themselves in a burnout consultation of the university outpatient clinic of the University Hospital Dresden. As part of the diagnostic examination, the patients completed the Structured Clinical Interview (SKID) and a comprehensive questionnaire diagnostic (PHQ-D; MBI, BMI) as well as a clinical interview. Results: The clinical interview led to a burnout diagnosis in 35% of the patients, although for 1/3 of these patients no SKID diagnosis could be affirmed. However, a large proportion of the patients presenting at the university outpatient clinic suffered from a mental disorder, with depressive disorders accounting for the majority of complaints. Patients with burnout differed in a few points from patients with depressive symptoms: They scored lower in the MBI questionnaire and in the scales of mental and physical stress symptoms (p=.031) and boreout (p=.037). In addition, patients diagnosed with burnout were significantly less likely to report previous mental disorders requiring treatment (p=.017). Discussion: Previous standardized diagnostic instruments are less able to differentiate between burnout and depressive disorders. 
The present study supports the hypothesis that burnout and depression are part of the same spectrum of symptoms, with patients with burnout being less distressed than patients with depressive disorders. Conclusion: There appears to be a significant overlap between depression and symptoms of burnout, making the differentiation difficult in everyday clinical practice and calling for accurate diagnostics. abstract_id: PUBMED:30418617 Burnout, depression and paranoid ideation: a cluster-analytic study. Background: A link between burnout and paranoid ideation has long been suspected. However, systematic research on the association has been scarce. Aims: We investigated the relationship between burnout and paranoid ideation. Because burnout overlaps with depression, depression was also examined. Methods: A total of 218 Swiss schoolteachers participated in the study (58% female; mean age: 47). Burnout symptoms were assessed with the emotional exhaustion (EE) and depersonalization (DP) subscales of the Maslach Burnout Inventory-Educators Survey, depressive symptoms with the PHQ-9 and paranoid ideation with the Green et al. Paranoid Thought Scales. Results: Burnout, depression and their subdimensions showed raw correlations ranging from 0.42 to 0.55 with paranoid ideation. Burnout, depression and paranoid thoughts were found to cluster together. Lower levels of burnout and depression coexisted with lower levels of paranoid ideation and higher levels of burnout and depression coexisted with higher levels of paranoid ideation. When corrected for measurement error, the correlations of EE with depression and DP were 0.96 and 0.57, respectively. A principal component analysis confirmed that EE was indistinguishable from depression. Conclusions: Burnout is substantially associated with paranoid ideation. Interestingly, EE correlated as strongly with paranoid ideation as it correlated with DP. Moreover, if burnout is a syndrome of EE and DP that excludes depression, then the EE-depression correlation should not be close to 1 and EE should not correlate more strongly with depression than with DP. These basic requirements for construct distinctiveness and syndromal unity were not satisfied. abstract_id: PUBMED:34715576 Burnout and depression in nurses: A systematic review and meta-analysis. Background: Nurses work in stressful and demanding settings and often suffer from depression and burnout. Despite overlapping symptoms, research has been inconclusive regarding the discriminant validity of measures of burnout with regard to measures of depression. Such inconclusive discriminant validity might cause clinicians to fail to recognize and manage depression separately from burnout. Objectives: This meta-analysis aimed to clarify the distinctiveness of burnout as a separate construct by examining the size of the relationship between burnout and depression among nurses as well as potential moderators. Method: A stepwise method was used by searching 4 databases (PubMed, CINAHL, PsycINFO, and EMBASE) to retrieve published papers in English examining the relationship between burnout and depression among nurses and reporting the effect sizes of their findings. Results: We identified a total of 37 eligible studies. The pooled estimate showed a positive association between burnout and depression among nurses (r = 0.403, 95% CI [0.327, 0.474], p < 0.0001) and a slightly higher correlation coefficient for the Emotional Exhaustion subscale of the Maslach Burnout Inventory (MBI) measure (0.494, 95% CI [0.41, 0.57]).
Conclusions And Implications: This review confirms a large burnout - depression correlation in nursing samples, adding to existing literature encompassing a variety of occupations. Future studies should focus on path analysis to assess the causal relationship as well as investigate potential moderators. Answer: The question of whether burnout is separable from depression in cluster analysis has been explored in several studies, with mixed results. Some studies suggest that burnout and depression are closely intertwined and may not be distinct entities. For instance, a longitudinal study involving French schoolteachers found that burnout and depressive symptoms clustered together at both baseline and follow-up, with changes in burnout closely associated with changes in depression. This study suggested that emotional exhaustion, a core component of burnout, was more strongly associated with depression than with depersonalization, another dimension of burnout, questioning the nosological distinction between the two entities (PUBMED:25527209). Another study on clinically diagnosed burned-out patients revealed two clusters differing in symptom severity of fatigue, depression, and anxiety, with depression being the strongest predictor of group membership. This finding supports the notion that burnout may not be a homogeneous group and may overlap with other disorders such as depression (PUBMED:26869983). A study on mothers with burnout of preschool and school-aged children identified different profiles based on depressive, anxiety, and guilt symptoms, suggesting that parental burnout, depression, and anxiety are distinct disorders, but with an aggravating effect of guilt in the experience of burnout (PUBMED:31446386). Conversely, a study on nursing staff found a weak but significant relationship between burnout and depression, indicating that there may be two distinct types of burnout syndromes, one with little common features with depression and another with more severe symptomatology and phenotypic similarity to depression (PUBMED:10782425). A study examining burnout and depressive symptoms' causal attributions found that less than half of individuals with burnout symptoms viewed their job as the main cause, similar to those with depressive symptoms, suggesting that burnout may not be specifically job-induced and questioning the validity of the burnout construct (PUBMED:29139312). In summary, while some cluster analyses suggest that burnout and depression may manifest as distinct syndromes, there is also evidence of substantial overlap between the two, with some studies indicating that they may be part of the same spectrum of symptoms (PUBMED:35413739, PUBMED:30418617, PUBMED:34715576). The distinction between burnout and depression remains a topic of debate, and further research is needed to clarify their relationship.
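Several of the abstracts above (PUBMED:25527209, PUBMED:26869983, PUBMED:31446386) rest on cluster analyses of symptom scores. The sketch below is a minimal, hypothetical illustration of that general approach: it clusters simulated emotional-exhaustion and depression scores and then checks whether the resulting groups differ on both scales at once. The simulated data, score ranges, and clustering settings are assumptions for illustration and do not reproduce any cited study's method or results.

```python
# Hedged sketch: clustering simulated burnout (MBI emotional exhaustion) and
# depression (PHQ-9) scores; all data are synthetic, not study data.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
n = 200
severity = rng.normal(size=n)  # shared latent severity, to mimic the reported overlap
ee = np.clip(27 + 9 * severity + rng.normal(scale=4, size=n), 0, 54)    # MBI-EE range 0-54
phq9 = np.clip(12 + 5 * severity + rng.normal(scale=3, size=n), 0, 27)  # PHQ-9 range 0-27

X = StandardScaler().fit_transform(np.column_stack([ee, phq9]))
labels = AgglomerativeClustering(n_clusters=2, linkage="ward").fit_predict(X)

for k in range(2):
    print(f"cluster {k}: mean EE = {ee[labels == k].mean():.1f}, "
          f"mean PHQ-9 = {phq9[labels == k].mean():.1f}, n = {(labels == k).sum()}")
```

If the clusters differ mainly in overall severity on both scales rather than splitting into a "burnout-only" and a "depression-only" group, that is the pattern the overlap studies above interpret as evidence against a clean separation.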
Instruction: Does Magnetic Resonance Imaging Improve the Predictive Performance of a Validated Clinical Prediction Rule Developed to Evaluate Surgical Outcome in Patients With Degenerative Cervical Myelopathy? Abstracts: abstract_id: PUBMED:25893357 Does Magnetic Resonance Imaging Improve the Predictive Performance of a Validated Clinical Prediction Rule Developed to Evaluate Surgical Outcome in Patients With Degenerative Cervical Myelopathy? Study Design: Ambispective study. Objective: To determine whether MRI parameters improve the predictive performance of a validated clinical prediction rule used to assess functional outcomes in surgical patients with DCM. Summary Of Background Data: Degenerative cervical myelopathy (DCM) is the most common cause of spinal cord dysfunction in the elderly worldwide. A clinical prediction rule was developed to discriminate between patients with mild myelopathy postoperatively (mJOA ≥ 16) and those with substantial residual neurological impairment (mJOA < 16). Recently, a separate magnetic resonance imaging (MRI)-based prediction model was created. However, a model exploring the combined predictive value of imaging and clinical variables does not exist. Methods: One hundred and fourteen patients with MRIs were examined from a cohort of 278 patients enrolled in the AOSpine CSM-North America Study. Ninety-nine patients had complete preoperative imaging and postoperative outcome data. MRIs were evaluated for the presence/absence of signal change on T2- and T1-weighted images. Quantitative analysis of the T2 signal change was conducted and maximum canal compromise and cord compression were calculated. The added predictive performance of each MRI parameter to the clinical model was evaluated using receiver operator characteristic curves. Results: The model developed on our subsample yielded an area under the receiver operator curve (AUC) of 0.811 (95% CI: 0.726-0.896). The addition of imaging variables did not significantly improve the predictive performance. Small improvements in prediction were obtained when sagittal extent of T2 hyperintensity (AUC: 0.826, 95% CI: 0.743-0.908, 1.35% increase) or Wang ratio (AUC: 0.823, 95% CI: 0.739-0.907, 1.21%) was added. Anatomic characteristics, such as maximum canal compromise and maximum cord compression, did not improve the discriminative ability of the clinical prediction model. Conclusion: In our sample of surgical patients, with clinical and image-evidence of DCM, MRI parameters do not significantly add to the predictive performance of a previously published clinical prediction rule. It remains plausible that combinations of the strongest clinical and MRI predictors may yield a similar or a superior prediction model. Level Of Evidence: 3. abstract_id: PUBMED:30241007 Clinical and magnetic resonance imaging predictors of the surgical outcomes of patients with cervical spondylotic myelopathy. Objective: To determine whether clinical characteristics and signal and morphologic changes on magnetic resonance (MR) images of the spinal cord (SC) are associated with surgical outcomes for cervical spondylotic myelopathy (CSM). Patients And Methods: A total of 113 consecutive patients with cervical myelopathy underwent cervical decompression surgery in our hospital from January 2015 to January 2018. All patients with preoperative MR images available for review were recruited for this study. 
Research data included patient sex, age, duration of symptoms, surgical approach, compression level, preoperative mJOA (modified Japanese Orthopaedic Association) score, postoperative mJOA recovery rate, and complications. Imaging data included signal changes on T2-weighted MRI images (grade and extension on sagittal images, four types of signal changes on axial images according to the Ax-CCM system), SC compression, transverse area of the SC, and compression ratio. The t-test, Mann-Whitney U-test, Kruskal-Wallis H-test, analysis of variance, and regression analysis were used to evaluate the effects of individual predictors on surgical outcomes. Results: The study cohort included 85 males and 27 females with a mean age of 60.92 ± 8.93 years. The mean mJOA score improved from 10.24 ± 1.69 preoperatively to 15.11 ± 2.05 at the final follow-up (p < 0.001). Patients in the poor outcome group were more likely to present with a longer duration of symptoms (p < 0.001) and smaller transverse area of the SC (p < 0.001). Bright T2-weighted high signal changes (T2HSCs), multisegmental high signal changes on sagittal MR images, and fuzzy focal T2HSCs on axial MR images were associated with a poor outcome (p < 0.001, p = 0.005, p < 0.001, respectively). The maximum SC compression and compression ratio were not reliable predictors of surgical outcomes (p = 0.375, p = 0.055, respectively). The result of multivariate stepwise logistic regression showed that a longer duration of symptoms, multisegmental T2HSCs on sagittal MR images and fuzzy focal T2HSCs on axial MR images were significant risk factors of poor outcomes (p < 0.001, p = 0.049, p = 0.016, respectively). Conclusion: A longer duration of symptoms, multisegmental T2HSCs on sagittal MR images, and fuzzy focal T2HSCs on axial MR images were highly predictive of a poor surgical outcome for CSM. Smaller transverse area of the SC and bright T2HSCs were also associated with the prognosis of CSM. abstract_id: PUBMED:29332032 Value of conventional MRI and diffusion tensor imaging parameters in predicting surgical outcome in patients with degenerative cervical myelopathy. Background: In addition to conventional magnetic resonance imaging (cMRI), diffusion tensor imaging (DTI) has been investigated as a potential diagnostic and prognostic tool for patients with degenerative cervical myelopathy (DCM). Objective: To assess the efficacy of cMRI and DTI parameters in prediction of surgical outcome in DCM patients. Methods: One hundred and forty-two patients with DCM who underwent presurgical cMRI and DTI of the cervical spine were included. Quantitative parameters obtained by cMRI included compression ratio (CR), transverse area (TA), and signal intensity ratio (SIR). DTI was evaluated for apparent diffusion coefficient (ADC) and fractional anisotropy (FA). The Japanese Orthopaedic Association (JOA) score and recovery rate were used to evaluate clinical outcomes. A JOA recovery rate < 50% was defined as a poor surgical outcome. The relationship of surgical outcome with various imaging parameters was examined. Receiver operating characteristic (ROC) curves were used to measure the predictive ability and determine the best cut-off values of the quantitative parameters. Results: By ROC curve analyses of imaging parameters, the largest area under the ROC curve (AUC) was for FA (0.750), followed by ADC (0.719), TA (0.716), SIR (0.673), and CR (0.591).
The cut-off values with the best compromise between sensitivity and specificity were set at 0.390 for FA, 1.344 × 10⁻³ mm²/s for ADC, 46.02 mm² for TA, 1.556 for SIR, and 26.56% for CR. A multivariate logistic regression model revealed that JOA score ⩽ 8 points, TA ⩽ 46.02 mm², and FA ⩽ 0.390 were independently associated with poor surgical outcome. The AUC value for the three-predictor model was 0.871, indicating strong predictive discrimination, and was significantly higher than the AUC value for the model containing only the JOA score (0.763; P = 0.003). Conclusions: JOA score is a reasonable predictor of surgical outcome in DCM. However, a model inclusive of TA and FA provides superior predictive ability. Thus, quantitative analysis of cMRI and DTI is useful for predicting surgical outcome in DCM. abstract_id: PUBMED:19910766 Prediction of clinical results of laminoplasty for cervical myelopathy focusing on spinal cord motion in intraoperative ultrasonography and postoperative magnetic resonance imaging. Study Design: Retrospective analysis of preoperative imaging and clinical data from patients undergoing cervical expansive laminoplasty for cervical myelopathy. Objective: To investigate preoperative parameters that predict the floating status of the spinal cord at the anterior elements of the cervical spine in both intraoperative ultrasonography (US) and postoperative magnetic resonance imaging (MRI), and to evaluate the association between clinical outcome and spinal cord floating. Summary Of Background Data: Intraoperative US has been used to evaluate the status of the spinal cord after cervical laminoplasty for cervical myelopathy. Few studies have evaluated the predictive preoperative parameters for intraoperative US results. Methods: Imaging and clinical outcome data were collected from 101 consecutive patients who underwent cervical expansive laminoplasty for cervical myelopathy at Kaikoukai Nagoya Kyouritsu Hospital, Japan, from April 2004 to April 2008. The preoperative parameters associated with spinal cord floating in intraoperative US and postoperative MR images were investigated. Predictive parameters for the rate of recovery according to the Japanese Orthopedic Association score for cervical myelopathy at each follow-up session were also investigated. Results: Predictive parameters for spinal cord floating after decompression in intraoperative US were the cervical vertebrae 2 to 7 (C2-C7) sagittal alignment in the standing neutral position on preoperative plain radiography (cut-off value=3 degrees) and the C5/6 "beak angle" in preoperative MRI (cut-off value=20 degrees). A predictive parameter for spinal cord floating in postoperative MRI was the C5/6 beak angle in preoperative MRI (cut-off value=21 degrees). The preoperative Japanese Orthopedic Association score and spinal cord floating at anterior elements of the cervical spine in intraoperative US were predictive parameters for clinical outcome. Conclusion: Intraoperative US was more useful than postoperative MRI for predicting the clinical outcome of cervical expansive laminoplasty. Knowledge of the predictive parameters for spinal cord floating after cervical expansive laminoplasty could help evaluate the limitations of posterior decompression. abstract_id: PUBMED:26682095 The Practical Application of Clinical Prediction Rules: A Commentary Using Case Examples in Surgical Patients with Degenerative Cervical Myelopathy. Study Design Commentary.
Objective This commentary aims to discuss the practical applications of a clinical prediction rule (CPR) developed to predict functional status in patients undergoing surgery for the treatment of degenerative cervical myelopathy. Methods Clinical cases from the AOSpine CSM-North America study were used to illustrate the application of a prediction rule in a surgical setting and to highlight how this CPR can be used to ultimately enhance patient care. Results A CPR combines signs and symptoms, patient characteristics, and other predictive factors to estimate disease probability, treatment prognosis, or risk of complications. These tools can influence allocation of health care resources, inform clinical decision making, and guide the design of future research studies. In a surgical setting, CPRs can be used to (1) manage patients' expectations of outcome and, in turn, improve overall satisfaction; (2) facilitate shared decision making between patient and physician; (3) identify strategies to optimize surgical results; and (4) reduce heterogeneity of care and align surgeons' perceptions of outcome with objective evidence. Conclusions Valid and clinically-relevant CPRs have tremendous value in a surgical setting. abstract_id: PUBMED:25077905 Modified K-line in magnetic resonance imaging predicts clinical outcome in patients with nonlordotic alignment after laminoplasty for cervical spondylotic myelopathy. Study Design: Retrospective single-center study. Objective: To investigate whether a preoperative index predicts clinical outcome after laminoplasty for cervical spondylotic myelopathy. Summary Of Background Data: This is the first study using the modified K-line, which connects the midpoints of the spinal cord at the C2 and C7 levels on midsagittal magnetic resonance imaging, to assess the relationship between postoperative clinical outcome and anticipated degree of spinal cord shifting. Methods: Sixty-one consecutive patients who underwent laminoplasty for the treatment of cervical spondylotic myelopathy between 2000 and 2011 at our hospital were retrospectively reviewed. The interval between the preoperative mK-line and the anterior structure of the spinal canal at each segment of the C3 to C6 levels (INTn, n = 3-6) were measured on sagittal T1-weighted magnetic resonance imaging, and the sum of the INTn (INTsum) was then calculated. The degree of posterior cord shift was defined as follows: %Csum = ΣCn; Cn = (Bn-An) × 100/An (n = 3-6; An and Bn represent the preoperative and postoperative intervals between the midpoint of the spinal cord and the anterior impingement at each segment on sagittal T1-weighted magnetic resonance imaging, respectively). In addition, we defined INTmin as the minimum interval of the INTn in each patient. All patients were divided into lordotic and nonlordotic groups on the basis of lateral neutral radiography. The Japanese Orthopaedic Association (JOA) scoring system and recovery rate of the JOA score for cervical myelopathy was evaluated as clinical outcomes. Results: The recovery rate of the JOA score was 48.1%. The lordotic and nonlordotic groups contained 38 and 23 patients, respectively. Linear regression analysis revealed that INTmin was significantly correlated with the recovery rate of the patients in the nonlordotic group, whereas INTsum was not associated with recovery of the JOA score. Conclusion: We identified INTmin as a predictive factor for clinical outcomes in patients with nonlordotic alignment after laminoplasty. Level Of Evidence: 4. 
abstract_id: PUBMED:32389875 Degenerative Cervical Myelopathy: Review of Surgical Outcome Predictors and Need for Multimodal Approach. Degenerative cervical myelopathy is the most common cause of spinal cord injury in the elderly population in the developed world, and it significantly affects the quality of life of patients and their caregivers. Surgery remains the only treatment option able to halt disease progression and provide neurological recovery for most patients. Although it has remained challenging to predict exactly who will experience improvement after surgery, increasingly it has been shown that clinical, imaging, and electrophysiological factors can predict, with relatively good capacity, those more likely to benefit. Clinically, the baseline neurological impairment appears to be strongly related to the outcome, and the magnetic resonance imaging findings of T1-weighted hypointensity and the length of T2-weighted hyperintensity appear to be the most prognostic. In this context, electrophysiology findings (both motor and sensory evoked potentials) have shown some predictive capacity. However, large studies are lacking. Although multivariate models have been conducted using clinical and magnetic resonance imaging data, no multimodal prediction models are available that encompass the predictive capacity of clinical, imaging, and electrophysiological data. In the present review, we examined the rationale for clinical, imaging, and electrophysiological usage in clinical practice and discussed a model of multimodal assessment for the management of degenerative cervical myelopathy. abstract_id: PUBMED:34266935 Can Morphometric Analysis of Cervical Spondylotic Myelopathy Be a Tool for Surgical Outcome Prediction? Background: Cervical spondylotic myelopathy (CSM) is one of the leading causes of degenerative cervical myelopathy and the most common cause of spinal cord dysfunction in the elderly worldwide. Although there is emerging evidence that most patients improve after surgery, the key clinical and imaging factors predicting outcome remain uncertain. Our purpose is to evaluate preoperative and postoperative morphometric parameters on magnetic resonance imaging and their relation with neurological outcome at discharge and at 12-month follow-up. Methods: Morphometric features (volume, area, and antero-posterior diameter of the major stenotic section) were acquired by manual segmentation of the spinal canal using OsiriX open-source software and confronted with neurological outcome (at discharge and 12 months after surgery) using Nurick's scale. Results: Fifty-five patients (21 females and 34 males) with a mean age of 64.89 ± 11.95 years were analyzed. Recovery ratio was 2.44% ± 2.40% at discharge and 11.74% ± 2.50% at follow-up. Statistical analysis revealed a significant difference (P < .001) between Nurick's scale at admission versus discharge, at discharge versus follow-up at 1 year, and between admission and follow-up at 1 year. Morphometric changes (difference and ratio) between preoperative and postoperative measurements were also statistically significant (P < .001, paired samples t test). When linear regression was applied, volume difference was shown to have an influence on clinical improvement (P < .05; R = 0.519). Linear regression was also applied using recovery ratio at discharge and follow-up as dependent variables, with the same conclusion: volume difference between preoperative and postoperative measurements correlates with outcome improvement at 1 year after surgery. 
Conclusions: Acquisition of morphometric features might be useful in predicting surgical outcome in patients with CSM. This information can be used to inform patients of their long-term postoperative prognosis in the future with more accurate and standardized measurements. abstract_id: PUBMED:22250580 Clinical and magnetic resonance imaging features of canine compressive cervical myelopathy with suspected hydrated nucleus pulposus extrusion. Objective: To describe clinical and magnetic resonance imaging features in dogs with compressive cervical myelopathy due to acute suspected hydrated nucleus pulposus extrusion. Design: Retrospective case series. Animals: Ten dogs with compressive cervical myelopathy caused by acute suspected hydrated nucleus pulposus extrusion. Materials And Methods: Medical records and magnetic resonance images of dogs evaluated from 2005 through 2010 were reviewed. The inclusion criteria were clinical signs compatible with cervical myelopathy, magnetic resonance imaging of the spine performed within 48 hours after onset, magnetic resonance imaging findings consistent with compressive suspected hydrated nucleus pulposus extrusion, complete medical records and follow-up information. Results: Six dogs were presented with acute onset tetraplegia and four dogs with acute onset of nonambulatory tetraparesis. Compromised respiratory function was present in three dogs. Compressive suspected hydrated nucleus pulposus extrusion was found on magnetic resonance imaging at the C4-C5 (n=6), C3-C4 (n=3) and C5-C6 (n=1) intervertebral disc spaces. Seven dogs underwent surgical treatment and three dogs were treated conservatively. All dogs except one regained ambulatory status within two weeks after the onset, and had a favourable outcome. Clinical Significance: Compressive myelopathy caused by acute suspected hydrated nucleus pulposus extrusion has not been reported previously and, even though neurological deficits can be severe on presentation, outcome is favourable. abstract_id: PUBMED:17894620 Clinical and MRI predictors of outcome after surgical intervention for cervical spondylotic myelopathy. Background: Cervical spondylotic myelopathy (CSM) is the most common cause of spinal cord dysfunction in older individuals. Controversy remains in terms of the optimal timing and indications for surgical intervention. In this context, it would be of benefit to define clinical and magnetic resonance imaging (MRI) predictors of outcome after intervention for CSM. Objective: We studied subjects with clinically documented cervical myelopathy to evaluate the relationship among preoperative MRI signal change, clinical findings, and outcome after surgical intervention. Methods: We performed a retrospective case study of 76 CSM patients who underwent cervical decompressive surgery and who had pre- and postoperative MRI studies available for review. Preoperative clinical findings and MRI abnormalities on T1- (T1WI) and T2-weighted (T2WI) images were correlated with outcomes (Nurick scores; Odom's criteria) following surgical intervention. Postoperative MRIs were performed 2-4 months postsurgery to assess for adequacy of decompression and resolution of preoperative signal changes. 
The pattern of spinal cord signal intensity was classified as: Group A (MRI N/N), no intramedullary signal intensity abnormality on T1WI or T2WI; Group B (MRI N/Hi), no intramedullary signal intensity abnormality on T1WI and high intramedullary signal intensity on T2WI; Group C (MRI Lo/Hi), low intensity intramedullary signal abnormality on T1WI and high intensity intramedullary signal abnormality on T2WI. Statistical analyses were performed using SAS (version 8.2). Results: We evaluated 76 patients (57% males, mean age 62 years, range 30-89) who experienced preoperative symptoms for an average of 6.5 months (range 1 month to 9 years). Preoperative MRI studies demonstrated the following: Group A (MRI N/N) = 45; Group B (MRI N/Hi) = 23; and Group C (MRI Lo/Hi) = 8. The mean postoperative follow-up period was 2.5 years (range 2 months to 8.5 years). A positive Babinski sign and the presence of intrinsic hand muscle atrophy showed the greatest association with abnormal preoperative MRI signal change. High preoperative Nurick score, clonus, and leg spasticity were associated with a less favorable postoperative outcome. In Group B (MRI N/Hi), 11/23 (52.17%) patients had recovery to MRI N/N (P < .0001) at their follow-up scan. Conclusions: Patients with high intramedullary signal change on T2WI who do not have clonus or spasticity may experience a good surgical outcome and may have reversal of the MRI abnormality. A less favorable surgical outcome is predicted by the presence of low intramedullary signal on T1WI, clonus, or spasticity. These data suggest that there may be a window of opportunity to obtain optimal surgical outcomes in patients with CSM. Answer: The use of Magnetic Resonance Imaging (MRI) in conjunction with a validated clinical prediction rule to evaluate surgical outcomes in patients with Degenerative Cervical Myelopathy (DCM) has been explored in several studies. However, the findings are mixed regarding the added value of MRI parameters to the predictive performance of clinical prediction rules. One study found that MRI parameters did not significantly add to the predictive performance of a previously published clinical prediction rule. The addition of imaging variables such as sagittal extent of T2 hyperintensity or Wang ratio only yielded small improvements in prediction, and anatomic characteristics like maximum canal compromise and maximum cord compression did not improve the discriminative ability of the clinical prediction model (PUBMED:25893357). Conversely, another study indicated that certain MRI features, such as bright T2-weighted high signal changes, multisegmental high signal changes on sagittal MR images, and fuzzy focal T2 high signal changes on axial MR images, were associated with poor surgical outcomes. The duration of symptoms and smaller transverse area of the spinal cord were also linked to prognosis, suggesting that MRI can provide valuable predictive information (PUBMED:30241007). Additional research using conventional MRI and diffusion tensor imaging (DTI) parameters showed that a model inclusive of transverse area (TA) and fractional anisotropy (FA) from DTI provides superior predictive ability compared to the Japanese Orthopaedic Association (JOA) score alone. This suggests that quantitative analysis of MRI and DTI is useful for predicting surgical outcomes in DCM (PUBMED:29332032). 
In summary, while some studies suggest that MRI can enhance the predictive performance of clinical prediction rules for surgical outcomes in DCM patients, the evidence is not uniformly conclusive. The added value of MRI may depend on the specific parameters and models used, and further research may be needed to fully understand the role of MRI in this context.
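As a methodological aside, the "added value" question raised in the answer above is usually settled by comparing the discrimination of nested prediction models. The minimal sketch below (synthetic data; all variable names are hypothetical and not taken from the cited studies) contrasts a clinical-only logistic model with a clinical-plus-MRI model; a real analysis would use cross-validation or a separate validation cohort rather than in-sample AUC.

```python
# Minimal sketch (synthetic data, hypothetical variables): quantifying the
# incremental discriminative value of MRI variables over a clinical-only model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 300
baseline_impairment = rng.normal(12, 3, n)        # clinical predictor (e.g. baseline score)
t2_hyperintensity_length = rng.normal(10, 4, n)   # imaging predictor, mm
t1_hypointensity = rng.integers(0, 2, n)          # imaging predictor, present/absent

# Simulate a binary "good surgical outcome" influenced by all three predictors
logit = (0.4 * (baseline_impairment - 12)
         - 0.15 * (t2_hyperintensity_length - 10)
         - 0.8 * t1_hypointensity)
outcome = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_clinical = baseline_impairment.reshape(-1, 1)
X_multimodal = np.column_stack([baseline_impairment, t2_hyperintensity_length, t1_hypointensity])

auc_clinical = roc_auc_score(outcome, LogisticRegression().fit(X_clinical, outcome)
                             .predict_proba(X_clinical)[:, 1])
auc_multimodal = roc_auc_score(outcome, LogisticRegression().fit(X_multimodal, outcome)
                               .predict_proba(X_multimodal)[:, 1])
print(f"AUC clinical-only: {auc_clinical:.2f}  AUC clinical+MRI: {auc_multimodal:.2f}")
```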
Instruction: Do cerebral white matter lesions influence the rate of progression from mild cognitive impairment to dementia? Abstracts: abstract_id: PUBMED:22874528 Do cerebral white matter lesions influence the rate of progression from mild cognitive impairment to dementia? Background: Cerebral white matter lesions (WML), evident on CT and MRI brain scans, are histopathologically heterogeneous but associated with vascular risk factors and thought mainly to indicate ischemic damage. There has been disagreement over their clinical prognostic value in predicting conversion from mild cognitive impairment (MCI) to dementia. Methods: We scrutinised and rated CT and MRI brain scans for degree of WML in a memory clinic cohort of 129 patients with at least 1 year of follow-up. We examined the relationship between WML severity and time until conversion to dementia for all MCI patients and for amnestic (aMCI) and non-amnestic (naMCI) subgroups separately. Results: Five-year outcome data were available for 87 (67%) of the 129 patients. The proportion of patients converting to dementia was 25% at 1 year and 76% at 5 years. Patients with aMCI converted to dementia significantly earlier than those with naMCI. WML severity was not associated with time to conversion to dementia for either MCI patients in general or aMCI patients in particular. Among naMCI patients, there was a tendency for those with a low degree of WML to survive without dementia for longer than those with a high degree of WML. However, this was not statistically significant. Conclusions: MCI subtype is a significant independent predictor of conversion to dementia, with aMCI patients having higher risk than naMCI for conversion throughout the 5-year follow-up period. WML severity does not influence conversion to dementia for aMCI but might accelerate progression in naMCI. abstract_id: PUBMED:15249623 White matter lesion progression: a surrogate endpoint for trials in cerebral small-vessel disease. There is neuropathologic evidence that confluent MRI white matter lesions in the elderly reflect ischemic brain damage due to microangiopathy. The authors hypothesize that measuring changes in the progression of white matter lesions as shown by MRI may provide a surrogate marker in clinical trials on cerebral small-vessel disease in which the currently used primary outcomes are cognitive impairment and dementia. This hypothesis is based on evidence that confluent white matter lesions progress rapidly as shown in a recent follow-up study in community-dwelling subjects. The mean increase in lesion volume was 5.2 cm(3) after 3 years. Based on these data in a clinical trial, 195 subjects with confluent lesions would be required per treatment arm to demonstrate a 20% reduction in the rate of disease progression over a 3-year period. Like any other MRI metric, the change in white matter lesion volume cannot be considered preferable to clinical outcomes unless it has been demonstrated that it matters to the patient in terms of function. abstract_id: PUBMED:18635849 Progression of cerebral small vessel disease in relation to risk factors and cognitive consequences: Rotterdam Scan study. Background And Purpose: Cerebral white matter lesions and lacunar infarcts are small vessel disease-related lesions, which are associated with cognitive decline and dementia. We aimed to assess the relationship between risk factors, effect modifiers, and progression of these lesions. Furthermore, we studied the cognitive consequences of lesion progression. 
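As a brief aside on the "195 subjects per treatment arm" figure quoted in the surrogate-endpoint abstract above: the abstract gives a mean 3-year lesion growth of 5.2 cm3 but not its standard deviation, so the sketch below assumes an SD (about 3.7 cm3) chosen only to show how such a number can arise from a standard two-sample power calculation.

```python
# Sketch of the sample-size arithmetic behind the "195 subjects per arm" figure.
# The assumed SD is NOT stated in the abstract; it is picked to roughly
# reproduce the published number.
from statsmodels.stats.power import TTestIndPower

mean_progression = 5.2                         # cm^3 over 3 years (from the abstract)
treatment_effect = 0.20 * mean_progression     # 20% reduction -> 1.04 cm^3 difference
assumed_sd = 3.7                               # cm^3, assumption

n_per_arm = TTestIndPower().solve_power(
    effect_size=treatment_effect / assumed_sd,  # Cohen's d
    alpha=0.05, power=0.80, alternative="two-sided",
)
print(f"Approximate subjects per treatment arm: {n_per_arm:.0f}")  # ~195-200
```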
Methods: Six hundred sixty-eight people, aged 60 to 90 years, underwent repeated MRI scanning and neuropsychological testing within a 3-year follow-up period. We rated incident lacunar infarcts and change in periventricular and subcortical white matter lesion severity with a semiquantitative scale. We assessed the relationships between age, sex, baseline lesion load, risk factors, lesion progression, and change in cognitive function by multivariate regression analyses and additional stratified analyses. Results: Baseline lesion load, higher age, high blood pressure, and current smoking were independently associated with progression of white matter lesions. Women had more marked progression of subcortical white matter lesions and incident lacunar infarcts compared with men. Carotid atherosclerosis was associated with incident lacunar infarcts. Higher blood pressure did not contribute to lesion progression in people with already severe lesions at baseline or in the very old. Lesion progression was associated with a parallel decline in general cognitive function and in particular with a decreased information processing speed. Conclusions: Higher age, female sex, cigarette smoking, elevated blood pressure, and baseline lesion load were associated with small vessel disease progression. Age and baseline lesion load influenced the risk relations with blood pressure. Progression of small vessel disease was related to a parallel decline in cognitive function. abstract_id: PUBMED:15515734 A neuroradiological study on the influence of cerebral atrophy and white matter lesion on cognitive function in the elderly. We investigated the influence of brain atrophy and white matter lesions on cognitive function in elderly people. We selected 33 subjects (mean age, 79.2 +/- 5.1 yrs) with an MMSE score from 14 to 30 who had no previous history of stroke from the outpatients in the Memory Clinic of our hospital. These subjects were divided into four groups on the basis of their MMSE score as follows: 14-20, moderate dementia (Moderate-D, n = 9); 21-23, mild dementia (Mild-D, n = 9); 24-27, mild cognitive impairment (MCI, n = 10); 28-30, normal (Normal, n = 5). Among these four groups, we compared the frequency of the associated risk factors for cerebral infarction (hypertension, diabetes mellitus, hyperlipidemia, heart disease), and the severity of brain atrophy and cerebral white matter lesion which were visually evaluated by MRI technique. Brain atrophy and white matter lesions were assessed by reviewing the cerebral cortex and hippocampus, and deep white matter lesion (DWML) and periventricular hyperintensity (PVH), respectively. Brain atrophy was divided into three grades (mild, moderate, severe) and white matter lesions were classified into four grades (0-3) using Fazekas's criteria. We performed statistical analysis to detect the parameters which correlate with and influence MMSE scores from among the MRI findings. The cases with dementia were all diagnosed as Alzheimer's disease. There were no significant differences among the four groups in mean age, the incidence of individual associated risk factors, the severity of cortical atrophy, or the grade of DWML (≤ 2) and PVH (≤ 2). However, the frequency of hippocampal atrophic change greater than a moderate grade increased in parallel with the exacerbation of reduced cognitive function (Normal: 20%, MCI: 40%, Mild-D: 56%, Moderate-D: 89%), and approximately 76% with such a change were AD cases. 
Statistical analysis showed a significant negative correlation between the grade of hippocampal atrophy and MMSE score (r = -0.518, p < 0.005) and a great influence of hippocampal atrophy on that score (step-wise regression analysis: r = 0.518, p < 0.005). From the above results, it was suggested that more than moderate atrophic change in the hippocampus might possibly be related with cognitive impairment and that both DWML and PVH less than the second grade had little influence on the decline of brain function. abstract_id: PUBMED:29483399 Hypertension-Induced Cerebral Small Vessel Disease Leading to Cognitive Impairment. Objective: Alzheimer's disease and vascular dementia are responsible for more than 80% of dementia cases. These two conditions share common risk factors including hypertension. Cerebral small vessel disease (CSVD) is strongly associated with both hypertension and cognitive impairment. In this review, we identify the pathophysiological changes in CSVD that are caused by hypertension and further explore the relationship between CSVD and cognitive impairment. Data Sources: We searched and scanned the PubMed database for recently published literatures up to December 2017. We used the keywords of "hypertension", "cerebral small vessel disease", "white matter lesions", "enlarged perivascular spaces", "lacunar infarcts", "cerebral microbleeds", and "cognitive impairment" in the database of PubMed. Study Selection: Articles were obtained and reviewed to analyze the hypertension-induced pathophysiological changes that occur in CSVD and the correlation between CSVD and cognitive impairment. Results: In recent years, studies have demonstrated that hypertension-related changes (e.g., small vascular lesions, inflammatory reactions, hypoperfusion, oxidative stress, damage to autoregulatory processes and the blood-brain barrier, and cerebral amyloid angiopathy) can occur over time in cerebral small vessels, potentially leading to lower cognitive function when blood pressure (BP) control is poor or lacking. Both isolated and co-occurrent CSVD can lead to cognitive deterioration, and this effect may be attributable to a dysfunction in either the cholinergic system or the functionality of cortical and subcortical tracts. Conclusions: We explore the currently available evidence about the hypertensive vasculopathy and inflammatory changes that occur in CSVD. Both are vital prognostic indicators of the development of cognitive impairment. Future studies should be performed to validate the relationship between BP levels and CSVD progression and between the numbers, volumes, and anatomical locations of CSVD and cognitive impairment. abstract_id: PUBMED:29120920 The growing clinical spectrum of cerebral amyloid angiopathy. Purpose Of Review: Cerebral amyloid angiopathy (CAA) is diagnosed primarily as a cause of lobar intracerebral hemorrhages (ICH) in elderly patients. With improving MRI techniques, however, the role of CAA in causing other symptoms has become clear. Recognizing the full clinical spectrum of CAA is important for diagnosis and treatment. In this review we summarize recent insights in clinical CAA features, MRI biomarkers, and management. Recent Findings: The rate of ICH recurrence in CAA is among the highest of all stroke subtypes. Cortical superficial siderosis (cSS) and cortical subarachnoid hemorrhage (cSAH) are important imaging predictors for recurrent ICH. CAA also causes cognitive problems in multiple domains. 
In nondemented patients with CAA, the risk of developing dementia is high, especially after ICH. CAA pathology probably starts years before the first clinical manifestations. The first signs in hereditary CAA are white matter lesions, cortical microinfarcts, and impaired occipital cerebral vasoreactivity. Visible centrum semiovale perivascular spaces, lobar located lacunes, and cortical atrophy are new nonhemorrhagic MRI markers. Summary: CAA should be in the differential diagnosis of elderly patients with lobar ICH but also in those with cognitive decline and episodic transient neurological symptoms. Physicians should be aware of the cognitive effects of CAA. In patients with a previous ICH, cSS, or cSAH, anticoagulation should be considered risky. The increasing number of MRI markers may help to discriminate CAA from other small vessel diseases and dementia subtypes. abstract_id: PUBMED:10762508 Neuropsychological testing may predict early progression of asymptomatic adrenoleukodystrophy. Objectives: To investigate the correlation between neuropsychological and MRI findings in children with the childhood cerebral (CCALD) and asymptomatic forms of X-linked adrenoleukodystrophy (ALD) and to identify early cognitive markers that may predict disease progression in asymptomatic children with ALD. Background: The few published neuropsychological studies on CCALD suggest a correlation between the pattern of cognitive deficit and lesion site; however, neuropsychological performance in asymptomatic children with ALD has not been investigated. Methods: The authors assessed cognitive function and cerebral MRI findings in seven CCALD and eight asymptomatic ALD children. Results: The CCALD children's cognitive skills were severely compromised, especially Wechsler and executive functions. Visual perception, short-term memory, and language were generally preserved, except that naming was severely impaired. All had extensive posterior white matter deterioration. The asymptomatic children had relatively intact neuropsychological performance, but their verbal fluency was compromised and naming severely impaired. All except one had mild white matter alterations. For all the children, the majority of neuropsychological test performance correlated significantly with extent of white matter lesions. Conclusions: The pattern of cognitive deterioration in children with CCALD and the significant correlation of neuropsychological test performance with extent of white matter lesions indicate a white matter dementia similar to that observed in adults with demyelinating diseases. The deficits found in asymptomatic children, despite their normal intelligence, suggest that careful neuropsychological investigation can identify early signs of malfunction. These may be markers of disease progression useful for selecting children for bone marrow transplant, although this will require confirmation by prospective longitudinal studies. abstract_id: PUBMED:23623602 Cerebral microbleeds are related to subjective cognitive failures: the RUN DMC study. Cerebral small vessel disease (SVD), including white matter lesions (WML) and lacunar infarcts, is related to objective cognitive impairment but also to subjective cognitive failures (SCF). SCF have been reported to be an early predictor of dementia. Cerebral microbleeds (MB) are another manifestation of SVD and have been related to cognitive impairment, but the role of MB in SCF has never been studied. 
We therefore investigated whether MB are related to SCF among non-demented elderly individuals with SVD, independent of coexisting WML and lacunar infarcts. The RUN DMC study is a prospective cohort study among 503 older persons with cerebral SVD between 50 and 85 years of age. All participants underwent FLAIR and T2* scanning. SCF, subjective memory failures (SMF), and subjective executive failures (SEF) were assessed. The relation between SCF and the presence, number and location of MB was assessed by linear regression analyses adjusted for age, sex, education, depressive symptoms, cognitive function, total brain volume, normalized hippocampal volume, territorial infarcts, WML, and lacunar infarcts. MB were present in 11%. We found a relation between the presence, total number and lobar located MB, and SCF, SMF, and SEF and the reported progression of these failures, especially in participants with good objective cognitive function. In conclusion, MB are related to SCF independent of co-existing WML and lacunar infarcts, especially in those with good objective cognitive performance. These results suggest that MB are associated with the earliest manifestations of cognitive impairment. MB may help us to understand the role of the ever-expanding spectrum of SVD in cognitive impairment. abstract_id: PUBMED:15760614 Cerebral small vessel disease: how does it progress? Small vessel disease (SVD), or microangiopathy, of the cerebral white and central grey matter is an important subtype of vascular dementia (VD). SVD-dementia is characterised by a "dysexecutive" type of cognitive impairment, neurological deficits including imbalance and voiding dysfunction, and emotional disturbances. SVD is also frequent among clinically healthy subjects and patients with mild cognitive impairment. It is easily visualised by imaging techniques, but difficult to distinguish from mixed SVD/Alzheimer Disease. SVD has an inherent tendency to progress, but data on its natural course are sparse, and there are almost no drug trials dedicated to it. This article reviews the evidence on the speed and predictors of progression of SVD in regard to cognitive deficits, functional decline and white matter lesions, as derived from epidemiological, clinical and imaging studies and the placebo branches of VD drug trials. Based on the available data, we make suggestions for future research and outcome measures. abstract_id: PUBMED:17620550 Lacunar lesions are independently associated with disability and cognitive impairment in CADASIL. Objective: Cerebral autosomal dominant arteriopathy with subcortical infarcts and leukoencephalopathy (CADASIL) is a hereditary arteriopathy caused by mutations of the Notch3 gene. The disease is characterized by T2-hyperintense lesions (subcortical white matter lesions), T1-hypointense lesions (lacunar lesions), and T2*-weighted gradient-echo (GE) lesions (cerebral microhemorrhages [CMs]) visualized on clinical MRI sequences and is considered as a model of "pure" subcortical ischemic vascular dementia. Although numerous studies have investigated the impact of white matter lesions in patients with CADASIL, the clinical importance of lacunar lesions remains unknown. Our purpose was to examine the influence of the visible MRI markers in the disease, including the load of lacunar lesions on cognitive impairment and disability in CADASIL. Methods: We collected clinical data from 147 consecutive patients enrolled in an ongoing two-center prospective cohort study. 
Degree of disability was assessed by modified Rankin scale and Barthel index. Degree of cognitive impairment was assessed by Mattis Dementia Rating Scale and Mini-Mental Status Examination. T1-weighted, fluid-attenuated inversion recovery, and GE images were obtained on a 1.5-T MRI. Volume and location of lacunar lesions, white matter hyperintensities (WMHs), and CMs were assessed. Results: There was a significant independent association between age, volume of lacunar lesions, and global cognitive function scales when analyzed in a multivariable model. In contrast, WMHs and CMs had no independent influence on cognitive function. Disability in this cohort was associated with volume of lacunar lesions, CMs, systolic blood pressure, and age but not with WMHs. Conclusions: Among the lesions observed on conventional MRI in cerebral autosomal dominant arteriopathy with subcortical infarcts and leukoencephalopathy (CADASIL), the overall lacunar lesion burden seems to have the most important impact on cognitive function and disability. These findings suggest that preventive strategies to decrease the risk of lacunar lesions as observed on MRI may reduce disease-related impairment in CADASIL. These results suggest that lacunar lesions may also play a key role in disability and cognitive impairment in more common forms of small-vessel disease. Answer: Cerebral white matter lesions (WML) have been studied to determine their influence on the progression from mild cognitive impairment (MCI) to dementia. The evidence suggests that the impact of WML on this progression may vary depending on the subtype of MCI and other factors. In a study examining the relationship between WML severity and time until conversion to dementia in a memory clinic cohort, it was found that WML severity did not influence the time to conversion to dementia for MCI patients in general or for amnestic MCI (aMCI) patients in particular. However, there was a tendency for non-amnestic MCI (naMCI) patients with a low degree of WML to survive without dementia for longer than those with a high degree of WML, although this was not statistically significant. This suggests that while MCI subtype is a significant predictor of conversion to dementia, with aMCI patients having a higher risk than naMCI, WML severity does not influence conversion to dementia for aMCI but might accelerate progression in naMCI (PUBMED:22874528). Other studies have explored the role of WML as a surrogate endpoint for trials in cerebral small-vessel disease, hypothesizing that measuring changes in the progression of WML may provide a surrogate marker in clinical trials where the primary outcomes are cognitive impairment and dementia (PUBMED:15249623). Additionally, progression of cerebral small vessel disease, which includes WML and lacunar infarcts, has been associated with cognitive decline and dementia, with risk factors such as higher age, high blood pressure, and smoking being independently associated with WML progression (PUBMED:18635849). Furthermore, a study on the influence of cerebral atrophy and WML on cognitive function in the elderly found that hippocampal atrophy had a significant negative correlation with cognitive function scores, suggesting that hippocampal atrophy might be more closely related to cognitive impairment than WML (PUBMED:15515734). 
In summary, while WML are associated with vascular risk factors and thought to indicate ischemic damage, their role in influencing the rate of progression from MCI to dementia is not clear-cut and may depend on the subtype of MCI and other individual factors. The evidence does not support a strong influence of WML severity on the conversion to dementia for aMCI patients, but there may be an effect on naMCI patients, although further research is needed to clarify this relationship (PUBMED:22874528).
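For readers who want to see how a "time until conversion to dementia" comparison of this kind is typically set up, the following is a minimal sketch using synthetic data and hypothetical column names; it is not an analysis of the cited cohort and assumes the Python lifelines package is available.

```python
# Minimal sketch (synthetic data, hypothetical columns) of a time-to-conversion
# analysis: MCI patients with low vs. high white matter lesion (WML) burden.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

rng = np.random.default_rng(1)
n = 120
wml_high = rng.integers(0, 2, n)        # 0 = low WML burden, 1 = high WML burden
amnestic = rng.integers(0, 2, n)        # MCI subtype (1 = amnestic)

# Simulated years until dementia conversion (shorter for amnestic MCI)
time = rng.exponential(scale=np.where(amnestic == 1, 2.5, 4.0))
converted = (time <= 5).astype(int)     # follow-up censored at 5 years
time = np.minimum(time, 5)

df = pd.DataFrame({"time": time, "converted": converted,
                   "wml_high": wml_high, "amnestic": amnestic})

# Kaplan-Meier estimate of dementia-free survival by WML burden
kmf = KaplanMeierFitter()
for label, grp in df.groupby("wml_high"):
    kmf.fit(grp["time"], grp["converted"], label=f"WML high={label}")
    print(label, kmf.median_survival_time_)

# Cox model: does WML burden predict conversion once MCI subtype is adjusted for?
cph = CoxPHFitter().fit(df, duration_col="time", event_col="converted")
cph.print_summary()
```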
Instruction: Can young adult patients with proteinuric IgA nephropathy perform physical exercise? Abstracts: abstract_id: PUBMED:15264183 Can young adult patients with proteinuric IgA nephropathy perform physical exercise? Background: It is not known whether physical exercise increases daily proteinuria in patients with proteinuric nephropathies, thus accelerating progression of the renal lesion. This study evaluates the acute effects of physical exercise on proteinuria in young adults with immunoglobulin A (IgA) nephropathy. Methods: Changes induced by intense physical exercise on quantitative and qualitative proteinuria were evaluated in basal conditions and after 10 days of ramipril therapy in 10 patients with IgA nephropathy, normal glomerular filtration rate (GFR), proteinuria between 0.8 and 1.49 g/24 h, and "glomerular" microhematuria before and after the end of a maximal treadmill Bruce test (B-test). The basal study also was performed in 10 age- and sex-matched healthy volunteers. Results: At rest, GFR averaged 141 +/- 23 mL/min; it increased by 16.3% +/- 3.3% (P < 0.005) and 7.1% +/- 1.6% at 60 and 120 minutes after the B-test, respectively. At rest, GFR-corrected proteinuria averaged protein of 0.76 +/- 0.21 mg/min/100 mL GFR; it increased to 1.55 +/- 0.28 mg/min/100 mL GFR after 60 minutes (P < 0.001) and declined to 0.60 +/- 0.11 mg/min/100 mL GFR at 120 minutes after the end of the B-test. The pattern of urinary proteins remained unchanged, as did microhematuria. Daily proteinuria was not different from the basal value on the day of the B-test. After ramipril therapy, patients showed a reduction in GFR, but no change in daily GFR-corrected proteinuria, pattern of urinary proteins, or hematuria. Conclusion: The increase in proteinuria after exercise in our patients is significant and is not prevented by ramipril therapy, but lasts less than 120 minutes. Therefore, it cannot modify daily proteinuria. Thus, these data do not support the need to reduce acute physical activity in patients with nonnephrotic renal diseases. abstract_id: PUBMED:26780894 Transition of adolescent and young adult patients with childhood-onset chronic kidney disease from pediatric to adult renal services: a nationwide survey in Japan. Background: Transition of adolescent and young adult (AYA) patients with childhood-onset chronic kidney diseases (C-CKD) from pediatric to adult renal services has received increasing attention. However, information on transition of Japanese patients with C-CKD is limited. Methods: The Transition Medicine Working Group, in collaboration with the Japanese Society for Nephrology, the Japanese Society for Pediatric Nephrology and the Japanese Society of Pediatric Urology, conducted a retrospective cross-sectional study in 2014 on issues concerning the transition of Japanese patients with C-CKD. Results: Few institutions in Japan had transition programs and/or transition coordinators for patients with C-CKD. Refusal to transfer by patients or their families, lack of concern about transition and inability to decide on transfer were common reasons for non-transfer of patients still followed by pediatric renal services. Around 25 % of patients who had ended or interrupted follow-up by pediatric renal services presented to adult renal services because of symptoms associated with C-CKD. Patients with various types of childhood-onset nephrourological diseases were transferred from pediatric to adult renal services. 
IgA nephropathy, minimal change nephrotic syndrome and congenital anomalies of the kidney and urinary tract were the most frequent primary kidney diseases in adult patients with C-CKD. Conclusion: These survey results indicate the need for introduction of transitional care for Japanese AYA patients with C-CKD. Consensus guidelines for the optimal clinical management of AYA patients with C-CKD are required to ensure the continuity of care from child to adult renal services. abstract_id: PUBMED:21366514 The origin and the clinical significance of urinary angiotensinogen in proteinuric IgA nephropathy patients. Background: Urinary angiotensinogen (AGT) was reported as a marker of renal injury in chronic kidney disease patients. However, the main source of urinary AGT is unknown in proteinuric patients because the disrupted filtration barrier might cause AGT filtration. We investigated the origin and the clinical importance of urinary AGT in proteinuric IgA nephropathy (IgAN) patients. Methods: In patients with biopsy-proven IgAN, urinary and plasma AGT was measured using a sandwich ELISA and compared with intrarenal AGT expression. The patients were followed up for 3 years. Results: Natural logarithm of the urinary AGT/creatinine (ln (urinary AGT/Cr)) was positively correlated with intrarenal expression of AGT (ln (urinary AGT/Cr) versus AGT/β-actin, r = 0.620, P < 0.0001; ln (urinary AGT/Cr) versus AGT density, r = 0.452, P = 0.007). Ln (urinary AGT/Cr) showed a positive correlation with urinary protein/creatinine ratio (PCR) but a negative correlation with estimated glomerular filtration rate (eGFR). Regression analyses showed that ln (urinary AGT/Cr) was a significant determinant of urinary PCR and eGFR 3 years after biopsy. Conclusions: Urinary AGT reflects intrarenal AGT expression and correlates with the extent of proteinuria and renal function. Our study indicates the intrarenal compartment as the main source of urinary AGT, suggesting its clinical implication as an important biomarker in proteinuric IgAN patients. abstract_id: PUBMED:22532142 Tubular NF-κB is overexpressed in proteinuric patients with IgA nephropathy. Increasing evidence suggests that nuclear factor κB (NF-κB) plays a pivotal role in many glomerulopathies. Therefore, the aim of the present study was to determine the tubular immunoexpression of NF-κB in non-proteinuric (n = 22) and proteinuric patients (n = 16) with IgA nephropathy (IgAN). Another purpose of this study was to examine the possible relationship between NF-κB immunoexpression and proteinuria, interstitial fibrosis as well as interstitial infiltrates. Tubular immunoexpression of NF-κB, interstitial monocytes/macrophages, T lymphocytes, B lymphocytes and interstitial area were determined using a computer image analysis system. The mean values of the tubular immunoexpression of NF-κB, interstitial area and interstitial monocytes/macrophages were in proteinuric IgAN patients significantly increased compared to non-proteinuric IgAN cases, whereas interstitial T and B lymphocytes did not differ between these groups. In proteinuric patients, tubular immunoexpression of NF-κB was highly significantly positively correlated with the degree of proteinuria. Moreover, in both the non-proteinuric and the proteinuric groups with IgAN, tubular immunoexpression of NF-κB was positively correlated with the interstitial area and interstitial monocytes/macrophages. 
Our findings raise the possibility that proteinuria causes tubular overexpression of NF-κB and, in the process, recruitment of monocytes/macrophages and tubulointerstitial injury in IgAN patients. abstract_id: PUBMED:8770971 A prospective study of the natural history of idiopathic non-proteinuric hematuria. In a prospective study of idiopathic glomerulonephritis we determined the natural history of 49 adult patients (12 primary IgA nephropathy, 13 thin GBM nephropathy, 20 normal renal tissue and 4 miscellaneous nephropathies) who presented with idiopathic non-proteinuric non-azotemic hematuria of at least six months duration, in the absence of hypertension and with a negative urological work-up. The median follow-up was 11 years with a range of 8 to 14 years. At the end of the follow-up, renal function had remained stable in all subsets except for those with miscellaneous disease. Hematuria was still present in all patients with thin GBM nephropathy, in all but two patients with IgA nephropathy who went into immunopathological remission, in three out of four miscellaneous nephropathies, and in seven out of 20 patients with normal renal tissue. Of the latter patients five had a history suggestive of urolithiasis at follow-up, which was in the absence of hypercalciuria and hyperuricosuria. Seven thin GBM patients, five IgA nephropathy patients and three miscellaneous nephropathies developed hypertension; the incidence of hypertension in each subset was significantly higher than in patients with normal renal tissue. This study shows that in young adults with idiopathic chronic non-proteinuric hematuria of four years duration, renal biopsy will give a definite diagnosis in 86% of the patients, and that those patients with so-called minor glomerular diseases are at high risk for hypertension. Those patients with normal renal tissue have a high incidence of urolithiasis and should have a urological follow-up. abstract_id: PUBMED:12803507 Increased urinary excretion of monocyte chemoattractant protein-1 in proteinuric renal diseases. Monocyte chemoattractant protein-1 (MCP-1) is a chemokine that is produced mainly by tubular epithelial cells in kidney and contributes to renal interstitial inflammation and fibrosis. More recently, we have demonstrated that urinary MCP-1 excretion is increased in proportion to the degree of albuminuria (proteinuria) and positively correlated with urinary N-acetylglucosaminidase (NAG) levels in type 2 diabetic patients. Based on these findings, we have suggested that heavy proteinuria, itself, probably aggravates renal tubular damage and accelerates the disease progression in diabetic nephropathy by increasing the MCP-1 expression in renal tubuli. In the present study, to evaluate whether urinary MCP-1 excretion is increased in the proteinuric states not only in diabetic nephropathy but also in other renal diseases, we examined urinary MCP-1 levels in IgA nephropathy patients with macroalbuminuria (IgAN group; n = 6), and compared the results with the data obtained from type 2 diabetic patients with overt diabetic nephropathy (DN group; n = 23) and those without diabetic nephropathy (non-DN group; n = 27). Urinary MCP-1 excretion levels in non-DN, DN, IgAN groups were 157.2 (52.8-378.5), 346.1 (147.0-1276.7), and 274.4 (162.2-994.5) ng/g creatinine, median (range), respectively. Expectedly, urinary MCP-1 and NAG excretion levels in DN and IgAN groups were significantly elevated as compared with non-DN group. 
Therefore, we suggest that MCP-1 expression in renal tubuli is enhanced in proteinuric states, irrespective of the types of renal disease, and that increased MCP-1 expression probably contributes to renal tubular damage in proteinuric states. abstract_id: PUBMED:23029130 Involvement of the tubular ClC-type exchanger ClC-5 in glomeruli of human proteinuric nephropathies. Unlabelled: Glomerular protein handling mechanisms have received much attention in studies of nephrotic syndrome. Histopathological findings in renal biopsies from severely proteinuric patients support the likelihood of protein endocytosis by podocytes. ClC-5 is involved in the endocytosis of albumin in the proximal tubule. Aim: To investigate whether ClC-5 is expressed in the glomerular compartment and whether it has a role in proteinuric nephropathies. ClC-5 expression was studied using Real-time PCR in manually- and laser-microdissected biopsies from patients with type 2 diabetes (n = 37) and IgA nephropathy (n = 10); in biopsies of membranous glomerulopathy (MG) (n = 14), immunohistochemistry for ClC-5 (with morphometric analysis) and for WT1 was done. Controls: cortical tissue (n = 23) obtained from unaffected parts of tumor-related nephrectomy specimens. Results: ClC-5 was expressed at glomerular level in all biopsies. Glomerular ClC-5 levels were significantly higher in diabetic nephropathy and MG at both mRNA and protein level (p<0.002; p<0.01). ClC-5 and WT1 double-staining analysis in MG showed that ClC-5 was localized in the podocytes. ClC-5 ultrastructural immunolocalization was demonstrated in podocyte foot processes. Our study is the first to demonstrate that ClC-5 is expressed in human podocytes. The ClC-5 overexpression found in biopsies of proteinuric patients suggests that proteinuria may play a part in its expression and that podocytes are likely to have a key role in albumin handling in proteinuric states. abstract_id: PUBMED:15996647 Messenger RNA expression of glomerular podocyte markers in the urinary sediment of acquired proteinuric diseases. Background: Podocyte slit diaphragm plays an important role in the control of glomerular permeability. We hypothesize that studying the gene expression profile of podocytes in urinary sediment may provide diagnostic and prognostic information on acquired proteinuric diseases. Methods: We studied 28 patients who required kidney biopsy for acquired proteinuric diseases (diabetic glomerulosclerosis, 9 cases; IgA nephropathy, 10 cases; minimal change disease, 5 cases; membranous nephropathy, 5 cases). We also studied 10 cases of diabetic microalbuminuria and 9 healthy controls. The mRNA expressions of nephrin (NephRNA), podocin (PodRNA) and synaptopodin (SynRNA) in urinary sediment were measured by real time quantitative PCR. After recruitment, all patients were followed for at least 12 months. Results: There were significant differences in the NephRNA and PodRNA in the urinary sediment between diagnosis groups (p<0.005). On the other hand, SynRNA was only marginally significant between diagnosis groups (p<0.05). Although statistically significant, the degree of proteinuria had only modest correlations with the urinary expression of nephrin. After a median follow-up of 23 months, there was a significant correlation between the rate of decline in renal function and NephRNA (r=0.559, p=0.001) and PodRNA (r=0.530, p=0.002), but not SynRNA (r=0.054, p=NS). 
The correlation remained statistically significant after multivariate analysis to adjust for the degree of proteinuria and initial renal function. Conclusions: Urinary mRNA expression of podocyte markers, such as nephrin and podocin, are significantly different between proteinuric disease categories. Further, NephRNA and PodRNA correlated with the rate of decline in renal function. Our results suggest that urinary podocyte gene expression may be a useful non-invasive tool which provides additional information for the management of proteinuric diseases. abstract_id: PUBMED:37595842 Risk for graft loss in pediatric and young adult kidney transplant recipients due to recurrent IgA nephropathy. IgA nephropathy (IgAN) is associated with a risk for posttransplant recurrence. Data are limited regarding graft loss attributable to recurrence of IgAN among pediatric and young adult kidney transplant (KT) recipients. This was a retrospective cohort study of patients aged 0 to 25 years from the Scientific Registry of Transplant Recipients who received a primary KT for IgAN. Patients with history of KT attributable to renal dysplasia were comparators. Outcomes included the incidence of graft loss attributable to IgAN recurrence, association with donor type, and posttransplant corticosteroid use. In total, 5475 transplant recipients were included, with 1915 patients with IgAN and 3560 patients with renal dysplasia. In a multivariable Cox proportional hazards model, IgAN was associated with higher risk of graft loss (adjusted hazard ratio [aHR], 1.35; 95% CI, 1.21-1.50; P < .001) compared with dysplasia. Graft loss was attributed to recurrent disease in 5.4% of patients with IgAN. In a multivariable competing risks analysis, patients with IgAN receiving a parental living-donor kidney were more likely to report graft loss from recurrent disease compared with patients with a nonparental living donor (aHR, 0.52; 95% CI, 0.31-0.91; P = .02). Posttransplant prednisone use was not associated with improved graft survival (P = .2). These data challenge existing paradigms in posttransplant management of patients with IgAN. abstract_id: PUBMED:10752529 Complement activation products in the urine from proteinuric patients. The presence of plasma proteins in the tubular lumen has variety of adverse effects on the tubular cells. Among various plasma proteins filtered through glomerular barrier, complement has been proven as the possible candidate inducing tubulointerstitial injury. To study the role of intratubular complement activation in proteinuric patients, complement activation products (CAP) at C3 level (iC3b and Bb) and C9 level (membrane attack complex) were measured in both plasma and urine of patients with minimal change nephrotic syndrome (MCNS), focal glomerular sclerosis, IgA nephropathy, membranous nephropathy, and diabetic nephropathy. For evaluation of the effect of metabolic acidosis on the intratubular complement activation, urinary CAP were measured before and after sodium bicarbonate administration in patients with renal insufficiency. The following results were obtained: (1) Patients with focal glomerular sclerosis and diabetic nephropathy showed the highest level of urinary CAP excretion rate (unit/creatinine), while MCNS revealed no increase. (2) Patients with membranous nephropathy showed a unique finding, i.e., isolated increase of membrane attack complex excretion. (3) There was no significant correlation between urine and plasma levels of CAP. 
(4) Except for MCNS patients, the urinary excretion rate of CAP significantly increased when the level of proteinuria exceeded the nephrotic range, and it was significantly correlated with the serum creatinine level. (5) Urinary CAP excretion rate significantly decreased 2 wk after sodium bicarbonate administration without affecting the level of proteinuria or plasma CAP. These results suggest that the degree of intratubular complement activation correlates with the level of proteinuria, type of glomerular disease, impairment of renal function, and metabolic acidosis. Answer: Yes, young adult patients with proteinuric IgA nephropathy can perform physical exercise. A study evaluated the acute effects of physical exercise on proteinuria in young adults with IgA nephropathy and found that while intense physical exercise did increase proteinuria significantly, this increase was transient and lasted less than 120 minutes. Importantly, the daily proteinuria was not affected by the exercise on the day of the test. The study concluded that these data do not support the need to reduce acute physical activity in patients with nonnephrotic renal diseases, suggesting that young adult patients with proteinuric IgA nephropathy can engage in physical exercise without the concern of exacerbating their condition (PUBMED:15264183).
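As a worked illustration of the unit used in that exercise study (milligrams of protein per minute per 100 mL of GFR), the sketch below uses made-up timed-collection values chosen only so that the outputs land near the reported resting and post-exercise means; they are not the study's raw data.

```python
# Worked example of GFR-corrected proteinuria (mg protein/min per 100 mL of GFR).
# The timed-collection values below are illustrative, not data from the cited study.
def gfr_corrected_proteinuria(urine_protein_mg, collection_min, gfr_ml_min):
    """Protein excretion rate normalized to each 100 mL/min of GFR."""
    excretion_rate = urine_protein_mg / collection_min   # mg/min
    return excretion_rate / (gfr_ml_min / 100.0)         # mg/min per 100 mL GFR

# Hypothetical collections at rest and 60 min after a maximal treadmill test
rest = gfr_corrected_proteinuria(urine_protein_mg=64, collection_min=60, gfr_ml_min=140)
post = gfr_corrected_proteinuria(urine_protein_mg=150, collection_min=60, gfr_ml_min=163)
print(f"rest: {rest:.2f}  post-exercise: {post:.2f} mg/min/100 mL GFR")
```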
Instruction: Is the new Mayo Clinic Quadratic equation useful for the estimation of glomerular filtration rate in type 2 diabetic patients? Abstracts: abstract_id: PUBMED:36175599 Plasma galectin-3 concentration and estimated glomerular filtration rate in patients with type 2 diabetes with and without albuminuria. This study aimed to investigate the association between galectin-3 concentration and estimated glomerular filtration rate (eGFR) in patients with type 2 diabetes mellitus (T2DM) with and without albuminuria. In this cross-sectional study, we examined 334 patients with T2DM. The eGFR was calculated using a creatinine-based formula (eGFRcrea) and a combined creatinine-cystatin C equation (eGFRcrea-cyst). The participants were categorized into two groups based on the urinary albumin-to-creatinine ratio (UACR): patients without albuminuria (UACR < 30 mg/g) and those with albuminuria (UACR ≥ 30 mg/g). Greater concentrations of plasma galectin-3 were associated with lower eGFRcrea-cyst and eGFRcrea levels in patients with and without albuminuria. Plasma galectin-3 concentrations were negatively correlated with eGFRcrea-cyst in patients with normoalbuminuria and albuminuria (γ = - 0.405, P < 0.001; γ = - 0.525, P < 0.001, respectively). Galectin-3 concentrations were significantly associated with eGFRcrea-cyst after adjusting for sex, age, and other confounding factors, including UACR as a categorical or continuous variable in multiple regression analyses (β = - 0.294, 95% CI - 70.804 to - 41.768, P < 0.001; β = - 0.265, 95% CI - 65.192 to - 36.550, P < 0.001, respectively). Likewise, when eGFRcrea-cyst was treated in place of eGFRcrea, this result was replicated in the correlation and regression analyses. Galectin-3 concentration was negatively associated with eGFR in patients with T2DM, independent of albuminuria status. abstract_id: PUBMED:26932693 An estimated glomerular filtration rate equation for the full age spectrum. Background: Glomerular filtration rate (GFR) is accepted as the best indicator of kidney function and is commonly estimated from serum creatinine (SCr)-based equations. Separate equations have been developed for children (Schwartz equation), younger and middle-age adults [Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation] and older adults [Berlin Initiative Study 1 (BIS1) equation], and these equations lack continuity with ageing. We developed and validated an equation for estimating the glomerular filtration rate that can be used across the full age spectrum (FAS). Methods: The new FAS equation is based on normalized serum creatinine (SCr/Q), where Q is the median SCr from healthy populations to account for age and sex. Coefficients for the equation are mathematically obtained by requiring continuity during the paediatric-adult and adult-elderly transition. Research studies containing a total of 6870 healthy and kidney-diseased white individuals, including 735 children, <18 years of age, 4371 adults, between 18 and 70 years of age, and 1764 older adults, ≥70 years of age with measured GFR (inulin, iohexol and iothalamate clearance) and isotope dilution mass spectrometry-equivalent SCr, were used for the validation. Bias, precision and accuracy (P30) were evaluated. 
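Before the validation results that follow, a minimal sketch of the FAS form just described may help make it concrete. The constants used below (107.3, the 0.988 factor beyond age 40, and adult Q values of roughly 0.70/0.90 mg/dL for women/men) are the commonly cited published values rather than figures quoted in this abstract, so treat them as assumptions to verify against the original FAS paper.

```python
# Minimal sketch of the FAS idea: eGFR from serum creatinine normalized to a
# healthy-population median Q. Constants are assumed, not taken from the abstract.
def fas_egfr(scr_mg_dl: float, age: float, sex: str) -> float:
    """Full Age Spectrum eGFR in mL/min/1.73 m^2 (adult form, sketch)."""
    q = 0.70 if sex.lower().startswith("f") else 0.90   # assumed median SCr of healthy adults
    egfr = 107.3 / (scr_mg_dl / q)
    if age > 40:
        egfr *= 0.988 ** (age - 40)                      # smooth decline with ageing
    return egfr

print(round(fas_egfr(0.90, 30, "male"), 1))    # ~107, normal creatinine, young adult
print(round(fas_egfr(1.20, 75, "female"), 1))  # lower: higher creatinine, older age
```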
Results: The FAS equation was less biased [-1.7 (95% CI -3.4, -0.2) versus 6.0 (4.5, 7.5)] and more accurate [87.5% (85.1, 89.9) versus 83.8% (81.1, 86.5)] than the Schwartz equation for children and adolescents; less biased [5.0 (4.5, 5.5) versus 6.3 (5.9, 6.8)] and as accurate [81.6% (80.4, 82.7) versus 81.9% (80.7, 83.0)] as the CKD-EPI equation for young and middle-age adults; and less biased [-1.1 (-1.6, -0.6) versus 5.6 (5.1, 6.2)] and more accurate [86.1% (84.4, 87.7) versus 81.8% (79.7, 84.0)] than CKD-EPI for older adults. Conclusions: The FAS equation has improved validity and continuity across the full age-spectrum and overcomes the problem of implausible eGFR changes in patients which would otherwise occur when switching between more age-specific equations. abstract_id: PUBMED:34943527 Estimating Glomerular Filtration Rate from Serum Myo-Inositol, Valine, Creatinine and Cystatin C. Assessment of renal function relies on the estimation of the glomerular filtration rate (eGFR). Existing eGFR equations, usually based on serum levels of creatinine and/or cystatin C, are not uniformly accurate across patient populations. In the present study, we expanded a recent proof-of-concept approach to optimize an eGFR equation targeting the adult population with and without chronic kidney disease (CKD), based on a nuclear magnetic resonance spectroscopy (NMR) derived 'metabolite constellation' (GFRNMR). A total of 1855 serum samples were partitioned into development, internal validation and external validation datasets. The new GFRNMR equation used serum myo-inositol, valine, creatinine and cystatin C plus age and sex. GFRNMR had a lower bias to tracer measured GFR (mGFR) than existing eGFR equations, with a median bias (95% confidence interval [CI]) of 0.0 (-1.0; 1.0) mL/min/1.73 m2 for GFRNMR vs. -6.0 (-7.0; -5.0) mL/min/1.73 m2 for the Chronic Kidney Disease Epidemiology Collaboration equation that combines creatinine and cystatin C (CKD-EPI2012) (p < 0.0001). Accuracy (95% CI) within 15% of mGFR (1-P15) was 38.8% (34.3; 42.5) for GFRNMR vs. 47.3% (43.2; 51.5) for CKD-EPI2012 (p < 0.010). Thus, GFRNMR holds promise as an alternative way to assess eGFR with superior accuracy in adult patients with and without CKD. abstract_id: PUBMED:30815379 Associations of renal function with diabetic retinopathy and visual impairment in type 2 diabetes: A multicenter nationwide cross-sectional study. Background: Diabetic retinopathy (DR) separately has been noted as a major public health problem worldwide as well. Currently, many studies have demonstrated an association between diabetic nephropathy and DR in type 1 diabetes mellitus (T1DM) patients, but this association is less strong in T2DM. The evidence for an association between renal function and DR and visual impairment among T2DM patients is limited, particularly in the Asian population. Aim: To assess the association between glomerular filtration rate (GFR) and DR, severe DR, and severe visual impairment among T2DM patients in Thailand. Methods: We conducted a nationwide cross-sectional study based on the DM/HT study of the Medical Research Network of the Consortium of Thai Medical Schools. This study evaluated adult T2DM patients from 831 public hospitals in Thailand in the year 2013. GFR was categorized into ≥ 90, 60-89, 30-59 and < 30 mL/min/1.73 m2. The association between GFR and DR, severe DR, and severe visual impairment were assessed using multivariate logistic regression. 
Results: A total of 13192 T2DM patients with available GFR were included in the analysis. The mean GFR was 66.9 ± 25.8 mL/min/1.73 m2. The prevalence of DR, proliferative DR, diabetic macular edema, and severe visual impairment were 12.4%, 1.8%, 0.2%, and 2.1%, respectively. Patients with GFR of 60-89, 30-59 and < 30 mL/min/1.73 m2 were significantly associated with increased DR and severe DR when compared with patients with GFR of ≥ 90 mL/min/1.73 m2. In addition, increased severe visual impairment was associated with GFR 30-59 and < 30 mL/min/1.73 m2. Conclusion: Decreased GFR was independently associated with increased DR, severe DR, and severe visual impairment. GFR should be monitored in diabetic patients for DR awareness and prevention. abstract_id: PUBMED:32905314 Prospects for improved glomerular filtration rate estimation based on creatinine-results from a transnational multicentre study. Background: The Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) creatinine equation is routinely used to assess renal function but exhibits varying accuracy depending on patient characteristics and clinical presentation. The overall aim of the present study was to assess if and to what extent glomerular filtration rate (GFR) estimation based on creatinine can be improved. Methods: In a cross-sectional analysis covering the years 2003-17, CKD-EPI was validated against measured GFR (mGFR; using various tracer methods) in patients with high likelihood of chronic kidney disease (CKD; five CKD cohorts, n = 8365) and in patients with low likelihood of CKD (six community cohorts, n = 6759). Comparisons were made with the Lund-Malmö revised equation (LMR) and the Full Age Spectrum equation. Results: In patients aged 18-39 years, CKD-EPI overestimated GFR by a median of 5.0-16 mL/min/1.73 m2 in both cohort types at mGFR levels <120 mL/min/1.73 m2. LMR had greater accuracy than CKD-EPI in the CKD cohorts (P30, the percentage of estimated GFR within 30% of mGFR, 83.5% versus 76.6%). CKD-EPI was generally the most accurate equation in the community cohorts, but all three equations reached P30 above the Kidney Disease Outcomes Quality Initiative benchmark of 90%. Conclusions: None of the evaluated equations made optimal use of available data. Prospects for improved GFR estimation procedures based on creatinine exist, particularly in young adults and in settings where patients with suspected or manifest CKD are investigated. abstract_id: PUBMED:37370648 Performance of Nuclear Magnetic Resonance-Based Estimated Glomerular Filtration Rate in a Real-World Setting. An accurate estimate of glomerular filtration rate (eGFR) is essential for proper clinical management, especially in patients with kidney dysfunction. This prospective observational study evaluated the real-world performance of the nuclear magnetic resonance (NMR)-based GFRNMR equation, which combines creatinine, cystatin C, valine, and myo-inositol with age and sex. We compared GFRNMR performance to that of the 2021 CKD-EPI creatinine and creatinine-cystatin C equations (CKD-EPI2021Cr and CKD-EPI2021CrCys), using 115 fresh routine samples of patients scheduled for urinary iothalamate clearance measurement (mGFR). Median bias to mGFR of the three eGFR equations was comparably low, ranging from 0.4 to 2.0 mL/min/1.73 m2. GFRNMR outperformed the 2021 CKD-EPI equations in terms of precision (interquartile range to mGFR of 10.5 vs. 17.9 mL/min/1.73 m2 for GFRNMR vs. CKD-EPI2021CrCys; p = 0.01) and accuracy (P15, P20, and P30 of 66.1% vs. 
48.7% [p = 0.007], 80.0% vs. 60.0% [p < 0.001] and 95.7% vs. 86.1% [p = 0.006], respectively, for GFRNMR vs. CKD-EPI2021CrCys). Clinical parameters such as etiology, comorbidities, or medications did not significantly alter the performance of the three eGFR equations. Altogether, this study confirmed the utility of GFRNMR for accurate GFR estimation, and its potential value in routine clinical practice for improved medical care. abstract_id: PUBMED:35626276 Analytical Validation of GFRNMR: A Blood-Based Multiple Biomarker Assay for Accurate Estimation of Glomerular Filtration Rate. Accurate and precise monitoring of kidney function is critical for a timely and reliable diagnosis of chronic kidney disease (CKD). The determination of kidney function usually involves the estimation of the glomerular filtration rate (eGFR). We recently reported the clinical performance of a new eGFR equation (GFRNMR) based on the nuclear magnetic resonance (NMR) measurement of serum myo-inositol, valine, and creatinine, in addition to the immunoturbidometric quantification of serum cystatin C, age and sex. We now describe the analytical performance evaluation of GFRNMR according to the Clinical and Laboratory Standards Institute guidelines. Within-laboratory coefficients of variation (CV%) of the GFRNMR equation did not exceed 4.3%, with a maximum CV% for repeatability of 3.7%. Between-site reproducibility (three sites) demonstrated a maximum CV% of 5.9%. GFRNMR stability was demonstrated for sera stored for up to 8 days at 2-10°C and for NMR samples stored for up to 10 days in the NMR device at 6 ± 2°C. Substance interference was limited to 4/40 (10.0%) of the investigated substances, resulting in an underestimated GFRNMR (for glucose and metformin) or a loss of results (for naproxen and ribavirin) for concentrations twice as high as usual clinical doses. The analytical performances of GFRNMR, combined with its previously reported clinical performance, support the potential integration of this NMR method into clinical practice. abstract_id: PUBMED:28932781 Data on the relation between renal biomarkers and measured glomerular filtration rate. The data presented in this article are related to the research article entitled "The Diagnostic Value of Rescaled Renal Biomarkers Serum Creatinine and Serum Cystatin C and their Relation with Measured Glomerular Filtration Rate" (Pottel et al. (2017) [1]). Data are presented demonstrating the rationale for the normalization or rescaling of serum cystatin C, equivalent to the rescaling of serum creatinine. Rescaling biomarkers brings them to a notionally common scale with reference interval [0.67-1.33]. This article illustrates the correlation between rescaled biomarkers serum creatinine and serum cystatin C by plotting them in a 2-dimensional graph. The diagnostic value in terms of sensitivity and specificity with measured Glomerular Filtration Rate as the reference method is calculated per age-decade for both rescaled biomarkers. Finally, the interchangeability between detecting impaired kidney function from renal biomarkers and from the Full Age Spectrum FAS-estimating GFR-equation and measured GFR using a fixed and an age-dependent threshold is shown. abstract_id: PUBMED:20300844 Increased prevalence of reduced estimated glomerular filtration rate in chronic hepatitis C patients. Background: To investigate the prevalence and severity of reduced estimated glomerular filtration rate (eGFR) in patients with chronic hepatitis C (CHC). 
Methods: Medical record review of 831 consecutive CHC patients seen in our clinic between July 2000 and August 2003; eGFR was estimated using the abbreviated Modification of Diet in Renal Disease (aMDRD) equation. The stage of kidney disease was determined based on eGFR expressed in milliliters per minute per 1.73 m2: stage 1 (signs of kidney damage but normal or elevated eGFR, ≥ 90), stage 2 (eGFR 60-89), stage 3 (eGFR 30-59), stage 4 (eGFR 15-29), stage 5 (eGFR < 15 or dialysis-dependent). Results: A total of 522 patients had available data; using the aMDRD equation, 51% had abnormal eGFR (stage 1, 4.6%; stage 2, 36.4%; stage 3 or 4, 6.1%; stage 5, 3.8%). Of 190 patients with stage 2 kidney disease, 189 patients (99.5%) had normal serum creatinine and only one patient (0.5%) had elevated creatinine concentrations (>1.4 mg/dl). Of the 32 patients with stage 3 or 4 disease, 20 (62.5%) had a normal serum creatinine concentration. Of 349 patients without diseases known to cause renal insufficiency, 38% had stage 2-4 renal disease. In a subset of these patients (95/522, 18%), the measured creatinine clearance showed good correlation with their aMDRD eGFR (R = 0.47, p < 0.0001). Conclusions: In CHC patients, a normal serum creatinine concentration does not assure normal kidney function. Estimation of eGFR with the aMDRD equation is a more accurate method of identifying patients with chronic kidney disease and reduced eGFR. Therefore, CHC patients should be screened more rigorously for chronic kidney disease because of the high prevalence of reduced eGFR. Lastly, in all CHC patients, the aMDRD eGFR should be used in each encounter with these patients when assessing their renal function irrespective of their serum creatinine. abstract_id: PUBMED:37257905 Race-free estimated glomerular filtration rate equation in kidney transplant recipients: development and validation study. Objective: To compare the performance of a newly developed race-free kidney recipient specific glomerular filtration rate (GFR) equation with the three current main equations for measuring GFR in kidney transplant recipients. Design: Development and validation study. Setting: 17 cohorts in Europe, the United States, and Australia (14 transplant centres, three clinical trials). Participants: 15 489 adults (3622 in development cohort (Necker, Saint Louis, and Toulouse hospitals, France), 11 867 in multiple external validation cohorts) who received kidney transplants between 1 January 2000 and 1 January 2021. Main Outcome Measure: The main outcome measure was GFR, measured according to local practice. Performance of the GFR equations was assessed using P30 (proportion of estimated GFR (eGFR) within 30% of measured GFR (mGFR)) and correct classification (agreement between eGFR and mGFR according to GFR stages). The race-free equation, based on creatinine level, age, and sex, was developed using additive and multiplicative linear regressions, and its performance was compared with the three current main GFR equations: Modification of Diet in Renal Disease (MDRD) equation, Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) 2009 equation, and race-free CKD-EPI 2021 equation. Results: The study included 15 489 participants, with 50 464 mGFR and eGFR values. The mean GFR was 53.18 mL/min/1.73m2 (SD 17.23) in the development cohort and 55.90 mL/min/1.73m2 (19.69) in the external validation cohorts.
Among the current GFR equations, the race-free CKD-EPI 2021 equation showed the lowest performance compared with the MDRD and CKD-EPI 2009 equations. When race was included in the kidney recipient specific GFR equation, performance did not increase. The race-free kidney recipient specific GFR equation showed significantly improved performance compared with the race-free CKD-EPI 2021 equation and performed well in the external validation cohorts (P30 ranging from 73.0% to 91.3%). The race-free kidney recipient specific GFR equation performed well in several subpopulations of kidney transplant recipients stratified by race (P30 73.0-91.3%), sex (72.7-91.4%), age (70.3-92.0%), body mass index (64.5-100%), donor type (58.5-92.9%), donor age (68.3-94.3%), treatment (78.5-85.2%), creatinine level (72.8-91.3%), GFR measurement method (73.0-91.3%), and timing of GFR measurement post-transplant (72.9-95.5%). An online application was developed that estimates GFR based on recipient's creatinine level, age, and sex (https://transplant-prediction-system.shinyapps.io/eGFR_equation_KTX/). Conclusion: A new race-free kidney recipient specific GFR equation was developed and validated using multiple, large, international cohorts of kidney transplant recipients. The equation showed high accuracy and outperformed the race-free CKD-EPI 2021 equation that was developed in individuals with native kidneys. Trial Registration: ClinicalTrials.gov NCT05229939. Answer: The provided abstracts do not mention the Mayo Clinic Quadratic equation, which is another method for estimating glomerular filtration rate (GFR). Instead, they discuss various other equations and methods for estimating GFR, such as the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation, the Full Age Spectrum (FAS) equation, and a nuclear magnetic resonance (NMR)-based equation known as GFRNMR. These abstracts focus on the development, validation, and performance of these different equations in various populations, including those with type 2 diabetes mellitus (T2DM), chronic kidney disease (CKD), and kidney transplant recipients. For example, one study investigated the association between plasma galectin-3 concentration and estimated GFR in patients with T2DM with and without albuminuria, using creatinine-based and combined creatinine-cystatin C equations (PUBMED:36175599). Another study developed and validated the FAS equation for estimating GFR across the full age spectrum (PUBMED:26932693). The GFRNMR equation, which includes serum myo-inositol, valine, creatinine, and cystatin C, was also evaluated and showed promise as an alternative way to assess eGFR with superior accuracy in adult patients with and without CKD (PUBMED:34943527, PUBMED:37370648, PUBMED:35626276). Additionally, the association between renal function and diabetic retinopathy and visual impairment in T2DM patients was examined in a multicenter nationwide cross-sectional study (PUBMED:30815379). Another study assessed the prospects for improved GFR estimation based on creatinine in a transnational multicentre study (PUBMED:32905314). Lastly, a study compared the performance of a newly developed race-free kidney recipient specific GFR equation with the three current main equations for measuring GFR in kidney transplant recipients (PUBMED:37257905). 
In conclusion, while the abstracts provide valuable information on various GFR estimation methods, they do not specifically address the utility of the Mayo Clinic Quadratic equation for estimating GFR in type 2 diabetic patients. Therefore, based on the provided abstracts, it is not possible to determine the usefulness of the Mayo Clinic Quadratic equation in this patient population.
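As an illustration of the estimating equations and accuracy metrics discussed in this block, the short Python sketch below implements the race-free CKD-EPI 2021 creatinine equation and the P30 statistic. This is not taken from any of the cited abstracts: the coefficients are quoted from the published 2021 equation as commonly reported and should be verified against the original publication before any real use, and the function names and example inputs are illustrative assumptions only.

def ckd_epi_2021_creatinine(scr_mg_dl, age_years, female):
    # Estimated GFR in mL/min/1.73 m2 from serum creatinine (mg/dL), age and sex,
    # using the race-free CKD-EPI 2021 creatinine equation.
    kappa = 0.7 if female else 0.9
    alpha = -0.241 if female else -0.302
    egfr = (142
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.200
            * 0.9938 ** age_years)
    return egfr * 1.012 if female else egfr

def p30(egfr_values, mgfr_values):
    # P30: proportion of estimates falling within +/-30% of measured GFR.
    pairs = list(zip(egfr_values, mgfr_values))
    within = sum(1 for e, m in pairs if abs(e - m) <= 0.30 * m)
    return within / len(pairs)

# Hypothetical example: a 65-year-old woman with serum creatinine 1.1 mg/dL,
# and a toy set of eGFR/mGFR pairs for the P30 calculation.
print(round(ckd_epi_2021_creatinine(1.1, 65, female=True), 1))
print(p30([55, 80, 40], [60, 70, 65]))

The P30 value returned by such a calculation is what the validation studies above compare against the 90% benchmark.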
Instruction: Comparison of ambulatory blood pressure measurement with home, office and pharmacy measurements: is arterial blood pressure measured at pharmacy reliable? Abstracts: abstract_id: PUBMED:26303331 Comparison of ambulatory blood pressure measurement with home, office and pharmacy measurements: is arterial blood pressure measured at pharmacy reliable? Rationale, Aims And Objectives: Standardizing arterial blood pressure (BP) measurement is difficult because of different performers, such as a doctor or a pharmacy employee. We investigated the reliability between different BP measurement methods. Methods: The study was conducted in an internal medicine service with 160 patients in Ankara, Turkey. First, the subjects' BP was measured by the doctor. Then, 24-hour BP monitoring devices were placed. Participants were then instructed to measure their BPs both at home and at the pharmacy. The next day, arterial BP was measured by the doctor for the second time. Results: The prevalence rates of masked and white coat hypertension were 8.8% (n = 14) and 8.1% (n = 13), respectively. There were no statistically significant differences between ambulatory measurement and home, office and pharmacy measurements (P > 0.05). The consistency rate between ambulatory and home measurements was 97.5% (kappa = 0.947, P < 0.001). The consistency rate between ambulatory and pharmacy measurements was 82.5% (kappa = 0.634, P < 0.001). When compared with ambulatory measurement, the most sensitive (98.0%) and most specific (96.8%) method was home measurement. There was a moderate positive correlation between ambulatory and other measurements in both systolic and diastolic values. There was a positive and very strong correlation between ambulatory and home measurements of systolic and diastolic ABP values (r = 0.926 and r = 0.968, respectively) and there was a statistically significant relation between these measurements (P < 0.001). Conclusion: The results of all measurement methods were close to each other. The most sensitive and specific method was home measurement when compared with ambulatory measurement. However, office and pharmacy measurements also had high sensitivity and specificity. abstract_id: PUBMED:25600420 Comparison of an in-pharmacy automated blood pressure kiosk to daytime ambulatory blood pressure in hypertensive subjects. The objective of this study was to compare serial readings from an in-pharmacy automated blood pressure (BP) kiosk to mean daytime ambulatory BP. A total of 100 community-dwelling adults with hypertension underwent (1) three baseline automated office readings; (2) three in-pharmacy readings on each of four visits (12 total) using the PharmaSmart PS-2000 kiosk; and (3) 24-hour ambulatory BP monitoring between in-pharmacy visits two and three. Paired t-tests, Bland-Altman plots, and Pearson correlation coefficients were used for analysis. Mean BPs were 137.8 ± 13.7/81.9 ± 12.2 mm Hg for in-pharmacy and 135.5 ± 11.7/79.7 ± 10.0 mm Hg for daytime ambulatory (difference of 2.3 ± 9.5/2.2 ± 6.9 mm Hg [P ≤ .05]). Bland-Altman plots depicted a high degree of BP variability but did not show clinically important systematic BP differences. With ambulatory BP as the reference standard, in-pharmacy device results were similar to automated office results. The PharmaSmart PS-2000 closely approximated mean daytime ambulatory BP, supporting the use of serial readings from this device in the assessment of BP. abstract_id: PUBMED:20652831 Clinical value of blood pressure measurement in the community pharmacy.
Aim Of The Study: To investigate whether the measurement of blood pressure in the community pharmacy is a valuable method to diagnose hypertension, to assess the need and the effectiveness of anti-hypertensive treatments, or, in general, to make clinical decisions. Method: Information was extracted from articles published in English and in Spanish, from January 1989 to December 2009, in journals indexed in MEDLINE and EMBASE. To perform the search, multiple and specified terms related to the community pharmacy setting, to blood pressure measurement and to the comparison and agreement between blood pressure measurement methods were used. Selected articles were those that: (1) compared and/or measured the agreement (concordance) between community pharmacy blood pressure measurements obtained on repeated occasions, or (2) compared and/or measured the agreement between the community pharmacy blood pressure measurement method and other measurement methods used in clinical practice for decision-making purposes: blood pressure measurement by a physician, by a nurse and home or ambulatory blood pressure monitoring. Articles were included and analyzed by two investigators independently, who essentially extracted the main results of the manuscripts, emphasizing the assessment of the blood pressure measurement methods used and the completed statistical analysis. Results: Only three studies comparing the community pharmacy blood pressure measurement method with other methods and one comparing repeated measurements of community pharmacy blood pressure were found. Moreover, these works present significant biases and limitations, both in terms of method and statistical analysis, which make it difficult to draw consistent conclusions. Conclusion: Further high-quality research is needed, whose results can guide clinical decision-making based on the community pharmacy blood pressure measurement method. abstract_id: PUBMED:27417699 Principles of Blood Pressure Measurement - Current Techniques, Office vs Ambulatory Blood Pressure Measurement. Blood pressure measurement has a long history and a crucial role in clinical medicine. Manual measurement using a mercury sphygmomanometer and a stethoscope remains the Gold Standard. However, this technique is technically demanding and commonly leads to faulty values. Automatic devices have helped to improve and simplify the technical aspects, but a standardised procedure of obtaining comparable measurements remains problematic and may therefore limit their validity in clinical practice. This underlines the importance of less error-prone measurement methods such as ambulatory or home blood pressure measurements and automated office blood pressure measurements. These techniques may help to uncover patients with otherwise unrecognised or overestimated arterial hypertension. Additionally, these techniques may yield a better prognostic value. abstract_id: PUBMED:29033973 Comparison of Central, Ambulatory, Home and Office Blood Pressure Measurement as Risk Markers for Mild Cognitive Impairment in Hypertensive Patients. Aims: We compared the role of central blood pressure (BP), ambulatory BP monitoring (ABPM), home-measured BP (HMBP) and office BP measurement as risk markers for the development of mild cognitive impairment (MCI). Methods: 70 hypertensive patients on combination medical therapy were studied. Their mean age was 64.97 ± 8.88 years. Eighteen (25.71%) were males and 52 (74.28%) females.
All of the patients underwent full physical examination, laboratory screening, echocardiography, and office, ambulatory, home and central BP measurement. The neuropsychological tests used were: Mini Mental State Examination (MMSE) and Montreal Cognitive Assessment (MoCA). SPSS 19 was used for the statistical analysis with a level of significance of 0.05. Results: The mean central pulse pressure values of patients with MCI were significantly (p = 0.016) higher than those of the patients without MCI. There was a weak negative correlation between central pulse pressure and the results from the MoCA and MMSE (r = -0.283, p = 0.017 and r = -0.241, p = 0.044, respectively). There was a correlation between ABPM and MCI as well as between HMBP and MCI. Conclusions: The correlation of central BP with target organ damage (MCI) is as good as for the other types of measurements of BP (home and ambulatory). Office BP seems to be the poorest marker for the assessment of target organ damage. abstract_id: PUBMED:34163281 The Development of an Educational Video on Blood Pressure Measurement for Pharmacy Students. Introduction: High blood pressure is an important worldwide health issue. Pharmacists can perform multifaceted tasks in hypertension management such as measuring blood pressure. In a time where the use of educational videos in health professions education has increased, an educational video might be an option for teaching blood pressure measurement skills to pharmacy students. This project aimed to develop an educational video tailored to pharmacy students on oscillometric blood pressure measurement in a community pharmacy setting that can be used as a self-instruction video. Methods: The video was created with support from the university's multimedia center. The video development was roughly divided into pre-production, production, and post-production. Students' satisfaction with and perception of the video was surveyed. Results: An 11-minute 33-second self-instruction video in the German language on proper oscillometric blood pressure measurement tailored for pharmacy students was created. Along with descriptive slides, the video delineates the necessary steps of blood pressure measurement in a community pharmacy setting in a role-play, to support students in communication with the patient. Results of a survey on the satisfaction and perception of the video from thirty-seven pharmacy students were included in the analysis and revealed that the video was well accepted by pharmacy students. Moreover, approximately 95% responded that instructional videos should be included in future pharmacy education. Conclusion: We successfully developed an educational video on oscillometric blood pressure measurement for a community pharmacy setting. This work is a valuable form of support for faculty members, who intend to develop educational videos. This might be of interest especially during the coronavirus disease 2019 (COVID-19) pandemic, where distance learning has become highly relevant. abstract_id: PUBMED:18297206 Blood pressure measurements taken by patients are similar to home and ambulatory blood pressure measurements. Objective: To compare blood pressure measurements taken at home by physicians, nurses, and patients with office blood pressure measurement , ambulatory blood pressure monitoring and home blood pressure measurement. Methods: A total of 44 patients seen by a home care program were studied. 
Protocol 1 a) blood pressure was measured by the patient, a physician and a nurse during a regular home visit (Home1); b) home blood pressure measurement was measured for 4 days (HBPM1); c) office blood pressure measurement was measured by a physician, a nurse, and the patient; and by 24-hour ambulatory blood pressure monitoring. Protocol 2 blood pressure was measured by the patient, a physician, and a nurse during a special home visit in the presence of a physician and a nurse only (Home2); and b) home blood pressure measurement was taken for the second time (HBPM2). Echocardiography, guided by a two-dimensional echocardiograph, was performed. Results: Protocol 1: a) office blood pressure measurement and Home1 were significantly higher than ambulatory blood pressure monitoring, except for systolic and diastolic office blood pressure measurement taken by the patient or a family member, systolic blood pressure taken by a nurse, and diastolic blood pressure taken by a physician. b) ambulatory blood pressure monitoring and HBPM1 were similar. Protocol 2: a) HBPM2 and Home2 were similar. b) Home2 was significantly lower than Home1, except for diastolic blood pressure taken by a nurse or the patient. There were significant relationships between: a) diastolic blood pressure measured by the patient and the thickness of the interventricular septum, posterior wall, and left ventricular mass; and b) ambulatory and HBPM2 diastolic and systolic blood pressure taken by a physician (home2) and left ventricular mass. Therefore, the data indicate that home blood pressure measurement and ambulatory blood pressure monitoring had good prognostic values relative to "office measurement." Conclusion: This study showed that the measurement most similar to home blood pressure measurement and ambulatory blood pressure monitoring was blood pressure measured by the patient, and that home blood pressure measurement and ambulatory blood pressure monitoring had good prognostic value relative to "office measurements". abstract_id: PUBMED:22089107 Agreement between community pharmacy, physician's office, and home blood pressure measurement methods: the PALMERA Study. Background: The usefulness of the community pharmacy blood pressure (BP) (CPBP) method in assessing the effectiveness of antihypertensive treatment has not been adequately studied. The aim of this study was to assess the agreement between community pharmacy, home, and physician office BP (POBP) measurement methods in treated hypertensive patients. Methods: BP was measured at the pharmacy (three visits), at home (4 days) and at the physician office (three visits). The Lin correlation-concordance coefficient (CCC) was used to evaluate the quantitative agreement. The qualitative agreement between methods to establish the patient's hypertensive state was evaluated using the κ-coefficient. Using home BP (HBP) monitoring as the reference method, the sensitivity, specificity, positive and negative likelihood ratios of the CPBP and POBP measurement methods were calculated. Results: The study included 70 patients. Agreements were acceptable-moderate between CPBP and HBP (CCC (systolic BP (SBP)/diastolic BP (DBP)) = 0.79/0.66; κ = 0.56), moderate between CPBP and POBP (CCC = 0.57/0.61; κ = 0.35), and moderate-poor (CCC = 0.56/0.49; κ = 0.28) between POBP and HBP. The sensitivity, specificity, positive and negative likelihood ratio for the CPBP and the POBP measurement methods were: 60.7%, 92.9%, 8.5, 0.4 and 75.0%, 54.8%, 1.7, 0.5, respectively. 
Conclusions: In this sample of treated hypertensive patients, the agreement between the community pharmacy and HBP measurement methods was acceptable-moderate and greater than other agreements. The CPBP measurement method was more reliable than the POBP measurement method for detecting the presence of both uncontrolled and controlled BP and could be a good alternative to HBP monitoring when the latter lacks suitability. abstract_id: PUBMED:10347933 Home blood pressure measurement and ambulatory blood pressure measurement versus office blood pressure measurement Purpose: To compare both home blood pressure measurement (HBPM) and ambulatory blood pressure monitoring (ABPM) with office blood pressure measurement (OBP); and also to compare the correlation between HBPM and OBP with LVMI (left ventricular mass index). Methods: Protocol 1--68 hypertensive patients (58 +/- 12 years, 37 females): a) self recorded blood pressure at home in the 7 days; b) recorded the ABPM during 24 hours; and c) the physician recorded blood pressure in the office. Protocol 2-41 hypertensive patients underwent the HBPM, OBP, and BI-dimensional echocardiogram. Results: Protocol 1--OBP (153 +/- 24/96 +/- 13 mmHg) was higher (p < 0.05) than HBPM (133 +/- 18/84 +/- 12 mmHg) and ABPM (137 +/- 17/87 +/- 12 mmHg); Protocol 2--LVMI correlated better with HBPM (r = 0.39/0.49, p < 0.05, systolic and diastolic, respectively) than OBP (r = 0.02/ 0.22, p > 0.05, systolic and diastolic, respectively). Conclusion: This study showed that HBPM has a better correlation with LVMI than OBP. abstract_id: PUBMED:14596357 The Norwegian office-, home-, and ambulatory blood pressure study (NOHA). Objectives: To evaluate the use of ambulatory blood pressure monitoring in comparison with home and office blood pressure in every-day general practice. Methods: In a multicenter survey sixty-two practitioners previously unfamiliar with ambulatory blood pressure monitoring performed a total of 1162 (mean 18 +/- 16; range 3-94) 24-h ambulatory- and home blood pressure measurements in normotensive subjects or patients with untreated or treated hypertension. Results: In subjects with office blood pressure <140/90 mmHg average 24-h ambulatory blood pressure (Tenso 24 monitors) was 126/79 mmHg, home blood pressure (Tenso Plus semiautomatic sphygmomanometers) was 128/ 84 mmHg, and office blood pressure (standard sphygmomanometer) was 130/82 mmHg. Patients with office blood pressure > or = 140/90 mmHg had 24-h ambulatory blood pressure averaging 141/88 mmHg, home blood pressure 149/93 mmHg, and office blood pressure 157/95 mmHg. The difference between office-, home-, and ambulatory blood pressure increased with age. Expected differences were found between the three blood pressure modalities according to indication of examination (borderline -, "white-coat" -, and therapy refractory hypertension). Indices of hypertension related end-organ damage were positively correlated with home and ambulatory blood pressure but not with office blood pressure. Overall the performance of home- and ambulatory blood pressure in this general practice setting was similar to that seen by specialized blood pressure units. Conclusion: This study demonstrates the practical utility of home- and ambulatory blood pressure measurement in every-day general practice. Both methods are easy to introduce and the quality of blood pressure data obtained in every-day general practice are comparable to those obtained in hypertension clinics and trial centers. 
Answer: The comparison of ambulatory blood pressure measurement with home, office, and pharmacy measurements indicates that arterial blood pressure measured at a pharmacy is relatively reliable. According to the study conducted in Ankara, Turkey, there were no statistically significant differences between ambulatory measurement and home, office, and pharmacy measurements. The consistency rate between ambulatory and home measurements was very high at 97.5%, and the consistency rate between ambulatory and pharmacy measurements was also substantial at 82.5%. Home measurement was found to be the most sensitive and specific method when compared with ambulatory measurement, but office and pharmacy measurements also demonstrated high sensitivity and specificity (PUBMED:26303331). Another study comparing an in-pharmacy automated blood pressure kiosk to mean daytime ambulatory blood pressure in hypertensive subjects found that the PharmaSmart PS-2000 kiosk closely approximated mean daytime ambulatory blood pressure, supporting the use of serial readings from this device in the assessment of blood pressure (PUBMED:25600420). However, research has indicated that further high-quality studies are needed to guide clinical decision-making based on community pharmacy blood pressure measurement methods, as some studies present significant biases and limitations (PUBMED:20652831). In general, ambulatory or home blood pressure measurements and automated office blood pressure measurements are considered less error-prone and may help to uncover patients with otherwise unrecognised or overestimated arterial hypertension, potentially yielding a better prognostic value (PUBMED:27417699). The PALMERA Study also showed that the agreement between community pharmacy and home blood pressure measurement methods was acceptable to moderate and greater than other agreements, suggesting that community pharmacy blood pressure measurement could be a good alternative to home blood pressure monitoring when the latter lacks suitability (PUBMED:22089107). In summary, while ambulatory and home blood pressure measurements are often considered more reliable and have a better prognostic value, blood pressure measurements taken at pharmacies are also reliable and can be used as an alternative method for assessing blood pressure, especially when home measurements are not feasible.
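The agreement figures quoted in these abstracts (kappa coefficients, and sensitivity and specificity with ambulatory or home monitoring as the reference) can be computed from paired patient-level classifications. The Python sketch below is a generic, hypothetical illustration of those two calculations, not a re-analysis of any cited study; the example data and the binary "uncontrolled vs. controlled" coding are assumptions.

def cohen_kappa(a, b):
    # Cohen's kappa for two parallel binary classifications (1 = uncontrolled BP, 0 = controlled).
    n = len(a)
    observed = sum(1 for x, y in zip(a, b) if x == y) / n
    p_a1, p_b1 = sum(a) / n, sum(b) / n
    expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (observed - expected) / (1 - expected)

def sens_spec(test, reference):
    # Sensitivity and specificity of `test` against `reference` as the gold standard.
    tp = sum(1 for t, r in zip(test, reference) if t == 1 and r == 1)
    fn = sum(1 for t, r in zip(test, reference) if t == 0 and r == 1)
    tn = sum(1 for t, r in zip(test, reference) if t == 0 and r == 0)
    fp = sum(1 for t, r in zip(test, reference) if t == 1 and r == 0)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical per-patient classifications
ambulatory = [1, 0, 1, 1, 0, 0, 1, 0]
pharmacy   = [1, 0, 1, 0, 0, 0, 1, 1]
print(cohen_kappa(ambulatory, pharmacy))
print(sens_spec(pharmacy, ambulatory))

A kappa near 1 corresponds to the near-perfect agreement reported for home versus ambulatory measurement, while intermediate values correspond to the moderate agreement reported for the pharmacy method.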
Instruction: Is incisional biopsy of melanoma harmful? Abstracts: abstract_id: PUBMED:35233820 Preoperative Ultrasound-Guided Incisional Biopsy Enhances the Pathological Accuracy of Incisional Biopsy of Cutaneous Melanoma: A Prospective Clinical Trial in Chinese Patients. Objectives: To assess the feasibility of preoperative ultrasound (US)-guided incisional biopsy through a prospective controlled clinical trial. Methods: This was a prospective, double-arm, single-center study of Chinese patients. Thirty patients were enrolled in the study. Fourteen patients received incisional biopsies for which the choice of biopsy area relied on a clinical evaluation, and 16 patients received incisional biopsies for which the choice of biopsy area relied on a US-guided evaluation. The following procedure was used in the US-guided incisional biopsy group: 1) clinical and dermoscopic evaluation of skin lesions; 2) US examination; 3) incisional biopsy; 4) surgical excision; and 5) histopathological examination. The same procedure was used in the non-US-guided group, except without the US examination. Results: In the non-US-guided group, the mean tumor thicknesses obtained from incisional biopsy and postoperative histopathological examination were 2.1 and 4.1 mm, respectively. Seven melanomas were underestimated by incisional biopsy, resulting in margins narrower than currently recommended. In the US-guided group, the mean tumor thicknesses obtained from US, incisional biopsy, and postoperative histopathological examination were 3.4, 2.9, and 2.7 mm, respectively. In only 3 melanomas was the tumor thickness of the incisional biopsy less than that of the postoperative histopathological examination, demonstrating that US-guided biopsy obtains the maximum thickness area. Conclusions: Preoperative US-guided incisional biopsy can enhance the pathological accuracy of incisional biopsy, which may allow us to better perform surgical excision with safe peripheral surgical margins. abstract_id: PUBMED:16307945 Is incisional biopsy of melanoma harmful? Background: In the era of sentinel lymph node (SLN) biopsy, there has been concern that manipulation, injection, and massage of intact primary melanomas (after incisional or shave biopsy) could lead to an artifactual increased rate of SLN micrometastases or an actual increased risk of recurrence. The aim of this study was to evaluate the difference in the incidence of SLN metastasis, locoregional recurrence (LRR), disease-free survival (DFS), distant disease-free survival (DDFS), or overall survival (OS) for patients who undergo excisional versus incisional versus shave biopsy. Methods: Analysis of the database from a multicenter prospective randomized study from centers across the United States and Canada. Eligible patients were 18 to 71 years old, with cutaneous melanoma ≥ 1.0 mm Breslow thickness. All patients underwent SLN biopsy using blue dye and radioactive colloid injection. SLNs were evaluated by serial histological sections with S100 immunohistochemistry. Statistical analysis was performed using univariate and multivariate analyses with a significance level of P < .05; survival analysis was performed by the Kaplan-Meier method with the log-rank test. Results: A total of 2,164 patients were evaluated; 382 patients were excluded for lack of biopsy information. Positive SLNs were found in 220 of 1,130 (19.5%), 58 of 281 (20.6%), and 67 of 354 (18.9%) of patients with excisional, incisional, or shave biopsy, respectively (no significant difference).
There were significant differences among the 3 biopsy types in ulceration (P = .018, chi2) and regression (P = .022, chi2); there were no differences in age, gender, Breslow thickness, Clark level, lymphovascular invasion, tumor location, or histologic subtype. Biopsy type did not significantly affect LRR, DFS, DDFS, or OS. Conclusions: The concern that incomplete excision of primary melanomas may result in an increased incidence of SLN micrometastases, artifactual or real, is unfounded. Similarly, there is no evidence that biopsy type adversely affects locoregional or distant recurrence. Although shave biopsy is generally discouraged because it may lead to inaccurate tumor thickness measurements, it does not appear to affect overall patient outcome. abstract_id: PUBMED:12004308 Incisional biopsy and melanoma prognosis. Background: There are many circumstances in clinical practice in which it is helpful to have a definitive diagnosis of melanoma before subjecting a patient to mutilating surgery. Previous studies on the effect of incisional biopsy on melanoma prognosis were conflicting and lacked a matched control group to account for the other prognostic indicators. Objective: We set up this study to investigate the effect of incisional biopsy on melanoma prognosis. Methods: The design was a retrospective case-control study. Data were obtained from the database of the Scottish Melanoma Group; the database was set up in 1979 to collect detailed clinical, pathologic, and follow-up data on all patients diagnosed with melanoma in Scotland. Each incisional case was matched against 2 excision cases controlling for age, sex, site, and Breslow thickness. The main outcome measures were time from initial biopsy to recurrence and to melanoma-related death. Results: Two hundred sixty-five patients who had incisional biopsy before definitive excision of melanoma were included in the study; these were matched with 496 cases of excisional biopsy specimens. Cox's proportional hazard model for survival analysis showed that biopsy type had no significant effect on recurrence (P = .30) or melanoma-related death (P = .34). Conclusions: This study is the largest series on the effect of incisional biopsy on melanoma prognosis to date and the first to include matched controls. Melanoma prognosis is not influenced by incisional biopsy before definitive excision. abstract_id: PUBMED:25332645 Current biopsy practices for suspected melanoma: A survey of family physicians in Southwestern Ontario. Background: Family physicians (FPs) have an opportunity to diagnose pigmented lesions early with a timely biopsy. Objective: To assess current biopsy practices used by FPs in diagnosing melanoma. Methods: A computer-generated random sample of 200 practicing FPs from large and small communities in Southwestern Ontario was identified from the College of Physicians and Surgeons of Ontario physician directory. Paper-based surveys exploring practice setting, basic melanoma knowledge, biopsy practices and referral wait times were mailed using a modified Dillman protocol. Results: The response rate was 50% and respondents reflected the demographic characteristics of FPs nationwide as per the National Physician Survey. Knowledge testing revealed reasonable mean (± SD) scores (3.2±1.03 of 5). Twenty percent of respondents would always perform an excisional biopsy of skin lesions suspicious for melanoma.
The remaining 80% would avoid an excisional biopsy in an aesthetically sensitive area or if there was a risk of failure to close the defect primarily, among other reasons. If an excisional biopsy were not performed, one-half of respondents would perform an incisional biopsy (eg, punch biopsy). In large communities, 24% of patients were not seen by a surgeon within six months when referred without a tissue biopsy, leading to delayed diagnosis. Discussion: Educating and supporting FPs to perform incisional biopsies in cases for which excisional biopsies are inappropriate should result in earlier diagnosis of melanoma. Conclusion: FPs appropriately recognize that excisional biopsies are ideal in melanoma management and one-half will move on to an incisional biopsy when excision is not appropriate. abstract_id: PUBMED:10886267 Fusiform incisional biopsy for pigmented skin lesions. Current recommendations for biopsy of a pigmented skin lesion suspected of being melanoma include complete excision when possible. Because benign pigmented lesions do not warrant excisional biopsy, especially when they are large and/or on cosmetically sensitive sites, incisional biopsy is desirable. For such lesions we recommend a fusiform incisional biopsy. This tissue sampling technique not only yields an excellent cosmetic outcome, but also provides the pathologist with an optimal biopsy specimen in which he/she can assess the symmetry, circumscription, breadth, and depth of the lesion. abstract_id: PUBMED:15858469 Microstaging accuracy after subtotal incisional biopsy of cutaneous melanoma. Background: A significant portion of cutaneous melanoma may remain after subtotal incisional biopsy. The accuracy of microstaging and impact on clinical practice in this scenario are unknown. Objective: Our purpose was to examine microstaging accuracy of an initial incisional biopsy with a significant portion of the clinical lesion remaining (≥50%). Methods: Patients with cutaneous melanoma, diagnosed by incisional biopsy with ≥50% of the lesion remaining, were prospectively evaluated for microstaging accuracy, comparing initial Breslow depth (BD1) to final depth (BD2) after excision of the residual lesion. Impact on prognosis and treatment was also evaluated. Results: Two hundred fifty of 1783 patients (14%) presented with ≥50% residual clinical lesion after incisional biopsy. The mean BD1 was 0.66 mm; the mean BD2, 1.07 mm (P = .001). After complete excision of the residual lesion, upstaging occurred in 21% and 10% became candidates for sentinel node biopsy. Conclusion: An incisional biopsy with ≥50% clinical lesion remaining afterward may be inadequate for accurate microstaging of melanoma. This scenario is relatively uncommon but clinically significant. abstract_id: PUBMED:35298451 Incisional Biopsy Technique Is Associated With Decreased Overall Survival for Cutaneous Melanoma. Background: Previous studies examining melanoma biopsy technique have not demonstrated an effect on overall survival. Objective: To examine overall survival of patients with cutaneous melanoma diagnosed by shave, punch, incisional, or excisional techniques from the National Cancer Database (NCDB). Materials And Methods: Melanoma data from the 2004 to 2016 NCDB data set were analyzed. A Cox proportional hazards model was constructed to assess the risk of 5-year all-cause mortality.
Results: In total, 42,272 cases of melanoma were reviewed, with 27,899 (66%) diagnosed by shave biopsy, 8,823 (20.9%) by punch biopsy, and 5,550 (13.1%) by incisional biopsy. Both the univariate and multivariate analyses demonstrated that tumors diagnosed by incisional biopsy had significantly (p = .001) lower overall 5-year survival compared with shave techniques (hazard ratio [HR] = 1.140, 95% confidence interval [CI] 1.055 to 1.231). We found no difference (p = .109) between shave and punch biopsy techniques (HR 1.062, 95% CI 0.987-1.142) or between punch and incisional techniques (HR 1.074, 95% CI 0.979-1.177, p = .131). Conclusion: Incisional biopsies were associated with decreased overall 5-year survival in the NCDB. No difference was observed between shave and punch biopsy techniques. These findings support current melanoma management guidelines. abstract_id: PUBMED:20541685 Incisional biopsy and melanoma prognosis: Facts and controversies. Facing the increasing number of melanoma patients is the controversial question of whether an incisional biopsy is associated with an unfavorable patient prognosis. Results of nine studies that occurred during the last four decades were reviewed. One of these studies was a large, prospective randomized controlled trial. Evidence from this trial and from most other studies is that incisional biopsies were not associated with an unfavorable prognosis for melanoma patients. Incisional biopsies are currently recommended for the histopathologic diagnosis of large tumors in facial, mucosal, and acral locations. Complete excisional biopsies are the generally recommended standard for melanoma surgery. Incisional biopsies of malignant melanoma do not negatively influence prognosis. Complete excision of primary melanoma is still the recommended standard of care and is a precondition for accurate histopathologic diagnosis. abstract_id: PUBMED:2095182 Effect of incisional biopsy on subsequent histology of melanocytic naevi. We examined the effect of incisional biopsy on the subsequent histology of 15 benign melanocytic naevi. In all cases the histology of the residual lesion within the second specimen was unchanged apart from the presence of a surgical scar, and in general the definitive excision specimen showed a reduction in melanocytes at the dermo-epidermal junction overlying the scar. In no case were there any features suggestive of melanoma. abstract_id: PUBMED:22800550 Biopsy of the pigmented lesions. Although new technologies are becoming available to aid in diagnosis, the skin biopsy continues to be the fundamental tool of the dermatologist to evaluate the nature of a pigmented lesion. There are 3 major techniques for the biopsy of a pigmented lesion: shave biopsy, punch/incisional biopsy, and excisional biopsy. This article discusses when to biopsy a pigmented lesion and reviews the different biopsy techniques, with reference to specific clinical scenarios. Answer: Based on the provided abstracts, incisional biopsy of melanoma does not appear to be harmful in terms of prognosis or overall survival. Several studies have investigated the impact of incisional biopsy on melanoma outcomes, and the evidence suggests that incisional biopsies are not associated with an unfavorable prognosis for melanoma patients (PUBMED:20541685, PUBMED:12004308). A retrospective case-control study showed that the type of biopsy had no significant effect on recurrence or melanoma-related death (PUBMED:12004308). 
Similarly, a multicenter prospective randomized study found no significant difference in the incidence of sentinel lymph node metastasis, locoregional recurrence, disease-free survival, distant disease-free survival, or overall survival among patients who underwent excisional, incisional, or shave biopsy (PUBMED:16307945). However, one study from the National Cancer Database indicated that tumors diagnosed by incisional biopsy had significantly lower overall 5-year survival compared with shave techniques, but no difference was observed between shave and punch biopsy techniques or between punch and incisional techniques (PUBMED:35298451). This finding suggests that while there may be some association with decreased survival, the evidence is not conclusive, and other factors may be at play. Furthermore, a study on preoperative ultrasound-guided incisional biopsy in Chinese patients demonstrated that this technique can enhance the pathological accuracy of incisional biopsy, potentially leading to better surgical excision with safe peripheral surgical margins (PUBMED:35233820). In conclusion, the majority of the evidence suggests that incisional biopsy is not harmful to melanoma patients and does not adversely affect their prognosis. However, it is essential to perform biopsies correctly and consider each patient's unique circumstances to ensure accurate diagnosis and appropriate treatment planning.
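To make the statistical comparison in the multicenter study (PUBMED:16307945) concrete, the sketch below rebuilds the table of sentinel-node-positive versus negative counts from the figures quoted in that abstract and computes a Pearson chi-square statistic by hand. The counts come from the abstract; the test itself and the 5% critical value for 2 degrees of freedom (about 5.99) are standard, and only the layout and variable names are mine.

# SLN-positive and SLN-negative counts by biopsy type, taken from the abstract
counts = {
    "excisional": (220, 1130 - 220),
    "incisional": (58, 281 - 58),
    "shave": (67, 354 - 67),
}

table = [list(v) for v in counts.values()]          # 3 rows x 2 columns
row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]
grand = sum(row_totals)

chi2 = 0.0
for i, row in enumerate(table):
    for j, observed in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand
        chi2 += (observed - expected) ** 2 / expected

# df = (3 - 1) * (2 - 1) = 2; the 5% critical value for 2 df is about 5.99
print(f"chi2 = {chi2:.2f}; significant at 0.05: {chi2 > 5.99}")

With these counts the statistic falls well below the critical value, which matches the abstract's report of no significant difference in sentinel node positivity across biopsy types.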
Instruction: Does preoperative 25-hydroxyvitamin D status significantly affect the calcium kinetics after total thyroidectomy? Abstracts: abstract_id: PUBMED:23532602 Does preoperative 25-hydroxyvitamin D status significantly affect the calcium kinetics after total thyroidectomy? Background: Although previous studies have suggested that low preoperative 25-hydroxyvitamin D (25-OHD) is a risk factor for hypocalcemia after total thyroidectomy, the impact of preoperative 25-OHD on calcium (Ca)/parathyroid hormone (PTH) kinetics in the immediate postoperative period remains unclear. The study compared the postoperative Ca/PTH kinetics between different preoperative 25-OHD levels. Patients: A total of 281 patients who underwent a total thyroidectomy were analyzed. Serum Ca was measured preoperatively, within 1 h after surgery (Ca-D0), and on the following morning (Ca-D1). Preoperative 25-OHD was also measured after overnight fasting, while postoperative PTH was checked at skin closure on day 0 (PTH-D0) and on the following morning on day 1 (PTH-D1). The Ca/PTH kinetics were compared between three groups (group I: preoperative 25-OHD < 10 ng/mL; group II: 25-OHD = 10-20 ng/mL; group III: 25-OHD > 20 ng/mL). Results: Group I had significantly lower preoperative Ca (p = 0.016) and Ca-D0 (p = 0.036) but higher PTH-D1 (p = 0.015) than groups II and III. PTH-D0, Ca-D1, and the rate of clinically significant hypocalcemia were similar in the three groups. Group I had a significantly smaller Ca drop (-0.02 vs. 0.01 and 0.02 mmol/L, p = 0.011) and a tendency for a significantly smaller PTH drop (0.4 vs. 0.5 and 1.0 pmol/L, p = 0.073) than groups II and III. PTH-D1 (OR = 1.550) and 25-OHD (OR = 0.958) were independent factors for Ca drop from day 0 to day 1. Conclusions: Although group I began with lower serum Ca, those patients tended to have a greater PTH response to Ca drop and so preoperative 25-OHD did not significantly affect the overall Ca kinetics from preoperative to day 1. abstract_id: PUBMED:36542113 The role of preoperative vitamin D and calcium in preventing post-thyroidectomy hypocalcaemia: a systematic review. Background: The thyroid gland performs an important endocrine function, regulating metabolism, growth and development. A variety of benign and malignant conditions necessitate surgical removal of the thyroid gland, either in its entirety, or partially. Thyroidectomy is the most commonly undertaken endocrine surgery. Hypocalcaemia, occurring at a rate of approximately 25%, is the most common complication following total thyroidectomy. Hypocalcaemia can increase healthcare costs by requiring further investigations and treatment and by delaying discharge. Severe hypocalcaemia can be potentially life threatening. It has been demonstrated that routine post-operative supplementation with Vitamin D and calcium can reduce post-operative hypocalcaemia; at present it is unclear whether preoperative supplementation has a similar effect. Methods: This systematic review includes randomised controlled trials on human adult subjects, who received preoperative Vitamin D and/or calcium, and which studied the effect on post-operative hypocalcaemia following total or near total thyroidectomy. There were no restrictions in respect of post-operative supplementation regimen. Studies were limited to those published in English. Results: Seven of the eleven trials demonstrated a reduction in either laboratory or clinical hypocalcaemia with preoperative supplementation.
This benefit was most likely to be seen when postoperative supplementation was performed on an as-needed basis. IV calcium was required to treat resistant or severe hypocalcaemia in five of the eleven trials; in each of these five trials, rates of IV replacement were lower in the preoperative supplementation group. Conclusion: The use of preoperative Vitamin D, with or without calcium, may provide a modest reduction in the incidence of laboratory and clinical hypocalcaemia following total or near total thyroidectomy. There is a trend toward a reduced requirement for IV calcium with preoperative supplementation. Larger robust randomised trials are needed to provide a definitive answer. Preoperative supplementation can be considered in patients undergoing total or near total thyroidectomy. abstract_id: PUBMED:38259695 Preoperative Vitamin.D Status and Post-Total Thyroidectomy Hypocalcemia. Introduction: Hypocalcemia is a common complication of total thyroidectomy (TT). This study was designed to investigate the effect of preoperative vitamin D (Vit.D) status on the occurrence of post-total thyroidectomy hypocalcemia. Materials And Methods: Patients who underwent TT without parathyroidectomy were divided into three groups based on their preoperative serum Vit.D levels (<20 ng/ml, 20-30 ng/ml, and ≥30 ng/ml were considered deficient, insufficient, and normal Vit.D levels, respectively). Serum levels of calcium and phosphorus were measured before and 24 hours after surgery in all patients. The patients were examined for clinical symptoms and signs of hypocalcemia postoperatively. In cases with positive clinical symptoms and signs of hypocalcemia and/or calcium levels <8 mg/dl, PTH level was measured before starting calcium infusion, while serum calcium and phosphorus levels were also measured 24 hours later. Results: Among 100 patients enrolled in this study, 81% were females. The mean age was 36.60±8.32 years. Before surgery, the mean Vit.D level was 26.9±16.89 ng/ml, while 47% of cases had normal Vit.D level, 32% had insufficient vitamin levels, and 21% had Vit.D deficiency. Twenty-four hours after surgery, the calcium (P=0.356) and phosphorus (P=0.743) levels were not significantly different between the three Vit.D groups. A comparison of postoperative PTH levels between the three Vit.D groups showed no significant difference (P=0.596). Conclusions: Based on our findings, preoperative serum Vit.D levels did not affect postoperative serum calcium levels. abstract_id: PUBMED:28418509 Association of Preoperative Calcium and Calcitriol Therapy With Postoperative Hypocalcemia After Total Thyroidectomy. Importance: Hypocalcemia is the most common complication after total thyroidectomy and can result in prolonged hospital admissions and increased hospital charges. Objective: To determine the effectiveness of preoperative calcium and calcitriol supplementation in reducing hypocalcemia following total thyroidectomy. Design, Setting, And Participants: A retrospective cohort study was conducted at a tertiary care center in 65 patients undergoing total thyroidectomy by a single surgeon. Patients were divided into 2 groups: those receiving preoperative as well as postoperative supplementation with calcium carbonate, 1000 to 1500 mg, 3 times daily and calcitriol, 0.25 to 0.5 µg, twice daily, and those receiving only postoperative supplementation with those agents at the same dosages.
Data on patients who underwent surgery between January 1, 2008, and December 31, 2011, were acquired, and data analyses were conducted from March through June 2012, and from October through December 2016. Interventions: Calcium and calcitriol therapy. Main Outcomes And Measures: Postoperative serum calcium levels and development of postoperative hypocalcemia. Results: Of the 65 patients who underwent total thyroidectomy 27 (42%) were men; mean (SD) age was 49.7 (16.7) years. Thirty-three patients received preoperative calcium and calcitriol supplementation, and 32 patients received only postoperative therapy. In the preoperative supplementation group, 15 of 33 (45%) patients underwent complete central compartment neck dissection and 11 of 33 (33%) had lateral neck dissection, compared with 16 of 32 (50%) and 12 of 32 (38%), respectively, patients without preoperative supplementation. The mean measured serum calcium level in those without preoperative supplementation vs those with supplementation are as follows: preoperative, 9.6 vs 9.4 mg/dL (absolute difference, 0.16; 95% CI, -0.12 to 0.49 mg/dL); 12 hours postoperative, 8.3 vs 8.6 mg/dL (absolute difference, -0.30; 95% CI, -0.63 to 0.02 mg/dL); and 24 hours postoperative, 8.4 vs 8.5 mg/dL (absolute difference, -0.13; 95% CI, -0.43 to 0.16 mg/dL). In patients not receiving preoperative supplementation, 5 of 32 (16%) individuals became symptomatically hypocalcemic vs 2 of 33 (6%) in the preoperative supplementation group; an absolute difference of 10% (95% CI, -6.6% to 26.3%). Compared with the group not receiving preoperative supplementation, the mean [SD] length of stay was significantly shorter in the preoperative supplementation group (3.8 [1.8] vs 2.9 [1.4] days; absolute difference, -0.9; 95% CI, -1.70 to -0.105 days). Preoperative supplementation resulted in an estimated $2819 savings in charges per patient undergoing total thyroidectomy. Conclusions And Relevance: Preoperative calcium and calcitriol supplementation, in addition to routine postoperative supplementation, was associated with a reduced incidence of symptomatic hypocalcemia, length of hospital stay, and overall charges following total thyroidectomy. abstract_id: PUBMED:36091150 Preoperative supplementation of calcitriol and calcium relieves symptom and extent of hypocalcemia in patients undergoing total thyroidectomy and bilateral central compartment neck dissection: A prospective, randomized, open-label, parallel-controlled clinical study. Background: Hypocalcemia is the most common complication that challenges surgeons performing total thyroidectomy. Conventional postoperative calcium and calcitriol supplement has been reportedly effective; however, a time lag has been reported before taking effect. Therefore, the role of preoperative strategy is yet to be determined. Study Design: In this prospective, randomized, open-label, parallel-controlled phase II clinical study (registration number: ChiCTR2200059815), a short-term preoperative administration of calcitriol and calcium was proposed in 210 patients undergoing total thyroidectomy and bilateral central compartment neck dissection. Patients were recruited and randomized (1:1:1) into three groups: (A) combined (preoperative calcitriol and calcium), (B) calcium only (preoperative calcium only), and (C) control (no preoperative intervention). Finally, a total of 172 patients were qualified for final analysis. 
Results: Our data showed that 16 of 63 patients (25.4%) in the combined group had symptomatic hypocalcemia, whereas more patients from the control group (25 of 57 patients, 43.9%, P = 0.033) had symptomatic hypocalcemia. Further, the postoperative calcium level in the combined group was higher than in the control group (2.15 ± 0.15 vs. 2.09 ± 0.15 mmol/L, P = 0.031). Moreover, patients from the combined group showed lower rates of calcium <2.00 mmol/L (12.7% vs. 28.1%, P = 0.036). Remarkably, compared with the control group, patients with transient hypoparathyroidism in the combined group showed lower rates of both symptomatic and biochemical hypocalcemia (28.6% vs. 61.1% for symptomatic hypocalcemia; 47.6% vs. 75% for biochemical hypocalcemia). Patients without transient hypoparathyroidism in all three groups showed no significant difference in rates for either symptomatic or biochemical hypocalcemia, indicating that this preoperative strategy is only effective for patients with transient hypoparathyroidism. We did not observe such beneficial effects in patients from the calcium group. Conclusions: Preoperative administration of calcitriol and calcium could reduce symptomatic and biochemical hypocalcemia, especially for those with transient hypoparathyroidism. Moreover, this maneuver could be recommended as a clinical routine in patients undergoing total thyroidectomy and bilateral central compartment neck dissection. Clinical Trial Registration: http://www.chictr.org.cn/edit.aspx?pid=164316&htm=4, identifier ChiCTR2200059815. abstract_id: PUBMED:32466854 Preoperative Vitamin D Supplementation in Patients With Vitamin D Deficiency Undergoing Total Thyroidectomy. Background: Transient hypocalcemia due to parathyroid gland or vessel manipulation is a common complication following thyroidectomy. Considering the role of 25-hydroxyvitamin D (25(OH)D) in calcium homeostasis, this study aimed to evaluate the effect of preoperative vitamin D supplementation on hypocalcemia incidence in thyroidectomy patients. Methods: In this randomized clinical trial, 100 patients scheduled for total thyroidectomy and suffering from preoperative moderate or severe vitamin D deficiency were enrolled. Patients were randomly allocated to either study or control groups using the sealed envelope method. Patients in the study group received vitamin D3 50,000-unit pearl weekly for 4 weeks prior to the operation. The control group received placebo. Total and ionized serum calcium levels were checked before surgery, the day after surgery, and 2 weeks postoperatively. Results: No significant difference was observed in terms of demographic data. During serial total calcium checks (5 episodes), total calcium levels changed significantly in patients who had received vitamin D supplements compared to the control group (P = 0.043). Symptomatic hypocalcemia incidence was significantly lower in patients supplemented with 25-hydroxyvitamin D (25(OH)D) (P = 0.04). Also, the requirement for intravenous calcium administration in order to treat the hypocalcemia symptoms was significantly lower in the study group in comparison to the control group (P = 0.03). Conclusions: Vitamin D supplementation in patients with vitamin D deficiency might lead to a lower incidence of early-onset symptomatic hypocalcemia; hence, requiring less calcium supplementation for the management of hypocalcemia. abstract_id: PUBMED:9395723 Estimation of free calcium levels after thyroidectomy.
Total calcium is routinely measured after thyroidectomy in a clinical setting, while the measurement or calculation of the free calcium level is not generally performed. We reviewed total and free calcium levels in patients who underwent lobectomy (n = 15), subtotal thyroidectomy (n = 15) and total thyroidectomy (n = 15). Postoperative total calcium levels decreased significantly in comparison to preoperative levels in all thyroidectomies (p < 0.01), and this fall was significantly related to the extent of surgery (p < 0.01). In contrast, there was no significant difference between preoperative and postoperative free calcium levels in patients undergoing lobectomy, although we found a decrease in free calcium levels after both subtotal and total thyroidectomy. Total protein levels decreased regardless of the type of operation. Serum total calcium levels were thought to be altered by serum protein levels through the change of protein-bound calcium levels. When examined for free calcium levels, some patients were administered unnecessary calcium supplementation because hypocalcemia had been judged from the total calcium level. Since the wrong diagnosis may be given with regard to hypoparathyroidism by measurement of total calcium levels alone, we propose that free calcium levels should be routinely measured or calculated after thyroidectomy. abstract_id: PUBMED:12402007 Predicting calcium status post thyroidectomy with early calcium levels. Objective: The study goals were to predict postoperative normocalcemia and hypocalcemia after total thyroidectomy using calcium levels and to assess the value of a standardized protocol in managing the total thyroidectomy patient. Study Design: We conducted a prospective study encompassing 68 patients undergoing a total thyroidectomy using a standardized protocol. Blood to measure postoperative calcium levels was drawn at 6, 12, and 20 hours and then twice daily thereafter. Calcium slope was calculated from the 6- and 12-hour serum corrected calcium levels. Results: Logistic regression analysis allowed the comparison of the 6- and 12-hour calcium slope versus proportion of normocalcemic patients postoperatively. A slope of +0.02 had a 97% chance of remaining normocalcemic (p = 0.0007). Conclusion: Successful prediction of calcium status post total thyroidectomy can be achieved using the slope of the 6- and 12-hour calcium levels. The risk of developing severe hypocalcemia can also be predicted with these slope values. Implementation of the protocol resulted in a significant reduction in the duration of hospital stay for patients who remain normocalcemic. abstract_id: PUBMED:31281762 Role of Preoperative Administration of Vitamin D and Calcium in Postoperative Transient Hypocalcemia after Total Thyroidectomy. Introduction Thyroid surgery is one of the most frequently performed surgical procedures worldwide. Total thyroidectomy is a recommended procedure for most of the thyroid diseases. The most common complication resulting after this surgery is transient hypocalcemia - the incidence is 24% - which increases the morbidity rate and increases the length of stay in the hospital. The objective of our study was to compare the frequency of transient hypocalcemia after vitamin D and calcium supplementation with the control group for patients undergoing total thyroidectomy. 
Patients and methods It was a randomized controlled trial conducted at the Department of Surgery, Pakistan Atomic Energy Commission General Hospital, Islamabad, Pakistan, from January 2017 to July 2017. A total of 92 patients of both genders undergoing total thyroidectomy were included in the study. Patients undergoing reoperation for thyroid disease, those with American Society of Anesthesiologists (ASA) grade 3 or above, chronic renal failure, or preoperative hypocalcemia or hypercalcemia were excluded. The patients were sorted into two groups by lottery method: in Group 1, vitamin D (200,000 IU) and calcium (1 g) were given 24 hours preoperatively; Group 2 was the control group. Total thyroidectomy was done and serum calcium levels were evaluated immediately after surgery and on days 2, 7, and 30. The final outcome was measured at one month. Data were analyzed with the Statistical Package for Social Sciences version 22.0 (IBM Corp, Armonk, NY, USA). A P value ≤ 0.05 was considered significant. Results The age ranged from 18 to 65 years, with a mean age of 38.673 ± 8.63 years in group 1 and 41.217 ± 9.52 years in group 2; the mean preoperative calcium level was 9.482 ± 0.49 mg/dl in group 1 and 9.678 ± 0.54 mg/dl in group 2. Hypocalcemia was seen in 3 patients (6.5%) in group 1 as compared to 12 patients (26.1%) in group 2 (p = 0.011). Conclusion Preoperative oral calcium and vitamin D supplements may prevent postoperative hypocalcemia, allowing a safe and early discharge. This will ultimately lead to improved patient satisfaction and significant cost savings. abstract_id: PUBMED:25475499 Severe vitamin D deficiency: a significant predictor of early hypocalcemia after total thyroidectomy. Objective: To assess the role of preoperative serum 25 hydroxyvitamin D as a predictor of hypocalcemia after total thyroidectomy. Study Design: Retrospective cohort study. Setting: University teaching hospital. Subjects And Methods: All consecutively performed total and completion thyroidectomies from February 2007 to December 2013 were reviewed through a hospital database and patient charts. The relationship between postthyroidectomy laboratory hypocalcemia (serum calcium ≤2 mmol/L), clinical hypocalcemia, and preoperative serum 25 hydroxyvitamin D level was evaluated. Results: Two hundred thirteen patients were analyzed. The incidence of postoperative laboratory and clinical hypocalcemia was 19.7% and 17.8%, respectively. The incidence of laboratory versus clinical hypocalcemia among severely deficient (<25 nmol/L), deficient (<50 nmol/L), insufficient (<75 nmol/L), and sufficient (≥75 nmol/L) serum 25 hydroxyvitamin D levels was 54% versus 33.9%, 10% versus 18%, 2.9% versus 11.6%, and 3.1% versus 0%, respectively. Multiple logistic regression analysis revealed preoperative severe vitamin D deficiency as a significant independent predictor of postoperative hypocalcemia (odds ratio [OR], 7.3; 95% confidence interval [CI], 2.3-22.9; P=.001). Parathyroid hormone level was also found to be an independent predictor of postoperative hypocalcemia (OR, 0.6; 95% CI, 0.5-0.8; P=.002). Conclusion: Postoperative clinical and laboratory hypocalcemia is significantly associated with low levels of serum 25 hydroxyvitamin D. Our findings identify severe vitamin D deficiency (<25 nmol/L) as an independent predictor of postoperative laboratory hypocalcemia. Early identification and management of patients at risk may reduce morbidity and costs.
Answer: The impact of preoperative 25-hydroxyvitamin D (25-OHD) status on calcium kinetics after total thyroidectomy has been investigated in several studies, with varying results. One study found that patients with lower preoperative 25-OHD levels had significantly lower preoperative calcium and immediate postoperative calcium levels, but the overall calcium kinetics from preoperative to day 1 postoperative were not significantly affected by preoperative 25-OHD status. This study suggested that patients with lower 25-OHD levels tended to have a greater parathyroid hormone (PTH) response to the calcium drop, which may help to mitigate the impact on calcium kinetics (PUBMED:23532602). Another study indicated that preoperative vitamin D supplementation in patients with vitamin D deficiency undergoing total thyroidectomy might lead to a lower incidence of early-onset symptomatic hypocalcemia, thus requiring less calcium supplementation for the management of hypocalcemia (PUBMED:32466854). Similarly, a systematic review found that preoperative supplementation with vitamin D and/or calcium may provide a modest reduction in the incidence of laboratory and clinical hypocalcemia following total or near total thyroidectomy (PUBMED:36542113). A prospective, randomized, open-label, parallel-controlled clinical study showed that preoperative administration of calcitriol and calcium could reduce symptomatic and biochemical hypocalcemia, especially for those with transient hypoparathyroidism (PUBMED:36091150). Additionally, a retrospective cohort study found that preoperative calcium and calcitriol supplementation was associated with a reduced incidence of symptomatic hypocalcemia, length of hospital stay, and overall charges following total thyroidectomy (PUBMED:28418509). However, another study concluded that preoperative serum vitamin D levels did not affect postoperative serum calcium levels, suggesting that preoperative vitamin D status may not have a significant impact on post-total thyroidectomy hypocalcemia (PUBMED:38259695). In summary, while some studies suggest that preoperative 25-OHD status may not significantly affect the overall calcium kinetics after total thyroidectomy, others indicate that preoperative supplementation with vitamin D and/or calcium can reduce the incidence of hypocalcemia and its associated complications. The evidence is mixed, and further research may be needed to provide a definitive answer.
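As a concrete illustration of the early calcium-slope triage described in PUBMED:12402007 above, the short sketch below computes the slope of corrected serum calcium between the 6- and 12-hour measurements and flags patients whose trajectory is falling. Only the idea of the slope and the +0.02 figure come from that abstract; the per-hour units, the threshold interpretation, the function names, and the example values are assumptions made purely for illustration.

```python
def calcium_slope(ca_6h: float, ca_12h: float) -> float:
    """Slope of corrected serum calcium between 6 h and 12 h post-thyroidectomy (mmol/L per hour, assumed units)."""
    return (ca_12h - ca_6h) / (12 - 6)


def likely_normocalcemic(ca_6h: float, ca_12h: float, threshold: float = 0.02) -> bool:
    """Flag patients whose calcium trajectory rises at or above the threshold slope.

    PUBMED:12402007 reports that a slope of +0.02 carried a 97% chance of remaining
    normocalcemic; treating that value as a per-hour cut-off is an assumption here.
    """
    return calcium_slope(ca_6h, ca_12h) >= threshold


# Hypothetical patient: corrected calcium falls from 2.10 to 2.04 mmol/L between 6 h and 12 h.
print(calcium_slope(2.10, 2.04))         # -0.01
print(likely_normocalcemic(2.10, 2.04))  # False, so closer monitoring may be warranted
```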
Instruction: Is farm-related job title an adequate surrogate for pesticide exposure in occupational cancer epidemiology? Abstracts: abstract_id: PUBMED:19221114 Is farm-related job title an adequate surrogate for pesticide exposure in occupational cancer epidemiology? Objectives: Accurate assessment of exposure is a key factor in occupational epidemiology but can be problematic, particularly where exposures of interest may be many decades removed from relevant health outcomes. Studies have traditionally relied on crude surrogates of exposure based on job title only, for instance farm-related job title as a surrogate for pesticide exposure. Methods: This analysis was based on data collected in Western Australia in 2000-2001. Using a multivariate regression model, we compared expert-assessed likelihood of pesticide exposure based on detailed, individual-specific questionnaire and job specific module interview information with reported farm-related job titles as a surrogate for pesticide exposure. Results: Most (68.8%) jobs with likely pesticide exposure were farm jobs, but 78.3% of farm jobs were assessed as having no likelihood of pesticide exposure. Likely pesticide exposure was more frequent among jobs on crop farms than on livestock farms. Likely pesticide exposure was also more frequent among jobs commenced in more recent decades and jobs of longer duration. Our results suggest that very little misclassification would have resulted from the inverse assumption that all non-farming jobs are not pesticide exposed since only a very small fraction of non-agricultural jobs were likely to have had pesticide exposure. Conclusions: Classification of all farm jobs as pesticide exposed is likely to substantially over-estimate the number of individuals exposed. Our results also suggest that researchers should pay special attention to farm type, length of service and historical period of employment when assessing the likelihood of pesticide exposure in farming jobs. abstract_id: PUBMED:7807275 Evaluation of exposure data from men and women with the same job title. In the epidemiological approach to occupational cancers, large bodies of data must be analyzed to find rare cases of cancer. The exposure status of workers must therefore be assessed. Inaccuracies will lead to bias toward the null value in certain cases. Job title has often been used as a proxy for exposure status. This study was undertaken to examine content (ie, tasks and activities) associated with job title among men and women in a large Québec municipality. Occupational accident reports were studied for 1589 accidents, and 113 men and women workers were interviewed about job content. Women and men did not seem to have the same accident rates. From interview data, it appeared that women and men with the same job title did not perform the same tasks. Thus, they might have different exposures. The data reported here support caution in using job title to estimate exposure for both genders if the job-exposure matrix has not previously been validated separately by gender. In addition, it may be unwise to adjust relationships between job title and cancer incidence for gender, thus treating gender as a confounder when it may be a proxy for specific exposures. abstract_id: PUBMED:24401793 Use of the Finnish Information System on Occupational Exposure (FINJEM) in epidemiologic, surveillance, and other applications. 
This paper reviews the use of the Finnish Information System on Occupational Exposure (Finnish job-exposure matrix, FINJEM) in different applications in Finland and other countries. We describe and discuss studies on FINJEM and studies utilizing FINJEM in regard to the validity of exposure estimates, occupational epidemiology, hazard surveillance and prevention, the assessment of health risks and the burden of disease, the assessment of exposure trends and future hazards, and the construction of job-exposure matrices (JEMs) in countries other than Finland. FINJEM can be used as an exposure assessment tool in occupational epidemiology, particularly in large register-based studies. It also provides information for hazard surveillance at the national level. It is able to identify occupations with high average exposures to chemical agents and can therefore inform priority setting for prevention. However, it has only limited use at the workplace level due to the variability of exposure between workplaces. The national estimates of exposure and their temporal trends may contribute to the assessment of both the recent and future burden of work-related health outcomes. FINJEM has also proved to be useful in the construction of other national JEMs, for example in the Nordic Occupational Cancer study in the Nordic countries. FINJEM is a quantitative JEM that can serve many purposes, and its comprehensive documentation also makes it potentially useful in countries other than Finland. abstract_id: PUBMED:35211727 Development of a Crosswalk to Translate Italian Occupation Codes to ISCO-68 Codes. In occupational epidemiology, job coding is an important but time-consuming step in assigning exposure. We implemented a tool (i.e. a crosswalk) to translate occupation codes from the Italian classification (ISTAT-CIP-91, n = 6319 five-digit job codes) to the International Standard Classification of Occupations (ISCO-68, n = 1881 five-digit job codes). The former is currently used in Italy for various purposes (e.g. in the National Mesothelioma Registry). The latter has been used in several studies on occupational cancers because it facilitates communication of results to the scientific community and, most importantly, because some job exposure matrices (JEMs) are based on international codes. Three authors created a table containing the crosswalk structure, providing an interpretation for each of the ISTAT-CIP-91 job descriptions and then manually recoding them according to ISCO-68. Two other authors independently revised it. The performance of the final version was assessed by comparison with results obtained by manual ISCO-68 coding performed in two previous case-control studies on asbestos and mesothelioma. More specifically, the automatically obtained ISCO-68 codes were merged with a JEM (DOM-JEM). The resulting individual asbestos exposure estimates (ever versus never exposed) were compared to those originally obtained (using the same DOM-JEM) from manual translation of ISTAT-CIP-91 to ISCO-68 (considered as the 'gold standard'). In the first study, among 159 peritoneal mesothelioma cases (400 job codes), Cohen's kappa was 0.91, sensitivity 0.95, and specificity 0.96. In the second study, among 716 pleural mesothelioma cases and controls (4400 job codes), kappa was 0.86, sensitivity 0.94, and specificity 0.91. Performance was better among women.
For men, performance was lower among cases than among controls (kappa 0.70, sensitivity 0.95, specificity 0.72 versus kappa 0.87, sensitivity 0.97, and specificity 0.92). In conclusion, the proposed tool allowed a rapid translation of thousands of job codes with good to excellent accuracy. The table containing ISTAT-CIP-91 codes and job descriptions and the corresponding ISCO-68 codes and job descriptions is made publicly available and can be freely used for epidemiological analyses in Italy and international collaborations. abstract_id: PUBMED:28262960 Estimation of occupational cosmic radiation exposure among airline personnel: Agreement between a job-exposure matrix, aggregate, and individual dose estimates. Background: Job-exposure matrices (JEM) are used for exposure assessment in occupational studies, but they can involve errors. We assessed agreement between the Nordic Occupational Cancer Studies JEM (NOCCA-JEM) and aggregate and individual dose estimates for cosmic radiation exposure among Finnish airline personnel. Methods: Cumulative cosmic radiation exposure for 5,022 airline crew members was compared between a JEM and aggregate and individual dose estimates. Results: The NOCCA-JEM underestimated individual doses. Intraclass correlation coefficient was 0.37, proportion of agreement 64%, kappa 0.46 compared with individual doses. Higher agreement was achieved with aggregate dose estimates, that is annual medians of individual doses and estimates adjusted for heliocentric potentials. Conclusions: The substantial disagreement between NOCCA-JEM and individual dose estimates of cosmic radiation may lead to exposure misclassification and biased risk estimates in epidemiological studies. Using aggregate data may provide improved estimates. Am. J. Ind. Med. 60:386-393, 2017. © 2017 Wiley Periodicals, Inc. abstract_id: PUBMED:7807274 Exposure assessment and gender differences. Exposure assessments for occupational epidemiological studies are typically conducted to (1) establish risk gradients with exposure, evaluating a potential causal relationship, or (2) estimate exposure-response dosimetry for quantitative risk calculations. Unavailable quantitative exposure data require use of surrogate or qualitative measures. Differences in women's employment patterns may make surrogate measures less reliable, resulting in systematic errors. Exposures associated with traditionally female careers have not been fully evaluated. Occupational cohorts are often defined to include workers with a minimum employment duration or employment for some minimum time in exposure-related jobs, thereby excluding many women workers. Even when included among studied and exposed worker cohorts, women's domestic exposures may confound risk evaluation. Male/female differences in xenobiotic uptake, distribution, kinetics, and metabolism may affect the relationship between external exposure and resulting biologically effective dose. Clinical factors alter the recognition of disease among women workers, confounding risk determination. Recognizing these problems during design and analysis of occupational cancer epidemiology research is essential to develop valid preventive strategies. abstract_id: PUBMED:15864903 National job-exposure matrix in analyses of census-based estimates of occupational cancer risk. 
Objectives: The aim of this study was to increase the understanding of the alternative exposure metrics and analysis methods in studies applying job-exposure matrices in analyses of health outcomes, the association between crystalline silica and cancer being used as an example. Methods: Observed and expected numbers of cancer cases during 1971-1995 among Finns born in 1906-1945 were calculated for 393 occupational categories, as defined in the 1970 population census. According to the Finnish Cancer Registry, there were 43 433 lung and 21 444 prostate cancer cases. The Finnish job-exposure matrix (FINJEM) provided estimates of the proportion of exposed persons and the mean level of exposure among the exposed in each occupation. Results: The most comprehensive exposure metric included period- and age-specific estimates of exposure and an estimate of occupational stability, but also remarkably simpler metrics gave significantly elevated estimates of the risk ratio (RR) between 1.36 and 1.50 for lung cancer for occupations with the highest estimated cumulative silica exposure (> or = 10 mg/m3-years), allowing a lag time of 20 years. It proved important to adjust the risk ratios at least for the socioeconomic status and occupational exposure to asbestos. The risk ratios for prostate cancer were close to 1.0 in every model. Conclusions: The results showed that the FINJEM-based analysis was able to replicate the well-known association between exposure to crystalline silica and lung cancer. The FINJEM-based method gives valid results, and it can be used to analyze large sets of register-based data on health outcomes. abstract_id: PUBMED:37190163 Attributable Fraction of Cancer Related to Occupational Exposure in Italy. Background: Exposure to occupational carcinogens is an important and avoidable cause of cancer. We aimed to provide an evidence-based estimate of the burden of occupation-related cancers in Italy. Methods: The attributable fraction (AF) was calculated based on the counterfactual scenario of no occupational exposure to carcinogens. We included exposures classified as IARC group 1 and with reliable evidence of exposure in Italy. Relative risk estimates for selected cancers and prevalences of exposure were derived from large-scale studies. Except for mesothelioma, a 15-20-year latency period between exposure and cancer was considered. The data on cancer incidence in 2020 and mortality in 2017 in Italy were obtained from the Italian Association of Cancer Registries. Results: The most prevalent exposures were UV radiation (5.8%), diesel exhaust (4.3%), wood dust (2.3%) and silica dust (2.1%). Mesothelioma had the largest AF to occupational carcinogens (86.6%), followed by sinonasal cancer (11.8%) and lung cancer (3.8%). We estimated that 0.9% of cancer cases (N~3500) and 1.6% of cancer deaths (N~2800) were attributable to occupational carcinogens in Italy. Of these, about 60% were attributable to asbestos, 17.5% to diesel exhaust, followed by chromium and silica dust (7% and 5%). Conclusions: Our estimates provide up-to-date quantification of the low, but persistent, burden of occupational cancers in Italy. abstract_id: PUBMED:26511593 Bladder cancer and occupational exposure to diesel and gasoline engine emissions among Canadian men. The International Agency for Research on Cancer has classified diesel exhaust as a carcinogen based on lung cancer evidence; however, few studies have investigated the effect of engine emissions on bladder cancer. 
The purpose of this study was to investigate the association between occupational exposure to diesel and gasoline emissions and bladder cancer in men using data from the Canadian National Enhanced Cancer Surveillance System; a population-based case-control study. This analysis included 658 bladder cancer cases and 1360 controls with information on lifetime occupational histories and a large number of possible cancer risk factors. A job-exposure matrix for engine emissions was supplemented by expert review to assign values for each job across three dimensions of exposure: concentration, frequency, and reliability. Odds ratios (OR) and their corresponding 95% confidence intervals were estimated using logistic regression. Relative to unexposed, men ever exposed to high concentrations of diesel emissions were at an increased risk of bladder cancer (OR = 1.64, 0.87-3.08), but this result was not significant, and those with >10 years of exposure to diesel emissions at high concentrations had a greater than twofold increase in risk (OR = 2.45, 1.04-5.74). Increased risk of bladder cancer was also observed with >30% of work time exposed to gasoline engine emissions (OR = 1.59, 1.04-2.43) relative to the unexposed, but only among men that had never been exposed to diesel emissions. Taken together, our findings support the hypothesis that exposure to high concentrations of diesel engine emissions may increase the risk of bladder cancer. abstract_id: PUBMED:19225948 Construction of job-exposure matrices for the Nordic Occupational Cancer Study (NOCCA). Introduction: The Nordic Occupational Cancer study (NOCCA) is a cohort study based on employed populations in one or more censuses in Denmark, Finland, Iceland, Norway and Sweden. The large size of the cohort allows us to study rare cancers and to identify even small risks by occupation and by specific occupational exposures. This paper describes principles and experiences of the construction of job-exposure matrices (JEMs), an instrument to transform the history of occupational titles into quantitative estimates of exposure to potential carcinogenic substances. Material And Methods: For each Nordic country, a national JEM was constructed by a team of experts on the basis of the Finnish matrix (FINJEM) that has been used in similar national studies since the mid-1990s. Results: The structure of the Nordic JEMs is three-dimensional (over 300 occupations, over 20 agents, 4 periods covering 1945-1994). Exposure is characterised by estimates of the prevalence and level of exposure. Important differences between the Nordic countries were observed for several exposures. Discussion: The selection of priority agent-occupation combinations and the adoption of general principles in the beginning of the work were necessary because of the high number of estimates to be evaluated (over 50 000/country). The selective modification of an existing JEM for use in other countries was a feasible, albeit challenging task, because exposure data and information about the use of chemicals in the past was scanty. As compared to the use of FINJEM for all Nordic countries, the modification process will probably increase the validity of dose-response and risk estimates of occupational cancer which is to be expected soon as the main outcome of the NOCCA project. Answer: No, using a farm-related job title as a surrogate for pesticide exposure in occupational cancer epidemiology is not considered adequate. 
A study based on data collected in Western Australia found that while most jobs with likely pesticide exposure were farm jobs, a significant majority (78.3%) of farm jobs were assessed as having no likelihood of pesticide exposure. This suggests that classifying all farm jobs as pesticide-exposed would substantially overestimate the number of individuals actually exposed to pesticides. The study also highlighted the importance of considering farm type, length of service, and historical period of employment when assessing the likelihood of pesticide exposure in farming jobs (PUBMED:19221114). Additionally, other studies have shown that job titles can be problematic proxies for exposure status, as they may not accurately reflect the tasks and activities performed, which can differ between men and women even within the same job title (PUBMED:7807275). This supports the need for caution in using job titles to estimate exposure without validating the job-exposure matrix (JEM) by gender or other specific factors. Overall, these findings indicate that relying solely on job titles, such as farm-related job titles, can lead to misclassification and potentially biased results in occupational cancer epidemiology. More detailed and individual-specific exposure assessments are necessary for accurate evaluation of pesticide exposure and its association with cancer risk.
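To make the misclassification argument above more concrete, the sketch below computes the agreement metrics that recur in these abstracts (sensitivity, specificity, positive predictive value, and Cohen's kappa) for a surrogate exposure classification, such as "farm job" or an automatically translated job code, judged against a reference assessment such as expert review or manually assigned ISCO-68 codes. The cell counts are hypothetical and were chosen only to mirror the reported pattern that about 68.8% of likely-exposed jobs were farm jobs while about 78.3% of farm jobs were judged unexposed; none of the absolute numbers below appear in the abstracts.

```python
def agreement_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Sensitivity, specificity, PPV and Cohen's kappa for a surrogate vs a reference classification."""
    n = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)  # truly exposed jobs that the surrogate labels exposed
    specificity = tn / (tn + fp)  # truly unexposed jobs that the surrogate labels unexposed
    ppv = tp / (tp + fp)          # surrogate-positive jobs that are truly exposed
    observed = (tp + tn) / n
    # Chance-expected agreement from the marginal totals (standard Cohen's kappa).
    expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (observed - expected) / (1 - expected)
    return {"sensitivity": sensitivity, "specificity": specificity, "ppv": ppv, "kappa": kappa}


# Hypothetical 2x2 table: surrogate = "farm job", reference = expert-assessed pesticide exposure.
print(agreement_metrics(tp=110, fp=390, fn=50, tn=4450))
# sensitivity ~0.69 and PPV ~0.22, echoing the pattern reported in PUBMED:19221114
```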
Instruction: Do Independent Sector Treatment Centres (ISTC) impact on specialist registrar training in primary hip and knee arthroplasty? Abstracts: abstract_id: PUBMED:18765029 Do Independent Sector Treatment Centres (ISTC) impact on specialist registrar training in primary hip and knee arthroplasty? Introduction: The introduction of Modernising Medical Careers (MMC) is likely to reduce specialist registrar (SpR) operative experience during higher surgical training (HST). A further negative impact on training by local Independent Sector Treatment Centres (ISTCs) could reduce experience, and thus competence, in primary joint arthroplasty at completion of higher surgical training. Patients And Methods: Retrospective case note and radiograph analysis of patients receiving primary hip and knee arthroplasty in a teaching hospital, before and after the establishment of a local ISTC. Patients and operative details were recorded from the selected case notes. Corresponding radiographs were assessed and the severity of the disease process assessed. Results: Fewer primary hip and knee replacements were performed by SpRs in the time period after the establishment of an ISTC. Conclusions: ISTCs may adversely affect SpR training in primary joint arthroplasty. abstract_id: PUBMED:36250556 Future demand for primary hip and knee arthroplasty in Scotland. Introduction: The rising prevalence of osteoarthritis, associated with an ageing population, is expected to deliver increasing demand for arthroplasty services in the future. Understanding the scale of potential change is essential to ensure adequate provision of services and prevent prolonged waiting times that can cause patient harm. Methods: We set out to provide projections of future primary knee and hip arthroplasty out to 2038 utilising historical trend data (2008-2018) from the Scottish Arthroplasty Project. All analyses were performed using the Holt's exponential smoothing projection method with the forecast package in R statistics. Results were adjusted for projected future population estimates provided by National Records of Scotland. Independent age group predictions were also performed. Results: The predicted rise of primary hip arthroplasty for all ages is from 120/100k/year in 2018 to 152/100k/year in 2038, a 28% increase. The predicted rise of primary knee arthroplasty for all ages is from 164/100k/year in 2018 to 220/100k/year in 2038, a 34% increase. Based on a static 3-day length of stay average this would see 4280 and 7392 additional patient bed days required for primary hip and knee arthroplasty patients respectively per annum. The associated additional cost is anticipated to be approximately £26 million. Conclusions: Anticipated future demand for arthroplasty will require significant additional resource and funding to prevent deterioration in quality of care and an increase in patient wait times, additional to that already required to clear the COVID-19 backlog. Understanding presented projections of changes to arthroplasty demand is key to future service delivery. abstract_id: PUBMED:17482710 Physical training in rehabilitation programs before and after total hip and knee arthroplasty. Objective: A review of the literature to investigate physical training in rehabilitation programs before and after hip and knee arthroplasty. 
Methods: We performed a literature search of the MEDLINE and Cochrane databases from 1966 to 2006 using 8 keywords for literature reviews or randomized controlled trials investigating physical training before and after hip and knee arthroplasty. Results And Discussion: The search yielded 14 articles: 2 literature reviews, 7 studies concerning total hip arthroplasty, and 5 concerning knee arthroplasty. Results were difficult to analyze because of the low number of patients included, a high number of dropouts, no matched control populations, different physical training protocols, and the use of functional scores or inadequate parameters. We found no randomized controlled trial concerning physical training after knee arthroplasty. Conclusion: Physical training does not seem to benefit patients before hip or knee arthroplasty. However, training may have benefit immediately after, and particularly well after, total hip arthroplasty. abstract_id: PUBMED:37632527 The impact of COVID-19 on hip and knee arthroplasty surgical volume in China. Purpose: The reduction of hip and knee arthroplasty surgical volume has been reported in many countries during the COVID-19 pandemic. In China, there is no national joint registry system and the impact of COVID-19 on surgical volume remains unclear. The aim of this study was to investigate hip and knee arthroplasty surgical volume in China during the pandemic and to evaluate its trends over time. Methods: Annual sales numbers of prostheses used in total knee arthroplasty (TKA), total hip arthroplasty (THA), and femoral head replacement (FHR) from 2011 to 2021 were collected from providers registered with the National Medical Products Administration (NMPA). Annual surgical volume of TKA, THA, FHR, unicompartmental knee arthroplasty (UKA), and revision of hip/knee arthroplasty (RJA) was collected from member hospitals of the Beijing Joint Society (BJS). We used linear regression to estimate the loss of surgical volume. Annual surgical volumes obtained from the British and Australian joint registries were used for comparison. Results: In China, the surgical volume of THA/FHR, TKA, and UKA in 2020 all decreased compared to the predicted value, with reductions of 82,525 cases (13.46%), 165,178 cases (33.50%), and 151 cases (0.65%), respectively. All three procedures showed significant recovery in 2021. The surgical volumes of THA/FHR and UKA were 68,813 and 9402 cases higher than predicted levels, respectively, while TKA volume remained slightly below the predicted level. The regional statistics in Beijing showed a similar pattern of change. In 2020, the surgical volume of THA/FHR, TKA, FHR, and UKA all decreased compared to the predicted value, with reductions of 5031 cases (43.37%), 5290 cases (40.69%), 620 cases (29.18%), and 925 cases (39.11%), respectively. In 2021, with the exception of FHR, the number of these procedures increased compared to 2020, but remained below the predicted value. Compared with the data from Britain and Australia, China experienced a smaller reduction and a faster recovery in the proportion of elderly people (>65 years old) undergoing hip and knee arthroplasty during the COVID-19 pandemic. Conclusion: During the COVID-19 pandemic, although hip and knee arthroplasty surgical volume in China showed a "restoration-recovery" pattern similar to that in other countries, China experienced smaller losses in this field.
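The two volume studies above (the Scottish projection study, PUBMED:36250556, and the Chinese pandemic study, PUBMED:37632527) rest on the same simple idea: fit a trend to historical annual volumes, extrapolate it, and read future demand or the pandemic deficit off the gap between predicted and observed values. The sketch below does this with an ordinary least-squares linear trend, which is the approach the Chinese study describes; the Scottish projections used Holt's exponential smoothing instead, and every number here is invented for illustration.

```python
import numpy as np

# Hypothetical annual procedure counts (thousands), 2011-2019, used as the pre-pandemic training window.
years = np.arange(2011, 2020)
volumes = np.array([310, 330, 355, 372, 395, 410, 432, 455, 470], dtype=float)

# Fit a linear trend and extrapolate to the pandemic year.
slope, intercept = np.polyfit(years, volumes, deg=1)
predicted_2020 = slope * 2020 + intercept
observed_2020 = 340.0  # hypothetical observed volume in 2020

deficit = predicted_2020 - observed_2020
print(f"predicted 2020 volume: {predicted_2020:.0f}k")
print(f"estimated loss: {deficit:.0f}k ({100 * deficit / predicted_2020:.1f}% below trend)")
```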
abstract_id: PUBMED:22310403 Effect of training level on accuracy of digital templating in primary total hip and knee arthroplasty. The use of digital radiography and templating software continues to become more prevalent in orthopedics as the number of total hip arthroplasty (THA) and total knee arthroplasty (TKA) procedures increases every year. The purpose of this study was to evaluate the effect of training level on the accuracy of digital templating for primary THA and TKA. Digital radiographs of 97 patients undergoing primary THA (49 cases) and TKA (48 cases) were retrospectively templated using OrthoView digital planning software (OrthoView LLC, Jacksonville, Florida). Anteroposterior hip and lateral knee radiographs were digitally templated and compared with the actual size of the implants used intraoperatively. An implant sales representative, physician assistant, medical student (J.D.K.), resident (A.R.H.), and fellowship-trained arthroplasty surgeon (B.R.L.) templated all cases independently after a standardized orientation and were blinded to the actual component sizes used for surgery. The medical student, resident, and arthroplasty surgeon retemplated the same 97 cases 1 month later to determine intraobserver reliability. Digital templating was accurate in predicting the correct implant size in 33% of THAs and 54% of TKAs. In 73% of THAs and 92% of TKAs, digital templating was within 1 size of the actual implant used, and in 88% of THAs and 99% of TKAs, templating was within 2 sizes of the final components. In no cases did the templated implant size vary by >3 sizes from the final components. Interobserver reliability for templating THAs and TKAs showed good reliability as measured by intraclass correlation coefficient (ICC) (ICC(THA)=.70; ICC(TKA)=.86). Intraobserver reliability for templating THAs had excellent reliability for the resident and arthroplasty surgeon, with a kappa coefficient (κ) of 0.92, and good reliability for the medical student (κ=0.78). Intraobserver reliability for templating TKAs showed excellent reliability among all examiners (κ=0.90). abstract_id: PUBMED:36571779 The impact of frailty on patient-reported outcomes following hip and knee arthroplasty. Aim: to determine the impact of frailty on patient-reported outcomes following hip and knee arthroplasty. Methods: we used linked primary and secondary care electronic health records. Frailty was assessed using the electronic frailty index (categorised: fit, mild, moderate, severe frailty). We determined the association between frailty category and post-operative Oxford hip/knee score (OHS/OKS) using Tobit regression. We calculated the proportion of patients in each frailty category who achieved the minimally important change (MIC) in OHS (≥8 points) and OKS (≥7 points) and the proportion who reported a successful outcome (hip/knee problems either 'much better' or 'a little better' following surgery). Results: About 42,512 people who had a hip arthroplasty and 49,208 who had a knee arthroplasty contributed data. In a Tobit model adjusted for pre-operative OHS/OKS, age, sex and quintile of index of multiple deprivation, increasing frailty was associated with decreasing post-operative OHS and OKS, respectively, β-coefficient (95% CI) in severely frail versus fit, -6.97 (-7.44, -6.49) and - 5.88 (-6.28, -5.47). The proportion of people who achieved the MIC in OHS and OKS, respectively, decreased from 92 and 86% among fit individuals to 84 and 78% among those with severe frailty. 
Patient-reported success following hip and knee arthroplasty, respectively, decreased from 97 and 93% among fit individuals to 90 and 83% among those with severe frailty. Conclusion: frailty adversely impacts on patient-reported outcomes following hip and knee arthroplasty. However, even among those with severe frailty, the large majority achieved the MIC in OHS/OKS and reported a successful outcome. abstract_id: PUBMED:28244910 Subsequent Total Joint Arthroplasty After Primary Total Knee or Hip Arthroplasty: A 40-Year Population-Based Study. Background: Despite the large increase in total hip arthroplasties and total knee arthroplasties, the incidence and prevalence of additional contralateral or ipsilateral joint arthroplasty are poorly understood. The purpose of this study was to determine the rate of additional joint arthroplasty after a primary total hip arthroplasty or total knee arthroplasty. Methods: This historical cohort study identified population-based cohorts of patients who underwent primary total hip arthroplasty (n = 1,933) or total knee arthroplasty (n = 2,139) between 1969 and 2008. Patients underwent passive follow-up through their medical records beginning with the primary total hip arthroplasty or total knee arthroplasty. We assessed the likelihood of undergoing a subsequent total joint arthroplasty, including simultaneous and staged bilateral procedures. Age, sex, and calendar year were evaluated as potential predictors of subsequent arthroplasty. Results: During a mean follow-up of 12 years after an initial total hip arthroplasty, we observed 422 contralateral total hip arthroplasties (29% at 20 years), 76 contralateral total knee arthroplasties (6% at 10 years), and 32 ipsilateral total knee arthroplasties (2% at 20 years). Younger age was a significant predictor of contralateral total hip arthroplasty (p < 0.0001), but not a predictor of the subsequent risk of total knee arthroplasty. During a mean follow-up of 11 years after an initial total knee arthroplasty, we observed 809 contralateral total knee arthroplasties (45% at 20 years), 31 contralateral total hip arthroplasties (3% at 20 years), and 29 ipsilateral total hip arthroplasties (2% at 20 years). Older age was a significant predictor of ipsilateral or contralateral total hip arthroplasty (p < 0.001). Conclusions: Patients undergoing total hip arthroplasty or total knee arthroplasty can be informed of a 30% to 45% chance of a surgical procedure in a contralateral cognate joint and about a 5% chance of a surgical procedure in noncognate joints within 20 years of initial arthroplasty. Increased risk of contralateral total knee arthroplasty following an initial total hip arthroplasty may be due to gait changes prior to and/or following total hip arthroplasty. The higher prevalence of bilateral total hip arthroplasty in younger patients may result from bilateral disease processes that selectively affect the young hip, such as osteonecrosis, or structural hip problems, such as acetabular dysplasia or femoroacetabular impingement. Level Of Evidence: Therapeutic Level IV. See Instructions for Authors for a complete description of levels of evidence. abstract_id: PUBMED:32301372 Pain and load progression following an early maximal strength training program in total hip- and knee arthroplasty patients. Purpose: Total hip arthroplasty (THA) and total knee arthroplasty (TKA) patients significantly increase muscle strength after maximal strength training (MST) initiated soon after surgery. 
Owing to severe postoperative pain, knee patients are anticipated to be more limited in performing heavy load exercises than hip patients. The aim of the present study was to describe pain and load progression during early MST in THA and TKA patients. Methods: This was an explorative study based on secondary analyses of two randomized controlled trials: 26 THA and 16 TKA patients had their training sessions logged. They trained at 85-90% of their maximal capacity in leg press and in abduction/knee extension of the operated leg (4 × 5 repetitions) for 8-10 weeks, initiated early postoperatively. Results: Knee patients experienced significantly more pain than hip patients during the training sessions (p < 0.03); however, pain before and after training was not different (p > 0.09). All patients significantly increased leg press training load until the last intervention week (p < 0.01). Conclusion: This study demonstrates that TKA patients experience more pain than THA patients during training in an MST program, but pain did not exceed moderate levels during or after training. Pain before and after training was not different. Both groups significantly increased load progression during the intervention. These findings indicate that both THA and TKA patients might perform MST with extensive load progression early after surgery without aggravating pain. The studies were registered at ClinicalTrials.gov. abstract_id: PUBMED:24004242 Evaluating the primary-to-specialist referral system for elective hip and knee arthroplasty. Rationale, Aims And Objectives: Persistently long waiting times for hip and knee total joint arthroplasty (TJA) specialist consultations have been identified as a problem. This study described referral processes and practices, and their impact on the waiting time from referral to consultation for TJA. Methods: A mixed-methods retrospective study incorporating semi-structured interviews, patient chart reviews and observational studies was conducted at three clinic sites in Alberta, Canada. A total of 218 charts were selected for analysis. Standardized definitions were applied to key event dates. Performance measures included waiting times and the percentage of referrals initially accepted. Voluntary (patient-related) and involuntary (health system-related) waiting times were quantified. Results: All three clinics had defined, but differing, referral processing rules. The mean time from referral to consultation ranged from 51 to 139 business days. Choosing a specific surgeon for consultation rather than the next available surgeon lengthened waits by 10-47 business days. Involuntary waiting times accounted for at least 11% of total waiting time. Approximately 40-80% of the total time that TJA patients wait for surgery was spent in the consultation period. Fifty-four per cent of new referrals were initially rejected, prolonging patient waits by 8-46 business days. Conclusions: Our results suggest that variation in referral processing led to increased waiting times for patients. The large proportion of total wait attributable to waiting for a surgical consultation makes failure to measure and evaluate this period a significant omission. Improving referral processes and decreasing variation between clinics would improve patient access to these specialist referrals in Alberta. abstract_id: PUBMED:33803721 Impact of the 1st and 2nd Wave of the COVID-19 Pandemic on Primary or Revision Total Hip and Knee Arthroplasty-A Cross-Sectional Single Center Study.
The aim of this study was to evaluate the number of primary and revision total joint arthroplasties (TJA/rTJA) in 2020 compared to 2019. Specifically, the first and second waves of the COVID-19 pandemic were evaluated, as well as pre-operative COVID-19 testing. A cross-sectional single-center study of our prospectively maintained institutional arthroplasty registry was performed. Both the first and the second COVID-19 wave led to a socioeconomic lockdown in 2020. Performed surgeries, cause of revision, age, gender, and American Society of Anesthesiologists level were analyzed. Preoperative COVID-19 testing was evaluated and nationwide COVID-19 data were compared with those of other countries. In 2020, there was a 16.2% decrease in primary and revision TJAs of the hip and knee compared to 2019. We observed a reduction of 15.8% in primary TJAs and a reduction of 18.6% in rTJAs in 2020 compared to 2019. There was an increase in total hip arthroplasties (THAs) and a decline in total knee arthroplasties (TKAs) from 2019 to 2020. During the first wave, there was an 86% reduction in the number of primary TJAs performed. During the second wave, no changes were observed. This is the first study quantifying the impact of the COVID-19 pandemic on primary and revision TJAs with respect to the first and second waves. Answer: Yes, Independent Sector Treatment Centres (ISTCs) may impact specialist registrar training in primary hip and knee arthroplasty.
Instruction: A randomized trial of platelet transfusions over 30 vs 120 minutes: is there an effect on post-transfusion platelet counts? Abstracts: abstract_id: PUBMED:20398269 Use of platelet rich plasma to treat plantar fasciitis: design of a multi centre randomized controlled trial. Background: If conservative treatment for chronic plantar fasciitis fails, often a corticosteroid injection is given. Corticosteroid injection gives temporarily pain reduction, but no healing. Blood platelets initiate the natural healing rate. GPS(R) gives an eightfold concentrate platelets of patients own blood. Injection of these platelets in the attachment of the fascia to the os calcis might induce a healing rate. Methods And Design: A randomized controlled multi centre trial will be performed. The study population consists of 120 patients of 18 years and older. Patients with chronic plantar fasciitis will be allocated randomly to have a steroid injection or an autologous platelet concentrate injections. Data will be collected before the procedure, 4,8,12,26 weeks and 1 year after the procedure.The main outcome measures of this study are pain and function measured with questionnaires. Conclusion: Recent literature show positive effects for the treatment of tendinosis with autologous platelet injections. The forthcoming trial will compare treatment for chronic plantar fasciitis with a steroid injection versus an autologous platelet injection. Our results will be published as soon as they become available. Trial Registration: Trial Registration Number: http://www.clinicaltrials.gov NCT00758641. abstract_id: PUBMED:23016747 Use of fresh platelet concentrate or lyophilized platelets in thrombocytopenic dogs with clinical signs of hemorrhage: a preliminary trial in 37 dogs. Objective: To examine the safety and feasibility of using lyophilized platelets (LYO) and fresh platelet concentrate (FRESH) in bleeding thrombocytopenic dogs. Design: Preliminary prospective randomized clinical trial. Setting: Two private referral centers and 3 university teaching hospitals. Animals: Thirty-seven dogs with a complaint of hemorrhage associated with thrombocytopenia (platelet count <70 × 10(9) /L [70,000/μL], a hematocrit >15%, and that had received neither vincristine nor platelet-containing transfusions within 72 h of enrollment were studied. Interventions: Animals were randomized to receive LYO or FRESH, dosed according to weight. Physical examination, complete blood counts, and coagulation testing (prothrombin time and activated partial thromboplastin time) were performed at enrollment. Physical examinations were also performed immediately post transfusion, and at 1 and 24 h after transfusion. Complete blood counts were repeated immediately post transfusion and at 24 h. Collected data included bleeding score (BLS), response to transfusion, adverse reactions, hospitalization time, need for additional transfusions, survival to discharge, and 28-d survival. Measurements And Main Results: Twenty-two dogs received LYO and 15 received FRESH. There was no difference between groups in age, weight, BLS, platelet count, white blood cell count, hematocrit, or presence of melena. There was no difference between groups in transfusion reaction rates, the need for additional transfusions, 24-h BLS, hospitalization time, survival to discharge, or 28-d survival. 
Conclusions: Transfusion of LYO was feasible and associated with a low transfusion reaction rate in this limited study of thrombocytopenic canine patients presenting with mild-to-severe hemorrhage. LYO were easy to use and provided storage advantages over FRESH. Further study of this product, including examination of efficacy and platelet life span, is warranted. abstract_id: PUBMED:8086623 Diagnosis of post-transfusion graft-versus-host disease after formalin-fixation. A 72-year-old woman with multiple recurrences of gallbladder cancer was treated by intrahepatic-arterial infusion of doxorubicin using an extracorporeal system of direct hemoperfusion with venovenous bypass. During this treatment, the patient received 600 ml of fresh whole blood and 30 units of platelet concentrate from five unrelated donors. Thereafter, high fever, skin rash over the whole body, and watery diarrhea developed, followed by leukopenia progressing to fatal sepsis. Post-transfusion graft-versus-host disease (PT-GVHD) was suspected on the basis of the clinical manifestations and postmortem pathologic findings. To establish the diagnosis of PT-GVHD, polymerase chain reaction (PCR) amplification of DNA polymorphisms associated with length variation in dinucleotide or trinucleotide microsatellite repeats at the loci of D6S89, the int-2 protooncogene, and human growth factor was performed with each of the different primer sets, using DNA from blood drawn from the patient with clinically established PT-GVHD of donor origin and from formalin-fixed pancreas of recipient origin. Genetic analysis revealed that the patient's circulating lymphocytes had changed from recipient to donor origin. The present finding that formalin-fixed tissue can be used as material of patient origin may contribute to accurate diagnosis of PT-GVHD after autopsy. abstract_id: PUBMED:23416811 Post-transfusion increment of platelet count in thrombocytopenic patients treated with platelet concentrates. Platelet (PLT) transfusion therapy plays an important role in the support of surgical, haematological, oncological and transplant patients. The present study was designed to determine the post-transfusion platelet count increment among thrombocytopenic patients in a Bangladeshi population. This descriptive study was conducted at the Departments of Haematology and Transfusion Medicine, BSMMU, Dhaka. A total of 42 thrombocytopenic patients were randomly assigned to receive a transfusion when their platelet counts were below 10,000 per cubic millimeter or when there was active bleeding. Pre-transfusion and post-transfusion platelet counts were measured in all patients. Out of 42 patients, 26 (61.90%) were male and 16 (38.10%) were female. Leukemia was the most common cause of thrombocytopenia (47.62%). Most of the patients (71.34%) required transfusion of multiple units of platelets, and 12 (28.57%) patients required double units. Before transfusion of platelet concentrate, platelet counts of <30×10⁸/L, 30-80×10⁸/L and >80×10⁸/L were found in 30 (71.43%), 8 (19.05%) and 4 (9.52%) patients, respectively. After transfusion of platelet concentrate, platelet counts of <50×10⁸/L, 50-100×10⁸/L and >100×10⁸/L were found in 30 (71.43%), 8 (19.05%) and 4 (9.52%) patients, respectively. In all patients the post-transfusion platelet count increased, but two or more units of transfusion were needed. abstract_id: PUBMED:24134578 A randomized controlled trial of the efficacy of autologous platelet therapy for the treatment of osteoarthritis in dogs.
Objective: To determine efficacy of a single intra-articular injection of an autologous platelet concentrate for treatment of osteoarthritis in dogs. Design: Randomized, controlled, 2-center clinical trial. Animals: 20 client-owned dogs with osteoarthritis involving a single joint. Procedures: Dogs were randomly assigned to a treatment or control group. In all dogs, severity of lameness and pain was scored by owners with the Hudson visual analog scale and the University of Pennsylvania Canine Brief Pain Inventory, respectively, and peak vertical force (PVF) was determined with a force platform. Dogs in the treatment group were then sedated, and a blood sample (55 mL) was obtained. Platelets were recovered by means of a point-of-use filter and injected intra-articularly within 30 minutes. Control dogs were sedated and given an intra-articular injection of saline (0.9% NaCl) solution. Assessments were repeated 12 weeks after injection of platelets or saline solution. Results: Dogs weighed between 18.3 and 63.9 kg (40.3 and 140.6 lb) and ranged from 1.5 to 8 years old. For control dogs, lameness scores, pain scores, and PVF at week 12 were not significantly different from pretreatment values. In contrast, for dogs that received platelet injections, lameness scores (55% decrease in median score), pain scores (53% decrease in median score), and PVF (12% increase in mean PVF) were significantly improved after 12 weeks, compared with pretreatment values. Conclusions And Clinical Relevance: Results suggested that a single intra-articular injection of autologous platelets resulted in significant improvements at 12 weeks in dogs with osteoarthritis involving a single joint. abstract_id: PUBMED:37286322 Efficacy of autologous plateletpheresis in adult aortic surgery: study protocol for a randomised controlled trial. Introduction: Perioperative coagulopathy is common in patients undergoing aortic surgery, increasing the risk of excessive blood loss and subsequent allogeneic transfusion. Blood conservation has become a vital part of cardiovascular surgery, but measures to protect platelets from destruction by cardiopulmonary bypass (CPB) are still lacking. Autologous platelet concentrate (APC) may have potential benefits for intraoperative blood preservation, but its efficacy has not been studied extensively. This study aims to evaluate the efficacy of APC as a blood conservation technique to reduce blood transfusion in adult aortic surgery. Methods And Analysis: This is a prospective, single-centre, single-blind randomised controlled trial. A total of 344 adult patients undergoing aortic surgery with CPB will be enrolled and randomised to either the APC group or the control group with a 1:1 randomisation ratio. Patients in the APC group will receive autologous plateletpheresis before heparinisation, while those in the control group will not. The primary outcome is the perioperative packed red blood cell (pRBC) transfusion rate. Secondary endpoints include the volume of perioperative pRBC transfusion; drainage volume within 72 hours post-surgery; postoperative coagulation and platelet function; and the incidence of adverse events. Data will be analysed according to the intention-to-treat principle. Ethics And Dissemination: This study was approved by the institutional review board of Fuwai Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College (no. 2022-1806). All procedures included in this study will be performed in adherence to the Helsinki Declaration. 
The results of the trial will be published in an international peer-reviewed journal. Trial Registration Number: Chinese Clinical Trial Register (ChiCTR2200065834). abstract_id: PUBMED:15588315 A randomized trial of washed red blood cell and platelet transfusions in adult acute leukemia [ISRCTN76536440]. BACKGROUND: Platelet transfusion is universally employed in acute leukemia. Platelet concentrate supernatants contain high concentrations of biologic mediators that might impair immunity. We investigated whether washed platelet and red cell transfusions could improve clinical outcomes in adult patients with acute leukemia. METHODS: A pilot randomized trial of washed, leukoreduced ABO identical transfusions versus leukoreduced ABO identical transfusions was conducted in 43 adult patients with acute myeloid or lymphoid leukemia during 1991-94. Primary endpoints to be evaluated were platelet transfusion refractoriness, infectious and bleeding complications and overall survival. RESULTS: There were no significant differences in infectious or major bleeding complications and only one patient required HLA matched platelet transfusions. Minor bleeding was more frequent in the washed, leukoreduced arm of the study. Confirmed transfusion reactions were more frequent in the leukoreduced arm of the study. Overall survival was superior in the washed arm of the study (40% versus 22% at 5 years), but this difference was not statistically significant (p = 0.36). A planned subset analysis of those ≤50 years of age found that those in the washed, leukoreduced arm (n = 12) had a 75% survival at five years compared with 30% in the leukoreduced arm (n = 10) (p = 0.037). CONCLUSION: This study provides the first evidence concerning the safety and efficacy of washed platelets, and also raises the possibility of improved survival. We speculate that transfusion of stored red cell and platelet supernatant may compromise treatment, particularly in younger patients with curable disease. Larger trials will be needed to assess this hypothesis. abstract_id: PUBMED:8582893 Clinical perspectives of platelet transfusions: defining the optimal dose. To halt bleeding in patients with severe thrombocytopenia due to bone marrow failure, it is desirable to achieve a post-transfusion blood platelet count of 40 x 10^9/L by platelet transfusions. Based on calculations of corrected count increments, each 1 x 10^11 platelets transfused will increase the blood platelet count by approximately 10 x 10^9/L per square meter of patient body surface area. Thus, the post-transfusion blood platelet count will be approximately 20 x 10^9/L following transfusion of 3 x 10^11 platelets to a 5 foot, 8 inch patient weighing 170 pounds (2.0 m2), who is bleeding because of a pre-transfusion platelet count of 5 x 10^9/L. The post-transfusion platelet count likely will be even lower in sick patients (sepsis, amphotericin B plus antibiotic therapy, splenomegaly, graft-vs.-host disease, etc.) or if platelets are lost from the unit by leukofiltration before transfusion. Although a dose of 3 x 10^11 platelets is acceptable in a regulatory sense for product quality, it is inadequate to control bleeding in most thrombocytopenic adult patients.
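The dose arithmetic in PUBMED:8582893 above can be written out directly: the expected rise in platelet count is roughly 10 x 10^9/L for every 1 x 10^11 platelets transfused per square metre of body surface area. The sketch below reproduces the worked example from that abstract (3 x 10^11 platelets, 2.0 m2 body surface area, pre-transfusion count 5 x 10^9/L, giving roughly 20 x 10^9/L afterwards); the function name and the optional recovery factor for septic or splenomegalic patients are illustrative assumptions, not part of the abstract.

```python
def expected_post_count(pre_count_e9: float, dose_e11: float, bsa_m2: float,
                        recovery: float = 1.0) -> float:
    """Expected post-transfusion platelet count (x10^9/L).

    Rule of thumb from PUBMED:8582893: each 1 x 10^11 platelets raises the count by
    about 10 x 10^9/L per m^2 of body surface area. `recovery` (<1.0) can be used to
    model the reduced increments seen in sick patients, an assumption added here.
    """
    return pre_count_e9 + recovery * 10.0 * dose_e11 / bsa_m2


# Worked example from the abstract: 3 x 10^11 platelets, 2.0 m^2, pre-transfusion count 5 x 10^9/L.
print(expected_post_count(pre_count_e9=5, dose_e11=3, bsa_m2=2.0))  # 20.0

# Dose needed to reach the 40 x 10^9/L target from the same starting point:
target, pre, bsa = 40, 5, 2.0
print((target - pre) * bsa / 10.0)  # 7.0 (x 10^11 platelets)
```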
Adjusting dose for body size, bleeding patients with a pre-transfusion blood platelet count of < 10 x 10(9)/L and weighing > 120 pounds should receive approximately 6 x 10(11) platelets, those weighing 30 to 120 pounds should receive 3 x 10(11) platelets, and infants weighing < 30 pounds (15 kg) should receive 5-10 ml/kg of platelet concentrate. abstract_id: PUBMED:27277211 Effect of high or low protamine dosing on postoperative bleeding following heparin anticoagulation in cardiac surgery. A randomised clinical trial. While experimental data state that protamine exerts intrinsic anticoagulation effects, protamine is still frequently overdosed for heparin neutralisation during cardiac surgery with cardiopulmonary bypass (CPB). Since comparative studies are lacking, we assessed the influence of two protamine-to-heparin dosing ratios on perioperative haemostasis and bleeding, and hypothesised that protamine overdosing impairs the coagulation status following cardiac surgery. In this open-label, multicentre, single-blinded, randomised controlled trial, patients undergoing on-pump coronary artery bypass graft surgery were assigned to a low (0.8; n=49) or high (1.3; n=47) protamine-to-heparin dosing group. The primary outcome was 24-hour blood loss. Patient haemostasis was monitored using rotational thromboelastometry and a thrombin generation assay. The low protamine-to-heparin dosing ratio group received less protamine (329 ± 95 vs 539 ± 117 mg; p<0.001), while post-protamine activated clotting times were similar among groups. The high dosing group revealed increased intrinsic clotting times (236 ± 74 vs 196 ± 64 s; p=0.006) and the maximum post-protamine thrombin generation was less suppressed in the low dosing group (38 ± 40 % vs 6 ± 9 %; p=0.001). Postoperative blood loss was increased in the high dosing ratio group (615 ml; 95 % CI 500-830 ml vs 470 ml; 95 % CI 420-530 ml; p=0.021) when compared to the low dosing group, respectively. More patients in the high dosing group received fresh frozen plasma (11 % vs 0 %; p=0.02) and platelet concentrate (21 % vs 6 %; p=0.04) compared to the low dosing group. Our study confirms in vitro data that abundant protamine dosing is associated with increased postoperative blood loss and higher transfusion rates in cardiac surgery. abstract_id: PUBMED:23566712 Hydroxyethyl starch 6%, 130/0.4 vs. a balanced crystalloid solution in cardiopulmonary bypass priming: a randomized, prospective study. Background: Since the advent of cardiopulmonary bypass, many efforts have been made to avoid the complications related with it. Any component of the pump participates in occurrence of these adverse events, one of which is the type of prime solution. In this study, we aimed to compare the effects of 6% hydroxyethyl starch 130/0.4 with a commonly used balanced electrolyte solution on postoperative outcomes following coronary bypass surgery. Methods: Two hundred patients undergoing elective coronary bypass surgery were prospectively studied. The patients were randomized into two groups. The first group received a balanced electrolyte solution and the second group received 6% hydroxyethyl starch 130/0.4 as prime solution. The postoperative outcomes of the patients were studied. Results: The mean age of the patients was 61.81 ± 10.12 in the crystalloid group whereas 61.52 ± 9.29 in the HES group. There were 77 male patients in crystalloid group and 74 in HES group. 6% hydroxyethyl starch 130/0.4 did not have any detrimental effects on renal and pulmonary functions.
The intensive care unit stay and postoperative hospital length of stay were shorter in hydroxyethyl starch group (p < 0.05 for each). Hydroxyethyl starch did not increase postoperative blood loss, amount of blood and fresh frozen plasma used, but it decreased platelet concentrate requirement. It did not have any effect on occurrence of post-coronary bypass atrial fibrillation (p > 0.05). Conclusions: 6% hydroxyethyl starch 130/0.4 when used as a prime solution did not adversely affect postoperative outcomes including renal functions and postoperative blood transfusion following coronary bypass surgery. Answer: The provided abstracts do not contain information about a randomized trial comparing the effects of platelet transfusions administered over 30 minutes versus 120 minutes on post-transfusion platelet counts. The abstracts discuss various aspects of platelet transfusion and platelet-rich therapies in different contexts, such as treatment for plantar fasciitis (PUBMED:20398269), use in thrombocytopenic dogs (PUBMED:23016747), diagnosis of post-transfusion graft-versus-host disease (PUBMED:8086623), post-transfusion platelet count increments in thrombocytopenic patients (PUBMED:23416811), treatment of osteoarthritis in dogs (PUBMED:24134578), autologous plateletpheresis in aortic surgery (PUBMED:37286322), washed red blood cell and platelet transfusions in adult acute leukemia (PUBMED:15588315), defining the optimal dose of platelet transfusions (PUBMED:8582893), the effect of protamine dosing on postoperative bleeding (PUBMED:27277211), and the use of hydroxyethyl starch in cardiopulmonary bypass priming (PUBMED:23566712). None of these abstracts specifically address the question of whether the duration of platelet transfusion (30 minutes vs. 120 minutes) has an effect on post-transfusion platelet counts. Therefore, based on the provided abstracts, it is not possible to answer the question. Additional literature searches would be required to find studies that have investigated this specific aspect of platelet transfusion.
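The corrected-count-increment rule of thumb quoted in the PUBMED:8582893 abstract above lends itself to a one-line calculation. The short sketch below (Python; the function name and unit conventions are illustrative assumptions, not taken from the abstract) reproduces the worked example given there: a 3 x 10(11) platelet dose in a 2.0 m2 patient with a pre-transfusion count of 5 x 10(9)/L gives an expected post-transfusion count of roughly 20 x 10(9)/L.

def expected_post_count(pre_count_e9_per_l, dose_e11_platelets, bsa_m2):
    # Rule of thumb from the abstract: each 1 x 10^11 platelets transfused raises
    # the count by about 10 x 10^9/L per square meter of body surface area.
    increment = 10.0 * dose_e11_platelets / bsa_m2
    return pre_count_e9_per_l + increment

# Worked example from the abstract: 3 x 10^11 platelets, 2.0 m2 patient,
# pre-transfusion count 5 x 10^9/L.
print(expected_post_count(5, 3, 2.0))  # -> 20.0 (x 10^9/L)

The same arithmetic is behind the dosing recommendations that follow in that abstract: roughly doubling the dose to 6 x 10(11) platelets is what brings a 2.0 m2 bleeding patient toward the stated 40 x 10(9)/L target.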
Instruction: Elevated day 3 follicle-stimulating hormone in younger women: is gonadotropin stimulation/intrauterine insemination a good option? Abstracts: abstract_id: PUBMED:24487006 Elevated day 3 follicle-stimulating hormone in younger women: is gonadotropin stimulation/intrauterine insemination a good option? Objective: The objective of the study was to determine the fecundity of young women (<35 years) with an elevated day 3 follicle stimulating hormone (FSH) undergoing gonadotropin-stimulation/intrauterine insemination. Study Design: This was a retrospective study. The study was conducted at an academic fertility center. A total of 1396 gonadotropin stimulation/intrauterine insemination cycles from 563 women were stratified by day 3 FSH levels (<10 vs ≥10 U/L) and outcomes were compared. Gonadotropin dose, treatment duration, peak estradiol (E2), number of preovulatory follicles (total, large, and medium size), E2/follicle, endometrial thickness, spontaneous abortion, clinical and multiple pregnancy rates were measured. The statistics included a Student t test, a χ(2) test, regression, and a discrete survival analysis. Results: An elevated day 3 FSH was found in 10.2% of the women, despite favorable age (31.9 ± 2.5 years). Women with a day 3 FSH of 10 U/L or greater, when compared with women with a normal day 3 level, required significantly more medication (1058.9 ± 1106.0 vs 632.7 ± 477.5 IU, P < .0001), were triggered a day earlier (10.6 ± 2.4 vs 11.5 ± 2.9 days, P = .0006) and had E2 levels (on the day of and the day prior to human chorionic gonadotropin administration) that were significantly higher (529.5 ± 244.3 vs 450.0 ± 244.2 and 359.6 ± 141.7 vs 306.8 ± 160.9 pg/mL, respectively, P < .05). Clinical pregnancy rates were comparable among the groups (14.6 vs 14%, respectively, P > .05). Spontaneous abortion and multiple pregnancy rates were higher among women with an FSH of 10 U/L or greater but not significantly so (27.8% vs 12.0%, 22.2% vs 13.8% for FSH ≥10 vs FSH <10 U/L, P > .05). Conclusion: Women younger than 35 years with an elevated day 3 FSH, when treated aggressively with gonadotropins, have pregnancy rates comparable with those of women with a normal baseline FSH. To achieve this outcome, they need higher doses of medication to stimulate the production of a larger preovulatory follicular cohort. abstract_id: PUBMED:26930042 Do younger women with elevated basal follicular stimulating hormone levels undergoing gonadotropin-stimulated intrauterine insemination cycles represent compromised reproductive outcomes? Objective: To compare stimulation characteristics and reproductive outcomes in women with elevated and normal day 3 FSH levels and to evaluate the prognostic significance of day 3 FSH on the reproductive outcomes of gonadotropin-stimulated IUI (GS-IUI) cycles in women <35 years. Study Design: A cross-sectional study was designed. Unexplained infertility patients at the age ≤36 years, who underwent IUI following gonadotropin stimulation (GS), were investigated. From 105 women with a day 3 FSH ≥10 U/L, 170 GS/IUI cycles were assigned to Group EF, whereas a control group (Group NF, normal FSH) was constituted of 170 cycles with day 3 FSH levels <10 U/L. Demographic and stimulation characteristics as well as reproductive outcomes were compared. The primary outcome measure of this study was the biochemical, clinical and ongoing pregnancy rates.
Secondary outcome measures were total gonadotropin dose, duration of gonadotropin stimulation, multiple pregnancy, miscarriage and cycle cancellation rates. Results: β-hCG positivity, clinical and ongoing pregnancy rates did not differ between women with normal and elevated FSH levels (p=0.234, 0.282 and 0.388, respectively). Total gonadotropin dose, multiple pregnancy and miscarriage rates were not significantly different between the groups (p=0.181, 0.652 and 0.415, respectively). Duration of stimulation was significantly longer and cycle cancellation rate was significantly higher in Group EF than in Group NF (p=0.005 and 0.021, respectively). Conclusion: Younger women with elevated day 3 FSH represent comparable reproductive outcomes in GS-IUI cycles to those with normal FSH levels, although they may require longer periods of stimulation and are at higher risk of cycle cancellation. Thus, GS-IUI could be a possible treatment option in this patient group and should not be neglected. abstract_id: PUBMED:26873125 Anti-Müllerian hormone concentrations and antral follicle counts for the prediction of pregnancy outcomes after intrauterine insemination. Objective: To evaluate anti-Müllerian hormone (AMH) concentrations and antral follicle counts (AFCs) in the prediction of pregnancy outcomes after controlled ovarian stimulation among women undergoing intrauterine insemination. Methods: A retrospective study included women with unexplained infertility aged 41 years or younger who attended a fertility clinic in Italy between December 2009 and May 2014. Ovarian stimulation was achieved with recombinant follicle-stimulating hormone or highly purified human menopausal gonadotropin. Receiver operating characteristic curves were generated to predict ongoing pregnancy. The primary outcome was the association between AMH/AFC and ongoing pregnancy, and was assessed by logistic regression. Results: Overall, 276 women were included, of whom 43 (15.6%) achieved ongoing pregnancy. Multivariate analysis showed that women with a serum day-3 concentration of AMH higher than 2.3 ng/mL were more likely to have ongoing pregnancy than were those with a concentration lower than 2.3 ng/mL (odds ratio 5.84, 95% confidence interval 2.38-14.31; P<0.001). No associations were recorded for AFCs. Conclusion: AMH should be used to predict the pregnancy outcome of intrauterine insemination. abstract_id: PUBMED:24639750 Predictive factors influencing pregnancy rates after intrauterine insemination. Background: So far, many studies investigated factors that affect pregnancy rates after intrauterine insemination (IUI). Various investigators have not agreed on the nature and ranking of these criteria. Objective: The aim of this study was to assess the predictive factors for pregnancy rate after controlled ovarian hyperstimulation (COH)/ IUI. Materials And Methods: Retrospective study of all patients undergoing IUI at Zeynep Kamil Gynecologic and Pediatric Training and Research Hospital from January 2006 to December 2009. In total 980 IUI cycles in 569 couples were analyzed. All women in the study underwent ovarian stimulation using gonadotropin and IUI was performed 36 h after triggering ovulation. The primary outcome measure was clinical pregnancy rates.
Predictive factors evaluated were female age, body mass index (BMI), duration of infertility, type of infertility, follicle stimulating hormone (FSH) level and estradiol (E2) on third day of the cycle, number of preovulatory follicles, endometrial thickness, total motile sperm (TMS) count, and ratio of progressive motile sperm. Results: The overall clinical pregnancy rate was 4.7%. Among the predictive factors, after multivariate logistic regression analysis, level of BMI (<25 kg/m²), number of preovulatory follicles (≥2), level of FSH (<9.4 IU/L), level of E2 (<80 pg/ml) and the ratio of progressive motile sperm (>50%) significantly influenced the clinical pregnancy rate. Conclusion: Level of BMI, FSH, estradiol, number of preovulatory follicles and the ratio of progressive motile sperm may determine IUI procedure as optimum treatment model. abstract_id: PUBMED:29681714 Tamoxifen is Better than Low-Dose Clomiphene or Gonadotropins in Women with Thin Endometrium (<7 mm) after Clomiphene in Intrauterine Insemination Cycles: A Prospective Study. Aim: Gonadotropin stimulation is used as the second line of treatment in patients with thin endometrium following clomiphene citrate (CC) administration, which is associated with higher cost, multiple births, and ovarian hyperstimulation syndrome. Tamoxifen (TMX), a selective estrogen receptor modulator, acts as an agonist on the endometrium. The objective of the present study was to compare the efficacy of low-dose CC, TMX, and gonadotropins in women with thin endometrium (<7 mm) following Clomiphene in intrauterine insemination (IUI) cycles. Settings And Design: A prospective observational study between December 2011 and June 2013 was carried out in a tertiary infertility center. Methods: Women (n = 502) undergoing IUI with endometrium <7 mm after 100 mg CC were included in the study and divided into three treatment groups. Women in Group A (n = 182, cycles = 364) received clomiphene (50 mg/day from day 3 to 7), Group B (n = 179, cycles = 342) received TMX (40 mg/day from day 3 to 7), and Group C (n = 141, cycles = 226) received continuous urine-derived follicle-stimulating hormone 75-150 IU from day 3 onward until human chorionic gonadotropin injection. Endometrial thickness (ET), pregnancy rate, and live birth rate were considered as main outcome measures. Statistical Analysis: Multiple comparisons using one-way ANOVA and Schiff's test were performed. Results: Pregnancy and live birth rate were significantly higher (P < 0.004) in TMX and gonadotropin groups compared to clomiphene. The number of follicles in the TMX group was found to be lower (P < 0.001) compared to the other two groups. In polycystic ovary syndrome patients, ovulation induction with TMX resulted in inadequate response in more than half of the cycles. Conclusions: TMX can improve ET and live birth rate in patients with thin endometrium after clomiphene.
Materials And Methods: Four hundred twenty-seven women undergoing IUI following controlled ovarian stimulation with gonadotropins (recombinant follicle-stimulating hormone [r-FSH] 75 IU/day) were randomly divided into two groups. Women in Group I received GnRH antagonist (Cetrorelix 0.25 mg/day) in a multiple dose flexible protocol. Women in Group II received r-FSH alone. Ovulatory trigger was given with human chorionic gonadotropin 5000 IU when dominant follicle was ≥18 mm. IUI was performed within 44-48 h. Both groups received similar luteal phase support. Primary outcome measure was clinical pregnancy rate. The trial was powered to detect an absolute increase in clinical pregnancy rate by 13% from an assumed 20% clinical pregnancy rate in the control group, with an alpha error level of 0.05 and a beta error level of 0.20. Results: Clinical pregnancy rate in Groups I and II was 27.6% (n = 56) and 26.5% (n = 54), respectively (P=0.800). Ongoing pregnancy and multiple pregnancy rates were likewise similar between the groups. Conclusions: Addition of GnRH antagonist to gonadotropin-stimulated IUI cycles results in no significant difference in clinical pregnancy rate. abstract_id: PUBMED:36920100 The importance of infertility duration and follicle size according to pregnancy success in women undergoing ovulation induction with gonadotropins and intrauterine insemination. In this study, we aimed to evaluate the effect of infertility duration and dominant follicle size measured on the day of human chorionic gonadotropin (HCG) administration on pregnancy rates in infertile women undergoing ovulation induction (OI) with gonadotropin (GND). A total of 352 patients aged 20 to 41 years who were diagnosed with unexplained infertility or polycystic ovary syndrome (PCOS) were included in this study. Patients with a history of multifollicular development or a follicle stimulating hormone (FSH) value of more than 12 IU/ml were excluded from the study. The demographic and clinical features of the patients were obtained from the patients' files and hospital automation system and recorded for each woman. Patients were divided into two groups as live birth (group 1, n = 47) and non-live birth groups (group 2, n = 305). There were no statistically significant differences in regard to age, infertility type, follicle stimulating hormone (FSH) level, oestradiol (E2) level, antral follicle count (AFC), cycle characteristics, GND type, number of follicles, E2 level and endometrial thickness on HCG day, total GND dose, dominant follicle size (p > 0.05). Infertility duration in group 1 was 3.5 ± 2.1; in group 2, 4.7 ± 3.9 years. This difference was statistically significant (p = .014). According to this study, live birth rates after GND + IUIs (intrauterine insemination) were significantly affected by the duration of infertility. But the dominant follicle size and endometrial thickness measured on HCG day in GND and IUI cycles did not have a significant effect on pregnancy rates. IMPACT STATEMENT: What is already known on the subject? It is thought that the timing of the ovarian triggering is vital for the success of intrauterine insemination (IUI) treatment. What do the results of this study add? According to our results, live birth rates after GND + IUIs were significantly affected by the duration of infertility. What are the implications of these findings for clinical practice and/or further research?
It is determined that the duration of infertility is significant and patients should be encouraged to begin treatment as soon as possible. abstract_id: PUBMED:10327765 Efficacy of simultaneous use of cryopreserved semen, controlled ovarian hyperstimulation and intrauterine insemination in the treatment of sterility caused by hypergonadotropic azoospermia. The purpose of the present study is to determine the efficacy of an artificial intrauterine insemination program with frozen donor sperm and controlled ovarian hyperstimulation as an alternative therapy for infertility caused by hypergonadotropic azoospermia. Two hundred forty-three insemination cycles with frozen donor sperm were analyzed. Clomiphene citrate, pure FSH, recombinant FSH or human menopausal gonadotropins were utilized for ovulation induction; human chorionic gonadotropin (hCG), 10,000 IU, was administered when one or more dominant follicles with diameter > or = 16 mm were present; intrauterine insemination was performed 36 hours after the hCG injection. The pregnancy rate per cycle was 19.9%, and the cumulative pregnancy rate was 59.3%. It is concluded that intrauterine insemination with frozen donor sperm and ovulation induction is a good alternative for male factor infertility with no available treatment. abstract_id: PUBMED:30930300 Management of ovulation induction and intrauterine insemination in infertile patients with hypogonadotropic hypogonadism. Aim: To investigate the effectiveness of ovulation induction and intrauterine insemination (OI + IUI) in female patients with hypogonadotropic hypogonadism (HH), and to compare the outcomes of different stimulation protocols and cycle characteristics. Material And Methods: The outcomes of OI + IUI treatments in patients with HH diagnosed between 2010 and 2018 were retrospectively evaluated. Cycles using recombinant (rec) luteinizing hormone (LH) or human menopausal gonadotropin (hMG) as LH sources were compared with each other. The cycle characteristics and pregnancy rates of the first cycles were compared with those of the second cycles in patients who underwent 2 or more cycles. Results: Of 104 patients diagnosed with World Health Organization type 1 anovulation, 99 were treated with hMG or rec LH + rec follicle-stimulating hormone (FSH) in a total of 220 cycles. The mean age of the study patients was 27.8 ± 4.6 years (range, 19-39 years). Rec FSH + rec LH was given in 37 cycles, and hMG was used in 183 cycles. The hormone values were as follows: FSH, 1.4 ± 1.6 mIU/mL; LH, 0.7 ± 1.2 mIU/mL; oestradiol, 13 (15.8 ± 12.0) pg/mL; and anti-Müllerian hormone, 2.1 (2.6 ± 1.2) ng/mL. A dominant follicle was observed in 85.7% of the first cycles and in 86.2% of the second cycles. The treatment lasted 17.2 ± 5.0 and 15.5 ± 3.8 days until the human chorionic gonadotropin (hCG) administration day in the first and second cycles, respectively, and the difference was statistically significant (p < 0.05). The cycle cancellation rate was 8.1% (n = 3) in cycles done using rec gonadotropins and 29% (n = 53) in patients stimulated with hMG, and the difference was statistically significant (p < 0.05). The pregnancy rates were 12.7% and 28.3% per cycle and per patient, respectively. The pregnancy rate in hCG-triggered patients (successful stimulation) was 17.1% per cycle in all patients. Conclusion: OI with gonadotropins and IUI is a safe, efficient, and relatively cost-effective treatment option in patients with HH, yielding reasonable pregnancy rates per cycle and per patient.
The use of rec FSH + rec LH facilitates cycle management but does not positively contribute to pregnancy rates and is more expensive than some other feasible options. abstract_id: PUBMED:12066105 Assessing the risk of multiple gestation in gonadotropin intrauterine insemination cycles. Objective: The purpose of this study was to analyze factors for their ability to predict multiple gestation in women who undergo controlled ovarian hyperstimulation with gonadotropins (follicle-stimulating hormone/human menopausal gonadotropin) and intrauterine insemination. Study Design: This was a retrospective analysis of the clinical and laboratory variables that are associated with multiple gestation. Data for 6 variables in 678 cycles of gonadotropin/intrauterine insemination between 1990 and 1999 were analyzed with survival analysis, Cox regression analysis, and multiple logistic regression. Results: There were 99 clinical pregnancies among 678 cycles (14.6% per cycle) in 306 women. Of the 14 women with multiple gestations (14.1% of pregnancies), 11 women had twins, 2 women had triplets, and 1 woman had quadruplets. Age, days of gonadotropin treatment, total dose of gonadotropin, and number of follicles that were >or=15 mm at the time of human chorionic gonadotropin administration were statistically significant predictors of multiple gestation in >or=1 of the statistical models. Conclusion: The risk of multiple gestation with controlled ovarian hyperstimulation/intrauterine insemination in this study was relatively low. In addition to age, several controllable variables that are associated with multiple gestation were identified. Answer: Based on the available abstracts, gonadotropin stimulation/intrauterine insemination (IUI) appears to be a viable option for younger women with elevated day 3 follicle-stimulating hormone (FSH) levels. A retrospective study (PUBMED:24487006) found that women under 35 years with elevated day 3 FSH levels, when treated aggressively with gonadotropins, have pregnancy rates comparable to those of women with normal baseline FSH. However, they required higher doses of medication to stimulate the production of a larger preovulatory follicular cohort. Another study (PUBMED:26930042) also supports the use of gonadotropin-stimulated IUI (GS-IUI) in younger women with elevated day 3 FSH levels, showing comparable reproductive outcomes to those with normal FSH levels, although they may require longer periods of stimulation and are at higher risk of cycle cancellation. Furthermore, a study (PUBMED:26873125) suggests that anti-Müllerian hormone (AMH) concentrations can be used to predict pregnancy outcomes after controlled ovarian stimulation and IUI, indicating that ovarian reserve markers like AMH and day 3 FSH can be important in managing fertility treatments. However, it is important to note that while gonadotropin stimulation/IUI is a treatment option, there are risks associated with it, such as multiple gestations and higher miscarriage rates, although not significantly so (PUBMED:24487006). Additionally, the risk of multiple gestation with controlled ovarian hyperstimulation/IUI is relatively low, but certain controllable variables associated with multiple gestation were identified (PUBMED:12066105). In conclusion, gonadotropin stimulation/IUI can be considered a good option for younger women with elevated day 3 FSH, but it requires careful management and consideration of the associated risks. 
It is also important to consider individual patient factors, such as infertility duration and ovarian reserve markers, to optimize treatment outcomes (PUBMED:36920100).
Instruction: Does positive pressure ventilation increase arginine vasopressin in preterm neonates? Abstracts: abstract_id: PUBMED:36360641 Efficacy and Safety of Vasopressin and Terlipressin in Preterm Neonates: A Systematic Review. Introduction: The use of arginine vasopressin (AVP) and terlipressin to treat hypotension in preterm neonates is increasing. Our aim was to review the available evidence on the efficacy and safety of AVP and terlipressin for use in preterm neonates. Methods: MEDLINE, EMBASE, the Cochrane Central Register of Controlled Trials, Web of Science, and Google Scholar from inception to September 2021 were searched for studies of AVP and terlipressin in the treatment of hypotension of any cause in preterm neonates. Primary outcomes were improvement in end-organ perfusion and mortality. The risk of bias assessment and certainty of the evidence were performed using appropriate tools. Results: Fifteen studies describing the use of AVP (n = 12) or terlipressin (n = 3) among 148 preterm neonates were included. Certainty of the available evidence for the primary outcome of end-organ perfusion was rated as very low. AVP or terlipressin were used to treat 144 and 4 neonates, respectively. Improvement in markers of end-organ perfusion was reported in 143 (99%) neonates treated with AVP and 3 (75%) treated with terlipressin. The mortality rate was 41% (n = 59) and 50% (n = 2) for neonates who received AVP and terlipressin, respectively. Hyponatremia was the most frequently reported adverse event (n = 37, 25%). Conclusion: AVP and terlipressin may improve measured blood pressure values and possibly end-organ perfusion among neonates with refractory hypotension. However, the efficacy-safety balance of these drugs should be assessed on an individual basis and as per the underlying cause. Studies on the optimal dosing, efficacy, and safety of AVP and terlipressin in preterm neonates with variable underlying conditions are critically needed. abstract_id: PUBMED:6360268 Effect of positive pressure breathing on plasma antidiuretic hormone and renal function in dogs. The effect of intermittent mechanical ventilation and positive end-expiratory pressure on plasma antidiuretic hormone concentration was determined in 14 pentobarbital anesthetized dogs. The study was divided into a control period (spontaneous respiration), and two consecutive 30 and 60 min periods after the start of controlled respiration: Group I - intermittent positive pressure breathing (IPPB); Group II - positive end-expiratory pressure (PEEP) with 5 cm H2O. A decrease in urinary flow (36.9%) was observed during end-expiratory pressure breathing. An increase in plasma antidiuretic hormone in group II from 4.5 +/- 2.4 to 24.6 +/- 16.0 pg/ml (P less than 0.01) was associated with a significant reduction of free water clearance from 1.2 +/- 0.6 to 0.3 +/- 0.4 ml/min and an increase of the urine/plasma osmolality ratio (143%, P less than 0.05). The decrease in urinary output and concurrent reduction of urinary sodium excretion also suggest an influence of the fall in glomerular filtration rate and renal plasma flow on renal function. IPPB only reduced total Na+ and K+ excretion. These results indicate that the mechanisms underlying the renal response to positive end-expiratory pressure breathing may be due to an increase in antidiuretic hormone plasma levels leading to a fall in urinary flow and in part to a decrease in sodium excretion. abstract_id: PUBMED:679506 Influence of lower body negative pressure upon arginine vasopressin release.
Lower body negative pressure (9-12 kPa) was applied to ten normal subjects. Large increases in plasma arginine vasopressin concentration occurred only in subjects that experienced syncopal symptoms and developed hypotension. Blood samples obtained from the superior vena cava at 1/2 min intervals during application of negative pressure showed that maximal plasma vasopressin concentrations occurred with hypotension. Chromatography of the presyncopal plasma on Sephadex G-25 gave a large peak which eluted in the position of synthetic arginine vasopressin. abstract_id: PUBMED:8287654 Extracellular volume expansion inhibits antidiuretic hormone increase during positive end-expiratory pressure in conscious dogs. 1. This study in conscious dogs examined the effects of extracellular volume expansion on plasma antidiuretic hormone, atrial natriuretic peptide and aldosterone concentrations, plasma renin activity, and haemodynamic and renal responses during controlled mechanical ventilation with 20 cmH2O positive end-expiratory pressure. 2. Twenty experiments (10 controls, 10 expansion experiments with 0.5 ml min-1 kg-1 body weight of a balanced electrolyte solution given intravenously throughout) were performed in five trained, conscious, tracheotomized dogs over 4 h: first and fourth hour, spontaneous breathing; second and third hour, 20 cmH2O positive end-expiratory pressure. 3. In the control experiments positive end-expiratory pressure increased plasma antidiuretic hormone concentration from 1.4 +/- 0.2 to 10.0 +/- 3.3 pg/ml, plasma aldosterone concentration from 113 +/- 19 to 258 +/- 58 pg/ml and heart rate from 77 +/- 5 to 94 +/- 5 beats/min. Positive end-expiratory pressure did not change plasma atrial natriuretic peptide concentration (55 +/- 5 pg/ml), plasma renin activity (2.6 +/- 0.4 pmol of angiotensin I h-1 ml-1) and mean arterial pressure (103 +/- 3 mmHg). 4. In the expansion experiments, positive end-expiratory pressure did not change plasma antidiuretic hormone concentration (1.1 +/- 0.1 pg/ml), plasma aldosterone concentration (25 +/- 2 pg/ml), plasma atrial natriuretic peptide concentration (82 +/- 8 pg/ml), plasma renin activity (0.8 +/- 0.15 pmol of angiotensin I h-1 ml-1), heart rate (92 +/- 6 beats/min) and mean arterial pressure (111 +/- 4 mmHg). 5. In the control experiments, urine volume, sodium excretion and fractional sodium excretion remained in a low range during positive end-expiratory pressure, whereas potassium excretion increased. (ABSTRACT TRUNCATED AT 250 WORDS)
Results: Central venous pressure (CVP), ADH, and renin with a positive end-expiratory pressure (PEEP) of 15 cm H(2)O and an inspiration/expiration ratio (I:E) of 1:2 revealed significant increases from baseline, whereas CVP, renin, and FESo showed an increase with a PEEP of 15 cm H(2)O and an I:E ratio of 2:1. Urine production significantly decreased with a PEEP of 15 cm H(2)O and an I:E ratio of 2:1. Conclusions: IRV with PEEP induced clear effects on hemodynamics and hormonal responses (renin) as well as a decrease in urine production in patients with healthy renal and pulmonary systems after an operation. However, all parameters apart from FESo and renin remained within the normal range. Whether pathological values are also observed after longer periods of positive pressure ventilation shall be the subject of other studies. abstract_id: PUBMED:2686333 Influence of low- and high-pressure baroreflexes on vasopressin release in humans. Stimulation of left atrial volume receptors in the dog suppresses arginine vasopressin (AVP) release. Whether the same mechanism is valid in nonhuman primates and in humans is unclear. In order to investigate whether changes in low-pressure baroreceptor stimulation influence AVP release in humans, we utilized different experimental models (water immersion, lower body positive and negative pressure, and negative pressure breathing) to induce systematic changes in central venous pressure. In an additional study, carotid baroreceptors were selectively stimulated by neck suction in order to evaluate the role of arterial baroreceptors on AVP regulation. The results from these studies indicate that low-pressure baroreflexes play little or no role in the regulation of AVP release in overnight fluid-restricted humans during isoosmotic changes in central blood volume. It is suggested that high-pressure baroreflexes (probably in concert with other mechanisms) during these circumstances play a more important role and that changes in arterial pulse pressure (PP) and in the PP profile over time (dP/dt) may be more essential than changes in mean arterial pressure in regard to modulation of AVP release in humans. Future studies should focus on the effects of changes in arterial baroreceptor stimulation on AVP release in humans. In this regard, water immersion and selective carotid baroreceptor stimulation by neck suction are promising models.
Our primary outcome will be an (1) improvement of end-organ perfusion after initiation of AVP or terlipressin and (2) mortality prior to discharge. Our secondary outcomes will include (1) major neurosensory abnormality and (2) the occurrence of adverse events. Discussion: The currently available evidence on the efficacy and safety of AVP and terlipressin in preterm neonates is limited. Yet, evidence on the pharmacology of these drugs and the pathophysiology of vasoplegic shock support the biological plausibility for their clinical effectiveness in this population. Therefore, we aim to address this gap concerning the use of vasopressin and terlipressin among critically ill preterm neonates. Trial Registration: This protocol has been submitted for registration to the international database of prospectively registered systematic reviews (PROSPERO, awaiting registration number). abstract_id: PUBMED:2948066 The role of alpha-atrial natriuretic peptide in fluid retention during mechanical ventilation with positive end-expiratory pressure. Although controlled mechanical ventilation (CMV) with positive end-expiratory pressure (PEEP) has a central place in the treatment of acute respiratory failure (ARF), several side effects of this technique have to be faced. CMV with PEEP may induce pulmonary barotrauma, disturbance of cardiac performance, impairment of renal function and fluid retention. Atrial natriuretic peptides (ANP) are released from the atria upon stretching and play a major role in the control of sodium and fluid balance. Therefore it was logical to determine plasma levels of alpha-ANP in CMV. A study performed in 7 patients suffering from ARF suggested that alpha-ANP plasma levels were depressed during PEEP at 15 cm H2O in comparison with PEEP at 0 cm H2O (ZEEP). The decrease in plasma levels of alpha-ANP was evident in samples taken from superior vena cava, right atrium, pulmonary artery and radial artery as well. The decrease in alpha-ANP was associated with a decline in cardiac index, creatinine clearance, urinary output and urinary sodium excretion. Experiments in volume- expanded healthy volunteers also suggest that CMV with PEEP is able to depress plasma levels of alpha-ANP. The reasons behind the decline in release of alpha-ANP may be atrial compression by the distended lungs and the well-known reduction of venous return to the heart. Other possible factors promoting fluid retention during CMV with PEEP are the decrease in cardiac index and glomerular filtration rate, changes in intrarenal distribution of blood flow, and a stimulation of release of antidiuretic hormone via stretch receptors in the left atrium and baroreceptors in aorta and carotid arteries. abstract_id: PUBMED:23477980 Vasopressin for the treatment of neonatal hypotension. Vasopressin (pitressin), also known as arginine vasopressin (AVP), is an antidiuretic hormone formed in the hypothalamus and secreted from the posterior pituitary gland. Various forms of exogenous vasopressin exist and have been used in neonates to treat conditions such as diabetes insipidus. Vasopressin has also been studied on a limited basis for use in the treatment of catecholamine-resistant hypotension in vasodilatory shock. Hypotension is a significant problem resulting in increased morbidity in preterm, septic, and postsurgical neonates. This article will discuss the role of vasopressin and its use as a therapeutic agent in the treatment of hypotension in the neonate. 
abstract_id: PUBMED:16625119 Effects of vasopressin, norepinephrine, and L-arginine on intestinal microcirculation in endotoxemia. Objective: The effects of vasopressin, norepinephrine, and L-arginine alone or combined on intestinal microcirculation were evaluated in the septic mouse by intravital microscopy, with which we measured the erythrocyte flux and velocity in villus tip arterioles and the density of perfused villi. Design: Controlled animal study. Setting: University research laboratory. Subjects: Female BALB/c mice weighing between 18 and 21 g. Interventions: Anesthetized and ventilated mice received at t0 an intravenous injection of Escherichia coli endotoxin (2 mg/kg bolus intravenously), inducing after 1 hr (t60) a decrease in mean arterial blood pressure to 40-50 mm Hg associated with a significant decrease in erythrocyte flux and velocity in villus tip arterioles and in the density of perfused villi. The mice then received a randomly different treatment for endotoxin-induced shock. Treatments consisted in continuous intravenous infusion for 1 hr with either saline (control group), norepinephrine, vasopressin, L-arginine, vasopressin+L-arginine, or norepinephrine+L-arginine. The doses of vasopressors (used alone or combined with L-arginine) were titrated to restore mean arterial pressure to the baseline level. Measurements And Main Results: At the end of the treatment (t120), we observed in the control group further decreases in arteriolar flux and velocity and in the density of perfused villi. In the groups treated by a vasopressor alone, mean arterial pressure returned to baseline and there were no additional decreases in arteriolar flux and velocity or in the density of perfused villi. However, these latter three variables did not return to their preshock baseline values. Even though L-arginine did not restore mean arterial pressure, the infusion of L-arginine alone prevented the decrease in flux or erythrocyte velocity occurring between t60 and t120 and conserved to some extent the density of perfused villi compared with that in the control groups. In addition, we found that simultaneous administration of norepinephrine or vasopressin with L-arginine improved all microcirculation variables more efficiently than either vasopressor alone. Conclusions: From these data, we conclude that a) restoring mean arterial pressure after 1 hr of endotoxemia was not sufficient to restore ad integrum intestinal mucosa microvascular perfusion; b) L-arginine could have a beneficial effect at the microcirculatory level, which was independent of mean arterial pressure; and c) administration of L-arginine combined with the maintenance of perfusion pressure by vasopressive drugs allowed a better preservation of intestinal microcirculation at an early stage of endotoxemia. Answer: The provided abstracts do not directly address the question of whether positive pressure ventilation increases arginine vasopressin in preterm neonates. However, studies on different subjects and under various conditions have shown that positive pressure ventilation can influence arginine vasopressin levels. For instance, in anesthetized dogs, positive end-expiratory pressure (PEEP) was associated with an increase in plasma antidiuretic hormone (ADH, which is arginine vasopressin) concentration, suggesting that positive pressure breathing may influence ADH levels (PUBMED:6360268). 
Another study in conscious dogs showed that extracellular volume expansion inhibited the increase in ADH during positive end-expiratory pressure, indicating that volume status can modulate the hormonal response to positive pressure ventilation (PUBMED:8287654). In humans, the influence of low- and high-pressure baroreflexes on vasopressin release was investigated, and it was suggested that high-pressure baroreflexes might play a more important role in the modulation of AVP release during isoosmotic changes in central blood volume (PUBMED:2686333). Additionally, inverse ratio ventilation with PEEP in postoperative patients resulted in significant increases in central venous pressure (CVP) and ADH, indicating that highly invasive forms of positive pressure ventilation can affect hormonal responses and potentially influence renal function (PUBMED:20446776). While these studies provide insights into the effects of positive pressure ventilation on arginine vasopressin levels in dogs and adult humans, they do not provide direct evidence for preterm neonates. The abstracts related to the use of vasopressin and terlipressin in preterm neonates (PUBMED:36360641, PUBMED:34179513, PUBMED:23477980) focus on the treatment of hypotension and do not discuss the impact of positive pressure ventilation on arginine vasopressin levels in this population. Therefore, based on the provided abstracts, it cannot be conclusively stated whether positive pressure ventilation increases arginine vasopressin in preterm neonates. Further research specifically targeting this population would be required to answer the question definitively.
Instruction: Medical student perceptions of factors affecting productivity of problem-based learning tutorial groups: does culture influence the outcome? Abstracts: abstract_id: PUBMED:12632710 Medical student perceptions of factors affecting productivity of problem-based learning tutorial groups: does culture influence the outcome? Background: The popularization of problem-based learning (PBL) has drawn attention to the motivational and cognitive skills necessary for medical students in group learning. This study identifies the effect of motivational and cognitive factors on group productivity of PBL tutorial groups. Description: A self-administered questionnaire was completed by 115 students at the end of PBL tutorials for 4 themes. The questionnaire explored student perceptions about effect of motivation, cohesion, sponging, withdrawal, interaction, and elaboration on group productivity. We further analyzed (a) differences in perceptions between male and female students, (b) effect of "problems," and (c) effect of student progress over time on group productivity. Evaluation: There were linear relations between a tutorial group's success and the factors studied. Significant differences were noted between male and female student groups. Conclusions: Students and tutors need to recognize symptoms of ineffective PBL groups. Our study emphasizes the need to take into account cultural issues in setting ground rules for PBL tutorials. abstract_id: PUBMED:31211224 Student perception of the effect of problem familiarity on group discussion quality in a problem-based learning environment. Introduction: Problem-based learning (PBL) is a student-centred approach to learning using health problem scenarios to trigger the learning process. Several factors contribute to the role of the problem scenarios in stimulating student learning. One of those factors is the student's familiarity and knowledge about the problem itself. This may affect the challenge and stimulate the student discussion in the tutorial group. No previous research studied the impact of reusing the case scenarios on the group discussion. This study explored the effect of student familiarity of the problems as a result of reusing the case scenarios on the discussion quality in the tutorial session. Methods: A qualitative study was used primarily to explore an understanding of the underlying opinions of the medical students of first and second academic year in the college of Medicine, University of Sharjah, UAE. Direct-discussion groups were arranged, and an open-ended online questionnaire was provided. Results: The results of the study showed that fore-knowledge about the case scenario had no significant adverse effect on the discussion. Students stated that the facilitators played a vital role in maintaining the excellent quality of the discussion. Discussion: Reuse of problem scenarios in PBL does not hurt the quality of the discussion, provided that the group dynamics are maintained. abstract_id: PUBMED:10583789 Assessment of student performance in problem-based learning tutorial sessions. Objectives: To assess student performance during tutorial sessions in problem-based learning (PBL). Design: A 24-item rating scale was developed to assess student performance during tutorial sessions in problem-based learning (PBL) as conducted during the pre-clinical years of Medical School at the National Autonomous University of Mexico. Items were divided into three categories: Independent study, Group interaction and Reasoning skills. 
Fourteen tutors assessed 152 first and second-year students in 16 tutorial groups. An exploratory factor analysis with an Oblimin rotation was carried out to identify the underlying dimensions of the questionnaire. Setting: Medical School at the National Autonomous University of Mexico. Subjects: Medical students. Results: Factor analysis yielded four factors (Independent study, Group interaction, Reasoning skills, and Active participation) which together accounted for 76.6% of the variance. Their Cronbach reliability coefficients were 0.95, 0.83, 0.94 and 0. 93, respectively, and 0.96 for the scale as a whole. Conclusions: It was concluded that the questionnaire provides a reliable identification of the fundamental components of the PBL method as observable in tutorial groups and could be a useful assessment instrument for tutors wishing to monitor students' progress in each of these components. abstract_id: PUBMED:26440831 Perceptions of undergraduate medical students regarding case based learning and tutorial format. Objective: To evaluate the perception of medical students regarding tutorial and case-based learning formats. Methods: The within stage mixed model research was conducted at the Army Medical College, National University of Sciences and Technology, Islamabad, Pakistan, from April 2012 to March 2013, and comprised medical students of 4th year. Data-collection tool was a self-developed questionnaire, and it was statistically analysed using SPSS 16. Results: Of the 144 students, 86(60%) were males. The overall age range was 21.5-23.4 years. Case-based learning method had significantly higher scores compared to tutorial method for learning process (p<0.001), for behavioural influence of facilitators (p<0.001), for group dynamics (p<0.001), and for learning environment (p<0.001). For open-ended questions regarding like or dislike of the two methods, majority named some faculty members as their role models and appreciated their support in better and in-depth learning through case-based learning format. Conclusions: Case-based learning format was significantly more appreciated and favoured as a learning strategy by students. abstract_id: PUBMED:28144171 Advances in medical education and practice: student perceptions of the flipped classroom. The flipped classroom (FC) approach to teaching has been increasingly employed in undergraduate medical education in recent years. In FC applications, students are first exposed to content via online resources. Subsequent face-to-face class time can then be devoted to student-centered activities that promote active learning. Although the FC has been well received by students in other contexts, the perceptions of medical students regarding this innovation are unclear. This review serves as an early exploration into medical student perceptions of benefits and limitations of the FC. Medical students have generally expressed strong appreciation for the pre-class preparation activities (especially when facilitated by concise, readily accessed online tools) as well as for interactive, engaging small group classroom activities. Some students have expressed concerns with the FC and noted that suboptimal student preparation and insufficient direction and structure during active learning sessions may limit the student-centered benefits. 
Although students generally perceive that FC approaches can improve their learning and knowledge, this has not been conclusively shown via performances on assessment tools, which may be related to caveats with the assessment tools used. In any case, lifelong self-directed learning skills are perceived by medical students to be enhanced by the FC. In conclusion, medical students have generally expressed strong satisfaction with early applications of the FC to undergraduate medical education, and generally prefer this method to lecture-based instruction. abstract_id: PUBMED:16199359 Student-led tutorials in problem-based learning: educational outcomes and students' perceptions. The aim of this study was to examine the effectiveness of using students as tutors in a problem-based learning (PBL) medical curriculum. Ninety-one third-year medical students were divided into ten tutorial groups. The groups were randomly allocated into student-led tutorials (SLT) (five groups, n = 44 students) and faculty-led tutorials (FLT) (five groups, n = 47 students). Outcome measurements included assessment of students' performance in tutorials individually and as a group, end-unit examinations scores, assessment of tutoring skills and identifying students' perceptions about peer tutoring. Student tutors were perceived better in providing feedback and in understanding the difficulties students face in tutorials. Tutorial atmosphere, decision-making and support for the group leader were better in SLT compared with FLT groups. Self-assessment of student performance in SLT was not different from FLT. Student scores in the written and practical examinations were comparable in both groups. However, SLT groups found difficulties in analysis of problems presented in the first tutorial session. We conclude that the impact of peer tutoring on student performance in tutorials, group dynamics, and student achievement in examinations is positive overall. However, student tutors require special training before adopting this approach in PBL programs. abstract_id: PUBMED:19502146 Factors affecting the quality of problem-based learning in a hybrid medical curriculum. For medical schools that wish to implement or are involved in problem-based learning (PBL) as part of their medical curriculum, there are many factors which can affect its quality. This paper discusses four critical issues--the need for sufficient protected time for PBL; the structure of the PBL case and its alignment with learning outside the tutorial room; the role of tutors and assessment in PBL--which can affect tutor and students' performance. abstract_id: PUBMED:34125761 Perception of medical students about problem-based learning at Jouf University. Objective: To evaluate students' perceptions of problem-based learning tutorial sessions. Methods: The cross-sectional study was conducted at the College of Medicine, Jouf University, Al-Jawf Saudi Arabia, from March to May 2019, and comprised medical students in their clinical phase. Data was collected using an electronic questionnaire about students' perceptions of problem-based learning sessions, tutors' session facilitation, and tutors' student evaluation fairness. Data was analysed using SPSS 21. Results: Of the 66 students approached, 51(77.27%) completed the questionnaire; 23(45%) males and 28(55%) females. The agreement mean perception score was 4.73±2.13 regarding various statements characterising problem-based learning sessions. 
A lower mean agreement score of 1.33±1.07 was observed regarding tutors' facilitation of the sessions and their students' evaluation. Conclusions: Students perceived problem-based learning sessions as effective and interactive, but did not positively perceive tutors' evaluations. abstract_id: PUBMED:25424671 The opinion of teachers about tutorial problem based learning. Background: In 2004 the Faculty of Medicine of Universidad de La Frontera in Chile implemented curricular changes, incorporating small group problem based learning in different careers. Aim: To explore aspects that hamper or facilitate tutorial problem based learning from the perspective of tutors. Material And Methods: Six in depth interviews and a focus group with tutors were carried out in 2010 and 2011. Data were analyzed through constant comparisons using the program ATLAS.ti, guaranteeing credibility, reliance, validation and transferability. Results: Five hundred and twenty eight (528) significance units were identified and 25 descriptive categories emerged. The categories of tutor motivation, methodological domain, tutor responsibility, tutor critical capacity, disciplinary domain, student participation and tutor-student interaction were emphasized. Three qualitative domains were generated, namely tutor skills, transformation of student roles and institutional commitment. Conclusions: Tutorial teaching is favored by teachers when the institutions train them in the subject, when there is administrative support and an adequate infrastructure and coordination. abstract_id: PUBMED:34777094 Student Perceptions of Academic Engagement and Student-Teacher Relationships in Problem-Based Learning. Students' academic engagement depends on a variety of factors that are related to personal learner characteristics, the teacher, the teaching methodology, peers, and other features in the learning environment. Components that influence academic engagement can be cognitive, metacognitive, affective, social, task-related, communicative, and foreign language-related. Rather than existing in isolated spheres, the factors contributing to an individual's academic engagement intertwine and overlap. The relationships students cultivate with others are prominent in several of these areas. Positive interpersonal relationships enhance individuals' enthusiasm for learning (Mercer and Dörnyei, 2020), which benefits sustainable learning success and self-confidence. The relationships between students and teachers and the perceptions students have of their teachers seem to be particularly influential on students' engagement in academic undertakings. Problem-based learning (PBL), a teaching approach particularly suitable for tertiary education, involves students in authentic problem-solving processes and fosters students' self-regulation and teamwork. Intensive relationship-building is one of the key characteristics of this student-centered approach (Amerstorfer, 2020). The study reported in this article explores the connection between the academic engagement of 34 students and their perceptions of three instructors in a teacher education program for pre-service English teachers in Austria. An online questionnaire was used to investigate the participants' perceived academic engagement (effort, dedication, learning success) in a university course that implements PBL as its underlying teaching methodology in comparison to conventional teaching approaches.
The study further examines how the students perceived the course instructors' caring, credibility, communication style, and feedback, which leads to new information about how PBL shapes student-teacher relationships. Due to Covid-19, the otherwise face-to-face course was taught online. Answer: Medical student perceptions of factors affecting productivity in problem-based learning (PBL) tutorial groups do suggest that culture may influence the outcome. A study identified that there were linear relations between a tutorial group's success and factors such as motivation, cohesion, sponging, withdrawal, interaction, and elaboration. It also noted significant differences in perceptions between male and female student groups, which could be indicative of cultural influences on group dynamics and productivity. The study emphasized the need to consider cultural issues when setting ground rules for PBL tutorials (PUBMED:12632710). Additionally, the quality of PBL can be affected by various factors, including the need for sufficient protected time for PBL, the structure of the PBL case, the role of tutors, and assessment in PBL, which can influence both tutor and students' performance (PUBMED:19502146). These factors could be influenced by the cultural context in which the PBL is implemented, as different cultures may have varying expectations and norms regarding education, group work, and authority figures such as tutors. Furthermore, the perception of medical students about PBL at Jouf University showed that while students perceived PBL sessions as effective and interactive, they did not positively perceive tutors' evaluations (PUBMED:34125761). This could suggest that cultural expectations about the role and authority of tutors may affect students' perceptions of the fairness and effectiveness of evaluations in PBL settings. In conclusion, medical student perceptions do indicate that cultural factors can influence the productivity of PBL tutorial groups, affecting various aspects of group dynamics, tutor roles, and assessment perceptions.
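The tutorial-group comparisons summarised above (for example, student-led versus faculty-led groups, or agreement ratings across Likert-scale statements) rest on comparing ordinal scores between independent groups. As a minimal, self-contained sketch of the kind of non-parametric test commonly applied to such data, the Python example below runs a two-sided Mann-Whitney U test on hypothetical 5-point ratings; the numbers are invented for illustration and are not taken from any of the cited studies.

```python
# Hypothetical 5-point Likert ratings from two independent tutorial groups.
# This only illustrates the style of comparison; it is not the cited analysis.
from scipy.stats import mannwhitneyu

student_led = [4, 5, 4, 3, 5, 4, 4, 5, 3, 4]   # invented ratings
faculty_led = [3, 4, 3, 4, 2, 4, 3, 3, 4, 3]   # invented ratings

u_stat, p_value = mannwhitneyu(student_led, faculty_led, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.3f}")
```

A non-significant p-value in such a test corresponds to the "comparable in both groups" phrasing used in the abstracts, while a small p-value would indicate a genuine shift in the rating distributions.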
Instruction: Is CD19 an immunological diagnostic marker for acute appendicitis? Abstracts: abstract_id: PUBMED:24375063 Is CD19 an immunological diagnostic marker for acute appendicitis? Background: The appendix is considered as part of the gut-associated lymphoid tissue; however, lymphocyte subsets in this tissue are not fully defined. Objective: To investigate and compare the function and phenotype of lymphocyte subsets in peripheral blood and appendix of patients with normal and inflamed appendix tissues. Methods: Peripheral blood samples and appendiceal mononuclear cells were obtained from 81 patients (mean age, 23 ± 10.5 years), clinically suspected of having appendicitis. The phenotypic characteristics of lymphocyte subsets in peripheral blood (before and 48-72 hrs after appendectomy) and in appendix tissue were analyzed by three-color flow cytometry. The proliferative response of mononuclear cells was assessed by the MTT method. Results: The frequency of CD19+DR+, HLA-DR+ and CD19+ cells in the appendix tissue was significantly higher than that of the peripheral blood in all the groups (p<0.001). The percentage of CD19+ cells and HLA-DR+CD19+ cells significantly decreased after appendectomy in the peripheral blood of the patients with acute appendicitis (p=0.047 and p=0.03, respectively). CD19 and HLA-DR plus CD19 had better diagnostic efficiency compared with T cell markers (area under the ROC curve [AUC] = 0.76 and 0.73, respectively). Conclusion: These results indicate a significant difference in CD19+ and HLA-DR+ lymphocytes between peripheral blood and the appendix tissue. abstract_id: PUBMED:10776800 Phenotypic pattern of B cells in the appendix: reduced intensity of CD19 expression. Unlabelled: The lymphoid tissue of the appendix is considered as part of the gut-associated lymphoid tissue (GALT). In order to better understand the immunological significance of the appendix we analyzed the cellular composition of normal and inflamed human appendix tissue by flow cytometry, with special attention to expression of the CD19 and CD5 markers on B cells. Cellular analysis was also performed on peripheral and appendiceal vein blood samples as well as on omentum and peritoneal fluid samples. The study population included seventeen patients aged 2-15 yr. (mean age - 11.5 yr.) undergoing appendectomy. Ten children were diagnosed with acute appendicitis while 7 had a normal appendix. Results: Compared to the peripheral blood, the appendix contained a significantly higher percentage of CD19 cells (47.6% of total lymphocytes versus 15%, p<0.0001), and B1 cells (4.98% of total lymphocytes versus 2.42%, p=0.001). In addition, the intensity of CD19-staining was markedly decreased in the appendix (mean - 395.7), and also in the omentum (mean - 398.2) as compared to peripheral lymphocytes (mean - 497.7, p<0.0001 for both comparisons). Comparison between the inflamed and the non-inflamed appendices revealed that the inflamed appendix contained a significantly higher proportion of B1 cells (5.64% of total lymphocytes versus 3.53%, p=0.032), and also a higher B1/B cell ratio (0.13 vs. 0.07, p=0.01). Conclusions: These results indicate that the appendix tissue contains a higher number of B1 (and B) cells compared to the peripheral blood and that these cells play a role in the primary immune response to acute infection/inflammation in the appendix. The appendiceal B cell population is unique in terms of CD19 intensity expression on their surface.
abstract_id: PUBMED:32035730 Hyponatremia as a marker of complicated appendicitis: A systematic review. Background: Acute appendicitis, the most common cause of acute surgical abdomen, is associated with intra-abdominal complications, such as perforation, that increase morbidity and mortality. Early and accurate preoperative diagnosis of complicated appendicitis mandates the identification of new diagnostic markers. This systematic review summarizes current literature on the adoption of hyponatremia as an early diagnostic and predictive marker of complicated appendicitis. Methods: Pubmed, Cochrane Library, Scopus, Google Scholar, WHO Global Health Library, System for Information on Grey Literature, ISI Web of Science, EBSCOHost and Virtual Health Library were searched in accordance with the PRISMA guidelines in order to identify original human studies investigating the association between hyponatremia and the presence or development of complicated appendicitis. Results: A total of 7 studies conducted in 6 different countries were identified. A prospective diagnostic accuracy study reported a strong association between hyponatremia and complicated appendicitis in children. The largest sample size study performed in adults reported a significant association between hyponatremia and perforated or gangrenous appendicitis. Conclusions: The admission serum sodium level measurement, a routinely performed, low-cost test, should be taken into account in patients with clinical presentation compatible with acute appendicitis and suspicion of underlying complications. Future well-designed prospective diagnostic accuracy studies are required to further establish the association between hyponatremia and perforated appendicitis. abstract_id: PUBMED:24080115 Bilirubin; a diagnostic marker for appendicitis. Introduction: Every investigation that can contribute towards a diagnosis of appendicitis is valuable to the emergency general surgeon. Previous research has suggested that hyperbilirubinaemia is a more specific marker for both simple and perforated appendicitis than WBC (white blood count) and CRP (C-reactive protein), but this investigation is not commonly used to help diagnose appendicitis. Aims: This study investigated whether there is an association between hyperbilirubinaemia and appendicitis. We also reviewed the diagnostic value of bilirubin in perforated vs simple appendicitis, and compared it with the serum C-reactive protein (CRP) and white blood cell count (WBC). Methods: This single centre, prospective observational study included all patients admitted with right iliac fossa (RIF) pain who had liver function tests performed. Statistical analysis was performed using Fisher's exact test to compare bilirubin, WBC and CRP levels for normal appendices, simple appendicitis, and perforated appendicitis. Results: 242 patients were included in this study, of whom 143 were managed operatively for RIF pain. Hyperbilirubinaemia was significantly associated with appendicitis vs RIF pain of other aetiologies (p < 0.0001). Bilirubin had a higher specificity (0.96), than WBC (0.71) and CRP (0.62), but a lower sensitivity (0.27 vs 0.68 and 0.82 respectively). Hyperbilirubinaemia was associated with perforated appendicitis vs simple appendicitis with statistical significance (p < 0.0001). Bilirubin had a higher specificity (0.82) than both WBC (0.34) and CRP (0.21), but a lower sensitivity (0.70 vs 0.80 and 0.95 respectively). 
Conclusion: Our findings confirm that hyperbilirubinaemia has a high specificity for distinguishing acute appendicitis, especially when perforated, from other causes of RIF pain, particularly those not requiring surgery. abstract_id: PUBMED:22290042 Is neopterin a diagnostic marker of acute appendicitis? Background: The diagnosis of acute appendicitis, even for experienced surgeons, can sometimes be complex. A delay in diagnosis increases the complication rate. This experimental study aimed to investigate the suitability and significance of neopterin as a marker for acute appendicitis. Methods: The levels of neopterin were measured using an acute appendicitis animal model in 35 New Zealand male rabbits. They were divided into 5 groups as Group 1 = control; Group 2 = sham; and Groups 3 (12-hour), 4 (24-hour), and 5 (48-hour) (based on the elapsed time period before their appendectomies). The neopterin levels of each group were measured by neopterin enzyme immunoassay kit in blood samples (taken before the appendectomies in Groups 3, 4 and 5). Results: For the diagnosis of acute appendicitis, the optimal cut-off point was 34.475 nmol/L. The probability of acute appendicitis was found to be 4.667 times higher when the neopterin level was greater than 34.475 nmol/L. Conclusion: This study was an experimental animal study; however, it provides valuable clues useful in clinical assessment. Neopterin seems to have great potential as a new diagnostic marker for the diagnosis of acute appendicitis. abstract_id: PUBMED:27162905 The Diagnostic Value of Surface Markers in Acute Appendicitis; A Diagnostic Accuracy Study. Objective: To determine the diagnostic value of blood cell surface markers in patients with acute appendicitis. Methods: In this cross-sectional study, 71 patients who underwent appendectomy following a diagnosis of appendicitis were recruited during a one-year period. The patients were divided into two groups: patients with histopathologically confirmed acute appendicitis and subjects with normal appendix. Blood cell surface markers of all patients were measured. Univariate and multivariate analytical methods were applied to identify the most useful markers. Receiver operating characteristics (ROC) curves were also used to find the best cut-off point, sensitivity, and specificity. Results: Overall we included 71 patients with mean age of 22.6±10.7 years. Of the 71 cases, 45 (63.4%) had acute appendicitis while 26 (36.6%) were normal. There was no significant difference between the two study groups regarding age (p=0.151) and sex (p=0.142). The initial WBC count was significantly higher in those with acute appendicitis (p=0.033). Maximum and minimum areas under the ROC curve in univariate analysis were reported for CD3/RA (0.71) and CD38 (0.533), respectively. Multivariate regression models revealed the percentage of accurate diagnoses based on the combination of γ/δ TCR, CD3/RO, and CD3/RA markers to be 74.65%. Maximum area under the ROC curve (0.79) was also obtained for the same combination. Conclusion: The best blood cell surface markers in the prediction of acute appendicitis were HLA-DR+CD19, α/β TCR, and CD3/RA. The simultaneous use of γ/δ TCR, CD3/RA, and CD3/RO showed the highest diagnostic value in acute appendicitis.
Unlabelled: The objective was to determine the usefulness of D Dimer (DD) as a diagnostic or prognostic marker in acute appendicitis (AA) in children using a prospective observational study in the pediatric emergency department of a tertiary hospital. We enrolled 135 patients aged 1-16 years presenting with abdominal pain consistent with AA, who required laboratory studies. We analyzed clinical, analytical variables and histopathology findings (when they underwent surgery). Statistical analysis was conducted using SPSS. 38.5% of the children were clinically diagnosed with AA (n = 52), confirmed by pathology in 51 patients. 55.8% were gangrenous appendicitis. Leucocyte count, C-reactive protein (CRP), and DD were higher in the AA group and in the gangrenous appendicitis group (p < 0.05), with the highest values of DD in the gangrenous group. The area under the receiver operating characteristic (ROC) curve for DD in the diagnosis of AA is 0.66 (95% CI 0.56-0.75). For a DD cut-off point of 230 ng/mL, sensitivity (Se) was 0.40, specificity (Sp) 0.80, positive predictive value (PPV) 0.57, and negative predictive value (NPV) 0.66. The area under the ROC curve for DD in children with gangrenous appendicitis is 0.93 (95% CI 0.87-1). A DD cut-off point of 230 ng/mL exhibited: Se = 0.69, Sp = 1, PPV = 1 and NPV = 0.72. Conclusion: DD levels increase in patients with AA. Although it does not constitute a useful diagnostic marker, it could be a good prognostic marker. abstract_id: PUBMED:35884054 Hyponatremia-A New Diagnostic Marker for Complicated Acute Appendicitis in Children: A Systematic Review and Meta-Analysis. Background: Acute appendicitis in the pediatric population remains a diagnostic challenge for clinicians. Despite many biochemical markers, imaging modalities and scoring systems, initial misdiagnosis and complication rates are high in children. This suggests the need for investigations directed towards new diagnostic tools to aid in the diagnosis. Recent studies have shown a correlation between serum sodium levels and complicated appendicitis. Although the exact reasons for hyponatremia in patients with complicated appendicitis are not known, there is persuasive data to support the role of pro-inflammatory cytokines such as IL-6 in the non-osmotic release of antidiuretic hormone. This meta-analysis aims to investigate all available data on hyponatremia as a diagnostic marker of complicated appendicitis in the pediatric population. Methods: The literature search was conducted by two independent investigators according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The scientific databases (PubMed, EMBASE, Web of Science, and Scopus) were systematically searched for relevant studies using the keywords (hyponatremia) AND (appendicitis) AND (children). The methodological quality was assessed using a validated scale, and RevMan 5.4 software was utilized for pooled analysis. Results: Seven studies were included in the final meta-analysis, five of which were retrospective. A total of 1615 and 2808 cases were distributed into two groups: group A with complicated appendicitis and group B with uncomplicated acute appendicitis, respectively. The studies compared serum sodium levels of patients among the groups. Pooling the data demonstrated significantly lower serum sodium levels in children with complicated appendicitis vs. the non-complicated appendicitis (WMD: −3.29, 95% CI = −4.52 to −2.07, p < 0.00001).
The estimated heterogeneity among the included studies was substantial and statistically significant (I2 = 98%, p < 0.00001). Conclusion: The results of the present meta-analysis indicate that hyponatremia has potential to be utilized as a biochemical marker in the diagnosis of complicated appendicitis in the pediatric population. However, well designed prospective diagnostic efficiency studies are essential to consolidate the association between hyponatremia and complicated acute appendicitis. abstract_id: PUBMED:17708281 Serum D-lactate: a useful diagnostic marker for acute appendicitis. Background/aims: Acute appendicitis is the most common acute surgical infection seen in the emergency department. The present study aims to evaluate the sensitivity, specificity, negative predictive value (NPV) and positive predictive value (PPV) of the serum D-lactate levels as a marker for the diagnosis of acute appendicitis. D-lactate is the stereoisomer of the mammalian L(+)-lactate, and is produced by indigenous bacteria (Escherichia coli, Klebsiella, Bacteroides, Lactobacillus) in the gastrointestinal tract. Once obstruction occurs, the appendix is a good medium for bacterial proliferation, and ischemic injury leads to an increase in D-lactate levels. Methodology: A total of thirty-two consecutive patients with the suspicion of acute appendicitis were prospectively included in the study. Patient characteristics, ultrasonography (US) and laboratory assessment including white blood cell (WBC), C-reactive protein (CRP), D-lactate and intraoperative findings, histology results, clinical outcome were evaluated. Results: A WBC level above 10(9)/L had an accuracy of 66%, whereas a CRP level above 5 mg/L had an accuracy of 75%. We observed that when the D-lactate level was greater than 0.25 mmol/L in acute appendicitis, the specificity was 60%, the false negative rate was 25% and the accuracy was 90%. The false negative rate of CRP (67%) was higher than that of D-lactate levels (25%). Ultrasound had a sensitivity of 96%, specificity 40% and accuracy 87% in our study. Conclusions: We found positive correlations between serum D-lactate levels and acute appendicitis and serum D-lactate had the lowest false negative rate among the other parameters. Therefore, we conclude that D-lactate might be a simple and reliable diagnostic marker for appendicitis. abstract_id: PUBMED:12958680 Diagnostic laparoscopy. Diagnostic laparoscopy began in a surgical unit in a developing country in 1972. The developers of this technique aimed to hasten diagnosis, reduce patient distress, and improve bed utilization in an overcrowded teaching hospital wherein simple investigations such as x-rays took weeks to materialize. Over a period of 18 years extending to 1990, 3,200 diagnostic laparoscopies were performed on adults under local anesthesia with no mortality, a complication rate of 0.09%, an 84% diagnosis rate, and 74% undergoing histologic biopsies targeting a wide spectrum of pathology. The equipment cost spread out over the 3,200 patients works out to 30 rupees (0.60 dollar) per patient. With the availability of noninvasive diagnostic aids such as ultrasound (US), computed tomography (CT), and magnetic resonance imaging (MRI), used with targeted biopsy under imaging control, the role of diagnostic laparoscopy has altered. Since 1990, clinicians have had the sophistication of the video camera and the pneumoperitoneum insufflator.
Diagnostic laparoscopy is used for the evaluation of liver and peritoneal pathology, abdominal tuberculosis, malignancy, acute abdomen, and abdominal trauma. It often is a prelude to laparoscopic treatment of the underlying pathology, specifically in cases of acute appendicitis. Answer: Yes, CD19 is considered an immunological diagnostic marker for acute appendicitis. Studies have shown that the frequency of CD19+ cells, as well as HLA-DR+ and CD19+ cells, is significantly higher in appendix tissue compared to peripheral blood in patients with acute appendicitis (PUBMED:24375063). Additionally, the percentage of CD19+ cells and HLA-DR+CD19+ cells significantly decreased after appendectomy in the peripheral blood of these patients, suggesting that CD19 and HLA-DR plus CD19 have better diagnostic efficiency compared with T cell markers for acute appendicitis (PUBMED:24375063). Another study found that the appendix tissue contains a higher number of B1 (and B) cells compared to peripheral blood, and these cells play a role in the primary immune response to acute infection/inflammation in the appendix, with appendiceal B cell population being unique in terms of CD19 intensity expression on their surface (PUBMED:10776800). Therefore, CD19 can be considered an immunological diagnostic marker for acute appendicitis.
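The appendicitis abstracts above repeatedly report cut-off-based diagnostic statistics (sensitivity, specificity, PPV, NPV) and areas under ROC curves for candidate markers such as CD19, bilirubin, neopterin, D-lactate and D-dimer. As a rough illustration of how those quantities are derived, the following Python sketch computes the four 2x2-table metrics from raw counts and a rank-based AUC from marker values; all numbers are hypothetical and are not drawn from the cited studies.

```python
# Illustrative only: how cut-off-based diagnostic accuracy figures and a ROC AUC
# are computed. The counts and marker values below are invented.

def diagnostic_metrics(tp, fp, fn, tn):
    """Return sensitivity, specificity, PPV and NPV from a 2x2 table."""
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return sensitivity, specificity, ppv, npv

def auc_rank(scores_pos, scores_neg):
    """ROC AUC as the probability that a diseased case scores higher than a
    non-diseased case (ties count as 0.5)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

if __name__ == "__main__":
    # Hypothetical marker applied to 100 patients at a fixed cut-off.
    se, sp, ppv, npv = diagnostic_metrics(tp=40, fp=10, fn=15, tn=35)
    print(f"Se={se:.2f} Sp={sp:.2f} PPV={ppv:.2f} NPV={npv:.2f}")

    # Hypothetical continuous marker levels (e.g., a serum value) by group.
    auc = auc_rank(scores_pos=[5.1, 7.3, 6.8, 9.0], scores_neg=[3.2, 4.8, 5.0])
    print(f"AUC={auc:.2f}")
```

Reported figures such as "Se = 0.40, Sp = 0.80" at a given cut-off correspond to exactly these ratios, and the quoted AUC values summarise discrimination across all possible cut-offs.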
Instruction: Can triggered electromyograph thresholds predict safe thoracic pedicle screw placement? Abstracts: abstract_id: PUBMED:12634564 Can triggered electromyograph thresholds predict safe thoracic pedicle screw placement? Study Design: A prospective clinical study of thoracic pedicle screws monitored with triggered electromyographic testing. Objective: To evaluate the sensitivity of recording rectus abdominis triggered electromyographs to assess thoracic screw placement. Summary Of Background Data: Triggered electromyographic testing from lower extremity myotomes has identified medially placed lumbar pedicle screws. Higher thresholds indicate intraosseous placement because of increased resistance to current flow. Lower thresholds correspond to compromised pedicles with potential for nerve impingement. No clinical study has correlated an identical technique with recordings from the rectus muscle, which is innervated from T6 to T12. Methods: A total of 677 thoracic screws were placed in 92 consecutive patients. Screws placed from T6 to T12 were evaluated using an ascending method of stimulation until a compound muscle action potential was obtained from the rectus abdominis. Threshold values were compared both in absolute terms and also in relation to other intrapatient values. Results: Screws were separated into three groups: Group A (n = 650 screws) had thresholds >6.0 mA and intraosseous placement. Group B (n = 21) had thresholds <6.0 mA but an intact medial pedicle border on reexamination and radiographic confirmation. Group C (n = 6) had thresholds <6.0 mA and medial wall perforations confirmed by tactile and/or visual inspection. Thus, 3.9% (27 of 677) of all screws had thresholds <6.0 mA. Only 22% (6 of 27) had medial perforation. Group B screws averaged a 54% decrease from the mean as compared with a 69% decrease for Group C screws (P = 0.0160). There were no postoperative neurologic deficits or radicular chest wall complaints. Conclusion: To assess thoracic pedicle screw placement, triggered electromyographic thresholds <6.0 mA, coupled with values 60-65% decreased from the mean of all other thresholds in a given patient, should alert the surgeon to suspect a medial pedicle wall breach. abstract_id: PUBMED:21386946 Reliability of Triggered EMG for Prediction of Safety during Pedicle Screw Placement in Adolescent Idiopathic Scoliosis Surgery. Study Design: We performed a prospective study to evaluate the reliability of using triggered electromyography (EMG) for predicting pedicle wall breakthrough during the placement of pedicle screws in adolescent idiopathic scoliosis surgery. Purpose: We wanted to correlate pedicle wall breakthrough with the triggered EMG threshold of stimulation and the postoperative computed tomography (CT) findings. Overview Of Literature: Pedicle wall breakthrough has been reported to be difficult to evaluate by radiographs. Triggered EMG had been found to be a more sensitive test to detect this breakthrough. Methods: Seven patients who underwent the insertion of 103 pedicle screws were evaluated. The triggered EMG activity was recorded from several muscles depending on the level of screw placement. The postoperative CT scans were read by a spine surgeon who was a senior fellow in orthopedics, and a musculoskeletal radiologist. Results: The mean age at surgery was 12.6 years (range, 11 to 17 years). The preoperative mean Cobb angle was 54.7° (range, 45 to 65°). There were 80 thoracic screws and 23 lumbar screws.
All the screws had stimulation thresholds of ≥ 6 mA, except 3 screws with the stimulation threshold of < 6 mA. Ten screws (9.7%) showed violation of the pedicle wall on the postoperative CT scans. Five screws penetrated medially and another five penetrated laterally. No postoperative neurologic complications were noted in any of the seven patients. Conclusions: Measuring the stimulation threshold of triggered EMG helps to assess the pedicle screw placement. Pedicle screws that had stimulation threshold of ≥ 6 mA were safe, with 90.3% reliability, as was assessed on the postoperative CT scans. abstract_id: PUBMED:29954289 Accuracy of thoracic pedicle screw placement using freehand technique and triggered EMG in adolescent idiopathic scoliosis: Is it different between concave and convex side? Purpose: To evaluate and compare the accuracy of the placement of thoracic pedicle screws between concave and convex side curve using freehand technique and triggered electromyogram (EMG). Methods: Three-hundred sixty eight thoracic pedicle screws were included for the evaluation of accuracy from postoperative computed tomography (CT) scans in 24 patients with adolescent idiopathic scoliosis with a mean follow-up of 27 months (range 24-58 months). All screws had stimulation thresholds greater than 6 mA. We divided the screws into two groups: (1) group A: concave side curve-254 screws (2) group B: convex side curve-114 screws. Results: The mean age at surgery was 12.8 years (range 11-17 years). The preoperative Cobb angle was 57.5° (range 50-75°). There were 41 total breaches (medial = 21, lateral = 20, p = 0.66) with an overall incidence of 11.1%. The overall breach rate did not attain statistically significant difference (group A = 11.8%, group B = 9.6%, p = 0.59). The rate of medial breaches was also similar for both concave and convex side (group A = 6.1%, group B = 5.9%, p > 0.05). No postoperative neurological or vascular complications were noted in both groups. Conclusion: Thoracic pedicle screw placement using both freehand technique and triggered EMG for adolescent idiopathic scoliosis surgery is safe and reliable. The overall accuracy rate is 88.9% with similar breaches on the concave and convex side on postoperative CT scans. abstract_id: PUBMED:22396849 Free Hand Pedicle Screw Placement in the Thoracic Spine without Any Radiographic Guidance : Technical Note, a Cadaveric Study. Thoracic pedicle screw fixation techniques are still controversial for thoracic deformities because of possible complications including neurologic deficit. Methods to aid the surgeon in appropriate screw placement have included the use of intraoperative fluoroscopy and/or radiography as well as image-guided techniques. We describe our technique for free hand pedicle screw placement in the thoracic spine without any radiographic guidance and present the results of pedicle screw placement analyzed by computed tomographic scan in two human cadavers. This free hand technique of thoracic pedicle screw placement performed in a step-wise, consistent, and compulsive manner is an accurate, reliable, and safe method of insertion to treat a variety of spinal disorders, including spinal deformity. abstract_id: PUBMED:24403958 Percutaneous pedicle screw placement in the thoracic spine: A cadaveric study. Study Design: A cadaveric study to determine the accuracy of percutaneous screw placement in the thoracic spine using standard fluoroscopic guidance. 
Summary Of Background Data: While use of percutaneous pedicle screws in the lumbar spine has increased rapidly, its acceptance in the thoracic spine has been slower. As indications for pedicle screw fixation increase in the thoracic spine so will the need to perform accurate and safe placement of percutaneous screws with or without image navigation. To date, no study has determined the accuracy of percutaneous thoracic pedicle screw placement without use of stereotactic imaging guidance. Materials And Methods: Eighty-six thoracic pedicle screw placements were performed in four cadaveric thoracic spines from T1 to T12. At each level, Ferguson anterior-posterior fluoroscopy was used to localize the pedicle and define the entry point. Screw placement was attempted unless the borders of the pedicle could not be delineated solely using intraoperative fluoroscopic guidance. The cadavers were assessed using pre- and postprocedural computed tomography (CT) scans as well as dissected and visually inspected in order to determine the medial breach rate. Results: Ninety pedicles were attempted and 86 screws were placed. CT analysis of screw placement accuracy revealed that only one screw (1.2%) breached the medial aspect of the pedicle by more than 2 mm. A total of four screws (4.7%) were found to have breached medially by visual inspection (three Grade 1 and one Grade 2). One (1.2%) lateral breach was greater than 2 mm and no screw violated the neural foramen. The correlation coefficient of pedicle screw violations and pedicle diameter was found to be 0.96. Conclusions: This cadaveric study shows that percutaneous pedicle screw placement can be performed in the thoracic spine without a significant increase in the pedicle breach rate as compared with standard open techniques. A small percentage (4.4%) of pedicles, especially high in the thoracic spine, may not be safely visualized. abstract_id: PUBMED:36185942 Thoracic Pedicle Screw Placement Utilizing Hands-On Training Session on Three-Dimensional Models. The utilization of three-dimensional (3D) models has been an important element of medical education. We demonstrate a three-dimensionally-printed (3DP) thoracic spine model for use in the teaching of freehand pedicle screw placement. Neurosurgical residents with varying years of experience practiced screw placement on these models. Residents were timed, and models were evaluated for medial and lateral breaches. Overall, this technical report describes the utility of 3D spine models in the training of thoracic pedicle screw placement. The tactile feedback from the 3D models was designed to represent both cortical and cancellous bones. abstract_id: PUBMED:35843532 Effects of anode position on pedicle screw testing during lumbosacral spinal fusion surgery. Of Background Data: Pedicle screws are commonly placed with lumbar/lumbosacral fusions. Triggered electromyography (tEMG), which employs the application of electrical current between the screw and a complementary anode to determine thresholds of conduction, may be utilized to confirm the safe placement of such implants. While previous research has established clinical thresholds associated with safe screw placement, there is variability in clinical practice of anode placement which could lead to unreliable measurements. Purpose: To determine the variance in pedicle screw stimulation thresholds when using four unique anode locations (ipsilateral/contralateral and paraspinal/gluteal relative to tested pedicle screws). 
Study Design: Prospective cohort study conducted at a tertiary medical center. Patient Sample: Twenty patients undergoing lumbar/lumbosacral fusion with pedicle screws using tEMG. Outcome Measures: tEMG stimulation return values are used to assess varied anode locations and reproducibility based on anode placement. Methods: Measurements were assessed across anode placement in ipsilateral/contralateral and paraspinal/gluteal locations relative to the screw being assessed. R2 coefficients of correlation were determined, and variances were compared with F-tests. Results: A total of 94 lumbosacral pedicle screws from 20 patients were assessed. Repeatability was verified using two stimulations at each location for a subset of the screws with an R2 of 0.96. Comparisons between the four anode locations demonstrated R2 values ranging from 0.76 to 0.87. F-tests comparing thresholds between each anode site demonstrated all groups not to be statistically different. Conclusion: The current study, a first-of-its-kind formal evaluation of anode location for pedicle screw tEMG testing, demonstrated very strong repeatability and strong correlation with different locations of anode placement. These results suggest that there is no need to change the side of the anode for testing of left versus right screws, further supporting that placing an anode electrode into gluteal muscle is sufficient and will avoid a sharp ground needle in the surgical field. abstract_id: PUBMED:33998212 Clinical application of three-dimensional printing technique combined with thoracic pedicle screw track detector in thoracic pedicle screw placement. Objective: To investigate the clinical application of three-dimensional (3D) printing technique combined with a new type of thoracic pedicle screw track detector in thoracic pedicle screw placement. Methods: According to the characteristics of the thoracic pedicle and common clinical screw placement methods, a new type of thoracic pedicle screw track detector was independently developed and designed. The clinical data of 30 patients with thoracic vertebrae-related diseases who underwent posterior thoracic pedicle screw fixation between March 2017 and January 2020 were retrospectively analysed. Among them, there were 18 males and 12 females with an average age of 56.3 years (range, 32-76 years). There was 1 case of thoracic disc herniation, 4 cases of thoracic canal stenosis, 2 cases of ossification of the posterior longitudinal ligament of the thoracic vertebra, 16 cases of thoracic trauma, 2 cases of thoracic infection, and 5 cases of space-occupying lesions of the thoracic spinal canal. Three-dimensional CT of the thoracic vertebra was routinely performed preoperatively, and the model of the patient's thoracic vertebra was reconstructed and printed out. With the assistance of the model, preoperative simulation was performed with the new type of thoracic pedicle screw track detector, confirming that no planned screw track breached the critical cortex. During operation, one side was randomly selected for traditional freehand screw placement (control group), and the other side used the 3D printing technique combined with the new type of thoracic pedicle screw track detector to assist thoracic pedicle screw placement (observation group). The single screw placement time, adjustment times of single screw, and blood loss during screw placement were compared between the two groups. The accuracy of screw placement in the two groups was evaluated according to postoperative CT imaging data.
Results: The single screw placement time, adjustment times of single screw, and blood loss during screw placement in the observation group were significantly less than those in the control group ( P<0.05). Postoperative CT examination showed that the observation group had 87 screws of grade 1, 3 screws of grade 2, and the acceptable screw placement rate was 100% (90/90); the control group had 76 screws of grade 1, 2 screws of grade 2, 11 screws of grade 3, and 1 screw of grade 4, and the acceptable screw placement rate was 86.7% (78/90); showing significant difference in screw placement between the two groups ( χ2=12.875, P=0.001). All patients were followed up 6-18 months, with an average of 11.3 months. There was no complication of vascular, nerve, spinal cord, or visceral injury, and screws or rods broken, and no patient was revised. Conclusion: The 3D printing technique combined with the new type of thoracic pedicle screw track detector assisted thoracic pedicle screw placement is convenient, and significantly improves the accuracy and safety of intraoperative screw placement, and overall success rate of the surgery. abstract_id: PUBMED:21512840 Recording triggered EMG thresholds from axillary chest wall electrodes: a new refined technique for accurate upper thoracic (T2-T6) pedicle screw placement. This study was aimed at evaluating the sensitivity and safety of a new technique to record triggered EMG thresholds from axillary chest wall electrodes when inserting pedicle screws in the upper thoracic spine (T2-T6). A total of 248 (36.6%) of a total of 677 thoracic screws were placed at the T2-T6 levels in 92 patients with adolescent idiopathic scoliosis. A single electrode placed at the axillary midline was able to record potentials during surgery from all T2-T6 myotomes at each side. Eleven screws were removed during surgery because of malposition according to intraoperative fluoroscopic views. Screw position was evaluated after surgery in the remaining 237 screws using a CT scan. Malposition was detected in 35 pedicle screws (14.7%). Pedicle medial cortex was breached in 24 (10.1%). Six screws (2.5%) were located inside the spinal canal. Mean EMG threshold was 24.44 ± 11.30 mA in well-positioned screws, 17.98 ± 8.24 mA (p < 0.01) in screws violating the pedicle medial cortex, and 10.38 ± 3.33 mA (p < 0.005) in screws located inside the spinal canal. Below a threshold of 12 mA, 33.4% of the screws (10/30) were malpositioned. Furthermore, 36% of the pedicle screws with t-EMG stimulation thresholds within the range 6-12 mA were malpositioned. In conclusion, assessment of upper thoracic pedicle screw placement by recording tEMG at a single axillary electrode was highly reliable. Thresholds below 12 mA should alert surgeons to suspect screw malposition. This technique simplifies tEMG potential recording to facilitate safe placement of pedicle screws at upper thoracic levels. abstract_id: PUBMED:26425154 Safe and accurate placement of thoracic and thoracolumbar percutaneous pedicle screws without image-navigation. Background: Percutaneous pedicle screw placement is now commonly used to treat spinal instability. It is imperative, especially at thoracic levels, to avoid damage to adjacent neurovascular structures. Although more technically demanding when compared with the lumbar spine, we believe that the percutaneous placement of thoracic pedicle screws can be performed safely without image-navigation. 
Purpose: The purpose was to evaluate the safety of percutaneous pedicle screw placement in the thoracic and thoracolumbar spine without image-navigation. Study Design/setting: A retrospective study at a single institution. Patient Sample: Patients over the age of 18 years who presented with degenerative disease, trauma or tumor that required surgical stabilization. Outcome Measures: Our outcomes included postoperative plain film X-rays and computerized tomography (CT). Materials And Methods: We performed a retrospective study of patients who underwent percutaneous pedicle screw placement without image-navigation between T2 and L2. Results: Between 2005 and 2011, a total of 507 pedicle screws were placed in 120 patients. The indications included trauma (17%), tumor (8%), and degenerative conditions (75%). The mean age was 61.3 years (range: 20-81 years). Fifty-seven percent were male, and 43% were female. The mean blood loss was 297 ± 40 ml. All patients underwent postoperative anterior-posterior and lateral films that showed safe placement of pedicle screws. Moreover, 57% of patients underwent postoperative CT imaging. There was 1 (0.4%) medial breach and 13 (5%) lateral breaches of the pedicle screw patients who underwent CT imaging as read by an independent neuroradiologist. None of the breaches resulted in adverse neurological sequelae either immediately after or at most recent follow-up. Conclusion: Thoracic and thoracolumbar percutaneous pedicle screw placement can be performed safely and accurately without image-navigation. Answer: Yes, triggered electromyograph (EMG) thresholds can predict safe thoracic pedicle screw placement. Studies have shown that higher triggered EMG thresholds typically indicate intraosseous placement of the screw due to increased resistance to current flow, suggesting a safe placement. Conversely, lower thresholds may correspond to compromised pedicles with potential for nerve impingement, indicating a riskier placement (PUBMED:12634564). In a study involving thoracic pedicle screws placed from T6 to T12, it was found that screws with thresholds greater than 6.0 mA were safely placed intraosseously (Group A), while those with thresholds less than 6.0 mA required further evaluation. Among the latter, only a portion had medial wall perforations confirmed by tactile and/or visual inspection (Group C). The study concluded that triggered EMG thresholds less than 6.0 mA, especially when coupled with values 60-65% decreased from the mean of all other thresholds in a given patient, should alert the surgeon to a potential medial pedicle wall breach (PUBMED:12634564). Another study on adolescent idiopathic scoliosis surgery found that pedicle screws with stimulation thresholds of ≥ 6 mA were safe, with 90.3% reliability as assessed on postoperative CT scans (PUBMED:21386946). Similarly, a study comparing the accuracy of thoracic pedicle screw placement between the concave and convex sides of the curve in adolescent idiopathic scoliosis using freehand technique and triggered EMG found an overall accuracy rate of 88.9%, with no significant difference in breach rates between the two sides (PUBMED:29954289). These findings are supported by other studies that have demonstrated the utility of triggered EMG in enhancing the safety and accuracy of thoracic pedicle screw placement, whether using freehand techniques, percutaneous approaches, or with the aid of three-dimensional models and specialized detectors (PUBMED:22396849, PUBMED:24403958, PUBMED:36185942, PUBMED:33998212). 
In conclusion, triggered EMG thresholds are a reliable method to predict safe thoracic pedicle screw placement, and thresholds below 6.0 mA should prompt careful evaluation for potential pedicle breach.
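The screening rule summarised in this answer combines an absolute cut-off (a triggered-EMG threshold below 6.0 mA) with a relative criterion (a value roughly 60-65% below the mean of the patient's other screw thresholds). The sketch below encodes that combined rule purely for illustration, using invented threshold values; it is not a validated clinical tool, and the cut-off constants are simply those quoted in the cited abstracts.

```python
# Minimal sketch: flag a pedicle screw for re-inspection when its tEMG threshold
# is below the absolute cut-off AND is sharply reduced relative to the mean of
# the patient's other screws. All threshold values here are hypothetical.

ABSOLUTE_CUTOFF_MA = 6.0
RELATIVE_DROP = 0.60  # 60% below the mean of the remaining screws

def flag_suspicious_screws(thresholds_ma):
    """Return indices of screws whose threshold suggests a possible medial breach."""
    flagged = []
    for i, value in enumerate(thresholds_ma):
        others = [t for j, t in enumerate(thresholds_ma) if j != i]
        mean_others = sum(others) / len(others)
        relative_drop = 1.0 - value / mean_others
        if value < ABSOLUTE_CUTOFF_MA and relative_drop >= RELATIVE_DROP:
            flagged.append(i)
    return flagged

if __name__ == "__main__":
    # Hypothetical intra-patient thresholds (mA) for eight screws.
    thresholds = [14.2, 12.8, 15.5, 13.1, 4.1, 11.9, 16.0, 13.7]
    print(flag_suspicious_screws(thresholds))  # -> [4]
```

Combining the absolute and relative criteria mirrors the finding that a low threshold alone overcalls breaches (most screws below 6.0 mA still had intact medial walls), whereas the size of the drop from the patient's own mean helps separate true perforations.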
Instruction: Are cuffed peripherally inserted central catheters superior to uncuffed peripherally inserted central catheters? Abstracts: abstract_id: PUBMED:37349797 Right heart thrombus in transit and peripherally inserted central catheters. Right heart thrombus in transit or "free-floating right heart thrombus" is defined as thrombus(i) arising from the deep veins that embolises to the right atrium or right ventricle before reaching the pulmonary vasculature. It is almost always associated with pulmonary thromboembolism and is a medical emergency with reported mortality rates of over 40%. We present two cases of right heart thrombus in transit with pulmonary thromboembolism resulting from venous thrombosis associated with peripherally inserted central catheters that were managed with different approaches. The cases highlight that clinicians should have a low threshold to utilise imaging modalities such as computerised tomography and transthoracic echocardiography when there is an untoward change in physiological parameters among patients with peripherally inserted central catheters, particularly those with risk factors for peripherally inserted central catheter associated venous thrombosis. Furthermore, procedural optimisation surrounding peripherally inserted central catheters, such as insertion technique and choice of lumen size, is underscored. abstract_id: PUBMED:23648007 Are cuffed peripherally inserted central catheters superior to uncuffed peripherally inserted central catheters? A retrospective review in a tertiary pediatric center. Purpose: To assess the use of cuffed peripherally inserted central catheters (PICCs) compared with uncuffed PICCs in children with respect to their ability to provide access until the end of therapy. Materials And Methods: A retrospective review of PICCs inserted between January 2007 and December 2008 was conducted. Data collected from electronic records included patient age, referring service, clinical diagnosis, inserting team (pediatric interventional radiologists or neonatal intensive care unit [NICU] nurse-led PICC team), insertion site, dates of insertion and removal, reasons for removal, and need for a new catheter insertion. A separate subset analysis of the NICU population was performed. Primary outcome measured was the ability of the PICCs to provide access until the end of therapy. Results: Cuffed PICCs (n = 1,201) were significantly more likely to provide access until the end of therapy than uncuffed PICCs (n = 303) (P = .0002). Catheter removal before reaching the end of therapy with requirement of placement of a new PICC occurred in 26% (n = 311) of cuffed PICCs and 38% (n = 114) of uncuffed PICCs. Uncuffed PICCs had a significantly higher incidence of infections per 1,000 catheter days (P = .023), malposition (P = .023), and thrombus formation (P = .022). In the NICU subset analysis, cuffed PICCs had a higher chance of reaching end of therapy, but this was not statistically significant. Conclusions: In this pediatric population, cuffed PICCs were more likely to provide access until the end of therapy. Cuffed PICCs were associated with lower rates of catheter infection, malposition, and thrombosis than uncuffed PICCs. abstract_id: PUBMED:32063781 Atrial fibrillation induced by peripherally inserted central catheters. 
Peripherally inserted central catheters (PICCs), a form of central venous catheter (CVC) inserted into the cephalic or basilic veins, are most commonly used for administration of long-term antibiotics or for total parenteral nutrition. PICCs are associated with fewer complications than traditional CVCs; however, they have been implicated in accidental malpositioning, leading to both atrial and ventricular arrhythmias. We present a case of atrial fibrillation possibly triggered by migration of the tip of the PICC deep into the right atrium. Retraction of the tip resulted in resolution of the arrhythmia. abstract_id: PUBMED:34075655 A meta-analysis of the comparison of phlebitis between midline catheters and peripherally inserted central catheters in infusion therapy. Aims: To compare the risk of phlebitis between midline catheters and peripherally inserted central catheters in infusion therapy with a meta-analysis. Design: This was a systematic literature review and meta-analysis. Data Sources: Web of Science, PubMed, Scopus, Embase, Cochrane Library, ProQuest, CNKI, WanFang, VIP and SinoMed were searched from inception to May 2020. Review Methods: All studies comparing the risk of phlebitis between midline catheters and peripherally inserted central catheters were included. According to the inclusion and exclusion criteria, two authors independently assessed the literature and extracted the data. Meta-analyses were conducted to generate estimates of phlebitis risk in patients with midline catheters versus peripherally inserted central catheters, and publication bias was evaluated with RevMan 5.3. Results: A total of seven studies were collected, involving 1377 participants. The incidence of phlebitis with midline catheters and peripherally inserted central catheters was 1.52% and 3.41%, respectively. Meta-analysis showed no significant difference in the incidence of phlebitis between midline catheters and peripherally inserted central catheters. The sensitivity analysis showed that the results of this meta-analysis were stable across the included studies. No significant publication bias was detected in any of the studies. Conclusion: This study provides the first systematic assessment of the risk of phlebitis between midline catheters and peripherally inserted central catheters. The incidence of phlebitis did not differ significantly between them. There are many factors to consider when choosing vascular access devices. abstract_id: PUBMED:32356479 Peripherally inserted central catheters inserted with current best practices have low deep vein thrombosis and central line-associated bloodstream infection risk compared with centrally inserted central catheters: A contemporary meta-analysis. Background: Peripherally inserted central catheters and centrally inserted central catheters have numerous benefits but can be associated with risks. This meta-analysis compared central catheters for relevant clinical outcomes using recent studies more likely to coincide with practice guidelines. Methods: Several databases, Ovid MEDLINE, Embase, and EBM Reviews were searched for articles (2006-2018) that compared central catheters. Analyses were limited to peer-reviewed studies comparing peripherally inserted central catheters to centrally inserted central catheters for deep vein thrombosis and/or central line-associated bloodstream infections. Subgroup, sensitivity analyses, and patient-reported measures were included. Risk ratios, incidence rate ratios, and weighted event risks were reported.
Study quality assessment was conducted using Newcastle-Ottawa and Cochrane Risk of Bias scales. Results: Of 4609 screened abstracts, 31 studies were included in these meta-analyses. Across studies, peripherally inserted central catheters were protective for central line-associated bloodstream infection (incidence rate ratio = 0.52, 95% confidence interval: 0.30-0.92), with consistent results across subgroups. Peripherally inserted central catheters were associated with an increased risk of deep vein thrombosis (risk ratio = 2.08, 95% confidence interval: 1.47-2.94); however, smaller diameter and single-lumen peripherally inserted central catheters were no longer associated with increased risk. The absolute risk of deep vein thrombosis was calculated to 2.3% and 3.9% for smaller diameter peripherally inserted central catheters and centrally inserted central catheters, respectively. On average, peripherally inserted central catheter patients had 11.6 more catheter days than centrally inserted central catheter patients (p = 0.064). Patient outcomes favored peripherally inserted central catheters. Conclusion: When adhering to best practices, this study demonstrated that concerns related to peripherally inserted central catheters and deep vein thrombosis risk are minimized. Dramatic changes to clinical practice over the last 10 years have helped to address past issues with central catheters and complication risk. Given the lower rate of complications when following current guidelines, clinicians should prioritize central line choice based on patient therapeutic needs, rather than fear of complications. Future research should continue to consider contemporary literature over antiquated data, such that it recognizes the implications of best practices in modern central catheterization. abstract_id: PUBMED:37922685 Types of peripherally inserted central catheters and the risk of unplanned removals in patients: A prospective cohort study. Background: Peripherally inserted central catheters are widely used in clinical practice. Despite the many advantages of peripherally inserted central catheters, unplanned removals can occur owing to irreversible complications and may have adverse consequences for patients. However, the relationship between peripherally inserted central catheter type and the incidence of unplanned removal is unclear. Objective: To investigate the association between the type of peripherally inserted central catheter and unplanned catheter removal. Design: A multicenter prospective cohort study. Settings: Sixty hospitals located in 30 cities across seven provinces in China. Participants: Three thousand two hundred and twenty-two patients aged 18 years or older with one peripherally inserted central catheter inserted for treatment were enrolled, and 3166 (98.3 %) patients with follow-up data were included in the final analysis. Methods: The included participants had peripherally inserted central catheters that either had a proximal valve (n = 2436) or were open-ended (n = 730). A competing risk Cox regression model was used, and both univariate and multivariate analyses were conducted. A sub-distribution hazard model was applied to test the validity of the results. Results: Of the 2436 peripherally inserted central catheters with a proximal valve and 730 that were open-ended, 272 (11.2 %) and 46 (6.3 %) were removed unplanned, respectively. 
The overall incidence of unplanned removal was 10.04%; multivariate Cox regression was conducted and it was determined that unplanned removal was statistically significantly associated with the proximal valve peripherally inserted central catheter type (hazard ratio: 1.86, 95% confidence interval: 1.28 to 2.70, p = 0.001). In patients without cancer, the risk of unplanned removal of peripherally inserted central catheters with a proximal valve was 8.14 times higher than that of open-ended peripherally inserted central catheters (p < 0.001). Conclusions: In this study, peripherally inserted central catheters with a proximal valve had a higher risk of unplanned removals, especially in patients without cancer. The peripherally inserted central catheter type should be considered in advance, and clinical staff should take appropriate measures to prevent unplanned removals after insertion. abstract_id: PUBMED:36530028 Complications of implanted port catheters and peripherally inserted central catheters in chemotherapy-treated cancer patients: A meta-analysis. Background: One of the most significant advancements in nursing technology for cancer patients has been the development of implantable port catheters and peripherally inserted central venous catheters. They create an essential, dependable route for subjects to receive chemotherapy, long-term infusions and nutritional care, and provide a site for regular blood draws. Objectives: We performed a meta-analysis to evaluate the complications of implanted port catheters and peripherally inserted central catheters in chemotherapy-treated cancer patients. Material And Methods: A systematic literature search up to April 2022 was performed and a total of 11,801 articles were retrieved. Of these, 5017 concerned peripherally inserted central catheters and 6784 implanted port catheters to administer chemotherapy. Odds ratios (ORs) and mean differences (MDs) with 95% confidence intervals (95% CIs) were calculated to assess the complications of implanted port catheters and peripherally inserted central catheters in chemotherapy-treated cancer patients using dichotomous and continuous methods with random- or fixed-effects models. Results: Peripherally inserted central catheters had a significantly higher incidence of occlusion complications (OR: 5.43, 95% CI: 3.46-8.52, p < 0.001), longer durations of local infection (OR: 2.94, 95% CI: 2.17-4.00, p < 0.001), a higher incidence of catheter-related infection (OR: 2.13, 95% CI: 1.19-3.83, p = 0.01), a higher rate of malposition (OR: 6.46, 95% CI: 2.93-14.27, p < 0.001), higher rates of catheter-related thrombosis (OR: 2.71, 95% CI: 1.90-3.87, p < 0.001), a higher incidence of phlebitis complications (OR: 6.67, 95% CI: 2.94-15.11, p < 0.001), a higher incidence of accidental removal (OR: 3.38, 95% CI: 1.97-5.81, p < 0.001), and a shorter catheter lifespan (MD: -233.16, 95% CI: -449.52 to -16.80, p = 0.03) in subjects undergoing chemotherapy compared to those in whom implanted port catheters were used. Conclusion: Implantable port catheters have advantages over peripherally inserted central catheters in decreasing complications in cancer patients. The outcomes provide evidence for practitioners to select which type of central venous catheter is better for cancer chemotherapy subjects.
Ultrasonographic visualization of anatomical structures may enable continuous guidance during insertion. The aim was to investigate the feasibility of ultrasonographic real-time guidance of peripherally inserted central catheter placement using the supraclavicular fossa view. Methods: An observational quality control study was performed including 20 patients. Ultrasonography was performed via the right supraclavicular fossa using a microconvex probe identifying the junction between the right internal jugular- and the subclavian vein forming the right brachiocephalic vein. The wire guide tip was identified at the junction allowing estimation of catheter length. The catheter stiffening wire was followed in real-time into the right brachiocephalic vein towards the superior vena cava. Mispositions and the ability to redirect in real-time were detected. Final catheter tip positions were evaluated by either fluoroscopy or a chest radiograph. Results: Catheters were successfully placed in 19/20 patients. In all patients the junction and the right brachiocephalic vein was identified. Two thrombi were identified in the right brachiocephalic vein and left-sided insertions were performed. In 16 of 17 right-sided insertions, wire guide and catheter stiffening wire were visible. Of the 16 visual catheters, 15 could be followed into the right brachiocephalic vein. Real-time mispositioning was identified in eight cases and optimal redirection was successful in seven. All ultrasound-guided catheter length estimations were adequate. Conclusions: Supraclavicular ultrasonographic real-time guidance for peripherally inserted central catheter placement was feasible and enabled successful placement together with detection and redirection of mispositioned catheters without delay. abstract_id: PUBMED:32564705 The microbial biofilm composition on peripherally inserted central catheters: A comparison of polyurethane and hydrophobic catheters collected from paediatric patients. Background: Peripherally inserted central catheters are susceptible to microbial colonisation and subsequent biofilm formation, leading to central line-associated bloodstream infection, a serious peripherally inserted central catheter-related complication. Next-generation peripherally inserted central catheter biomaterials, such as hydrophobic materials (e.g. Endexo®), may reduce microbial biofilm formation or attachment, consequently reducing the potential for central line-associated bloodstream infection. Methods: Within a randomised controlled trial, culture-dependent and culture-independent methods were used to determine if the biomaterials used in traditional polyurethane peripherally inserted central catheters and hydrophobic peripherally inserted central catheters impacted microbial biofilm composition. This study also explored the impact of other clinical characteristics including central line-associated bloodstream infection, antibiotic therapy and dwell time on the microbial biofilm composition of peripherally inserted central catheters. Results: From a total of 32 patients, one peripherally inserted central catheter was determined to be colonised with Staphylococcus aureus, and on further analysis, the patient was diagnosed with central line-associated bloodstream infection. All peripherally inserted central catheters (n = 17 polyurethane vs n = 15 hydrophobic) were populated with complex microbial communities, including peripherally inserted central catheters considered non-colonised. 
The two main microbial communities observed included Staphylococcus spp., dominant on the colonised peripherally inserted central catheter, and Enterococcus, dominant on non-colonised peripherally inserted central catheters. Both the peripherally inserted central catheter biomaterial design and antibiotic therapy had no significant impact on microbial communities. However, the diversity of microbial communities significantly decreased with dwell time. Conclusion: More diverse pathogens were present on the colonised peripherally inserted central catheter collected from the patient with central line-associated bloodstream infection. Microbial biofilm composition did not appear to be affected by the design of peripherally inserted central catheter biomaterials or antibiotic therapy. However, the diversity of the microbial communities appeared to decrease with dwell time. abstract_id: PUBMED:30983490 Prevalence of idle peripherally inserted central catheters in adult patients: A multicenter cross-sectional study. Background: A growing number of studies have examined idle vascular access devices, but data on idle peripherally inserted central catheters remain scant. We aimed to assess the prevalence and risks of idle peripherally inserted central catheters in adult patients. Methods: A multicenter cross-sectional observational study was performed between April 2018 and July 2018. Patient demographics and peripherally inserted central catheter-related information were abstracted using a site questionnaire, through direct inquiry and medical record review immediately after the peripherally inserted central catheters were removed. Results: Three hundred and fifty-eight patients with peripherally inserted central catheter episodes who met the inclusion criteria were studied. Of the 58,000 total catheter-days recorded, 5311 (9.2%) were considered idle based on our study criteria. Two hundred and fifty-five (71.2%) patients had at least 1 idle catheter-day, with a mean duration of 14.84 (19.31) idle days. The incidence of catheter-related complications in patients with idle peripherally inserted central catheters was lower than that in patients without idle peripherally inserted central catheters, though the difference did not reach statistical significance (odds ratio = 0.635, 95% confidence interval = 0.367-1.099, p = 0.103). Conclusion: Idle peripherally inserted central catheters were common in adult patients, but the low risks of catheter-related complications may justify maintaining a peripherally inserted central catheter for further observation when necessary. Answer: Yes, cuffed peripherally inserted central catheters (PICCs) appear to be superior to uncuffed PICCs. A retrospective review in a tertiary pediatric center found that cuffed PICCs were significantly more likely to provide access until the end of therapy compared to uncuffed PICCs. Additionally, cuffed PICCs were associated with lower rates of catheter infection, malposition, and thrombosis than uncuffed PICCs (PUBMED:23648007). This suggests that cuffed PICCs may offer better performance and fewer complications, making them a preferable choice in certain clinical settings.
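Several of the abstracts in this entry summarise group comparisons as odds ratios with 95% confidence intervals (for example, the idle-catheter complication comparison above: OR 0.635, 95% CI 0.367-1.099). As a rough editorial illustration of how such an estimate is usually derived from a 2x2 table of counts, the Python sketch below applies the standard log-odds (Woolf) method; the counts are invented placeholders, not data from any cited study.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and approximate 95% CI (log/Woolf method) for a 2x2 table:
                 event   no event
    group 1        a        b
    group 2        c        d
    """
    or_point = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lower = math.exp(math.log(or_point) - z * se_log_or)
    upper = math.exp(math.log(or_point) + z * se_log_or)
    return or_point, lower, upper

# Hypothetical counts: complications in patients with vs without idle catheter-days.
or_point, lower, upper = odds_ratio_ci(a=20, b=235, c=13, d=90)
print(f"OR = {or_point:.3f}, 95% CI {lower:.3f} to {upper:.3f}")
```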
Instruction: Does a history of childhood sexual abuse affect sexual outcomes in breast cancer survivors? Abstracts: abstract_id: PUBMED:15718324 Does a history of childhood sexual abuse affect sexual outcomes in breast cancer survivors? Purpose: Little is known about a history of childhood sexual abuse (CSA) in breast cancer survivors and its relationship to sexual functioning after cancer. As part of a larger survey study examining sexuality and intimacy in breast cancer survivors, we conducted in-person interviews with a subsample of participants. Methods: A total of 147 women in Los Angeles, CA, and Washington, DC, completed a structured interview that addressed sexual socialization and a history of sexual abuse. Trained female interviewers conducted the interviews. Descriptive statistics and regression analyses were used to examine the prevalence of CSA, and its potential impact on sexual health and functioning. Results: One in three women reported at least one CSA incident. Among women who had experienced CSA, 71% reported a single incident, and 22% reported a penetrative form of sexual contact. In multivariate regression analyses examining physical and psychological aspects of sexuality and body image, CSA was not a significant predictor of physical discomfort. However, a history of penetrative CSA was a significant predictor of psychological discomfort (P = .02). Conclusion: The prevalence of CSA in this sample was similar to the general population literature on this topic. In this small sample, a past history of CSA did not contribute significantly to the physical discomforts associated with sexual intimacy after breast cancer; however, our findings suggest that a past history of penetrative CSA is associated with increased psychological discomfort, and may warrant additional examination in future research. abstract_id: PUBMED:28818733 Effects of childhood trauma exposure and cortisol levels on cognitive functioning among breast cancer survivors. Cognitive functioning difficulties in breast cancer patients receiving chemotherapy are common, but not all women experience these impairments. Exposure to childhood trauma may impair cognitive functioning following chemotherapy, and these impairments may be mediated by dysregulation of hypothalamic-pituitary-adrenal (HPA) axis function and cortisol slope. This study evaluated the association between childhood trauma exposure, cortisol, and cognition in a sample of breast cancer survivors. 56 women completed measures of trauma exposure (the Traumatic Events Survey), salivary cortisol, and self-reported cognitive functioning (the Functional Assessment of Cancer Therapy - Cognitive). We examined correlations between childhood trauma exposure and cognitive functioning, then used linear regression to control for factors associated with cognition (age, education, time since chemotherapy, depression, anxiety, and insomnia), and the MacArthur approach to test whether cortisol levels mediated the relationship between trauma and cognitive functioning. 57.1% of the sample had experienced at least one traumatic event in childhood, with 19.6% of the sample witnessing a serious injury, 17.9% experiencing physical abuse, and 14.3% experiencing sexual abuse. Childhood trauma exposure and cognitive functioning were moderately associated (r=-0.29). This association remained even when controlling for other factors associated with cognition; the final model explained 47% of the variance in cognitive functioning.
The association between childhood trauma and cognitive functioning was mediated by steeper cortisol slope (partial r=0.35, p=0.02). Childhood trauma exposure is associated with self-reported cognitive functioning among breast cancer survivors and is mediated by cortisol dysregulation. Trauma should be considered, among other factors, in programs aiming to address cognition in this population. abstract_id: PUBMED:26246075 A qualitative cancer screening study with childhood sexual abuse survivors: experiences, perspectives and compassionate care. Objective: The childhood sexual abuse (CSA) survivor population is substantial and survivors have been identified as part of the population who were under-screened or never-screened for breast, cervical and colon cancer. Our objective was to learn CSA survivor perspectives on, and experiences with, breast, cervical and colon cancer screening with the intention of generating recommendations to help healthcare providers improve cancer screening participation. Design: A pragmatic constructivist qualitative study involving individual, semistructured, in-depth interviews was conducted in January 2014. Thematic analysis was used to describe CSA survivor perspectives on cancer screening and identify potential facilitators for screening. Participants: A diverse purposive sample of adult female CSA survivors was recruited. The inclusion criteria were: being a CSA survivor, being in a stable living situation, where stable meant able to meet one's financial needs independently, able to maintain supportive relationships, having participated in therapy to recover from past abuse, and living in a safe environment. 12 survivors were interviewed whose ages ranged from the early 40s to mid-70s. Descriptive saturation was reached after 10 interviews. Setting: Interviews were conducted over the phone or Internet. CSA survivors were primarily from urban and rural Ontario, but some resided elsewhere in Canada and the USA. Results: The core concept that emerged was that compassionate care at every level of the healthcare experience could improve cancer screening participation. Main themes included: desire for holistic care; unique needs of patients with dissociative identity disorder; the patient-healthcare provider relationship; appointment interactions; the cancer screening environment; and provider assumptions about patients. Conclusions: Compassionate care can be delivered by: building a relationship; practising respect; focusing attention on the patient; not rushing the appointment; keeping the environment positive and comfortable; maintaining patient dignity; sharing control whenever possible; explaining procedures; and using laughter to reduce power imbalance through shared humanity. abstract_id: PUBMED:32247915 Childhood maltreatment and monocyte gene expression among women with breast cancer. Background: Childhood adversity is reliably associated with immune alterations in adulthood, including increases in inflammatory processes. However, relatively few studies have investigated these associations in clinical populations such as cancer patients who are at risk for negative immune-related health outcomes. The current study tested the hypothesis that childhood maltreatment would be associated with alterations in immune-related gene expression in monocytes from women with breast cancer. Methods: Women (n = 86) were recruited after diagnosis with early-stage breast cancer but before onset of adjuvant therapy with radiation, chemotherapy, and/or endocrine therapy. 
Participants completed questionnaires to assess childhood maltreatment (Childhood Trauma Questionnaire; CTQ) and depressive symptoms (Center for Epidemiologic Studies Depression Scale; CES-D) and provided blood samples for immune assessment. CD14+ monocytes were isolated for RNA extraction and gene expression analyses. Results: Based on responses to the CTQ, 28% of participants were classified as experiencing physical and/or emotional abuse or neglect and 7% as experiencing sexual abuse. Genome-wide transcriptional profiling of isolated monocytes identified 202 gene transcripts that differed in average expression level by > 25% over the range of maltreatment exposure. Bioinformatics analyses of those gene transcripts identified a significantly greater prevalence of NF-κB-binding motifs within the promoters of up-regulated vs. down-regulated genes (p = .028) in women exposed to childhood maltreatment, indicating greater inflammatory signaling. Parallel analyses of Type I interferon signaling also indicated greater prevalence of Interferon Response Factor (IRF)-related binding sites in women with a childhood maltreatment history (p = .020). Results remained significant in analyses controlling for current depression; however, NF-κB and IRF-related gene expression was higher in women with both maltreatment exposure and current depression. Conclusions: In women recently diagnosed with early-stage breast cancer, childhood maltreatment was associated with increases in the classical NF-kB-related pro-inflammatory signaling pathway and with increases in the Type I interferon system. These results suggest a broad pattern of chronic immunologic activation in breast cancer patients with a history of childhood maltreatment, particularly those who are currently experiencing clinically significant depressive symptoms. These findings have implications for the long-term health and well-being of maltreatment exposed breast cancer patients. abstract_id: PUBMED:22946631 Association of intimate partner violence and childhood sexual abuse with cancer-related well-being in women. Background: Limited evidence suggests that intimate partner violence (IPV) may be associated with poorer cancer outcomes. We hypothesized that timing and type of IPV as well as childhood sexual abuse (CSA) may negatively affect depression, perceived stress, and cancer-related well-being. Methods: This was a cross-sectional study of women diagnosed with either breast, cervical, or colorectal cancer in the prior 12 months included in the Kentucky Cancer Registry. Consenting women were interviewed by phone (n=553). Multivariate analysis of covariance (MANCOVA) was used to determine the association between IPV (37% lifetime prevalence) and type, timing, and the range of correlated cancer-related well-being indicators, adjusting for confounding factors. Results: IPV (p=0.002) and CSA (p=0.03) were associated with the six correlated well-being indicators. Specifically, lifetime and current IPV were associated with lower Functional Assessment of Cancer Therapy-Breast Cancer (FACT-B) (p=0.006) and Functional Assessment of Chronic Illness Therapy-Spiritual Well-being Scale (FACIT-SP) (p=0.03) scores, higher perceived stress at diagnosis (p=0.006), and depressive symptom scores at diagnosis (p<0.0001), whereas CSA was associated with lower FACT-B (p=0.02), increased number of comorbid conditions (p=0.03), and higher current stress levels (p=0.04). 
Current and past IPV, as well as psychologic abuse, were associated with poorer well-being among women with a recent cancer diagnosis. Conclusions: Our results provide evidence that both IPV and CSA negatively influence cancer-related well-being indicators. These data suggest that identification of lifetime IPV and other stressors may provide information that healthcare providers can use to best support and potentially improve the well-being of female cancer patients. abstract_id: PUBMED:20400179 Traumatic stress symptoms and breast cancer: the role of childhood abuse. Objective: The present study investigated relations between reported childhood abuse and recent traumatic stress symptoms in women newly diagnosed with breast cancer (n=330). Methods: As part of a larger ongoing study, patients from eight public and private hospitals were referred by their physicians and completed the Childhood Trauma Questionnaire (CTQ), and the Impact of Events Scale-breast cancer (IES), which measured breast cancer-related intrusive and avoidant symptoms. Results: Emotional abuse, physical abuse, and sexual abuse were correlated with intrusive symptoms. Cancer-related avoidant symptoms approached significance in their relation to emotional and sexual abuse. Multivariate analysis, controlling for age and time since diagnosis, revealed that childhood emotional abuse was an independent predictor of breast cancer-related intrusive symptoms, but that childhood physical abuse and sexual abuse were not significant predictors. Conclusions: Childhood emotional, physical, and sexual abuse were associated with breast cancer-related intrusive symptoms. Emotional abuse uniquely predicted intrusive symptoms after controlling for other predictors. Results suggest that a cancer diagnosis may trigger cognitive and emotional responses that relate to patients' prior trauma experiences. Practice Implications: Physicians and psychologists treating women with breast cancer should be aware that a history of childhood abuse may exacerbate patients' cancer-related intrusive symptoms. Interventions for women affected by both childhood abuse and breast cancer may be most effective when they address both stressors and associated emotional responses. Findings highlight the importance of additional research to explore links between prior trauma and distress following a cancer diagnosis stress. abstract_id: PUBMED:37232847 Loneliness in Breast Cancer Patients with Early Life Adversity: An Investigation of the Effects of Childhood Trauma and Self-Regulation. Childhood trauma may be prevalent in the general population, and the psychosocial treatment of patients with cancer may require consideration of the effects of such early adversity on the healing and recovery process. In this study, we investigated the long-term effects of childhood trauma in 133 women diagnosed with breast cancer (mean age 51, SD = 9) who had experienced physical, sexual, or emotional abuse or neglect. We examined their experience of loneliness and its associations with the severity of childhood trauma, ambivalence about emotional expression, and changes in self-concept during the cancer experience. In total, 29% reported experiencing physical or sexual abuse, and 86% reported neglect or emotional abuse. In addition, 35% of the sample reported loneliness of moderately high severity. Loneliness was directly influenced by the severity of childhood trauma and was directly and indirectly influenced by discrepancies in self-concept and emotional ambivalence. 
In conclusion, we found that childhood trauma was common in breast cancer patients, with 42% of female patients reporting childhood trauma, and that these early experiences continued to exert negative effects on social connection during the illness trajectory. Assessment of childhood adversity may be recommended as part of routine oncology care, and trauma-informed treatment approaches may improve the healing process in patients with breast cancer and a history of childhood maltreatment. abstract_id: PUBMED:21054923 Sexual abuse in childhood and postoperative depression in women with breast cancer who opt for immediate reconstruction after mastectomy. Introduction: Breast reconstruction is routinely offered to women who undergo mastectomy for breast cancer. However, patient-reported outcomes are mixed. Child abuse has enduring effects on adults' well-being and body image. As part of a study into damaging effects of abuse on adjustment to breast cancer, we examined: (i) whether women with history of abuse would be more likely than other women to opt for reconstruction; and (ii) whether mood problems in women opting for reconstruction can be explained by greater prevalence of abuse. Patients And Methods: We recruited 355 women within 2-4 days after surgery for primary breast cancer; 104 had mastectomy alone and 29 opted for reconstruction. Using standardised questionnaires, women self-reported emotional distress and recollections of childhood sexual abuse. Self-report of distress was repeated 12 months later. Results: Women who had reconstruction were younger than those who did not. Controlling for this, they reported greater prevalence of abuse and more distress than those having mastectomy alone. They were also more depressed postoperatively, and this effect remained significant after controlling for abuse. Conclusions: One interpretation of these findings is that history of abuse influences women's decisions about responding to the threat of mastectomy, but it is premature to draw inferences for practice until the findings are replicated. If they are replicated, it will be important to recognise increased vulnerability of some patients who choose reconstruction. Studying the characteristics and needs of women who opt for immediate reconstruction and examining the implications for women's adjustment should be a priority for research. abstract_id: PUBMED:27940981 The Association Between Adverse Childhood Experiences and Risk of Cancer in Adulthood: A Systematic Review of the Literature. Context: Adverse childhood experiences (ACEs) can affect health and well-being across the life course. Objective: This systematic review summarizes the literature on associations between ACEs and risk of cancer in adulthood. Data Sources: We searched PubMed to identify relevant publications published on or before May 31, 2015. Study Selection: We included original research quantifying the association between ACEs and adult cancer incidence. Case reports and reviews were excluded. Data Abstraction: Two reviewers independently abstracted and summarized key information (eg, ACE type, cancer type, risk estimates) from included studies and resolved all discrepancies. Results: Twelve studies were included in the review. In studies in which ACE summary scores were calculated, significant associations were observed between the scores and an increased risk of cancer in adulthood. 
Of the different types of ACEs examined, physical and psychological abuse victimization were associated with risk of any cancer in 3 and 2 studies, respectively. Two studies also reported significant associations with regard to sexual abuse victimization (1 for cervical cancer and 1 for any cancer). However, 2 other studies reported no significant associations between childhood sexual or physical abuse and incidence of cervical or breast cancer. Limitations: Because of heterogeneity across studies, we were unable to compute a summary effect estimate. Conclusions: These findings suggest that childhood adversity in various forms may increase a person's cancer risk. Further research is needed to understand the mechanisms driving this relationship and to identify opportunities to prevent and mitigate the deleterious effects of early adversity on long-term health. abstract_id: PUBMED:36126825 Trauma-Informed Radiation Therapy: Implementation and Evaluation of a Sensitive Practice Tool for Female Patients Undergoing Radiotherapy for Breast Cancer. Purpose: For childhood sexual abuse survivors, cancer care can be retraumatizing because of perceived similarities to the original sexual violence. The authors' group designed and implemented a sensitive practice tool (SPT) and evaluated the feasibility of the tool in female patients receiving breast radiation. Methods: The SPT was offered as a "universal precaution" to patients with breast cancer as part of standard clinical care. Patients were given the SPT, which included an instructional video about radiotherapy and a survey about triggers and preferences. The survey results were provided to radiation therapists and used to personalize patients' care. A retrospective chart review and quality improvement survey of therapists were performed. Results: Of 739 eligible patients, 493 (66.7%) completed the SPT from November 2013 to June 2019. Among respondents, 281 (57.0%) reported potential triggers, 395 (80.1%) reported distress management preferences, and 59 (12.0%) requested psychosocial referrals. Mean patient satisfaction was high, and a majority of patients were likely to recommend the SPT to other patients (85.3%). Among radiation therapists (n = 13), 100% reported that the SPT made it easier to customize or individualize patient care. Trauma disclosure was not significantly associated with increased frequency of trigger endorsement (P = .07) but was associated with increased endorsement of distress management preferences (P = .02) and psychosocial referral requests (P < .001). Conclusions: The reported experience with the SPT in the breast radiotherapy setting demonstrated that potential triggers and distress management preferences among patients are common and that patient satisfaction with the SPT is high, yielding clinically meaningful and actionable sensitive practice information. Answer: Yes, a history of childhood sexual abuse (CSA) does affect sexual outcomes in breast cancer survivors. One study found that among breast cancer survivors, one in three women reported at least one incident of CSA. While a history of CSA did not significantly contribute to physical discomforts associated with sexual intimacy after breast cancer, it was associated with increased psychological discomfort, particularly when the CSA involved penetrative sexual contact (PUBMED:15718324). 
Another study indicated that childhood trauma exposure, including CSA, is associated with self-reported cognitive functioning among breast cancer survivors and is mediated by cortisol dysregulation (PUBMED:28818733). Additionally, CSA survivors have been identified as part of the population who were under-screened or never-screened for breast, cervical, and colon cancer, suggesting that CSA may impact health behaviors related to cancer screening (PUBMED:26246075). Moreover, childhood maltreatment, including CSA, has been associated with alterations in immune-related gene expression in monocytes from women with breast cancer, suggesting a pattern of chronic immunologic activation in these patients (PUBMED:32247915). CSA, along with intimate partner violence, has also been linked to poorer cancer-related well-being indicators such as increased depression, perceived stress, and lower quality of life scores (PUBMED:22946631). Furthermore, childhood emotional, physical, and sexual abuse were associated with breast cancer-related intrusive symptoms, with emotional abuse uniquely predicting such symptoms after controlling for other predictors (PUBMED:20400179). In addition, a study found that 29% of breast cancer patients reported experiencing physical or sexual abuse, and 86% reported neglect or emotional abuse, with 35% of the sample reporting loneliness of moderately high severity. Loneliness was directly influenced by the severity of childhood trauma (PUBMED:37232847). Lastly, women with a history of abuse were more likely to opt for breast reconstruction after mastectomy and were more likely to experience postoperative depression, suggesting that CSA may influence decisions and emotional outcomes related to breast cancer treatment (PUBMED:21054923). In summary, a history of CSA is associated with various negative sexual and psychological outcomes in breast cancer survivors, including increased psychological discomfort during sexual intimacy, cognitive functioning issues, immune system alterations, poorer well-being, and increased loneliness.
Instruction: Is long-term warfarin therapy necessary in Chinese patients with atrial fibrillation after bioprosthetic mitral valve replacement and left atrial appendage obliteration? Abstracts: abstract_id: PUBMED:25881216 Is long-term warfarin therapy necessary in Chinese patients with atrial fibrillation after bioprosthetic mitral valve replacement and left atrial appendage obliteration? Background: Long-term warfarin therapy has been used to decrease thromboembolic events in patients with atrial fibrillation (AF) following bioprosthetic mitral valve replacement (BMVR) and left atrial appendage obliteration (LAAO). A retrospective study was conducted to investigate the efficacy of long-term warfarin or aspirin therapy in patients with AF after BMVR and LAAO. Methods: A total of 215 patients with persistent AF were given anticoagulation therapy with warfarin for the first 3 months after BMVR and LAAO, continuing warfarin or aspirin therapy according to the surgeon's preference. A yearly follow-up with patients was performed by telephone or mail for postoperative condition, cerebrovascular, and bleeding events. Results: Seven patients died in the first 3 months after surgery, including 6 patients from heart failure and 1 patient from sudden death. The remaining 208 patients were divided into two groups: warfarin group (n = 84 patients) and aspirin group (n = 124). The patients in the warfarin group were older than those in the aspirin group and had a lower postoperative left ventricular ejection fraction. Other baseline and operative characteristics were similar. The two groups had similar incidence of thromboembolic events (9.5% versus 8.9%, P = .873) and bleeding events(7.1% versus 3.2%, P = .207). Each group had one intracranial hemorrhage. Eleven patients expired within three months after surgery, 4(4.8%) in the warfarin group and 10(8.1%)in the aspirin group (P = .411 by Fisher exact test). Cumulative survival was not significantly different in the two groups by Kaplan-Meier analysis (P = .55, log-rank test). Conclusions: At the current time in China, long-term warfarin or aspirin therapy may have no significantly different impact on long-term prognosis after 3 months anticoagulation with warfarin in patients with AF undergoing BMVR and LAAO. abstract_id: PUBMED:27986559 Left atrial appendage in rheumatic mitral valve disease: The main source of embolism in atrial fibrillation Objective: To demonstrate that surgical removal of the left atrial appendage in patients with rheumatic mitral valve disease and long standing persistent atrial fibrillation decreases the possibility of stroke. This also removes the need for long-term oral anticoagulation after surgery. Method: A descriptive, prospective, observational study was conducted on 27 adult patients with rheumatic mitral valve disease and long standing persistent atrial fibrillation, who had undergone mitral valve surgery and surgical removal of the left atrial appendage. Oral anticoagulation was stopped in the third month after surgery. The end-point was the absence of embolic stroke. An assessment was also made of postoperative embolism formation in the left atrium using transthoracic echocardiography. Results: None of the patients showed embolic stroke after the third post-operative month. Only one patient exhibited transient ischaemic attack on warfarin therapy within the three postoperative months. Left atrial thrombi were also found in 11 (40.7%) cases during surgery. 
Of these, 6 (54.5%) had had embolic stroke, with no statistical significance (P=.703). Conclusions: This study suggests there might be signs that the left atrial appendage may be the main source of emboli in rheumatic mitral valve disease, and its resection could eliminate the risk of stroke in patients with rheumatic mitral valve disease and long-standing persistent atrial fibrillation. abstract_id: PUBMED:38097932 Left atrial Thrombus formation after discontinuation of anticoagulation in patient with severe bioprosthetic mitral stenosis. Background: Mitral valve stenosis can be a highly symptomatic condition with significant complications if left untreated. In such cases, mitral valve replacement with a bioprosthetic or mechanical valve may be a viable solution to prevent progressive disease. Current guidelines do not recommend continued anticoagulation beyond 6 months for patients who have undergone bioprosthetic valve replacement without a separate indication for anticoagulation. With this case discussion we aim to 1) Review the current indications for anticoagulation for bioprosthetic mitral valves in patients without atrial fibrillation and 2) Discuss the constellation of comorbidities that may affect the decision to begin anticoagulation therapy. Case Presentation: We present a case describing a 55-year-old male with end-stage renal disease, coronary artery disease with coronary artery bypass graft surgery, and bioprosthetic mitral valve replacement 2 years prior with rapid degeneration of the replaced valve and on warfarin without a clear indication for anticoagulation. The patient was admitted for symptomatic, severe mitral stenosis and consideration of transcatheter mitral valve-in-valve replacement. During hospital admission, warfarin was discontinued and replaced with prophylactic anticoagulation. However, 8 days after warfarin cessation an intraoperative transesophageal echocardiography revealed a newly developed large left atrial thrombus leading to cancellation of the planned operation. Conclusions: This patient developed a left atrial thrombus after discontinuing warfarin in the setting of rapidly deteriorating bioprosthetic valve stenosis and vascular comorbidities. The decision to discontinue warfarin was made in concordance with current guidelines, which do not indicate systemic anticoagulation post 3-6 months after bioprosthetic valve replacement without separate indication for anticoagulation. This case identifies the need to investigate rebound hypercoagulability and further risk stratify comorbidities which may independently increase the risk of clot formation in the setting of severe mitral valve stenosis. abstract_id: PUBMED:29588790 Transcatheter Aortic Valve Replacement and Left Atrial Appendage Occlusion - A Stitch in Time? Many patients have now been able to receive transcatheter aortic valve replacement (TAVR) therapy for severe aortic stenosis. These patients have atrial fibrillation and are placed on warfarin for stroke prophylaxis. The opportunity for treatment with left atrial appendage occlusion (LAAO) in place of warfarin for this population exists, especially for those with increased bleeding risk. This paper discusses the prevalence and aetiology of stroke in patients presenting for TAVR (with a focus on the risk from chronic and acute atrial fibrillation) and also the benefit of LAAO closure in this population. 
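Both the warfarin-versus-aspirin mortality comparison above (P = .411 by Fisher exact test) and the stroke-by-thrombus comparison (P = .703) rest on exact tests of small 2x2 tables. The sketch below shows how such a test is run in Python with SciPy, using the group sizes and event counts quoted in the warfarin abstract (4 of 84 versus 10 of 124); the printed p-value is only illustrative and may not exactly match the published figure, which depends on the software and test conventions used.

```python
from scipy.stats import fisher_exact

# 2x2 table of [events, non-events] per group, built from the counts quoted above:
# warfarin 4 deaths / 84 patients, aspirin 10 deaths / 124 patients.
table = [[4, 80],
         [10, 114]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"Fisher exact test: OR = {odds_ratio:.2f}, two-sided p = {p_value:.3f}")
```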
abstract_id: PUBMED:24908619 Combined percutaneous balloon mitral valvuloplasty and left atrial appendage occlusion device implantation for rheumatic mitral stenosis and atrial fibrillation. Rheumatic heart disease is a common cause of cardiovascular morbidity and mortality worldwide, mostly in developing countries. Mitral stenosis and atrial fibrillation often coexist, related to both structural and inflammatory changes of the mitral valve and left atrium. Both predispose to left atrial thrombus formation, commonly involving the left atrial appendage. Thromboembolism can occur, with devastating consequences. We report the case of a 62 year old woman with rheumatic heart disease resulting in mitral stenosis and atrial fibrillation. Previous treatment with warfarin resulted in life-threatening gastrointestinal bleeding and she refused further anticoagulant therapy. A combined procedure was performed, including percutaneous balloon mitral valvuloplasty and left atrial appendage occlusion device implantation with the Atritech® Watchman® device. No thromboembolic or bleeding complications were encountered at one year follow-up. Long-term follow-up in a cohort of patients will be required to evaluate the safety and efficacy of this strategy. abstract_id: PUBMED:16341302 Natural course of left atrial thrombi after bioprosthetic mitral valve replacement in a patient with heparin-induced thrombocytopenia. Left atrial (LA) clots involving prosthetic mitral valves are unusual complications of heparin-induced thrombocytopenia (HIT), and little is known about their natural history. A 73-year-old woman underwent bioprosthetic mitral valve replacement, which was complicated by atrial fibrillation and LA thrombus due to HIT. She was treated medically and followed up for 18 months. The present case provides unique insights into the natural history of HIT-related LA thrombus involving a bioprosthetic valve. It highlights the importance of having a high index of suspicion for this condition and avoiding the use of warfarin early in the course of the condition. abstract_id: PUBMED:17286567 Left atrial appendage exclusion and the risk of thromboembolic events following mitral valve surgery. Objectives: We aimed to evaluate left atrial appendage (LAA) exclusion in patients undergoing mitral valve surgery with respect to thromboembolic events. Background: LAA is the predominant source of emboli in patients with atrial fibrillation. Prophylactic LAA exclusion at the time of heart surgery has been recommended to reduce the risk of future thromboembolism. Methods: An observational cohort of 136 patients undergoing LAA exclusion during mitral valve surgery was identified between May 1993 and November 1998 at our institution. Results: During a mean follow-up of 3.6 +/- 1.3 years, there were 14 (12.3%) thromboembolic events. Compared with patients who received warfarin upon hospital discharge, there were more thromboembolic events in patients not prescribed warfarin upon hospital discharge (n = 7/67, 10% vs n = 6/40, 15%, respectively). The warfarin status was not known for one patient. The majority of thromboembolic events (n = 10/14, 71%) occurred in those who underwent mitral valve repair. Conclusion: In this observational study, patients who undergo LAA exclusion during mitral valve surgery to reduce the risk of thromboembolism have a significant incidence of thromboembolic events, especially when warfarin therapy is not prescribed upon hospital discharge. 
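The first abstract in this entry compares cumulative survival between the warfarin and aspirin groups with Kaplan-Meier analysis and a log-rank test (P = .55). A minimal sketch of that workflow is shown below using the third-party lifelines library and simulated follow-up times; the data are synthetic placeholders under assumed event rates, not patient data from the cited study.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)

# Simulated follow-up times (months) and event indicators for two groups.
t_warfarin = rng.exponential(scale=60, size=84)
e_warfarin = rng.random(84) < 0.25
t_aspirin = rng.exponential(scale=55, size=124)
e_aspirin = rng.random(124) < 0.30

kmf = KaplanMeierFitter()
kmf.fit(t_warfarin, event_observed=e_warfarin, label="warfarin")
print(f"Estimated S(36 months), warfarin group: {kmf.predict(36):.2f}")

result = logrank_test(t_warfarin, t_aspirin,
                      event_observed_A=e_warfarin, event_observed_B=e_aspirin)
print(f"Log-rank test p-value: {result.p_value:.3f}")
```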
abstract_id: PUBMED:31520437 Left atrial appendage closure: A therapy uniquely suited for specific populations of patients with atrial fibrillation. Atrial fibrillation (AF) is the most common clinically relevant arrhythmia and confers a fivefold increased risk for stroke. Cardioembolic stroke secondary to AF is a devastating event, but is largely preventable with appropriate oral anticoagulation (OAC). The PROTECT and PREVAIL trials demonstrated that the WATCHMAN left atrial appendage closure (LAAC) device in combination with short-term warfarin therapy is noninferior to long-term warfarin with respect to a composite endpoint of stroke, cardiovascular death, and systemic embolism. Importantly, the WATCHMAN confers a significant reduction in life-threatening bleeding compared to OAC. Although direct-acting oral anticoagulants (DOACs) are superior to warfarin in eligible patients, several important AF populations exist in whom left atrial appendage (LAA) closure may be preferable to DOAC. Populations warranting strong consideration of LAAC include patients with contraindications to DOAC, end-stage renal disease, prior intracranial hemorrhage, recurrent gastrointestinal bleeding, and patients undergoing transcatheter aortic valve replacement or left atrial electrical isolation. Device-related thrombosis is an important complication of LAAC, and whether DOAC may be preferable to warfarin for prevention and treatment of this complication remains unexplored. Prospective clinical trials comparing DOAC to LAAC in these unique populations are either ongoing or needed. abstract_id: PUBMED:8572814 Appendage obliteration to reduce stroke in cardiac surgical patients with atrial fibrillation. Background: Left atrial appendage obliteration was historically ineffective for the prevention of postoperative stroke in patients with rheumatic atrial fibrillation who underwent operative mitral valvotomy. It is, however, a routine part of modern "curative" operations for nonrheumatic atrial fibrillation, such as the maze and corridor procedures. Methods: To assess the potential of left atrial appendage obliteration to prevent stroke in nonrheumatic atrial fibrillation patients, we reviewed previous reports that identified the etiology of atrial fibrillation and evaluated the presence and location of left atrial thrombus by transesophageal echocardiography, autopsy, or operation. Results: Twenty-three separate studies were reviewed, and 446 of 3,504 (13%) rheumatic atrial fibrillation patients, and 222 of 1,288 (17%) nonrheumatic atrial fibrillation patients had a documented left atrial thrombus. Anticoagulation status was variable and not controlled for. Thrombi were localized to, or were present in, the left atrial appendage and extended into the left atrial cavity in 254 of 446 (57%) patients with rheumatic atrial fibrillation. In contrast, 201 of 222 (91%) nonrheumatic atrial fibrillation-related left atrial thrombi were isolated to, or originated in, the left atrial appendage (p < 0.0001). Conclusions: These data suggest that left atrial appendage obliteration is a strategy of potential value for stroke prophylaxis in nonrheumatic atrial fibrillation. abstract_id: PUBMED:38188314 Cardioembolism after thoracoscopic left atrial appendage clipping in a patient on oral anticoagulation therapy. Cardioembolism associated with atrial fibrillation is a major cause of ischemic stroke. Left atrial appendage occlusion in atrial fibrillation patients undergoing cardiac surgery reduces the risk of postoperative stroke.
A 78-year-old man with a history of atrial fibrillation and severe mitral regurgitation underwent thoracoscopic mitral valve repair with left atrial appendage clipping and the cryo-maze procedure 4 years previously. He was taking a direct oral anticoagulant for stroke prevention because his atrial fibrillation had recurred. He presented with acute onset disturbed consciousness, omnidirectional gaze palsy, left facial palsy, severe dysarthria, bilateral limb ataxia, and sensory disturbance. National Institutes of Health Stroke Scale score was 16. Although non-contrast computed tomography showed no early ischemic changes, computed tomography angiography revealed occlusion of the basilar artery. Intravenous thrombolysis was performed, which resulted in recanalization. Transesophageal echocardiography showed left atrial spontaneous echo contrast and thrombus in the left atrial appendage. Contrast-enhanced chest computed tomography confirmed incomplete left atrial appendage occlusion. Cardioembolic stroke was diagnosed, and warfarin was initiated. Cardioembolism may occur after thoracoscopic left atrial appendage clipping despite direct oral anticoagulant therapy, particularly if appendage occlusion is incomplete. Occlusion status should be evaluated after thoracoscopic clipping. Learning Objective: To illustrate, incomplete left atrial appendage closure may increase the risk of ischemic stroke even after thoracoscopic left atrial appendage clipping is performed to prevent embolism. Answer: The necessity of long-term warfarin therapy in Chinese patients with atrial fibrillation after bioprosthetic mitral valve replacement and left atrial appendage obliteration is not clearly established. A retrospective study conducted in China found that long-term warfarin or aspirin therapy may have no significantly different impact on long-term prognosis after 3 months of anticoagulation with warfarin in patients with atrial fibrillation undergoing bioprosthetic mitral valve replacement and left atrial appendage obliteration (PUBMED:25881216). This suggests that long-term warfarin therapy might not be necessary in this specific patient population. Additionally, another study on patients with rheumatic mitral valve disease and long-standing persistent atrial fibrillation who underwent mitral valve surgery and surgical removal of the left atrial appendage showed that none of the patients experienced embolic stroke after the third post-operative month when oral anticoagulation was stopped (PUBMED:27986559). This further supports the idea that long-term warfarin therapy may not be essential after left atrial appendage obliteration in patients with atrial fibrillation. However, it is important to note that individual patient risk factors and comorbidities can influence the decision to continue anticoagulation therapy. For instance, a case report described a patient who developed a left atrial thrombus after discontinuing warfarin in the setting of rapidly deteriorating bioprosthetic valve stenosis and vascular comorbidities (PUBMED:38097932). This highlights the need for careful risk stratification and consideration of comorbidities that may independently increase the risk of clot formation. In conclusion, while some evidence suggests that long-term warfarin therapy may not be necessary for all Chinese patients with atrial fibrillation after bioprosthetic mitral valve replacement and left atrial appendage obliteration, decisions should be individualized based on patient-specific factors and risks.
Instruction: Can skin cancer prevention and early detection be improved via mobile phone text messaging? Abstracts: abstract_id: PUBMED:27374946 Can skin cancer prevention be improved through mobile technology interventions? A systematic review. Objective: Print-based health promotion interventions are being phased out to bring forth more appealing and assessable new technology applications. This review aimed to evaluate the current literature on the use of mobile text messaging and similar electronic technology interventions in the area of skin cancer prevention. Method: A search of studies guided by Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) was conducted on mobile technology interventions for improving skin cancer prevention in the electronic databases PubMed, MEDLINE, Cumulative Index to Nursing and Allied Health Literature (CINAHL), and PsycINFO. Results: Overall, 136 articles were screened for eligibility between 2001 (earliest relevant article found) and November 2015. Eight studies fulfilled the inclusion criteria and were reviewed according to the PRISMA guidelines. Of these, five were randomised controlled trials (RCTs), two were controlled clinical trials, and one was a cohort study. Five studies used text messages as an intervention, two used mobile phone applications, and another used electronic messages via email. All studies resulted in self-reported behaviour change in at least one of their outcome measures (e.g., sunscreen application, seeking shade). Conclusion: While the behaviour change outcomes are promising, the lack of change in more objective measures such as sunburn indicates a need to further improve mobile phone technology-delivered interventions in order to have a greater impact on skin cancer prevention. Future studies may consider the use of objective outcome measures (e.g., sunscreen weight), electronic diaries, or behavioural outcomes in social networks. abstract_id: PUBMED:25524612 Can skin cancer prevention and early detection be improved via mobile phone text messaging? A randomised, attention control trial. Objective: To test the impact of a theory-based, SMS (text message)-delivered behavioural intervention (Healthy Text) targeting sun protection or skin self-examination behaviours compared to attention control. Method: Overall, 546 participants aged 18-42 years were randomised using a computer-generated number list to the skin self-examination (N=176), sun protection (N=187), or attention control (N=183) text messages group. Each group received 21 text messages about their assigned topic over 12 months (12 weekly messages for 3 months, then monthly messages for the next 9 months). Data were collected via telephone survey at baseline, 3, and 12 months across Queensland from January 2012 to August 2013. Results: One year after baseline, the sun protection (mean change 0.12; P=0.030) and skin self-examination groups (mean change 0.12; P=0.035) had significantly greater improvement in their sun protection habits (SPH) index compared to the attention control group (reference mean change 0.02). The increase in the proportion of participants who reported any skin self-examination from baseline to 12 months was significantly greater in the skin self-examination intervention group (103/163; 63%; P<0.001) than the sun protection (83/173; 48%) or attention control (65/165; 36%) groups. 
There was no significant effect of the intervention on participants' self-reported whole-body skin self-examination, sun tanning, or sunburn behaviours. Conclusion: The Healthy Text intervention was effective in inducing significant improvements in sun protection and any type of skin self-examination behaviours. Trial Registration: The Australian and New Zealand Clinical Trials register (ACTRN12612000577819). Funding: Cancer Australia 1011999. abstract_id: PUBMED:28007691 Design and Feasibility of a Text Messaging Intervention to Prevent Indoor Tanning Among Young Adult Women: A Pilot Study. Background: Although skin cancer is largely preventable, it affects nearly 1 of 5 US adults. There is a need for research on how to optimally design persuasive public health indoor tanning prevention messages. Objective: The objective of our study was to examine whether framed messages on indoor tanning behavioral intentions delivered through short message service (SMS) text messaging would produce (1) positive responses to the messages, including message receptivity and emotional response; (2) indoor tanning efficacy beliefs, including response efficacy and self-efficacy; and (3) indoor tanning risk beliefs. Methods: We conducted a pilot study of indoor tanning prevention messages delivered via mobile phone text messaging in a sample of 21 young adult women who indoor tan. Participants completed baseline measures, were randomly assigned to receive gain-, loss-, or balanced-framed text messages, and completed postexposure outcome measures on indoor tanning cognitions and behaviors. Participants received daily mobile phone indoor tanning prevention text messages for 1 week and completed the same postexposure measures as at baseline. Results: Over the 1-week period there were trends or significant changes after receipt of the text messages, including increased perceived susceptibility (P<.001), response efficacy beliefs (P<.001), and message receptivity (P=.03). Ordinary least squares stepwise linear regression models showed an effect of text message exposure on self-efficacy to quit indoor tanning (t(6) = -2.475, P<.02). Ordinary least squares linear regression including all measured scales showed a marginal effect of SMS texts on self-efficacy (t(20) = 1.905, P=.08). Participants endorsed highly favorable views toward the text messaging protocol. Conclusions: This study supports the use of mobile text messaging as an indoor tanning prevention strategy. Given the nature of skin cancer risk perceptions, the addition of multimedia messaging service is another area of potential innovation for disseminating indoor tanning prevention messages. abstract_id: PUBMED:33446158 Can adopting skin cancer preventive behaviors among seafarers be increased via a theory-based mobile phone-based text message intervention? A randomized clinical trial. Background: One of the main occupational hazards for seafarers is prolonged exposure to sunlight. This study aimed to determine the efficacy of a mobile phone-based text message intervention in adopting skin cancer preventive behaviors among a sample of seafarers in Genaveh port located in Bushehr province, Iran. Methods: In this randomized controlled trial, 136 seafarers were randomly selected and assigned to the intervention group (n = 68) or the control group (n = 68). As a theoretical basis, we followed the Protection Motivation Theory (PMT) variables to develop the text messages.
The data related to PMT variables and skin cancer preventive behaviors were collected through a questionnaire. Forty-five text messages were designed, pre-tested and sent to the seafarers' phones in the intervention group over 45 days. Both groups were followed up 1 month after the intervention. Data collected in the two stages were analyzed using paired-samples t-tests, ANCOVA, and chi-square tests. Results: Following the intervention, the mean scores of adopting skin cancer preventive behaviors (p = 0.001), perceived self-efficacy (p = 0.01), protection motivation (p = 0.02), and fear (p = 0.001) were significantly higher in the intervention group than the control group. There was a significant reduction in the response costs (p = 0.05) and perceived rewards (p = 0.01) scores in the intervention group compared with the control group after the intervention. However, there were no significant differences in the perceived vulnerability (p = 0.14), perceived severity (p = 0.09), and response efficacy (p = 0.64) between the two groups after the intervention. Conclusions: The results of the study indicated the effectiveness of the mobile phone-based text message intervention for increasing skin cancer preventive behaviors in Iranian seafarers. Trial Registration: Iranian Registry for Clinical Trial (the link to trial: https://www.irct.ir/trial/7572 ). Registered 16 July, 2016. Prospectively registered. abstract_id: PUBMED:25697490 User preferences for text message-delivered skin cancer prevention and early detection. Evidence is needed for the acceptability and user preferences of receiving skin cancer-related text messages. We prepared 27 questions to evaluate attitudes, satisfaction with program characteristics such as timing and spacing, and overall satisfaction with the Healthy Text program in young adults. Within this randomised controlled trial (age 18-42 years), 546 participants were assigned to one of three Healthy Text message groups: sun protection, skin self-examination, or attention-control. Over a 12-month period, 21 behaviour-specific text messages were sent to each group. Participants' preferences were compared between the two interventions and control group at the 12-month follow-up telephone interview. In all three groups, participants reported the messages were easy to understand (98%), provided good suggestions or ideas (88%), and were encouraging (86%) and informative (85%), with little difference between the groups. The timing of the texts was received positively (92%); however, 8% of participants made suggestions about the frequency or time of day at which the messages were received. Participants in the two intervention groups found their messages more informative and more likely to trigger behaviour change compared to the control group. Text messages about skin cancer prevention and early detection are a novel and acceptable way to induce behaviour change in young adults. abstract_id: PUBMED:34419232 Optimizing Texting Interventions for Melanoma Prevention and Early Detection: A Latin Square Crossover RCT. Introduction: Text messaging is an effective way to reach large populations with health promotion support. This study aims to establish the optimal text messaging intervention to achieve behavior change in young adults at risk of skin cancer. Study Design: Latin square crossover RCT. Setting/participants: Participants were women and men aged 18-40 years living in Queensland, Australia who owned a smartphone and had ≥2 skin cancer risk factors.
Intervention: Participants were enrolled from December 2018 to February 2019 and completed an eligibility survey. Eligible participants were randomized to 4 different text message interventions using a Latin square design with varying personalization, interactivity, and message frequency (February 2019‒July 2019). Each intervention lasted for 1 month; between interventions, participants had a 1-week washout period in which they completed an online questionnaire. Participants completed a 6-month follow-up online survey in January 2020. Main Outcome Measures: Measures included self-reported sun protection habits and sunburns. Results: A total of 277 (71.2% response rate) participants completed the 6-month follow-up. The sun protection habits index was significantly higher in all 4 text messaging interventions (p<0.01 for each intervention) than at baseline, with similar sun protection habits improvements among all interventions (p=0.27). Sunburn rates decreased significantly over time (p<0.01 for each intervention), with all 4 interventions achieving reductions in sunburn rates during the intervention periods (p=0.78). Overall, the sunburn rates decreased from 40.3% at baseline to 7.0% at the end of the intervention, and at 6-month follow-up, they remained significantly below baseline levels at 23.5% (p<0.01). Conclusions: Regular text messaging interventions result in significantly increased sun protection and decreased sunburn in young adults. Trial Registration: This study is registered at the Australian and New Zealand Clinical Trials Registry ACTRN12618001299291. abstract_id: PUBMED:25053602 Effects of a short messaging service-based skin cancer prevention campaign in adolescents. Background: Skin cancer prevention emphasizes early adoption and practice of sun protection behaviors. Adolescence represents a high-risk period for ultraviolet radiation exposure, presenting an opportunity for intervention. The ubiquity of mobile phones among teens offers an engaging medium through which to communicate prevention messages. Purpose: To evaluate a skin cancer prevention intervention using short messaging service (SMS, or text messages) to impact sun-related knowledge, beliefs, and behaviors among adolescents. Methods: The intervention was conducted in middle school youth (N=113) recruited in April or October 2012. Participants were English speakers aged 11-14 years who routinely carried a mobile phone; all completed a 55-minute sun safety education program. Participants were sent three sun safety-themed SMS messages each week for 12 weeks. Skin and sun protective knowledge, beliefs, behaviors, and post-intervention program satisfaction were collected and analyzed at baseline and end of intervention (April/June 2012; October 2012/January 2013). Paired responses were tested for equality using Wilcoxon signed-rank tests. Results: Ninety-six students (85%) completed the study. At 12 weeks, significant positive changes were reported for sun avoidance during peak ultraviolet radiation, sunscreen application, wearing hats and sunglasses, and knowledge about skin cancer risk. Participants expressed moderately high satisfaction with the program, and 15% shared messages with family or friends. Conclusions: A brief, SMS-based intervention affected youth skin cancer prevention behaviors and knowledge. Future research will determine whether program effects were sustained at 24 weeks and explore how sun safety parenting practices inform these effects.
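The adolescent SMS study above tests paired baseline versus end-of-intervention responses with Wilcoxon signed-rank tests. The following sketch shows the equivalent call in SciPy on made-up paired Likert-style scores; the data are hypothetical and only illustrate the procedure, not the study's results.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(42)

# Hypothetical paired 5-point ratings of sun protection behaviour for 96 students,
# measured at baseline and again after a 12-week text-message programme.
baseline = rng.integers(1, 6, size=96)
follow_up = np.clip(baseline + rng.choice([0, 0, 1, 1, 2], size=96), 1, 5)

stat, p_value = wilcoxon(baseline, follow_up)
print(f"Wilcoxon signed-rank: statistic = {stat:.1f}, p = {p_value:.4f}")
```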
abstract_id: PUBMED:30526243 Systematic Review of Commercially Available Mobile Phone Applications for Prostate Cancer Education. Prostate cancer is the most commonly diagnosed non-skin cancer among all men and the second most common cause of death. To ameliorate the burden of prostate cancer, there is a critical need to identify strategies for providing men with information about prostate cancer screening and the importance of informed decision making. With mobile phones becoming more ubiquitous, many individuals are adopting their phones as sources for health information. The objective of this systematic review is to identify and evaluate commercially available apps for promoting informed prostate cancer screening decisions. Two keywords "prostate cancer screening" and "prostate cancer" were entered into the search engines of Google and iOS app stores in May 2017. Evaluations were conducted on apps' (a) quality, (b) grade-level readability, (c) cultural sensitivity, and (d) usability heuristics. None of the 14 apps meeting the inclusion criteria contained the full breadth of information covered in the 2016 American Cancer Society's Prostate Cancer Prevention and Early Detection Guidelines, but over half were inclusive of topics consistent with these guidelines. Most apps' readability was higher than an eighth-grade reading level. Most apps were also not framed and had a neutral tone. Only four apps met most criteria for being culturally sensitive to African Americans. Usability among apps was variable, but some contained major usability concerns. Recommendations for improving educational apps for prostate cancer screening include: disseminating evidence-based information; using culturally sensitive language; knowing the implications of the tone and framing of content; making apps interactive; and following common usability principles. abstract_id: PUBMED:26555140 Mediation of improvements in sun protective and skin self-examination behaviours: results from the healthy text study. Objective: Melanoma is on the rise, especially in Caucasian populations exposed to high ultraviolet radiation such as in Australia. This paper examined the psychological components facilitating change in skin cancer prevention or early detection behaviours following a text message intervention. Methods: The Queensland-based participants were 18 to 42 years old, from the Healthy Text study (N = 546). Overall, 512 (94%) participants completed the 12-month follow-up questionnaires. Following the social cognitive model, potential mediators of skin self-examination (SSE) and sun protection behaviour change were examined using stepwise logistic regression models. Results: At 12-month follow-up, odds of performing an SSE in the past 12 months were mediated by baseline confidence in finding time to check skin (an outcome expectation), with a change in odds ratio of 11.9% in the SSE group versus the control group when including the mediator. Odds of greater than average sun protective habits index at 12-month follow-up were mediated by (a) an attempt to get a suntan at baseline (an outcome expectation) and (b) baseline sun protective habits index, with a change in odds ratio of 10.0% and 11.8%, respectively, in the SSE group versus the control group. Conclusions: Few of the suspected mediation pathways were confirmed with the exception of outcome expectations and past behaviours.
Future intervention programmes could use alternative theoretical models to elucidate how improvements in health behaviours can optimally be facilitated. abstract_id: PUBMED:22802521 Use of the mobile phone multimedia messaging service for teledermatology. We examined the feasibility of using mobile phone Multimedia Messaging Service (MMS) to send teledermatology referrals from a general practitioner to a dermatologist. Digital photographs of skin conditions in 40 consecutive patients were sent together with relevant clinical information to dermatologists at a university hospital. Two dermatologists separately assessed the MMS referrals. The suspected diagnosis, triage and management decisions were compared to those given after separate face-to-face (FTF) visits, and again after agreeing on a final clinical and/or histopathological diagnosis. Thirty-two patients (80%) were diagnosed with skin tumours and 8 patients (20%) with other skin conditions. Both dermatologists were able to make a correct diagnosis in 31 patients (78%) based solely on the MMS referral. They also provided adequate management recommendations for 98% of the patients. Adequate triage decisions after assessment of the MMS referrals were made for 34 (85%) and 38 (95%) patients by the two dermatologists. There was an inter-observer concordance of 68% for the teledermatology diagnosis, compared to 88% concordance after the separate FTF visits. The diagnostic accuracy and adequacy of the triage and management decisions achieved using MMS referrals were similar to those obtained with other store-and-forward teledermatology methods. Answer: Yes, skin cancer prevention and early detection can be improved via mobile phone text messaging. Several studies have demonstrated the effectiveness of text message-delivered interventions in promoting skin cancer preventive behaviors and early detection. A systematic review found that mobile technology interventions, including text messaging, resulted in self-reported behavior change in skin cancer prevention measures such as sunscreen application and seeking shade (PUBMED:27374946). A randomized, attention control trial showed that a text message-delivered behavioral intervention targeting sun protection or skin self-examination behaviors led to significant improvements in sun protection habits and any type of skin self-examination behaviors (PUBMED:25524612). A pilot study indicated that text messaging interventions could increase perceived susceptibility, response efficacy beliefs, and message receptivity, as well as self-efficacy to quit indoor tanning among young adult women (PUBMED:28007691). Another randomized clinical trial found that a mobile phone-based text message intervention increased skin cancer preventive behaviors, perceived self-efficacy, protection motivation, and fear among seafarers, with a significant reduction in response costs and perceived rewards (PUBMED:33446158). User preferences for text message-delivered skin cancer prevention and early detection were found to be high, with participants finding the messages informative and encouraging behavior change (PUBMED:25697490). A Latin square crossover RCT demonstrated that regular text messaging interventions resulted in significantly increased sun protection and decreased sunburn in young adults (PUBMED:34419232). 
An intervention using short messaging service (SMS) among adolescents led to significant positive changes in sun avoidance, sunscreen application, wearing hats and sunglasses, and knowledge about skin cancer risk (PUBMED:25053602). These studies collectively suggest that mobile phone text messaging is a viable and effective method for improving skin cancer prevention and early detection behaviors.
Instruction: PSA testing in office-based clinics: are we testing as much as we think? Abstracts: abstract_id: PUBMED:16310694 PSA testing in office-based clinics: are we testing as much as we think? Background: Self-report studies of widespread use of routine prostate-specific antigen (PSA) testing have fueled concerns about overuse and possible harm. The purpose of this study was to examine use of PSA testing during physician office visits in a national sample of prostate-cancer-free men ages 40 years and older. Methods: Bivariate and multivariate logistic regression models of receiving a PSA test by prostate-cancer-free men ages 40 years and older were performed using the 2000 National Ambulatory Medical Care Survey. Results: There were 2,709 primary care and urology office visits by prostate-cancer-free men 40 years of age and older, and 10.2% resulted in a PSA test. In a multivariate model, men in their 70s had considerably higher odds (odds ratio, 1.60; 95% CI, 1.11-2.32) and men with multiple medical comorbidities had considerably lower odds (odds ratio, 0.28; 95% CI, 0.10-0.65) of receiving a PSA test. Conclusions: We report lower use of PSA testing than previously published. Greater insight into use of PSA testing based on clinic and hospital administrative data is required to determine the impact of PSA testing on the cost of health care and prostate cancer incidence and mortality. abstract_id: PUBMED:32734806 PSA testing patterns in a large Swedish cohort before the implementation of organized PSA testing. Background: Organized PSA testing for asymptomatic men aged 50-74 years will be implemented in Sweden to reduce opportunistic testing in groups who will not benefit. The aim of this study was to describe the opportunistic PSA testing patterns in a Swedish region before the implementation of organized PSA testing programs. Method: We included all men in the Uppsala-Örebro health care region of Sweden who were PSA tested between 1 July 2012 and 30 June 2014. Information regarding previous PSA testing, prostate cancer diagnosis, socioeconomic situation, surgical procedures and prescribed medications was collected from population-wide registries to create the Uppsala-Örebro PSA cohort (UPSAC). The cohort was divided into repeat and single PSA testers. The background population used for comparison consisted of men 40 years or older, living in the Uppsala-Örebro region during this time period. Results: Of the adult male population in the region, 18.1% had undergone PSA testing. Among men over 85 years old, 21% were PSA tested. In our cohort, 62.1% were repeat PSA testers. Of men with a PSA level ≤1µg/l, 53.8% had undergone repeat testing. Prostate cancer was found in 2.7% and 4.8% of the repeat and single testers, respectively. Conclusion: Every fifth man in the male background population was PSA tested. Repeated PSA testing was common despite low PSA values. As repeated PSA testing was common, especially among older men who will not be included in organized testing, special measures to change the testing patterns in this group may be required. abstract_id: PUBMED:24975792 PSA testing, biopsy and cancer and benign prostate hyperplasia in France Introduction: Prostate-specific antigen (PSA) testing is high in France. The aim of this study was to estimate its frequency and those of biopsy and newly diagnosed prostate cancer (PCa) according to the presence or absence of treated benign prostatic hyperplasia (BPH).
Patients And Methods: This study concerned men 40 years and older covered by the main French national health insurance scheme (73 % of all men of this age). Data were collected from the national health insurance information system (SNIIRAM). This database comprehensively records all of the outpatient prescriptions and healthcare services reimbursed. This information is linked to data collected during hospitalisations. Results: The frequency of men without diagnosed PCa (10.9 million) with at least one PSA test was very high in 2011 (men aged 40 years and older: 30 %, 70-74 years: 56 %, 85 years and older: 33 %; and without BPH: 25 %, 41 % and 19 %). Men with treated BPH accounted for 9 % of the study population, but 18 % of the men with at least one PSA test, 44 % of those with at least one prostate biopsy and 40 % of those with newly managed PCa. Over a 3-year period, excluding men with PCa, 88 % of men with BPH had at least one PSA test and 52 % had three or more PSA tests versus 52 % and 15 % for men without BPH. One year after PSA testing, men of 55-69 years with BPH more frequently underwent prostate biopsy than those without BPH (5.4 % vs 1.8 %) and presented PCa (1.9 % vs 0.9 %). Conclusions: PSA testing frequencies in France are very high even after exclusion of men with BPH, who may represent a group with more frequently managed PCa. Level Of Evidence: 4. abstract_id: PUBMED:32361642 Factors associated with appropriate and low-value PSA testing. Background: Prostate-specific antigen (PSA) testing for early detection of prostate cancer is low-value when it is not indicated by guidelines and the harms outweigh the benefits. In this retrospective cohort study, we identify provider and patient factors associated with PSA testing, particularly in situations where testing would be low-value. Methods: We used electronic health record data from 2011 to 2018 representing 1,738,021 health system encounters in the United States. Using logistic generalized estimating equation models, we examined patient factors (age, comorbid illness, family history, race and prior PSA results), provider factors (gender, specialty, graduation year and medical school rank), and overall time trends associated with PSA testing in low-value and appropriate settings. Results: Comorbid illness (odds ratio (OR) 0.0 for 3+ conditions vs none) and no prior PSA testing (OR 0.2) were associated with a lower likelihood of PSA testing in low-value situations, while family history of prostate cancer (OR 1.6) and high prior PSA test results (OR 2.2 for PSA > 6 vs 0-1) were associated with a greater likelihood. Men aged 55-65 years were at greatest risk for PSA testing in low-value situations. The provider factor associated with PSA testing in low-value situations was specialty, with urologists being most likely (OR 2.3 versus advanced practice providers). Internal medicine physicians were more likely to perform PSA testing during low-value situations (OR 1.3 versus advanced practice providers) but much more likely to order a PSA test where appropriate (OR 2.2). All PSA testing has decreased since 2011. Conclusion: We identified several patient and provider factors associated with PSA testing in low-value settings. Some aspects suggest attention to relevant factors for PSA testing in low-value settings (e.g. comorbid illness), while others may encourage PSA testing in low-value settings (e.g. family history).
The greatest likelihood of PSA testing in low-value settings is among men within the age range most commonly recommended by guidelines. abstract_id: PUBMED:30794024 Definitive urine drug testing in office-based opioid treatment: a literature review. Individuals who receive buprenorphine treatment for opioid use disorder in office-based settings may be at risk for, or have a history of, polysubstance use. Urine drug testing is an important clinical tool for monitoring medication adherence and patient stability; and screening for illicit drug use and dangerous drug-drug interactions. This article is intended to educate practitioners in office-based opioid treatment settings on selecting appropriate substances for a definitive drug testing panel that are known to be used concurrently, sequentially, or in combination with buprenorphine for opioid use disorder. It is also intended to educate such practitioners on selecting appropriate testing technology to reduce risks to the health and safety of patients prescribed buprenorphine for opioid use disorder. In developing this article, the author conducted a search from May 2018 through December 2017 of peer-reviewed and government-supported articles in electronic databases. The literature showed that several common substances are often abused in conjunction with certain other substances, increasing the risk of serious adverse events, including death. Whether used on their own, concurrently, sequentially, or in combination, substances of abuse carry significant health risks. Definitive urine drug testing, given its high specificity and sensitivity, can accurately identify the use of specific prescription medications and illicit substances that, especially when taken with buprenorphine or other substances, may cause harm to a patient. When testing for buprenorphine and other opioids; sedatives, hypnotics, and anxiolytics; cocaine; amphetamines; and PCP and other club drugs, providers in office-based opioid treatment settings are strongly advised to use definitive urine drug tests as the primary testing methodology. In addition, practitioners must be able to identify all other substances that a patient may be consuming, taking into consideration the patient's historical and current drugs of choice, given that concurrent use with buprenorphine or other substances may cause serious adverse events. This article highlights the pressing market demand for comprehensive, definitive urine drug testing at a more reasonable cost. abstract_id: PUBMED:37007438 Feasibility and Efficacy of Same-Day, In-Office Genetic Testing for Inherited Retinal Diseases. Purpose: This article analyzes 2 practice patterns our institution uses for genetic testing of patients with inherited retinal diseases (IRDs) and compares testing completion and diagnostic yield rates. Methods: A retrospective, consecutive chart review series was conducted of patients with a clinically diagnosed rod-mediated IRD. All IRDs were diagnosed between 2 intervals: November 1, 2015, through November 30, 2016 (referral to a medical genetics clinic for testing) or December 1, 2016, through December 30, 2017 (same-day, in-office genetic testing). Results: A total of 189 patients were included in the study. Of patients who received an out-of-office referral for genetic testing, 10 of 84 (12%) patients proceeded with testing, whereas 74 of 84 (88%) patients did not complete testing. For patients who received in-office genetic testing, 104 of 105 (99%) completed testing. 
The difference in test completion was statistically significant (P < .001). In addition, genetic testing for out-of-office referrals identified a causative mutation in 5 of 10 (50%) patients, whereas in-office genetic testing identified a causative mutation in 42 of 104 (40.4%) patients. The difference in causative mutation discovery was not statistically significant (P = .18) between the 2 groups. Conclusions: In-office genetic testing is a novel practice pattern that provides a more consistent and accessible method for IRD genetic diagnosis. Compared with an out-of-office referral for genetic testing, in-office genetic testing offers a similar rate of causative gene mutation identification but a greatly higher rate of test completion, therefore potentially offering a much higher yield for genetic diagnosis of IRDs. abstract_id: PUBMED:25130956 Association of sociodemographic factors and prostate-specific antigen (PSA) testing. Objectives: There are conflicting recommendations regarding the use of prostate specific antigen (PSA) as a screening test. Integral to this debate is an understanding of who is currently being tested. The purpose of this study was to provide a detailed account of PSA testing practices in a major Canadian city (Calgary, Alberta) and to identify variables that may affect access to the PSA test. Design And Methods: PSA test counts were retrieved from Calgary Laboratory Services' Laboratory Information System from January 1, 2011 to December 31, 2011. A total of 75,914 individual PSA tests were included in our analysis. The frequency of PSA testing was plotted onto a dissemination area map of Calgary using ArcGIS software. Associations with sociodemographic variables were tested using Poisson regression. Results: The median PSA value was 0.93 μg/L and the median age at collection was 58 years. Forty-three percent of men aged 60-69 received a PSA test. Visible minority status 'Black' (P=0.0002) and Métis status (P=0.0075) were associated with lower PSA testing frequencies, while median household income (P<0.0001) and university education (P<0.0001) were associated with higher PSA testing frequencies. Conclusion: There are areas in Calgary which are significantly over or under tested relative to the mean. The amount of PSA testing in men <50 years of age is increasing, which is contrary to PSA testing guidelines. abstract_id: PUBMED:30543191 Relevance of total PSA and free PSA prescriptions. Total PSA testing has frequently been criticized regarding its effectiveness in the early detection of prostate cancer. These criticisms, widely echoed both within and outside the medical community, have led to a decrease in prescriptions. Recommendations against the use of PSA for screening have been issued in France and in most countries. In the US, the significant decline in total PSA use since 2004 has led to an increase in advanced cancer diagnosis, which has led the authorities to reconsider their position. The US Preventive Services Task Force (USPSTF) now recommends that men between the ages of 55 and 69 make an individual decision about prostate cancer screening with their clinician. The purpose of this work is to summarize good practices for the use of total and free PSA in light of these recent data. abstract_id: PUBMED:36757372 Immuno-analytical characteristics of PSA and derived biomarkers (total PSA, free PSA, p2PSA) Prostate-specific antigen (PSA) is the recommended tumor marker for individual screening and follow-up of prostate cancer.
This paper reviews main structural and physiological data about prostate specific antigen isoforms: total PSA, free PSA, [-2]proPSA (also named p2PSA). It describes the pre-, per- and post-analytical conditions for these different parameters. It presents the interpretation of results and derived calculated indices (free/total PSA ratio, Prostate Health Index or PHI) for the management of prostate cancer (initial diagnosis and follow-up). abstract_id: PUBMED:23267269 Office-based surgical and medical procedures: educational gaps. Over the past decade, the number of procedures performed in office-based settings by a variety of practitioners-including surgeons, gastroenterologists, ophthalmologists, radiologists, dermatologists, and others-has grown significantly. At the same time, patient safety concerns have intensified and include issues such as proper patient selection, safe sedation practices, maintenance of facilities and resuscitation equipment, facility accreditation and practitioner licensing, and the office staff's ability to deal with emergencies and complications. An urgent need exists to educate practitioners about safety concerns in the office-based setting and to develop various educational strategies that can meet the continued growth of these procedures. This review outlines educational needs and possible solutions such as simulation exercises and education during residency training. Answer: The study reported in PUBMED:16310694 found that the use of PSA testing in office-based clinics may be lower than previously published. In their analysis of 2,709 primary care and urology office visits by prostate-cancer-free men aged 40 years and older, only 10.2% resulted in a PSA test. This suggests that PSA testing may not be as widespread in office-based settings as some self-report studies have indicated. The study also found that men in their 70s were more likely to receive a PSA test, while those with multiple medical comorbidities were less likely to be tested. This indicates that age and health status may influence the likelihood of undergoing PSA testing in these settings. The authors concluded that a greater understanding of PSA testing practices based on clinic and hospital administrative data is necessary to assess the impact of PSA testing on healthcare costs and prostate cancer outcomes.
Instruction: Is there a relationship between physical activity and dietary calcium intake? Abstracts: abstract_id: PUBMED:26288793 Dietary calcium intake and physical activity levels among urban South Indian postmenopausal women. Introduction: Calcium is the most abundant mineral in our body with varied functions and its dietary deficiency leads to osteoporosis, besides playing a significant role in the pathogenesis of other diseases. The data regarding dietary calcium intake (DCI) among postmenopausal women in urban areas of South India is limited. Objectives: This study was aimed to assess DCI and physical activity among postmenopausal women. The risk factors for a low intake of dietary calcium were also assessed. Materials And Methods: A cross-sectional study was done among 106 postmenopausal women selected by systematic random sampling from the city of Erode, Tamil Nadu, India. DCI and physical activity were measured using validated questionnaires. Results: The mean DCI was 632.72 ± 28.23 mg/day. The proportion of women consuming less than 800 mg/day of dietary calcium was 74.5%. Only 10.4% of the women studied (11 out of 106) were on calcium supplements while 55% had low physical activity. A low knowledge score [adjusted odds ratio (OR): 5.17; 95% confidence interval (CI): 1.31-20.42] and a low socioeconomic status (SES) score of the family (adjusted OR: 4.00; 95% CI: 1.32-12.11) were significantly associated with low DCI after adjusting the age, dietary preferences, and educational and occupational statuses. Conclusions: DCI was below the Recommended Dietary Allowance (RDA) and the majority of postmenopausal women were physically inactive, indicating the need for better education regarding DCI and the need for calcium supplements and physical activity, all of which can contribute to the prevention of the consequences of osteoporosis. abstract_id: PUBMED:31433866 Physical activity, dietary calcium to magnesium intake and mortality in the National Health and Examination Survey 1999-2006 cohort. Calcium and magnesium affect muscle mass and function. Magnesium and calcium are also important for optimal vitamin D status. Vitamin D status modifies the associations between physical activity and risk of incident cardiovascular disease (CVD) and CVD mortality. However, no study examined whether levels of magnesium and calcium and the ratio of dietary calcium to magnesium (Ca:Mg) intake modify the relationship between physical activity and mortality. We included 20,295 National Health and Nutrition Examination Survey participants (1999-2006) aged >20 years with complete dietary, physical activity and mortality data (2,663 deaths). We assessed physical activity based on public health guidelines and sex-specific tertiles of MET-minutes/week. We used Cox proportional hazards models adjusted for potential confounding factors and stratified by the intakes of magnesium, calcium, Ca:Mg ratio. We found higher physical activity was significantly associated with reduced risk of total mortality and cause-specific mortality, regardless of Ca:Mg ratio, magnesium or calcium intake. In contrast, both moderate and high physical activity were significantly associated with substantially reduced risks of mortality due to cancer when magnesium intake was above the RDA level. We also found higher physical activity was significantly associated with a reduced risk of mortality due to cancer only when Ca:Mg ratios were between 1.7 and 2.6, although the interaction was not significant. 
Overall, dietary magnesium and, potentially, the Ca:Mg ratio modify the relationship between physical activity and cause-specific mortality. Further study is important to understand the modifying effects of the balance between calcium and magnesium intake on physical activity for chronic disease prevention. abstract_id: PUBMED:30613534 Dietary calcium intake and physical activity levels among people living in Karnataka, India - An observational hospital-based study. Background: Calcium is the most abundant mineral in our body with varied functions, and its dietary deficiency leads to osteoporosis. Various studies have shown that adequate dietary calcium intake (DCI) and moderately increased physical activity if maintained for long term prevent osteoporosis. The data regarding DCI of people living in Karnataka, south India, are limited. Thus, we aimed to assess DCI, physical activity, and their predictors among people living in Karnataka. Methodology: A cross-sectional study was done among 250 inpatients and normal relatives of orthopedics department of a tertiary care teaching hospital. Multistage random sampling was performed. DCI and physical activity were measured using validated questionnaires. Results: The mean (standard deviation) DCI was 499.94 (251.5). The prevalence [95 confidence interval (CI)] of poor intakers of DCI [DCI < Recommended Dietary Allowance (RDA)] was 76.6% (70.9, 81.7). However, only 43.52% of all participants with poor DCI were on supplements. Male gender [2.189 (1.125, 4.257)], elderly [1.988 (1.067, 3.705)], and low knowledge score [1.240 (1.093, 1.407)] were significant predictors of low DCI. The proportion (95 CI) of patients who were categorized as having low physical activity (LPA) was 44.0% (37.8, 50.4). The predictors for LPA [adjusted odds ratio (95 CI)] were marital status, being single [1.853 (1.047, 3.282)], and low socioeconomic status class [1.209 (1.002, 1.458)]. Conclusion: DCI was below the RDA for three-fourths of our patients with nearly half of them being physically inactive, indicating the need for better education regarding DCI and improving physical activity, all of which can prevent osteoporosis. abstract_id: PUBMED:9588616 Is there a relationship between physical activity and dietary calcium intake? A survey in 10,373 young French subjects. Purpose: The purpose of this study was to discuss, in the light of the results of a survey, the calcium ration of a sample of French youth and to determine whether various sports activities can be related to dietary calcium intake. Methods: Physical activity was evaluated using Baecke's questionnaire. Calcium intake was evaluated using a food frequency oriented questionnaire. The survey was performed on a population of 10,373 subjects (6,966 males and 3,407 females) including three different groups of subjects: school children and college students, military personnel, and athletes registered in sports federations. The mean age of this population was 19 +/- 9 yr, ages ranging between 7 and 50 yr. Results: The mean amount of declared calcium intake (DCI) for the total population was 1242 +/- 843 mg per 24 hr (mg x d(-1)). Fifty percent of this population consumed less than 1000 mg x d(-1) and 13% less than 500 mg x d(-1). There was no significant relationship between the index of activity and declared calcium intake. Calcium intake decreased with age and was lower in females compared to males. 
Conclusions: The subjects trained in individual endurance sports such as triathlon, biking, and road running have a lower DCI than subjects trained in team sports such as volley ball, handball, or basketball. This survey, performed on a large population, does indicate that for half of them daily calcium intake is below the threshold of 1,000 mg x d(-1) considered the daily requirement covering the needs of a population without age or gender distinction and that calcium intake is not related to the level of physical activity. abstract_id: PUBMED:30722726 Vitamin D intake, calcium intake and physical activity among children with wrist and ankle injuries and the association with fracture risk. Background: Several studies have revealed a substantial increase in the incidence of fractures in children in the past few decades. Aim: To assess the strength of the association between suggested risk factors and fracture prevalence in children. Method: A cross sectional observational study. Children aged 6-15 years and their guardians presenting to the Emergency Department of a single tertiary paediatric hospital were recruited. Self-reported data on vitamin D intake, calcium intake and physical activity were collected. All participants had a radiograph of their injured limb reported by a consultant radiologist, on the basis of which they were classified into fracture or no fracture groups. Statistical analysis included descriptive statistics and binary logistic regression. Results: Of the 130 patients recruited, 53 (41%) had sustained a fracture. The overwhelming majority of children (98%) did not consume the recommended daily dietary amount of vitamin D (400 IU/day). Low calcium intake and low levels of physical activity were also ascertained. However, there were no significant differences between fracture and no fracture groups for vitamin D intake, calcium intake or physical activity. Both site of injury (wrist) and sex (male) were associated with increased fracture risk ( p = 0.001 and p = 0.05, respectively). Logistic regression showed a statistically significant relationship between calcium intake and fracture risk (every additional unit of calcium consumption (mg/day) decreased the likelihood of fracture by 0.002, 95% confidence interval, 0.001-0.003). Conclusions: Low dietary intake of calcium and vitamin D and low levels of physical activity were evident. Fracture risk was significantly associated with reduced calcium intake but showed no association with vitamin D intake or physical activity. abstract_id: PUBMED:25130421 The longitudinal effects of physical activity and dietary calcium on bone mass accrual across stages of pubertal development. Childhood and adolescence are critical periods of bone mineral content (BMC) accrual that may have long-term consequences for osteoporosis in adulthood. Adequate dietary calcium intake and weight-bearing physical activity are important for maximizing BMC accrual. However, the relative effects of physical activity and dietary calcium on BMC accrual throughout the continuum of pubertal development in childhood remains unclear. The purpose of this study was to determine the effects of self-reported dietary calcium intake and weight-bearing physical activity on bone mass accrual across the five stages of pubertal development in a large, diverse cohort of US children and adolescents. The Bone Mineral Density in Childhood study was a mixed longitudinal study with 7393 observations on 1743 subjects. 
Annually, we measured BMC by dual-energy X-ray absorptiometry (DXA), physical activity and calcium intake by questionnaire, and pubertal development (Tanner stage) by examination for up to 7 years. Mixed-effects regression models were used to assess physical activity and calcium intake effects on BMC accrual at each Tanner stage. We found that self-reported weight-bearing physical activity contributed to significantly greater BMC accrual in both sexes and racial subgroups (black and nonblack). In nonblack males, the magnitude of the activity effect on total body BMC accrual varied among Tanner stages after adjustment for calcium intake; the greatest difference between high- and low-activity boys was in Tanner stage 3. Calcium intake had a significant effect on bone accrual only in nonblack girls. This effect was not significantly different among Tanner stages. Our findings do not support differential effects of physical activity or calcium intake on bone mass accrual according to maturational stage. The study demonstrated significant longitudinal effects of weight-bearing physical activity on bone mass accrual through all stages of pubertal development. abstract_id: PUBMED:20827348 Dietary intake based on physical activity level in Korean elementary school students. Prevalence of childhood obesity is increasing significantly worldwide due to energy imbalance perhaps stemming from undesirable dietary behavior and physical activity level. The objective of the study was to examine the effects of physical activity level on nutritional status in elementary school students. The subjects were comprised of 287 elementary school students between 4th and 6th grades in Seoul, Korea. The level of physical activity was scored with a modified Godin leisure-time exercise questionnaire and was categorized as active, moderately active, and sedentary. Dietary intakes were obtained using a 24-hour food recall method. An analysis of variance (ANOVA) was conducted to test for global significant differences of nutrient intakes by physical activity level. Boys were more active than girls. Daily intakes of energy in moderately active boys were significantly higher than in the sedentary group, but intakes of calcium and iron in moderately active boys were lower than active boys. For girls, physical activity level did not affect nutrient density at all. Intakes of calcium, vitamin C, and folate for both boys and girls were below 50% of recommended intake. Physical activity did not affect nutrient density and our participants were exposed to nutritional imbalance. Therefore, the results suggest that nutrition education regarding balanced diet and optimum physical activity is required for children's health and growth. abstract_id: PUBMED:19690797 Impact of dietary intake, education, and physical activity on bone mineral density among North Indian women. The aim of the study was to determine the relationship of dietary nutrients and bone mineral density (BMD) in North Indian women. This cross-sectional study was conducted from April 2006 to March 2008. Subjects included 255 healthy women, aged 20-69 years, who were relatives of patients being admitted in the hospital. Various demographic characteristics including socioeconomic status and serum parameters in relationship to BMD were evaluated. In addition, the daily dietary intake of energy, protein, fat, and calcium and the amount of physical activity were assessed. 
BMD at the lumbar spine, femoral neck, and Ward's triangle was measured by dual-energy X-ray absorptiometry (DXA). Body mass index (BMI), physical activity, and educational level were positively correlated with BMD. The daily intakes of energy (1563.4 +/- 267.2 kcal) and protein (48.7 +/- 8.7 g) were below the recommended dietary allowance. Daily dietary energy, protein, and calcium intakes were correlated with BMD at the lumbar spine. Stepwise multiple linear regression analyses showed that age, BMI, and physical activity were significant predictors for BMD at all sites. In addition, energy intake was also a predictor for BMD at the lumbar spine. The protein intake was associated with BMD at the spine (P = 0.02 and beta = 0.163) even after making adjustments for energy intake. Thus, dietary pattern coupled with higher education levels and greater physical activity favored bone health. abstract_id: PUBMED:32624008 The longitudinal relationship between dietary intake, physical activity and muscle strength among adolescents. The longitudinal relationship between muscle strength, dietary intake and physical activity among adolescents is not well understood. We investigated the trend and longitudinal effects of dietary intakes and physical activity scores on muscle strength in adolescents. This prospective cohort study consisted of 436 adolescents (134 males; 302 females) aged 13 years at baseline (2012) who were followed up at the ages of 15 (2014) and 17 (2016) years, respectively. We measured muscle strength using a calibrated hand dynamometer, estimated dietary intake with a 7-d dietary history and physical activity scores with a validated physical activity questionnaire for older children. A generalised estimating equation was used to examine the effect of dietary intakes and physical activity on muscle strength changes. The analysis was performed separately by sex. The muscle strength for males and females had increased within the 5-year period. The dietary intakes (energy and macronutrients) also increased initially but plateaued after the age of 15 years for both sexes. Females recorded a significant declining trend in physical activity scores compared with males as they grew older. A significant positive longitudinal relationship was found between protein (β = 0·035; P = 0·016), carbohydrate intake (β = 0·002; P = 0·013) and muscle strength among males. However, no longitudinal relationship was found between dietary intake, physical activity and muscle strength among females. Higher protein and carbohydrate intake among males was associated with higher muscle strength but was not observed in females. Nutrition and physical activity focusing on strength building are required in early adolescence and need to be tailored to males and females accordingly. abstract_id: PUBMED:38419086 Combined effect of dietary calcium consumption and physical activity on dental caries in children and adolescents: a study of the NHANES database. Background: Calcium (Ca) is a nutritional factor that associated with dental caries. A recent study showed that in the case of adequate Ca intake, a higher level of physical activity may contribute to bone mass accumulation. However, the combined effect between Ca intake and physical activity on caries experience is unclear. Herein, we aimed to explore the above combined effect on dental caries in children and adolescents. 
Methods: Data of 5,917 children and adolescents were extracted from the National Health and Nutrition Examination Surveys (NHANES) database in 2015-2020 in this cross-sectional study. The NHANES assessed dietary Ca intake through 24-hour dietary recalls, and physical activity level was self-reported using questionnaires. Also, dental caries was diagnosed according to the Decayed, Missing and Filled Teeth/Surfaces (DMFT/S) index. Weighted univariate and multivariate logistic regression analyses were utilized to screen the covariates and to investigate the associations of dietary Ca intake and physical activity with dental caries, respectively, and assess the combined effect between dietary Ca intake and physical activity on dental caries. The evaluation indexes were odds ratios (ORs) and 95% confidence intervals (CIs). Subgroup analyses of age, obesity, and total sugar intake were also performed. Results: Among the eligible participants, 2,687 had caries experience. After adjusting for the covariates, we found that children and adolescents who did not reach the recommended level of Ca intake, combined with physical activity fewer than 7 times in 1 week, seemed to have higher odds of dental caries [OR = 1.77, 95%CI: (1.38-2.27)], compared with those who reached the standards. In addition, this potential combined effect was also found in age < 12 years old [OR = 1.62, 95%CI: (1.23-2.14)], non-obesity [OR = 1.88, 95%CI: (1.49-2.35)], and total sugar intake (all P < 0.05) subgroups. Conclusions: Ca intake and physical activity had a potential combined effect on dental caries in children and adolescents, but the causal relationships between them need further clarification. Answer: The relationship between physical activity and dietary calcium intake has been explored in various studies, but the findings are not entirely consistent. Some studies have found that physical activity does not significantly affect dietary calcium intake. For instance, a survey in young French subjects found no significant relationship between the index of activity and declared calcium intake, indicating that calcium intake is not related to the level of physical activity (PUBMED:9588616). Similarly, a study on Korean elementary school students showed that physical activity did not affect nutrient density, and the participants were exposed to nutritional imbalance, suggesting that nutrition education regarding balanced diet and optimum physical activity is required for children's health and growth (PUBMED:20827348).
Furthermore, the longitudinal effects of physical activity and dietary calcium on bone mass accrual across stages of pubertal development demonstrated significant effects of weight-bearing physical activity on bone mass accrual through all stages of pubertal development (PUBMED:25130421). This suggests that while the relationship between physical activity and dietary calcium intake may not be direct, both factors are crucial for bone health. In summary, while some studies do not show a direct relationship between physical activity and dietary calcium intake, the overall body of evidence suggests that both are important for maintaining bone health and preventing osteoporosis, particularly in populations at risk such as postmenopausal women and adolescents.
Instruction: Do pulmonary autografts provide better outcomes than mechanical valves? Abstracts: abstract_id: PUBMED:16305870 Do pulmonary autografts provide better outcomes than mechanical valves? A prospective randomized trial. Background: The objective of this study was to compare the performance of pulmonary autografts with mechanical aortic valves, in the treatment of aortic valve stenosis. Methods: Forty patients with aortic valve stenoses, and below the age of 55 years, were randomly assigned to receive either pulmonary autografts (n = 20) or mechanical valve (Edwards MIRA; Edwards Lifesciences, Irvine, CA) prostheses (n = 20). Clinical outcomes, left ventricular mass regression, effective orifice area, ejection fraction, and mean gradients were evaluated at discharge, 6 months, and one year after surgery. Follow-up was complete for all patients. Results: Hemodynamic performance was significantly better in the Ross group (mean gradient 2.6 mm Hg vs 10.9 mm Hg, p = 0.0005). Overall, a significant decrease in left ventricular mass was found one year postoperatively. However, there was no significant difference in the rate and extent of regression between the groups. There was one stroke in the Ross group and one major bleeding complication in the mechanical valve group. Both patients recovered fully. Conclusions: In our randomized cohort of young patients with aortic valve stenoses, the Ross procedure was superior to the mechanical prostheses with regard to hemodynamic performance. However, this did not result in an accelerated left ventricular mass regression. Clinical advantages like reduced valve-related complications and lesser myocardial strain will have to be proven in the long term. abstract_id: PUBMED:28336153 Quantification and comparison of the mechanical properties of four human cardiac valves. Objective: Although having the same ability to permit unidirectional flow within the heart, the four main valves-the mitral valve (MV), aortic (AV), tricuspid (TV) and pulmonary (PV) valves-experience different loading conditions; thus, they exhibit different structural integrity from one another. Most research on heart valve mechanics has been conducted mainly on MV and AV or an individual valve, but none quantify and compare the mechanical and structural properties among the four valves from the same aged patient population whose death was unrelated to cardiovascular disease. Methods: A total of 114 valve leaflet samples were excised from 12 human cadavers whose death was unrelated to cardiovascular disease (70.1 ± 3.7 years old). Tissue mechanical and structural properties were characterized by planar biaxial mechanical testing and histological methods. The experimental data were then fitted with a Fung-type constitutive model. Results: The four valves differed substantially in thickness, degree of anisotropy, and stiffness. The leaflets of the left heart (the AV leaflets and the anterior mitral leaflets, AML) were significantly stiffer and less compliant than their counterparts in the right heart. TV leaflets were the most extensible and isotropic, while AML and AV leaflets were the least extensible and the most anisotropic. Age plays a significant role in the reduction of leaflet stiffness and extensibility, with nearly straightened collagen fibers observed in the leaflet samples from elderly groups (65 years and older).
Conclusions: Results from 114 human leaflet samples not only provided a baseline quantification of the mechanical properties of aged human cardiac valves, but also offered a better understanding of the age-dependent differences among the four valves. It is hoped that the experimental data collected and the associated constitutive models in this study can facilitate future studies of valve diseases, treatments and the development of interventional devices. Statement Of Significance: Most research on heart valve mechanics has been conducted mainly on mitral and aortic valves or an individual valve, but none quantify and compare the mechanical and structural properties among the four valves from the same relatively healthy elderly patient population. In this study, the mechanical and microstructural properties of 114 leaflets of aortic, mitral, pulmonary and tricuspid valves from 12 human cadaver hearts were mechanically tested, analyzed and compared. Our results not only provided a baseline quantification of the mechanical properties of aged human valves, but the age range between patients (51-87 years) also offers a better understanding of the age-dependent differences among the four valves. It is hoped that the obtained experimental data and associated constitutive parameters can facilitate studies of valve diseases, treatments and the development of interventional devices. abstract_id: PUBMED:27765169 Biomechanics of Failed Pulmonary Autografts Compared to Native Aortic Roots. Background: Progressive autograft dilatation after a Ross operation suggests that remodeling does not effectively reproduce native aortic root biomechanics. In the first of this two-part series, we compared mechanical properties of explanted autografts to pulmonary roots at pulmonary pressures. The goal of this study was to compare mechanical properties of explanted autografts to native aortic roots at systemic pressures. Methods: Autograft specimens were obtained from patients undergoing reoperation after Ross operation. For comparison, native aortic roots were obtained from unused donor hearts. Biaxial stretch testing was performed to determine tissue mechanical properties. Tissue stiffness was determined at patient-specific physiologic stresses corresponding to systemic pressures (80 and 120 mm Hg) and hypertensive state (200 mm Hg). Results: Nonlinear stress-strain curves were present for both failed autografts and native aortic roots. Explanted autografts were significantly more compliant than native aortic roots at 80 mm Hg (1.53 ± 0.68 versus 2.99 ± 1.34 MPa; p = 0.011), 120 mm Hg (2.54 ± 1.18 versus 4.93 ± 2.21 MPa; p = 0.013), and 200 mm Hg (4.79 ± 2.30 versus 9.21 ± 4.16 MPa; p = 0.015). Autograft tissue stiffness at 80, 120, and 200 mm Hg was not correlated with age at the time of Ross operation (p = 0.666, p = 0.639, and p = 0.616, respectively) or time in the systemic circulation (p = 0.635, p = 0.637, and p = 0.647, respectively). Conclusions: Failed pulmonary autografts retained a nonlinear response to mechanical loading typical of healthy arterial tissue. Despite similar wall thickness between autografts and aorta, autograft stiffness in this patient population was significantly reduced compared with native aortic roots. We demonstrated that biomechanical remodeling was inadequate in these specimens to achieve native aortic mechanical properties, which may have resulted in progressive autograft root dilatation. abstract_id: PUBMED:9605077 Mechanical valves in the pulmonary position: a reappraisal.
Objectives: To evaluate midterm results of mechanical valves in pulmonary position in patients with pulmonary regurgitation and right ventricular dysfunction as an alternative to bioprostheses. Patients: Mechanical valves (six tilting disc valves and two bileaflet valves) were implanted in eight patients previously operated on for tetralogy of Fallot (n = 7) and truncus arteriosus (n = 1), with severe right ventricular dysfunction caused by massive pulmonary regurgitation. Results: All patients survived prosthesis implantation and are currently well. At follow-up (3 months to 9 years), they do not show signs of valve failure, and right ventricular function has dramatically improved in all but one, who still shows moderate ventricular hypokinesia. Conclusion: After operative correction of congenital heart defects in selected patients who show severe dysfunction of the right ventricle caused by pulmonary regurgitation/stenosis, mechanical valves may represent an alternative to bioprosthetic valves. The selection of the valve type is still a matter of debate. However, according to literature data, complications seem to have occurred only in patients with bileaflet mechanical valves in the pulmonary position, whereas no thromboembolic episodes or valve failure is reported in subjects with tilting disc valves in the right ventricular outflow. Tilting disc valves might perform better in the right ventricular outflow than bileaflet valves. abstract_id: PUBMED:9323070 Circulating microemboli in patients after aortic valve replacement with pulmonary autografts and mechanical valve prostheses. Background: The pulmonary autograft procedure (Ross) is now considered the gold standard for aortic valve replacement. One of its advantages is the freedom from macroemboli without anticoagulation. Whether this holds true for circulating microemboli, detectable as high-intensity transient Doppler signals (HITS), has not yet been verified. Methods And Results: We investigated 8 patients (2 women, 6 men; mean age, 50.6+/-17.9 years) after the Ross procedure, 9 patients (3 women, 6 men; mean age, 67.2+/-9.46 years) after aortic valve replacement with a mechanical valve prosthesis, and 12 young healthy volunteers by unilateral 1-hour recording of the middle cerebral artery on digital audio tape. Patients with extracranial carotid artery disease were excluded by color duplex sonography. During the off-line evaluation, the investigator was not aware of any patient details. No HITS were detected in healthy volunteers (95% confidence interval [CI], 0% to 26.46%). After the Ross procedure, 1 patient had 11 and 1 patient had 1 HITS (95% CI, 3.19% to 65.09%). All recipients of mechanical valves had HITS, ranging from 2 to 84 per hour (95% CI, 66.7% to 100%). Significantly more recipients of mechanical valves exhibited HITS than recipients of pulmonary autografts (P<.05) or control subjects (P<.05). Conclusions: In contrast to mechanical valves, pulmonary autografts are seldom the source of microemboli, confirming the pulmonary autograft as the superior substitute for aortic valve replacement. abstract_id: PUBMED:32066803 Mechano-biological adaptation of the pulmonary artery exposed to systemic conditions. Cardiac surgeries may expose pulmonary arterial tissue to systemic conditions, potentially resulting in failure of that tissue. Our goal was to quantitatively assess pulmonary artery adaptation due to changes in mechanical environment. 
In 17 sheep, we placed a pulmonary autograft in aortic position, with or without macroporous mesh reinforcement. It was exposed to systemic conditions for 6 months. All sheep underwent 3 ECG-gated MRI's. Explanted tissue was subjected to mechanical and histological analysis. Results showed progressive dilatation of the unreinforced autograft, while reinforced autografts stabilized after two months. Some unreinforced pulmonary autograft samples displayed more aorta-like mechanical behavior with increased collagen deposition. The mechanical behavior of reinforced autografts was dominated by the mesh. The decrease in media thickness and loss of vascular smooth muscle cells was more pronounced in reinforced than in unreinforced autografts. In conclusion, altering the mechanical environment of a pulmonary artery causes changes in its mechano-biological properties. abstract_id: PUBMED:27457832 Biomechanics of Failed Pulmonary Autografts Compared With Normal Pulmonary Roots. Background: Progressive dilatation of pulmonary autografts after the Ross operation may reflect inadequate remodeling of the native pulmonary root to adapt to systemic circulation. Understanding the biomechanics of autograft root dilatation may aid designing strategies to prevent dilatation. We have previously characterized normal human pulmonary root material properties; however, the mechanical properties of failed autografts are unknown. In this study, failed autograft roots explanted during reoperation were acquired, and their material properties were determined. Methods: Failed pulmonary autograft specimens were obtained from patients undergoing reoperation after the Ross operation. Fresh human native pulmonary roots were obtained from the transplant donor network as controls. Biaxial stretch testing was performed to determine tissue mechanical properties. Tissue stiffness was determined at patient-specific physiologic stresses at pulmonary pressures. Results: Nonlinear stress-strain response was present in both failed autografts and normal pulmonary roots. Explanted pulmonary autografts were less stiff than were their native pulmonary root counterparts at 8 mm Hg (134 ± 42 vs 175 ± 49 kPa, respectively) (p = 0.086) and 25 mm Hg (369 ± 105 vs 919 ± 353 kPa, respectively) (p = 0.006). Autograft wall stiffness at both 8 and 25 mm Hg was not correlated with age at the Ross procedure (p = 0.898 and p = 0.813, respectively) or with time in the systemic circulation (p = 0.609 and p = 0.702, respectively). Conclusions: Failed pulmonary autografts retained nonlinear response to mechanical loading typical of healthy human arterial tissue. Remodeling increased wall thickness but decreased wall stiffness in failed autografts. Increased compliance may explain progressive autograft root dilatation in autograft failures. abstract_id: PUBMED:31477226 Pregnancy outcomes in women with mechanical prosthetic heart valves. Pregnancy outcomes in women with mechanical heart valves (MHV) remain poor. The thrombogenicity of MHV, the hypercoagulable state of pregnancy and limitations of current anticoagulation strategies combine to give a very high risk state. Recent additions to the literature in the form of registry data as well as meta-analyses and systematic reviews have highlighted the risks of thrombotic and haemorrhagic events in mothers and increased fetal complication rates. 
There remains a lack of consensus about the optimal anticoagulation approach to mitigate the risks to mother and fetus and registry data confirm poor quality of care. Current guidelines recommend the sequential use of heparin and vitamin K antagonists, but low molecular weight heparin is often used throughout the pregnancy, with inadequate monitoring in many instances. There is an urgent need to standardise the management of such women, which should be undertaken in specialist centres, supported by research on optimal anticoagulation regimens and the impact on pregnancy outcomes. abstract_id: PUBMED:26867221 Age-Dependent Changes in Geometry, Tissue Composition and Mechanical Properties of Fetal to Adult Cryopreserved Human Heart Valves. There is limited information about age-specific structural and functional properties of human heart valves, while this information is key to the development and evaluation of living valve replacements for pediatric and adolescent patients. Here, we present an extended data set of structure-function properties of cryopreserved human pulmonary and aortic heart valves, providing age-specific information for living valve replacements. Tissue composition, morphology, mechanical properties, and maturation of leaflets from 16 pairs of structurally unaffected aortic and pulmonary valves of human donors (fetal-53 years) were analyzed. Interestingly, no major differences were observed between the aortic and pulmonary valves. Valve annulus and leaflet dimensions increase throughout life. The typical three-layered leaflet structure is present before birth, but becomes more distinct with age. After birth, cell numbers decrease rapidly, while remaining cells obtain a quiescent phenotype and reside in the ventricularis and spongiosa. With age and maturation-but more pronounced in aortic valves-the matrix shows an increasing amount of collagen and collagen cross-links and a reduction in glycosaminoglycans. These matrix changes correlate with increasing leaflet stiffness with age. Our data provide a new and comprehensive overview of the changes of structure-function properties of fetal to adult human semilunar heart valves that can be used to evaluate and optimize future therapies, such as tissue engineering of heart valves. Changing hemodynamic conditions with age can explain initial changes in matrix composition and consequent mechanical properties, but cannot explain the ongoing changes in valve dimensions and matrix composition at older age. abstract_id: PUBMED:28697893 Mechanical valves in the pulmonary position: An international retrospective analysis. Objective: Life expectancy of patients with congenital heart disease has improved over the past decades, increasing the need for a durable pulmonary prosthetic valve. Biological valves in various forms have become the valve of choice for pulmonary valve replacement (PVR), but structural valve deterioration is unavoidable in the long term. Use of a mechanical valve could be an alternative, but data on long-term outcomes are sparse. Methods: We retrospectively collected and analyzed data on 364 patients with mechanical valves implanted in the pulmonary position between 1965 and 2014. The data originate from medical centers in Barcelona (Spain), Graz (Austria), Groningen (the Netherlands), Munich (Germany), Rochester (United States), Seoul (Republic of Korea), and Tehran (Iran). Results: Median follow-up duration was 4.26 years (range, 0-27 years), mean age at implantation was 27.16 ± 12.2 years. 
Tetralogy of Fallot was the most common primary cardiac diagnosis, with a subgroup of 69.8%. Freedom from valvular thrombosis was 91% (95% confidence interval [CI], 87%-94%) at 5 years and 86% (95% CI, 81%-91%) at 10 years post-PVR. With a success rate up to 88%, thrombolysis was a successful therapy. Freedom from reoperation was 97% (95% CI, 94%-99%) at 5 years post-PVR and 91% (95% CI, 85%-95%) at 10 years. Conclusions: Mechanical PVR is associated with a limited risk of valvular thrombosis. Thrombolysis was an effective treatment in the majority. Answer: Pulmonary autografts, also known as the Ross procedure, have been compared to mechanical valves in various studies to assess their outcomes in aortic valve replacement. According to a prospective randomized trial, pulmonary autografts demonstrated superior hemodynamic performance compared to mechanical aortic valves in young patients with aortic valve stenosis, as indicated by a significantly lower mean gradient (2.6 mm Hg vs 10.9 mm Hg, p = 0.0005). However, this did not result in an accelerated left ventricular mass regression, and the clinical advantages such as reduced valve-related complications and lesser myocardial strain would need to be proven in the long term (PUBMED:16305870). In terms of mechanical properties, pulmonary autografts have been shown to be less stiff than native aortic roots at systemic pressures, which may contribute to progressive autograft root dilatation (PUBMED:27765169). This suggests that the biomechanical remodeling of pulmonary autografts may be inadequate to achieve native aortic mechanical properties. When considering the risk of circulating microemboli, pulmonary autografts are less likely to be the source of microemboli compared to mechanical valves, as evidenced by the significantly fewer high-intensity transient Doppler signals (HITS) detected in patients with autografts (PUBMED:9323070). Mechanical valves in the pulmonary position have been associated with a limited risk of valvular thrombosis, and thrombolysis has been an effective treatment in the majority of cases. However, the risk of thromboembolic events and valve failure is a concern, especially in the context of pregnancy, where the management of anticoagulation is challenging and associated with poor outcomes (PUBMED:31477226, PUBMED:28697893). Overall, while pulmonary autografts may provide better hemodynamic performance and a lower risk of microemboli, the long-term clinical advantages over mechanical valves are not fully established, and the choice between the two options may depend on individual patient factors and the specific clinical scenario.
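Several abstracts in this record report time-to-event outcomes (for example, freedom from valvular thrombosis of 91% at 5 years and 86% at 10 years in PUBMED:28697893). The abstracts do not state the estimator used, but such "freedom from event" percentages are conventionally derived with the Kaplan-Meier product-limit method; the short Python sketch below illustrates that calculation on hypothetical follow-up data (the times, censoring flags, and resulting percentages are illustrative only and are not taken from any cited study).

```python
# Minimal Kaplan-Meier sketch: "freedom from event" at a given time horizon.
# All data below are hypothetical; the cited abstracts report only summary percentages.

def kaplan_meier(times, events):
    """times: follow-up in years; events: 1 = valve thrombosis/reoperation, 0 = censored."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    survival = 1.0
    curve = []  # (time, estimated freedom from event)
    for i in order:
        if events[i] == 1:
            survival *= (at_risk - 1) / at_risk
            curve.append((times[i], survival))
        at_risk -= 1  # an event or a censoring removes the patient from the risk set
    return curve

def freedom_from_event_at(curve, horizon):
    est = 1.0
    for t, s in curve:
        if t <= horizon:
            est = s
    return est

# Hypothetical cohort of 10 valves followed for 0.5-12 years
times  = [0.5, 1.2, 2.0, 3.5, 4.1, 5.0, 6.3, 8.0, 9.5, 12.0]
events = [0,   0,   1,   0,   0,   1,   0,   0,   1,   0]

curve = kaplan_meier(times, events)
print(f"Freedom from event at 5 years: {freedom_from_event_at(curve, 5):.0%}")
print(f"Freedom from event at 10 years: {freedom_from_event_at(curve, 10):.0%}")
```

Larger risk sets and a standard variance formula (for example, Greenwood's) would be needed to reproduce the confidence intervals quoted in the abstracts.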
Instruction: Is oxygen a key factor in the lipodystrophy phenotype? Abstracts: abstract_id: PUBMED:17049073 Is oxygen a key factor in the lipodystrophy phenotype? Background: The lipodystrophic syndrome (LD) is a disorder resulting from selective damage of adipose tissue by the antiretroviral drugs included in therapy controlling human immunodeficiency virus 1. In the therapy cocktail, the nucleoside reverse transcriptase inhibitors (NRTI) contribute to the development of this syndrome. The cellular target of NRTI was identified as the mitochondrial polymerase-gamma, and their toxicity was described as a mitochondrial DNA (mtDNA) depletion resulting in a mitochondrial cytopathy and involved in fat redistribution. No mechanism offers an explanation for the lipo-atrophic and lipo-hypertrophic phenotype of LD. To understand this occurrence, we proposed that pO2 (oxygen partial pressure) could be a key factor in the development of LD. For the first time, we report here differential effects of NRTIs on human adipose cells depending on pO2 conditions. Results And Discussion: We showed that hypoxic conditions could alter the adipogenesis process by modifying the expression of adipocyte markers such as leptin and the peroxisome proliferator-activated receptor PPARgamma and by inhibiting triglyceride (TG) accumulation in adipocytes. NRTI toxicity, followed in cultured adipose cells under normoxia versus hypoxia, showed differential effects of the drugs on the mtDNA of these cells depending on pO2 conditions. Moreover, NRTI-treated adipocytes were refractory to the inhibition of adipogenesis under hypoxia. Finally, our hypothesis that variations of pO2 could exist between adipose tissues of different anatomical origins was supported by staining of the hypoxia-induced angiopoietin ANGPTL4, which depended on the location of fat. Conclusion: The toxicity of NRTIs has been shown to be opposite on human adipose cells depending on oxygen availability. These data suggest that the LD phenotype may be a differential consequence of NRTI effects, depending on the metabolic status of the targeted adipose tissues, and provide new insights into the opposite effects of antiretroviral treatment, as observed for the lipo-atrophic and lipo-hypertrophic phenotype characteristic of LD. abstract_id: PUBMED:22985869 Hypertriglyceridemic waist phenotype is a risk factor for subclinical atherosclerosis in human immunodeficiency virus-infected patients. Background And Objective: To study the association between the hypertriglyceridemic waist phenotype and the presence of subclinical atherosclerosis in human immunodeficiency virus (HIV) infected patients. Patients And Methods: Cross-sectional study. The hypertriglyceridemic waist phenotype was considered present if the waist was ≥90 cm and triglycerides ≥2.0 mmol/l (178 mg/dl) in men, and ≥85 cm and ≥1.5 mmol/l (133 mg/dl) in women, respectively. We used the intima-media thickness (IMT) to detect carotid subclinical atherosclerosis. Results: We analyzed 152 patients, of whom 128 (84.2%) were receiving antiretroviral therapy, 40.7% were receiving protease inhibitors and 38.1% were treated with non-nucleoside reverse transcriptase inhibitors. The prevalence of the hypertriglyceridemic waist phenotype was 23.6% (95% confidence interval [CI] 16.8-30.3%). Patients with the hypertriglyceridemic waist phenotype had higher cardiovascular risk according to the Framingham score (11.09 [7.6] vs 3.88 [4], P=0.001), and lipodystrophy (33.3 vs. 13.7%, P=0.032) and metabolic syndrome (69.4 vs. 1.9%, P<0.001) were more frequent.
The IMT was elevated in 21 (13.8%) patients. Hypertriglyceridemic waist phenotype (odds ratio [OR] 4.66 [95%CI 1.05-20.6; P = 0.043]) and metabolic syndrome (OR 3.74 [95%CI 1.25-11.23; P = 0.018]) were independently associated with higher IMT. Conclusions: The hypertriglyceridemic waist phenotype is a risk factor for subclinical atherosclerosis in HIV infected patients and it is useful to detect patients with lipodystrophy, metabolic syndrome and high cardiovascular risk. abstract_id: PUBMED:32291752 Kosaki overgrowth syndrome: A novel pathogenic variant in PDGFRB and expansion of the phenotype including cerebrovascular complications. Heterozygous activating variants in platelet-derived growth factor, beta (PDGFRB) are associated with phenotypes including Kosaki overgrowth syndrome (KOGS), Penttinen syndrome and infantile myofibromatosis (IM). Here, we present three new cases of KOGS, including a patient with a novel de novo variant c.1477A > T p.(Ser493Cys), and the oldest known individual age 53 years. The KOGS phenotype includes characteristic facial features, tall stature, scoliosis, hyperelastic thin skin, lipodystrophy, variable intellectual and neurological deterioration, and abnormalities on brain imaging. Long-term outcome is unknown. Our cases confirm the phenotypic spectrum includes progressive flexion contractures, camptodactyly, widely spaced teeth, and constriction rings. We also propose novel occasional features including craniosynostosis, ocular pterygia, anterior chamber cleavage syndrome, early osteoporosis, increased pigmentation, recurrent haematomas, predisposition to cellulitis, nail dystrophy, carpal tunnel syndrome, recurrent hypoglycaemia in infancy, joint dislocation, and splenomegaly. Importantly, we report fusiform aneurysm of the basilar artery in two patients. Complications include thrombosis and stroke in the oldest reported patient and fatal rupture at the age of 21 in the patient with the novel variant. We conclude that cerebrovascular complications are part of the phenotypic spectrum of KOGS and KOGS-like disorders and suggest vascular imaging is indicated in these patients. abstract_id: PUBMED:34212753 SHORT Syndrome: Systematic Appraisal of the Medical and Dental Phenotype. Introduction: SHORT syndrome is a rare autosomal dominant condition described by its acronym of short stature, hyperextensibility of joints and/or inguinal hernia, ocular depression, Rieger abnormality, and teething delay. Individuals have a distinct progeroid craniofacial appearance with a triangular face, frontal bossing, hypoplastic or thin alae nasi, large low-set ears, and mandibular retrognathia. Objectives: To systematically appraise the literature and update the clinical phenotype with emphasis on the dental condition. Design: A systematic literature search was carried out to update the clinical phenotype, identifying reports of individuals with SHORT syndrome published after August 2015. The same search strategy but not limited to publication date was carried out to identify reports of the dental phenotype. Two independent reviewers screened 1937 articles with 55 articles identified for full-text review. Results: Nineteen individuals from 11 families were identified. Facial dysmorphism including ocular depression, triangular shaped face, frontal bossing, large low-set ears, and micrognathia were the most consistent features followed by lipodystrophy, insulin resistance, and intrauterine growth restriction. 
Teething delay, microdontia, hypodontia, and enamel hypoplasia have all been reported. Conclusion: Features that comprise the SHORT acronym do not accurately or completely describe the clinical phenotype. The craniofacial appearance is one of the most consistent features. Lipodystrophy and insulin resistance may also be considered cardinal features. After teething delay, enamel hypoplasia and microdontia are the most common dental manifestations. We present recommendations for the dental and orthodontic/orthognathic management of individuals with SHORT syndrome. abstract_id: PUBMED:32349771 Genotype-phenotype correlations of Berardinelli-Seip congenital lipodystrophy and novel candidate genes prediction. Background: Berardinelli-Seip congenital lipodystrophy (BSCL) is a heterogeneous autosomal recessive disorder characterized by an almost total lack of adipose tissue in the body. Mutations in the AGPAT2, BSCL2, CAV1 and PTRF genes define I-IV subtype of BSLC respectively and clinical data indicate that new causative genes remain to be discovered. Here, we retrieved 341 cases from 60 BSCL-related studies worldwide and aimed to explore genotype-phenotype correlations based on mutations of AGPAT2 and BSCL2 genes from 251 cases. We also inferred new candidate genes for BSCL through protein-protein interaction and phenotype-similarity. Results: Analysis results show that BSCL type II with earlier age of onset of diabetes mellitus, higher risk to suffer from premature death and mental retardation, is a more severe disorder than BSCL type I, but BSCL type I patients are more likely to have bone cysts. In BSCL type I, females are at higher risk of developing diabetes mellitus and acanthosis nigricans than males, while in BSCL type II, males suffer from diabetes mellitus earlier than females. In addition, some significant correlations among BSCL-related phenotypes were identified. New candidate genes prediction through protein-protein interaction and phenotype-similarity was conducted and we found that CAV3, EBP, SNAP29, HK1, CHRM3, OBSL1 and DNAJC13 genes could be the pathogenic factors for BSCL. Particularly, CAV3 and EBP could be high-priority candidate genes contributing to pathogenesis of BSCL. Conclusions: Our study largely enhances the current knowledge of phenotypic and genotypic heterogeneity of BSCL and promotes the more comprehensive understanding of pathogenic mechanisms for BSCL. abstract_id: PUBMED:32413188 Genotype-phenotype analysis of LMNA-related diseases predicts phenotype-selective alterations in lamin phosphorylation. Laminopathies are rare diseases associated with mutations in LMNA, which encodes nuclear lamin A/C. LMNA variants lead to diverse tissue-specific phenotypes including cardiomyopathy, lipodystrophy, myopathy, neuropathy, progeria, bone/skin disorders, and overlap syndromes. The mechanisms underlying these heterogeneous phenotypes remain poorly understood, although post-translational modifications, including phosphorylation, are postulated as regulators of lamin function. We catalogued all known lamin A/C human mutations and their associated phenotypes, and systematically examined the putative role of phosphorylation in laminopathies. In silico prediction of specific LMNA mutant-driven changes to lamin A phosphorylation and protein structure was performed using machine learning methods. Some of the predictions we generated were validated via assessment of ectopically expressed wild-type and mutant LMNA. 
Our findings indicate phenotype- and mutant-specific alterations in lamin phosphorylation, and that some changes in phosphorylation may occur independently of predicted changes in lamin protein structure. Therefore, therapeutic targeting of phosphorylation in the context of laminopathies will likely require mutant- and kinase-specific approaches. abstract_id: PUBMED:19768759 Evolution of the phenotype in a family with an LMNA gene mutation presenting with isolated cardiac involvement. The aim of this study is to report the evolution of a phenotype in members of a single family carrying the heterozygous exon 1 c.178 C/G, p.Arg 60 Gly LMNA gene mutation. All mutated family members underwent neurological and cardiological assessments for a period ranging from 10 to 20 years. At onset, 4 affected adult members presented a phenotype that required pacemaker implantation. Three subjects underwent cardiac transplantation leading to long-term survival in 2 of them. One of the 3 longest surviving relatives manifested late lipodystrophy, and the other 2 had lipodystrophy, insulin-resistant diabetes, and distal peripheral neuropathy. The findings demonstrate that the exon 1 c.178 C/G, p.Arg 60 Gly LMNA gene mutation is associated with a novel phenotype featuring cardiac involvement followed by late lipodystrophy, diabetes, and peripheral axonal neuropathy. abstract_id: PUBMED:33618333 Functional analysis of POLD1 p.ser605del variant: the aging phenotype of MDPL syndrome is associated with an impaired DNA repair capacity. Mandibular hypoplasia, Deafness and Progeroid features with concomitant Lipodystrophy define a rare systemic disorder, named MDPL Syndrome, due to almost always a de novo variant in POLD1 gene, encoding the DNA polymerase δ. We report a MDPL female heterozygote for the recurrent p.Ser605del variant. In order to deepen the functional role of the in frame deletion affecting the polymerase catalytic site of the protein, cellular phenotype has been characterised. MDPL fibroblasts exhibit in vitro nuclear envelope anomalies, accumulation of prelamin A and presence of micronuclei. A decline of cell growth, cellular senescence and a blockage of proliferation in G0/G1 phase complete the aged cellular picture. The evaluation of the genomic instability reveals a delayed recovery from DNA induced-damage. Moreover, the rate of telomere shortening was greater in pathological cells, suggesting the telomere dysfunction as an emerging key feature in MDPL. Our results suggest an alteration in DNA replication/repair function of POLD1 as a primary pathogenetic cause of MDPL. The understanding of the mechanisms linking these cellular characteristics to the accelerated aging and to the wide spectrum of affected tissues and clinical symptoms in the MDPL patients may provide opportunities to develop therapeutic treatments for progeroid syndromes. abstract_id: PUBMED:15242882 The phenotype of motor neuropathies associated with BSCL2 mutations is broader than Silver syndrome and distal HMN type V. Silver syndrome is a rare autosomal dominant neurodegenerative disorder characterized by marked amyotrophy and weakness of small hand muscles and spasticity in the lower limbs. The locus for Silver syndrome (SPG17) was assigned to a 13 cM region on chromosome 11q12-q14 in a single large pedigree. We recently found heterozygous mutations in the Berardinelli-Seip congenital lipodystrophy (BSCL2, seipin) gene causing SPG17 and distal hereditary motor neuropathy type V (distal HMN V). 
Here we report the clinical features of two families with heterozygous BSCL2 mutations. Interestingly, both families show a clinical phenotype different from classical Silver syndrome, and in some patients the phenotype is also different from distal HMN V. Patients in the first family had marked spasticity in the lower limbs and very striking distal amyotrophy that always started in the legs. Patients in the second family had distal amyotrophy sometimes starting and predominating in the legs, but no pyramidal tract signs. These observations broaden the clinical phenotype of disorders associated with BSCL2 mutations, having consequences for molecular genetic testing. abstract_id: PUBMED:27488495 Adiposity influences airway wall thickness and the asthma phenotype of HIV-associated obstructive lung disease: a cross-sectional study. Background: Airflow obstruction, which encompasses several phenotypes, is common among HIV-infected individuals. Obesity and adipose-related inflammation are associated with both COPD (fixed airflow obstruction) and asthma (reversible airflow obstruction) in HIV-uninfected persons, but the relationship to airway inflammation and airflow obstruction in HIV-infected persons is unknown. The objective of this study was to determine if adiposity and adipose-associated inflammation are associated with airway obstruction phenotypes in HIV-infected persons. Methods: We performed a cross-sectional analysis of 121 HIV-infected individuals assessed with pulmonary function testing, chest CT scans for measures of airway wall thickness (wall area percent [WA%]) and adipose tissue volumes (mediastinal and subcutaneous), as well as HIV- and adipose-related inflammatory markers. Participants were defined as COPD phenotype (post-bronchodilator FEV1/FVC < lower limit of normal) or asthma phenotype (doctor-diagnosed asthma or bronchodilator response). Pearson correlation coefficients were calculated between adipose measurements, WA%, and pulmonary function. Multivariable logistic and linear regression models were used to determine associations of airflow obstruction and airway remodeling (WA%) with adipose measurements and participant characteristics. Results: Twenty-three (19 %) participants were classified as the COPD phenotype and 33 (27 %) were classified as the asthma phenotype. Body mass index (BMI) was similar between those with and without COPD, but higher in those with asthma compared to those without (mean [SD] 30.7 kg/m(2) [8.1] vs. 26.5 kg/m(2) [5.3], p = 0.008). WA% correlated with greater BMI (r = 0.55, p < 0.001) and volume of adipose tissue (subcutaneous, r = 0.40; p < 0.001; mediastinal, r = 0.25; p = 0.005). Multivariable regression found the COPD phenotype associated with greater age and pack-years smoking; the asthma phenotype with younger age, female gender, smoking history, and lower adiponectin levels; and greater WA% with greater BMI, younger age, higher soluble CD163, and higher CD4 counts. Conclusions: Adiposity and adipose-related inflammation are associated with an asthma phenotype, but not a COPD phenotype, of obstructive lung disease in HIV-infected persons. Airway wall thickness is associated with adiposity and inflammation. Adipose-related inflammation may play a role in HIV-associated asthma. Answer: Yes, oxygen appears to be a key factor in the lipodystrophy phenotype. 
The study reported in abstract PUBMED:17049073 found that the toxicity of nucleoside reverse transcriptase inhibitors (NRTIs), which are used in antiretroviral therapy for HIV-1 and contribute to the development of lipodystrophic syndrome (LD), have opposite effects on human adipose cells depending on the oxygen availability. The research showed that hypoxia conditions could alter the adipogenesis process by modifying the expression of adipocyte markers such as leptin and PPARgamma and inhibiting triglyceride accumulation in adipocytes. Furthermore, NRTI-treated adipocytes were refractory to the inhibition of adipogenesis under hypoxia. The hypothesis that variations of pO2 (oxygen partial pressure) could exist between adipose tissue from different anatomical origins was supported by the location-dependent staining of the hypoxic-induced angiopoietin ANGPTL4. These findings suggest that the LD phenotype may be a differential consequence of NRTI effects, depending on the metabolic status of the targeted adipose tissues, and provide new insights into the opposite effects of antiretroviral treatment observed for the lipo-atrophic and lipo-hypertrophic phenotype characteristic of LD.
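One abstract in this record (PUBMED:22985869) defines the hypertriglyceridemic waist phenotype with explicit sex-specific cut-offs: waist ≥90 cm with triglycerides ≥2.0 mmol/l in men, and waist ≥85 cm with triglycerides ≥1.5 mmol/l in women. The Python sketch below simply encodes that decision rule; the function name and the example patients are illustrative and are not data from the study.

```python
# Sex-specific rule for the hypertriglyceridemic waist phenotype,
# as defined in PUBMED:22985869 (waist in cm, triglycerides in mmol/L).
# The function name and example data are illustrative, not taken from the study.

def hypertriglyceridemic_waist(sex: str, waist_cm: float, tg_mmol_l: float) -> bool:
    if sex == "male":
        return waist_cm >= 90 and tg_mmol_l >= 2.0
    if sex == "female":
        return waist_cm >= 85 and tg_mmol_l >= 1.5
    raise ValueError("sex must be 'male' or 'female'")

patients = [
    {"id": 1, "sex": "male", "waist_cm": 94, "tg_mmol_l": 2.3},
    {"id": 2, "sex": "female", "waist_cm": 82, "tg_mmol_l": 1.9},
    {"id": 3, "sex": "female", "waist_cm": 88, "tg_mmol_l": 1.6},
]

flagged = [p["id"] for p in patients
           if hypertriglyceridemic_waist(p["sex"], p["waist_cm"], p["tg_mmol_l"])]
print("Phenotype present in patients:", flagged)  # -> [1, 3]
```

Classifying patients with a rule of this kind is what allows the study to compare carotid intima-media thickness between phenotype-positive and phenotype-negative groups.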
Instruction: Understanding if, how and why non-adherent decisions are made in an Australian community sample: a key to sustaining medication adherence in chronic disease? Abstracts: abstract_id: PUBMED:25103184 Understanding if, how and why non-adherent decisions are made in an Australian community sample: a key to sustaining medication adherence in chronic disease? Background: Adherence to medications for chronic disease is sub-optimal. Current adherence-enhancing strategies do not seem to adequately address the fundamental need to sustain adherence or prevent non-adherence. Intentional non-adherence, involving active medication-taking decisions, is not well described within the Australian community setting. Understanding if, how and why non-adherent decisions are made may help develop strategies to sustain adherence in chronic disease. Objective: This study aimed to describe intentional non-adherent behavior in chronic disease within the Australian community setting and identify the factors that promote and prevent non-adherent decisions. Methods: In-depth, semi-structured interviews were conducted with 21 patients (12 rural, 9 metropolitan; New South Wales) prescribed medications for a diverse range of chronic conditions. Using the Theory of Planned Behavior as the theoretical framework, an iterative thematic framework analysis method was used to characterize the intentions and the decisions underlying non-adherent behavior. Data were indexed and charted within the thematic framework using Excel, and linked themes were combined, and associations and explanations drawn. Results: Although there was a strong intent to follow prescribers' recommendations, most patients described instances of intentionally non-adherent behavior. Trading between perceived treatment inefficacy, unfavorable side effects and unaffordable medication costs promoted non-adherent decisions; trusting prescriber-patient relationships, positive family values and lack of perceived control over treatment choice maintained adherent intentions. Intentional non-adherence was mostly temporary. Conclusions: Intentional medication non-adherence in chronic disease appears reversible and amenable to interventions that address treatment-related barriers such as medication affordability. Strategies that strengthen patient-prescriber relationships and empower patients as informed decision-makers may help maintain adherence intentions. Crucially, regular and sustained interventions that are refreshed to meet the changing needs of patients are needed to curb the temporal decline in adherence to chronic disease medications. abstract_id: PUBMED:34061364 Prevalence and predictors of medication non-adherence among older community-dwelling people with chronic disease in Singapore. Aims: To determine the prevalence and predictors of medication non-adherence among older community-dwelling people with at least one chronic disease in Singapore. Design: A single-centre cross-sectional study. Methods: The study was conducted in the largest tertiary public hospital in Singapore between May 2019 and December 2019. The community nurses of the hospital recruited a total of 400 community-dwelling older people aged ≥60 years old, who were diagnosed with at least one chronic disease and prescribed with at least one long-term medication. Medication non-adherence was assessed using the self-report 5-item Medication Adherence Report Scale, operationalized as a score of <25. 
A list of potential factors of medication non-adherence was structured based on the World Health Organization five-domain framework and collected using a self-report questionnaire. Results: Sixty percent (n = 240) of our participants were non-adherent to their medication regime. Older people who smoked (OR 2.89, 95% CI 1.14-7.33), perceived their medication regime as being complicated (OR 2.54, 95% CI 1.26-5.13), felt dissatisfied with their regime (OR 2.50, 95% CI 1.17-5.31), did not know the purpose of all their medications (OR 2.56, 95% CI 1.42-4.63) and experienced side effects (OR 3.32, 95% CI 1.14-9.67) were found to be predictive of medication non-adherence. Conclusion: Medication adherence was found to be poor in community-dwelling older people in Singapore. The predictors identified in this study can help guide healthcare professionals in identifying older people who are at risk of medication non-adherence and inform the development of interventions to improve adherence. Impact: Medication non-adherence, especially in the older population with chronic diseases, constitutes a serious problem as it undermines the efforts to reduce morbidity and mortality associated with the underlying chronic diseases. To improve adherence, our findings propose the importance of assessing the older person's treatment satisfaction, which includes examining the aspects of side effects, effectiveness and convenience. Additionally, we highlight the need to address the older person's medication knowledge deficit. abstract_id: PUBMED:32402728 Development of an electronic tool (e-AdPharm) to address unmet needs and barriers of community pharmacists to provide medication adherence support to patients. Background: Community pharmacists are best placed to improve medication adherence because they frequently interact with patients and have been trained to manage medication-related problems. Therefore, it is essential to equip pharmacists adequately to detect non-adherent patients quickly and intervene to improve medication adherence. Objective: To design e-AdPharm, a tool that addresses unmet needs and barriers of community pharmacists to provide medication adherence support to patients with chronic diseases. Methods: A qualitative study using 4 focus group discussions with community pharmacists was conducted with a semi-structured interview guide and discussions lasting for 1-2 h. The discussions covered the barriers and needs of pharmacists related to medication adherence support provided to patients, their expectations of an electronic tool based on prescription refills to help them provide this support, and the design of the tool. Focus group data were coded and analyzed using an iterative process, with thematic and descriptive analyses. Results: Twenty-six community pharmacists participated. Lack of time and motivation from pharmacists and patients were common barriers to the provision of medication adherence support. Accordingly, community pharmacists wished to measure medication adherence quickly, provide easily interpretable data to patients on their medication use, and raise the patient's awareness of non-adherence. The pharmacists expressed their need to have an electronic tool to share medication adherence information with the treating physician. Regarding the design of e-AdPharm, the pharmacists wanted a table displaying medication adherence with a color code representing adherence level. 
They also stressed the importance of a structured section enabling them to continuously document the interventions made and the need for patient follow-ups. Conclusions: e-AdPharm meets the needs and overcomes the barriers of community pharmacists to provide medication adherence support to their patients. Future studies should examine the feasibility of implementing e-AdPharm in community pharmacies and test its efficacy for improving medication adherence. abstract_id: PUBMED:37620970 Understanding reasons and determinants of medication non-adherence in community-dwelling adults: a cross-sectional study comparing young and older age groups. Background: Medication non-adherence has become a striking problem among patients with chronic diseases worldwide. However, literature on the prevalence, reasons and factors associated with medication non-adherence in the Singapore general population is still lacking. This study aimed to (1) estimate the prevalence of intentional and unintentional medication non-adherence in young (aged 21-64 years) and older adults (aged ≥ 65 years), respectively; (2) identify and compare the main reasons for non-adherence; and (3) examine the association between potential factors and non-adherence in each group. Methods: This study sampled 1,528 community-dwelling adults on medications (young adults: 766, older adults: 762) from a cross-sectional population health survey conducted in the northern and central regions of Singapore in 2018/2019. Self-reported medication non-adherence and its reasons were collected using a modified questionnaire and compared between the two groups. Multiple logistic regressions were conducted to examine the association between potential factors (e.g., social-demographic factors, smoking and drinking status, presence of diabetes, hypertension, or dyslipidaemia, and presence of depressive symptoms) and medication non-adherence in each group. Results: The prevalence of non-adherence was 38.4% and 22.3% in young and older adults, respectively, with young adults reporting higher unintentional and intentional non-adherence rates than older adults. "Afraid of developing drug dependence" was the most common reason in both groups (young: 74.8% vs. old: 73.5%). Compared to young adults (3.7%), "Not understanding medication labels" was more prevalent in older adults (8.8%). Presence of depressive symptoms was associated with non-adherence in both young (odds ratio [95% confidence interval]: 3.00 [1.79, 5.05]) and older adults (4.16 [2.31, 7.51]). Being employed (2.92 [1.76, 4.84]) and taking ≥ 2 medications (1.42 [1.04, 1.95]) had a positive association, while personal income of SGD1,000-4,000 (0.53 [0.36, 0.77]) and current smoking (0.61 [0.39, 0.95]) had an inverse association with non-compliance in young adults. Diagnosis of diabetes, hypertension, or dyslipidaemia (2.63 [1.25, 5.53]) was associated with higher odds of non-compliance in older adults. Conclusions: Young adults had a higher prevalence of medication non-adherence than older adults. The main reasons for non-adherence reported by young and older adults were generally comparable. Presence of depressive symptoms was a risk factor for medication non-adherence in both groups. abstract_id: PUBMED:22876236 Diagnosing medication non-adherence in a patient with myocardial infarction. Background: Medication non-adherence continues to be a major challenge facing the healthcare system. A case is presented of a 48-year-old man with myocardial infarction who was found to be non-adherent to multiple medications.
Conceptual models are reviewed along with current approaches for assessment and treatment of medication non-adherence. Design: Case report and literature review. Discussion: A theoretical model for medication non-adherence built on the Theory of Planned Behavior is presented. Empirical evidence is reviewed for determinants of non-adherent behavior such as health beliefs and self-efficacy. Current methods to assess medication non-adherence, including self-report, pill count, biological drug levels, pharmacy refill, and electronic bottles, are summarized along with their limitations. Finally, an individualized approach for assessment is described using the case presented and the conceptual framework outlined above. Follow-up for the patient and potential interventions to improve medication adherence are discussed. Conclusion: Despite the challenges, a conceptual framework for medication non-adherence can guide assessment and treatment. Further research into innovative and effective methods to detect and treat medication non-adherence is urgently needed to aid clinicians in treating this pervasive behavioral problem. abstract_id: PUBMED:32301066 Effect of health professional intervention on adherence to statin use according to the cause of patient non-adherence. Background: Non-adherence is a problem that particularly affects those with chronic diseases. Studying the causes for not following the treatment is necessary to choose the best intervention to improve non-adherence. Objective: To analyze how the intentionality of non-adherence modulates the effects of professional intervention in patients with hypercholesterolemia. Setting: Community pharmacies and primary care centres in Spain. Methods: A 6-month randomized controlled trial was conducted in 46 community pharmacies and 50 primary care centres in Spain. Adherence to statin therapy was measured with the Morisky-Green-Levine test. Non-adherence was classified based on its intentionality. Results: 746 patients were recruited for the study (465 non-adherent and 281 adherent). Of those, 237 were randomly assigned to the intervention group and 228 to the non-intervention group. Overall, 56.5% of non-adherent patients were classified as unintentional non-adherents and 43.5% as intentional non-adherents. More patients in the intervention group finished being adherent compared with the non-intervention group (+17.2% for intentional non-adherents and +27.4% for unintentional non-adherents). The percentage of patients in the intervention group who completed the study as adherent was higher among those who previously had unintentional non-adherence (66.4%) compared to those with intentional non-adherence (55.3%) (p < 0.001). Conclusion: Intervention provided to patients with unintentional non-adherence was more effective than intervention provided to patients with intentional non-adherence. abstract_id: PUBMED:27489544 Prescription Factors Associated with Medication Non-adherence in Japan Assessed from Leftover Drugs in the SETSUYAKU-BAG Campaign: Focus on Oral Antidiabetic Drugs. Background: Medication adherence has an important influence on health outcomes in patients with chronic diseases. However, few studies have been performed in Japan to determine factors related to medication non-adherence. Objective: The aim of this study was to identify prescription factors related to medication non-adherence by investigating patient characteristics, all prescriptions, and prescriptions for oral antidiabetic drugs (OADs).
Methods: A retrospective cross-sectional survey of prescription data about the implementation of the dosing regimen was performed at community pharmacies engaged in appropriate use of leftover drugs. We evaluated the amount of drugs originally prescribed and the reduced amount after use of leftover drugs, and then calculated the prescription reduction ratio (PRR). We analyzed prescription factors contributing to non-adherence based on the PRR. Results: Prescription information for 1207 patients was reviewed, revealing that patients were non-adherent to 58% of prescriptions. Lack of a drug copayment, fewer concurrent drugs, and drugs not in single-dose packaging were associated with non-adherence. Among the 1207 patients, 234 prescriptions for diabetes and 452 OAD formulations were included. Forty-seven percent of prescriptions and 29% of the formulations were non-adherent. A higher dosing frequency and preprandial administration were associated with non-adherence. Among the OADs, adherence was lower for α-glucosidase inhibitors and biguanides than for sulfonylureas. Conclusions: Several factors related to patient characteristics, general drug prescriptions, and OAD prescriptions were associated with non-adherence. Further consideration will be needed to improve adherence to medication in Japan. Health care providers should perform more careful monitoring of adherence in patients with the factors identified by this study. abstract_id: PUBMED:30504671 Effects of Using the Adherence Score Sheet According to Application Timing in Improving Medication Adherence in SETSUYAKU-BAG Campaign. Maintaining medication adherence is a critical issue in determining health outcomes in patients with chronic diseases. However, many patients do not adhere to their prescribed regimens. This study aimed to determine the effects of using adherence score sheets according to application timing in improving medication adherence among non-adherent outpatients. In community pharmacies, both patients and pharmacists evaluated medication adherence based on application timing (morning, noon, evening, and before going to bed) on an 11-level scale (0-10) for >4 months. A total of 58 outpatients were included in the study. The median scores across application timings at the intermediate (patient 9.3, pharmacist 9.0) and final (patient 9.5, pharmacist 9.5) analyses were significantly higher than those at baseline (patient 7.6, pharmacist 7.0). At the end of the investigation, the proportion of non-adherent patients prescribed hyperlipidemic medications was higher than that of patients prescribed medications for other lifestyle diseases. Approximately 80% of the patients reported improved medication adherence based on the questionnaires regarding their understanding of diseases and medications, medication awareness, and communication with pharmacists. Therefore, the utilization of an adherence score sheet according to application timing improved the medication adherence of patients with chronic diseases. abstract_id: PUBMED:38111691 Community Efficacy for Non-Communicable Disease Management and Medication Adherence: The Sequential Mediating Role of Self-Efficacy and Depressive Symptoms. Purpose: We assessed whether self-efficacy and depressive symptoms sequentially mediate the relationship between community efficacy for non-communicable disease management (COEN) and medication adherence, and whether these relationships differ by sex and age.
Patients And Methods: Overall, 662 individuals from 12 communities in China were interviewed twice 1 year apart. Serial mediation analysis examined whether the relationship between COEN and medication adherence was mediated by self-efficacy and depressive symptoms. Model invariance across sex and age groups was assessed using multi-group analysis. Results: Serial mediation analysis indicated that self-efficacy and depressive symptoms sequentially mediated relationship between COEN and medication adherence. Multi-group analysis by sex showed that the path from self-efficacy to medication adherence was significant only for females and from depressive symptoms to medication adherence was significant only for males. Conclusion: Interventions that enhance individual self-efficacy may be beneficial in decreasing depressive symptoms and improving medication adherence. abstract_id: PUBMED:38320955 Validating methods used to identify non-adherence adverse drug events in Canadian administrative health data. Aims: Medication non-adherence is a type of adverse drug event that can lead to untreated and exacerbated chronic illness, and that drives healthcare utilization. Research using medication claims data has attempted to identify instances of medication non-adherence using the proportion of days covered or by examining gaps between medication refills. We sought to validate these measures compared to a gold standard diagnosis of non-adherence made in hospital. Methods: This was a retrospective analysis of adverse drug events diagnosed during three prospective cohorts in British Columbia between 2008 and 2015 (n = 976). We linked prospectively identified adverse drug events to medication claims data to examine the sensitivity and specificity of typical non-adherence measures. Results: The sensitivity of the non-adherence measures ranged from 22.4% to 37.5%, with a proportion of days covered threshold of 95% performing the best; the non-persistence measures had sensitivities ranging from 10.4% to 58.3%. While a 7-day gap was most sensitive, it classified 61.2% of the sample as non-adherent, whereas only 19.6% were diagnosed as such in hospital. Conclusions: The methods used to identify non-adherence in administrative databases are not accurate when compared to a gold standard diagnosis by healthcare providers. Research that has relied on administrative data to identify non-adherent patients both underestimates the magnitude of the problem and may label patients as non-adherent who were in fact adherent. Answer: The study conducted with an Australian community sample aimed to understand the intentional non-adherent behavior in chronic disease within the Australian community setting and identify factors that promote and prevent non-adherent decisions (PUBMED:25103184). The results indicated that although patients generally intended to follow prescribers' recommendations, most described instances of intentionally non-adherent behavior. Factors that promoted non-adherence included perceived treatment inefficacy, unfavorable side effects, and unaffordable medication costs. On the other hand, factors that maintained adherent intentions were trusting prescriber-patient relationships, positive family values, and a lack of perceived control over treatment choice. The study concluded that intentional medication non-adherence in chronic disease appears reversible and amenable to interventions that address treatment-related barriers such as medication affordability. 
Strategies that strengthen patient-prescriber relationships and empower patients as informed decision-makers may help maintain adherence intentions. Regular and sustained interventions that are refreshed to meet the changing needs of patients are needed to curb the temporal decline in adherence to chronic disease medications (PUBMED:25103184).
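Two abstracts in this record rest on quantitative adherence measures: PUBMED:34061364 flags non-adherence when the 5-item Medication Adherence Report Scale total is below 25, and PUBMED:38320955 evaluates claims-based measures such as the proportion of days covered (PDC) and gaps between refills, including a 95% PDC threshold and a 7-day gap rule. The Python sketch below shows one common way the claims-based measures are computed from refill records; the refill data, the observation window, and details such as how overlapping supplies are handled are illustrative assumptions, not the authors' published code.

```python
# Claims-based adherence measures (illustrative implementation; the cited study
# does not publish its code, so details such as overlap handling are assumptions).
from datetime import date

refills = [  # (fill date, days supplied) for one drug over an observation window
    (date(2024, 1, 1), 30),
    (date(2024, 2, 5), 30),   # 5-day gap after the first supply ran out
    (date(2024, 3, 6), 30),
]
window_start, window_end = date(2024, 1, 1), date(2024, 3, 31)

# Proportion of days covered (PDC): fraction of days in the window with drug on hand.
covered = set()
for fill_date, days in refills:
    for offset in range(days):
        d = fill_date.toordinal() + offset
        if window_start.toordinal() <= d <= window_end.toordinal():
            covered.add(d)
window_days = window_end.toordinal() - window_start.toordinal() + 1
pdc = len(covered) / window_days

# Refill-gap measure: days between exhausting one supply and the next fill.
gaps = []
for (d1, s1), (d2, _) in zip(refills, refills[1:]):
    gaps.append((d2 - d1).days - s1)
max_gap = max(gaps)

print(f"PDC = {pdc:.3f} -> non-adherent at a 0.95 threshold: {pdc < 0.95}")
print(f"Largest refill gap = {max_gap} days -> non-adherent at a 7-day gap rule: {max_gap > 7}")
```

Against a clinician-made gold-standard diagnosis, the sensitivity and specificity of such rules can then be tabulated, which is the comparison PUBMED:38320955 reports.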
Instruction: Valve cuspidity: a risk factor for aortic valve repair? Abstracts: abstract_id: PUBMED:26798709 Can the Results of Aortic Valve Repair Equal the Results of a Biologic Aortic Valve Replacement? Aortic valve replacement (AVR) has been the default procedure for the surgical management of aortic valve disease, with repair techniques heterogeneously and infrequently used. However, surgical aortic valve repair has evolved with improved techniques. Yet many questions remain regarding the ideal techniques and real-world applicability and effectiveness of valve repair. The AORTA Great Debate highlighted and discussed the controversies regarding the surgical management of aortic valve disease. abstract_id: PUBMED:31084444 Predictors of Aortic Valve Repair Failure. Aortic valve repair is the preferred approach for the treatment of severe aortic insufficiency (AI), as it allows patients to keep their native aortic valve, thus substantially reducing the risk of prosthesis-related complications. Several studies have documented excellent long-term outcomes of aortic valve repair. The major complication of this operation is AI recurrence, with ensuingneed for reoperation. The surgical experience accumulated over the last two decades has allowed for better understanding of the mechanisms of recurrent AI after aortic valve repair. Herein, we review the current state of knowledge on predictors of aortic valve repair failure. These include unaddressed annular dilation, residual cusp prolapse or retraction, commissural orientation, and use of patch material. This enhanced understanding has led to the development of increasingly refined techniques and improved patient outcomes. Continued follow-up and detailed data collection at the time of surgery, together with three-dimensional echo imaging, will allow further improvements in aortic valve repair. abstract_id: PUBMED:36351612 Successful Repair of the Quadricuspid Aortic Valve. Quadricuspid aortic valve (QAV) is rare. Aortic valve replacement is the most frequent treatment for those patients with severe regurgitation or stenosis of the QAV. However, as our comprehension of the aortic valve grows, to repair a dysfunctional QAV becomes more and more feasible. We present a case of successful repair of the QAV using the surgical strategy of bicuspidization. abstract_id: PUBMED:37067839 Aortic valve repair surgery : state of the art and clinical experience of the university hospital center of Liege In the past 20 years, there has been a real development of aortic valve repair techniques with an increasing number of publications describing the long-term benefits of aortic valve repair in terms of survival, freedom from major adverse valve related-events and reoperations. Aortic valve repair can now be considered as a valuable alternative to prosthetic valve replacement in patients with dystrophic ascending aorta pathology associated or not to aortic insufficiency with pliable leaflets. In this paper, the authors describe the state of the art of aortic valve repair and present their clinical experience with aortic valve repair surgery in the university hospital center of Liege from April 2021 to September 2022. abstract_id: PUBMED:30871890 Aortic Valve Repair: Where Are We Now? The advent of aortic valve repair alongside the well-established technique of valve replacement changed the landscape of cardiac surgery, as well as the lives of patients suffering from valvular disease. 
Repair represents a novel option in those unfit for replacement and avoids the burden of lifelong anticoagulation in younger patients. Despite this, the associated risk of persistence of aortic insufficiency, and therefore reoperation, with valvular repair renders clinical decision making between the two techniques difficult. Unlike the burden of evidence supporting mitral valve repair over replacement, the debate surrounding aortic valve surgery continues. This article aims to explore the development of operative techniques underlying aortic valve repair as well as to summarise existing research into short- and long-term patient outcomes in both aortic valve repair and replacement. abstract_id: PUBMED:31240175 Surgical anatomy of the aortic valve and root: implications for valve repair. The aortic root is an important anatomical structure positioned at the center of the heart, making it critical to the functioning of the major cardiac chambers. Deep knowledge of the anatomical "surroundings" of the aortic root is crucial for the surgeon attempting to spare or repair a leaking aortic valve. In fact, root dissection is a necessary step to "skeletonize" the aortic valve, allowing the surgeon to work on the critical components of its structure, namely the aorto-ventricular junction, the virtual basal ring (VBR) and the sino-tubular junction (STJ). These three components, along with the insertion of the leaflets into the aortic wall, form the skeleton of the aortic valve that is essential in guaranteeing valve competence. A good anatomical proportion between the various components of the skeleton of the aortic valve needs to be verified, or re-established, in order to set the basis for an optimal aortic valve repair. Once the skeleton of the heart has been correctly addressed, the condition of the valve leaflets needs to be considered. Excess leaflet tissue is treated by leaflet plication or resection, and lack of leaflet tissue is addressed by tissue extension with autologous or heterologous materials. In the present manuscript, we highlight the principal structure of the aortic root and describe in detail each anatomical component. This basic anatomical knowledge is also important for a thorough understanding of the normal function of the valve and root structure during the cardiac cycle. The close boundaries existing between the left ventricular cavity and the aorta are important in explaining the sophisticated function of opening and closing of the aortic valve. Similarly, the role played by the sinuses of Valsalva in regulating the blood flow exiting the ventricle underlines the concept that "form follows function" and emphasizes the importance of a good anatomical reconstruction for an optimal and long-lasting valve function. abstract_id: PUBMED:33367683 Comparison of bicuspid and tricuspid aortic valve repair. Objectives: The aim of this study was to compare the outcomes of tricuspid aortic valve (TAV) and bicuspid aortic valve (BAV) repair. Methods: We assessed mortality, freedom from reoperation and the rate of aortic valve regurgitation recurrence. Mortality in both groups was compared with expected survival, and risk factors for reoperation were identified. Results: From January 2010 to April 2020, a total of 368 elective aortic valve repair procedures were performed, including 223 (60.6%) in patients with TAV. The perioperative mortality was 0.7% in the BAV group and 3.6% in the TAV group (P = 0.079).
Estimated survival at 5 years in the BAV versus TAV group was 97 ± 3% vs 80 ± 6%, respectively (P < 0.001). Freedom from reoperation at 5 years in the TAV versus BAV group was 96 ± 3% vs 93 ± 4%, respectively (P = 0.28). Grade 2 or more aortic valve regurgitation was noted in 9.9% of BAV patients and 11% of TAV patients (P = 0.66). Reoperation was predicted by cusp perforation [hazard ratio 15.86 (4.44-56.61); P < 0.001], the use of pericardial patch [hazard ratio 8.58 (1.96-37.53); P = 0.004] and aortic valve annulus diameter >27.5 mm [hazard ratio 3.07 (0.99-9.58); P = 0.053]. Conclusions: BAV repair is as durable as TAV repair. BAV is not a predictor of a higher rate of reoperations. BAV repair yields survival comparable to expected. Cusp perforation, aortic valve annulus diameter >27.5 mm and the use of pericardial patch adversely impact long-term outcome of aortic valve repair. abstract_id: PUBMED:31220531 Aortic Valve Repair: From Concept to Future Targets. Aortic valve repair has become an important treatment alternative to patients with aortic insufficiency. In this paper, we review refinements and advances in the understanding of core concepts of aortic valve anatomy and pathophysiology which have enhanced our approach to aortic valve preservation and repair. With these improvements in understanding and techniques, the outcome for aortic valve repair continues to improve. We also review current challenges in the field and explore potential areas of innovation and future study including timing of surgical intervention for aortic insufficiency, comparisons between aortic valve repair and replacement in randomized trials, and development of personalized surgical management plan based on patient-specific pathologies. These advances will further establish the role of aortic valve repair in the management of aortic valve and aortic disease. abstract_id: PUBMED:24294032 Mitral valve repair versus replacement in simultaneous aortic and mitral valve surgery. Background: Double valve replacement for concomitant aortic and mitral valve disease is associated with substantial morbidity and mortality. Excellent results with valve repair in isolated mitral valve lesions have been reported; therefore, whether its potential benefits would translate into better outcomes in patients with combined mitral-aortic disease was investigated. Methods: A retrospective observational study was performed involving 341 patients who underwent aortic valve replacement with either mitral valve repair (n=42) or double valve replacement (n=299). Data were analyzed for early mortality, late valve-related complications and survival. Results: The early mortality rate was 11.9% for valve repair and 11.0% for replacement (P=0.797). Survival (± SD) was 67±11% in mitral valve repair with aortic valve replacement and 81±3% in double valve replacement at five years of follow-up (P=0.187). The percentage of patients who did not experience major adverse valve-related events at five years of follow-up was 83±9% in those who underwent mitral valve repair with aortic valve replacement and 89±2% in patients who underwent double valve replacement (P=0.412). Age >70 years (HR 2.4 [95% CI 1.1 to 4.9]; P=0.023) and renal dysfunction (HR 1.9 [95% CI 1.2 to 3.7]; P=0.01) were independent predictors of decreased survival. Conclusions: In patients with double valve disease, both mitral valve repair and replacement provided comparable early outcomes. 
There were no significant differences in valve-related reoperations, anticoagulation-related complications or prosthetic valve endocarditis. Patient-related factors appear to be the major determinant of late survival, irrespective of the type of operation. abstract_id: PUBMED:38204640 Aortic valve cusp repair does not affect durability of modified aortic valve reimplantation for tricuspid aortic valves. Objective: During aortic valve reimplantation, cusp repair may be needed to produce a competent valve. We investigated whether the need for aortic valve cusp repair affects aortic valve reimplantation durability. Methods: Patients with tricuspid aortic valves who underwent aortic valve reimplantation from January 2002 to January 2020 at a single center were retrospectively analyzed. Propensity matching was used to compare outcomes between patients who did and did not require aortic valve cusp repair. Results: Cusp repair was performed in 181 of 756 patients (24%). Patients who required cusp repair were more often male, were older, had more aortic valve regurgitation, and less often had connective tissue disease. Patients who underwent cusp repair had longer aortic clamp time (124 ± 43 minutes vs 107 ± 36 minutes, P = .001). In-hospital outcomes were similar between groups and with no operative deaths. A total of 98.3% of patients with cusp repair and 99.3% of patients without cusp repair had mild or less aortic regurgitation at discharge. The median follow-up was 3.9 and 3.2 years for the cusp repair and no cusp repair groups, respectively. At 10 years, estimated prevalence of moderate or more aortic regurgitation was 12% for patients with cusp repair and 7.0% for patients without cusp repair (P = .30). Mean aortic valve gradients were 6.2 mm Hg and 8.0 mm Hg, respectively (P = .01). Ten-year freedom from reoperation was 99% versus 99% (P = .64) in the matched cohort and 97% versus 97%, respectively (P = .30), in the unmatched cohort. Survival at 10 years was 98% after cusp repair and 93% without cusp repair (P = .05). Conclusions: Aortic valve reimplantation for patients with tricuspid aortic valves has excellent long-term results. Need for aortic valve cusp repair does not affect long-term outcomes and should not deter surgeons from performing valve-sparing surgery. Answer: The cuspidity of the aortic valve, whether it is bicuspid or tricuspid, does not appear to be a significant risk factor for aortic valve repair failure. A study comparing the outcomes of tricuspid aortic valve (TAV) and bicuspid aortic valve (BAV) repair found that BAV repair is as durable as TAV repair and that BAV is not a predictor of a higher rate of reoperations. The study also reported that BAV repair yields survival comparable to expected, and identified cusp perforation, aortic valve annulus diameter >27.5 mm, and the use of pericardial patch as factors that adversely impact the long-term outcome of aortic valve repair (PUBMED:33367683). Additionally, another study investigating the durability of modified aortic valve reimplantation for tricuspid aortic valves found that the need for aortic valve cusp repair does not affect the long-term outcomes of the procedure. The study concluded that aortic valve reimplantation has excellent long-term results regardless of whether cusp repair is needed, suggesting that surgeons should not be deterred from performing valve-sparing surgery due to concerns about the need for cusp repair (PUBMED:38204640). 
Therefore, based on the available evidence, valve cuspidity itself does not seem to be a risk factor for aortic valve repair failure. Instead, other factors such as cusp perforation, the size of the aortic valve annulus, and the use of certain materials in the repair may have a more significant impact on the success and durability of the repair.
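The survival and freedom-from-reoperation figures quoted in this record are time-to-event estimates, and the predictors of reoperation are reported as Cox hazard ratios. As a purely illustrative sketch of where a number like "freedom from reoperation at 5 years" typically comes from, the Python snippet below implements the Kaplan-Meier product-limit estimator on an invented follow-up dataset; the abstracts do not state which estimation method or software the authors used, so this assumes the standard approach rather than reproducing their analysis.

```python
def kaplan_meier(times, events):
    """Product-limit estimate S(t) = prod over event times t_i <= t of (1 - d_i / n_i),
    where d_i is the number of events at t_i and n_i the number still at risk."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    survival, curve = 1.0, []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = 0
        censored = 0
        # group all subjects sharing the same follow-up time
        while i < len(order) and times[order[i]] == t:
            if events[order[i]]:
                deaths += 1
            else:
                censored += 1
            i += 1
        if deaths:
            survival *= 1.0 - deaths / at_risk
            curve.append((t, survival))
        at_risk -= deaths + censored
    return curve

if __name__ == "__main__":
    # Hypothetical follow-up in years; event=True means a reoperation occurred.
    follow_up = [1.0, 2.5, 3.0, 4.0, 5.0, 5.0, 6.0, 7.5, 8.0, 9.0]
    reoperated = [False, True, False, False, True, False, False, False, False, False]
    for t, s in kaplan_meier(follow_up, reoperated):
        print(f"freedom from reoperation just after year {t}: {s:.2f}")
```

Each event multiplies the running estimate by (1 - d_i/n_i), which is why late events, when few patients remain at risk, move the curve more than early ones.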
Instruction: Is directly measured low-density lipoprotein clinically equivalent to calculated low-density lipoprotein? Abstracts: abstract_id: PUBMED:28575159 Performance of Calculated and Directly Measured Low-Density Lipoprotein Cholesterol in a Pediatric Population. Objectives: An assessment of methods for the accurate measurement of low-density lipoprotein cholesterol (LDL-C) at decreased concentrations has not yet been carried out. We evaluated the performance of the Friedewald equation, a direct enzymatic assay, and a novel equation for determining LDL-C levels in a pediatric population with elevated triglycerides and reduced LDL-C levels. Methods: LDL-C concentrations of 127 pediatric patients were determined by the Friedewald equation, a direct enzymatic assay, and a novel equation. The bias of each approach was assessed at selected LDL-C cutoffs and after stratifying samples by triglyceride content. The concordance of each approach, relative to the reference method, was determined at LDL-C cut-points of less than 70, 70 to 99, and 100 to 129 mg/dL. Results: The Friedewald equation substantially underestimated pediatric LDL-C concentrations below 100 mg/dL in the presence of elevated triglycerides. The Ortho Clinical Diagnostics (Raritan, NJ) direct LDL assay was positively biased at low LDL-C levels. The novel equation most effectively reduced the bias of the Friedewald equation at all LDL-C concentrations and increased the concordance of sample classification to the reference method. Conclusions: The novel equation should be used for accurate measurement of pediatric LDL-C when the concentration is below 100 mg/dL in the presence of elevated triglycerides (150-399 mg/dL). abstract_id: PUBMED:35591827 Comparison of low-density lipoprotein cholesterol level calculated using the modified Martin/Hopkins estimation or the Friedewald formula with direct homogeneous assay measured low-density lipoprotein cholesterol. Introduction: Low-density lipoprotein cholesterol (LDL-C) represents the primary lipoprotein target for reducing cardiovascular risk (CV). The aim of our study is to compare the direct and the calculated LDL-C levels in the range below 1.8 mmol/l and 2.6 mmol/l depending on triglycerides, and to evaluate the variation in remnant lipoprotein cholesterol. Material And Methods: We investigated 14 906 lipid profiles from fasting blood samples of Hungarian individuals with triglycerides < 4.5 mmol/l. Total cholesterol (TC), high-density lipoprotein cholesterol (HDL-C), triglycerides (TG) and direct LDL-C were measured by the enzymatic assay. We calculated LDL-C by Friedewald's formula (F-LDL-C) and by using the new Martin/Hopkins estimation (MH-LDL-C). Results: For F-LDL-C below 1.8 mmol/l, MH-LDL-C was 58% between 1.8 and 2.59 mmol/l when TG was in the range 2.3-4.5 mmol/l. For F-LDL-C below 2.6 mmol/l, the MH-LDL-C concordance was 73% in the same TG range (2.3-4.5 mmol/l. If MH-LDL-C was less than 1.8 mmol/l or between 1.8 and 2.59 mmol/l, the difference between non-HDL-C (TC - HDL-C = AC: atherogenic cholesterol) and (MH)LDL-C was less than 0.8 mmol/l in the TG range below 2.3 mmol/l. The remnant lipoprotein cholesterol values were on average 0.5 mmol/l lower by the Martin/Hopkins estimation compared to the Friedewald's calculation if the TG was above 2.3 mmol/l. Conclusions: The Friedewald equation tends to underestimate LDL-C levels in very high and high-risk settings. 
Our analysis supports the conclusion that in Hungarian patients, LDL-C estimation using the Martin/Hopkins formula, which is validated by the beta-quantification method, yields a more accurate LDL-C value than that calculated by the Friedewald formula. abstract_id: PUBMED:21122658 Is directly measured low-density lipoprotein clinically equivalent to calculated low-density lipoprotein? Background: Low-density lipoprotein cholesterol (LDL-C) can either be calculated or measured directly. Clinical guidelines recommend the use of calculated LDL-C (C-LDL-C) to guide therapy because the evidence base for cholesterol management is derived almost exclusively from trials that use C-LDL-C, with direct measurement of LDL-C (D-LDL-C) being reserved for those patients who are nonfasting or with significant hypertriglyceridemia. Objective: Our aim was to determine the clinical equivalence of directly measured-LDL-C, using a Siemens Advia Chemistry System, and fasting C-LDL-C. Methods: Eighty-one subjects recruited for two cholesterol treatment studies had at least one C-LDL-C and D-LDL-C performed simultaneously; 64 had a repeat lipid assessment after 4 to 6 weeks of therapy, resulting in 145 pairs of C-LDL-C and D-LDL-C. Results: There was significant correlation between D-LDL-C and C-LDL-C (r² = 0.86). Correlation was significantly better in those with lower total cholesterol, triglycerides, and high-density lipoprotein. In 60% of subjects, the difference between D-LDL-C and C-LDL-C was more than 5 mg/dL and greater than 6%. Clinical concordance between D-LDL-C and C-LDL-C was present in 40% of patients, whereas clinical discordance was noted in 25%. One-third had greater than a 15 mg/dL difference between D-LDL-C and C-LDL-C, whereas 25% had a greater than 20 mg/dL difference. In 47% of subjects, the difference between D-LDL-C and C-LDL-C at baseline and follow-up changed by a minimum of 10% or 10 mg/dL. Conclusions: Our findings suggest that D-LDL-C is not clinically equivalent to C-LDL-C. This puts into question the current recommendation of using D-LDL-C in situations in which C-LDL-C would be inaccurate. abstract_id: PUBMED:36525335 Association between measured or calculated small dense low-density lipoprotein cholesterol and oxidized low-density lipoprotein in subjects with or without type 2 diabetes mellitus. Background: Small dense low-density lipoprotein (sdLDL) possesses atherogenic potential and is predicted to be susceptible to atherogenic modifications, which further increases its atherogenicity. However, studies on the association between measured or estimated sdLDL cholesterol (sdLDL-C) levels and atherogenic modification in diverse population groups are lacking. Methods: Surplus serum samples were collected from male subjects with type 2 diabetes mellitus (DM) under treatment (n = 300) and without DM (non-DM; n = 150). sdLDL and oxidized LDL (oxLDL) levels were measured using the Lipoprint LDL subfractions kit (Quantimetrix Corporation) and the Mercodia oxidized LDL competitive enzyme-linked immunosorbent assay kit (Mercodia), respectively. The estimated sdLDL-Cs were calculated from two relevant equations. The effects of sdLDL-C on oxLDL were assessed using multiple linear regression (MLR) models. Results: The mean (±SD) of measured sdLDL-C and oxLDL concentrations were 11.8 ± 10.0 mg/dl and 53.4 ± 14.2 U/L in the non-DM group and 0.20 ± 0.81 mg/dl and 46.0 ± 15.3 U/L in the DM group, respectively. 
The effects of measured sdLDL-Cs were significant (p = 0.031), whereas those of estimated sdLDL-Cs were not (p = 0.060, p = 0.116) in the non-DM group in the MLR models. The effects of sdLDL-Cs in the DM group were not significant. Conclusion: In the general population, high level of sdLDL-C appeared to be associated with high level of oxLDL. The equation for estimating sdLDL-C developed from a general population should be applied with caution to a special population, such as patients with DM on treatment. abstract_id: PUBMED:10783327 Modulation of base excision repair by low density lipoprotein, oxidized low density lipoprotein and antioxidants in mouse monocytes. In the present study, we found that oxidized low density lipoprotein, but not low density lipoprotein, down-regulated base excision repair activity in extracts of mouse monocyte cell line PU5-1.8. An enzyme required in this pathway, DNA polymerase beta, was also down-regulated. In contrast, treatment of monocytes with a combination of ascorbate and alpha-tocopherol up-regulated base excision repair activity and expression of DNA polymerase beta. Co-treatment of monocytes with antioxidants plus oxidized low density lipoprotein prevented down-regulation by oxidized low density lipoprotein. Oxidative DNA damage, as measured by 8-hydroxyguanine accumulation in genomic DNA, was found in cells treated with oxidized low density lipoprotein; 8-hydroxyguanine was not found in the cells treated with low density lipoprotein, antioxidants or oxidized low density lipoprotein plus antioxidants. These results establish a linkage between the DNA base excision repair pathway, oxidative DNA damage and oxidized low density lipoprotein treatment in mouse monocytes. Since oxidized low density lipoprotein is implicated in chronic disease conditions such as atherogenesis, these findings facilitate understanding of genetic toxicology mechanisms related to human health and disease. abstract_id: PUBMED:32389186 Biomimetic sensors targeting oxidized-low-density lipoprotein with molecularly imprinted polymers. Oxidized-low-density lipoprotein (oxLDL) is well-recognized as an actual patho-atherogenic lipoprotein: elevated serum concentration of oxLDL increases the risk for developing atherosclerosis, leading to coronary artery disease (CAD). Herein, we report an approach for sensing oxLDL directly in serum with molecularly imprinted polymer (MIP) thin films on quartz crystal microbalance (QCM). The resulting MIP sensors show low cross-reaction toward low-density lipoprotein (LDL) and high-density lipoprotein (HDL): signals are around one magnitude smaller. Very-low-density lipoprotein (VLDL) and human serum albumin (HSA) do not lead to any significant sensor response. The sensor allowed for accurately assessing oxLDL over the detection range of 86-5600 μg dL-1, which covers the clinically relevant concentrations. The sensor determines oxLDL with recovery accuracy of 92-107% and a precision of 1-8% coefficient variation. Compared with commercially available oxLDL ELISA test kit our sensor reveals similar characteristics obtaining a correlation coefficient of 0.98. However, the sensors have rapid response times of 10 min compared to 210 min of ELISA, which demonstrates their efficiency in assessing this sensitive atherogenic biomarker for CAD diagnostics. abstract_id: PUBMED:24739131 Impact of short-term low-dose atorvastatin on low-density lipoprotein and high-density lipoprotein subfraction phenotype. 
Statins can significantly reduce low-density lipoprotein-cholesterol (LDL-C) and modestly raise or not alter high-density lipoprotein-cholesterol (HDL-C). However, their impact on high-density lipoprotein (HDL) and low-density lipoprotein (LDL) subfractions has been less examined. The aim of the present study was to investigate the short-term impact of low-dose atorvastatin on HDL and LDL subfractions in humans. In this randomized study, data from 52 subjects were analysed. Thirty-seven patients with atherosclerosis were randomized to treatment with atorvastatin 10 mg/day (n = 17) or 20 mg/day (n = 20) for 8 weeks, with 15 healthy subjects without therapy used as a control group. The lipid profile and lipoprotein subfractions were determined using the Lipoprint system at baseline and at 8 weeks. The data suggest that atorvastatin treatment (10 and 20 mg/day) for 8 weeks significantly decreases LDL-C levels and reduces the cholesterol concentration of all LDL subfractions, which is accompanied by an increase of the mean LDL particle size. Although 10 mg/day atorvastatin treatment for 8 weeks had no impact on the HDL subfraction, 20 mg/day atorvastatin for 8 weeks significantly increased the cholesterol concentration of large HDL particles and decreased the cholesterol concentration of small HDL particles without changing serum HDL-C levels in patients with atherosclerosis. Therefore, the results suggest that 20 mg/day atorvastatin treatment for 8 weeks may result in a favourable modification of the HDL subfraction phenotype in addition to its effects on the cholesterol concentration of all LDL subfractions and mean LDL particle size. abstract_id: PUBMED:25949053 A Leap above Friedewald Formula for Calculation of Low-Density Lipoprotein-Cholesterol. Purpose: The purpose was to compare the different calculated methods of low-density lipoprotein cholesterol (LDL-C) estimation and to determine which of them correlate best with the direct method. Materials And Methods: The records of 480 samples for lipid profile were analyzed. Apart from the direct method, LDL-C was calculated by Friedewald low-density lipoprotein cholesterol method (F-LDL-C), modified Friedewald low-density lipoprotein cholesterol method (MF-LDL-C), and Anandaraja low-density lipoprotein cholesterol method (A-LDL-C). Paired t-test and Pearson correlation were evaluated between the different methods. Degree of agreement between the calculated methods and direct method was detected by Bland-Altman graphical plots. Results: A strong correlation was found between all calculated LDL-C methods and direct low-density lipoprotein cholesterol method (D-LDL-C) assay, that is, F-LDL-C versus D-LDL-C = 0.94; A-LDL-C versus D-LDL-C = 0.93 and MF-LDL-C versus D-LDL-C = 0.95. No statistically significant difference was found between D-LDL-C and MF-LDL-C. Bland-Altman plot for MF-LDL-C showed minimal negative bias. Conclusions: The study pointed out that MF-LDL-C correlated maximally with D-LDL-C estimation at all levels of triglycerides and MF-LDL-C can be used in place of D-LDL-C when the direct method cannot be afforded. abstract_id: PUBMED:22540098 Triglycerides, total cholesterol, high density lipoprotein cholesterol and low density lipoprotein cholesterol in rats exposed to premium motor spirit fumes. Background: Deliberate and regular exposure to premium motor spirit fumes is common and could be a risk factor for liver disease in those who are occupationally exposed. 
A possible association between premium motor spirit fumes and plasma levels of triglyceride, total cholesterol, high density lipoprotein cholesterol and low density lipoprotein cholesterol using a rodent model could provide new insights into the pathology of diseases where cellular dysfunction is an established risk factor. Aim: The aim of this study was to evaluate the possible effect of premium motor spirit fumes on lipids and lipoproteins in workers occupationally exposed to premium motor spirit fumes using a rodent model. Materials And Methods: Twenty-five Wistar albino rats (of both sexes) were used for this study between the 4th of August and 7th of September, 2010. The rats were divided into five groups of five rats each. Group 1 rats were not exposed to premium motor spirit fumes (control group), group 2 rats were exposed for 1 hour daily, group 3 for 3 hours daily, group 4 for 5 hours daily and group 5 for 7 hours daily. The experiment lasted for a period of 4 weeks. Blood samples obtained from all the groups after 4 weeks of exposure were used for the estimation of plasma levels of triglyceride, total cholesterol, high density lipoprotein-cholesterol and low density lipoprotein-cholesterol. Result: Results showed a significant increase in mean plasma total cholesterol and low density lipoprotein levels (P<0.05). The mean triglyceride and total body weight were significantly lower (P<0.05) in the exposed group when compared with the unexposed. The plasma level of high density lipoprotein, the ratio of low density lipoprotein to high density lipoprotein and the ratio of total cholesterol to high density lipoprotein did not differ significantly in exposed subjects when compared with the control group. Conclusion: These results showed that frequent exposure to petrol fumes may be highly deleterious to the liver cells. abstract_id: PUBMED:30593303 Correlation of serum low density lipoprotein subfractions with routine lipid profile and physical indices in medical students from a university in 2016 Objective: To investigate the correlation among serum low density lipoprotein subfractions, lipoprotein(a) and other routine indices. Methods: Medical students who didn't experience cardiovascular events were recruited at a university in Nanjing City; their physical indicators were measured (including height, weight, waist circumference and hip circumference) and fasting blood was collected for a seven-item serum lipid panel. The Lipoprint system was used to detect low density lipoprotein subfractions, and the correlation among the indices was then analyzed. Results: A total of 84 students (40 male and 44 female) at the age of 20-29 were enrolled in the study. Levels of body mass index, waist-to-hip ratio, very low density lipoprotein and small dense low density lipoprotein in males were significantly higher than those in females, while levels of high density lipoprotein, apolipoprotein A1, intermediate density lipoprotein and mean low density lipoprotein size in males were significantly lower than those in females (P < 0.05). In this population, the abnormal rate of lipoprotein(a) reached 27.4% and was only significantly positively correlated with high density lipoprotein (r = 0.265, P = 0.015), and the mean low density lipoprotein size was significantly negatively correlated with waist-to-hip ratio, triglyceride and small dense low density lipoprotein, among others (P < 0.05).
Conclusion: Male medical students have more cardiovascular risk factors than young women, and the rate of abnormal serum lipoprotein(a) among medical students is relatively high. Answer: Directly measured low-density lipoprotein (LDL) is not clinically equivalent to calculated LDL (C-LDL). The study by PUBMED:21122658 found a significant correlation between directly measured LDL-C (D-LDL-C) and C-LDL-C, but also noted that in 60% of subjects, the difference between D-LDL-C and C-LDL-C was more than 5 mg/dL and greater than 6%. Clinical concordance between the two was present in only 40% of patients, and clinical discordance was noted in 25%. The study concluded that D-LDL-C is not clinically equivalent to C-LDL-C, questioning the recommendation of using D-LDL-C in situations where C-LDL-C would be inaccurate. Other studies have also highlighted discrepancies between calculated and directly measured LDL-C. PUBMED:28575159 reported that the Friedewald equation, commonly used for calculating LDL-C, substantially underestimated pediatric LDL-C concentrations below 100 mg/dL in the presence of elevated triglycerides. A novel equation was suggested for more accurate measurement in such cases. Similarly, PUBMED:35591827 found that the Friedewald equation tends to underestimate LDL-C levels in very high and high-risk settings, and suggested that the Martin/Hopkins formula provides a more accurate estimation of LDL-C than the Friedewald formula. PUBMED:25949053 also compared different calculated methods of LDL-C estimation with the direct method and found that a modified Friedewald formula correlated maximally with the direct method at all levels of triglycerides, suggesting it could be used in place of direct measurement when the latter cannot be afforded.
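To make the comparison in this answer concrete, the sketch below contrasts the arithmetic the abstracts discuss: the Friedewald estimate subtracts HDL-C and a fixed triglyceride term (TG/5 in mg/dL units, TG/2.2 in mmol/L) from total cholesterol, while the Martin/Hopkins approach replaces the fixed divisor with an adjustable factor looked up from the patient's triglyceride and non-HDL-C strata. This is a minimal, hypothetical Python illustration: the function names and example values are ours, and the adjustable divisor is passed in as a parameter rather than reproduced from the published lookup table.

```python
def ldl_friedewald(total_chol, hdl_chol, triglycerides, mgdl=True):
    """Friedewald estimate: LDL-C = TC - HDL-C - TG/k, with k = 5 (mg/dL) or 2.2 (mmol/L).
    Not considered valid when triglycerides exceed roughly 400 mg/dL (4.5 mmol/L)."""
    k = 5.0 if mgdl else 2.2
    return total_chol - hdl_chol - triglycerides / k

def ldl_adjustable_divisor(total_chol, hdl_chol, triglycerides, divisor):
    """Martin/Hopkins-style estimate: same structure, but the triglyceride divisor is
    chosen from a published table according to TG and non-HDL-C strata.
    The divisor must be supplied by the caller; it is not hard-coded here."""
    return total_chol - hdl_chol - triglycerides / divisor

if __name__ == "__main__":
    # Hypothetical fasting profile in mg/dL (illustrative values only).
    tc, hdl, tg = 180.0, 45.0, 250.0
    non_hdl = tc - hdl                      # atherogenic cholesterol (TC - HDL-C)
    f_ldl = ldl_friedewald(tc, hdl, tg)     # fixed TG/5 term
    # Example divisor only; a real implementation would look it up from the
    # Martin/Hopkins strata for this TG and non-HDL-C combination.
    mh_ldl = ldl_adjustable_divisor(tc, hdl, tg, divisor=6.0)
    remnant_f = non_hdl - f_ldl             # remnant cholesterol implied by each estimate
    remnant_mh = non_hdl - mh_ldl
    print(f"non-HDL-C={non_hdl:.0f}, Friedewald LDL-C={f_ldl:.0f}, adjustable-divisor LDL-C={mh_ldl:.0f}")
    print(f"remnant cholesterol: Friedewald {remnant_f:.0f} vs adjusted {remnant_mh:.0f}")
```

With elevated triglycerides the fixed TG/5 term grows faster than an adjusted term, so the Friedewald value comes out lower, which is consistent with the underestimation pattern described in PUBMED:28575159 and PUBMED:35591827.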
Instruction: Does a functional activity programme improve function, quality of life, and falls for residents in long term care? Abstracts: abstract_id: PUBMED:18845605 Does a functional activity programme improve function, quality of life, and falls for residents in long term care? Cluster randomised controlled trial. Objective: To assess the effectiveness of an activity programme in improving function, quality of life, and falls in older people in residential care. Design: Cluster randomised controlled trial with one year follow-up. Setting: 41 low level dependency residential care homes in New Zealand. Participants: 682 people aged 65 years or over. Interventions: 330 residents were offered a goal setting and individualised activities of daily living activity programme by a gerontology nurse, reinforced by usual healthcare assistants; 352 residents received social visits. Main Outcome Measures: Function (late life function and disability instruments, elderly mobility scale, FICSIT-4 balance test, timed up and go test), quality of life (life satisfaction index, EuroQol), and falls (time to fall over 12 months). Secondary outcomes were depressive symptoms and hospital admissions. Results: 473 (70%) participants completed the trial. The programme had no impact overall. However, in contrast to residents with impaired cognition (no differences between intervention and control group), those with normal cognition in the intervention group may have maintained overall function (late life function and disability instrument total function, P=0.024) and lower limb function (late life function and disability instrument basic lower extremity, P=0.015). In residents with cognitive impairment, the likelihood of depression increased in the intervention group. No other outcomes differed between groups. Conclusion: A programme of functional rehabilitation had minimal impact for elderly people in residential care with normal cognition but was not beneficial for those with poor cognition. Trial registration Australian Clinical Trials Register ACTRN12605000667617. abstract_id: PUBMED:24652878 Exploring the Quality of Life of Younger Residents Living in Long-Term Care Facilities. The purpose of this qualitative study was to explore the characteristics of "quality of life" of younger residents in long-term care facilities. This multimethod study employed in-depth interviews with younger residents, focus groups with staff members and interviews with management team members at two care facilities in British Columbia, Canada. Data analysis revealed three themes: (a) a new chapter in life, (b) experiencing quality of life, and (c) nature of social life. These themes highlight the characteristics of younger residents' quality of life and provide insights into the salient contributing factors. Findings of this study are useful in better understanding aspects of younger residents' quality of life and their psychosocial needs and consequently can guide decision making to provide an appropriate care environment for this population segment in long-term care settings. abstract_id: PUBMED:24096468 A project to reduce accidental falls among mentally ill residents in a long-term care facility Background: Individuals with mental illness are highly vulnerable to personal health and safety threats. In recent years, the increasing incidence of accidental falls among residents of long-term care facilities has been attributed to aging, disease symptoms, and sedative drug effects. 
Purpose: This project aimed to reduce the fall incidence rate for mentally ill residents in our hospital from 0.0015% to 0.0012% in order to reduce patient injuries and avoid the long-term health consequences of these injuries. Resolution: This project was conducted between January 1st, 2011 and December 31st, 2011. Our approach included direct observation, literature review, meeting discussions, and data compilation. The intervention composed four facets: (1) Staff: providing education and training skills to prevent patient falls; (2) residents: enhancing patient motivation to keep physically fit and assess medication side effects; (3) environment: increasing living condition safety; (4) policy: replacing defective equipment, performing equipment checks on schedule, and managing a fall-prevention program. Results: The post-intervention fall incidence rate for mentally ill residents was 0.0007%, which was significantly better than our target. Conclusions: We reviewed plans and improved the fall-prevention strategy for mentally ill residents of long-term care facilities. This project provides a reference for care program planners and administrators. abstract_id: PUBMED:35279091 Associations among quality of life, activities, and participation in elderly residents with joint contractures in long-term care facilities: a cross-sectional study. Background: Joint contractures and degenerative osteoarthritis are the most common joint diseases in the elderly population, can lead to limited mobility in elderly individuals, can exacerbate symptoms such as pain, stiffness, and disability, and can interfere with social participation and quality of life, thus affecting mental health. However, relevant studies on this topic are very limited. This study describes the associations of joint contracture categories and sites in elderly residents in long-term care facilities with their quality of life, activities, and participation. Methods: Elderly individuals with joint contractures who were residents in long-term care facilities were recruited. The World Health Organization (WHO) Quality of Life and the WHO Disability Assessment Schedule 2.0 were used to survey the participants. Correlations, multiple linear regressions, and multiple analyses of variance, with joint contractures as the response variable, were used in the statistical analysis. Results: The final statistical analysis included 232 participants. The explanatory power of contracture sites on activities and participation had a moderate strength of association (η2 = .113). Compared with elderly residents with joint contractures and osteoarthritis isolated to the upper limbs, those with joint contractures and osteoarthritis in both the upper and lower limbs had significantly worse activity and participation limitations. No significant differences in activity and participation were found between elderly residents with joint contractures affecting only the upper limbs and those with joint contractures affecting only the lower limbs (F1,226 = 2.604 and F1,226 = 0.674, nonsignificant). Osteoarthritis had the greatest impact on activity limitations and participation restrictions among elderly residents with joint contractures affecting both the upper and lower limbs (F1,226 = 6.251, p = .014). Conclusions: Elderly residents in long-term care facilities belonging to minority groups, with a history of stroke, and with osteoarthritis are at a high risk of developing activity limitations and participation restrictions. 
Moreover, compared with other contraction sites, regardless of osteoarthritis, joint contractures affecting both the upper and lower limbs were associated with the greatest activity limitations and participation restrictions. Trial Registration: This study has been registered in the Chinese Clinical Trial Registry, registration number and date: ChiCTR2000039889 (13/11/2020). abstract_id: PUBMED:21239085 Mindfulness meditation to improve care quality and quality of life in long-term care settings. Quality of long-term care has been the focus of 2 recent Institute of Medicine reports: "Improving the Quality of Long-Term Care"(1) and "Improving the Quality of Care in Nursing Homes."(2) Although there has been some improvement in care quality since regulatory reforms were enacted in 1987,(3) poor care persists.(4) Certified nursing assistants (CNAs) are challenged in the provision of optimal care by chronic stress in the workplace, leading to absenteeism, reduced job satisfaction, and increased turnover.(5-7) Mindfulness training, which cultivates a practice of being present in the moment, recognizing stressful situations when they arise, and responding to stress in an adaptive manner,(8) holds promise as a simple, inexpensive approach to reduce CNA stress and improve quality of care and quality of life for residents in long-term care settings. Formal and informal mindfulness practices can readily be incorporated into CNA educational programs. abstract_id: PUBMED:31766368 Insomnia, Benzodiazepine Use, and Falls among Residents in Long-term Care Facilities. Background: Falls are leading cause of injury among older people, especially for those living in long-term care facilities (LTCFs). Very few studies have assessed the effect of sleep quality and hypnotics use on falls, especially in Chinese LTCFs. The study aimed to examine the association between sleep quality, hypnotics use, and falls in institutionalized older people. Methods: We recruited 605 residents from 25 LTCFs in central Shanghai and conducted a baseline survey for sleep quality and hypnotics use, as well as a one-year follow-up survey for falls and injurious falls. Logistic regression models were applied in univariate and multivariate analysis. Results: Among the 605 participants (70.41% women, mean age 84.33 ± 6.90 years), the one-year incidence of falls and injurious falls was 21.82% and 15.21%, respectively. Insomnia (19.83%) and hypnotics use (14.21%) were prevalent. After adjusting for potential confounders, we found that insomnia was significantly associated with an increased risk of falls (adjusted risk ratio (RR): 1.787, 95% CI, 1.106-2.877) and the use of benzodiazepines significantly increased the risk of injurious falls (RR: 3.128, 95% CI, 1.541-6.350). Conclusion: In elderly LTCF residents, both insomnia and benzodiazepine use are associated with an increased risk of falls and injuries. Adopting non-pharmacological approaches to improve sleep quality, taking safer hypnotics, or strengthening supervision on benzodiazepine users may be useful in fall prevention. abstract_id: PUBMED:36036561 Effectiveness of physical rehabilitation in improving physical functioning and quality of life for long-term-care residents with dementia: a systematic review protocol. Objective: The objective of this review is to evaluate the effectiveness of physical rehabilitation versus non-rehabilitation control in improving physical functioning and quality of life in long-term care residents with dementia. 
Introduction: Many long-term-care residents live with dementia and have impaired physical function and poor quality of life. Physical rehabilitation can improve physical function and quality of life for people living with dementia, yet many long-term-care residents with dementia do not receive this intervention, and health care providers are unsure of which rehabilitation interventions are effective. Studies synthesizing effective rehabilitation programs are needed to guide practice in the long-term-care sector where many residents live with dementia. Previous studies have focused broadly on long-term care, specific professions, interventions or outcomes, or people with dementia in the community. Our review will focus on long-term-care residents living with dementia and a broader definition of physical rehabilitation. Inclusion Criteria: This review will include studies that evaluate physical rehabilitation in comparison with non-rehabilitation controls among long-term-care residents with any severity of dementia. We will include studies that measure the effect on performance-based physical functioning and self- or proxy-reported quality of life. Methods: Searches will be conducted in APA PsycINFO (EBSCO), CINAHL (EBSCO), MEDLINE (Ovid), Embase, Scopus, and the Cochrane CENTRAL database with no date or language limitations. Two independent reviewers will conduct a critical appraisal of eligible studies, assess methodological quality, and extract the data. Where possible, studies will be pooled in a statistical meta-analysis. Systematic Review Registration Number: PROSPERO CRD42022308444. abstract_id: PUBMED:38287257 Resident-, family-, and staff-identified goals for rehabilitation of long-term care residents with dementia: a qualitative study. Background: Long-term care (LTC) residents with dementia can benefit from rehabilitation to improve function and quality of life. However, specific goals for rehabilitation with this population are not always clear. The purpose of this study was to describe the goals for rehabilitation for LTC residents with dementia from the perspective of residents, family, and staff. Methods: This was a phenomenological qualitative study. LTC residents with moderate to severe dementia, family members, and staff were recruited from two LTC homes in Halifax, Nova Scotia. Data were collected through semi-structured interviews and field notes from observations with residents while they were being active within the home. Data were analyzed via the principles of thematic content analysis, mapped onto the International Classification of Functioning, Disability, and Health (ICF) Model, and reported by the participant group (i.e., residents, family, or staff). Results: The 15 participants were three female residents aged 82 to 98 years, seven predominantly (86%) female family members aged 56 to 74 years, and five staff members (two females, three males, aged 22 to 55 years) who were physiotherapists, a physiotherapy assistant, a healthcare aide, and a registered licenced practical nurse. Most identified goals fell within the activities and participation constructs of the ICF model and focused on maintaining or improving function, mobility, and quality of life. Specific themes included preventing falls, walking or locomoting, stair climbing, maintaining activities of daily living, engaging in enjoyable exercise, maintaining independence and human connections, keeping busy, leaving the home for activities, and participating in group activities. 
Conclusions: Rehabilitation goals for LTC residents living with dementia often focus on quality of life and functional activities and participation in LTC and family activities and events. Function and quality of life are interrelated, whereby functional goals influence quality of life. While some goals focus on improvement in function, maintenance or prevention of decline were also key elements. Future work should ensure rehabilitation interventions are developed relative to individually identified goals, and interventional success is measured in relation to the goal. abstract_id: PUBMED:36367098 Do Residential Long-Term Care Policies Support Family Involvement in Residents' Quality of Life in Four Canadian Provinces? Family members are essential contributors to the quality of life (QoL) of persons living in residential long-term care (RLTC). This paper analyzes how the system enables or inhibits family involvement with residents in RLTC. Our analysis of 21 policies that regulate long-term care in four Canadian Provinces reveal differences in their portrayal of residents' families. Family roles are characterized procedurally (task-oriented) or relationally (interactive). Operational standards linked to licensing of RLTC homes employ more formal terminology, while RLTC program guidelines, use facilitative language to engage families and build relationships. Examples of orientation procedures, care protocols, living at risk, and end-of-life care reveal inter-provincial variations. We argue that there are opportunities to further engage families within the current regulatory framework and improve their continued contributions in the post-pandemic era. abstract_id: PUBMED:22688208 Promoting quality improvement in long-term care: a multi-site collaboration to improve outcomes with pneumonia, falls, bacteriuria and behavioural issues in dementia. The Bridges to Care for Long-Term Care research project aimed to facilitate improvements in outcomes for long-term care residents through the provision of knowledge-to-practice and quality improvement resources by trained facilitators. Point-of-care staff reported improved communication and collaboration, improved use of scope of practice and implementation of best practice knowledge. Overall, participating long-term care homes demonstrated an enhanced capacity for common care issues of the elderly (pneumonia, falls, bacteriuria and behavioural and psychological symptoms of dementia) and the ability to effectively engage in quality improvement processes with efficient and effective use of healthcare resources. Answer: The effectiveness of a functional activity programme in improving function, quality of life, and falls for residents in long-term care seems to vary based on the cognitive status of the residents and the specific interventions employed. According to a cluster randomised controlled trial, a programme of functional rehabilitation had minimal impact overall on elderly people in residential care. However, it may have helped maintain overall function and lower limb function in residents with normal cognition, but it was not beneficial for those with poor cognition, and in fact, increased the likelihood of depression in residents with cognitive impairment (PUBMED:18845605). Other studies have explored various aspects of quality of life and interventions in long-term care facilities. 
For instance, mindfulness meditation has been suggested as a potential method to reduce caregiver stress and improve the quality of care and life for residents (PUBMED:21239085). A project aimed at reducing accidental falls among mentally ill residents in a long-term care facility showed a significant reduction in fall incidence rate post-intervention, indicating that a multifaceted approach including staff education, resident motivation, environmental safety, and policy changes can be effective (PUBMED:24096468). Furthermore, the presence of joint contractures and osteoarthritis was found to be associated with activity limitations and participation restrictions, affecting the quality of life of elderly residents with these conditions (PUBMED:35279091). Insomnia and benzodiazepine use were also associated with an increased risk of falls among residents, suggesting that non-pharmacological approaches to improve sleep quality and careful management of hypnotics could be beneficial in fall prevention (PUBMED:31766368). A systematic review protocol aims to evaluate the effectiveness of physical rehabilitation in improving physical functioning and quality of life for long-term care residents with dementia (PUBMED:36036561), while a qualitative study identified that rehabilitation goals for such residents often focus on quality of life and functional activities (PUBMED:38287257). Additionally, family involvement is recognized as an important contributor to the quality of life of residents in long-term care, and policies that support family engagement can improve residents' quality of life (PUBMED:36367098). Lastly, a multi-site collaboration to improve outcomes with pneumonia, falls, bacteriuria, and behavioural issues in dementia through knowledge-to-practice and quality improvement resources showed improved communication, collaboration, and implementation of best practices (PUBMED:22688208).
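The fall-risk findings summarised above rest on simple incidence proportions before any model adjustment. The sketch below shows, in Python, how a one-year cumulative incidence and a crude risk ratio for an exposure such as insomnia would be computed; the group-level counts are hypothetical, chosen only to be consistent with the totals reported in PUBMED:31766368 (605 residents, roughly 20% insomnia prevalence and a 21.82% fall incidence), and the published adjusted RR of 1.787 comes from regression models that crude arithmetic like this will not reproduce.

```python
def cumulative_incidence(events, n):
    """One-year cumulative incidence: residents with at least one fall divided by residents followed."""
    return events / n

if __name__ == "__main__":
    # Hypothetical counts for illustration only; the split of fallers between groups
    # is invented, not taken from the published study.
    falls_insomnia, n_insomnia = 40, 120        # residents reporting insomnia
    falls_no_insomnia, n_no_insomnia = 92, 485  # residents without insomnia
    risk_exposed = cumulative_incidence(falls_insomnia, n_insomnia)
    risk_unexposed = cumulative_incidence(falls_no_insomnia, n_no_insomnia)
    crude_rr = risk_exposed / risk_unexposed
    print(f"incidence with insomnia: {risk_exposed:.1%}, without: {risk_unexposed:.1%}")
    print(f"crude (unadjusted) risk ratio: {crude_rr:.2f}")
```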
Instruction: Does a direct cholera threat necessarily improve the knowledge, attitude and practices on the disease? Abstracts: abstract_id: PUBMED:8771929 Does a direct cholera threat necessarily improve the knowledge, attitude and practices on the disease? Objective: To assess and compare the knowledge, attitude, practices and beliefs on cholera in Mudzi and Wedza districts. Mudzi district shares a long border with Mozambique where cholera was already prevalent before the study, while Wedza district does not share any international border. Design: Cross-sectional community-based survey. Data were collected through interviews using a structured questionnaire. In villages, the source of water for domestic use as well as the toilets of the interviewed individuals were also inspected. Setting: Two districts of Mashonaland East Province in Zimbabwe. Subjects: Grade seven pupils, form four students and villagers. Main Outcome Measures: a. The level of knowledge on cholera. b. The prevalence of negative beliefs on the disease. c. The proportion of households using unsafe water. d. The proportion of households not using toilets. Results: 140 and 116 individuals were interviewed in Mudzi and Wedza respectively. The level of knowledge on cholera was very poor in both districts and poorer in Mudzi, which shares a border with Mozambique. Twenty percent of the people interviewed had negative beliefs towards the disease; 35% were using unsafe water; 20% of households did not have toilets; and 4.5% to 7.7% of the available toilets were not being used. Conclusion: Health education activities on cholera should target all districts with the same intensity. Specific strategies should be found in order to address the misconceptions which may hinder the control of cholera. abstract_id: PUBMED:37104361 Knowledge, Attitude, and Practices of the General Population toward the Old-New Outbreak of Cholera in a Developing Country. Background: In October 2022, the first case of cholera since 1993 was recorded in Lebanon. This study aimed to develop and validate a tool to explore the knowledge, attitudes, and practices (KAP) toward cholera infection and its prevention among the general population in Lebanon and identify the associated factors related to the KAP assessment to guide prevention and awareness strategies. The nation's already precarious healthcare system might become overwhelmed by the response to the cholera outbreak. Therefore, evaluating the level of cholera-related KAP among the Lebanese population is crucial since it directly affects the disease's treatment, control, and prevention. Methods: This online cross-sectional study was carried out between October and November 2022 during the cholera outbreak in Lebanon. Snowball sampling was used to recruit 448 adults living in Lebanon. Results: The suggested KAP scales had adequate structural and convergent validity and were internally consistent. Disease knowledge was inversely associated with reluctance to receive educational information (β = -1.58) and cigarette smoking (β = -1.31) but positively associated with female gender (β = 1.74) and awareness of vaccine availability and efficacy (β = 1.34). For attitude, healthcare professionals were less frightened than others (β = 2.69). Better practices were related to better knowledge (β = 0.43), while inadequate practices were associated with getting information from social media (β = -2.47).
Conclusions: This study could identify notable gaps in the knowledge, attitudes, and practices, which varied according to participant characteristics. Cholera incidence can be reduced by improved community education and training, increased access to safe water, sanitation, and hygiene amenities, and changes in behavior. These findings warrant additional actions by public health stakeholders and governmental authorities to promote better practices and curb disease transmission. abstract_id: PUBMED:34041066 Knowledge, attitude, and practice study regarding cholera among the people in Jazan city, KSA. Background: Cholera is a contagious infection that can be fatal; in spite of this, it can be easily prevented by adhering to proper hygienic measures as well as administering the cholera vaccine. However, prevention of cholera is highly dependent on the knowledge and attitude of the general population toward the symptoms and preventive measures of cholera, which is unclear in the medical literature. Objective: This survey analysis aims to explore the level of knowledge as well as the attitude and practice of people in Jazan, Saudi Arabia toward cholera infections. Design And Setting: A self-administered structured questionnaire was distributed via an online link to individuals living in the Jazan region in Saudi Arabia. The survey included questions on knowledge about cholera symptoms, etiology, and prevention as well as attitude and practices of the responders on this type of infection in addition to sociodemographic data. Data analysis was done with SPSS version 24. Results: 400 participants responded to this questionnaire. The mean score for the knowledge section was 1.86 ± 0.990, for the practice section 5.07 ± 1.353, and for the attitude section 6.14 ± 2.346, all below an average rating. There was a statistically significant difference (P-value = 0.003) between different educational levels, with a positive correlation between educational level and level of knowledge about cholera. There was a statistically significant difference (P-value = 0.034) between different genders. Females showed significantly better practices regarding cholera infection. Conclusion: The level of knowledge of the public in Saudi Arabia is poor. Also, the attitudes and practices of people in the Jazan area, Saudi Arabia, are considered unsatisfactory. Further studies in other regions of Saudi Arabia are highly recommended. abstract_id: PUBMED:36498316 Knowledge, Attitude, and Practice in a Sample of the Lebanese Population Regarding Cholera. The evaluation of knowledge, attitude, and practices towards an emerging disease is an essential component of public health preventive measures during an outbreak. In October 2022, an outbreak of cholera was reported in Lebanon, which is the first to be reported in the Middle Eastern country for 30 years. This study aimed to explore the level of knowledge as well as attitude and practice of the general public in Lebanon towards cholera. A self-administered structured questionnaire was distributed via an online link to individuals living in Lebanon during October-November 2022. The survey instrument comprised items to assess the sociodemographic data; questions on knowledge about cholera symptoms, transmission, and prevention; as well as attitude and practice questions. Our study involved 553 participants, with a median age of 24 years and a majority of females (72.5%).
The results showed that the majority of respondents correctly identified diarrhea as a symptom of cholera and recognized the spread via contaminated water and food. Having a university-level education compared with secondary school or less (adjusted odds ratio (aOR) = 2.09), being married compared with single (aOR = 1.67), and working in the medical field compared with unemployed (aOR = 4.19) were significantly associated with higher odds of having good cholera knowledge. Having good knowledge compared with having a poor level of cholera knowledge (aOR = 1.83) and older age (aOR = 1.03) were significantly associated with higher odds of having a good attitude towards cholera. The current study showed an overall high knowledge score on cholera among the Lebanese population. Nevertheless, gaps in cholera knowledge were identified and should be addressed, particularly among workers in the medical field. Thus, we recommend targeted health education for the general population to strengthen health resilience in the community. abstract_id: PUBMED:37539115 Cholera in Syria, a crisis following crises: Assessment of knowledge, attitude, and practice in a cohort of the Syrian population. Cholera is an acute bacterial disease caused by intestinal infection with Vibrio cholerae. It is one of the major re-emerging communicable diseases in Syria following the Syrian crisis and the COVID-19 pandemic. The current study was undertaken to explore the level of awareness in a cohort of the Syrian population by testing their knowledge and investigating their attitudes and practices. An internet-based survey that queried knowledge of cholera transmission, prevention, risk factors, and treatment was designed. Of particular interest was revealing sources of information associated with higher knowledge. Furthermore, individual attitudes and practices towards the disease were collected and analyzed. There were 1521 participants, mostly females (68.3%), aged 18-25 years (56.4%), single (72.7%), and college degree holders (75.9%). The main sources of information on cholera were school/college (31.7%), social media (28.7%), family and friends (13.2%), and online search engines (11.3%). The average total knowledge of the cohort was 40.39%. The participants >40 years old, highly educated, living in urban areas, and females demonstrated higher knowledge of cholera. Schools/colleges and online search engines were associated with better knowledge. Most of the cohort showed serious attitudes and considered cholera a very dangerous disease. They were very concerned about cholera resurfacing in the country and were worried about getting infected. The majority of the cohort behaved favorably by eating out or ordering takeaway fewer than 5 times a month; however, their practices regarding handling raw produce were suboptimal. Following the re-emergence of cholera, females, highly educated participants, and those >40 years old were more committed to changing practices and taking stricter safety measures. As cholera poses a health threat not only to Syrians but to the people of the whole Middle East, preventive strategies were suggested, mainly raising awareness through more targeted media channels and vaccination of people at high risk in the affected areas of the governorates.
Background: The public health burden of travel-related diseases (TRDs) remains significant despite the fact that most can be avoided, particularly with the assistance of local tourist handlers such as Safari Tour Operators (STOs). STOs are in immediate, close contact with tourist groups, and yet their perspectives regarding the provision of travel health advice are unknown in Uganda. This study was conducted to determine the level of knowledge, attitude, and practices of STOs regarding TRDs in Uganda. Methods: A cross-sectional study was carried out among selected STOs in Kampala and Wakiso Districts using self-administered questionnaires. Results: The overall level of knowledge about travel health information was inadequate. Although 77.5% of the STOs were aware of some TRDs, no travel health information resources or databases were observed in their organisations. Knowledge of the current TRDs in Uganda among the STOs was very low, with a few individuals mentioning yellow fever (9.7%), malaria (9.2%), cholera (9.2%), and HIV/AIDS (8.9%). Knowledge of the sources of travel health information was also low, with STOs (19.1%) and travel health clinics (14.9%) mentioned as the main sources. The STOs had a mostly positive attitude towards providing travel health advice to travellers, and all agreed that STOs should play a big role in its provision. First aid and simple treatment options were mentioned as practices that STOs used to keep travellers safe. Conclusions: While the STOs are motivated to do the right thing and provide their clients with sound travel health advice, they have limited awareness of the actual TRDs in Uganda. The STOs must be educated on pertinent travel health advice and on intra-travel disease and injury management. The responsible government and non-governmental entities in Uganda need to develop educational materials emphasising the relevance of travel health advice. abstract_id: PUBMED:32101587 Knowledge, attitude and practices on cholera in an arid county, Kenya, 2018: A mixed-methods approach. Background: Cholera remains a public health problem in Kenya despite increased efforts to create awareness. Assessment of knowledge, attitude and practice (KAP) in the community is essential for the planning and implementation of preventive measures. We assessed cholera KAP in a community in Isiolo County, Kenya. Methods: This cross-sectional study involved a mixed-methods approach utilizing a questionnaire survey and focus group discussions (FGDs). Using multistage sampling with the household as the secondary sampling unit, interviewers administered structured questionnaires to one respondent aged ≥18 years per household. We created a knowledge score by allotting one point for each correct response, considered any total score at or above the median as a high knowledge score, calculated descriptive statistics and used multivariate logistic regression to examine factors associated with a high knowledge score. For the FGDs, we randomly selected participants aged ≥18 years who had lived in Isiolo for >1 year, conducted the FGDs using an interview guide and used content analysis to identify salient emerging themes. Results: We interviewed 428 participants (median age = 30 years; Q1 = 25, Q3 = 38) comprising 372 (86.9%) females. Of the 425/428 (99.3%) who had heard about cholera, 311/425 (73.2%) knew that it is communicable.
Although 273/428 (63.8%) respondents knew the importance of treating drinking water, only 216/421 (51.3%) treated drinking water. Good defecation practices were reported by 209/428 (48.8%), and 227/428 (53.0%) respondents had a high knowledge score. Positive attitude (aOR = 2.88, 95% CI = 1.34-6.20), treating drinking water (aOR = 2.21, 95% CI = 1.47-3.33), age <36 years (aOR = 1.75, 95% CI = 1.11-2.74) and formal education (aOR = 1.71, 95% CI = 1.08-2.68) were independently associated with a high knowledge score. FGDs showed poor latrine coverage, inadequate water treatment and socio-cultural beliefs as barriers to cholera prevention and control. Conclusions: There was a high knowledge score on cholera with gaps in preventive practices. We recommend targeted health education for older and less-educated persons and general strengthening of health education in the community. abstract_id: PUBMED:8921469 Using a knowledge, attitudes and practices survey to supplement findings of an outbreak investigation: cholera prevention measures during the 1991 epidemic in Peru. Background: To assess the effectiveness of the cholera prevention activities of the Peruvian Ministry of Health, we conducted a knowledge, attitudes, and practices (KAP) survey in urban and rural Amazon communities during the cholera epidemic in 1991. Methods: We surveyed heads of 67 urban and 61 rural households to determine diarrhoea rates, sources of cholera prevention information, and knowledge, attitudes, and practices regarding ten cholera prevention measures. Results: Twenty-five per cent of 482 urban and 11% of 454 rural household members had diarrhoea during the first 3-4 months of the epidemic. Exposure to mass media education was greater in urban areas, and education through interpersonal communication was more prevalent in rural villages. Ninety-three per cent of rural and 67% of urban respondents believed they could prevent cholera. The mean numbers of correct responses to ten knowledge questions were 7.8 for urban and 8.2 for rural respondents. Practices lagged behind knowledge and attitudes (mean correct responses out of ten possible: urban 4.9, rural 4.6). Seventy-five per cent of respondents drank untreated water and 91% ate unwashed produce, both of which were identified as cholera risk factors in a concurrently conducted case-control study. Conclusions: The cholera prevention campaign successfully educated respondents, but did not cause many to adopt preventive behaviours. Direct interpersonal education by community-based personnel may enhance the likelihood of translating education into changes in health behaviours. Knowledge, attitudes, and practices surveys conducted with case-control studies during an epidemic can be an effective method of refining education/control programmes. abstract_id: PUBMED:34948512 Knowledge into the Practice against COVID-19: A Cross-Sectional Study from Ghana. The COVID-19 pandemic has affected populations globally, including Ghana. Knowledge of COVID-19 and the application of preventive public health interventions are pivotal to its control. Besides a lockdown, measures taken against the spread of the virus include the wearing of face masks, social distancing, regular hand washing with soap and, more recently, vaccination against the virus.
In order to establish a possible link between the knowledge of the disease and compliance with preventive measures, including vaccination, a cross-sectional study employing an interview-structured questionnaire was conducted in six regions of Ghana (n = 1560). An adequate level of knowledge of COVID-19 (69.9%) was reported. The linear multiple regression analysis further explicated the differences in the knowledge of COVID-19 among the respondents by their knowledge of cholera and influenza (adjusted R-Square = 0.643). Despite this profound knowledge of the illness, two thirds of the respondents were unwilling to follow basic preventive measures and only 35.3% were willing to be vaccinated. Amazingly, neither knowledge of COVID-19 nor the socio-demographic characteristics had any meaningful influence on the practice of preventive measures. Personal attitude leading to efficient public compliance with preventive measures, therefore, is a critical issue demanding special attention and effective interventions by the government and locals with authority to curb the spread of the pandemic which surpasses the traditional channels of public health communication. This includes a roll-out of persuasion, possibly including public figures and influencers, and in any case, a balanced and open discussion addressing the acceptance of the COVID-19 vaccine in order to avoid new variants and comparable problems currently facing many countries of Western Europe. Indeed, a profound hesitancy against vaccination may turn African countries such as Ghana for many years into hotspots of new viral variants. abstract_id: PUBMED:34718638 Intervention to Improve Diarrhea-Related Knowledge and Practices Among Informal Healthcare Providers in Slums of Kolkata. Background: In the densely populated slums of Kolkata, informal healthcare providers' (IHP) diarrhea-related knowledge and rationality of practices should be improved to reduce risk of adverse outcome, expenditure, and antimicrobial resistance. Methods: A multicomponent intervention was conducted among 140 representative IHPs in the slums of 8 wards in Kolkata to assess its impact on their diarrhea-related knowledge and practice. Six intervention modules in local languages were provided (1 per month) with baseline (N = 140) and postintervention (N = 124) evaluation. Results: Mean overall (61.1 to 69.3; P < .0001) and domain-specific knowledge scores for etiology/spread (5.4 to 8.1; P < .0001), management (6.4 to 7.2; P < .0001), and oral rehydration solution ([ORS] 5.7 to 6.5; P < .0001) increased significantly (at α = 0.05) after intervention and were well retained. Impact on knowledge regarding etiology/spread (adjusted odds ratio [aOR] = 5.6; P < .0001), cholera (aOR = 2.0; P = .0041), management (aOR = 3.1; P < .0001), ORS (aOR = 2.3; P = .0008), and overall (aOR = 4.3; P < .0001) were significant. Intervention worked better for IHPs who practiced for ≥10 years (aOR = 3.2; P < .0001), untrained IHPs (aOR = 4.8; P < .0001), and pharmacists (aOR = 8.3; P < .0001). Irrational practices like empirical antibiotic use for every cholera case (aOR = 0.3; P < .0001) and investigation for every diarrhea case (aOR = 0.4; P = .0003) were reduced. Rationality of testing (aOR = 4.2; P < .0001) and antibiotic use (aOR = 1.8; P = .0487) improved. Conclusions: Multicomponent educational intervention resulted in sustainable improvement in diarrhea-related knowledge and practices among IHPs in slums of Kolkata. 
Policy implications should be advocated along with implementation and scale-up. Answer: Based on the abstracts provided, a direct cholera threat does not necessarily improve knowledge, attitude, and practices regarding the disease. For instance, in Mudzi district, which shares a border with Mozambique where cholera was prevalent, the level of knowledge on cholera was very poor, even poorer than in Wedza district, which does not share an international border (PUBMED:8771929). This suggests that proximity to a cholera threat does not guarantee better knowledge or practices. Similarly, in Lebanon, despite the outbreak of cholera in October 2022, the study identified notable gaps in knowledge, attitudes, and practices, which varied according to participant characteristics (PUBMED:37104361). In Jazan city, Saudi Arabia, the level of knowledge, attitudes, and practices regarding cholera was also found to be below average (PUBMED:34041066). In another study conducted in Lebanon during the cholera outbreak, while there was an overall high knowledge score among the population, gaps in knowledge were identified, especially among workers in the medical field, indicating that direct threat does not uniformly improve knowledge across all groups (PUBMED:36498316). The situation in Syria also showed that despite cholera being a major re-emerging communicable disease, the average total knowledge of the cohort was only 40.39%, indicating a lack of adequate knowledge despite the direct threat (PUBMED:37539115). In Uganda, the knowledge about travel health information, including cholera, among Safari Tour Operators was inadequate, showing that direct threat does not necessarily translate to better knowledge (PUBMED:35961490). In Kenya, although there was a high knowledge score on cholera, there were gaps in preventive practices, suggesting that knowledge does not always lead to improved practices (PUBMED:32101587). Lastly, during the cholera epidemic in Peru, despite a successful education campaign, many respondents did not adopt preventive behaviors, indicating a disconnect between knowledge and practice (PUBMED:8921469). In conclusion, the evidence from these abstracts suggests that a direct cholera threat does not automatically lead to improved knowledge, attitudes, and practices regarding the disease. There are various factors, including education, communication methods, and socio-cultural beliefs, that influence whether knowledge and awareness translate into improved attitudes and practices.
Instruction: Are antibiotics beneficial for patients with sinusitis complaints? Abstracts: abstract_id: PUBMED:15689289 Are antibiotics beneficial for patients with sinusitis complaints? A randomized double-blind clinical trial. Background: Sinusitis is the fifth most common reason for patients to visit primary care physicians, yet clinical outcomes relevant to patients are seldom studied. Objective: To determine whether patients with purulent rhinitis, "sinusitis-type symptoms," improved with antibiotics. Second, to examine a clinical prediction rule to provide preliminary validation data. Methods: Prospective clinical trial, with double-blinded placebo controlled randomization. The setting was a suburb of Washington, DC, from Oct 1, 2001, to March 31, 2003. All participants were 18 years or older, presenting to a family practice clinic with a complaint of sinusitis and with pus in the nasal cavity, facial pressure, or nasal discharge lasting longer than 7 days. The main outcome measures were resolution of symptoms within a 14-day follow-up period and the time to improvement (days). Results: After exclusion criteria, 135 patients were randomized to either placebo (n=68) or amoxicillin (n=67) for 10 days. Intention-to-treat analyses showed that 32 (48%) of the amoxicillin group vs 25 (37%) of the placebo group (P=.26) showed complete improvement by the end of the 2-week follow-up period (relative risk=1.3; 95% confidence interval [CI], 0.87-1.94). Although the rates of improvement were not statistically significantly different at the end of 2 weeks, the amoxicillin group improved significantly earlier in the course of treatment, a median of 8 vs 12 days, than did the placebo group (P=.039). Conclusion: For most patients with sinusitis-type complaints, no improvement was seen with antibiotics over placebo. For those who did improve, data suggested there is a subgroup of patients who may benefit from antibiotics. abstract_id: PUBMED:34159935 Clinical and laboratory justification for use of polyvalent bacteriophage in a technique of delayed prescribing of antibiotics in patients with ARS. Objective: The aim of the research was to evaluate the effectiveness of bacteriophage in patients with acute rhinosinusitis within a technique of delayed prescribing of antibiotics. Patients and Methods: 155 patients were examined; all received irrigation therapy with an isotonic solution of sea water 4 times a day and mometasone furoate 100 mg twice a day for 10 days. The patients of the intervention group (n = 80) were additionally prescribed a polyvalent bacteriophage endonasally, in drops of 2-10 ml 3 times a day. Results: The patients in the control group showed a decrease in the intensity of rhinorrhea, nasal congestion and post-nasal drip on the third day of observation (p < 0.05), a reduction in bacterial load with Staphylococcus aureus, S. pneumoniae, Haemophilus influenzae and M. catarrhalis (p < 0.005), an increase in IgA and sIgA levels (p < 0.005), and a 20% reduction in the prescription of antibacterial medications. Conclusions: The addition of a polyvalent bacteriophage contributes to reducing the use of antibiotics and is recommended within the strategy of delayed prescribing of antibiotics. abstract_id: PUBMED:36258155 Patient Perspectives on the Drivers and Deterrents of Antibiotic Treatment of Acute Rhinosinusitis: a Qualitative Study.
Background: Antibiotics are prescribed in >80% of outpatient acute rhinosinusitis (ARS) visits, despite the low incidence of bacterial infection. Previous studies have shown patient expectations are the most robust predictor of antibiotics prescription in ARS. However, patient perceptions are not well known or understood. Objective: To understand patient perceptions regarding what drives or deters them from wanting, seeking, and taking antibiotic treatment of ARS. Design: Iterative thematic analysis of semi-structured interviews. Participants: Nineteen adults diagnosed with ARS within the prior 60 days at the Northwestern Medicine General Internal Medicine clinic in Chicago, IL. Main Measures: Perceptions of patients with ARS. Key Results: We interviewed 19 patients, identifying the following drivers of antibiotic use: (1) symptoms, especially discolored rhinorrhea, and seeking relief, (2) belief that antibiotics are a convenient and/or effective way to relieve/cure sinusitis, and (3) desire for tangible outcomes of a clinic visit. For deterrents, the following themes emerged: (1) concern about antibiotic resistance, (2) preference for other treatments or preference to avoid medications, and (3) desire to avoid a healthcare visit. Patients identified that a trustworthy physician's recommendation for antibiotics was a driver, and a recommendation against antibiotics was a deterrent to taking antibiotics; a delayed antibiotic prescription also served as a deterrent. Antibiotic side effects were viewed neutrally by most participants, though they were a deterrent to some. Conclusions: Patients have misconceptions about the indications and effectiveness of antibiotics for ARS. Intimate knowledge of key antibiotic drivers and deterrents, from the perspective of patients with ARS, can be leveraged to engage and increase patients' knowledge, and set appropriate expectations for antibiotics for ARS. abstract_id: PUBMED:29986562 The study of the rational use of antibiotics after nasal surgery. Objective: To study whether antibiotics should be used after nasal surgery in patients with or without chronic sinusitis, and how to use them rationally. Method: Prospective stratified randomized controlled study. Patients with sinusitis were divided into three groups: group A received no antibiotics, group B received standard antibiotic treatment, and group C received prolonged antibiotic treatment. Patients without sinusitis were divided into group D (no antibiotics), group E (standard antibiotic treatment), and group F (prolonged antibiotic treatment). The postoperative infection rate in each group was observed and compared. Result: The infection rates were 3.53%, 2.67% and 0.00% in groups A, B and C, with no significant differences between the three groups. The infection rates were 1.22%, 0.00% and 1.39% in groups D, E and F, with no significant differences between them. Conclusion: Postoperative infection rates did not differ between patients who did or did not receive antibiotics, or between standard and prolonged antibiotic use, after nasal surgery. We recommend not using antibiotics after nasal surgery and, where indicated, using them appropriately within 48 hours; prolonged use is not recommended. abstract_id: PUBMED:33168390 Utilization of Prophylactic Antibiotics After Nasal Packing for Epistaxis. Background: There have been few investigations examining the benefits, consequences, and patterns of use for prophylactic antibiotics for nasal packing in the emergency department setting.
Given the frequency of epistaxis in the emergency department, it is an ideal setting to study the efficacy and utilization patterns of prophylactic antibiotics in nasal packing. Objective: Our aim was to assess both rates of utilization and evidence of benefit for prophylactic antibiotics in patients with nasal packing for epistaxis. Methods: A single-institution retrospective review of 275 cases of anterior nasal packing in an urban emergency department between September 2013 and April 2017 was performed. Chi-square statistical analysis was used to evaluate results. Results: Among 275 cases studied, there were no instances of toxic shock syndrome. Roughly 73% of patients with nonabsorbable packing received prophylactic antibiotics. Only one (1.1%) case of sinusitis was noted among the nonabsorbable packing with prophylaxis group, with no such complication in the nonprophylaxis group. In contrast, 95% of patients with absorbable nasal packing were not given prophylactic antibiotics. Analysis of all cases given prophylactic antibiotics vs. no prophylaxis, regardless of packing type, revealed no statistically significant difference in the development of acute sinusitis (1% vs. 0.56%; p = 0.6793). Conclusions: There was no observed advantage or disadvantage to using prophylactic antibiotics in anterior nasal packing in the emergency department, regardless of whether patients received absorbable or nonabsorbable packing. However, patients who receive nonabsorbable nasal packing were more likely to receive antibiotic prophylaxis. abstract_id: PUBMED:18342685 Antibiotics for adults with clinically diagnosed acute rhinosinusitis: a meta-analysis of individual patient data. Background: Primary-care physicians continue to overprescribe antibiotics for acute rhinosinusitis because distinction between viral and bacterial sinus infection is difficult. We undertook a meta-analysis of randomised trials based on individual patients' data to assess whether common signs and symptoms can be used to identify a subgroup of patients who benefit from antibiotics. Methods: We identified suitable trials--in which adult patients with rhinosinusitis-like complaints were randomly assigned to treatment with an antibiotic or a placebo--by searching the Cochrane Central Register of Controlled Trials, Medline, and Embase, and reference lists of reports describing such trials. Individual patients' data from 2547 adults in nine trials were checked and re-analysed. We assessed the overall effect of antibiotic treatment and the prognostic value of common signs and symptoms by the number needed to treat (NNT) with antibiotics to cure one additional patient. Findings: 15 patients with rhinosinusitis-like complaints would have to be given antibiotics before an additional patient was cured (95% CI NNT[benefit] 7 to NNT[harm] 190). Patients with purulent discharge in the pharynx took longer to cure than those without this sign; the NNT was 8 patients with this sign before one additional patient was cured (95% CI NNT[benefit] 4 to NNT[harm] 47). Patients who were older, reported symptoms for longer, or reported more severe symptoms also took longer to cure but were no more likely to benefit from antibiotics than other patients. Interpretation: Common clinical signs and symptoms cannot identify patients with rhinosinusitis for whom treatment is clearly justified. Antibiotics are not justified even if a patient reports symptoms for longer than 7-10 days. 
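As a point of interpretation for the NNT figures reported in PUBMED:18342685 above (an illustrative aside, not part of the abstract itself): the number needed to treat is, by definition, the reciprocal of the absolute risk reduction (ARR) in cure rates between the antibiotic and placebo arms. In LaTeX notation:

\mathrm{NNT} = \frac{1}{\mathrm{ARR}} = \frac{1}{p_{\mathrm{cure,\,antibiotic}} - p_{\mathrm{cure,\,placebo}}}

An overall NNT of 15 therefore corresponds to an absolute cure-rate difference of about 1/15, roughly 6.7 percentage points, and the NNT of 8 in patients with purulent pharyngeal discharge to about 1/8, roughly 12.5 percentage points. The arm-specific cure rates themselves are not given in the abstract, so these percentages are back-calculated here purely for illustration.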
abstract_id: PUBMED:33807404 Impact of a Social Marketing Intervention on General Practitioners' Antibiotic Prescribing Practices for Acute Respiratory Tract Complaints in Malta. Introduction: Antibiotics are commonly prescribed in primary care for acute respiratory tract complaints (aRTCs), often inappropriately. Social marketing interventions could improve prescribing in such settings. We evaluate the impact of a social marketing intervention on general practitioners' (GPs') antibiotic prescribing for aRTCs in Malta. Methods: Changes in GPs' antibiotic prescribing were monitored over two surveillance periods between 2015 and 2018. Primary outcome: change in antibiotic prescription for aRTCs. Secondary outcomes: change in antibiotic prescription: (i) for immediate use, (ii) for delayed antibiotic prescription, (iii) by diagnosis, and (iv) by antibiotic class. Data were analysed using clustered analysis and interrupted time series analysis (ITSA). Results: Of 33 participating GPs, 18 successfully completed the study. Although clustered analyses showed a significant 3% decrease in overall antibiotic prescription (p = 0.024), ITSA showed no significant change overall (p = 0.264). Antibiotic prescription decreased significantly for the common cold (p < 0.001), otitis media (p = 0.044), and sinusitis (p = 0.004), but increased for pharyngitis (p = 0.015). Conclusions: The intervention resulted in modest improvements in GPs' antibiotic prescribing. A more top-down approach will likely be required for future initiatives to be successful in this setting, focusing on diagnostic and prescribing support like rapid diagnostic testing, prescribing guidelines, and standardised delayed antibiotic prescriptions. abstract_id: PUBMED:34304540 Reasonable application of antibiotics in pediatric acute rhinosinusitis. The incidence of upper respiratory tract infection in children is extremely high, and some cases progress to acute rhinosinusitis. Antibiotics are the first-line medication for alleviating bacterial infections. However, due to the lack of practical and accurate objective indicators for the diagnosis of acute rhinosinusitis, it is difficult to distinguish the acute bacterial rhinosinusitis subgroup, leading to the overuse of antibiotics. In recent years, antibiotic resistance has become more severe, and the conditions for using antibiotics have become more stringent. In addition, the physiological conditions of children differ from those of adults. Chinese and international studies remain divided on the rational application of antibiotics in children with acute rhinosinusitis. The relevant research on antibiotic treatment in children with acute rhinosinusitis is reviewed here, with a view to providing a clinical reference for the rational use of antibiotics in this group. abstract_id: PUBMED:15720932 Prescribing antibiotics for respiratory tract infections by GPs: management and prescriber characteristics. Background: Due to clinical and non-clinical factors, considerable variation exists in the prescribing of antibiotics for respiratory tract infections (RTIs) by GPs based in the Netherlands. Aim: To assess, in patients with RTIs in Dutch general practice: the prescribing rates of antibiotics; the relationship between GP characteristics and antibiotic prescribing; and the type of antibiotics prescribed. Design Of Study: Descriptive and prognostic. Setting: Eighty-four GPs in the middle region of the Netherlands.
Method: All patient consultations for RTIs were registered by 84 GPs during 3 weeks in autumn and winter 2001 and 2002. In addition, all GPs completed a questionnaire related to individual and practice characteristics. Results: The mean proportion of consultations in which GPs prescribed antibiotics was 33% (95% CI = 29 to 35%) of all RTIs. This proportion varied from 21% for patients with upper RTIs or an exacerbation of asthma/COPD, to about 70% when patients had sinusitis-like complaints or pneumonia. Amoxycillin and doxycycline were the most frequently prescribed antibiotics, while 17% of the antibiotics prescribed were macrolides. Multiple linear regression analysis showed that the longer GPs had practised, the more frequently they prescribed antibiotics, especially when combined with relatively little knowledge about RTIs or with less perceived time available per patient. The final model, with seven factors, explained 29% of the variance in antibiotic prescribing. Conclusion: The prescribing behaviour of Dutch GPs might be improved with regard to choice of type and indication of antibiotics. abstract_id: PUBMED:29680444 Sinonasal outcome after endoscopic mononostril transsphenoidal surgery: A single center cohort study. Objective: The endoscopic endonasal approach has been well established for skull base surgery. However, only a few studies have addressed nasal complaints after surgery. In this study, the authors evaluated postoperative nasal complaints and complications after mononostril endoscopic procedures. Methods: All patients operated on for parasellar and sellar pathologies at our department via an endoscopic mononostril transnasal transsphenoidal approach from January 2011 to May 2015 were analyzed. To assess specific postoperative nasal pathological conditions and complaints, a questionnaire was established. Applicable data of 79 patients with additional ENT follow-up could be included. Endpoints were the quantitative evaluation of complications and correlation of these data. Results: There was no vascular injury or worsening of visual function. Two patients had a persisting CSF fistula, and one of them developed meningitis. There was a significant decrease in nasal complaints during follow-up after 2 years (p < 0.001). Further surgical treatment by an ENT physician was necessary in 11.4%. Resurgery significantly increased the risk of postsurgical complaints (p < 0.005). The use of a tamponade significantly reduced the risk of postsurgical reduced nasal airflow (p = 0.026) and sinusitis (p = 0.002). Conclusions: Endoscopic endonasal procedures for skull base lesions achieve high radicality with low complication rates. However, they induce temporary and permanent nasal complaints and complications. Resurgery increases the risk of nasal complaints, and the use of nasal tamponades might improve the sinonasal outcome. Further prospective studies are necessary to objectify the evaluation of postsurgical nasal complications. Answer: The use of antibiotics for patients with sinusitis complaints has been a subject of research and debate. According to a randomized double-blind clinical trial, most patients with sinusitis-type complaints did not show improvement with antibiotics over placebo. However, there was a subgroup of patients who may benefit from antibiotics, as the amoxicillin group improved significantly earlier than the placebo group (PUBMED:15689289).
Another study found that the use of polyvalent bacteriophage could reduce the use of antibiotics and is recommended as part of a strategy for delayed prescribing of antibiotics in patients with acute rhinosinusitis (PUBMED:34159935). Patient perceptions also play a role in antibiotic use for acute rhinosinusitis (ARS). Some patients believe antibiotics are a convenient and effective way to relieve or cure sinusitis, while others are deterred by concerns about antibiotic resistance and a preference to avoid medications. Trust in a physician's recommendation can either drive or deter antibiotic use, and a delayed antibiotic prescription can serve as a deterrent (PUBMED:36258155). In the context of nasal surgery, a study suggested that antibiotics may not be necessary postoperatively, as there was no significant difference in infection rates between patients who used antibiotics and those who did not (PUBMED:29986562). Similarly, a study on the use of prophylactic antibiotics after nasal packing for epistaxis found no observed advantage or disadvantage to using prophylactic antibiotics (PUBMED:33168390). A meta-analysis of individual patient data from randomized trials concluded that common clinical signs and symptoms cannot identify patients with rhinosinusitis who would clearly benefit from antibiotics, and antibiotics are not justified even if a patient reports symptoms for longer than 7-10 days (PUBMED:18342685). In summary, while there may be specific cases where antibiotics can benefit patients with sinusitis complaints, the overall evidence suggests that antibiotics are often not necessary and their use should be carefully considered, taking into account the potential for antibiotic resistance and the individual patient's clinical presentation.
Instruction: Is shade beneficial for mediterranean shrubs experiencing periods of extreme drought and late-winter frosts? Abstracts: abstract_id: PUBMED:18819947 Is shade beneficial for mediterranean shrubs experiencing periods of extreme drought and late-winter frosts? Background And Aims: Plants are naturally exposed to multiple, frequently interactive stress factors, most of which are becoming more severe due to global change. Established plants have been reported to facilitate the establishment of juvenile plants, but net effects of plant-plant interactions are difficult to assess due to complex interactions among environmental factors. An investigation was carried out in order to determine how two dominant evergreen shrubs (Quercus ilex and Arctostaphylos uva-ursi) co-occurring in continental, Mediterranean habitats respond to multiple abiotic stresses and whether the shaded understorey conditions ameliorate the negative effects of drought and winter frosts on the physiology of leaves. Methods: Microclimate and ecophysiology of sun and shade plants were studied at a continental plateau in central Spain during 2004-2005, with 2005 being one of the driest and hottest years on record; several late-winter frosts also occurred in 2005. Key Results: Daytime air temperature and vapour pressure deficit were lower in the shade than in the sun, but soil moisture was also lower in the shade during the spring and summer of 2005, and night-time temperatures were higher in the shade. Water potential, photochemical efficiency, light-saturated photosynthesis, stomatal conductance and leaf 13C composition differed between sun and shade individuals throughout the seasons, but differences were species specific. Shade was beneficial for leaf-level physiology in Q. ilex during winter, detrimental during spring for both species, and of little consequence in summer. Conclusions: The results suggest that beneficial effects of shade can be eclipsed by reduced soil moisture during dry years, which are expected to be more frequent in the most likely climate change scenarios for the Mediterranean region. abstract_id: PUBMED:33823054 Unravelling resilience mechanisms in forests: role of non-structural carbohydrates in responding to extreme weather events. Extreme weather events are increasing in frequency and intensity due to global climate change. We hypothesized that tree carbon reserves are crucial for resilience of beech, buffering the source-sink imbalance due to late frosts and summer droughts, and that different components of non-structural carbohydrates (NSCs) play specific roles in coping with stressful situations. To assess the compound effects on mature trees of two extreme weather events, first a late frost in spring 2016 and then a drought in summer 2017, we monitored the phenology, radial growth and the dynamics of starch and soluble sugars in a Mediterranean beech forest. A growth reduction of 85% was observed after the spring late frost, yet not after the drought event. We observed a strong impact of late frost on starch, which also affected its dynamic at the beginning of the subsequent vegetative season. In 2017, the increase of soluble sugars, associated with starch hydrolysis, played a crucial role in coping with the severe summer drought. Non-structural carbohydrates helped to counteract the negative effects of both events, supporting plant survival and buffering source-sink imbalances under stressful conditions. 
Our findings indicate a strong trade-off between growth and NSC storage in trees. Overall, our results highlight the key role of NSCs in beech trees' response to extreme weather events, confirming the resilience of this species to highly stressful events. These insights are useful for assessing how forests may respond to the potential impacts of climate change on ecosystem processes in the Mediterranean area. abstract_id: PUBMED:38488820 Transgenerational plasticity to drought: contrasting patterns of non-genetic inheritance in two semiarid Mediterranean shrubs. Background and Aims: Intra- and transgenerational plasticity may provide substantial phenotypic variation to cope with environmental change. Since assessing the unique contribution of the maternal environment to the offspring phenotype is challenging in perennial, outcrossing plants, little is known about the evolutionary and ecological implications of transgenerational plasticity and its persistence over the life cycle in these species. We evaluated how intra- and transgenerational plasticity interplay to shape the adaptive responses to drought in two perennial Mediterranean shrubs. Methods: We used a novel common garden approach that reduced within-family genetic variation in both the maternal and offspring generations by growing the same maternal individual in two contrasting watering environments, well-watered and drought, in consecutive years. We then assessed phenotypic differences at the reproductive stage between offspring reciprocally-grown in the same environments. Key Results: Maternal drought had an effect on offspring performance only in Helianthemum squamatum. Offspring of drought-stressed plants showed more inflorescences, less sclerophyllous leaves and higher growth rates in both watering conditions, and heavier seeds under drought, than offspring of well-watered maternal plants. Maternal drought also induced similar plasticity patterns across maternal families, showing a general increase in seed mass in response to offspring drought, a pattern not observed in the offspring of well-watered plants. In contrast, both species expressed immediate adaptive plasticity, and the magnitude of intragenerational plasticity was larger than the transgenerational plastic responses. Conclusions: Our results highlight that adaptive effects associated with maternal drought can persist beyond the seedling stage and provide evidence of species-level variation in the expression of transgenerational plasticity. Such differences between co-occurring Mediterranean species in the prevalence of this form of non-genetic inheritance may result in differential vulnerability to climate change. abstract_id: PUBMED:34076289 Nonlinear plant-plant interactions modulate impact of extreme drought and recovery on a Mediterranean ecosystem. Interaction effects of different stressors, such as extreme drought and plant invasion, can have detrimental effects on ecosystem functioning and recovery after drought. With ongoing climate change and increasing plant invasion, there is an urgent need to predict the short- and long-term interaction impacts of these stressors on ecosystems. We established a combined precipitation exclusion and shrub invasion (Cistus ladanifer) experiment in a Mediterranean cork oak (Quercus suber) ecosystem with four treatments: (1) Q. suber control; (2) Q. suber with rain exclusion; (3) Q. suber invaded by shrubs; and (4) Q. suber with rain exclusion and shrub invasion.
As key parameter, we continuously measured ecosystem water fluxes. In an average precipitation year, the interaction effects of both stressors were neutral. However, the combination of imposed drought and shrub invasion led to amplifying interaction effects during an extreme drought by strongly reducing tree transpiration. Contrarily, the imposed drought reduced the competitiveness of the shrubs in the following recovery period, which buffered the negative effects of shrub invasion on Q. suber. Our results demonstrate the highly dynamic and nonlinear effects of interacting stressors on ecosystems and urges for further investigations on biotic interactions in a context of climate change pressures. abstract_id: PUBMED:25359123 The combined effects of a long-term experimental drought and an extreme drought on the use of plant-water sources in a Mediterranean forest. Vegetation in water-limited ecosystems relies strongly on access to deep water reserves to withstand dry periods. Most of these ecosystems have shallow soils over deep groundwater reserves. Understanding the functioning and functional plasticity of species-specific root systems and the patterns of or differences in the use of water sources under more frequent or intense droughts is therefore necessary to properly predict the responses of seasonally dry ecosystems to future climate. We used stable isotopes to investigate the seasonal patterns of water uptake by a sclerophyll forest on sloped terrain with shallow soils. We assessed the effect of a long-term experimental drought (12 years) and the added impact of an extreme natural drought that produced widespread tree mortality and crown defoliation. The dominant species, Quercus ilex, Arbutus unedo and Phillyrea latifolia, all have dimorphic root systems enabling them to access different water sources in space and time. The plants extracted water mainly from the soil in the cold and wet seasons but increased their use of groundwater during the summer drought. Interestingly, the plants subjected to the long-term experimental drought shifted water uptake toward deeper (10-35 cm) soil layers during the wet season and reduced groundwater uptake in summer, indicating plasticity in the functional distribution of fine roots that dampened the effect of our experimental drought over the long term. An extreme drought in 2011, however, further reduced the contribution of deep soil layers and groundwater to transpiration, which resulted in greater crown defoliation in the drought-affected plants. This study suggests that extreme droughts aggravate moderate but persistent drier conditions (simulated by our manipulation) and may lead to the depletion of water from groundwater reservoirs and weathered bedrock, threatening the preservation of these Mediterranean ecosystems in their current structures and compositions. abstract_id: PUBMED:28514052 Shift in community structure in an early-successional Mediterranean shrubland driven by long-term experimental warming and drought and natural extreme droughts. Global warming and recurring drought are expected to accelerate water limitation for plant communities in semiarid Mediterranean ecosystems and produce directional shifts in structure and composition that are not easily detected, and supporting evidence is scarce. 
We conducted long-term (17 years) nocturnal-warming (+0.6°C) and drought (-40% rainfall) experiments in an early-successional Mediterranean shrubland to study the changes in community structure and composition, contrasting functional groups and dominant species, and the superimposed effects of natural extreme drought. Species richness decreased in both the warming and drought treatments. Responses to the moderate warming were associated with decreases in herb abundance, and responses to the drought were associated with decreases in both herb and shrub abundances. The drought also significantly decreased community diversity and evenness. Changes in abundance differed between herbs (decreases) and shrubs (increases or no changes). Both warming and drought, especially drought, increased the relative species richness and abundance of shrubs, favoring the establishment of shrubs. Both warming and drought produced significant shifts in plant community composition. Experimental warming shifted the community composition from Erica multiflora toward Rosmarinus officinalis, and drought consistently shifted the composition toward Globularia alypum. The responses in biodiversity (e.g., community biodiversity, changes of functional groups and compositional shifts) were also strongly correlated with atmospheric drought (SPEI) in winter-spring and/or summer, indicating sensitivity to water limitation in this early-successional Mediterranean ecosystem, especially to natural extreme droughts. Our results suggest that the shifts in species assemblages and community diversity and composition are accelerated by the long-term nocturnal warming and drought, combined with natural severe droughts, and that the magnitude of the impacts of climate change is also correlated with the successional status of the ecosystem. The results thus highlight the necessity of assessing the impacts on ecosystem functioning and services and developing effective measures for conserving biodiversity. abstract_id: PUBMED:28865146 Cocoa agroforestry is less resilient to sub-optimal and extreme climate than cocoa in full sun. Cocoa agroforestry is perceived as a potential adaptation strategy to sub-optimal or adverse environmental conditions such as drought. We tested this strategy over wet, dry and extremely dry periods comparing cocoa in full sun with agroforestry systems shaded by (i) a leguminous tree species, Albizia ferruginea, and (ii) Antiaris toxicaria, the most common shade tree species in the region. We monitored micro-climate, sap flux density, throughfall, and soil water content from November 2014 to March 2016 at the forest-savannah transition zone of Ghana, with climate and drought events during the study period serving as a proxy for projected future climatic conditions in marginal cocoa cultivation areas of West Africa. Combined transpiration of cocoa and shade trees was significantly higher than that of cocoa in full sun during wet and dry periods. During the wet period, the transpiration rate of cocoa plants shaded by A. ferruginea was significantly lower than that of cocoa under A. toxicaria and full sun. During the extreme drought of 2015/16, all cocoa plants under A. ferruginea died. Cocoa plants under A. toxicaria suffered 77% mortality and massive stress with significantly reduced sap flux density of 115 g cm-2 day-1, whereas cocoa in full sun maintained a higher sap flux density of 170 g cm-2 day-1. Moreover, cocoa sap flux recovery after the extreme drought was significantly higher in full sun (163 g cm-2 day-1) than under A.
toxicaria (37 g cm-2 day-1 ). Soil water content in full sun was higher than in shaded systems suggesting that cocoa mortality in the shaded systems was linked to strong competition for soil water. The present results have major implications for cocoa cultivation under climate change. Promoting shade cocoa agroforestry as drought resilient system especially under climate change needs to be carefully reconsidered as shade tree species such as the recommended leguminous A. ferruginea constitute major risk to cocoa functioning under extended severe drought. abstract_id: PUBMED:34721486 Plant and Soil Enzyme Activities Regulate CO2 Efflux in Alpine Peatlands After 5 Years of Simulated Extreme Drought. Increasing attention has been given to the impact of extreme drought stress on ecosystem ecological processes. Ecosystem respiration (Re) and soil respiration (Rs) play a significant role in the regulation of the carbon (C) balance because they are two of the largest terrestrial C fluxes in the atmosphere. However, the responses of Re and Rs to extreme drought in alpine regions are still unclear, particularly with respect to the driver mechanism in plant and soil extracellular enzyme activities. In this study, we imposed three periods of extreme drought events based on field experiments on an alpine peatland: (1) early drought, in which the early stage of plant growth occurred from June 18 to July 20; (2) midterm drought, in which the peak growth period occurred from July 20 to August 23; and (3) late drought, in which the wilting period of plants occurred from August 23 to September 25. After 5 years of continuous extreme drought events, Re exhibited a consistent decreasing trend under the three periods of extreme drought, while Rs exhibited a non-significant decreasing trend in the early and midterm drought but increased significantly by 58.48% (p < 0.05) during the late drought compared with the ambient control. Plant coverage significantly increased by 79.3% (p < 0.05) in the early drought, and standing biomass significantly decreased by 18.33% (p < 0.05) in the midterm drought. Alkaline phosphatase, polyphenol oxidase, and peroxidase increased significantly by 76.46, 77.66, and 109.60% (p < 0.05), respectively, under late drought. Structural equation models demonstrated that soil water content (SWC), pH, plant coverage, plant standing biomass, soil β-D-cellobiosidase, and β-1,4-N-acetyl-glucosaminidase were crucial impact factors that eventually led to a decreasing trend in Re, and SWC, pH, β-1,4-glucosidase (BG), β-1,4-xylosidase (BX), polyphenol oxidase, soil organic carbon, microbial biomass carbon, and dissolved organic carbon were crucial impact factors that resulted in changes in Rs. Our results emphasize the key roles of plant and soil extracellular enzyme activities in regulating the different responses of Re and Rs under extreme drought events occurring at different plant growth stages. abstract_id: PUBMED:31960957 Nutrients mitigate the impacts of extreme drought on plant invasions. Extreme climate events, such as drought, are becoming increasingly important drivers of plant community change, yet little is known about their impacts on invasive plants. Further, drought impacts may be altered by other anthropogenic stressors, such as eutrophication. We found drought dramatically reduced density of invasive Lepidium latifolium in salt marshes, and this die-back was mitigated by nutrient addition. 
In a 3-yr field experiment (2014-2016) conducted during an unprecedented drought (2012-2015), we tracked the effects of drought and nutrient addition on the plant community. We conducted this research at four salt marshes across a salinity gradient in the San Francisco Bay, California, USA. We manipulated paired native and invaded plots, one-half of which were treated monthly with N and P for 1.5 yr during the most intense period of the drought and one subsequent wet winter. In addition, we monitored unmanipulated L. latifolium-invaded transects within our freshest and most saline sites throughout the three years of our manipulative experiment and one additional wet winter. We documented a dramatic die-back of invasive L. latifolium during extreme drought, with reductions in stem density (52-100%) and height (17-47%) that were more severe at low salinity sites than high salinity sites. We found nutrient application lessened the effect of drought on L. latifolium stem density, but not height. In native plots, extreme drought reduced native plant cover (4-24%), but nutrient addition mitigated this impact. Interestingly, native plants in invaded plots did not suffer reductions in cover due to drought, perhaps because they were simultaneously benefiting from the die-back of the invader. Our results show drought negatively impacted both native and invasive plants and this impact was stronger on the invader, which experienced persistent declines two years after the end of the drought. However, by mitigating the effect of drought on invasive plants, nutrient addition potentially erased the advantage drought provided native plants over invasive plants under ambient nutrient conditions. abstract_id: PUBMED:35720605 Radial Growth of Trees Rather Than Shrubs in Boreal Forests Is Inhibited by Drought. Of all forest biomes, boreal forests are experiencing the most significant warming. Drought caused by warming has a dramatic impact on species in boreal forests. However, little is known about whether the growth of trees and shrubs in boreal forests responds consistently to warming and drought. We obtained the tree-ring width data of 308 trees (Larix gmelinii and Pinus sylvestris var. mongolica) and 133 shrubs (Pinus pumila) from 26 sites in northeastern China. According to the climate data from 1950 to 2014, we determined three extreme drought years (1954, 1967, and 2008). The response difference of radial growth of trees and shrubs in boreal forests to drought was compared using resilience index, moving correlation and response analysis. The results showed that high temperature (mean and maximum temperature) in previous and current growing seasons promoted the growth of P. pumila, but inhibited the growth of trees. On the contrary, wetter conditions (higher PDSI) promoted tree growth but were not conducive to P. pumila growth in high latitudes. Moving correlation analysis showed similar results. In addition, water deficit was more likely to inhibit P. pumila growth in low latitudes. The drought resistance of P. pumila was stronger than that of L. gmelinii and P. sylvestris var. mongolica. Therefore, the growth loss and recovery time of P. pumila during drought was less than those of trees. We concluded that L. gmelinii and P. sylvestris var. mongolica are more prone to growth decline than P. pumila after the drought caused by climate warming. In the future climate warming, shrub growth may benefit more than trees. 
Our findings are of great significance in predicting the future changes in ecosystem composition and species distribution dynamics in extreme climate susceptible areas. Answer: The impact of shade on Mediterranean shrubs during periods of extreme drought and late-winter frosts appears to be complex and species-specific. According to the study on two dominant evergreen shrubs, Quercus ilex and Arctostaphylos uva-ursi, shade was found to be beneficial for the leaf-level physiology of Q. ilex during winter but detrimental during spring for both species. In summer, shade had little consequence. The study suggests that the beneficial effects of shade can be overshadowed by reduced soil moisture during dry years, which are expected to become more frequent in the Mediterranean region due to climate change (PUBMED:18819947). In contrast, another study on a Mediterranean beech forest found that non-structural carbohydrates (NSCs) in trees played a crucial role in coping with stressful situations such as late frosts and summer droughts. The study observed that trees used their carbon reserves to buffer the source-sink imbalance caused by these extreme weather events, supporting plant survival and resilience (PUBMED:33823054). Furthermore, research on the transgenerational plasticity to drought in two semiarid Mediterranean shrubs indicated that maternal drought had an effect on offspring performance only in Helianthemum squamatum, with offspring of drought-stressed plants showing more inflorescences, less sclerophyllous leaves, higher growth rates, and heavier seeds under drought than offspring of well-watered maternal plants. This suggests that adaptive effects associated with maternal drought can persist beyond the seedling stage (PUBMED:38488820). In summary, while shade can provide some physiological benefits to Mediterranean shrubs during certain periods, such as winter, its overall effectiveness in mitigating the negative effects of extreme drought and late-winter frosts is not straightforward and can be influenced by other factors such as soil moisture and species-specific responses. Additionally, other mechanisms, such as the use of carbon reserves and transgenerational plasticity, also play roles in the resilience of these plants to extreme weather events.
Instruction: Do antidepressants induce rapid cycling? Abstracts: abstract_id: PUBMED:26142612 Antidepressants worsen rapid-cycling course in bipolar depression: A STEP-BD randomized clinical trial. Background: The use of antidepressants in rapid-cycling bipolar disorder has been controversial. We report the first randomized clinical trial with modern antidepressants on this topic. Methods: As part of the Systematic Treatment Enhancement Program for Bipolar Disorder (STEP-BD) study, we analyzed, as an a priori secondary outcome, rapid cycling as a predictor of response in 68 patients randomized to continue vs. discontinue antidepressant treatment, after initial response for an acute major depressive episode. Outcomes assessed were percent time well and total number of episodes. All patients received standard mood stabilizers. Results: In those continued on antidepressants (AD), rapid cycling (RC) subjects experienced 268% (3.14/1.17) more total mood episodes/year, and 293% (1.29/0.44) more depressive episodes/year, compared with non-rapid cycling (NRC) subjects (mean difference in depressive episodes per year RC vs. NRC was 0.85 ± 0.37 (SE), df = 28, p = 0.03). In the AD continuation group, RC patients also had 28.8% less time in remission than NRC patients (95% confidence intervals (9.9%, 46.5%), p = 0.004). No such differences between RC and NRC subjects were seen in the AD discontinuation group (Table 1). Analyses within the rapid-cycling subgroup alone were consistent with the above comparisons between RC and NRC subjects, stratified by maintenance antidepressant treatment, though limited by sample size. Conclusions: In an a priori analysis, despite preselection for good antidepressant response and concurrent mood stabilizer treatment, antidepressant continuation in rapid-cycling was associated with worsened maintenance outcomes, especially for depressive morbidity, vs. antidepressant discontinuation. abstract_id: PUBMED:25682854 Risk factors for rapid cycling in bipolar disorder. Objectives: The aim of this study was to investigate the clinical factors associated with the development of rapid cycling, as well as to elucidate the role of antidepressants. Methods: The present study (NCT01503489) is a prospective, naturalistic cohort study conducted in a sample of 289 patients diagnosed with bipolar disorder followed and treated for up to 14 years. The patients were divided into two groups on the basis of the development of a rapid cycling course (n = 48) or no development of such a course (n = 241), and compared regarding sociodemographic, clinical, and outcome variables. Results: Among the 289 patients, 48 (16.6%) developed a rapid cycling course during the follow-up. Several differences were found between the two groups, but after performing Cox regression analysis, only atypical depressive symptoms (p = 0.001), age at onset (p = 0.015), and number of suicide attempts (p = 0.030) persisted as significantly associated with the development of a rapid cycling course. Conclusions: The development of rapid cycling during the course of bipolar disorder is associated with a tendency to chronicity, with a poorer outcome, and with atypical depressive symptomatology. Our study also suggests that the development of rapid cycling is associated with a higher use of antidepressants. abstract_id: PUBMED:8295640 Rapid cycling in bipolar disorders Two patients, a woman aged 32 and a man aged 27 years, were suffering from rapid cycling bipolar disorder (i.e. a course with four or more episodes a year). 
Hypothyroidism and the use of antidepressants may be involved in the development of such a disease course. Treatment with carbamazepine was successful in both patients. Patients suffering from rapid cycling bipolar disorder should be treated with lithium or carbamazepine, or a combination of these. Antidepressants should only be used with great caution. abstract_id: PUBMED:12934983 Do antidepressants induce rapid cycling? A gender-specific association. Objective: To investigate the influence of antidepressant use and gender in the genesis of rapid-cycling bipolar illness. Method: The charts of bipolar patients treated at the Massachusetts General Hospital Bipolar Clinic (Boston, Mass.) were reviewed for gender, presence or absence of rapid cycling, and antidepressant use prior to first mania. Results: Data were obtained for 129 bipolar patients (55% women), 45% of whom had experienced a rapid-cycling course. Overall, there was no significant difference in the rates of rapid cycling between the subjects who were exposed to antidepressants prior to their first manic/ hypomanic episode and those who were not. Additional analysis carried out separately by gender found a significant association between rapid cycling and antidepressant use prior to first mania/hypomania for women but not for men. A logistic regression analysis with rapid cycling as dependent variable revealed a significant interaction between antidepressant use prior to first mania/hypomania and gender. Conclusion: We found a gender-specific relationship between antidepressant use prior to first manic/hypomanic episode and rapid-cycling bipolar illness. When antidepressants are prescribed to depressed women who have a risk of bipolar disorder, the risk of inducing rapid cycling should be considered. Differing proportions of women and men in previous studies may account for conflicting results reported in the literature for the relationship of antidepressants and rapid cycling. However, this naturalistic trial was uncontrolled, and controlled research is required to confirm our findings. abstract_id: PUBMED:34017922 A continuum hypothesis of psychotomimetic rapid antidepressants. Ketamine, classical psychedelics and sleep deprivation are associated with rapid effects on depression. Interestingly, these interventions also have common psychotomimetic actions, mirroring aspects of psychosis such as an altered sense of self, perceptual distortions and distorted thinking. This raises the question whether these interventions might be acute antidepressants through the same mechanisms that underlie some of their psychotomimetic effects. That is, perhaps some symptoms of depression can be understood as occupying the opposite end of a spectrum where elements of psychosis can be found on the other side. This review aims at reviewing the evidence underlying a proposed continuum hypothesis of psychotomimetic rapid antidepressants, suggesting that a range of psychotomimetic interventions are also acute antidepressants as well as trying to explain these common features in a hierarchical predictive coding framework, where we hypothesise that these interventions share a common mechanism by increasing the flexibility of prior expectations. Neurobiological mechanisms at play and the role of different neuromodulatory systems affected by these interventions and their role in controlling the precision of prior expectations and new sensory evidence will be reviewed. 
The proposed hypothesis will also be discussed in relation to other existing theories of antidepressants. We also suggest a number of novel experiments to test the hypothesis and highlight research areas that could provide further insights, in the hope to better understand the acute antidepressant properties of these interventions. abstract_id: PUBMED:37265558 Clinical characteristics and prescriptions associated with a 2-year course of rapid cycling and euthymia in bipolar disorder: a multicenter treatment survey for bipolar disorder in psychiatric clinics. Objective: In patients with bipolar disorder (BD), rapid cycling (RC) presents a risk for a more severe illness, while euthymia (EUT) has a better prognosis. This study focused on the progression of RC and EUT, which are contrasting phenomenology, and aimed to clarify the influence of patient backgrounds and prescription patterns on these different progressions, using a large sample from the first and second iterations of a multicenter treatment survey for BD in psychiatric clinics (MUSUBI). Methods: In the cross-sectional study (MUSUBI), a questionnaire based on a retrospective medical record survey of consecutive BD cases (N = 2,650) was distributed. The first survey was conducted in 2016, and the second one in 2017. The questionnaire collected information on patient backgrounds, current episodes, and clinical and prescribing characteristics. Results: In the first survey, 10.6% of the participants had RC and 3.6% had RC for two consecutive years, which correlated with BP I (Bipolar disorder type I), suicidal ideation, duration of illness, and the use of lithium carbonate and antipsychotic medications. Possible risk factors for switching to RC were comorbid developmental disorders and the prescription of anxiolytics and sleep medication. Moreover, 16.4% of the participants presented EUT in the first survey, and 11.0% presented EUT for two consecutive years. Possible factors for achieving EUT included older age; employment; fewer psychotic symptoms and comorbid personality disorders; fewer antidepressants, antipsychotics, and anxiolytics, and more lithium prescriptions. Conclusion: RC and EUT generally exhibit conflicting characteristics, and the conflicting social backgrounds and factors contributing to their outcomes were distinctive. Understanding these clinical characteristics may be helpful in clinical practice for management of patients with BD. abstract_id: PUBMED:37271762 Characteristics of rapid cycling in 1261 bipolar disorder patients. Background: Rapid-cycling (RC; ≥ 4 episodes/year) in bipolar disorder (BD) has been recognized since the 1970s and associated with inferior treatment response. However, associations of single years of RC with overall cycling rate, long-term morbidity, and diagnostic subtypes are not clear. Results: We compared descriptive and clinical characteristics in 1261 BD patients with/without RC, based on history and prospective follow-up for several years. RC in any previous year was identified in 9.36% of BD subjects (3.74% in BD1, 15.2% BD2), and somewhat more among women than men. 
RC-BD subjects had 3.21-fold greater average prospective annual rates of recurrence but not hospitalizations, had less difference in %-time-ill, received more mood-stabilizing treatments, and had greater suicidal risk, lacked familial psychiatric illnesses, had more cyclothymic temperament, were more likely to be married, had more siblings and children, experienced early sexual abuse, but were less likely to abuse drugs (not alcohol) or smoke. In multivariable regression modeling, older age, mood-switching with antidepressants, and BD2 > BD1 diagnosis, as well as more episodes/year were independently associated with RC. Notably, prospective mean recurrence rates were below 4/year in 79.5% of previously RC patients, and below 2/year in 48.1%. Conclusions: Lifetime risk of RC in BD was 9.36%, more likely in women, with older age, and in BD2 > BD1. With RC, recurrence rates were much higher, especially for depression with less effect on %-time ill, suggesting shorter episodes. Variable associations with unfavorable outcomes and prospective recurrence rates well below 4/year in most previously RC patients indicate that RC was not a sustained characteristic and probably was associated with use of antidepressants. abstract_id: PUBMED:35545157 Evidence-based treatment strategies for rapid cycling bipolar disorder, a systematic review. Objectives: Rapid cycling is a phase of bipolar disorder with increased episode frequencies. It is a severe and disabling condition that often poses a major challenge to the clinician. The aim of this paper is to give an overview of the evidence-based treatment options for rapid cycling. Methods: A systematic search on Pubmed, Embase and Cochrane databases from inception until December 2021 was conducted according to the PRISMA guidelines. An additional search on clinicaltrials.gov was done. References of retrieved papers and key reviews were hand-searched. Randomized controlled trials including at least 10 patients with bipolar disorder, rapid cycling, reporting an objective outcome measure were selected. Results: Our search, initially revealing 1330 articles, resulted in 16 papers about treatment of an acute mood episode, relapse prevention or both. Lithium, anticonvulsants, second generation antipsychotics, antidepressants and thyroid hormone were assessed as treatment options in the presented data. Evidence supporting the use of aripiprazole, olanzapine, quetiapine, valproate and lamotrigine for treatment of rapid cycling bipolar disorder was found. Limitations: Small sample sizes, different index episodes and variety of outcome measures. Conclusion: Evidence regarding treatment of rapid cycling remains scarce. Evidence supports the use of aripiprazole, olanzapine, and valproate for acute manic or mixed episodes, quetiapine for acute depressive episodes and aripiprazole and lamotrigine for relapse prevention. Given the paucity of available evidence, and the burden that accompanies rapid cycling, future research is warranted. abstract_id: PUBMED:30058481 Rapid-Acting Antidepressants. Background: Conventional antidepressants are thought to produce their impact on clinical symptoms by increasing the central availability of biogenic amine neurotransmitters (the monoamine hypothesis of depression). These drugs continue to be the primary medicines used in major depressive disorder. Although they have biological effects after acute dosing, full antidepressant response generally takes weeks of daily administration. 
Lack of rapid onset is a major limitation in antidepressant therapy (e.g., suicide risk, lack of medication compliance, difficulty switching medications). Methods: The present review of the literature discusses the preclinical and clinical findings on compounds that can produce immediate symptom relief. Results: These compounds include ketamine, scopolamine, and mechanistically related drugs. Newer additions to the list of potential rapid-acting agents include antagonists of metabotropic (mGlu) 2/3 receptors, negative allosteric modulators of α5-containing GABAA receptors, and psychedelic compounds. An additional benefit of these compounds is that they have demonstrated large effect sizes and, importantly, demonstrated efficacy in patients refractory to other treatments. A drawback of some of these compounds, to date, is finding ways to extend the duration of clinical efficacy. In addition, for some compounds, the side-effect profile requires management. A primary mechanism by which rapid effects might be produced is the amplification of excitatory neurotransmission through activation of AMPA receptors. The extracellular efflux of glutamate induced by these drugs has been documented and provides the hypothesized triggering mechanism for AMPA receptor amplification. Conclusion: The preclinical and clinical literature strongly suggests that rapid-acting antidepressants are the current focus of antidepressant drug discovery. Promising clinical findings exist for several compounds including ketamine and other NMDA receptor antagonists, scopolamine, and psilocybin. Two compounds are in late-stage clinical development: GLYX-13 (Rapastinel) and esketamine. abstract_id: PUBMED:34367791 Manic Episode Induced by Lamotrigine in Rapid Cycling Bipolar Disorder. Bipolar disorder is characterized by persistent and/or recurrent mood changes between depressive and manic poles. Rapid cycling is a frequent, although underrecognized, condition in bipolar disorder, and it is known to worsen the prognosis of the disease. With regard to the treatment of bipolar depression, there is a shortage of evidence-supported treatment choices, and the literature on the subject includes few references to cases of manic switch induced by lamotrigine. The authors describe a case of a rapid cycling bipolar patient who presented manic symptoms after initiating treatment with lamotrigine. Answer: The relationship between antidepressants and rapid cycling in bipolar disorder is complex and has been the subject of various studies. Some research suggests that antidepressants may worsen the course of rapid cycling in bipolar depression. For instance, a STEP-BD randomized clinical trial found that rapid cycling subjects who continued on antidepressants experienced significantly more total mood episodes per year and had less time in remission compared to non-rapid cycling subjects, despite being preselected for good antidepressant response and concurrent mood stabilizer treatment (PUBMED:26142612). This indicates that antidepressant continuation in rapid-cycling patients was associated with worsened maintenance outcomes, particularly for depressive morbidity. Another study found that the development of rapid cycling in the course of bipolar disorder is associated with a higher use of antidepressants, among other factors such as atypical depressive symptoms and age at onset (PUBMED:25682854).
Similarly, a study that reviewed the charts of bipolar patients treated at the Massachusetts General Hospital Bipolar Clinic found a gender-specific relationship between antidepressant use prior to the first manic/hypomanic episode and rapid-cycling bipolar illness, particularly in women (PUBMED:12934983). However, it is important to note that rapid cycling is not a sustained characteristic and may not be solely induced by antidepressants. A study of 1261 bipolar disorder patients found that while lifetime risk of rapid cycling was more likely in women, with older age, and in BD2 > BD1, the prospective mean recurrence rates were below 4/year in most previously rapid cycling patients, suggesting that rapid cycling was not a sustained characteristic and probably was associated with the use of antidepressants (PUBMED:37271762). In summary, while there is evidence to suggest that antidepressants may be associated with the induction or worsening of rapid cycling in some bipolar disorder patients, particularly in women, the relationship is not definitive and may not apply to all patients. Rapid cycling can be influenced by various factors, and antidepressants are just one of the potential contributors. More research is needed to fully understand the role of antidepressants in rapid cycling and to develop effective treatment strategies for this challenging phase of bipolar disorder (PUBMED:35545157).
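As a concrete illustration of the association measures cited throughout this answer (several of the studies above report odds ratios for correlates of rapid cycling), the following is a minimal sketch rather than any study's actual analysis code: it computes an unadjusted odds ratio and a Woolf 95% confidence interval from a purely hypothetical 2x2 table of antidepressant exposure versus rapid cycling. All counts are invented for illustration.

```python
import math

# Hypothetical counts (invented for illustration only):
# rows = antidepressant exposure (yes / no), columns = rapid cycling (yes / no)
a, b = 40, 160    # exposed:   rapid cycling yes / no
c, d = 25, 275    # unexposed: rapid cycling yes / no

odds_ratio = (a * d) / (b * c)

# Woolf (log-OR) method for an approximate 95% confidence interval
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
log_or = math.log(odds_ratio)
ci_low = math.exp(log_or - 1.96 * se_log_or)
ci_high = math.exp(log_or + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")
```

The published studies go further, adjusting for covariates in multivariable models, but the unadjusted odds ratio above is the basic quantity they report.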
Instruction: Is roll-your-own tobacco substitute for manufactured cigarettes: evidence from Ireland? Abstracts: abstract_id: PUBMED:23554511 Is roll-your-own tobacco substitute for manufactured cigarettes: evidence from Ireland? Background: When tax policies increase tobacco prices, some smokers may switch to smoking cheaper roll-your-own (RYO) tobacco. To reduce the harm from smoking, this substitution effect should be avoided. This study analyses whether RYO tobacco is a substitute for manufactured cigarettes (MCs) in Ireland, a country with relatively high prices for both products. Methods: Data on duty-paid consumption of RYO tobacco from 1978 to 2011 are used to estimate demand by applying seemingly unrelated regression and error correction models. Covariates include prices of tobacco in Ireland and in the UK, income, and a variable describing tobacco-related health policies. Results: We failed to find evidence of RYO tobacco being a substitute for MCs due to price differences. However, an increase in incomes (1%) is associated with a reduction in the consumption of RYO tobacco (-0.4%), which can be due to substitution towards MCs in addition to quitting or cutting back. Also, an increase in the price of RYO tobacco (1%) is associated with a reduction in its consumption (-1%). Conclusions: Increasing prices via taxation is an effective way of reducing the consumption of RYO tobacco, but due to associations between RYO tobacco smoking and lower incomes, these policies should be accompanied by measures aimed at helping smokers to quit. abstract_id: PUBMED:34725318 Effectiveness of smoking cessation programs of roll-your-own tobacco smokers in Galicia. Objective: Currently in developed countries there is an increase in the consumption of roll-your-own tobacco, which is associated with a higher proportion of users of this form of tobacco who wish to make an attempt to quit. The objective of this study was to analyze the effectiveness of tobacco cessation interventions based on the type of tobacco consumed. Methods: Longitudinal study of a cohort of 641 smokers recruited between 2015 and 2018 in a health area of Galicia included in smoking cessation programs, based on psychological counseling and pharmacological treatment. The characteristics and success of the intervention were evaluated in two groups: roll-your-own tobacco smokers (RYO) and manufactured tobacco smokers (MT). A logistic regression was performed to determine the probability of success in smoking cessation. The risk measure was the odds ratio (OR) with its 95% confidence interval. Results: RYO users started at a younger age and had a lower sociocultural stratum than MT users. Interventions to quit smoking in RYO users were associated with less abstinence at 4 weeks (52% MT vs 38% RYO) (OR: 0.5; 95% CI 0.35-0.99; p=0.045) and at 3 months (42% MT vs 30% RYO) (OR: 0.6; 95% CI 0.33-0.98; p=0.04). Conclusions: Smoking cessation programs are less effective in RYO smokers in the short term. There are no differences in long-term abstinence between the two groups. abstract_id: PUBMED:28222364 Manufactured and roll-your-own cigarettes: A changing pattern of smoking in Barcelona, Spain. Aims: The objectives of the present study were to describe smoking prevalence and compare the smoking attributes of adult smokers according to the type of tobacco product consumed.
Methods: Repeated cross-sectional surveys (2004-2005 and 2011-2012) of a representative sample of the adult (≥16 years) population in Barcelona, Spain, were used to assess self-reported tobacco consumption, smoking attributes, and salivary cotinine concentration. The survey conducted in 2004-2005 included information on 1245 subjects and the survey in 2011-2012 on 1307 individuals. Results: Smoking prevalence decreased over the study period (from 26.6% to 24.1% in self-reported daily smokers). The prevalence of daily smokers who reported the use of manufactured cigarettes declined from 23.7% in 2004-2005 to 17.3% in 2011-2012. The prevalence of roll-your-own cigarette users increased from 0.4% to 3.7%. According to data obtained in 2011-2012, the proportion of self-reported roll-your-own cigarette users was higher among men (19.8% vs. 9.5% of women), participants aged 16-44 years (22.9% vs. 5.8% of participants aged 45-65 years and 4.0% of participants aged ≥65 years), and participants with secondary and university education (17.7% and 18.5% vs. 7.9% of participants with less than primary and primary education). We did not observe differences in cotinine concentrations according to the type of tobacco product consumed. Conclusions: Systematic collection of data on smoking prevalence and smoker attributes from representative samples of the population is necessary for policymakers to develop efficient tobacco control interventions. Considering the increase of roll-your-own cigarette users and the unclear health consequences of their use, policymakers should aim to implement tax policies to equalize the prices of different types of tobacco products. abstract_id: PUBMED:30771628 Comparison of TSNAs concentration in saliva according to type of tobacco smoked. Objective: To compare tobacco-specific nitrosamines (TSNAs) measured in saliva according to different types of tobacco smoked in a sample of smokers of the city of Barcelona (Spain). Methods: We used data from a cross-sectional study of a sample of the adult smoking population of Barcelona, Spain in 2013-2014 (n = 165). We classified smokers in five groups according to the type of tobacco smoked: a) manufactured cigarettes only, b) roll-your-own (RYO) cigarettes only, c) dual smokers (both manufactured and RYO cigarettes), d) manufactured plus other types of tobacco products different from RYO and e) other types of tobacco products different from manufactured and RYO cigarettes. We calculated the geometric mean (GM) and geometric standard deviation (GSD) of TSNAs concentration in saliva (pg/mL), including N'-nitroaonornicotine (NNN), 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone (NNK) and 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanol (NNAL) according to the five tobacco groups. We also described all TSNAs concentration in each tobacco group stratified by the number of cigarettes smoked per day. Results: Smokers from the RYO cigarette group had higher TSNAs concentration than smokers from the manufactured cigarette group: 13 pg/mL vs 4.9 pg/mL of NNN, 1.9 pg/mL vs 1.7 pg/mL in NNK and 1.1 pg/mL vs 0.9 pg/mL of NNAL. There were significant differences in NNN concentrations between smokers of RYO vs manufactured cigarettes. The higher the number of cigarettes smoked, the higher the TSNAs concentrations. After adjusted by number of cigarettes smoked, there were not statistically significant differences in TSNAs between RYO and manufactured cigarettes. 
Conclusions: Our data shows that RYO cigarette is at least as hazardous as manufactured cigarettes. Regulating RYO tobacco prices could be an effective strategy to control tobacco use. abstract_id: PUBMED:30089477 It's not all about price: factors associated with roll-your-own tobacco use among young people - a qualitative study. Background: Smoking prevalence in Ireland is falling in all age groups, but the prevalence of roll-your-own (RYO) tobacco use is rising among young people. This qualitative study aims to explore and understand the factors associated with young people's use of RYO products. Methods: Semi-structured individual and focus group interviews were conducted with young people aged 16-22 years. Participants were recruited from a higher education institution and youth organisations working with early school leavers across Dublin. In total, there were 62 participants in the study, consisting of 22 individual interviews and eight focus group interviews with 40 participants. Categoric and thematic data analysis was used to generate the findings. Results: We identified two broad themes, incentivising and disincentivising factors. The lower cost of RYO products compared to pre-manufactured cigarettes was the most important incentive for users. However, other product characteristics, such as the artisanal factors associated with RYO products were also found. Social and environmental influences were apparent, in which certain groups and environments facilitated and normalised RYO practices. Amenities and facilities often provided smokers with normalised spaces which could be dedicated to the enactment of rolling practices and to the creation and maintenance of social bonds with other users. Disincentives included negative features related to the product itself, adverse health effects, and the effects of tobacco denormalisation. Conclusions: While the lower cost of RYO products is very important for young smokers, other product characteristics and influences also incentivise and disincentivise use. A more comprehensive understanding of the multi-dimensional appeal of these products will assist policymakers to target strategies to reduce the attractiveness to young smokers of these products. abstract_id: PUBMED:36381382 Prevalence and correlates of roll-your-own cigarette smoking among South African adults during 2010-2011 and 2017-2018. Introduction: The increasing use of roll-your own (RYO) cigarettes has been documented globally, but there are no recent data from South Africa, particularly among youths and low-income groups. We assessed changes in prevalence and correlates of RYO smoking among South African adults during 2010-2011 and 2017-2018, and explored expenditure differences between daily smokers of RYO and manufactured cigarettes. Methods: Nationally representative data of South Africans aged ≥16 years used in this study were from the South African Social Attitudes Survey (SASAS) for 2010-2011 (n=6116), and 2017-2018 (n=5799). Current RYO cigarette use included daily and non-daily use. Annual expenditures were estimated based on typical usage patterns for daily users. Descriptive and multivariable analyses were performed using Stata Version 15 with the level of statistical significance set at p<0.05. Results: The prevalence of ever RYO cigarette use increased from 6.5% (95% CI: 5.6-7.5) during 2010-2011, to 8.5% (95% CI: 7.0-10.0) during 2017-2018 (p=0.026). 
Current RYO cigarette use prevalence however remained largely unchanged when we compared 2010-2011 to 2017-2018 (5.2% vs 6.3%, p=0.544). During 2017-2018, current RYO cigarette use prevalence was highest among men (11.6%), those who self-identified as Coloreds (11.1%), people aged 25-34 years (7.8%), those with no schooling (7.5%), and those unemployed (9.8%). Annual expenditures associated with typical patterns of daily RYO cigarette smoking were substantially less than for smoking of manufactured cigarettes. Conclusions: The prevalence of ever RYO cigarette use increased between 2010-2011 and 2017-2018. Current RYO cigarette use during 2017-2018 was more prevalent among Coloreds, Black Africans, youths, those with lower education, and the unemployed. This study's findings highlight the need to harmonize taxation of cigarettes and RYO cigarettes, and to intensify implementation of evidence-based tobacco control and prevention interventions in South Africa. abstract_id: PUBMED:34313121 Roll-to-Roll Manufactured Sensors for Nitroaromatic Organophosphorus Pesticides Detection. A fully roll-to-roll manufactured electrochemical sensor with high sensing and manufacturing reproducibility has been developed for the detection of nitroaromatic organophosphorus pesticides (NOPPs). This sensor is based on a flexible, screen-printed silver electrode modified with a graphene nanoplatelet (GNP) coating and a zirconia (ZrO2) coating. The combination of the metal oxide and the 2-D material provided advantageous electrocatalytic activity toward NOPPs. Manufacturing, scanning electron microscopy-scanning transmission electron microscopy image analysis, electrochemical surface characterization, and detection studies illustrated high sensitivity, selectivity, and stability (∼89% current signal retention after 30 days) of the platform. The enzymeless sensor enabled rapid response time (10 min) and noncomplex detection of NOPPs through voltammetry methods. Furthermore, the proposed platform was highly group-sensitive toward NOPPs (e.g., methyl parathion (MP) and fenitrothion) with a detection limit as low as 1 μM (0.2 ppm). The sensor exhibited a linear correlation between MP concentration and current response in a range from 1 μM (0.2 ppm) to 20 μM (4.2 ppm) and from 20 to 50 μM with an R2 of 0.992 and 0.991, respectively. Broadly, this work showcases the first application of GNPs/ZrO2 complex on flexible silver screen-printed electrodes fabricated by entirely roll-to-roll manufacturing for the detection of NOPPs. abstract_id: PUBMED:25500162 Opposite trends in the consumption of manufactured and roll-your-own cigarettes in Spain (1991-2020). Objective: The aim of this study is to describe trends in the consumption of manufactured and roll-your-own cigarettes between 1991 and 2012 in Spain, and to project these trends up to 2020. Methods: We estimated daily consumption per capita during 1991-2012 using data on sales of manufactured cigarettes (20-packs) and rolling tobacco (kg) from the Tobacco Market Commission, and using data of the Spanish adult population from the National Statistics Institute. We considered different weights (0.5, 0.8 and 1 g) to compute the number of rolled cigarettes per capita. We computed the annual per cent of change and assessed possible changes in trends using joinpoint regression, and projected the consumption up to 2020 using Bayesian methods. 
Results: Daily consumption per capita of manufactured cigarettes decreased on average by 3.0% per year in 1991-2012, from 7.6 to 3.8 units, with three trend changes. However, daily consumption per capita of roll-your-own cigarettes increased on average by 14.1% per year, from 0.07 to 0.92 units of 0.5 g, with unchanged trends. Together, daily consumption per capita decreased between 2.9% and 2.5%, depending on the weight of the roll-your-own cigarettes. Projections up to 2020 indicate a decrease of manufactured cigarettes (1.75 units per capita) but an increase of roll-your-own cigarettes (1.25 units per capita). Conclusions: While the consumption per capita of manufactured cigarettes has decreased in the past years in Spain, the consumption of roll-your-own cigarettes has increased at an annual rate around 14% over the past years. Whereas a net decrease in cigarette consumption is expected in the future, use of roll-your-own cigarettes will continue to increase. abstract_id: PUBMED:31327545 Removing the last billboard for the tobacco industry: Tobacco standardized packaging in Ireland. The World Health Organization's Framework Convention on Tobacco Control (FCTC) implementation guidelines recommend that governments adopt plain packaging (also known as standardized packaging (SP)) along with pictorial health warning labels covering 50% or more of the tobacco package. The 2014 revised European Union Tobacco Product Directive also supports EU member states to introduce SP. Following these developments and recommendations, in 2013 the Department of Health launched a "Tobacco Free Ireland" policy which aimed to reduce the smoking prevalence rate to less than 5% by 2025. Tobacco Free Ireland set out 60 recommendations and measures to reach the 2025 target, including introducing SP. Tobacco companies and trademark and intellectual property organizations opposed the SP proposal claiming it would; 1) not work to reduce smoking levels, 2) increase illicit tobacco trade, 3) create unnecessary problems for retailers, and 4) violate domestic and international laws governing trademarks. However leading health groups in Ireland presented evidence on how SP would communicate the harms of tobacco to smokers and discourage young people from beginning to smoke. These efforts combined with strong political will helped Ireland to become the second country in the world to enact standardized packaging for all tobacco products. abstract_id: PUBMED:38038349 Use of Roll Your Own Tobacco in the Basque Country: magnitude, socio-economic inequalities and evolution in the period 2013-2018 Objective: Despite the global reduction in smoking, the consumption of roll-your-own tobacco has increased in recent years in many countries, which may be slowing down the downward trend in smoking. The aim of this paper was to analyse the evolution of the number of people who use roll-your-own tobacco and tobacco of any type between 2013 and 2018 in the Basque Country and measure whether there were socioeconomic inequalities in its consumption. Methods: We carried out an observational and cross-sectional study of a representative sample of the population living in the Basque Country aged 16 to 59 years (n2013=6929 y n2018=7961) from the Basque Health Survey (2013 and 2018) and the Basque Addictions Survey (2018). 
We calculated prevalences of regular tobacco smoking in general and of roll-your-own cigarettes according to different socioeconomic variables for the years 2013 and 2018, as well as prevalence ratios for the estimation of change between these years and according to the above variables through robust Poisson regression models. We stratified analyses by sex and two age groups. Results: Overall smoking prevalence decreased between 2013 (men=27.7% and women=24.1%) and 2018 but remained the same for those who used roll-your-own tobacco (in 2013, men=5.1% and women=3.2%; in 2018=5.4% and 3.5%, respectively). In contrast to tobacco in general, which showed a clear socio-economic gradient, roll-your-own tobacco use was more prevalent among young people (in 2018, men=6.4% and women=4.1%), a group where the pattern by socio-economic status was less evident. Conclusions: Considering the persistence of roll-your-own tobacco consumption, we also should direct tobacco control towards this kind of product, and its use should be monitored and supervised, especially among young people. Answer: The study conducted in Ireland, as reported in abstract PUBMED:23554511, found no evidence that roll-your-own (RYO) tobacco is a substitute for manufactured cigarettes (MCs) due to price differences. However, an increase in incomes was associated with a reduction in the consumption of RYO tobacco, which could be due to substitution towards MCs in addition to quitting or cutting back. Additionally, an increase in the price of RYO tobacco was associated with a reduction in its consumption. The study concluded that increasing prices via taxation is an effective way of reducing the consumption of RYO tobacco, but such policies should be accompanied by measures aimed at helping smokers to quit, considering the association between RYO tobacco smoking and lower incomes.
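The elasticities cited in this answer (roughly a 1% fall in RYO consumption per 1% price increase, and a 0.4% fall per 1% income increase, per PUBMED:23554511) come from demand models estimated on log-transformed data. The snippet below is a deliberately simplified sketch of how such elasticities are read off a log-log regression; it uses synthetic data and ordinary least squares, not the seemingly unrelated regression and error correction models of the original study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40  # synthetic annual observations

log_price = rng.normal(0.0, 0.2, n)
log_income = rng.normal(0.0, 0.3, n)
# "True" elasticities built into the synthetic data: -1.0 for price, -0.4 for income
log_consumption = 2.0 - 1.0 * log_price - 0.4 * log_income + rng.normal(0.0, 0.05, n)

# Ordinary least squares on the log-transformed variables:
# the slope coefficients are directly interpretable as elasticities
X = np.column_stack([np.ones(n), log_price, log_income])
coef, *_ = np.linalg.lstsq(X, log_consumption, rcond=None)
print(f"price elasticity ~ {coef[1]:.2f}, income elasticity ~ {coef[2]:.2f}")
```

In a log-log specification the fitted coefficients are directly interpretable as elasticities, which is why the abstracts can summarize their findings as percentage changes in consumption per 1% change in price or income.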
Instruction: Vertical shear injuries: is there a relationship between residual displacement and functional outcome? Abstracts: abstract_id: PUBMED:37215508 Does the residual displacement of pelvic ring affect the functional outcome in pelvic ring injures? Purpose: The effects of residual displacement on the functional outcome of the patient are not distinctly known and the acceptability criteria of residual displacement of the pelvic ring remain disputed. The purpose of this study is to evaluate the effect of residual displacement on functional outcome in pelvic ring injuries. Materials And Methods: A total of 49 patients with pelvic ring injuries (both operative and non-operative) were followed up for six months. Anteroposterior (AP), Vertical and rotational displacements were measured at admission, after surgery and at six months. Resultant displacement (vector addition of AP and vertical displacement) was taken for comparison. Displacement was graded as excellent, good, fair and poor according to Matta's criteria. Functional outcome assessment was done at six months using Majeed score. Work adjusted Majeed score was calculated for non-working patients by taking the percentage score. Results: We compared the means of residual displacement with functional outcome (Excellent/Good/Fair) and found that there was no significant difference between the groups in operative (P=0.33) or non-operative patients (P=0.09). This showed that patients with relatively higher residual displacement also had satisfactory functional outcomes. The functional outcomes were compared after dividing the residual displacement into 2 groups: <10 mm and >10 mm and no significant difference was found in outcomes for either operative or non-operative patients. Conclusion: Up to 10 mm of residual displacement is acceptable in pelvic ring injuries. More prospective studies with a longer follow up are needed for determination of correlation between reduction and functional outcome. abstract_id: PUBMED:10372618 Vertical shear injuries: is there a relationship between residual displacement and functional outcome? Background: Residual vertical displacement is often cited as being related to poor outcome in patients with pelvic injuries. This study attempts to clarify the relationship between residual vertical displacement and functional outcome. Methods: From 1982 to 1989, over 500 patients with pelvic ring injuries were treated at two Level I trauma centers. Thirty-three patients with vertical shear (Tile C) fractures and residual displacement (2-52 mm) were evaluated. Outcomes were quantified by using SF-36 Short-Form Health Survey (SF-36) and the Iowa Pelvic Score (IPS). Results: There was no correlation between IPS or SF-36 scales and residual vertical displacement. The IPS correlated (p<0.05) with seven of eight SF-36 categories, excluding mental health. Patients reporting limp and leg length discrepancy also correlated with the IPS and select SF-36. Conclusion: Pelvic injuries showed no correlation between functional outcome and residual vertical displacement suggesting other factors. The degree of residual vertical displacement does not affect functional outcome. abstract_id: PUBMED:20165681 Outcome analysis of pelvic ring fractures. Background: The behavior of pelvic ring fractures in the long run has been very sparsely studied. The purpose of this study is to assess the long-term outcome of pelvic ring fractures. 
Materials And Methods: A total of 24 patients with pelvic ring fractures, not involving the acetabulum, were followed up for an average duration of 33 months (range 24-49 months). The clinicoradiological assessment was done using the pelvic scoring system adapted from Cole et al. Parameters assessed included sacroiliac (SI) joint involvement and, among SI joint injuries, the presence of a fracture disruption and the degree of displacement. Results: Pain and limp were present in 13 patients (54.2%) each, and residual working disability in 9 patients (37.5%). The overall Cole's pelvic score was 31.3 +/- 7.02 of a total score of 40. The average pelvic score in patients with SI disruption was 29.2 +/- 6.75, much lower than in patients without SI disruption, who had an average score of 34.9 +/- 6.25; the difference reached statistical significance. The pelvic score among patients with a displacement </=10 mm was 33.0 +/- 3.92, and with a displacement >10 mm it was 25.88 +/- 7.14. The difference was statistically significant. Conclusions: Pelvic ring injuries can lead to significant long-term problems. Involvement of the SI joint adversely affects the long-term outcome, more so if the residual displacement is >10 mm. The pelvic scoring system is comprehensive and depicts subtle differences in outcome which the individual parameters of the assessment fail to show. abstract_id: PUBMED:26229163 Vertical pelvic ring displacement in pelvic ring injury: Measurements in pelvic outlet radiograph and in cadavers. Background: Vertical pelvic ring displacement (VPRD) is a serious injury and needs assessment. Pelvic outlet radiographs are routinely taken. However, the relationship between radiographic and actual VPRD is still in question. Thus, measurement of VPRD from pelvic radiographs was studied. Materials And Methods: Two dry pelvic bones and one sacrum from the same cadaver were reconstructed into a pelvic ring. Five specimens were enrolled. Vertical displacements of 10, 20 and 30 mm of the right pelvic bone were created at the levels of the sacroiliac joint and pubic symphysis to represent right VPRD. The pelvis was then positioned with a sacral inclination of 60° from the X-ray table for outlet and anteroposterior pelvic radiographs. Right VPRD was measured by referring to the superior-most pelvic articular surface of both sacroiliac joints and the sacral long axis. Radiographic VPRD and actual displacement were analyzed with the Pearson correlation coefficient, with values above 0.90 considered a strong correlation, and simple regression analysis was considered strongly significant at P < 0.01. Results: Radiographic VPRD from the outlet and anteroposterior pelvic views at 10 mm actual displacement was 20.12 ± 1.98 and 4.08 ± 3.76 mm, at 20 mm it was 40.31 ± 1.97 and 9.94 ± 7.27 mm, and at 30 mm it was 58.56 ± 2.53 and 11.29 ± 2.89 mm. Statistical analyses showed that radiographic VPRD from the pelvic outlet view is 1.95 times the actual displacement, with a strong correlation (coefficient 0.992) and a strongly significant regression analysis (P < 0.001) with an R² value of 0.984. In contrast, the measurement from the anteroposterior pelvic radiograph was not strongly significant. Conclusion: The pelvic outlet radiograph provides an efficient measurement of VPRD, at approximately twice the actual displacement. abstract_id: PUBMED:33652506 Long-term outcomes of initially conservatively treated midshaft clavicle fractures.
Background: Recent studies about completely displaced midshaft clavicle fractures have reported that their nonunion/malunion rates were significantly higher in conservatively treated patients compared to surgically treated patients. The purpose of this study was to evaluate the factors associated with treatment decisions for midshaft clavicle fractures and also the factors that affect patient satisfaction with their treatment choice. Methods: We retrospectively reviewed the records of 75 patients who had been diagnosed with a midshaft clavicle fracture and were treated conservatively at a single institution between March 1, 2013, and December 31, 2014. Their medical records were reviewed to investigate the severity of the initial vertical displacement. A telephone survey was carried out to identify the presence of any patient-perceived deformity and determine if the patient eventually underwent surgery and whether the patient would prefer surgery if the injury recurred. Results: Significantly more patients with vertical displacement ≥100% (9/28) eventually underwent surgery compared to patients with vertical displacement <100% (3/32; p=0.028). Patients with vertical displacement ≥100% (13/28) were significantly more likely to prefer surgery compared to patients with vertical displacement <100% (7/32, p=0.044). Among the conservatively treated patients, nine of 32 participants with a patient-perceived deformity and one of 16 without a patient-perceived deformity responded that they would prefer to receive surgery in same situation in the future (p=0.079). Conclusions: Patients with a midshaft clavicle fracture with vertical displacement of ≥100% may eventually require surgical treatment. When conservative treatment is carried out, the long-term patient results may be unsatisfactory due to perceived residual deformities. abstract_id: PUBMED:28390686 Do columns or sagittal displacement matter in the assessment and management of Lisfranc fracture dislocation? An alternate approach to classification of the Lisfranc injury. Aim: The classification of a Lisfranc injury has conventionally been based around Myerson's system. The aims of this study were to review whether a novel classification system based on sagittal displacement of the tarsometatarsal joint and breadth of injury as determined by a columnar theory was associated with functional outcomes and thus had a greater utility. Patients: We retrospectively reviewed 54 Lisfranc injuries with a minimum follow up of two years at our Level One Trauma Centre. Each fracture was sub-classified based on our novel classification system which assessed for evidence of sagittal displacement and involvement of columns of the midfoot. Our primary outcome measures were the FFI and AOFAS midfoot scores. Results: Injuries involving all three of the columns of the midfoot were associated with significantly worse functional outcome scores (FFI p=0.004, AOFAS p=0.036). Conversely, sagittal displacement, whether dorsal or plantar, had no significance (FFI p=0.147, AOFAS p=0.312). The best predictor of outcome was the quality of anatomical reduction (FFI p=0.008, AOFAS p=0.02). Conclusion: Column involvement and not sagittal displacement is the most significant factor in considering the severity Lisfranc injury and long term functional outcomes. This classification system has greater clinical utility than those currently proposed. 
abstract_id: PUBMED:24172672 Functional outcome analysis of triplane and tillaux fractures after closed reduction and percutaneous fixation. Background: There are limited data regarding the outcomes of triplane and tillaux fractures. The purpose of our study was to provide a long-term follow-up analysis of the functional outcomes as they relate to articular displacement using validated outcome tools. Methods: A retrospective chart and radiographic review was performed to identify the age, the sex, the treatment method, complications, the fracture type, and articular displacement. Most patients underwent closed reduction with percutaneous fixation. We used 2 validated outcome measures, namely the Foot and Ankle Outcomes Score and the Marx Activity Scale, to assess functional results at a minimum of 2 years from the time of injury. Results: We identified 78 patients between 2000 and 2009 who underwent computed tomographic scan evaluation of either a triplane (n=58) or a tillaux fracture (n=20). Triplane fractures occurred at a significantly younger age than tillaux fractures for both men and women (P=0.01). Women were found to suffer transitional fractures at a younger age than men (P<0.001). Patients with residual articular displacement of up to 2.4 mm after definitive treatment did not have worse functional outcomes compared with those who had an anatomic reduction on any of the Foot and Ankle Outcomes Score subscales and the Marx Activity Scale. Similarly, there was no correlation between the magnitude of residual gap or step-off and functional outcome when the gap and step-off were <2.5 mm. Patients with longer-term follow-up (>4 y) did not have deterioration of function compared with those with a follow-up of 2 to 4 years. Conclusions: Tillaux and triplane fractures have good medium-term outcomes after reduction and fixation. Using primarily closed reduction and percutaneous fixation techniques, patients with a residual displacement of <2.5 mm after treatment have a uniformly good result. Functional results do not deteriorate at longer-term follow-up (4 to 10 y) as long as adequate reduction has been achieved. Percutaneous fixation yielded uniformly good results on all scales. Level Of Evidence: Therapeutic level III. abstract_id: PUBMED:24458167 Isolated fractures of the greater tuberosity in proximal humerus: does the direction of displacement influence functional outcome? An analysis of displacement in greater tuberosity fractures. Background And Aim Of The Work: The treatment of isolated fractures of the greater tuberosity of the proximal humerus is still debated, especially in the case of minimally displaced fractures. Unlike other proximal humerus fractures, this kind of injury frequently affects young patients with high functional demands. The aim of this study is to assess the outcome of patients treated conservatively for isolated greater tuberosity fractures, comparing the clinical results in patients with different directions of fracture displacement. Methods: 38 patients with an isolated fracture of the greater tuberosity participated in the study and were evaluated at 17 months (10-26) of follow-up using the Constant Murley Score and DASH Score. Patients were divided into groups with different directions of displacement (10 no displacement, 13 postero-superior, 6 antero-superior and 9 antero-inferior displacement).
Results: Patients with undisplaced fractures had a mean Constant Murley Score of 82 and a mean DASH Score of 15; those with postero-superior displacement had a mean Constant Murley Score of 61 and a mean DASH Score of 43; for antero-superior displacement the mean Constant Murley Score was 76 and the mean DASH Score was 18; for the antero-inferiorly displaced the mean Constant Murley Score was 83 and the DASH Score 16. Conclusion: The data suggest that postero-superior displacement leads to poorer outcomes than undisplaced fractures, or fractures displaced in other directions, when treated conservatively. abstract_id: PUBMED:30838075 Functional outcome of the elbow in toddlers with transphyseal fracture of the distal humerus treated surgically. Purpose: To quantitatively evaluate the upper extremity and elbow function with the Mayo Elbow Performance Score (MEPS) in children with transphyseal fracture of the distal humerus (TFDH) treated surgically. Methods: During the period between 2005 and 2015, a total of 16 patients (ten male, six female) met the inclusion criteria. Mean age at the time of injury was 18 months (11 to 37) and mean follow-up was 42.3 months (6 to 98). Based on a modified version of Delee's classification (Group A to C), the clinical and radiographic outcome of TFDH in toddlers treated surgically was retrospectively evaluated. Results: The mean humeral-ulnar (HU) angle of the injured and non-injured side was 1.2° (-18° to 14°) and 8.8° (2° to 19°), respectively (p = 0.001). Closed and open reduction showed similar HU angle values (p = 0.682). The mean MEPS score of the injured and non-injured side was 85.5 points (70 to 95) and 95 points (90 to 100), respectively (p = 0.002). No significant association was identified between MEPS score and gender, side, age at trauma, direction of displacement, time from trauma to surgery, presence of ossified capitellum, type of surgery and type of fracture. Conclusion: Functional outcome was generally good regardless of the surgical procedure performed, closed or open, and the type of fracture according to the modified Delee's classification. However, residual cubitus varus was commonly observed among toddlers with transphyseal fractures of the distal humerus. Level Of Evidence: Level IV - Therapeutic study. abstract_id: PUBMED:28050457 Evaluation of Clinical Outcomes of Management of Paediatric Bone Forearm Fractures using Titanium Elastic Nailing System: A Prospective Study of 50 Cases. Introduction: Forearm fractures are common injuries in children. In older children, their management is difficult due to the increased chance of re-displacement after closed reduction. Aim: The aim of this prospective study was to evaluate the clinical outcome of managing paediatric forearm fractures using the Titanium Elastic Nailing System (TENS). Materials And Methods: A total of 50 patients in the age group 6 to 14 years with fractures of both bones of the forearm were managed by internal fixation using TENS, and prospective follow-up was done for six months. Results: On final evaluation, in terms of symptoms and range of motion of adjacent joints using the criteria of Price et al., we had excellent outcomes in 92% of patients and good in 8%. There were no significant complications except for superficial pin tract infections at the nail entry site in only 6% of patients.
Conclusion: We conclude that TENS is an effective and minimally invasive method of fixation of forearm fractures with excellent results in terms of bony union and functional outcomes with minimal complications and without jeopardizing the integrity of the physis. Answer: The relationship between residual displacement and functional outcome in vertical shear injuries has been investigated in several studies. According to the research, there appears to be no significant correlation between residual vertical displacement and functional outcome in patients with pelvic injuries. Specifically, a study that evaluated 33 patients with vertical shear (Tile C) fractures and residual displacement ranging from 2-52 mm found no correlation between residual vertical displacement and functional outcomes as quantified by the SF-36 Short-Form Health Survey (SF-36) and the Iowa Pelvic Score (IPS). The study concluded that the degree of residual vertical displacement does not affect functional outcome, suggesting that other factors may play a more significant role in determining the outcome for patients with pelvic injuries (PUBMED:10372618). Furthermore, another study that assessed the long-term outcome of pelvic ring fractures found that the involvement of the sacroiliac (SI) joint and the degree of displacement were significant factors affecting long-term outcomes. Specifically, patients with SI joint disruption and residual displacement greater than 10 mm had significantly lower pelvic scores, indicating worse outcomes. This suggests that while vertical displacement alone may not be a determinant, the involvement of specific structures like the SI joint and the extent of displacement can have a significant impact on long-term functional outcomes (PUBMED:20165681). In conclusion, while individual studies may show no direct correlation between residual vertical displacement and functional outcome in vertical shear injuries, the overall context of the injury, including the involvement of specific anatomical structures and the degree of displacement, can influence the long-term functional results. It is important to consider these factors when evaluating the potential impact of residual displacement on patient outcomes.
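Two practical numbers recur in the abstracts underlying this answer: outlet-view radiographs overstate vertical displacement by a factor of roughly 1.95 (PUBMED:26229163), and residual displacement of about 10 mm is used as an acceptability threshold (PUBMED:37215508, PUBMED:20165681). The helper below is a hypothetical illustration that combines the two; the constants come from those abstracts, while the function names and example value are assumptions of this sketch, not part of any cited study.

```python
OUTLET_VIEW_SCALE = 1.95        # radiographic VPRD is ~1.95 x actual displacement (PUBMED:26229163)
ACCEPTABLE_RESIDUAL_MM = 10.0   # acceptability cut-off discussed in PUBMED:37215508 / PUBMED:20165681

def estimated_actual_displacement(outlet_radiograph_mm: float) -> float:
    """Convert an outlet-view radiographic measurement (mm) to an estimated true displacement (mm)."""
    return outlet_radiograph_mm / OUTLET_VIEW_SCALE

def within_acceptable_residual(outlet_radiograph_mm: float) -> bool:
    """True if the estimated residual displacement is at or below the ~10 mm cut-off."""
    return estimated_actual_displacement(outlet_radiograph_mm) <= ACCEPTABLE_RESIDUAL_MM

# Example: 16 mm measured on the outlet view corresponds to ~8.2 mm of actual displacement
measured = 16.0
print(f"{estimated_actual_displacement(measured):.1f} mm estimated actual; "
      f"acceptable: {within_acceptable_residual(measured)}")
```

This is only a reading aid for the numbers reported above; clinical grading schemes such as Matta's criteria use additional categories and clinical judgment.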
Instruction: Can ECG monitoring identify seizures? Abstracts: abstract_id: PUBMED:26324174 Can ECG monitoring identify seizures? Background: Seizures affect the autonomic control of the heart rate and respiration, and changes in these two variables are known to occur during, and even before the EEG onset of the seizure. Goal: This work aims to quantify these changes and use them to identify the ECG onset. Methods: Single-lead ECG signals were recorded from patients suffering from focal and generalized seizures. Two algorithms are proposed: one quantifies changes in the QRS morphology using principal component analysis, and one assesses cardiorespiratory interactions using phase rectified signal averaging. Results: Positive predictive values of 86.6% and 77.5% and sensitivities of 100% and 90% were achieved for focal and generalized seizures respectively. Conclusion: Results for focal seizures are in accordance with the literature, and detection of generalized seizures is improved after including respiratory information. Significance: These findings could improve monitoring systems in epilepsy, and closed-loop techniques that aim to stop seizures. abstract_id: PUBMED:29443088 Simultaneous Video-EEG-ECG Monitoring to Identify Neurocardiac Dysfunction in Mouse Models of Epilepsy. In epilepsy, seizures can evoke cardiac rhythm disturbances such as heart rate changes, conduction blocks, asystoles, and arrhythmias, which can potentially increase risk of sudden unexpected death in epilepsy (SUDEP). Electroencephalography (EEG) and electrocardiography (ECG) are widely used clinical diagnostic tools to monitor for abnormal brain and cardiac rhythms in patients. Here, a technique to simultaneously record video, EEG, and ECG in mice to measure behavior, brain, and cardiac activities, respectively, is described. The technique described herein utilizes a tethered (i.e., wired) recording configuration in which the implanted electrode on the head of the mouse is hard-wired to the recording equipment. Compared to wireless telemetry recording systems, the tethered arrangement possesses several technical advantages such as a greater possible number of channels for recording EEG or other biopotentials; lower electrode costs; and greater frequency bandwidth (i.e., sampling rate) of recordings. The basics of this technique can also be easily modified to accommodate recording other biosignals, such as electromyography (EMG) or plethysmography for assessment of muscle and respiratory activity, respectively. In addition to describing how to perform the EEG-ECG recordings, we also detail methods to quantify the resulting data for seizures, EEG spectral power, cardiac function, and heart rate variability, which we demonstrate in an example experiment using a mouse with epilepsy due to Kcna1 gene deletion. Video-EEG-ECG monitoring in mouse models of epilepsy or other neurological disease provides a powerful tool to identify dysfunction at the level of the brain, heart, or brain-heart interactions. abstract_id: PUBMED:25812938 Using photoplethysmography in heart rate monitoring of patients with epilepsy. Heart rate is a useful neurophysiological sign when monitoring seizures in patients with epilepsy. In an ambulatory setting, heart rate is measured with ECG involving electrodes on the skin. This method is uncomfortable which is burdensome for patients and is sensitive to motion artifacts, which decrease the usability of measurements. 
In this study, green light photoplethysmography, an optical technique arising from the fitness industry, was evaluated for usefulness in a medical setting. Simultaneous overnight measurements of HR with a commercially available optical heart rate (OHR) sensor and with ECG (HRECG) were performed in 7 patients with epilepsy. Overall, there was no significant difference between OHR and HRECG in random 10-minute periods during wakefulness (p=0.69) and sleep (p=1.00). The Bland-Altman analysis showed negligible mean differences. Limits of agreement were higher during wakefulness and during the occurrence of two seizures possibly because of less reliable HRECG measurements due to motion artifacts. Optical heart rate seems less sensitive to these motion artifacts, and measurements are more user-friendly. The optical heart rate sensor may fill the gap of systems for ambulatory heart rate monitoring and can be especially useful in the context of seizure detection in patients with epilepsy. abstract_id: PUBMED:29314105 Ambulatory diagnostic ECG monitoring for syncope and collapse: An assessment of clinical practice in the United States. Introduction: Diagnostic ambulatory electrocardiogram (AECG) monitoring is widely used for evaluating syncope and collapse, and practice guidelines provide recommendations regarding optimal AECG device selection. However, whether physicians utilize AECGs in accordance with the pertinent guidelines is unclear. This study assessed utilization of AECG monitoring systems for syncope and collapse diagnosis by physicians in the United States. Methods And Results: A quantitative survey was undertaken of physicians comprising multiple specialties (emergency department, n = 35; primary care, n = 35; hospitalists, n = 30; neurologists, n = 30; nonimplanting, n = 34, and implanting-cardiologists, n = 35). Depending on specialty, respondents reported that neural-reflex and orthostatic causes accounted for 17-23%, cardiac causes for 12-20%, and "neurological causes" (specifically psychogenic pseudo-syncope/pseudo-seizures and acute cerebrovascular conditions) for 7-12% of their syncope/collapse cases. The choice of AECG technology varied by specialty. Thus, despite patients having daily symptoms, 25% of respondents chose an AECG technology other than a Holter-type monitor. Conversely, when monitoring for infrequent events (e.g., less than monthly), 12-18% indicated that they would choose a 24- to 48-hour Holter, 20-34% would choose either a conventional event recorder or a mobile cardiac telemetry system, and only 53-65% would select an insertable cardiac monitor. Conclusions: In evaluation of syncope/collapse, most U.S. clinicians across specialties use AECGs appropriately, but in a substantial minority there remains discordance between choice of AECG technology and guideline-based recommendations. abstract_id: PUBMED:34250754 Performance of ECG-based seizure detection algorithms strongly depends on training and test conditions. Objective: To identify non-EEG-based signals and algorithms for detection of motor and non-motor seizures in people lying in bed during video-EEG (VEEG) monitoring and to test whether these algorithms work in freely moving people during mobile EEG recordings. Methods: Data of three groups of adult people with epilepsy (PwE) were analyzed. Group 1 underwent VEEG with additional devices (accelerometry, ECG, electrodermal activity); group 2 underwent VEEG; and group 3 underwent mobile EEG recordings both including one-lead ECG. 
All seizure types were analyzed. Feature extraction and machine-learning techniques were applied to develop seizure detection algorithms. Performance was expressed as sensitivity, precision, F1 score, and false positives per 24 hours. Results: The algorithms were developed in group 1 (35 PwE, 33 seizures) and achieved best results (F1 score 56%, sensitivity 67%, precision 45%, false positives 0.7/24 hours) when ECG features alone were used, with no improvement by including accelerometry and electrodermal activity. In group 2 (97 PwE, 255 seizures), this ECG-based algorithm largely achieved the same performance (F1 score 51%, sensitivity 39%, precision 73%, false positives 0.4/24 hours). In group 3 (30 PwE, 51 seizures), the same ECG-based algorithm failed to meet up with the performance in groups 1 and 2 (F1 score 27%, sensitivity 31%, precision 23%, false positives 1.2/24 hours). ECG-based algorithms were also separately trained on data of groups 2 and 3 and tested on the data of the other groups, yielding maximal F1 scores between 8% and 26%. Significance: Our results suggest that algorithms based on ECG features alone can provide clinically meaningful performance for automatic detection of all seizure types. Our study also underscores that the circumstances under which such algorithms were developed, and the selection of the training and test data sets need to be considered and limit the application of such systems to unseen patient groups behaving in different conditions. abstract_id: PUBMED:37106678 The Power of ECG in Semi-Automated Seizure Detection in Addition to Two-Channel behind-the-Ear EEG. Long-term home monitoring of people living with epilepsy cannot be achieved using the standard full-scalp electroencephalography (EEG) coupled with video. Wearable seizure detection devices, such as behind-the-ear EEG (bte-EEG), offer an unobtrusive method for ambulatory follow-up of this population. Combining bte-EEG with electrocardiography (ECG) can enhance automated seizure detection performance. However, such frameworks produce high false alarm rates, making visual review necessary. This study aimed to evaluate a semi-automated multimodal wearable seizure detection framework using bte-EEG and ECG. Using the SeizeIT1 dataset of 42 patients with focal epilepsy, an automated multimodal seizure detection algorithm was used to produce seizure alarms. Two reviewers evaluated the algorithm's detections twice: (1) using only bte-EEG data and (2) using bte-EEG, ECG, and heart rate signals. The readers achieved a mean sensitivity of 59.1% in the bte-EEG visual experiment, with a false detection rate of 6.5 false detections per day. Adding ECG resulted in a higher mean sensitivity (62.2%) and a largely reduced false detection rate (mean of 2.4 false detections per day), as well as an increased inter-rater agreement. The multimodal framework allows for efficient review time, making it beneficial for both clinicians and patients. abstract_id: PUBMED:36994379 Ictal ECG-based assessment of sudden unexpected death in epilepsy. Introduction: Previous case-control studies of sudden unexpected death in epilepsy (SUDEP) patients failed to identify ECG features (peri-ictal heart rate, heart rate variability, corrected QT interval, postictal heart rate recovery, and cardiac rhythm) predictive of SUDEP risk. This implied a need to derive novel metrics to assess SUDEP risk from ECG. Methods: We applied Single Spectrum Analysis and Independent Component Analysis (SSA-ICA) to remove artifact from ECG recordings. 
Then cross-frequency phase-phase coupling (PPC) was applied to a 20-s mid-seizure window and a contour of -3 dB coupling strength was determined. The contour centroid polar coordinates, amplitude (alpha) and angle (theta), were calculated. Association of alpha and theta with SUDEP was assessed and a logistic classifier for alpha was constructed. Results: Alpha was higher in SUDEP patients, compared to non-SUDEP patients (p < 0.001). Theta showed no significant difference between patient populations. The receiver operating characteristic (ROC) of a logistic classifier for alpha resulted in an area under the ROC curve (AUC) of 94% and correctly classified two test SUDEP patients. Discussion: This study develops a novel metric alpha, which highlights non-linear interactions between two rhythms in the ECG, and is predictive of SUDEP risk. abstract_id: PUBMED:34240748 The power of ECG in multimodal patient-specific seizure monitoring: Added value to an EEG-based detector using limited channels. Objective: Wearable seizure detection devices could provide more reliable seizure documentation outside the hospital compared to seizure self-reporting by patients, which is the current standard. Previously, during the SeizeIT1 project, we studied seizure detection based on behind-the-ear electroencephalography (EEG). However, the obtained sensitivities were too low for practical use, because not all seizures are associated with typical ictal EEG patterns. Therefore, in this paper, we aim to develop a multimodal automated seizure detection algorithm integrating behind-the-ear EEG and electrocardiography (ECG) for detecting focal seizures. In this framework, we quantified the added value of ECG to behind-the-ear EEG. Methods: This study analyzed three multicenter databases consisting of 135 patients having focal epilepsy and a total of 896 seizures. A patient-specific multimodal automated seizure detection algorithm was developed using behind-the-ear/temporal EEG and single-lead ECG. The EEG and ECG data were processed separately using machine learning methods. A late integration approach was applied for fusing those predictions. Results: The multimodal algorithm outperformed the EEG-based algorithm in two of three databases, with an increase of 11% and 8% in sensitivity for the same false alarm rate. Significance: ECG can be of added value to an EEG-based seizure detection algorithm using only behind-the-ear/temporal lobe electrodes for patients with focal epilepsy. abstract_id: PUBMED:38359446 An approach to detect and predict epileptic seizures with high accuracy using convolutional neural networks and single-lead-ECG signal. One of the epileptic patients' challenges is to detect the time of seizures and the possibility of predicting. This research aims to provide an algorithm based on deep learning to detect and predict the time of seizure from one to two minutes before its occurrence. The proposed Convolutional Neural Network (CNN) can detect and predict the occurrence of focal epilepsy seizures through single-lead-ECG signal processing instead of using EEG signals. The structure of the proposed CNN for seizure detection and prediction is the same. Considering the requirements of a wearable system, after a few light pre-processing steps, the ECG signal can be used as input to the neural network without any manual feature extraction step. The desired neural network learns purposeful features according to the labelled ECG signals and then performs the classification of these signals. 
Training of 39-layer CNN for seizure detection and prediction has been done separately. The proposed method can detect seizures with an accuracy of 98.84% and predict them with an accuracy of 94.29%. With this approach, the ECG signal can be a promising indicator for the construction of portable systems for monitoring the status of epileptic patients. abstract_id: PUBMED:24359249 Characterization of seizure-induced syncopes: EEG, ECG, and clinical features. Objective: Ictal bradycardia and ictal asystole (IA) are rare but severe complications of epileptic seizures. They are difficult to recognize within a seizure and their consequences remain unclear. Herein we aimed to extend the description of electrical and clinical features of seizures with IA and/or syncope. Methods: Among 828 patients with epilepsy who were admitted for presurgical video-electroencephalogram (EEG) monitoring evaluation between 2003 and 2012, we selected those presenting IA and/or syncope. We studied the electroclinical sequence of these manifestations and their correlation with electrocardiogram (ECG), and we compared seizures with or without IA among the same patients. Results: Nine (1.08%) of 828 patients (four men, mean age 43 ± 6 years) showed IA. Six patients had temporal lobe epilepsy and the others had frontal, temporooccipital, or occipital epilepsy, demonstrated by intracranial EEG in two. In these patients, 59 of 103 recorded seizures induced a reduction of heart rate (HR), leading to IA in 26. IAs were mostly (80%) symptomatic, whereas ictal HR decreases alone were not. In seizures with IA, we identified usual ictal symptoms, and then symptoms related primarily to cerebral hypoperfusion (pallor, atonia, early myoclonic jerks, loss of consciousness, hypertonia, and fall) and secondarily to cerebral reperfusion (skin flushing, late myoclonic jerks). At 32 ± 18 s after the onset of the seizure, the HR decreased progressively during 11 ± 6 s, reaching a sinusal pause for 13 ± 7 s. The duration of the IA was strongly correlated with electroclinical consequences. IA was longer in patients with atonia (14.8 ± 7 vs. 5.7 ± 3 s), late myoclonic jerks (15.8 ± 7 vs. 8 ± 6 s), hypertonia (19 ± 4.5 vs. 8.3 ± 5 s), and EEG hypoperfusion changes (16 ± 5.6 vs. 6.9 ± 5.5 s). IA may induce a fall during atonia or hypertonia. Surface and intracerebral EEG recordings showed that ictal HR decrease and IA often occurred when seizure activity became bilateral. Finally, we identified one patient with ictal syncopes but without IA, presumably related to vasoplegia. Significance: We provide a more complete description of the electroclinical features of seizures with IA, of the mechanism of falls, and distinguish between hypoperfusion and reperfusion symptoms of syncope. Identification of the mechanisms of syncope may improve management of patients with epilepsy. A pacemaker can be proposed, when parasympathetic activation provokes a negative chronotropic effect that leads to asystole. It is less likely to be useful when vasoplegic effects predominate. Answer: Yes, ECG monitoring can identify seizures. Seizures are known to affect the autonomic control of the heart rate and respiration, and changes in these variables can occur during and even before the EEG onset of the seizure. Research has aimed to quantify these changes and use them to identify the ECG onset of seizures. 
Single-lead ECG signals recorded from patients with focal and generalized seizures have been analyzed using algorithms that quantify changes in QRS morphology and assess cardiorespiratory interactions. These studies have achieved positive predictive values and sensitivities that suggest ECG monitoring can be effective in identifying seizures (PUBMED:26324174). Additionally, simultaneous video-EEG-ECG monitoring has been used to identify neurocardiac dysfunction in mouse models of epilepsy, demonstrating the utility of ECG in conjunction with EEG to monitor for abnormal brain and cardiac rhythms (PUBMED:29443088). Photoplethysmography, an optical technique, has also been evaluated for heart rate monitoring in patients with epilepsy and found to be less sensitive to motion artifacts and more user-friendly compared to ECG, suggesting its potential usefulness in seizure detection (PUBMED:25812938). Moreover, ECG-based seizure detection algorithms have been developed and tested, showing that performance strongly depends on training and test conditions. These algorithms can provide clinically meaningful performance for automatic detection of all seizure types when developed and tested under appropriate conditions (PUBMED:34250754). The addition of ECG to behind-the-ear EEG in a semi-automated multimodal wearable seizure detection framework has also been shown to increase sensitivity and reduce false detection rates (PUBMED:37106678). Furthermore, novel metrics derived from ECG, such as cross-frequency phase-phase coupling, have been predictive of sudden unexpected death in epilepsy (SUDEP) risk, highlighting the power of ECG in seizure assessment (PUBMED:36994379). The integration of ECG with limited-channel EEG-based detectors has been shown to improve the performance of multimodal patient-specific seizure monitoring (PUBMED:34240748). Lastly, deep learning algorithms using single-lead-ECG signals have been proposed to detect and predict the occurrence of focal epilepsy seizures with high accuracy (PUBMED:38359446). In summary, ECG monitoring, either alone or in combination with other modalities such as EEG, has been demonstrated to be a valuable tool in identifying seizures and assessing related cardiac dysfunctions.
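The seizure-detection abstracts above report algorithm performance as sensitivity, precision, F1 score and false alarms per 24 hours. As a rough illustration of how such event-based figures can be derived from detected and annotated seizure intervals, the Python sketch below uses a simple overlap rule to match detections to seizures; the matching rule, the data and the variable names are assumptions made for the example and do not reproduce the evaluation code of any cited study.

```python
from typing import List, Tuple

Interval = Tuple[float, float]  # (start_s, end_s) of an event, in seconds

def overlaps(a: Interval, b: Interval) -> bool:
    """Return True if two time intervals share any overlap."""
    return a[0] < b[1] and b[0] < a[1]

def event_metrics(detections: List[Interval], seizures: List[Interval], record_hours: float):
    """Event-based detection metrics: a seizure counts as detected if any detection
    overlaps it; detections overlapping no seizure are counted as false alarms."""
    detected_seizures = sum(any(overlaps(d, s) for d in detections) for s in seizures)
    false_alarms = sum(not any(overlaps(d, s) for s in seizures) for d in detections)

    sensitivity = detected_seizures / len(seizures) if seizures else float("nan")
    precision = (detected_seizures / (detected_seizures + false_alarms)
                 if detected_seizures + false_alarms > 0 else float("nan"))
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if precision + sensitivity > 0 else 0.0)
    false_alarms_per_24h = false_alarms / record_hours * 24.0
    return sensitivity, precision, f1, false_alarms_per_24h

if __name__ == "__main__":
    # Hypothetical annotations and detections over a 48-hour recording (seconds).
    seizures = [(3600.0, 3660.0), (90000.0, 90120.0)]
    detections = [(3590.0, 3650.0), (50000.0, 50030.0), (90100.0, 90200.0)]
    sens, prec, f1, fp24 = event_metrics(detections, seizures, record_hours=48.0)
    print(f"sensitivity={sens:.2f} precision={prec:.2f} F1={f1:.2f} FP/24h={fp24:.2f}")
```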
Instruction: Can measures of cognitive function predict locomotor behaviour in complex environments following a traumatic brain injury? Abstracts: abstract_id: PUBMED:17453761 Can measures of cognitive function predict locomotor behaviour in complex environments following a traumatic brain injury? Primary Objective: To determine the relationships between clinical measures of executive function and attention, and laboratory measures of anticipatory locomotor adaptations with dual tasks following a TBI. Methods And Procedures: Ten people with moderate or severe TBI were compared to 10 healthy subjects for neuropsychological measures in the clinic, as well as locomotor patterns and reading time in the laboratory for adapted Stroop tasks (Bar and Word) during unobstructed and obstructed walking. Main Outcomes And Results: As previously found 1 (Vallee M, McFadyen BJ, Swaine B, Doyon J, Cantin JF, Dumas D. Effects of environmental demands on locomotion after traumatic brain injury. Archives of Physical Medicine Rehabilitation 2006;87:806--813) during the locomotor activities, subjects with TBI walked slower, had higher clearance margins and took longer to read during the Stroop tasks than healthy subjects. In general, subjects with TBI also showed deficits in executive functions and attention. Significant relationships were specifically observed between scores on Trail Making B and clearance margins for subjects with TBI, but not for healthy subjects. Alternatively, significant relationships between clinical scores on Stroop and dual task Stroop reading times were obtained for healthy subjects but not for subjects with TBI. Conclusions: These results suggest that measures of executive functioning and attention may be associated to locomotor behaviour in complex environments following a moderate to severe TBI. abstract_id: PUBMED:22275890 A combined robotic and cognitive training for locomotor rehabilitation: evidences of cerebral functional reorganization in two chronic traumatic brain injured patients. It has been demonstrated that automated locomotor training can improve walking capabilities in spinal cord-injured subjects but its effectiveness on brain damaged patients has not been well established. A possible explanation of the discordant results on the efficacy of robotic training in patients with cerebral lesions could be that these patients, besides stimulation of physiological motor patterns through passive leg movements, also need to train the cognitive aspects of motor control. Indeed, another way to stimulate cerebral motor areas in paretic patients is to use the cognitive function of motor imagery. A promising possibility is thus to combine sensorimotor training with the use of motor imagery. The aim of this paper is to assess changes in brain activations after a combined sensorimotor and cognitive training for gait rehabilitation. The protocol consisted of the integrated use of a robotic gait orthosis prototype with locomotor imagery tasks. Assessment was conducted on two patients with chronic traumatic brain injury and major gait impairments, using functional magnetic resonance imaging. Physiatric functional scales were used to assess clinical outcomes. Results showed greater activation post-training in the sensorimotor and supplementary motor cortices, as well as enhanced functional connectivity within the motor network. Improvements in balance and, to a lesser extent, in gait outcomes were also found. 
abstract_id: PUBMED:26872445 Relationships between risky sexual behaviour, dysexecutive problems, and mental health in the years following interdisciplinary TBI rehabilitation. Little is known about the long-term consequences of traumatic brain injury (TBI) regarding risky sexual behaviour. The objectives of the study were (1) to compare risky sexual behaviour in a sample of individuals with TBI having received interdisciplinary rehabilitation with that of healthy controls, and (2) to explore the relationships between risky sexual behaviour, executive functions, and mental health in individuals with TBI. The study group consisted of 42 individuals with TBI with a mean age of 37.9 years (SD = 9.7), 12.8 years of education (SD = 3.3), and 3.3 years post-injury (SD = 4.3). Healthy controls consisted of 47 participants, with a mean age of 37.6 years (SD = 10.7), and 13 years of education (SD = 3). Risky sexual behaviour was measured with the Sexual Risk Survey and executive function with the Dysexecutive Questionnaire. Mental health measures included the Generalised Anxiety Disorder Scale, and the Patient Health Questionnaire for depression. Compared to healthy controls, individuals with TBI reported more dysexecutive and mental health problems, without differences in risky sexual behaviour. In individuals with TBI, risky sexual behaviour was associated with behavioural, cognitive and emotional dysexecutive problems, but not with anxiety or depression. It was concluded that special attention should be given to individuals with TBI showing difficulties in executive functions given their association with risky sexual behaviour. abstract_id: PUBMED:22294408 Predicting story goodness performance from cognitive measures following traumatic brain injury. Purpose: This study examined the prediction of performance on measures of the Story Goodness Index (SGI; Lê, Coelho, Mozeiko, & Grafman, 2011) from executive function (EF) and memory measures following traumatic brain injury (TBI). It was hypothesized that EF and memory measures would significantly predict SGI outcomes. Method: One hundred sixty-seven individuals with TBI participated in the study. Story retellings were analyzed using the SGI protocol. Three cognitive measures--Delis-Kaplan Executive Function System (D-KEFS; Delis, Kaplan, & Kramer, 2001) Sorting Test, Wechsler Memory Scale--Third Edition (WMS-III; Wechsler, 1997) Working Memory Primary Index (WMI), and WMS-III Immediate Memory Primary Index (IMI)--were entered into a multiple linear regression model for each discourse measure. Two sets of regression analyses were performed, the first with the Sorting Test as the first predictor and the second with it as the last. Results: The first set of regression analyses identified the Sorting Test and IMI as the only significant predictors of performance on measures of the SGI. The second set identified all measures as significant predictors when evaluating each step of the regression function. Conclusion: The cognitive variables predicted performance on the SGI measures, although there were differences in the amount of explained variance. The results (a) suggest that storytelling ability draws on a number of underlying skills and (b) underscore the importance of using discrete cognitive tasks rather than broad cognitive indices to investigate the cognitive substrates of discourse. 
abstract_id: PUBMED:33361617 The effectiveness of cognitive behaviour therapy for reducing anxiety symptoms following traumatic brain injury: A meta-analysis and systematic review. Background: Anxiety is a common neuropsychological sequela following traumatic brain injury (TBI). Cognitive Behaviour Therapy (CBT) is a recommended, first-line intervention for anxiety disorders in the non-TBI clinical population; however, its effectiveness after TBI remains unclear and findings are inconsistent. Objective: There are no current meta-analyses exploring the efficacy of CBT as an intervention for anxiety symptoms following TBI, using controlled trials. The aim of the current study, therefore, was to systematically review and synthesize the evidence from controlled trials for the effectiveness of CBT for anxiety, specifically within the TBI population. Method: A systematic review of intervention studies utilising CBT and anxiety-related outcome measures in a TBI population was performed by searching three electronic databases (Web of Science, PubMed and PsycInfo). Studies were further evaluated for quality of evidence based on Reichow's (2011) quality appraisal tool. Baseline and outcome data were extracted from the 10 controlled trials that met the inclusion criteria, and effect sizes were calculated. Results: A random effects meta-analysis identified a small overall effect size (Cohen's d) of d = -0.26 (95%CI -0.41 to -0.11) of CBT interventions reducing anxiety symptoms following TBI. Conclusions: This meta-analysis tentatively supports the view that CBT interventions may be effective in reducing anxiety symptoms in some patients following TBI; however, the effect sizes are smaller than those reported for non-TBI clinical populations. Clinical implications and limitations of the current meta-analysis are discussed. abstract_id: PUBMED:31017049 Cognitive Reserve and Age Predict Cognitive Recovery after Mild to Severe Traumatic Brain Injury. The persistence of injury-related cognitive impairments can have devastating consequences for everyday function after traumatic brain injury (TBI). This longitudinal study examined the association of long-term cognitive recovery in 109 adults (71% male) experiencing complicated mild-to-severe TBI with age, pre-morbid intelligence (IQ), and injury severity measured by post-traumatic amnesia (PTA) duration. Participants twice completed measures of pre-morbid IQ (National Adult Reading Test), attention (Digit Symbol Coding Test), memory (Rey Auditory Verbal Learning Test), and executive function (Trail Making Test Part-B) at a mean of 43.73 days post-TBI and again at a mean of 3.70 years (range 23-72 months) post-injury. A healthy control group comprising 63 adults (59% male) completed the measures once. At initial assessment, TBI participants performed significantly worse on all measures compared with the healthy control group. Within the TBI group, shorter PTA duration, younger age, and higher pre-morbid IQ were associated with better initial cognitive performance. Cognitive task performance improved significantly in the TBI group at follow-up between two and five years later but remained significantly below control group means. Notably, higher pre-morbid IQ and younger age were associated with greater cognitive recovery at follow-up, whereas PTA duration was not. These findings support the role of cognitive reserve and age in cognitive recovery after TBI and may inform prognostication and rehabilitation.
Additional research is needed to elucidate the biological mechanisms of cognitive reserve in cognitive recovery after TBI. abstract_id: PUBMED:36572789 Establishing severity levels for patient-reported measures of functional communication, participation, and perceived cognitive function for adults with acquired cognitive and language disorders. Purpose: To empirically assign severity levels (e.g., mild, moderate) to four relatively new patient-reported outcome measures (PROMs) for adults with acquired cognitive/language disorders. They include the Communicative Participation Item Bank, the Aphasia Communication Outcome Measure, and Neuro-QoL's item banks of Cognitive Function (v2.0) and Ability to Participate in Social Roles and Activities (v1.0). Method: We conducted 17 focus groups that comprised 22 adults with an acquired cognitive/language disorder from stroke, Parkinson's disease, or traumatic brain injury; 30 care partners of an adult with an acquired cognitive/language disorder; and 42 speech-language pathologists who had experience assessing/treating individuals with those and other cognitive/language disorders. In a small, moderated focus-group format, participants completed "PROM-bookmarking" procedures: They discussed hypothetical vignettes based on PROM item responses about people with cognitive/language disorders and had to reach consensus regarding whether their symptoms/function should be categorized as within normal limits or mild, moderate, or severe challenges. Results: There was generally good agreement among the stakeholder groups about how to classify vignettes, particularly when they reflected very high or low functioning. People with aphasia described a larger range of functional communication challenges as "mild" compared to other stakeholder types. Based on a consensus across groups, we present severity levels for specific score ranges for each PROM. Conclusion: Standardized, stakeholder-informed severity levels that aid interpretation of PROM scores can help clinicians and researchers derive better clinical meaning from those scores, for example, by identifying important clinical windows of opportunity and assessing when symptoms have returned to a "normal" range. abstract_id: PUBMED:35743597 Evaluating a Novel Treatment Adapting a Cognitive Behaviour Therapy Approach for Sexuality Problems after Traumatic Brain Injury: A Single Case Design with Nonconcurrent Multiple Baselines. There has been little progress in development of evidence-based interventions to improve sexuality outcomes for individuals with traumatic brain injury (TBI). This study aimed to evaluate the preliminary efficacy of an individualised intervention using a cognitive behaviour therapy (CBT) framework to treat sexuality problems after TBI. A nonconcurrent multiple baseline single-case design with 8-week follow-up and randomisation to multiple baseline lengths (3, 4, or 6 weeks) was repeated across nine participants (five female) with complicated mild-severe TBI (mean age = 46.44 years (SD = 12.67), mean post-traumatic amnesia = 29.14 days (SD = 29.76), mean time post-injury = 6.56 years (median = 2.50 years, SD = 10.11)). Treatment comprised eight weekly, individual sessions, combining behavioural, cognitive, and educational strategies to address diverse sexuality problems. Clinical psychologists adopted a flexible, patient-centred, and goal-orientated approach whilst following a treatment guide and accommodating TBI-related impairments. 
Target behaviour was subjective ratings of satisfaction with sexuality, measured three times weekly. Secondary outcomes included measures of sexuality, mood, self-esteem, and participation. Goal attainment scaling (GAS) was used to measure personally meaningful goals. Preliminary support was shown for intervention effectiveness, with most cases demonstrating sustained improvements in subjective sexuality satisfaction and GAS goal attainment. Based on the current findings, larger clinical trials are warranted. abstract_id: PUBMED:30990370 Predictors of working alliance in cognitive behaviour therapy adapted for traumatic brain injury. Cognitive Behaviour Therapy (CBT) has the strongest preliminary support for treatment of depression and anxiety following traumatic brain injury (TBI). TBI associated cognitive impairments may pose an obstacle to development of a strong working alliance, on which therapeutic gains depend. The current study examined the association of demographic (i.e., gender, age at study entry, years of education and premorbid IQ) and injury-related (i.e., years since injury, post-trauma amnesia duration, memory and executive functioning test performance) variables with alliance in CBT adapted for TBI (CBT-ABI). The audio-recordings of 177 CBT-ABI sessions from 31 participants were assessed with an observer version of the Working Alliance Inventory at nine time-points. Multi-level mixed model regressions showed that participants and therapists maintained a relatively strong alliance across all sessions. Pre-intervention symptom severity was considered as a confounder variable and was found to have no statistically significant influence on the models. None of the demographic variables were significantly associated with alliance scores. More years since injury was associated with a stronger alliance. These findings demonstrate that TBI associated cognitive impairments do not necessarily pose an obstacle to development and maintenance of a strong working alliance, which is more likely to develop with more time post-brain injury. abstract_id: PUBMED:21199379 Systematic review of current executive function measures in adults with and without cognitive impairments. Background: Executive function pertains to higher cognitive processes historically linked to frontal lobes. Several measures are available to screen for executive function; however, no gold standard exists. The difficulty in assessing executive function is the existence of its many subsets. Objectives: To evaluate the psychometric properties of executive function measures and determine the most effective measure(s) through a systematic review of the literature. Search Strategy: The search strategy utilised a comprehensive literature review of articles written in the English language published from January 2003 to September 2009. The following electronic databases were searched: SCOPUS, PUBMED, Medline Ovid, PsychArticles and CINAHL Plus. Initial key words used were 'executive function', 'measures', 'reliability' and 'validity' followed by the addition of 'traumatic brain injury'. The initial search elicited 226 articles, of which 28 were retrieved. After further exclusion 19 were included in the review. Results: Eight measures underwent factor analysis and 18 underwent various forms of reliability and/or validity testing. Factor analysis showed different aspects of executive functions. 
According to preset evaluation criteria, only the Test of Practical Judgment performed all of the recommended reliability and validity testing. REVIEWER'S CONCLUSION: Of the recently developed measures, several show promise for future use yet further validity and reliability testing is warranted. Future tool development should measure all subsets of executive function rather than only a few and include the recommended components of reliability and validity testing. Answer: Yes, measures of cognitive function can predict locomotor behaviour in complex environments following a traumatic brain injury (TBI). A study found significant relationships between clinical measures of executive function and attention, and laboratory measures of anticipatory locomotor adaptations with dual tasks following a TBI. Specifically, scores on the Trail Making B test were related to clearance margins for subjects with TBI, indicating that executive functioning and attention may be associated with locomotor behaviour in complex environments after a moderate to severe TBI (PUBMED:17453761). Additionally, cognitive training combined with robotic gait orthosis and locomotor imagery tasks has shown evidence of cerebral functional reorganization in chronic TBI patients, suggesting that cognitive aspects of motor control are important for gait rehabilitation (PUBMED:22275890).
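Several of the abstracts above relate an outcome (story retelling quality, dual-task reading time, obstacle clearance margins) to cognitive test scores through correlation or multiple linear regression, as in the Story Goodness analyses (PUBMED:22294408). The sketch below shows, on invented data, how such a regression could be fitted with ordinary least squares in NumPy; the predictors, outcome and coefficients are illustrative assumptions only, not values from the cited studies.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40  # hypothetical number of participants with TBI

# Hypothetical cognitive predictors: sorting-test score, working-memory index,
# immediate-memory index (loosely mirroring the D-KEFS/WMS-III measures named above).
X = np.column_stack([
    rng.normal(10, 2, n),
    rng.normal(100, 15, n),
    rng.normal(100, 15, n),
])

# Hypothetical outcome (e.g., a story-goodness score or clearance margin),
# generated from an assumed linear relationship plus noise.
y = 5.0 + X @ np.array([1.5, 0.2, 0.1]) + rng.normal(0, 3, n)

# Ordinary least squares with an intercept column.
X_design = np.column_stack([np.ones(n), X])
beta_hat, *_ = np.linalg.lstsq(X_design, y, rcond=None)

# R^2: share of outcome variance explained by the cognitive measures.
residuals = y - X_design @ beta_hat
r_squared = 1 - residuals.var() / y.var()
print("intercept and coefficients:", np.round(beta_hat, 2))
print("R^2:", round(float(r_squared), 3))
```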
Instruction: Can CT features be used to diagnose surgical adult bowel intussusceptions? Abstracts: abstract_id: PUBMED:19620445 Can CT features be used to diagnose surgical adult bowel intussusceptions? Objective: The purpose of our study was to identify whether any CT characteristics can be used to diagnose surgical intussusceptions. Materials And Methods: A search of CT reports on adults revealed 118 patients with 136 intussusceptions. Two blinded readers independently reviewed the CT examinations and documented intussusception characteristics. Medical records were reviewed to determine patient outcome. Performance, interobserver agreement (A), and significance of CT characteristics to identify surgical intussusceptions were calculated. Results: Of 95,223 CT examinations, 0.13% (121/95,223) documented 136 intussusceptions over a 7-year period, of which 88.2% (120/136) were enteroenteric, 3.7% (5/136) were enterocolic, and 4.4% (6/136) were colocolic lesions or in other locations. Eight (5.9%) were surgical and 128 (94.12%) were nonsurgical lesions. Five of eight (63%) surgical lesions involved the colon. Only two of eight surgical lesions had malignant lead points. The mean sensitivity, specificity, positive predictive value, and negative predictive value for diagnosing surgical enteroenteric lesions using a measured lesion length of > 3.5 cm were 100%, 57.3%, 5.7%, and 100% (A = 0.68), respectively. Similar figures using the measured axial diameter > 3 cm were 100%, 32.9%, 3.7% and 100% (A= 0.65), respectively. Conclusion: Surgical intussusceptions in adults are infrequent among the intussusceptions that are detected on CT. Most enteroenteric lesions are nonsurgical lesions, whereas lesions that affect the colon are often surgical. Many nonsurgical enteroenteric intussusceptions are longer than 3.5 cm and thicker than 3 cm, suggesting these CT features may not be useful for diagnosing surgical bowel intussusceptions in adults. abstract_id: PUBMED:12668740 Distinguishing features of self-limiting adult small-bowel intussusception identified at CT. Purpose: To determine if clinical or computed tomographic (CT) findings can be used to distinguish self-limiting cases of adult small-bowel intussusception from those requiring surgery. Materials And Methods: Thirty-seven cases of adult small-bowel intussusception were identified by a retrospective computerized search of 69,040 abdominopelvic CT examinations performed over a 4-year period. Two independent readers recorded CT features. Clinical findings and outcomes were determined by review of all available medical records. Outcome was classified as either surgical or self-limiting. Association between predictive variables and outcome was assessed by the Fisher exact test and logistic regression models. A multivariate, stepwise, logistic regression model was used to determine the best predictors of outcome. Results: Six patients (16%) underwent surgery, and all had lead-point tumors. Thirty-one patients were cared for conservatively (84%) and none required surgery at a mean follow-up of 5.2 months (range, 0-46 months). Multivariate, stepwise, logistic regression analysis showed intussusception length was the only variable that was independently predictive of outcome. All 20 patients with an intussusception length of 3.5 cm or less, as measured by either reader, had cases that were self-limiting. Seventeen patients had an intussusception length greater than 3.5 cm, as measured by either reader. 
Eleven patients had an intussusception that was self-limiting, and six patients had an intussusception that required surgery. Conclusion: Intussusception length is the main factor in distinguishing the majority of small-bowel intussusceptions detected at CT that are self-limiting from the minority that require surgery. An intussusception that is shorter than 3.5 cm is likely to be self-limiting. abstract_id: PUBMED:36989624 Adult intussusception: A case report. Introduction: Intussusception is a recognised cause of bowel obstruction and is significantly more common in paediatric patients. Adult intussusception is rare and requires a predisposing factor in the vast majority of patients and this may include a benign lesion, a malignant lesion or bowel wall abnormality such as inflammatory bowel disease. Presentation Of Case: We present the case of a patient presenting with upper abdominal pain, nausea, vomiting and weakness. The patient had ongoing mild intermittent abdominal pain for months which was investigated with an oesophagogastroduodenoscopy (OGD) that showed gastritis. There were features of peritonism on clinical examination and this was associated with raised inflammatory markers. A Computed Tomography (CT) scan showed telescoping of the ileum with proximal bowel distension suggesting bowel obstruction. The patient underwent emergency surgical resection. Histopathology assessment did not identify a causative factor for the intussusception. Discussion: We present a case of idiopathic entero-enteric intussusception in an adult which is not commonly seen. Clinical history of chronic intermittent abdominal pain and CT abdomen are helpful in establishing the diagnosis. Despite that conservative approach is described in the literature, surgery continues to be the only option in patients who are unstable and show signs of peritonitis. Conclusion: Adult intussusception is not a common condition and can be difficult to diagnose. Patients with intussusception may report a relatively long period of intermittent abdominal pain that worsen acutely due to complete obstruction. CT is the most useful investigative modality to confirm the diagnosis of intussusception. abstract_id: PUBMED:34498667 Role of clinical and CT findings in the identification of adult small-bowel intussusception requiring surgical intervention. Background: In adults, intussusception has been considered traditionally to have an underlying aetiology. The aim of this study was to determine CT and clinical features of small-bowel intussusceptions that required surgical intervention. Methods: Adult patients were identified in whom small-bowel intussusceptions were noted on CT images. The appearance, number, type (enteroenteric versus enterocolic), length and maximum short-axis diameter of intussusceptions, and presence of bowel obstruction (short-axis diameter of proximal small bowel greater than 3 cm) were analysed. The outcome was defined as surgical (complicated) or self-limiting (uncomplicated). Associations between complicated and uncomplicated intussusceptions and patient characteristics were investigated. Results: Among 75 patients (56 male) with a mean age of 45 years, 103 intussusceptions were identified, of which 98 (95 per cent) were enteroenteric and 5 (5 per cent) enterocolic. Only 12 of 103 intussusceptions (12 per cent) in 12 of 75 (16 per cent) patients required surgical therapy and were considered to be complicated, with half of these having a neoplastic lead point. 
Length (P < 0.001), diameter (P < 0.001) and type (P = 0.002) of intussusception as well as presence of vessels (P = 0.023) within an intussusception on a CT scan, clinical symptoms (P = 0.007) and signs of bowel obstruction (P < 0.001) were associated with a surgical outcome. Conclusion: Clinical symptoms, signs of bowel obstruction, type and length of intussusception, and a visible tumour within an intussusception on CT scan were critical signs of complicated intussusception, requiring surgical intervention. abstract_id: PUBMED:31762778 Ultrasound in the evaluation of necrotic bowel in children: A pictorial essay. Necrotic bowel is a serious condition involving death of gastrointestinal tissue. The diagnosis is difficult to make clinically, and plain radiography is often inconclusive. Ultrasonography is an inexpensive, portable and readily available complementary diagnostic tool. In some cases, ultrasonography can detect features of necrotic bowel earlier than plain radiography or when plain radiography is equivocal and does not correlate with the clinical findings. This pictorial essay aims to compare the ultrasonography features of normal bowel and necrotic bowel in children. The role of ultrasonography and the ultrasonographic features of necrotic bowel will be illustrated by discussing some of the causes of necrotic bowel in children. Correlation with plain radiographs and pathological specimens is made. Frequent causes of necrotic bowel in neonates include necrotising enterocolitis, malrotation with small bowel volvulus and incarcerated inguinal hernias. Causes in older children include intussusception, complications of Meckel's diverticulum, post-surgical adhesions, internal hernias and vasculitic abnormalities. Ultrasonography features suggestive of necrotic bowel include persistently dilated loops of aperistaltic bowel, increase or decrease of bowel wall thickness, intramural or portal venous gas, loss of bowel wall perfusion, and free intraperitoneal gas and fluid. The diagnosis of necrotic bowel may be made earlier on ultrasonography than on abdominal radiographs alone. This pictorial essay will familiarise the reader with the role of ultrasonography and the ultrasonographic features of necrotic bowel through a wide range of conditions that may cause necrotic bowel in children. Familiarity with these findings will facilitate timely imaging diagnosis of necrotic bowel before complications develop. abstract_id: PUBMED:18357454 Chronic intestinal pseudo-obstruction in adult patients: multidetector row helical CT features. Chronic intestinal pseudo-obstruction (CIPO) is a rare condition due to severe gastrointestinal motility disorder. Adult patients with CIPO experience symptoms of mechanical obstruction, but reliable clinical signs that may help distinguish between actual mechanical obstruction and CIPO are lacking. Additionally, abdominal plain films that commonly show bowel dilatation with air-fluid levels do not reach acceptable degrees of specificity to exclude actual obstruction. Therefore, most adult patients with CIPO usually undergo multiple and often fruitless surgery, often leading to repeated bowel resections before diagnosis is made. In these patients who present with abdominal signs mimicking symptoms that would warrant surgical exploration, multidetector-row helical CT (MDCT) is helpful to resolve this diagnostic dilemma. 
MDCT shows a diffusely distended bowel and helps to rule out a mechanical cause of obstruction, thus suggesting CIPO and obviating the need for unnecessary laparotomy. In adult patients with CIPO, MDCT may show pneumatosis intestinalis, pneumoperitoneum or intussusception. However, these conditions generally do not require surgery in patients with CIPO. This pictorial essay presents the more and less common MDCT features of CIPO in adult patients, to make the reader more familiar with this disease. abstract_id: PUBMED:11677436 Adult intestinal intussusception: comparison between CT features and surgical findings Purpose: To report our experience in the use of CT in the evaluation of adult intestinal intussusception. A correlation between radiologic findings and surgery was attempted in order to verify our diagnostic hypothesis. Material And Methods: Intestinal intussusception was diagnosed by CT between September 1993 and December 2000 in 10 patients (6 men and 4 women, age range 18-82 years). For 9/10 patients the diagnosis was confirmed by surgery; in one patient the condition resolved spontaneously as confirmed by a follow-up CT performed 24 hours after the first. Six patients were studied by a third generation CT and four by a helical CT unit. Five patients were administered iodinated oral contrast medium and seven received i.v. iodinated contrast medium. Diagnosis was hypothesized at CT on the basis of the following aspects: target, sausage, and pseudokidney. Results: Surgery confirmed the CT diagnosis of intussusception in 9/10 patients; in the patient with transient intussusception a spontaneous resolution was confirmed at follow-up CT after 24 h. In the 10 patients studied the following CT aspects were observed in the bowel involved by intussusception: a target aspect in 4 patients; a sausage aspect in 4 patients; in one of them a peritoneal effusion was also present; a pseudokidney aspect was observed in 2 patients, with a long tract intussusception and peritoneal adipose tissue; in both a hyperdensity of the peritoneal adipose tissue and enlarged vessels, which were herniated within the lumen of the bowel, were recognized. At surgery the causes of intestinal intussusception were: malignant tumors of the colon (4 cases), hamartomatous polyps of the small bowel (2 cases), myxoid fibrolipoma of the colon (one case), leiomyoma of the distal ileum (one case). In one patient no cause could be identified at CT, but the condition was found to co-exist with acute appendicitis at surgery. Conclusions: CT is an accurate technique in the evaluation of intestinal intussusception in adults. It allows identification of the condition, its site, and sometimes the type of the lesion causing the intussusception, and any vascular alteration, thus playing an important role in establishing the most appropriate therapeutic strategy. abstract_id: PUBMED:1437925 Intussusception in the adult: clinical, radiological and histological features. Intussusception in the adult is an unusual cause of bowel obstruction. Unlike in childhood the clinical presentation is not clear cut and there are no distinct pathognomonic features. The radiological features are variable. Five patients presented to Frenchay Hospital over a five month period. The patients' clinical courses demonstrate the differing presenting features. Ultrasound investigation and CT scanning may show characteristic signs. Surgical treatment is mandatory as there is nearly always an underlying pathological abnormality which may be malignant.
abstract_id: PUBMED:34026581 Bowel intussusception in adult: Prevalence, diagnostic tools and therapy. Intussusception is defined as invagination of one segment of the bowel into an immediately adjacent segment. The intussusceptum refers to the proximal segment that invaginates into the distal segment, or the intussuscipiens (recipient segment). Intussusception more commonly occurs in the small bowel and rarely involves only the large bowel. In direct contrast to pediatric etiologies, adult intussusception is associated with an identifiable cause in almost all the symptomatic cases while the idiopathic causes are extremely rare. As there are many common causes of acute abdomen, intussusception should be considered when more frequent etiologies have been ruled out. In this review, we discuss the symptoms, location, etiology, characteristics, diagnostic methods and treatment strategies of this rare and enigmatic clinical entity in adults. abstract_id: PUBMED:25998975 Conservative management of adult small bowel intussusception detected at abdominal computed tomography Background/aims: The incidence of adult small bowel intussusception detected at CT has increased with advanced imaging techniques and universal utilization of CT scan. We aimed to identify factors that could predict the necessity of surgical intervention in adult patients with small bowel intussusception detected at CT during the past decade. Methods: There were 39 cases of adult small-bowel intussusception detected at CT from January 2004 to June 2014. The data on clinical factors, radiological factors and outcomes were collected by retrospectively reviewing all available medical records. Patients were classified as surgical group and conservative group according to the outcome. Association between predictive factors and outcome was assessed by Fisher's exact test and logistic regression models.
Another study indicated that intussusception length is a key factor in distinguishing self-limiting small-bowel intussusceptions from those requiring surgery, with intussusceptions shorter than 3.5 cm likely to be self-limiting (PUBMED:12668740). Further research has shown that clinical symptoms, signs of bowel obstruction, type and length of intussusception, and a visible tumor within an intussusception on a CT scan are critical signs of complicated intussusception that necessitate surgical intervention (PUBMED:34498667). Additionally, factors such as higher white blood cell counts, the presence of obstruction or a leading point, and longer intussusception length have been associated with the need for surgical intervention (PUBMED:25998975). In summary, while CT features can be used to diagnose surgical adult bowel intussusceptions, it is important to consider a combination of CT findings, clinical symptoms, and other factors to accurately determine the necessity of surgical treatment.
Instruction: Does health differ between participants and non-participants in the MRI-HUNT study, a population based neuroimaging study? Abstracts: abstract_id: PUBMED:22846223 Does health differ between participants and non-participants in the MRI-HUNT study, a population based neuroimaging study? The Nord-Trøndelag health studies 1984-2009. Background: Bias with regard to participation in epidemiological studies can have a large impact on the generalizability of results. Our aim was to investigate the direction and magnitude of potential bias by comparing health-related factors among participants and non-participants in a MRI-study based on HUNT, a large Norwegian health survey. Methods: Of 14,033 individuals aged 50-65, who had participated in all three large public health surveys within the Norwegian county of Nord-Trøndelag (HUNT 1, 2 and 3), 1,560 who lived within 45 minutes of travel from the city of Levanger were invited to a MRI study (MRI-HUNT). The sample of participants in MRI-HUNT (n = 1,006) were compared with those who were invited but did not participate (n = 554) and with those who were eligible but not invited (n = 12,473), using univariate analyses and logistic regression analyses adjusting for age and education level. Results: Self-reported health did not differ between the three groups, but participants had a higher education level and were somewhat younger than the two other groups. In the adjusted multivariate analyses, obesity was consistently less prevalent among participants. Significant differences in blood pressure and cholesterol were also found. Conclusion: This is the first large population-based study comparing participants and non-participants in an MRI study with regard to general health. The groups were not widely different, but participants had a higher level of education, and were less likely to be obese and have hypertension, and were slightly younger than non-participants. The observed differences between participants and non-invited individuals are probably partly explained by the inclusion criterion that participants had to live within 45 minutes of transport to where the MRI examination took place. One will expect that the participants have somewhat less brain morphological changes related to cardiovascular risk factors than the general population. Such consequences underline the crucial importance of evaluation of non-participants in MRI studies. abstract_id: PUBMED:28878932 Individuals at high risk for type 2 diabetes invited to a lifestyle program: characteristics of participants versus non-participants (the HUNT Study) and 24-month follow-up of participants (the VEND-RISK Study). Objective: Prevention of type 2 diabetes mellitus is possible through lifestyle programs, but the effect depends on the program's content, resources, and setting. Lifestyle programs are often confronted with high rates of non-participation and attrition. This study invited individuals at high risk for type 2 diabetes to a lifestyle program in the Norwegian primary healthcare setting. The aims were to investigate possible differences in characteristics between participants and non-participants and to study the effect of the lifestyle program at 24-month follow-up for participants. Research Design And Methods: Individuals identified at high risk for type 2 diabetes during the third survey of the Nord-Trøndelag Health Study (HUNT3) from two municipalities (n=332) were invited to a lifestyle program (the VEND-RISK Study). 
A cross-sectional design was used to explore if the participants' characteristics differed from non-participants. A non-randomized, single-arm, pre-post examination was used to examine the effect of the lifestyle program on participants' characteristics at 24-month follow-up. Results: Of all individuals at high risk for type 2 diabetes invited to the lifestyle program, 86% (287/332) declined to participate. Non-participating women had fewer years of education (p<0.001), compared with participating women. For men, no differences were seen between non-participants and participants. Among all participants (n=45) at 24-month follow-up, none had developed type 2 diabetes, and HbA1c (p<0.001) had decreased significantly. There was a small reduction in mean body mass index from baseline to 24 months that was not statistically significant. For women, waist circumference (-4.0 cm, p<0.001) decreased significantly. Conclusions: Future research regarding individuals at high risk for type 2 diabetes in the primary healthcare lifestyle program should focus on how to promote recruitment of women with low education. Participants attending this study's lifestyle program improved their cardiometabolic markers. Clinical Trials Registration: NCT01135901; Results. abstract_id: PUBMED:37425375 Dementia in former amateur and professional contact sports participants: population-based cohort study, systematic review, and meta-analysis. Background: Although there is growing evidence that former professional athletes from sports characterised by repetitive head impact subsequently experience an elevated risk of dementia, the occurrence of this disorder in retired amateurs, who represent a larger population, is uncertain. The present meta-analysis integrates new results from individual-participant analyses of a cohort study of former amateur contact sports participants into a systematic review of existing studies of retired professionals and amateurs. Methods: The cohort study comprised 2005 male retired amateur athletes who had competed internationally for Finland (1920-1965) and a general population comparison group of 1386 age-equivalent men. Dementia occurrence was ascertained from linked national mortality and hospital records. For the PROSPERO-registered (CRD42022352780) systematic review, we searched PubMed and Embase from their inception to April 2023, including cohort studies published in English that reported standard estimates of association and variance. Study-specific estimates were aggregated using random-effect meta-analysis. An adapted Cochrane Risk of Bias Tool was used to assess study quality. Findings: In the cohort study, up to 46 years of health surveillance of 3391 men gave rise to 406 dementia cases (265 Alzheimer's disease). After adjustment for covariates, former boxers experienced elevated rates of dementia (hazard ratio: 3.60 [95% CI 2.46, 5.28]) and Alzheimer's disease (4.10 [2.55, 6.61]) relative to general population controls. Associations were of lower magnitude in retired wrestlers (dementia: 1.51 [0.98, 2.34]; Alzheimer's disease: 2.11 [1.28, 3.48]) and soccer players (dementia: 1.55 [1.00, 2.41]; Alzheimer's disease: 2.07 [1.23, 3.46]), with some estimates including unity. The systematic review identified 827 potentially eligible published articles, of which 9 met our inclusion criteria. These few retrieved studies all sampled men and the majority were of moderate quality. 
In sport-specific analyses according to playing level, there was a marked difference in dementia rates in onetime professional American football players (2 studies; summary risk ratio: 2.96 [95% CI 1.66, 5.30]) relative to amateurs in whom there was no suggestion of an association (2 studies; 0.90 [0.52, 1.56]). For soccer players, while dementia occurrence was raised in both erstwhile professionals (2 studies; 3.61 [2.92, 4.45]) and amateurs (1 study; 1.60 [1.11, 2.30]) there was again a suggestion of a risk differential. The only studies of boxers comprised former amateurs in whom there was a tripling in the rates of dementia (2 studies; 3.14 [95% CI 1.72, 5.74]) and Alzheimer's disease (2 studies; 3.07 [1.01, 9.38]) at follow-up compared to controls. Interpretation: Based on a small number of studies exclusively sampling men, former amateur participants in soccer, boxing, and wrestling appeared to experience an elevated risk of dementia relative to the general population. Where data allowed comparison, there was a suggestion that risks were greater amongst retired professionals relative to amateurs in the sports of soccer and American football. Whether these findings are generalisable to the contact sports not featured, and to women, warrants examination. Funding: This work was unfunded. abstract_id: PUBMED:31933509 Thirty years later: Locating and interviewing participants of the Chicago Longitudinal Study. Retaining study participants over time is essential for longitudinal studies to prevent selection bias and to achieve their long-term goals. The present paper examines the extent to which participants can be retained in a 30-year longitudinal study when a multi-pronged approach is employed. The paper specifically describes the approach that was used to locate and interview participants of the Chicago Longitudinal Study (CLS), three decades after the study began. The CLS is a prospective cohort investigation that examines the effects of the Child-Parent Center (CPC) program, a school-based intervention for low-income children from preschool through 3rd grade. The original CLS sample included a complete cohort of 1,539 children who were born in low-income areas in 1979-1980 and attended kindergarten in 1985-1986 at Chicago Public Schools. The CLS conducted a follow-up survey when participants were approximately age 35. After relatively slow initial progress, CLS researchers developed a comprehensive strategy to locate and interview participants, including: (a) adoption of detailed, manualized tracking protocol, (b) utilization of multiple search platforms, ranging from public search engines to social media, (c) assistance from state correctional facilities, and (d) neighborhood canvassing and in-person interviews. This tracking and interview process facilitated 735 completed interviews within 27 months, compared to 370 completed interviews in the 32 months prior to the launch of the comprehensive tracking protocol. Altogether, 1,105 interviews were conducted, representing an effective completion rate of 76.5%. Recommendations for strengthening response rates in other longitudinal studies are discussed. abstract_id: PUBMED:26950220 Incidental Intracranial Findings and Their Clinical Impact; The HUNT MRI Study in a General Population of 1006 Participants between 50-66 Years. Objectives: Evaluate types and prevalence of all, incidental, and clinically relevant incidental intracranial findings, i.e. 
those referred to primary physician or clinical specialist, in a cohort between 50 and 66 years from the Nord-Trøndelag Health (HUNT) study. Types of follow-up, outcome of repeated neuroimaging and neurosurgical treatment were assessed. Material And Methods: 1006 participants (530 women) underwent MRI of the head at 1.5T consisting of T1 weighted sagittal IR-FSPGR volume, axial T2 weighted, gradient echo T2* weighted and FLAIR sequences plus time of flight cerebral angiography covering the circle of Willis. The nature of a finding and if it was incidental were determined from previous radiological examinations, patient records, phone interview, and/or additional neuroimaging. Handling and outcome of the clinically relevant incidental findings were prospectively recorded. True and false positives were estimated from the repeated neuroimaging. Results: Prevalence of any intracranial finding was 32.7%. Incidental intracranial findings were present in 27.1% and clinically relevant findings in 15.1% of the participants in the HUNT MRI cohort. 185 individuals (18.4%) were contacted by phone about their findings. 40 participants (6.2%) underwent ≥ 1 additional neuroimaging session to establish etiology. Most false positives were linked to an initial diagnosis of suspected glioma, and overall positive predictive value of initial MRI was 0.90 across different diagnoses. 90.8% of the clinically relevant incidental findings were developmental and acquired cerebrovascular pathologies, the remaining 9.2% were intracranial tumors, of which extra-axial tumors predominated. In total, 3.9% of the participants were referred to a clinical specialist, and 11.7% to their primary physician. 1.4% underwent neurosurgery/radiotherapy, and 1 (0.1%) experienced a procedure related postoperative deficit. Conclusions: In a general population between 50 and 66 years most intracranial findings on MRI were incidental, and >15% of the cohort was referred to clinical-follow up. Hence good routines for handling of findings need to be in place to ensure timely and appropriate handling. abstract_id: PUBMED:26024925 Perivascular spaces and headache: A population-based imaging study (HUNT-MRI). Background: In four previous clinic-based MRI studies headache sufferers (in particular migraineurs) had more perivascular spaces (PVS) than individuals who were headache-free. Methods: The present study was part of a large longitudinal, epidemiological study (Nord-Trøndelag Health Survey (HUNT)). The 1006 participants, age 50-65 years at inclusion, had participated in all previous HUNT surveys (1-3), and been randomly selected to a population-based imaging study of the head (HUNT-MRI, 2007-2009). The number of visible PVS in the basal ganglia (BG) and hemispheric white matter (HWM) was compared in headache sufferers (migraine with and without aura, non-migrainous headache) and people who were headache-free. Results: The results showed in general small differences between headache sufferers and headache-free participants. In the cross-sectional analysis migraineurs without aura had fewer PVS than headache-free individuals in BG (OR = 0.84, 95% CI = 0.76--0.94, p value = 0.003) and in BG and HWM together (OR = 0.97, 95% CI = 0.95-1.00, p value = 0.046). No difference between long-term headache sufferers and long-term headache-free individuals with regard to number of PVS was found. 
Discussion: In contrast to previous studies, the present large, blinded, population-based study showed no increase in number of dilated PVS among headache sufferers. Fewer PVS among those with migraine without aura may be a spurious finding. abstract_id: PUBMED:25674037 Incidental findings in MRI of the paranasal sinuses in adults: a population-based study (HUNT MRI). Background: Diagnostic imaging of the head is used with increasing frequency, and often includes the paranasal sinuses, where incidental opacifications are found. To determine the clinical relevance of such findings can be challenging, and for the patient such incidental findings can give rise to concern if they are over-reported. Studies of incidental findings in the paranasal sinuses have been conducted mostly in patients referred for diagnostic imaging, hence the prevalence in the general population is not known. The purpose of this study was to determine the prevalence and size of incidental opacification in the paranasal sinuses in a non-selected adult population using magnetic resonance imaging (MRI) without medical indication, and to relate the results to sex and season. Methods: Randomly and independent of medical history, 982 participants (518 women) with a mean age of 58.5 years (range, 50-66) underwent MRI of the head as part of a large public health survey in Norway. The MRIs included 3D T1 weighted volume data and 2D axial T2 weighted image (WI). Opacifications, indicating mucosal thickenings, polyps, retention cysts, or fluid, were recorded if measuring more than 1 mm. Results: Opacifications were found in 66% of the participants. Mucosal thickenings were found in 49%, commonly in the maxillary sinuses (29%) where 25% had opacifications that were less than 4 mm in size. Other opacifications occurred in the anterior ethmoid (23%), posterior ethmoid (21%), frontal sinus (9%), and sphenoid (8%). Polyps and retention cysts were also found mainly in the maxillary sinuses in 32%. Fluid was observed in 6% of the MRIs. Mucosal thickening was observed more frequently in men than in women (P <0.05). No seasonal variation was found. Conclusions: In this large non-selected sample, incidental opacification in the paranasal sinuses was seen in two out of three participants, and mucosal thickening was seen in one out of two. Fluid was rare. Knowledge of incidental opacification is important because it can affect clinical practice. abstract_id: PUBMED:20819238 Comparison of participants and non-participants to the ORISCAV-LUX population-based study on cardiovascular risk factors in Luxembourg. Background: Poor response is a major concern in public health surveys. In a population-based ORISCAV-LUX study carried out in Grand-Duchy of Luxembourg to assess the cardiovascular risk factors, the non-response rate was not negligible. The aims of the present work were: 1) to investigate the representativeness of study sample to the general population, and 2) to compare the known demographic and cardiovascular health-related profiles of participants and non-participants. Methods: For sample representativeness, the participants were compared to the source population according to stratification criteria (age, sex and district of residence). Based on complementary information from the "medical administrative database", further analysis was carried out to assess whether the health status affected the response rate. Several demographic and morbidity indicators were used in the univariate comparison between participants and non-participants. 
Results: Among the 4452 potentially eligible subjects contacted for the study, there were finally 1432 (32.2%) participants. Compared to the source population, no differences were found for gender and district distribution. By contrast, the youngest age group was under-represented while adults and elderly were over-represented in the sample, for both genders. Globally, the investigated clinical profile of the non-participants was similar to that of participants. Hospital admission and cardiovascular health-related medical measures were comparable in both groups even after controlling for age. The participation rate was lower in Portuguese residents as compared to Luxembourgish (OR = 0.58, 95% CI: 0.48-0.69). It was also significantly associated with the professional status (P < 0.0001). Subjects from the working class were less receptive to the study than those from other professional categories. Conclusion: The 32.2% participation rate obtained in the ORISCAV-LUX survey represents the realistic achievable rate for this type of multiple-stage, nationwide, population-based surveys. It corresponds to the expected rate upon which the sample size was calculated. Given the absence of discriminating health profiles between participants and non-participants, it can be concluded that the response rate does not invalidate the results and allows generalizing the findings for the population. abstract_id: PUBMED:33641624 Participants' experiences and impressions of a group-based positive psychology intervention programme for rural adults in Ghana. Introduction: There is growing evidence that group-based mental health intervention programmes can encourage the development of peer support, psychosocial skills, and collaborative therapeutic relationships with longer lasting effects. This study explored participants' experiences of, perceived benefits of, and recommendations to improve a 10-session group-based multicomponent positive psychology intervention (mPPI)-the Inspired Life Programme (ILP)-designed to promote positive mental health and reduce symptoms of depression and negative affect in a sample of rural Ghanaian adults.Method: Face-to-face semi-structured individual interviews were conducted with 18 randomly selected programme participants three months after their participation in the ILP. Data were analysed thematically with an inductive approach.Results: Participants described their experience of the ILP as a forum for growth that granted them the opportunity to introspect, practicalise and situate everyday life challenges, connect with others, and to develop a sense of mutual accountability. Results indicate that the ILP led participants to develop a stronger sense of positivity and well-being, fructify their ideas, and to cultivate stronger social networks and relationships that led to increased vocational productiveness. Participants recommended that researchers include facets of physical health promotion in the programme and invite close relations of participants to participate in the programme.Conclusion: This study provides the first insight into participants' experiences of a group-based mPPI in Ghana. These findings may provide useful information to inform the design of context-appropriate community-based mental health interventions to fit participants' specific needs, capacities, and circumstances. abstract_id: PUBMED:34842946 Incidental findings on 3 T neuroimaging: cross-sectional observations from the population-based Rhineland Study. 
Purpose: Development of best practices for dealing with incidental findings on neuroimaging requires insight into their frequency and clinical relevance. Methods: Here, we delineate prevalence estimates with 95% confidence intervals and clinical management of incidental findings, based on the first 3589 participants of the population-based Rhineland Study (age range 30-95 years) who underwent 3 Tesla structural neuroimaging (3D, 0.8 mm³ isotropic resolution). Two trained raters independently assessed all scans for abnormalities, with confirmation and adjudication where needed by neuroradiologists. Participants were referred for diagnostic work-up depending on the potential benefit. Results: Of 3589 participants (mean age 55 ± 14 years, 2072 women), 867 had at least one possible incidental finding (24.2%). Most common were pituitary abnormalities (12.3%), arachnoid cysts (4.1%), developmental venous anomalies (2.5%), non-acute infarcts (1.8%), cavernomas (1.0%), and meningiomas (0.7%). Forty-six participants were informed about their findings, which were hitherto unknown in 40 of them (1.1%). Of these, a wait-and-see policy was applied in 19 participants (48%) and nine (23%) received treatment, while lesions in the remainder were benign, could not be confirmed, or the participant refused to inform us about their clinical diagnosis. Conclusion: Nearly one-quarter of participants had an incidental finding, but only 5% of those required referral, which mostly remained without direct clinical consequences. Answer: Yes, health does differ between participants and non-participants in the MRI-HUNT study, a population-based neuroimaging study. The study found that participants had a higher level of education and were less likely to be obese and have hypertension compared to non-participants. They were also somewhat younger than the two other groups compared in the study, which included those who were invited but did not participate and those who were eligible but not invited. Significant differences in blood pressure and cholesterol were also found between the groups. The observed differences between participants and non-invited individuals are probably partly explained by the inclusion criterion that participants had to live within 45 minutes of transport to where the MRI examination took place. It is expected that the participants have somewhat fewer brain morphological changes related to cardiovascular risk factors than the general population (PUBMED:22846223).
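The Rhineland abstract above (PUBMED:34842946) reports prevalence estimates with 95% confidence intervals, for example 867 of 3589 participants (24.2%) with at least one possible incidental finding. As a purely illustrative sketch of how such an interval can be obtained from those counts, the snippet below uses a Wilson score interval; the study does not state which interval method it actually used, and the function name and method choice are assumptions made here for illustration only.

```python
# Illustrative only: deriving a prevalence and an approximate 95% confidence
# interval from the counts reported above (867 of 3589 participants).
# The Wilson score interval is one common choice; the study's own method is
# not specified in the abstract.
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float, float]:
    """Return (point estimate, lower bound, upper bound) for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half_width = (z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))) / denom
    return p, centre - half_width, centre + half_width

p, lo, hi = wilson_ci(867, 3589)
print(f"prevalence = {p:.1%}, 95% CI ≈ {lo:.1%}-{hi:.1%}")  # ~24.2%, roughly 22.8%-25.6%
```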
Instruction: C sign: talocalcaneal coalition or flatfoot deformity? Abstracts: abstract_id: PUBMED:31789975 Tarsal coalition. Purpose Of Review: Tarsal coalitions may cause painful pes planovalgus and recurrent sprains, and can lead to arthrosis if improperly managed. In this review, we discuss the current topics related to talocalcaneal and calcaneonavicular coalitions. Recent Findings: Tarsal coalitions are initially managed with conservative therapy, and when this approach fails, surgery is performed. Treatment of calcaneonavicular coalitions involves resection of the coalition and interposition of the extensor digitorum brevis muscle or fat, and in cases of marked valgus deformity, correction of the deformity. In talocalcaneal coalitions, recommendations include coalition resection for those affecting less than 50% of the area of the posterior facet and with a less than 16° valgus, coalition resection and valgus correction for those affecting less than 50% of the area and valgus greater than 16°, and isolated valgus correction for those affecting more than 50% of the area and with a more than or less than 16° valgus. Arthrodesis is reserved as a salvage procedure. Summary: Talocalcaneal and calcaneonavicular coalitions can cause painful pes planovalgus. Their diagnoses are confirmed by plain radiograph, computed tomography, and, in cases of fibrous or cartilaginous coalitions, MRI. Initial treatment is conservative, and when symptoms persist, resection of the coalition is recommended along with tissue graft interposition with or without associated valgus correction. Arthrodesis is indicated as a salvage procedure whenever treatment fails or with advanced arthrosis. abstract_id: PUBMED:11310204 The C sign: more specific for flatfoot deformity than subtalar coalition. Objective: To assess the sensitivity and specificity of the C sign, a C-shaped line created by the outline of the talar dome and the inferior margin of the sustentaculum tali on lateral ankle radiographs, for patients with both flatfoot deformity and specifically talocalcaneal (TC) coalition. Design And Patients: All patients in this retrospective study were under 35 years of age and had a lateral ankle radiograph and follow-up CT scan for a non-traumatic indication. Forty-eight cases over the past 5 years fulfilled these criteria. Two masked musculoskeletal radiologists determined the presence or absence of the C sign for each lateral radiograph by consensus. Each CT study was then assessed by a third musculoskeletal radiologist for the presence of tarsal coalition. Observations were correlated with clinical history regarding presence or absence of flatfoot deformity. Results: Ten cases of TC coalition were diagnosed, four of which demonstrated a C sign (40%). Eight cases with a C sign were encountered, four of which had TC coalition (50%) and four did not. All patients with a positive C sign had a flatfoot clinically (100%), while only eight of 24 flatfooted patients had a C sign (33%). Conclusion: The C sign is specific, but not sensitive, for flatfoot deformity, and is neither sensitive nor specific for subtalar coalition. abstract_id: PUBMED:33453609 The C-sign in normal feet: a prospective study and novel imaging algorithm for subtalar joint tarsal coalition. Objective: The C-sign on lateral radiographs has been subject to considerable debate with respect to its reliable association to subtalar joint tarsal coalition. 
The purpose of this study was to determine to what degree subtalar joint pronation factors into (a) the appearance of both complete and incomplete type A C-signs and (b) the conspicuity of the middle facet in both flatfeet and rectus feet. Study Design: Forty-seven normal adult volunteers were enrolled in the study with a total of 92 feet, of which 42 were flexible flatfeet and 50 were rectus feet with normal subtalar joint range of motion. Lateral weight-bearing radiographs were taken of each foot in a position of (a) standing and (b) maximum subtalar joint pronation. Investigators evaluated images for the visibility of the middle facet, and the presence or absence of a continuous or interrupted type A C-sign. Principal Results: No continuous C-signs were produced with extreme pronation in either the rectus/normal or flatfoot populations. Three incomplete type A C-signs were produced with pronation, two of which were in rectus feet. Two absent middle facet signs were also produced with pronation. Absent visualization of the facet had high positive and negative predictive values (1.00 and 0.99 respectively) for the presence of an incomplete C-sign. Major Conclusions: The standard, routine lateral foot/ankle radiographic image protocol for suspected talocalcaneal coalition should be modified by routinely positioning the foot maximally supinated, thereby minimizing the possibility of producing C-signs (complete or type A incomplete) and/or absent middle facet signs with pronation in either normal feet or flatfeet. A novel algorithmic approach can then be simply applied to determine when ancillary CT scanning is warranted for suspected posterior facet coalition. abstract_id: PUBMED:37307332 Decision-Making and Management of Tarsal Coalition in the Young Adult Patient: A Critical Analysis Review. » Tarsal coalitions most commonly affect the calcaneonavicular and talocalcaneal joints in up to 13% of the general population. They alter the mechanics of the subtalar joint, limiting inversion and eversion, and place excessive stress on neighboring joints, causing pain, recurrent ankle sprains, and/or progressive pes planus during the adolescent growth spurt. » While many coalitions are identified on radiographs, advanced imaging with computed tomography or magnetic resonance imaging is sometimes required. These advanced imaging modalities also serve an essential role for surgical planning to quantify coalition involvement, identify fibrous or cartilaginous coalitions, and aid in determining the degree of deformity within the foot. » Surgical treatment is reserved for feet with persistent activity-related pain not relieved by prolonged attempts at nonoperative management, which include nonsteroidal anti-inflammatory drugs, shoe orthotics, and periods of non-weight-bearing in a cast. These conservative modalities may be successful in up to 85% of cases. » For adolescent patients, recent surgical options attempt to avoid arthrodesis and focus on coalition resection and interposition grafting with or without deformity correction.
The ultimate decision is based on the location of the pain, the size and histology of the coalition, the health of the posterior subtalar facet, the degree of flatfoot deformity, and the presence of degenerative changes in the subtalar and/or adjacent joints.» While many studies focus on subtalar motion and gait kinematics, the critical outcomes remain pain relief and future need for arthrodesis, which may be related not only to resection of the coalition but assessment of deformity, including after the resection has been performed. abstract_id: PUBMED:29130734 The Rare Lateral Cuneocuboid Coalition: A Case Report. Tarsal coalitions typically occur at the talocalcaneal or calcaneonavicular joints. Common findings are pain, limited range of motion, and a pes planus deformity. The focus of this case report includes the presentation, imaging, treatment, and outcomes for a 21-year-old woman diagnosed with a rare lateral cuneocuboid coalition with chronic pain. Clinical and radiographic examinations, typically used to diagnose the common coalition, were unremarkable. Magnetic resonance imaging was diagnostic of the lateral cuneocuboid coalition, which was successfully treated with surgical resection. At 6-year follow-up, she reports resolution of symptoms and has returned to her normal presurgical activity level pain-free. This case is only the third lateral cuneocuboid coalition reported in the literature. The rarity of this coalition and its nonsuspicious clinical presentation make it worthy of acknowledgment. abstract_id: PUBMED:26964697 Calcaneonavicular Coalition Resection With Pes Planovalgus Reconstruction. Calcaneonavicular coalitions can lead to a painful, rigid pes planovalgus deformity. Historical treatment of coalitions recommends resection. A newer concept in coalition treatment includes concomitant flatfoot reconstruction. In the present study, we hoped to demonstrate the ability to reconstruct a flatfoot deformity with concomitant calcaneonavicular coalition resection. We performed a retrospective comparative study of patients undergoing isolated calcaneonavicular bar excision (group A) with those undergoing calcaneonavicular bar excision and concomitant pes planovalgus reconstruction (group B). The radiographic parameters of pes planovalgus correction were measured on the pre- and postoperative radiographs, including talar head uncoverage, calcaneal inclination, and anteroposterior and lateral talo-first metatarsal angle. Calcaneonavicular coalition resection was performed using a standard technique with or without biologic spacers. Associated flatfoot reconstruction included posterior muscle group lengthening, calcaneal osteotomy, and/or midfoot osteotomy. Of the 27 patients, 20 were included in group A and 7 in group B. Their mean age was 18.1 years. Improvement was seen in the radiographic parameters for patients undergoing calcaneonavicular bar excision with concomitant flatfoot procedures, with statistical significance found in the calcaneal inclination (p = .013). Talar head uncoverage improved in both groups (p = .011). No change was found in the radiographic angles in patients undergoing isolated calcaneonavicular bar excision. No patients in either group developed recurrence of the coalition. The results of the present study demonstrated radiographic improvement in patients undergoing calcaneonavicular bar excision with concomitant pes planovalgus reconstruction. 
Hindfoot reconstruction with joint sparing osteotomies during coalition excision is a reasonable option to improve alignment and avoid arthrodesis. abstract_id: PUBMED:37451927 Coalition excision and corrective osteotomies versus coalition excision and arthroereisis in management of pes planovalgus with talo-calcaneal coalition in adolescents: A randomized controlled trial. Background: Talocalcaneal coalition is the most common cause of rigid flat foot in adolescents. It presents with recurrent ankle sprains, foot and ankle pain, and foot deformity. Management is still controversial. Multiple options have been utilized over the last 40 years, including coalition excision only or coalition excision with hind foot arthrodesis or corrective osteotomies. However, the effect of arthroereisis after coalition excision is still questionable. Patients And Methods: Thirty feet in 28 patients with rigid flat foot due to talocalcaneal coalition, who presented to our institution between September 2018 and April 2020, were prospectively analyzed. Randomization into two groups was performed using a computer-based allocation system: group A for coalition excision and arthroereisis, group B for coalition excision and osteotomies. Functional and radiological outcomes and complications were recorded and analyzed using Statistical Package for the Social Sciences software. Results: Thirty feet in 28 patients were included in the final analysis (15 feet in each group). One patient in each group had bilateral involvement. The mean age was 14.5 years, and the mean follow-up duration was 24 months. At final follow-up, the mean AOFAS was 78.8 ± 4.04 in group A and 76.73 ± 4.66 in group B, while the FAAM scores were 80 ± 5 and 79 ± 3 in groups A and B, respectively. The complication rate was higher in group A, although the difference was not statistically significant. Conclusion: The combination of talocalcaneal coalition resection with either corrective osteotomies or arthroereisis yielded significant improvement in functional and radiological outcomes in the management of rigid pes planovalgus. abstract_id: PUBMED:29601967 Tarsal coalition in paediatric patients. Tarsal coalition is due to failure of segmentation between two or more foot bones during embryological development at a site where the joint cleft fails to develop. Depending on the nature of the tissue connecting the bones, the abnormality is a syndesmosis, synchondrosis, or synostosis. Although the coalition exists at birth, synostosis usually develops only late during growth. Talo-calcaneal and calcaneo-navicular coalitions account for over 90% of all cases of tarsal coalition. The joint at the site of the coalition is stiff. Pain during physical activity is the main symptom, although recurrent ankle sprain is another possible presenting manifestation. During the physical examination, range-of-motion limitation at the hindfoot or midfoot should be sought, as well as varus or valgus malalignment of the hindfoot. Either pes planus or pes cavus may be seen. Calcaneo-navicular coalition may be visible on the standard radiograph, whereas talo-calcaneal coalition is best visualised by computed tomography or magnetic resonance imaging. As growth proceeds, the coalition becomes ossified and range-of-motion diminishes. Onset of the pain is often in the second decade of life or later. In patients with symptomatic tarsal coalition, the initial management should always consist of non-operative treatment for at least 6 months.
A consensus exists that surgery should be offered when non-operative treatment fails. Open resection of the coalition is the treatment of choice, although endoscopic resection is also an option. Sound evidence exists that resection of the coalition can produce favourable outcomes even in the long term. Fusion should be reserved for failure of resection, extensive coalitions, multiple coalitions, and patients with advanced osteoarthritis. abstract_id: PUBMED:29079242 Fibulocalcaneal Impingement in a Growing Child With Otherwise Asymptomatic Talocalcaneal Coalition. Subfibular impingement has been described in patients with flatfoot. It possibly occurs with valgus deformity associated with talocalcaneal coalition. We observed symptomatic unilateral fibular impingement initially on the left foot of an 11-year-old female with an otherwise asymptomatic bilateral talocalcaneal coalition. From the age of 8 years, she had complained of pain around the left fibular tip. Magnetic resonance imaging showed a partial talocalcaneal coalition. At 10 years of age, it was questioned whether the pain was related to the coalition. However, imaging of the asymptomatic right foot also showed a talocalcaneal coalition, with the coalition in both feet appearing equal. Additionally, the peroneal trochlea appeared particularly prominent, more so on the left than on the right foot. Therefore, the symptoms were suspected to have been caused only by fibulocalcaneal impingement owing to a relatively long fibula. Subperiosteal shortening of the fibula was performed when she was 11 years old. A bed for the peroneal tendons was created around the remaining epiphysis of the fibula, and the fibular ligaments were reattached with Arthrex® anchors (Arthrex, Naples, FL). At 14 months postoperatively, the patient was free of pain with unrestricted movement, although the follow-up imaging studies showed complete bony fusion on the medial aspect of the coalition between the talus and calcaneus. Approximately 1.5 years after surgery, the patient, then 12.5 years old, complained of the same problems on her right foot, again localized only around the fibula. The same procedure she had undergone on the left foot was then performed. At the last follow-up examination, she was asymptomatic 2.5 years after the initial surgery of the left foot and 11 months after surgery on the right. Talocalcaneal coalition can cause moderate to severe hindfoot deformity, leading to fibulocalcaneal impingement. Hence, treatment should be determined accordingly. abstract_id: PUBMED:25655516 Tibialis Spastic Varus Foot With Calcaneonavicular Coalition: A Case Report and Review of the Literature. Unlabelled: Peroneal spastic flat foot is a well-known condition usually occurring with tarsal coalition. Conversely, tibialis spastic varus foot is a rare condition, which can be difficult to diagnose. Moreover, tibialis spastic varus foot with calcaneonavicular coalition is extremely rare, with only a few published case reports. Resection of the calcaneonavicular bar is performed in the majority of patients. We report a case of tibialis spastic varus foot with calcaneonavicular coalition in an 11-year-old boy with intellectual disability. His family noticed his right varus foot deformity 1.5 years earlier. There was no obvious history of trauma. The deformity gradually worsened with running. Because conservative treatment failed, resection of the coalition was performed, which facilitated a good outcome.
In this article, we report our experience of tibialis spastic varus foot with calcaneonavicular coalition and review the English literature of this condition. Level Of Evidence: Therapeutic, Level IV: Case report. Answer: The C sign is a radiographic feature that has been associated with both flatfoot deformity and talocalcaneal (TC) coalition. However, it appears to be more specific for flatfoot deformity than for subtalar coalition. A study assessing the sensitivity and specificity of the C sign found that while all patients with a positive C sign had a flatfoot clinically (100%), only 33% of flatfooted patients had a C sign. Furthermore, the C sign was neither sensitive nor specific for subtalar coalition, as only 40% of TC coalition cases demonstrated a C sign, and 50% of cases with a C sign had TC coalition (PUBMED:11310204). Another study suggested that the C sign is not reliably associated with subtalar joint tarsal coalition and that pronation factors into the appearance of both complete and incomplete type A C-signs, as well as the conspicuity of the middle facet in both flatfeet and rectus feet. This study recommended modifying the standard lateral foot/ankle radiographic image protocol for suspected talocalcaneal coalition to minimize the possibility of producing C-signs with pronation in either normal or flatfeet (PUBMED:33453609). Therefore, while the C sign can be present in cases of talocalcaneal coalition, it is more specifically associated with flatfoot deformity and should be interpreted with caution, considering the clinical context and other imaging findings.
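For readers who want to trace the percentages quoted in the answer above back to the raw counts reported in PUBMED:11310204, the short sketch below simply redoes the arithmetic. Only quantities explicitly stated in that abstract are recomputed; the helper function is an illustration written here, not part of any study.

```python
# Worked arithmetic for the C-sign figures quoted above (PUBMED:11310204).
# Nothing beyond the counts reported in the abstract is inferred.

def proportion(numerator: int, denominator: int) -> float:
    return numerator / denominator

# Talocalcaneal (TC) coalition: 10 cases on CT, 4 of which showed a C sign.
sensitivity_for_coalition = proportion(4, 10)      # 0.40 -> "40%"

# C sign: 8 radiographs showed it, and 4 of those patients had TC coalition.
ppv_for_coalition = proportion(4, 8)               # 0.50 -> "50%"

# Flatfoot: all 8 patients with a C sign were clinically flatfooted,
# but only 8 of the 24 flatfooted patients showed a C sign.
ppv_for_flatfoot = proportion(8, 8)                # 1.00 -> "100%"
sensitivity_for_flatfoot = proportion(8, 24)       # 0.33 -> "33%"

print(sensitivity_for_coalition, ppv_for_coalition,
      ppv_for_flatfoot, sensitivity_for_flatfoot)
```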
Instruction: Lost to follow-up: the bottleneck of early referral to renal units? Abstracts: abstract_id: PUBMED:14586276 Lost to follow-up: the bottleneck of early referral to renal units? Background: Early referral is a major goal in chronic kidney diseases; however, loss to follow-up, potentially limiting its advantages, has never been studied. Material/methods: In order to assess the prevalence and causes of loss to follow-up, a telephone inquiry was performed in a renal outpatient unit, mainly dedicated to early referral of diabetic patients. Patients were considered to be in follow-up if there was at least one check-up in the period February 2001-February 2002, and lost to follow-up if the last check-up had occurred in the previous year. The reasons for loss to follow-up were related to typical clinical-biochemical parameters to define a "drop-out profile". Results: 195 patients were on follow-up: median creatinine 1.4 mg/dL, age 64, 76.9% diabetics. 81 patients were lost to follow-up: creatinine 1.4 mg/dL, age 70, 73.8% diabetics. A telephone number was available in 87.6% of the cases; 25 were not found, 7 had died, 24 were non-compliant, 1 was bed-ridden, 12 had changed care unit, 2 had started dialysis. The duration of renal care was shorter in those lost to follow-up; among the latter, serum creatinine and age were significantly lower in non-compliant patients. A logistic regression model confirmed the significance of lower serum creatinine at last check-up in non-compliant patients (p=0.018). Conclusions: Loss to follow-up is a problem in nephrology; lack of awareness probably causes the higher drop-out rate at lower creatinine levels. The initial period of care may be crucial for long-term compliance. Further studies are needed to tailor organizational and educational interventions. abstract_id: PUBMED:21794824 Characteristics of early arthritis units that may be associated with better referral efficiency: survey of SERAP units. Objective: To identify characteristics of early arthritis units that may be associated with better referral efficiency. Methods: A national survey of the 36 early arthritis units (EAU) in Spanish Rheumatology Units in 2004 (SERAP project). The survey collected information about the general practitioner (GP) educational program to improve knowledge and practical skills in early arthritis, networking and feedback systems, and referral efficiency. EAU were classified into two groups according to 25% and 50% thresholds of inappropriate referrals, respectively. Results: Thirty-four of the 36 EAU (94%) answered the survey. GPs were trained in only one medical meeting in the primary care clinic, with one or more rheumatologists responsible for GP education. Fourteen of the 34 EAU (42.4%) regularly interacted with GPs, and only 20 (39.4%) contacted the GPs who were responsible for inappropriate referrals. Median lag time for referral to the Rheumatology out-patient clinic was 73 days (15-365 days). The percentage of wrongly referred patients was between 0 and 80% (38% ± 21). Only 10 EAU (27.8%) referred patients appropriately according to the strictest criterion (25% inappropriate referrals), and 27 EAU (75%) according to the 50% criterion. Conclusions: Only two aspects of the EAU implementation strategy were associated with better referral efficiency: 1) interaction with the GPs responsible for the inappropriate referrals and 2) a lower median lag time for referral to the Rheumatology out-patient clinic.
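The loss-to-follow-up abstract above (PUBMED:14586276) mentions that a logistic regression model was used to relate non-compliance to serum creatinine at the last check-up. The patient-level data are not available from the abstract, so the sketch below only illustrates the general modelling approach with invented placeholder values; the variable names, sample size, and simulated numbers are assumptions made here and do not reproduce the study.

```python
# Purely illustrative sketch of the kind of logistic regression described in
# PUBMED:14586276 (non-compliance modelled against serum creatinine and age).
# The arrays below are invented placeholders, not study data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 80
creatinine = rng.normal(1.4, 0.4, n)        # mg/dL, hypothetical values
age = rng.normal(67, 10, n)                 # years, hypothetical values
# Hypothetical outcome: 1 = non-compliant / lost to follow-up, 0 = in follow-up
non_compliant = rng.binomial(1, 0.3, n)

X = sm.add_constant(np.column_stack([creatinine, age]))
model = sm.Logit(non_compliant, X).fit(disp=False)
print(model.summary(xname=["intercept", "creatinine", "age"]))
```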
abstract_id: PUBMED:1320224 Early deaths on renal replacement therapy: the need for early nephrological referral. Forty-four patients who commenced renal replacement therapy between January 1983 and January 1988 died within 1 year of starting treatment. To examine the factors influencing early mortality of patients on renal replacement therapy, these patients (group A) were compared with a group of 44 age- and sex-matched subjects who started dialysis over the same period and who survived more than 1 year (group B). The interval between first presentation and dialysis was significantly shorter in group A (median 36 days) than in group B (median 30 months). Plasma urea and creatinine were significantly greater in group A than group B at the time of first presentation to a nephrologist but not at first dialysis. Patients in group A were more often treated first by haemodialysis. Systemic disease as the cause of renal failure did not appear to influence early death. Early death on renal replacement therapy appears to be associated with late referral to a nephrologist. Early referral may be beneficial because it allows for planning of dialysis and treatment of the complications of progressive uraemia. abstract_id: PUBMED:15752229 The need for early nephrology referral. The incidence of end-stage renal disease is increasing. Progression to end stage can be slowed if kidney damage is detected at an early stage. Prognosis and outcomes in patients with chronic kidney disease have been related to the quality of predialysis care and the timing of referral. Many patients with chronic kidney disease are referred to a nephrologist close to commencement of renal replacement therapy. This leads to suboptimal management of complications of chronic renal insufficiency, and increased morbidity and mortality of patients on renal replacement therapy. This article reviews the evidence for the view that patients need to be referred early in order to avoid complications of chronic renal insufficiency. Early referral can be achieved through improved communication between primary health care givers and nephrology services. A multidisciplinary approach has a significant impact on outcomes. In the face of the rising incidence of chronic kidney disease, early referral of all patients is not possible. Therefore, identification of patients at risk for rapid deterioration of renal function is important in order to rationalize and reduce health care expenditure. abstract_id: PUBMED:19200276 The significance of the timing of referral for renal care. Research has shown that referral of patients in the early stages of renal disease to a nephrologist has a significant impact on the patient's overall health and well-being. This article is a review of the literature discussing the key impact that late referral has on the patient. This includes the preparation of the patient for dialysis (education and vascular access) as well as the choice of dialysis modality for the patient. It will also address longer-term impacts such as patient survival on dialysis. abstract_id: PUBMED:12037244 Late referral for dialysis: improving the management of chronic renal disease. Background: Timely nephrological referral of patients with chronic renal failure (CRF) is important, but referral at a late stage of disease is common. Aim: To investigate whether late referral of patients is avoidable, and where the missed opportunities lie.
Design: Prospective ascertainment of new cases and comprehensive review of pre-end stage history. Methods: Patients admitted to Bristol and Portsmouth renal units for chronic RRT between June 1997 and May 1998 were identified from computer databases. Data were collected from case notes and hospital records, and a self-administered patient questionnaire. Late referral, defined as dialysis within 4 months of first referral to a dialysing nephrologist, was categorized by algorithm as unavoidable or avoidable. Results: Of 250 patients, 96 (38%) were referred late. Forty-three (45%) had definite avoidable reasons: 35 (37%) with raised serum creatinine for a median 3.7 years (IQR 1.5-8.2) before referral, and eight (8%) with risk factors for renal disease but scant assessment of renal function; 12/43 (31%) had a diagnosis of diabetic nephropathy. Late referred patients were less likely to receive standard renal therapies for chronic renal failure, were in a poorer clinical state at start of RRT, and more often required emergency dialysis, compared to patients referred early. Late referrals were as likely from a hospital as a primary care physician. Discussion: A significant proportion of patients are avoidably referred to a dialysing renal unit at a very late stage. Guidelines on referral should be developed by nephrologists, primary and secondary care physicians, and patient groups, and further research is needed into the cost-effectiveness of early referral strategies. abstract_id: PUBMED:11684571 An economic evaluation of early versus late referral of patients with progressive renal insufficiency. Patients with progressive renal insufficiency (PRI) who start renal replacement therapy (RRT) within 4 months of seeing a nephrologist (late referral) have increased morbidity, mortality, and health care costs. We performed an economic evaluation of early versus late referral of patients with PRI to a multidisciplinary clinic. A decision analysis was performed from the perspective of the health care provider, using a Markov model to simulate progression of PRI and survival of patients on RRT. Our simulated patient cohort comprised 1,000 patients with PRI and estimated creatinine clearance of 20 mL/min. The study time horizon was 5 years. Clinical and cost data were taken from published Canadian and U.S. data, where available. Where published data were lacking, we used data from our prospectively maintained database. The study intervention was attendance at a PRI clinic where patients receive treatment to slow the rate of renal progression, receive treatment of complications of PRI, and are prepared for RRT. Endpoints were total cost of patient care, patient life-years, patient life-years free of RRT, and hospital admission days. Early referral resulted in cost savings and improved patient survival along with more life-years free of RRT and fewer hospital inpatient days. Cost-effectiveness was unaffected by univariate sensitivity analyses. Cost-effectiveness decreased as rates of renal function loss for patients referred early versus late approximated each other. In conclusion, early referral of patients with PRI to a multidisciplinary clinic appears cost-effective. abstract_id: PUBMED:19270201 Optimal referral is early referral. The number of patients receiving renal replacement therapy in the United Kingdom is rapidly rising. Chronic kidney disease (CKD) is a worldwide public health problem with significant comorbidity and mortality. 
Several organizational guidelines have been developed in an attempt to identify when appropriate referral to nephrology services should occur; however, many of these guidelines provide conflicting recommendations on referral. Recent surveys suggest that more than 30% of patients with CKD are referred later than is ideal. Late referral of patients with CKD is associated with increased patient morbidity and mortality, increased need for and duration of hospital admission, and increased initial costs of care following commencement of dialysis. Additional benefits of early referral include identifying and treating reversible causes of renal impairment and managing the multiple coexisting conditions associated with CKD. Referral time also affects the choice of treatment modality. Patients and their families should receive sufficient information regarding the nature of their CKD and the options for treatment so that they can make informed decisions concerning their care. Literature addressing when to refer to low-clearance or pre-dialysis clinics is limited. Existing data suggest that such clinics and patient education programs may facilitate improved medical care for patients, greater patient involvement in selection of the mode of dialysis, reduction in the need for "urgent start" dialysis, and improved short-term survival and quality of life after initiation of dialysis. Audit of our pre-dialysis clinic has demonstrated improved patient outcomes, and we view the early-referral service as an essential component of the patient pathway. abstract_id: PUBMED:10024934 Late referral of end-stage renal failure. We studied all new patients accepted for renal replacement therapy (RRT) in one unit from 1/1/96 to 31/12/97 (n = 198), to establish time from nephrology referral to RRT, evidence of renal disease prior to referral and the adequacy of renal management prior to referral. Sixty four (32.3%, late referral group) required RRT within 12 weeks of referral. Fifty-nine (29.8%) had recognizable signs of chronic renal failure > 26 weeks prior to referral. Patients starting RRT soon after referral were hospitalized for significantly longer on starting RRT (RRT within 12 weeks of referral, median hospitalization 25.0 days (n = 64); RRT > 12 weeks after referral, median 9.7 days (n = 126), (p < 0.001)). Observed survival at 1 year was 68.3% overall, with 1-year survival of the late referral and early referral groups being 60.5% and 72.5%, respectively (p = NS). Hypertension was found in 159 patients (80.3%): 46 (28.9%) were started on antihypertensive medication following referral, while a further 28 (17.6%) were started on additional antihypertensives. Of the diabetic population (n = 78), only 26 (33.3%) were on an angiotensin-converting-enzyme inhibitor (ACEI) at referral. Many patients are referred late for dialysis despite early signs of renal failure, and the pre-referral management of many of the patients, as evidenced by the treatment of hypertension and use of ACEI in diabetics, is less than optimal. abstract_id: PUBMED:33123632 Referral and Lost to System Rates of Two Newborn Hearing Screening Programs in Saudi Arabia. Congenital hearing loss has been commonly reported as a significant health problem. Lost to system (LTS) is a major challenge facing newborn hearing screening (NHS) programs. 
This retrospective cross-sectional descriptive study aimed to determine the referral and LTS rates after the two-stage NHS based on transient evoked otoacoustic emissions (TEOAEs) in two main hospitals in Riyadh, Saudi Arabia (SA). NHS was performed on newborns before hospital discharge. Newborns were only rescreened if NHS initially revealed a fail/refer outcome in one or both ears. Those who failed the first and second screenings or had risk factors were referred for auditory brainstem response (ABR) testing to confirm or exclude hearing loss. In total, 20,171 newborns (40,342 ears; 52% males; 48% females) were screened, of whom 19,498 (96.66%) passed the initial screening, while 673 (3.34%) failed. Of the 673 newborns, 235 (34.92%) were LTS, and 438 (65.08%) were rescreened, of whom 269 (61.42%) failed and were referred for a comprehensive audiological assessment to confirm the existence of hearing loss. The referral rate after the initial two-stage screening was equal to 1.33%. The lack of awareness of the importance of NHS among parents seems to be the major cause behind the LTS rate. The stakeholders have to work efficiently to reduce the LTS rate. Answer: Yes, loss to follow-up is indeed a bottleneck in the process of early referral to renal units. The abstract from PUBMED:14586276 highlights that loss to follow-up is a significant problem in nephrology, particularly affecting the advantages of early referral. The study found that a lack of awareness likely causes a higher drop-out rate at lower creatinine levels, suggesting that the initial period of care is critical for ensuring long-term compliance. This indicates that patients who are lost to follow-up may miss out on the benefits of early intervention in chronic kidney diseases. Furthermore, the abstract from PUBMED:1320224 supports the notion that early referral may be beneficial as it allows for planning of dialysis and treatment of the complications of progressive uraemia. The study found that early death on renal replacement therapy is associated with late referral to a nephrologist, which could potentially be mitigated by earlier intervention. The abstract from PUBMED:15752229 also emphasizes the importance of early referral, stating that many patients with chronic kidney disease are referred to a nephrologist too close to the commencement of renal replacement therapy, leading to suboptimal management and increased morbidity and mortality. Additionally, the abstract from PUBMED:19200276 discusses the significant impact of the timing of referral on patient health and well-being, including preparation for dialysis and patient survival on dialysis. Lastly, the abstract from PUBMED:12037244 points out that a significant proportion of patients are avoidably referred late to a dialysing renal unit, and that guidelines on referral should be developed to improve early referral strategies. In summary, the evidence from these abstracts suggests that loss to follow-up is indeed a critical issue that hinders the effectiveness of early referral to renal units, with significant implications for patient outcomes and healthcare efficiency.
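The economic evaluation cited above (PUBMED:11684571) used a Markov model to simulate progression of renal insufficiency and survival on renal replacement therapy for a cohort of 1,000 patients over a 5-year horizon. The sketch below shows the general mechanics of such a cohort model only; the state names, monthly cycle length, transition probabilities, and costs are invented placeholders chosen for illustration, since the published model's actual inputs are not given in the abstract.

```python
# Minimal sketch of a Markov cohort model of the general kind used in the
# economic evaluation above (PUBMED:11684571). All numeric inputs below are
# hypothetical placeholders, not data from the study.
import numpy as np

STATES = ["pre_dialysis", "dialysis", "dead"]

def run_cohort(transition: np.ndarray, cost_per_cycle: np.ndarray,
               cohort_size: int = 1000, cycles: int = 60) -> tuple[float, float]:
    """Return (total cost, total life-years) over monthly cycles."""
    dist = np.array([cohort_size, 0.0, 0.0])     # everyone starts pre-dialysis
    total_cost = 0.0
    life_years = 0.0
    for _ in range(cycles):
        dist = dist @ transition                 # advance the cohort one month
        total_cost += float(dist @ cost_per_cycle)
        life_years += float(dist[:2].sum()) / 12.0
    return total_cost, life_years

# Hypothetical monthly transition matrices (rows sum to 1): pre-dialysis -> dialysis -> dead.
early = np.array([[0.975, 0.020, 0.005],
                  [0.000, 0.985, 0.015],
                  [0.000, 0.000, 1.000]])
late = np.array([[0.960, 0.030, 0.010],
                 [0.000, 0.975, 0.025],
                 [0.000, 0.000, 1.000]])
# Hypothetical monthly cost per patient in each state.
costs = np.array([300.0, 4000.0, 0.0])

for label, matrix in [("early referral", early), ("late referral", late)]:
    cost, life_years = run_cohort(matrix, costs)
    print(f"{label}: cost ≈ {cost:,.0f}, life-years ≈ {life_years:,.0f}")
```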
Instruction: Hospital-associated Clostridium difficile infection: is it necessary to track community-onset disease? Abstracts: abstract_id: PUBMED:25449250 Severity and frequency of community-onset Clostridium difficile infection on an Australian tertiary referral hospital campus. Background: Clostridium difficile infection (CDI) is increasingly being found in populations without traditional risk factors. We compared the relative frequency, risk factors, severity, and outcomes of community-onset CDI with hospital-acquired infection. Methods: This was a retrospective, observational study of CDI at a tertiary hospital campus in Sydney, Australia. Patients aged 15 years and older with a first episode of CDI from January 1 to December 31, 2011 were included. CDI was defined as the presence of diarrhoea with a positive enzyme immunoassay in conjunction with a positive cell cytotoxicity assay, toxin culture, or organism culture. Main outcome measures were onset of infection (hospital or community), risk factors, markers of severity, and outcomes for the two groups. Results: One hundred and twenty-nine cases of CDI infection were identified, of which 38 (29%) were community-onset. The community-onset infection group were less likely to have a recent history of antibiotic use (66% vs. 98%; p<0.001) or proton pump inhibitor use (38% vs. 69%; p=0.03) than the hospital-acquired infection group. Markers of severity and outcomes were similar in the two groups, with an overall mortality of 9%. Conclusions: Community-onset CDI accounts for a large proportion of C. difficile infections and has a similar potential for severe disease as hospital-acquired infection. Using a history of previous antibiotic use, proton pump inhibitor use, or recent hospitalization to predict cases is unreliable. We recommend that patients with diarrhoea being investigated in emergency departments and community practice are tested for Clostridium difficile infection. abstract_id: PUBMED:19239377 Hospital-associated Clostridium difficile infection: is it necessary to track community-onset disease? Objectives: To compare Clostridium difficile infection (CDI) rates determined with use of a traditional definition (ie, with healthcare-onset CDI defined as diagnosis of CDI more than 48 hours after hospital admission) with rates determined with use of expanded definitions, including both healthcare-onset CDI and community-onset CDI, diagnosed within 48 hours after hospital admission in patients who were hospitalized in the previous 30 or 60 days, and to determine whether differences exist between patients with CDI onset in the community and those with CDI onset in a healthcare setting. Design: Prospective cohort. Setting: Tertiary acute care facility. Patients: General medicine patients who received a diagnosis of CDI during the period January 1, 2004, through December 31, 2005. Methods: CDI was classified as healthcare-onset CDI, healthcare facility-associated CDI after hospitalization within the previous 30 days, and/or healthcare facility-associated CDI after hospitalization within the previous 60 days. Patient demographic characteristics and medication exposures were obtained. The CDI incidence with use of each definition, CDI rate variability, patient demographic characteristics, and medication exposures were compared. 
Results: The healthcare-onset CDI rate (1.6 cases per 1,000 patient-days) was significantly lower than the 30-day healthcare facility-associated CDI rate (2.4 cases per 1,000 patient-days; P< .01) and the 60-day healthcare facility-associated CDI rate (2.6 cases per 1,000 patient-days; P< .01). There was good correlation between the healthcare-onset CDI rate and both the 30-day (correlation, 0.69; P< .01) and 60-day (correlation, 0.70; P< .01) healthcare facility-associated CDI rates. There were no months in which the CDI rate was more than 3 standard deviations from the mean. Compared with patients with healthcare-onset CDI, patients with community-onset CDI were less likely to have received a fourth-generation cephalosporin (P= .02) or intravenous vancomycin (P= .01) during hospitalization. Conclusions: Compared with the traditional definition, expanded definitions identify more patients with CDI. There is good correlation between traditional and expanded CDI definitions; therefore, it is unclear whether expanded surveillance is necessary to identify an abnormal change in CDI rates. Cases that met the expanded definitions were less likely to have occurred in patients with fourth-generation cephalosporin and vancomycin exposure. abstract_id: PUBMED:36455757 Community-onset Clostridioides difficile infection in south Serbia. Background: Data from the past decade indicates that Clostridioides difficile infection (CDI) is not only a nosocomial infection but is also increasingly recognized as a disease in the community. Objective: We aimed to study community-onset (CO) CDI in the various age groups in south Serbia with its clinical characteristics, risk factors and microbiological characterization. Methods: The study group included 93 patients with CO-CDI (median age 62). The control group consisted of 186 patients with community-onset diarrhea and stool samples that tested negative for CDI. Results: Of all CDI cases diagnosed with a community onset, 74.19% had contact with a healthcare facility in the previous 12 weeks, but 34.40% had no record of hospitalization in the previous 12 months. Using a multivariate statistical regression model, the following risk factors for CO-CDI development were found: antacid usage (OR = 0.267, 95% C.I.: 0.10-0.291, p < 0.01), chronic kidney disease (OR = 0.234, 95% C.I.: 0.10-0.51, p < 0.01) and antibiotic use during the prior 2 months (OR = 0.061, 95% C.I.: 0.02-0.17, p < 0.01), especially tetracyclines (OR = 0.146, 95% C.I.: 0.07-0.22, p < 0.01) and cephalosporins (OR = 0.110, 95% C.I.: 0.14-0.42, p < 0.01). The most common ribotypes (RTs) detected in patients with CO-CDI were RT001 (32.3%) and RT027 (24.7%). All tested toxin-producing C. difficile isolates were sensitive to metronidazole, vancomycin and tigecycline. A high rate of resistance to moxifloxacin (73.11%) and rifampicin (23.65%) was found. Conclusion: Patients with CO-CDI frequently had contact with a healthcare facility in the previous 12 weeks. Restriction of antacid usage and of high-risk antibiotics in the community may help reduce the incidence of CO-CDI.
Methods: A retrospective study was conducted at a medical center in southern Taiwan. Symptomatic patients between 2007 and 2015 with C. difficile toxin or tcdB detected in stool were identified as CDI, and were classified as CO-CDI [including community-associated CDI (CA-CDI) and community-onset health care facility-associated CDI (CO-HCFA-CDI)] and health care facility-onset CDI (HCFO-CDI). Results: Of 427 patients, 15 (3.5%) were CA-CDI, 49 (11.5%) CO-HCFA-CDI, and 363 (85.0%) HCFO-CDI. Despite major involvement of the elderly (mean age: 66.1 years vs. 69.9 years, p = 0.46), no significant differences were noted between CA-CDI and CO-HCFA-CDI groups, except that solid organ cancer was more common in the CO-HCFA-CDI group. The CO-CDI group more often presented with abdominal pain but had shorter hospital stays and less exposure to proton-pump inhibitors or broad-spectrum antibiotics than the HCFO-CDI group did. The mortality rate related to CDI was 4.7% (3 patients) in the CO-CDI group. Despite a lower in-hospital mortality rate in the CO-CDI group (10.9% vs. 22.0%; p = 0.04), the recurrence rate was similar (10.9% vs. 7.2%; p = 0.3). Conclusions: CO-CDI is not common but is associated with substantial morbidity and mortality. Physicians should consider CDI among patients who present with community-onset fever, diarrhea, or abdominal pain, alone or in combination. abstract_id: PUBMED:31063089 Modelling diverse sources of Clostridium difficile in the community: importance of animals, infants and asymptomatic carriers. Clostridium difficile infections (CDIs) affect patients in hospitals and in the community, but the relative importance of transmission in each setting is unknown. We developed a mathematical model of C. difficile transmission in a hospital and surrounding community that included infants, adults and transmission from animal reservoirs. We assessed the role of these transmission routes in maintaining disease and evaluated the recommended classification system for hospital- and community-acquired CDIs. The reproduction number in the hospital alone was below 1, while the reproduction number for the human population as a whole was at or above 1 for nearly all scenarios without transmission from animal reservoirs (range: 1.0-1.34); an estimated 3.5-26.0% of human exposures originated from animal reservoirs. Symptomatic adults accounted for <10% of transmission in the community. Under conservative assumptions, infants accounted for 17% of community transmission. An estimated 33-40% of community-acquired cases were reported but 28-39% of these reported cases were misclassified as hospital-acquired by recommended definitions. Transmission could be plausibly sustained by asymptomatically colonised adults and infants in the community or exposure to animal reservoirs, but not hospital transmission alone. Under-reporting of community-onset cases and systematic misclassification underplay the role of community transmission. abstract_id: PUBMED:30668844 Outcomes of Community and Healthcare-onset Clostridium difficile Infections. Background: Community-onset Clostridium difficile infections (CDI) are increasingly common, but there is little data on outcomes. The purpose of this study is to describe the epidemiology and outcomes of CDI in the Veterans Health Administration (VHA) system and compare these variables between hospital-onset (HCF) and community-onset (CO) cases. Methods: We conducted a retrospective cohort study that included all patients with a positive test for C.
difficile (toxin or toxin genes) within the VHA Corporate Data Warehouse between 2011 and 2014. Results: We identified 19270 episodes of CDI, involving 15972 unique patients; 95% were male, 44% of the cases were HCF, and 42% were CO. Regarding severity, 31% of cases were non-severe, 40% were severe, and 21% were fulminant. Exposure to proton pump inhibitors was found in 53% of cases (47% in CO, 62% in HCF). Overall, 40% of patients received antibiotics in the 90 days before CDI (44% in HCF, 36% in CO). Recurrence was 18.2%, and 30-day all-cause mortality was 9.2%. Risk factors for a fulminant case were exposure to clindamycin (odds ratio [OR]: 1.23, P = .01) or proton pump inhibitors (OR: 1.20, P < .001) in the 90 days prior to diagnosis. Conclusions: CO accounts for a significant proportion of CDI in the VHA system. CO patients are younger and their cases are less severe, but recurrence is more common than in HCF CDI. Therefore, CO CDI may account for a considerable reservoir of CDI cases, and prevention efforts should include interventions to reduce CO CDI. abstract_id: PUBMED:22314411 Clostridium difficile O27 colitis: hospital-onset but community-acquired. Large outbreaks of Clostridium difficile (CD) associated colitis in North America and Europe have been attributed to the emergence of the epidemic/toxin PCR Ribotype O27 CD strain (CD027). Due to the increased virulence of this epidemic strain and its propensity for causing outbreaks, we used a structured risk-assessment approach in order to determine the risks associated with the introduction of this strain within our university hospital. From February 2009 to January 2010, we identified 31 cases of CD027-associated colitis, of which twenty-one (67.7%) had symptom onset more than 48 hours after admission and were classified as nosocomial. These patients had received wide-spectrum antimicrobials for other infections in the hospital before CD027 colitis diagnosis. The 31 patients with CD027 were admitted to 20 different units, managed by distinct health-care workers (HCWs), and no contact was identified between patients during their hospital stay. Furthermore, infection control audits showed 100% compliance with institutional guidelines for control of CD colitis. These findings suggest that CD027 is most frequently acquired in the community and emerges sporadically under antibiotic pressure during hospitalization. abstract_id: PUBMED:21471341 Characterization of cases of Clostridium difficile infection (CDI) presenting at an emergency room: molecular and clinical features differentiate community-onset hospital-associated and community-associated CDI in a tertiary care hospital. Definition of community-onset, hospital-acquired Clostridium difficile infection (CO-HA-CDI) is difficult in patients presenting with diarrhea at hospitals or outpatient clinics, especially 4 to 12 weeks after the last discharge. We performed C. difficile stool culture for 272 diarrheic patients visiting the emergency room (ER) between January 2006 and June 2010. C. difficile was isolated from 36 cases (13.2%), and isolation rates increased year by year, from 10.1% in 2008 to 12.4% in 2009 and 16.7% in 2010. Among 32 toxin-positive isolates, 13 (40.6%) and 19 (59.4%) were associated with CO-HA-CDI and community-acquired CDI (CA-CDI), respectively, if cases with CDI diagnosed within 12 weeks after discharge were considered hospital associated.
The majority (70%) of CO-HA-CDI cases occurred within 2 weeks after hospital discharge, although the interval from discharge to onset of symptoms was as long as 10 weeks. We found via tcdA and tcdB and repetitive sequence PCR analysis, that toxin A-positive/toxin B-positive isolates were the most prevalent in both CO-HA-CDI (53.8%) and CA-CDI (94.7%) cases. Toxin A-negative/toxin B-positive isolates were also still highly associated with HA-CDI cases but were also observed in CA-CDI cases. Younger age, fewer underlying diseases, lack of prior antibiotic use, and genetic diversity of isolates in repetitive sequence PCR were the main characteristics in CA-CDI cases visiting the ER. abstract_id: PUBMED:36535316 Predictors of hospital-onset Clostridioides difficile infection in children with antibiotic-associated diarrhea. Background: This study aimed to determine the predictors of hospital-onset Clostridioides difficile infection (CDI) in pediatric patients with antibiotic-associated diarrhea (AAD) and to develop a predictive scoring system to identify at-risk patients. Methods: This retrospective case-control study included patients aged ≥2-18 years with AAD who underwent C. difficile polymerase chain reaction testing >3 days after hospital admission. Patients with hospital-onset CDI were selected as cases and matched with the control patients without CDI. Univariate and multivariate logistic regressions were used to determine predictors of CDI and to construct a prediction score for the outcomes of interest. Results: Sixty-five patients with hospital-onset CDI and 130 controls were enrolled. Independent predictors for CDI identified and combined into the prediction score included abdominal pain (adjusted odds ratio [95% confidence interval]: 7.940 [3.254-19.374]), hospitalization for ≥14 days before the onset of diarrhea (3.441 [1.034-11.454]), antibiotic use for ≥10 days before the onset of diarrhea (6.775 [1.882-24.388]), receipt of meropenem (4.001 [1.098-14.577]) and clindamycin (14.842 [4.496-49.000]). The area under the receiver operating characteristic curve for this score was 0.883. Conclusions: The presented scoring system can be easily applied by clinicians at the bedside to decide which patients with AAD are likely to have CDI. abstract_id: PUBMED:20869672 Epidemiology of community-onset Clostridium difficile infection in a community in the South of England. Background And Aims: Clostridium difficile infection (CDI) has traditionally been considered a hospital acquired infection but there are a rising number of infections in the community. This study estimates the prevalence of community-onset CDI (CO-CDI), defined as onset of symptoms in a community setting and outside the hospital, and examines the risk factors for CO-CDI in 2-64 year-olds. Methods: A standard questionnaire was used to retrospectively obtain information on the CDI risk factors of 58 cases of CO-CDI diagnosed between 1st April 2008 and 31st March 2009 in a community in the South of England. Each case was reviewed for the presence of 'established' risk factors for CDI, i.e., age ≥65 years, in-patient hospital stay, and recent (within ≤4 weeks) receipt of broad spectrum antibiotics, and other, 'non-established' risk factors for CDI, such as exposure to antibiotics more than 4 weeks preceding symptom onset, out-patient and day-surgery hospital exposure, contact with a hospitalised patient, and travel outside of the UK. 
Results: Fifty-eight cases of CO-CDI were diagnosed among a total community population of 418,000, representing an estimated prevalence of CO-CDI of 1.29 per 10,000. All 58 cases were successfully contacted, representing a 100% response rate. Four cases were excluded from further analysis due to co-infection with Salmonella spp. and Campylobacter spp. Cases were more likely to be female, aged between 31 and 40 years, and present in the spring season (March-May), 2009. 46.3% (25/54) of cases had established risk factors for CDI, 20.4% (11/54) had non-established risk factors, 16.7% (9/54) had no risk factors and in the remaining 16.7% (9/54), available information was insufficient to classify by risk factor category. Conclusions: This study suggests that CDI should be included in the differential diagnosis of community-onset diarrhea in patients with or without established risk factors for CDI. Answer: The necessity of tracking community-onset Clostridium difficile infection (CDI) alongside hospital-associated CDI is supported by several studies that highlight the significance of community-onset cases in the overall epidemiology of CDI. One study found that community-onset CDI accounted for a large proportion of C. difficile infections and had a similar potential for severe disease as hospital-acquired infection (PUBMED:25449250). This suggests that community-onset CDI is not only prevalent but also clinically significant, warranting attention and tracking. Another study compared traditional definitions of healthcare-onset CDI with expanded definitions that included community-onset cases and found that expanded definitions identified more patients with CDI (PUBMED:19239377). Although there was good correlation between traditional and expanded CDI definitions, the study concluded that it is unclear whether expanded surveillance is necessary to identify an abnormal change in CDI rates. However, the fact that cases meeting expanded definitions were less likely to have occurred in patients with certain antibiotic exposures suggests that community-onset CDI may have different risk factors and transmission dynamics, which could be important for prevention strategies. Additional research in south Serbia indicated that a significant percentage of community-onset CDI cases had previous contact with a healthcare facility, and identified risk factors such as antacid usage, chronic kidney disease, and antibiotic use (PUBMED:36455757). This further supports the idea that community-onset CDI is closely linked to healthcare settings and should be monitored. A study in southern Taiwan found that community-onset CDI is associated with substantial morbidity and mortality, and that physicians should consider CDI among patients presenting with community-onset symptoms (PUBMED:28089100). This highlights the importance of awareness and potential tracking of community-onset CDI to improve patient outcomes. Lastly, a mathematical model assessing the role of various transmission routes in maintaining CDI suggested that transmission could be sustained by asymptomatically colonized adults and infants in the community or exposure to animal reservoirs, but not by hospital transmission alone (PUBMED:31063089). This indicates that community-onset CDI plays a critical role in the overall transmission of CDI and underscores the importance of tracking it. 
In conclusion, the evidence suggests that tracking community-onset CDI is necessary due to its prevalence, potential for severe disease, and distinct risk factors and transmission routes compared to hospital-acquired CDI.
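Because the answer above turns on how community-onset cases are separated from hospital-onset and community-onset healthcare-associated ones, a small sketch of that surveillance logic may be useful. It borrows the 48-hour and 12-week thresholds mentioned in the abstracts (PUBMED:22314411, PUBMED:21471341); real surveillance schemes differ in the exact windows they use, so both thresholds are kept as parameters rather than fixed rules.

```python
from datetime import date
from typing import Optional

def classify_cdi_onset(symptom_onset: date,
                       admission: Optional[date],       # current admission date, if hospitalized
                       last_discharge: Optional[date],  # most recent prior discharge, if any
                       hospital_onset_hours: int = 48,
                       lookback_weeks: int = 12) -> str:
    """Classify a CDI episode by onset setting, mirroring the definitions used in the abstracts."""
    if admission is not None and (symptom_onset - admission).days * 24 > hospital_onset_hours:
        return "hospital-onset"
    if last_discharge is not None and 0 <= (symptom_onset - last_discharge).days <= lookback_weeks * 7:
        return "community-onset, healthcare-associated"
    return "community-associated"

print(classify_cdi_onset(date(2010, 1, 20), admission=date(2010, 1, 10), last_discharge=None))
print(classify_cdi_onset(date(2010, 1, 20), admission=None, last_discharge=date(2009, 12, 1)))
print(classify_cdi_onset(date(2010, 1, 20), admission=None, last_discharge=None))
```

With day-level granularity this is only an approximation of the 48-hour rule, but it makes explicit why the same episode can be counted as community-onset under one scheme and healthcare-associated under another, depending on the look-back window.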
Instruction: Are we doing enough to screen for colorectal cancer? Abstracts: abstract_id: PUBMED:32746843 The association between recreational screen time and cancer risk: findings from the UK Biobank, a large prospective cohort study. Background: Evidence is suggestive of sedentary behaviour being associated with an increased risk of endometrial cancer, but the evidence base is too limited to draw any conclusions for other cancers. The aim of the study was to investigate the association between recreational screen time and site-specific cancer risk. Methods: We analysed data from the prospective UK Biobank cohort study. Cox proportional hazards models were used to estimate hazard ratios (HRs) and 95% confidence intervals (CIs) for associations between daily recreational screen time (including television (TV) viewing time, computer use time and total screen time) and site-specific cancer risk. Partition models and isotemporal substitution models investigated the impact of substituting recreational screen time with physical activity. Results: During a mean follow-up of 7.6 years, 28,992 incident cancers were identified among 470,578 adults. A 1-h increase in daily TV viewing time was associated with higher risks of oropharyngeal, oesophago-gastric and colon cancer in fully adjusted models. Participants who reported ≤1, compared with 1- ≤ 3, hours/day of TV viewing time had lower risks of lung, breast, and oesophago-gastric cancer. Findings were inconsistent for daily recreational computer use and daily total recreational screen time. The majority of observed associations were small, and were attenuated after excluding cancers diagnosed within the first two years of follow-up, except for oesophago-gastric and colon cancers (HR 1.05, 95% CI: 1.01, 1.10; and HR 1.04, 95% CI: 1.01, 1.07 per 1-h increase in daily TV viewing time, respectively). However, isotemporal substitution models showed reduced risk of some site-specific (oropharyngeal, lung, breast and colorectal) cancers when replacing 1-h/day of TV viewing with 1-h of moderate-intensity physical activity or walking. Conclusions: Our findings show that daily recreational screen time, particularly TV viewing, was associated with small increased risks of oesophago-gastric and colon cancer. Replacing 1-h/day of TV viewing with 1-h of moderate-intensity physical activity or walking was associated with lower risk of oropharyngeal, lung, breast and colorectal cancers. Further research from other large prospective cohort studies is required, while mechanistic research is warranted to enhance the biological plausibility of these findings. abstract_id: PUBMED:32213071 Occurrence and characteristics of faecal immunochemical screen-detected cancers vs non-screen-detected cancers: Results from a Flemish colorectal cancer screening programme. Background: Colorectal cancer (CRC) and its precursor lesions are detected at an early stage by CRC screening programmes, which reduce CRC-related mortality. An important quality indicator for CRC screening is the occurrence of interval CRC (IC) between screening rounds. Currently there is no guideline regarding acceptable levels of ICs in CRC screening programmes, and ICs reported in prior work vary considerably. Methods: This study describes the occurrence of screen-detected (SD) CRC and non-screen-detected CRC within the population-based CRC screening programme of Flanders, stratified by multiple variables such as sex, age, tumour location and tumour stage between October 2013 and July 2017. 
In addition, faecal immunochemical test (FIT) IC proportions over the sum of SD-CRCs and FIT-ICs are calculated, together with FIT sensitivity and programme sensitivity to display the effectiveness of detecting CRC by the screening programme. Results: Of 1,212,354 FIT participants, 4094 were diagnosed with SD-CRC, whereas 772 participants were diagnosed with CRC between FIT-screening rounds. Significant associations were shown between people not being SD for CRC and women, older individuals, right-sided tumour location and more advanced tumour stage. Furthermore, a clear distinction was shown between the right-sided and the left-sided colorectum concerning all above-mentioned variables and distributions of tumour stages. Conclusion: The Flemish FIT-interval CRC proportion of 15.9% was within the limits of previously published results. In addition, calculations show that the effectiveness of the screening programme is dependent on tumour location, suggesting that future research should report results stratified by location. abstract_id: PUBMED:36189423 Functional high-throughput screen identifies microRNAs that promote butyrate-induced death in colorectal cancer cells. The gut fermentation product butyrate displays anti-cancer properties in the human proximal colon, including the ability to inhibit proliferation and induce apoptosis in colorectal cancer (CRC) cells. A natural histone deacetylase inhibitor (HDACi), butyrate can alter histone acetylation patterns in CRC cells, and thereby regulate global gene expression, including the non-coding transcriptome and microRNAs (miRNAs). Dysregulated miRNA expression affects CRC development and progression; however, the interplay between miRNA activity and butyrate response remains to be elucidated. A high-throughput functional screen was employed to identify miRNAs that can act as enhancers of the anti-cancer properties of butyrate. Validation studies confirmed that several miRNAs, including miR-125b, miR-181a, miR-593, and miR-1227, enhanced apoptosis, decreased proliferation, and promoted cell-cycle arrest in the presence of butyrate. Pathway analyses of predicted miRNA target genes highlighted their likely involvement in critical cancer-related growth pathways, including WNT and PI3K signaling. Several cancer-associated miRNA targets, including TRIM29, COX2, PIK3R3, CCND1, MET, EEF2K, DVL3, and NUP62 were synergistically regulated by the combination of cognate miRNAs and butyrate. Overall, this study has exposed the potential of miRNAs to act as enhancers of the anti-cancer effects of HDAC inhibition and identifies specific miRNAs that might be exploited for therapeutic benefit. abstract_id: PUBMED:35263578 CRISPR-Cas9 screen identifies oxidative phosphorylation as essential for cancer cell survival at low extracellular pH. Unlike most cell types, many cancer cells survive at low extracellular pH (pHe), a chemical signature of tumors. Genes that facilitate survival under acid stress are therefore potential targets for cancer therapies. We performed a genome-wide CRISPR-Cas9 cell viability screen at physiological and acidic conditions to systematically identify gene knockouts associated with pH-related fitness defects in colorectal cancer cells. Knockouts of genes involved in oxidative phosphorylation (NDUFS1) and iron-sulfur cluster biogenesis (IBA57, NFU1) grew well at physiological pHe, but underwent profound cell death under acidic conditions. 
We identified several small-molecule inhibitors of mitochondrial metabolism that can kill cancer cells at low pHe only. Xenografts established from NDUFS1-/- cells grew considerably slower than their wild-type controls, but growth could be stimulated with systemic bicarbonate therapy that lessens the tumoral acid stress. These findings raise the possibility of therapeutically targeting mitochondrial metabolism in combination with acid stress as a cancer treatment option. abstract_id: PUBMED:23132838 Developing a computer touch-screen interactive colorectal screening decision aid for a low-literacy African American population: lessons learned. African Americans have higher colorectal cancer (CRC) mortality than White Americans and yet have lower rates of CRC screening. Increased screening aids in early detection and higher survival rates. Coupled with low literacy rates, the burden of CRC morbidity and mortality is exacerbated in this population, making it important to develop culturally and literacy appropriate aids to help low-literacy African Americans make informed decisions about CRC screening. This article outlines the development of a low-literacy computer touch-screen colonoscopy decision aid using an innovative marketing method called perceptual mapping and message vector modeling. This method was used to mathematically model key messages for the decision aid, which were then used to modify an existing CRC screening tutorial with different messages. The final tutorial was delivered through computer touch-screen technology to increase access and ease of use for participants. Testing showed users were not only more comfortable with the touch-screen technology but were also significantly more willing to have a colonoscopy compared with a "usual care group." Results confirm the importance of including participants in planning and that the use of these innovative mapping and message design methods can lead to significant CRC screening attitude change. abstract_id: PUBMED:14681952 Receiving a screen-detected diagnosis of cancer: the experience of participants in the UK flexible sigmoidoscopy trial. The experience of receiving a screen-detected diagnosis of colorectal cancer was explored using open-ended interviews. Twenty four people who had been diagnosed with cancer at flexible sigmoidoscopy screening were interviewed at their homes over the telephone. Thematic analysis of the transcripts showed that the experience of gaining a diagnosis of cancer through screening was characterised by a lack of prior expectation that cancer would be detected and feelings of shock. This was largely because of the absence of symptoms and current feelings of well-being. Some interviewees expressed feelings of relief and gratitude at having cancer diagnosed at an early enough stage that 'something could be done about it'. The experience of receiving a screen-detected diagnosis could be summarised as one of 'moderated shock' whereby the shock of the unexpected diagnosis was often moderated by the news that the cancer had been caught early. Whilst these screen-detected cancers were diagnosed relatively rapidly, a significant number of interviewees had a period in which they were effectively 'symptomatic' (e.g. knew they had an adenoma but did not know whether it was malignant or benign). However, they did not use this period to prepare themselves for a possible cancer diagnosis. Raising awareness of the adenoma-carcinoma sequence may help reduce the shock of a screen-detected diagnosis. 
However, any interventions aimed at reducing the distress of a screen-detected cancer would need to consider the overall benefit to screening attenders, most of whom will have benign polyps detected. abstract_id: PUBMED:30889686 Screen-printed interdigitated electrodes modified with nanostructured carbon nano-onion films for detecting the cancer biomarker CA19-9. Nanostructured capacitive biosensors, combined with inexpensive fabrication technologies, may provide simple, sensitive devices for detecting clinically relevant cancer biomarkers. Herein, we report a novel platform for detecting the pancreatic cancer biomarker CA19-9 using low-cost screen-printed interdigitated electrodes (SPIDEs). The SPIDEs were modified with carbon nano-onion (CNO) and graphene oxide (GO) films, on which a layer of anti-CA19-9 antibodies was immobilized. The modification with CNOs and GO significantly improved the analytical performance of the biosensor, which displayed superior results to those prepared only with GO. The biosensor exhibited high reproducibility and a relatively low limit of detection of 0.12 U mL-1. Using these devices in combination with information visualization methods, we were able to detect CA19-9 in whole cell lysates of colorectal adenocarcinoma. The fabrication of these low-cost, disposable immunosensors is a successful attempt to explore CNOs in capacitive biosensors, which may be extended for the detection of different cancer biomarkers. abstract_id: PUBMED:38325403 Oncologic outcomes of screen-detected and non-screen-detected T1 colorectal cancers. Background: The incidence of T1 colorectal cancer (CRC) has increased with the implementation of CRC screening programs. It is unknown whether the outcomes and risk models for T1 CRC based on non-screen-detected patients can be extrapolated to screen-detected T1 CRC. This study aimed to compare the stage distribution and oncologic outcomes of T1 CRC patients within and outside the screening program. Methods: Data from T1 CRC patients diagnosed between 2014 and 2017 were collected from 12 hospitals in the Netherlands. The presence of lymph node metastasis (LNM) at diagnosis was compared between screen-detected and non-screen-detected patients using multivariable logistic regression. Cox proportional hazard regression was used to analyze differences in the time to recurrence (TTR), metastasis-free survival (MFS), cancer-specific survival (CSS), and overall survival. Additionally, the performance of conventional risk factors for LNM was evaluated across the groups. Results: 1803 patients were included (1114 [62%] screen-detected), with median follow-up of 51 months (interquartile range 30). The proportion of LNM did not significantly differ between screen- and non-screen-detected patients (12.6% vs. 8.9%; odds ratio 1.41; 95%CI 0.89-2.23); a prediction model for LNM performed equally in both groups. The 3- and 5-year TTR, MFS, and CSS were similar for patients within and outside the screening program. However, overall survival was significantly longer in screen-detected T1 CRC patients (adjusted hazard ratio 0.51; 95%CI 0.38-0.68). Conclusions: Screen-detected and non-screen-detected T1 CRCs have similar stage distributions and oncologic outcomes and can therefore be treated equally. However, screen-detected T1 CRC patients exhibit a lower rate of non-CRC-related mortality, resulting in longer overall survival.
abstract_id: PUBMED:37612734 CRISPR/dCAS9-mediated DNA demethylation screen identifies functional epigenetic determinants of colorectal cancer. Background: Promoter hypermethylation of tumour suppressor genes is frequently observed during the malignant transformation of colorectal cancer (CRC). However, whether this epigenetic mechanism is functional in cancer or is a mere consequence of the carcinogenic process remains to be elucidated. Results: In this work, we performed an integrative multi-omic approach to identify gene candidates with strong correlations between DNA methylation and gene expression in human CRC samples and a set of 8 colon cancer cell lines. As a proof of concept, we combined recent CRISPR-Cas9 epigenome editing tools (dCas9-TET1, dCas9-TET-IM) with a customized arrayed gRNA library to modulate the DNA methylation status of 56 promoters previously linked with strong epigenetic repression in CRC, and we monitored the potential functional consequences of this DNA methylation loss by means of a high-content cell proliferation screen. Overall, the epigenetic modulation of most of these DNA methylated regions had a mild impact on the reactivation of gene expression and on the viability of cancer cells. Interestingly, we found that epigenetic reactivation of RSPO2 in the tumour context was associated with a significant impairment in cell proliferation in p53-/- cancer cell lines, and further validation with human samples demonstrated that the epigenetic silencing of RSPO2 is a mid-late event in the adenoma to carcinoma sequence. Conclusions: These results highlight the potential role of DNA methylation as a driver mechanism of CRC and paves the way for the identification of novel therapeutic windows based on the epigenetic reactivation of certain tumour suppressor genes. abstract_id: PUBMED:27461835 Development of a High-Throughput Gene Expression Screen for Modulators of RAS-MAPK Signaling in a Mutant RAS Cellular Context. The RAS-MAPK pathway controls many cellular programs, including cell proliferation, differentiation, and apoptosis. In colorectal cancers, recurrent mutations in this pathway often lead to increased cell signaling that may contribute to the development of neoplasms, thereby making this pathway attractive for therapeutic intervention. To this end, we developed a 26-member gene signature of RAS-MAPK pathway activity utilizing the Affymetrix QuantiGene Plex 2.0 reagent system and performed both primary and confirmatory gene expression-based high-throughput screens (GE-HTSs) using KRAS mutant colon cancer cells (SW837) and leveraging a highly annotated chemical library. The screen achieved a hit rate of 1.4% and was able to enrich for hit compounds that target RAS-MAPK pathway members such as MEK and EGFR. Sensitivity and selectivity performance measurements were 0.84 and 1.00, respectively, indicating high true-positive and true-negative rates. Active compounds from the primary screen were confirmed in a dose-response GE-HTS assay, a GE-HTS assay using 14 additional cancer cell lines, and an in vitro colony formation assay. Altogether, our data suggest that this GE-HTS assay will be useful for larger unbiased chemical screens to identify novel compounds and mechanisms that may modulate the RAS-MAPK pathway. Answer: The provided abstracts offer insights into various aspects of colorectal cancer (CRC) screening, detection, and outcomes, as well as related research methodologies. 
However, they do not directly address the adequacy of current CRC screening efforts on a population level. The abstract from the UK Biobank study (PUBMED:32746843) does not discuss CRC screening but rather the association between recreational screen time and cancer risk, including colon cancer. The Flemish colorectal cancer screening program (PUBMED:32213071) describes the occurrence of screen-detected CRCs and interval CRCs, indicating that the effectiveness of the screening program is dependent on tumor location, which suggests that there may be room for improvement in detecting right-sided tumors. The high-throughput screen identifying microRNAs that promote butyrate-induced death in CRC cells (PUBMED:36189423) and the CRISPR-Cas9 screen identifying oxidative phosphorylation as essential for cancer cell survival at low extracellular pH (PUBMED:35263578) are focused on understanding the molecular and genetic factors that may contribute to CRC development and survival, rather than screening practices. The development of a computer touch-screen interactive colorectal screening decision aid for a low-literacy African American population (PUBMED:23132838) addresses the need for culturally and literacy-appropriate aids to help individuals make informed decisions about CRC screening, suggesting that there may be gaps in reaching certain populations effectively. The experience of receiving a screen-detected diagnosis of cancer (PUBMED:14681952) and the oncologic outcomes of screen-detected and non-screen-detected T1 colorectal cancers (PUBMED:38325403) provide insights into the patient experience and outcomes of screen-detected CRCs, but do not evaluate the overall sufficiency of screening efforts. The screen-printed interdigitated electrodes modified with nanostructured carbon nano-onion films for detecting the cancer biomarker CA19-9 (PUBMED:30889686) and the CRISPR/dCAS9-mediated DNA demethylation screen identifying functional epigenetic determinants of CRC (PUBMED:37612734) are examples of research into novel detection methods and biomarkers, which could potentially enhance future screening strategies.
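One quantitative point from the Flemish programme abstract (PUBMED:32213071) is worth spelling out: the 15.9% interval-cancer figure follows directly from the reported counts when the proportion is computed as interval cancers over the sum of screen-detected and interval cancers. The short calculation below reproduces that figure; labelling the complement as the share of cancers picked up by the screening pathway is an interpretation of how such programme indicators are usually defined, not a number stated in the abstract.

```python
# Counts reported for the Flemish FIT programme (PUBMED:32213071).
screen_detected = 4094   # CRC diagnosed after a positive FIT (SD-CRC)
interval_cancers = 772   # CRC diagnosed between screening rounds (FIT-IC)

total = screen_detected + interval_cancers
interval_proportion = interval_cancers / total   # FIT-IC / (SD-CRC + FIT-IC)
detected_share = screen_detected / total         # complement: share found through screening

print(f"interval-cancer proportion: {interval_proportion:.1%}")         # ~15.9%, as quoted
print(f"share of cancers detected through screening: {detected_share:.1%}")
```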
Instruction: Should We Routinely Investigate Incidental Head and Neck Findings on 18-Fluorodeoxyglucose Positron Emission Tomography in Patients Being Staged for Non-small Cell Lung Cancer? Abstracts: abstract_id: PUBMED:24959935 Should We Routinely Investigate Incidental Head and Neck Findings on 18-Fluorodeoxyglucose Positron Emission Tomography in Patients Being Staged for Non-small Cell Lung Cancer? A Retrospective Analysis. Background: Positron emission tomography (PET) with 18-fluorodeoxyglucose (FDG) is routinely performed for investigating suspected lung cancers. Despite the known potential for false-positive FDG uptake in the head and neck, most suspicious lesions on FDG-PET are investigated. Methods: Between October 2002 and January 2010, FDG-PET/PET-computed tomography (CT) reports showing significant incidental uptake were retrospectively analyzed using controls to compare time to treatment for lung cancer and the rate of secondary malignancy in those with and without incidental head and neck FDG uptake. Results: In this study, 48 of 1,846 PET/PET-CT scan reports reviewed demonstrated nonthyroidal head and neck abnormalities. A total of 30 patients had proven non-small cell lung cancer (1.6%); 3/30 (10%) went on to have biopsy-proven malignancy. Furthermore, there was a significant increase in time to treatment in those patients with incidental head and neck findings (p = 0.002). Conclusion: These unexpected findings are mostly of no clinical significance and the patient should continue treatment without delay; however, aggressive work-up should be pursued if concerns for head and neck pathology are raised following clinical assessment. abstract_id: PUBMED:22284669 The clinical significance of incidental intra-abdominal findings on positron emission tomography performed to investigate pulmonary nodules. Background: Lung cancer is a common cause of cancer-related death. Staging typically includes positron emission tomography (PET) scanning, in which (18)F-fluoro-2-deoxy-D-glucose (FDG) is taken up by cells proportional to metabolic activity, thus aiding in differentiating benign and malignant pulmonary nodules. Uptake of FDG can also occur in the abdomen. The clinical significance of incidental intraabdominal FDG uptake in the setting of pulmonary nodules is not well established. Our objective was to report on the clinical significance of incidental intra-abdominal FDG activity in the setting of lung cancer. Methods: Fifteen hundred FDG-PET reports for studies performed for lung cancer were retrospectively reviewed for the presence of incidental FDG-positive intraabdominal findings. Patient charts with positive findings were then reviewed and information extracted. Results: Twenty-five patients (25/1500) demonstrated incidental intraabdominal FDG uptake thought to be significant (1.7%), with a mean patient age of 71 years. Colonic uptake was most common (n = 17), with 9 (52%) being investigated further. Of these 9 cases, a diagnosis of malignancy was made in 3 patients, pre-malignant adenomas in 2 patients, a benign lipoma in 1 patient, and no abnormal findings in the remaining patients. Eight patients were not investigated further (3 diagnosed with metastatic lung cancer and 2 were of advanced age) secondary to poor prognosis. Conclusion: Incidental abdominal findings in the colon on FDG-PET scan for work-up of pulmonary nodules need to be further investigated by colonoscopy.
abstract_id: PUBMED:10486538 Current data and perspectives on positron emission tomography oncology-radiotherapy Positron emission tomography (PET) is one of the most promising diagnostic procedures in oncology. Using the glucose analogue fluorodeoxyglucose, PET produces whole-body images and is highly sensitive for tumor diagnosis and staging. We review three particular clinical situations in which PET-FDG has proven not only its diagnostic accuracy, but also its impact on patient management, i.e., the staging of non-small cell lung cancer, diagnosis and staging of colo-rectal cancer and head and neck cancer recurrence. Image registration yields anatomo-metabolic images that could be used as additional information for the determination of radiation fields. Tracer and technical issues remain to be solved before PET can be routinely used for that purpose. abstract_id: PUBMED:37425462 Prognostic role of preoperative fluorine-18 fluorodeoxyglucose-positron emission tomography/computed tomography with an image-based harmonization technique: A multicenter retrospective study. Objectives: Despite the prognostic impacts of preoperative fluorine-18 fluorodeoxyglucose-positron emission tomography/computed tomography examination, fluorine-18 fluorodeoxyglucose-positron emission tomography/computed tomography-based prognosis prediction has not been used clinically because of the disparity in data between institutions. By applying an image-based harmonized approach, we evaluated the prognostic roles of fluorine-18 fluorodeoxyglucose-positron emission tomography/computed tomography parameters in clinical stage I non-small cell lung cancer. Methods: We retrospectively examined 495 patients with clinical stage I non-small cell lung cancer who underwent fluorine-18 fluorodeoxyglucose-positron emission tomography/computed tomography examinations before pulmonary resection between 2013 and 2014 at 4 institutions. Three different harmonization techniques were applied, and an image-based harmonization, which showed the best-fit results, was used in the further analyses to evaluate the prognostic roles of fluorine-18 fluorodeoxyglucose-positron emission tomography/computed tomography parameters. Results: Cutoff values of image-based harmonized fluorine-18 fluorodeoxyglucose-positron emission tomography/computed tomography parameters, maximum standardized uptake, metabolic tumor volume, and total lesion glycolysis were determined using receiver operating characteristic curves that distinguish pathologic high invasiveness of tumors. Among these parameters, only the maximum standardized uptake was an independent prognostic factor in recurrence-free and overall survivals in univariate and multivariate analyses. High image-based maximum standardized uptake value was associated with squamous histology or lung adenocarcinomas with higher pathologic grades. In subgroup analyses defined by ground-glass opacity status and histology or by clinical stages, the prognostic impact of image-based maximum standardized uptake value was always the highest compared with other fluorine-18 fluorodeoxyglucose-positron emission tomography/computed tomography parameters. 
Conclusions: The image-based fluorine-18 fluorodeoxyglucose-positron emission tomography/computed tomography harmonization was the best fit, and the image-based maximum standardized uptake was the most important prognostic marker in all patients and in subgroups defined by ground-glass opacity status and histology in surgically resected clinical stage I non-small cell lung cancers. abstract_id: PUBMED:20441469 Circulating procalcitonin in aseptic carcinoma patients: a specificity study with (18)F-fluorodeoxyglucose positron-emission tomography/computed tomography as benchmark. Background: The goal of the study was to evaluate the specificity of serum procalcitonin (PCT) in a large cohort of patients with solid carcinomas at different stages. Methods: The study involved 447 patients with histologically confirmed carcinoma of the breast, head and neck, ovary, cervix or non-small cell lung carcinoma. Patients with a history of small cell lung cancer, neuroendocrine tumors, medullary thyroid carcinoma, sepsis, systemic inflammatory syndrome, renal failure and/or serum C-reactive protein above 5 ng/mL were excluded. Additionally, those with suspected infectious or inflammatory disease on (18)F-fluorodeoxyglucose positron-emission tomography/computed tomography were also excluded. Serum PCT concentrations were measured using a Kryptor system (BRAHMS) and a clinical cut-off at 0.5 ng/mL was used to define positive results. Results: Serum PCT concentrations did not change at different cancer stages (Kruskal-Wallis, p>0.05). No patient had a PCT concentration >0.5 ng/mL. Conclusions: Our data show that solid carcinomas "per se" did not increase circulating PCT concentrations, regardless of the histotype and stage of the disease. abstract_id: PUBMED:23927734 Fluorodeoxyglucose-positron-emission tomography/computed tomography imaging for adrenal masses in patients with lung cancer: review and diagnostic algorithm. Background And Purpose: Positron-emission tomography/computed tomography (PET/CT) with fluorine-18 fluorodeoxyglucose (FDG) is used as first-line staging for patients with newly diagnosed non-small cell lung cancer (NSCLC). Our purpose was to review the accuracy of FDG-PET/CT to predict adrenal gland metastasis, explain the causes for false-positive PET, and provide a diagnostic algorithm. Patients And Methods: Two patients with incidentally discovered lung masses were found to have hypermetabolic adrenal activity by FDG-PET/CT, with maximal standard uptake values (SUV) of 4.5 and 6.5. A MEDLINE search was performed on the topic of FDG-PET/CT, adrenal gland metastasis, and NSCLC. Literature was reviewed with regard to diagnosis, accuracy, outcomes, and alternative imaging or diagnostic strategies. Results: Both patients underwent transabdominal laparoscopic adrenalectomy and were found to have nodular hyperplasia without evidence of adrenal tumor. A total of seven articles containing 343 patients were identified as having pertinent oncologic information for NSCLC patients with adrenal lesions. Sensitivity and specificity of PET/CT for distant metastasis were 94% and 85%, respectively, but only 13% (44/343) of these patients had histologically confirmed adrenal diagnoses. Based on this, a diagnostic algorithm was created to aid in decision making. Conclusions: Although PET/CT has high sensitivity and specificity for adrenal metastasis in the setting of NSCLC, adrenal biopsy or other secondary imaging should be considered to confirm the finding.
Adrenalectomy in lieu of biopsy may have both diagnostic and therapeutic benefit in cases where the adrenal mass is ≥10 mm with high PET maximum SUV (≥3.1) and SUV ratios (>2.5), where washout CT or chemical shift MRI is positive, or where percutaneous biopsy is deemed too difficult or unsafe. abstract_id: PUBMED:12202868 The impact of fluorodeoxyglucose F 18 positron-emission tomography on the surgical staging of non-small cell lung cancer. Objectives: Staging data on patients with non-small cell lung cancer were prospectively collected to evaluate the accuracy and anatomic information provided by fluorodeoxyglucose F 18 positron-emission tomography and its impact on improving the accuracy of surgical staging. Methods: A total of 142 patients with potentially resectable non-small cell lung cancer were imaged with positron-emission tomography (neck to pelvis). Positron-emission tomographic scans were read prospectively with thoracic computed tomographic comparison. Patients without distant metastases at positron-emission tomography underwent staging with bronchoscopy and mediastinoscopy, with or without mediastinotomy or thoracoscopy. Patients with metastases, pleural implants, or N2 or N3 disease did not undergo primary resection. Results: Positron-emission tomography revealed unsuspected distant metastases in 24 of 142 patients (16.9%) and unsuspected pleural implants in 6 others. Nodal stage was surgically established in 118 cases. Positron-emission tomography showed that 5 patients had nodal disease not accessible by mediastinoscopy. In 35 (24.6%) of these 142 cases, positron-emission tomography directed the evaluation away from routine bronchoscopy and mediastinoscopy staging that would have resulted in inappropriate treatment selection. Positron-emission tomography correctly differentiated resectable stages IA through IIIA (N1) from stages IIIA (N2) through IV in 88.7% of cases. In identifying N2 or N3 disease, positron-emission tomography had an accuracy of 90.7%, a sensitivity of 80.9%, a specificity of 96%, and positive and negative predictive values of 91.9% and 90.1%, respectively. Of the 8 cases in which positron-emission tomography missed N2 disease, 7 had the disease discovered by mediastinoscopy and 1 had it discovered at thoracotomy. Conclusions: The diagnostic accuracy of positron-emission tomography-enhanced clinical staging is high. Positron-emission tomography has previously been used primarily to screen for lymph node spread and distant metastases, but it also provides localizing information that allows directed and more sensitive surgical staging and refinement of patient selection for curative resection. Positron-emission tomography and surgical staging play complementary roles in the journey toward more accurate overall staging. abstract_id: PUBMED:27331212 Incidental finding of silent appendicitis on (18)F-FDG PET/CT in a patient with small cell lung adenocarcinoma. We report the incidental diagnosis of acute asymptomatic appendicitis on a fluorine-18-fluorodeoxyglucose positron emission tomography with computed tomography ((18)F-FDG PET/CT) performed for staging of a non small cell lung carcinoma. The patient was asymptomatic and laboratory tests were normal. The case illustrates: a) the possibility to diagnose appendicitis on (18)F-FDG PET/CT and b) the possibility of silent acute appendicitis, although this is a rare occurrence. 
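The adrenal-imaging abstract above (PUBMED:23927734) quotes concrete cut-offs (mass ≥10 mm, PET maximum SUV ≥3.1, SUV ratio >2.5) feeding its proposed diagnostic algorithm. A compact sketch of how those published thresholds could be expressed as a triage rule is given below; the way the criteria are combined with AND/OR logic is one reading of the abstract's single summary sentence, and the function is an illustration only, not a validated clinical decision tool.

```python
def adrenal_triage(size_mm: float, suv_max: float, suv_ratio: float,
                   washout_or_mri_positive: bool, biopsy_feasible: bool) -> str:
    """Illustrative triage of an FDG-avid adrenal lesion in NSCLC, using the thresholds
    quoted in PUBMED:23927734 (>=10 mm, SUVmax >= 3.1, SUV ratio > 2.5)."""
    suspicious = size_mm >= 10 and suv_max >= 3.1 and suv_ratio > 2.5
    if suspicious and (washout_or_mri_positive or not biopsy_feasible):
        return "consider adrenalectomy (diagnostic and potentially therapeutic)"
    if suspicious:
        return "confirm with percutaneous biopsy or secondary imaging"
    return "metastasis less likely; dedicated adrenal imaging and follow-up"

print(adrenal_triage(size_mm=22, suv_max=6.5, suv_ratio=3.0,
                     washout_or_mri_positive=True, biopsy_feasible=False))
```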
abstract_id: PUBMED:31052088 Investigating the Impact of Positron Emission Tomography-Computed Tomography Versus Computed Tomography Alone for High-risk Volume Selection in Head and Neck and Lung Patients Undergoing Radiotherapy: Interim Findings. Introduction: The aim of this study was to quantify the impact of positron emission tomography-computed tomography (PET-CT) on clinical target volume (CTV) selection in non-small cell lung cancer (NSCLC) and head and neck squamous cell cancer (HNSCC) cancer patients. Methods: Eight radiation oncologists with expertise in either NSCLC or HNSCC prospectively contoured target volumes with and without PET-CT findings. All volumes were contoured manually, and computed tomography (CT)-alone contours were identified as gross tumour volume CT and clinical target volume (CTV) CT, whereas those contoured with the aid of PET-CT were GTV PET and CTV PET. PET-CT contours were used for actual treatment delivery. Test treatment plans were generated based on the CT-alone volumes and applied to the final PET-CT contours. PET-CT had an impact if the test plans failed department quality assurance guidelines. For each patient, the dose to critical structures and any changes in the treatment plan were recorded. Results: Eighty patients (49 HNSCC and 31 NSCLC) were analyzed. PET-CT impacted 42.9% of HNSCC cases and 45.2% of NSCLC cases. On average, PET-CT volumes were significantly larger than CT-alone volumes for HNSCC cases (P < .01) but not for NSCLC cases (P = .29). For organs at risk, no statistically significant differences were noted, with the exception of mean parotid dose for the right and left parotids (P = .0137and P = .0330, respectively). Conclusions: Interim analysis of data found that the use of PET-CT in the radiation therapy planning process impacted CTV selection, resulting in a major change in radiation therapy plans in 43.7% (HNSCC 42.9% and NSCLC 45.2%) of patients. abstract_id: PUBMED:11902566 Clinical role of positron emission tomography in oncology. Positron emission tomography (PET) is now in routine use in oncology, through the success of metabolic imaging, mainly with fluorodeoxyglucose (FDG). Clear benefit is obtained with FDG PET in the assessment of patients with recurrent or residual disease, especially colorectal cancer and lymphoma. Preoperative staging of non-small-cell lung cancer with FDG PET is of proven benefit. Staging and restaging of patients with melanoma of stage II or greater is useful, and FDG PET has also been successfully used to investigate single pulmonary nodules. Tumour grading has been assessed, especially in the brain, but an important and emerging indication is the evaluation of tumour response with PET. Rapid decline of FDG uptake has been observed in responsive cancers. Further advances are being made with other fluorine-18-labelled and generator-based PET tracers, the only ones that can be used in units without dedicated cyclotrons. Answer: Based on the information provided in the abstracts, it is not recommended to routinely investigate incidental head and neck findings on 18-fluorodeoxyglucose positron emission tomography (FDG-PET) in patients being staged for non-small cell lung cancer (NSCLC). The retrospective analysis in the study by PUBMED:24959935 found that most incidental findings in the head and neck region on FDG-PET are not clinically significant and do not warrant a delay in the treatment of the primary lung cancer. 
However, if clinical assessment raises concerns for head and neck pathology, an aggressive work-up should be pursued. The study suggests that while incidental findings can sometimes indicate a secondary malignancy, as was the case in 10% of the patients with biopsy-proven malignancy, the majority of these findings do not have clinical significance. Therefore, the decision to investigate further should be based on a thorough clinical assessment rather than an automatic response to the incidental finding on FDG-PET. This approach is consistent with the general principle of medical imaging interpretation, where incidental findings should be evaluated in the context of the patient's overall clinical picture, including symptoms, physical examination, and other relevant diagnostic information. It is important to balance the potential benefits of early detection of a secondary malignancy against the risks and costs of additional investigations, as well as the potential for treatment delays for the primary cancer (PUBMED:24959935).
Instruction: Is the Montreal Cognitive Assessment superior to the Mini-Mental State Examination to detect poststroke cognitive impairment? Abstracts: abstract_id: PUBMED:30083878 Conversions of cognitive screenings: Mini-Mental State Examination vs. Montreal Cognitive Assessment vs. DemTect. Background: For a comprehensive specification and quantification of neuropsychological deficits, extensive neuropsychological assessment is needed. Due to its time intensiveness, this cannot be accomplished in every clinical setting and is not always necessary. Therefore, screening instruments provide a first step. Because the selection differs between and sometimes even within clinics, a comparison of results across different screening procedures would be helpful. The current study aimed to achieve this for the German-speaking area, i.e., to enable conversions between sum scores of the Mini-Mental State Examination (MMSE), the Montreal Cognitive Assessment (MoCA) and the Dementia Detection Test (DemTect). Method: In the Department of Neurology at the University Hospital of Cologne, 8240 patients with different neurological diseases were examined between 2008 and 2017. Conversion scores between the MMSE, MoCA and DemTect were computed using the equipercentile equating method. Results: The calculated bidirectional conversion tables enable a quick and easy comparison between the three most commonly used screening instruments. They are also similar to those from previous studies in English-speaking countries. Conclusion: The results enable an enhanced longitudinal assessment of cognitive functions in different clinical settings, provide comparability, and offer more flexibility for determination of patient status. An extension of the current study might be the transfer of the method presented to other cognitive or affective domains, such as memory and depression. abstract_id: PUBMED:35762453 Can the Montreal Cognitive Assessment and Mini-Mental State Examination detect cognitive decline in elderly patients with essential tremor? Objective Given the association between essential tremor (ET) and higher rates of cognitive decline, assessing this decline is an important element of research and clinical care. The Mini-Mental State Examination (MMSE) and the Montreal Cognitive Assessment (MoCA) are two broad, brief measures that are widely used to monitor cognitive impairment in various neurological disorders. We sought to determine the relative ability of the MoCA and the MMSE to detect cognitive decline in elderly patients with ET. Methods We administered a neuropsychological battery to an ET cohort every 18 months over 4.5 years. We defined the gold standard for change in cognition as the change in neuropsychological test scores over consecutive evaluations and applied the Reliable Change Method to detect meaningful increases/decreases in test scores. We performed receiver operating characteristic (ROC) analysis to quantify the area under the curve (AUC) and compare the ability of the MoCA and the MMSE to detect cognitive decline. Results The AUCs for the MoCA and the MMSE did not differ significantly at any interval or when all intervals were pooled for analysis. Across all intervals, the ability of the MoCA and the MMSE to detect cognitive decline was consistently fair or poor. Conclusions We found that the ability of the MoCA and the MMSE to detect cognitive decline in ET patients over 18-month intervals is inadequate.
Unchanged scores on the MoCA and the MMSE in ET over time should be approached with caution. We propose that these screening tools should be supplemented with additional neuropsychological tests. abstract_id: PUBMED:27992895 Is the Montreal Cognitive Assessment (MoCA) test better suited than the Mini-Mental State Examination (MMSE) in mild cognitive impairment (MCI) detection among people aged over 60? Meta-analysis. Objectives: Screening tests play a crucial role in dementia diagnostics, thus they should be very sensitive for mild cognitive impairment (MCI) assessment. Nowadays, the Mini-Mental State Examination (MMSE) is the most commonly used scale in cognitive function evaluation, although it is claimed to be imprecise for MCI detection. The Montreal Cognitive Assessment (MoCA) was created as an alternative to the MMSE. Aim: to assess the credibility of the MoCA vs. the MMSE in detecting MCI, taking into consideration sensitivity and specificity by cut-off points. Methods: A systematic literature search was carried out by the authors using the EBSCOhost Web, Wiley Online Library, Springer Link, Science Direct and Medline databases. The following medical subject headings were used in the search: mild cognitive impairment, mini-mental state examination, Montreal cognitive assessment, diagnostics value. Papers which met the inclusion and exclusion criteria were included in this review. In the end, 20 studies qualified for the evaluation of the MoCA and 13 for the MMSE. Research credibility was established by computing a weighted arithmetic mean, with each study's weight defined as the population in which the sensitivity and specificity at the cut-off point were obtained. The cut-offs are shown as ROC curves, and diagnostic accuracy for the MoCA and MMSE was calculated as the area under the curve (AUC). Results: ROC curve analysis for the MoCA demonstrated that MCI is best detected with a cut-off point of 24/25 (n = 9350; sensitivity 80.48%, specificity 81.19%). The AUC was 0.846 (95% CI 0.823-0.868). For the MMSE, the most informative cut-off was 27/28 (n = 882; sensitivity 66.34%, specificity 72.94%). The AUC was 0.736 (95% CI 0.718-0.767). Conclusions: The MoCA better meets the criteria for a screening test for the detection of MCI among patients over 60 years of age than the MMSE. abstract_id: PUBMED:31595118 Poststroke Cognitive Decline: A Longitudinal Study from a Tertiary Care Center. Objectives Poststroke cognitive decline (PSCD) is a serious disabling consequence of stroke. The purpose of this study was to determine the prevalence of PSCD and its sociodemographic and clinical risk factors. Materials and Methods This was a prospective, hospital-based study of 200 stroke patients from a stroke registry, conducted between October 2015 and April 2017. Detailed clinical evaluation was done. Mini-Mental State Examination (MMSE) and Montreal Cognitive Assessment (MoCA) scores were used to determine PSCD after 3 and 6 months, as per the Diagnostic and Statistical Manual of Mental Disorders V. The chi-squared test was used to find the association between two variables. The Wilcoxon signed-rank test was used to compare the difference in cognitive impairment between the two follow-ups at 3 and 6 months, respectively. A p-value < 0.05 was considered statistically significant. Results The prevalence of PSCD measured by the MoCA scale at 3 and 6 months was 67% and 31.6%, respectively.
By the MMSE scale, cognitive decline at 3 months was found in 87 patients (46.3%), which reduced to 22 patients (17.1%) at 6 months. The association between the MMSE scale and type of stroke was significant at 3 months. Conclusion: One-third of the stroke patients developed PSCD within 3 months of onset of stroke, with different levels of severity. The major predictors of new-onset poststroke cognitive impairment were diabetes and hypertension. The prevalence of PSCD reduced significantly at 6 months of follow-up after stroke. abstract_id: PUBMED:34786231 Comparison of Postoperative Cognitive Decline Using the Mini-Mental State Examination and Montreal Cognitive Assessment After Minor Elective Surgery in Elderly. Introduction and aim Postoperative cognitive dysfunction (POCD) is an important complication associated with increased morbidity, mortality, and reduced quality of life. Generally, studies have focused on major surgery, so there is little evidence on the incidence of cognitive dysfunction in minor surgery. We aimed to compare the Mini-Mental State Examination (MMSE) and Montreal Cognitive Assessment (MoCA) in terms of detecting cognitive decline in elderly patients after an elective inguinal herniorrhaphy procedure with general or spinal anesthesia. Material and methods This observational study was conducted from June 2014 to March 2015 at Ankara Numune Education and Research Hospital. The type of anesthesia was determined by the preference of the anesthesiologist, who was blinded to the study. Patients were grouped according to the anesthesia received: general or spinal anesthesia. The MMSE and MoCA were evaluated presurgery and 24 hours after the operation. Results The postoperative (24th hour) MMSE scores of patients (26.23±2.77) were significantly lower than the preoperative scores (27.17±1.93) in only the general anesthesia group (p = 0.003). The postoperative (24th hour) MoCA scores (22.87±3.88 for general and 23.13±4.08 for spinal anesthesia) were significantly lower than the preoperative scores (24.32±3.19 for general and 24.35±2.84 for spinal anesthesia) in both the general and spinal anesthesia groups (p = 0.000 and 0.019, respectively). The incidence of postoperative cognitive dysfunction was 32.9% using the MoCA and 15.2% using the MMSE (p = 0.018). Conclusion Early POCD is an important problem after elective minor surgeries, even with spinal anesthesia, in elderly patients. The MoCA is an alternative tool that can be more sensitive than the MMSE for identifying cognitive decline in elderly patients undergoing minor surgeries under both general and spinal anesthesia. abstract_id: PUBMED:32848712 Evaluation of the Mini-Mental State Examination and the Montreal Cognitive Assessment for Predicting Post-stroke Cognitive Impairment During the Acute Phase in Chinese Minor Stroke Patients. Objective: To assess the value of the Mini-Mental State Examination (MMSE) and the Montreal Cognitive Assessment (MoCA) during the acute phase in predicting post-stroke cognitive impairment (PSCI) at 3-6 months. Methods: We prospectively recruited 229 patients who had suffered their first-ever ischemic stroke. PSCI was determined in 104 of these patients by a comprehensive neuropsychological battery performed at 3-6 months. Receiver operating characteristic (ROC) curve analysis was then performed to compare the discriminatory ability of the MMSE and MoCA. Also, we applied a decision tree generated by the classification and regression tree methodology.
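The methods just described for PUBMED:32848712 — comparing the MMSE and MoCA by the area under their ROC curves and reading off optimal cut-off points — are straightforward to reproduce once each patient has a baseline score and a later PSCI label. A minimal sketch using scikit-learn follows; the synthetic data, variable names, and the Youden-index rule for choosing the cut-off are assumptions about implementation details the abstract does not spell out.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Illustrative data only: baseline screening scores and PSCI status at 3-6 months (1 = impaired).
rng = np.random.default_rng(0)
psci = rng.integers(0, 2, size=200)
mmse = np.clip(np.round(rng.normal(27 - 2 * psci, 2)), 0, 30)
moca = np.clip(np.round(rng.normal(24 - 3 * psci, 3)), 0, 30)

for name, score in [("MMSE", mmse), ("MoCA", moca)]:
    # Lower screening scores indicate impairment, so negate them before the ROC analysis.
    auc = roc_auc_score(psci, -score)
    fpr, tpr, thresholds = roc_curve(psci, -score)
    best = int(np.argmax(tpr - fpr))          # Youden index J = sensitivity + specificity - 1
    cutoff = -thresholds[best]                # undo the negation to report a score cut-off
    print(f"{name}: AUC={auc:.2f}, cut-off <= {cutoff:.0f}, "
          f"sensitivity={tpr[best]:.2f}, specificity={1 - fpr[best]:.2f}")
```

With real study data this loop yields exactly the kind of paired AUC and cut-off figures (e.g., MMSE ≤27, MoCA ≤21) reported in the results below.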
Results: In total, 66 patients had PSCI when evaluated 3-6 months after the onset of minor stroke. Logistic regression analysis revealed that education, body mass index (BMI), and baseline MoCA scores were independently associated with PSCI. ROC curve analysis showed that the ability to predict PSCI was similar when compared between baseline MoCA scores [area under curve (AUC), 0.821; 95% confidence interval (CI), 0.743-0.898] and baseline MMSE scores (AUC, 0.809; 95% CI, 0.725-0.892, P = 0.75). Both MMSE and MoCA exhibited similar predictive values at their optimal cut-off points (MMSE ≤27; sensitivity, 0.682; specificity, 0.816; MoCA ≤21; sensitivity, 0.636; specificity, 0.895). Classification and regression tree-derived analysis yielded an AUC of 0.823 (sensitivity, 0.803; specificity, 0.842). Conclusion: When applied within 2 weeks of stroke, the MMSE and MoCA are both useful and have similar predictive value for PSCI 3-6 months after the onset of minor stroke. abstract_id: PUBMED:34668124 Screening for cognitive sequelae of SARS-CoV-2 infection: a comparison between the Mini-Mental State Examination (MMSE) and the Montreal Cognitive Assessment (MoCA). Background: Due to SARS-CoV-2-related encephalopathic features, COVID-19 patients may show cognitive sequelae that negatively affect functional outcomes. However, although cognitive screening has been recommended in recovered individuals, little is known about which instruments are suitable to this scope by also accounting for clinical status. This study thus aimed at comparatively assessing the Mini-Mental State Examination (MMSE) and the Montreal Cognitive Assessment (MoCA) in detecting cognitive deficits in post-COVID-19 patients premorbidly/contextually being or not at risk for cognitive deficits (RCD + ; RCD-). Methods: Data from N = 100 COVID-19-recovered individuals having been administered both the MMSE and the MoCA were retrospectively analyzed separately for each group. RCD ± classification was performed by taking into consideration both previous and disease-related conditions. Equivalent scores (ESs) were adopted to examine classification performances of the two screeners. Results: The two groups were comparable as for most background and cognitive measures. MMSE or MoCA adjusted scores were mostly unrelated to disease-related features. The two screeners yielded similar estimates of below-cut-off performances-RCD + : MMSE: 20%, MoCA: 23.6%; RCD-: MMSE: 2.2%, MoCA: 4.4%. However, agreement rates dropped when also addressing borderline, "low-end" normal, and normal ability categories-with the MoCA attributing lower levels than the MMSE (RCD + : Cohen's k = .47; RCD-: Cohen's k = .17). Discussion: Although both the MMSE and the MoCA proved to be equally able to detect severe cognitive sequelae of SARS-CoV-2 infection in both RCD + and RCD- patients, the MoCA appeared to be able to reveal sub-clinical defects and more sharply discriminate between different levels of ability. abstract_id: PUBMED:32087430 Concordance of Mini-Mental State Examination, Montreal Cognitive Assessment and Parkinson Neuropsychometric Dementia Assessment in the classification of cognitive performance in Parkinson's disease. Background: Cognitive impairment (CI) is frequently observed in Parkinson's disease (PD) and negatively influences the patient's and carer's quality of life. As a first step, assessment of CI is often accomplished by using screening instruments (level I diagnosis). 
Three commonly used instruments are the Mini-Mental State Examination (MMSE), Montreal Cognitive Assessment (MoCA), and Parkinson Neuropsychometric Dementia Assessment (PANDA). Because different preferences regarding test selection exist between clinics, this study aims to provide evidence about the concordance of these tests. It also converts total test scores of the three instruments to assist clinical practice. Methods: Between January and December 2018, 96 patients with idiopathic PD were examined at the University Hospital of Cologne, Germany. Comparability of MMSE, MoCA, and PANDA scores was investigated by calculating correlations, classification agreements, and percentile ranks. Additionally, we converted test scores among the three screening instruments by implementing the equipercentile equating method and log-linear smoothing. Results: The MMSE classified 26%, the PANDA 32.3% and the MoCA 54.2% of PD patients as having CI. The screening instruments' concordance in classifying cognition into normal cognition versus CI was 75% (AC1 = 0.62) for MMSE and PANDA, 63.5% (AC1 = 0.28) for MoCA and PANDA, and 57.3% (AC1 = 0.24) for MMSE and MoCA. The provided conversion table enables a quick and easy transformation of the three screening instruments within PD diagnostics. Conclusion: These results contribute to a better understanding of the screenings' utility and concordance in a population of PD patients. Additionally, communication between clinics may be enhanced. abstract_id: PUBMED:31658065 Cognitive Screening Tests in Huntington Gene Mutation Carriers: Examining the Validity of the Mini-Mental State Examination and the Montreal Cognitive Assessment. Background: Due to high prevalence of cognitive impairment in Huntington's disease (HD) gene mutation carriers, even before onset of motor symptoms, cognitive screening is important for the optimal management of patients. The Mini-Mental State Examination (MMSE) and Montreal Cognitive Assessment (MoCA) are widely used, but the validity for HD has only been evaluated in few studies with important limitations. Objective: To evaluate the discriminative validity of the MMSE and the MoCA for the assessment of cognitive dysfunction in HD gene mutation carriers, independently of motor manifestation and furthermore, to report estimated probabilities for cognitive impairment with different score ranges on the MMSE and the MoCA. Methods: 106 pre-motor-manifest and motor-manifest HD gene mutation carriers and 40 non-HD gene mutation carriers were administered the MMSE, the MoCA, and an extensive neuropsychological battery with operationalized criteria for cognitive impairment. The same physician and the same neuropsychologist performed all examinations; blinded to one another. Results: The area under the receiver operating characteristic (ROC) curve was 0.70 for the MMSE and 0.82 for the MoCA. The latter correctly diagnosed 82% of the cognitively impaired and not-impaired HD gene mutation carriers and non-HD gene mutation carriers, whereas the MMSE only diagnosed 73% correctly. Conclusions: The MMSE and the MoCA can both be used as cognitive screening tests in HD gene mutation carriers, but both have important limitations. Our results indicate that the MoCA is a better cognitive screening test for HD than the MMSE. In addition, our study provides estimated probabilities for cognitive impairment with different score ranges, which may be used as clinical guidelines in the interpretation of results from the two tests. 
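Both Cologne studies above (PUBMED:30083878 and PUBMED:32087430) convert total scores between screening instruments with equipercentile equating: a score on one test is mapped to the score on the other test that occupies the same percentile rank in the same population. A bare-bones version of the idea is sketched below; the published work additionally applies log-linear smoothing, which is omitted here, and the example score vectors are synthetic.

```python
import numpy as np

def percentile_rank(scores, x):
    """Mid-distribution percentile rank of score x within the observed sample."""
    scores = np.asarray(scores, dtype=float)
    return 100.0 * (np.mean(scores < x) + 0.5 * np.mean(scores == x))

def equipercentile_convert(x, from_scores, to_scores):
    """Map score x on test A to the test-B score sitting at the same percentile rank."""
    pr = percentile_rank(from_scores, x)
    return float(np.percentile(np.asarray(to_scores, dtype=float), pr))

# Synthetic example: MMSE and MoCA totals sampled from the same (hypothetical) patients.
rng = np.random.default_rng(1)
ability = rng.normal(0.0, 1.0, 500)
mmse = np.clip(np.round(27 + 2 * ability), 0, 30)
moca = np.clip(np.round(23 + 4 * ability), 0, 30)

for s in (30, 27, 24, 20):
    print(f"MMSE {s} ≈ MoCA {equipercentile_convert(s, mmse, moca):.0f}")
```

Conversion tables like those reported in the two abstracts are obtained by running such a mapping over every attainable total score, in both directions.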
abstract_id: PUBMED:32063455 Early Cognitive Assessment Following Acute Stroke: Feasibility and Comparison between Mini-Mental State Examination and Montreal Cognitive Assessment. Objectives: Cognitive assessment is not performed routinely in the acute stroke setting. We investigated factors associated with cognitive impairment and the differences between the Mini-Mental State Examination (MMSE) and Montreal Cognitive Assessment (MoCA) scores in patients with acute stroke. Methods: In this prospective study, 881 consecutive patients (median age, 73 years) with acute stroke were enrolled. Clinical characteristics, such as education, vascular risk factors, premorbid cognitive status using the Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE), and stroke severity, were assessed. Cognitive performance was measured using MMSE and MoCA within 5 days of stroke onset. Results: Both MMSE and MoCA were feasible in 621 (70.5%) patients. Factors independently associated with nonfeasibility were age (odds ratio [OR]: 1.05; 95% confidence interval [CI]: 1.02-1.08), IQCODE score (OR: 1.02; 95%CI: 1.00-1.04), and National Institutes of Health Stroke Scale (NIHSS) score (OR, 1.16; 95%CI, 1.12-1.20). Impaired MoCA (with a cut-off <26/30) performance was observed in 544 of 621 (87.6%) patients. Factors independently associated with cognitive impairment were age (OR: 1.06; 95%CI: 1.03-1.10) and NIHSS score (OR: 1.34; 95%CI: 1.14-1.57). Eighty percent of patients with normal MMSE scores had an impaired MoCA score (MMSE-MoCA mismatch). The differences were highest in the visuospatial (94.8% versus 65.3%; P < .0001), recall (76.6% versus 35.6%; P < .0001), abstraction (82.5% versus 49.8%; P < .0001), and language (72.3% versus 65.9%; P < .0001) domains between the normal MMSE and MoCA group and MMSE-MoCA mismatch group. Conclusions: The MoCA can be particularly useful in patients with cognitive deficits undetectable on the MMSE in the acute stroke phase. Answer: The Montreal Cognitive Assessment (MoCA) appears to be superior to the Mini-Mental State Examination (MMSE) in detecting poststroke cognitive impairment (PSCI). According to a study by PUBMED:31595118, the prevalence of PSCI measured by the MoCA scale at 3 and 6 months was 67% and 31.6%, respectively, compared to 46.3% and 17.1% by the MMSE scale at the same time points. This suggests that the MoCA may be more sensitive in detecting cognitive decline poststroke. Another study (PUBMED:32848712) assessed the value of the MMSE and MoCA during the acute phase in predicting PSCI at 3-6 months. The results indicated that both the MMSE and MoCA had similar predictive value for PSCI when applied within 2 weeks of stroke, with the MoCA showing a slightly higher area under the curve (AUC) in receiver operating characteristic (ROC) curve analysis, although the difference was not statistically significant. Furthermore, a study by PUBMED:32063455 found that 80% of patients with normal MMSE scores had an impaired MoCA score, indicating that the MoCA may detect cognitive deficits that are undetectable on the MMSE in the acute stroke phase. This suggests that the MoCA can be particularly useful in patients with cognitive deficits that are not apparent on the MMSE. In summary, the evidence suggests that the MoCA is a more sensitive tool than the MMSE for detecting cognitive impairment following a stroke, although both tools have been used and show some predictive value (PUBMED:31595118, PUBMED:32848712, PUBMED:32063455).
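To make the comparison in the answer above more concrete, the sketch below converts the reported cut-off performances (MMSE ≤27: sensitivity 0.682, specificity 0.816; MoCA ≤21: sensitivity 0.636, specificity 0.895) into positive and negative predictive values with Bayes' rule. The prevalence values are assumptions chosen only to span a plausible range for PSCI after minor stroke; they are not taken from the cited studies.

```python
# Minimal sketch, assuming the sensitivity/specificity figures quoted above and
# treating PSCI prevalence as a free parameter.

def predictive_values(sensitivity, specificity, prevalence):
    """Return (positive predictive value, negative predictive value) via Bayes' rule."""
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    fn = (1 - sensitivity) * prevalence
    tn = specificity * (1 - prevalence)
    return tp / (tp + fp), tn / (tn + fn)

for name, sens, spec in [("MMSE <=27", 0.682, 0.816), ("MoCA <=21", 0.636, 0.895)]:
    for prev in (0.3, 0.5):  # assumed PSCI prevalences, for illustration only
        ppv, npv = predictive_values(sens, spec, prev)
        print(f"{name}, prevalence {prev:.0%}: PPV = {ppv:.2f}, NPV = {npv:.2f}")
```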
Instruction: Does the declining prevalence of Helicobacter pylori unmask patients with idiopathic peptic ulcer disease? Abstracts: abstract_id: PUBMED:21560446 Reflections on the marked changes of the prevalence of Helicobacter pylori infection and peptic ulcer disease in the last two decades A short overview is presented of some persistent questions about the relationship between Helicobacter pylori infection and peptic ulcer disease, and about the increasing number of patients with idiopathic peptic ulcer disease. The decrease in the prevalence of peptic ulcer disease in countries with a marked decrease in Helicobacter pylori prevalence in the general population, including the Czech Republic, was discussed. Several possible causes of the decrease in Helicobacter pylori prevalence were noted (improvement of adverse socioeconomic conditions such as low income and poor education, and of poor sanitation, overcrowding and bad hygienic conditions, especially during childhood; the marked increase in the Helicobacter pylori eradication rate, also associated with extended therapy with macrolides or other antibiotics (amoxicillin, etc.) used for several conditions, often at a very young age and sometimes by chance together with proton pump inhibitors; the natural loss of the older generation, which had a higher HP prevalence than the general population; the change of the stomach microbiome; etc.), but with the implication that the real cause of these profound changes remains poorly identified. abstract_id: PUBMED:15256980 Does the declining prevalence of Helicobacter pylori unmask patients with idiopathic peptic ulcer disease? Trends over an 8 year period. Objectives: Recent studies have suggested that the prevalence of Helicobacter pylori infection in patients with ulcer disease who were not using non-steroidal anti-inflammatory drugs (NSAIDs) has been overestimated. The decreasing prevalence of H. pylori could lead to a relative increase in the number of patients with this idiopathic peptic ulcer disease (IPUD). This study aimed to investigate the prevalence of IPUD and any possible trends. Design And Methods: The reports of all upper gastro-intestinal endoscopies performed in a Dutch regional hospital over the period 1991 to 1998 were reviewed. If a gastric and/or duodenal ulcer had been diagnosed, data concerning possible H. pylori infection (culture, histology, rapid in-house urease test) were retrieved. If H. pylori tests were negative, hospital files were examined for possible use of NSAIDs or other rare causes of ulcer disease. When these were not found, stored biopsy specimens were tested for H. heilmanii by using the polymerase chain reaction technique. Results: Ulcer disease was diagnosed in 405 patients who had undergone endoscopy (159 with gastric ulcer, 235 with duodenal ulcer, and 11 with both gastric and duodenal ulcer). H. pylori infection was found in 349 of these patients (86.2%). Thirty-three of the 56 H. pylori negative patients used NSAIDs and three patients had Crohn's disease, leaving 20 patients with IPUD (4.9%, 12 gastric ulcer and eight duodenal ulcer). Time trends over the study period showed a decrease of H. pylori associated peptic ulcer disease (P <0.002) and an increase of NSAID associated peptic ulcer disease (P <0.0005). The prevalence of IPUD remained stable (P=0.978). Conclusions: The prevalence of patients with H.
pylori negative ulcer disease significantly decreased in our study population due to an increase in the number of patients with NSAID associated peptic ulcer disease. IPUD was rare and its prevalence did not increase over a period of 8 years. abstract_id: PUBMED:37223285 Clinical, endoscopic, and demographic characteristics of idiopathic duodenal ulcers compared with Helicobacter pylori-positive ulcers. Background: Helicobacter pylori infection is the most common cause of peptic ulcer disease. However, the prevalence rates of non-Helicobacter pylori idiopathic peptic ulcers have increased over the past few years. This study aims to compare the characteristics of Helicobacter pylori-positive with idiopathic duodenal ulcers. Methods: A cross-sectional cohort study was conducted on 950 patients; patients were excluded from the analysis because of the concomitant presence of gastric ulcer, malignancy, Zollinger-Ellison syndrome, Crohn's disease, esophageal varices, a history of anti-Helicobacter pylori therapy, or a history of NSAID or aspirin use. Eventually, 647 subjects were enrolled in the analysis. These subjects were divided into two groups: (I) Helicobacter pylori-positive ulcer group and (II) Helicobacter pylori-negative and non-NSAID (idiopathic) ulcer group. Results: The findings showed that 417 patients (64.5%) had duodenal ulcers induced by Helicobacter pylori, and 111 patients (17.1%) had Helicobacter pylori-negative and non-NSAID ulcers. The mean ages of patients in Helicobacter pylori-positive and idiopathic ulcer groups were 39±15 and 42±17, respectively. Of these, 33 patients (29.7%) with idiopathic ulcers and 56 patients (25.1%) with Helicobacter pylori-positive ulcers had upper gastrointestinal bleeding. Also, 22 patients (21%) with idiopathic ulcers and 31 patients (16.5%) with Helicobacter pylori-positive ulcers had multiple duodenal ulcers. Conclusion: The present study demonstrated that the idiopathic ulcers included 17.1% of duodenal ulcers. Also, it was concluded that patients with idiopathic ulcers were predominantly male and older than the other group. In addition, patients in this group had more ulcers. abstract_id: PUBMED:16336851 Examination of patients with dyspepsia in areas with a low prevalence of Helicobacter pylori infection Examination of patients with dyspepsia is quantitatively a major task in gastroenterology. Several strategies have been used. In recent years the "test and treat strategy" has gained wide acceptance, because it is easy and without discomfort to the patient. The primary step in this strategy is a 13C-urea breath test for Helicobacter pylori. If the test is positive, the Helicobacter pylori must be eradicated. Only patients with alarm symptoms or refractory symptoms are offered gastroscopy. This strategy has been found useful in clinical trials performed in areas with a high prevalence of Helicobacter pylori infection. To examine whether these results are valid in clinical practice in a low-prevalence country, we performed a one-year retrospective study of our modified strategy in 666 patients, in which gastroscopy is offered to patients who test negative--an "open access test and scope strategy". In all, 576 patients were referred and examined according to the strategy; 90 patients, of whom 43 had alarm symptoms, were referred erroneously. We performed 322 breath tests, of which 66 were positive--49 in patients without alarm symptoms.
Of the latter, 39 were improved by Helicobacter pylori eradication therapy. These were the patients who really benefitted from the strategy. Of the 230 patients with a negative breath test, 197 agreed to have a gastroscopy done, and 31 cases of oesophagitis and 3 idiopathic ulcers were found. It is concluded that the strategy is advantageous but complicated in an open-access setting. abstract_id: PUBMED:1485982 Prevalence of Helicobacter pylori in gastric pathology in Aragon The aim of this study was to evaluate the prevalence of Helicobacter pylori in 400 patients referred for upper digestive tract endoscopy. No epidemiological study of this disease had yet been carried out in our area. Helicobacter pylori was observed in 281 of the 400 patients studied (70%). No significant differences by sex were found in the subjects analyzed. There was a rise in the percentage of positivities as age increased. Helicobacter pylori was positive in 74 of the 88 patients with chronic superficial gastritis (84.1%), in 53 of the 65 with chronic atrophic gastritis (81.5%), in 16 of the 25 with chronic atrophic gastritis and intestinal metaplasia (64%), in 49 of the 63 with gastric ulcer (77.8%), in 73 of the 85 with duodenal ulcer (85.9%), in 9 of the 24 patients with gastric carcinoma (37.5%), in 5 of the 19 with stump gastritis (26.3%), whereas only a few Helicobacter pylori were found in 2 of the 31 histologically normal subjects (6.5%). These findings support the view that Helicobacter pylori may be etiologically related to chronic gastritis and peptic ulceration, even though their precise role still remains to be determined. abstract_id: PUBMED:20483098 Helicobacter pylori resistance Helicobacter pylori (H. pylori) is recognized as the causative agent of several gastroduodenal disorders. The discovery of H. pylori revolutionized the treatment for ulcus pepticum. Antibiotics combined with proton pump inhibitors or bismuth have been effective in the treatment of H. pylori infections, but there is an emerging problem with H. pylori resistance against the most frequently used antibiotics, which substantially impairs the treatment of H. pylori-associated disorders. In this article the focus is on the prevalence of H. pylori resistance, its clinical implications and the molecular mechanisms behind resistance development. abstract_id: PUBMED:34370686 Helicobacter pylori - 2021. Summary. Helicobacter pylori remains the most widespread infection in the world: its prevalence is 70-80% in developing countries and shows a decreasing trend in developed countries. Among blood donors in Southern Hungary, the prevalence has decreased to 32%. Migration carries an increased risk of infection for the host country. In histological diagnosis, immunohistochemistry is more accurate than conventional Giemsa staining. Compared with that of conventional endoscopy, the sensitivity of artificial intelligence is 87% and its accuracy 86%. Next-generation sequencing makes it possible to determine susceptibility to several antibiotics from a single biopsy sample. The European registry on the management of Helicobacter pylori showed that, between 2013 and 2018, bismuth-based quadruple or 14-day concomitant quadruple therapies were more effective than conventional triple therapy, but their uptake is a very slow process, with considerable geographical differences. The novel coronavirus (SARS-CoV-2) can also occur in Helicobacter pylori infection, with the two mutually reinforcing each other's pathological effects. Diagnostic methods are limited in this setting. Taking proton pump inhibitors increases the risk of COVID-19 infection and of a severe outcome.
In patients with previously known peptic ulcer or bleeding, or before anticoagulant treatment, eradication is warranted after the viral infection has resolved. Twenty medium-quality meta-analyses have addressed the effect of probiotics on eradication, so the positions taken in the consensus statements are not unequivocal either; this needs to be clarified in the future. Orv Hetil. 2021; 162(32): 1275-1282. Summary. Helicobacter pylori is still the most widespread infection in the world: its overall prevalence is 70-80% in developing regions, but fortunately it is decreasing in the Western world. The prevalence in blood donors from South-Eastern Hungary decreased from 63% in the 1990's to 32% in 2019. Migration constitutes an increased risk of infection for the destination countries. Immunohistochemistry has proven to be more accurate in histological diagnosis than the conventional Giemsa stain. The sensitivity and accuracy of artificial intelligence as compared to videoendoscopy were 87% and 86%, respectively. The European Register on the management of Helicobacter pylori infection revealed that concomitant quadruple and 14-day bismuth-based therapies are more efficient than triple combinations, although their incorporation in practice is a long-lasting process, with large geographical variations. The novel type of coronavirus (SARS-CoV-2) can also occur in Helicobacter pylori-infected patients, mutually enhancing their pathogenetic effects. Diagnostic possibilities are limited in this setting. The use of proton pump inhibitors increases the risk of viral infection and the severity of the disease. Eradication treatment seems justified in patients with previously known peptic ulcers or gastrointestinal bleeding, or before starting anticoagulant treatment, but must be postponed until after resolution of the viral infection. The effect of probiotics on eradication was addressed by 20 medium-to-low quality meta-analyses, and so the recommendations of the guidelines are equivocal, which must be clarified in the future with higher quality studies. Orv Hetil. 2021; 162(32): 1275-1282. abstract_id: PUBMED:8560031 Prevalence of Helicobacter pylori infection in gastrectomy and vagotomy Objective: To report the prevalence of Helicobacter pylori infection in patients undergoing gastrectomy or vagotomy plus pyloroplasty because of peptic ulcer disease. Methods: Eighty-five patients were studied (mean age = 61 years; 85% males) who had undergone gastric surgery: Billroth I gastrectomy (n = 25), Billroth II (n = 51) and vagotomy plus pyloroplasty (n = 9). During endoscopy, biopsy specimens were obtained from the fundus and both sides of the anastomosis for histological (hematoxylin-eosin) and microbiological (Gram stain and culture) investigations. Results: The overall percentage of Helicobacter pylori infection was 43.6% (Billroth I = 40%; Billroth II = 37%; vagotomy = 89%) and no differences were observed between the two types of surgical reconstruction. However, differences were indeed observed (p < 0.01) when comparing percentages of infection between patients undergoing gastrectomy and vagotomy. Among infected gastrectomized patients, H. pylori was detected in the fundus in 93% of cases, whereas the recovery rate from anastomotic mouth biopsies was only 72% (p < 0.05). Conclusions: The prevalence of H. pylori infection in gastrectomized patients (Billroth I and II) was low considering the cause of surgery (peptic ulcer disease), and no differences were observed between the two types of surgical reconstruction.
The prevalence of infection after vagotomy and pyloroplasty was significantly higher. Among infected gastrectomized patients, H. pylori was detected more frequently in the gastric fundus than in biopsy specimens obtained from the anastomotic mouth. abstract_id: PUBMED:24721956 Lower prevalence of Helicobacter pylori infection observed in patients with erosive esophagitis Objective: To investigate the prevalence of Helicobacter pylori infection and the histological type of gastritis in patients with erosive esophagitis and compare these findings with a control group without esophagitis. Materials And Methods: Prospective study in the setting of a private clinic in Lima, Peru. Each esophagitis and control patient had two biopsies taken from the antrum, two from the corpus as well as two additional samples for the urease (CLO) test. If any of the samples was positive for Helicobacter, the patient was considered infected. The histological type of gastritis was assessed in the biopsies. Results: 140 adult patients (69% males, mean age 48 years) with endoscopically proven esophagitis and 140 control cases, matched by age and sex, were included. A lower prevalence of Helicobacter infection was found in the esophagitis group (27 vs. 44%, p = 0.004). The risk for esophagitis was also lower in the patients with Helicobacter (OR: 0.48; 95% CI: 0.28-0.82). A higher percentage of normal gastric histology was found in the patients with esophagitis (44 vs. 31%, p = 0.036). In 16 patients, erosive esophagitis developed "de novo" after the eradication of Helicobacter. Conclusions: A lower prevalence of Helicobacter infection and a higher percentage of normal gastric mucosa were found in patients with endoscopically proven erosive esophagitis. abstract_id: PUBMED:23758027 Helicobacter pylori - Update 2013 Helicobacter pylori has an important role in the pathogenesis of peptic ulcer, adenocarcinoma of the stomach, lymphoma of the stomach and autoimmune gastritis. Furthermore, Helicobacter pylori is involved in the development of symptoms in patients with dyspepsia. Guidelines of the German Society of Digestive Diseases (DGVS) and recommendations of the European Helicobacter Study Group (Maastricht-Consensus) exist for the diagnosis and treatment of Helicobacter pylori and were recently published in updated versions. The German approval and introduction of a new quadruple eradication therapy for Helicobacter pylori infections is a good occasion to outline and discuss the current state of the art of diagnosis and treatment of Helicobacter pylori in Germany. Answer: The declining prevalence of Helicobacter pylori has been associated with a relative increase in the number of patients with idiopathic peptic ulcer disease (IPUD). A study conducted over an 8-year period investigated the prevalence of IPUD and any possible trends. The results showed that while the prevalence of H. pylori-associated peptic ulcer disease decreased, the prevalence of NSAID-associated peptic ulcer disease increased. However, the prevalence of IPUD remained stable over the study period, suggesting that the declining prevalence of H. pylori did not unmask an increase in patients with IPUD (PUBMED:15256980). Another study aimed to compare the characteristics of H. pylori-positive with idiopathic duodenal ulcers. It found that 17.1% of duodenal ulcers were idiopathic, and patients with idiopathic ulcers were predominantly male and older than those with H. pylori-positive ulcers.
Additionally, patients with idiopathic ulcers had more ulcers (PUBMED:37223285). Overall, while the declining prevalence of H. pylori may lead to a relative increase in the number of patients with IPUD, the available evidence does not suggest a significant unmasking effect resulting in an increased prevalence of IPUD. Instead, the prevalence of IPUD has remained relatively stable despite the changes in H. pylori prevalence (PUBMED:15256980).
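The proportions cited in this answer follow directly from the counts reported in PUBMED:15256980, as the sketch below shows. The confidence intervals are added here purely for illustration (normal approximation) and do not appear in the abstract.

```python
# Minimal sketch reproducing the arithmetic behind the answer above, using the
# counts reported in PUBMED:15256980: 405 ulcer patients, 349 H. pylori positive,
# and 33 NSAID users plus 3 Crohn's disease among the 56 H. pylori-negative patients.
from math import sqrt

total_ulcers = 405
hp_positive = 349
hp_negative = total_ulcers - hp_positive          # 56
nsaid_related = 33
crohns = 3
ipud = hp_negative - nsaid_related - crohns       # 20 idiopathic ulcers

def proportion_with_ci(count, n, z=1.96):
    """Point estimate and normal-approximation 95% CI for a proportion (illustrative)."""
    p = count / n
    se = sqrt(p * (1 - p) / n)
    return p, max(p - z * se, 0.0), min(p + z * se, 1.0)

for label, count in [("H. pylori-associated", hp_positive), ("IPUD", ipud)]:
    p, lo, hi = proportion_with_ci(count, total_ulcers)
    print(f"{label}: {p:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```

Running this reproduces the 86.2% and 4.9% figures quoted in the abstract and the answer.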
Instruction: Should we ignore dimensional risk factors in prevention of schizophrenia? Abstracts: abstract_id: PUBMED:11129312 Should we ignore dimensional risk factors in prevention of schizophrenia? Signposts to prevention. Objective: The objective was to outline the development of the concept of allusive thinking as a genetic marker of predisposition to schizophrenia and relate this to other cognitive markers of this predisposition. Method: Publications were reviewed which were considered relevant to the objective. Results: Allusive thinking as detected clinically could be measured objectively from subjects' performance on an Object Sorting Test. Using this test it was shown that parents, both of patients with schizophrenia and of university students with allusive thinking, themselves showed allusive thinking, indicating it was familially transmitted. Subjects with allusive thinking showed reduced cortical evoked brain P300 potentials, suggesting the transmission was genetic. The hypothesis that allusive thinking was associated with weaker cortical inhibitory processes was supported by the finding that subjects with such thinking chose more remote word associations. It was suggested that reasons allusive thinking has not been used as a marker in intervention studies is that as a dimension of abstract thinking, marked allusive thinking is not associated with a high risk of developing schizophrenia, and that administration of the Object Sorting Test is time-consuming. Other dimensional cognitive factors, such as psychoticism and perceptual anhedonia and aberration, are independent of allusive thinking and are also associated with a low risk of developing schizophrenia. Genetic transmission of schizophrenia would appear to involve a number of predisposing factors distributed dimensionally in the population with the contribution of each factor being small. Conclusions: As they are associated with only a low risk of predisposition to schizophrenia, cognitive markers may not be of immediate value in the prevention of schizophrenia when compared with the less specific markers used for this purpose. However, it would seem that their study will be necessary if the nature of the genetic transmission of the illness is to be understood. This understanding could be expected to ultimately lead to more effective prevention. abstract_id: PUBMED:27117756 Developing a broad categorisation scheme to describe risk factors for mental illness, for use in prevention policy and planning. Objectives: The prevention of mental illness involves identifying and modifying those characteristics and exposures of an individual that threaten their mental health - commonly referred to as risk factors. Existing categorisations of risk factors for mental illness are either limited in their scope or oversimplified in their description. As part of a large mental health workforce and service planning project, we set out to develop a more detailed and comprehensive categorisation scheme to describe risk factors for mental illness. Methods: We conducted a rapid review of MEDLINE and Google Scholar for meta-analytic studies that examined the characteristics and exposures that typify the population with mental illness in order to identify and categorise potential risk factors. Results: The search uncovered 1628 relevant studies, from which 10 primary and 23 secondary categories of risk factors were identified, ranging from genetic and biomedical to psychological and sociocultural. 
The review revealed interesting distortions in the focus of the literature, with the majority of studies focused on a few disorders (schizophrenia, depression and neurodegenerative disorders) and genetic, psychological and physiological risks. In contrast, environmental (e.g. media exposure) and occupational (e.g. employee health) were under-represented. Conclusion: The categorisation scheme developed in this paper is a step towards a more detailed taxonomy of risk factors for mental illness; this will be most useful in guiding clinicians, researchers and policy-makers in driving the prevention agenda forward. abstract_id: PUBMED:15994566 Schizophrenia and suicide: systematic review of risk factors. Background: Suicide risk is greatly increased in schizophrenia. Detection of those at risk is clinically important. Aims: To identify risk factors for suicide in schizophrenia. Method: The international literature on case-control and cohort studies of patients with schizophrenia or related conditions in which suicide was reported as an outcome was systematically reviewed. Studies were identified through searching electronic databases and reference lists, and by consulting experts. Results: Twenty-nine eligible studies were identified. Factors with robust evidence of increased risk of suicide were previous depressive disorders (OR=3.03, 95% CI 2.06-4.46), previous suicide attempts (OR=4.09, 95% CI 2.79-6.01), drug misuse (OR=3.21, 95% CI 1.99-5.17), agitation or motor restlessness (OR=2.61, 95% CI 1.54-4.41), fear of mental disintegration (OR=12.1, 95% CI 1.89-81.3), poor adherence to treatment (OR=3.75, 95% CI 2.20-6.37) and recent loss (OR=4.03, 95% CI 1.37-11.8). Reduced risk was associated with hallucinations (OR=0.50, 95% CI 0.35-0.71). Conclusions: Prevention of suicide in schizophrenia is likely to result from treatment of affective symptoms, improving adherence to treatment, and maintaining special vigilance in patients with risk factors, especially after losses. abstract_id: PUBMED:23868715 Risk factors for inpatient suicide Objective: The study aimed to examine suicidal behaviour during in-patient care in a psychiatric university hospital. Method: Based upon a psychiatric basic documentation system prevalence and risk factors of in-patient suicides and suicide attempts were investigated (1995 - 2010). Results: A total of 42 in-patient suicides and 166 attempts were found among 16 251 patients. According to the multivariate logistic regression analysis the risk of suicide during hospitalization increases significantly for male patients, with more previous psychiatric hospitalizations and suicidality according to clinical impression at admission or suicide attempt before admission. Patients with affective or schizophrenic disorders were at highest risk. The following risk factors are associated with suicide attempt during stay: female gender, borderline personality disorder (F60.3), more previous psychiatric hospitalizations, shorter duration of disorder, earlier age of onset, suicidality according to clinical impression at admission or suicide attempt before admission. Conclusion: As depressive and schizophrenic patients represent the high-risk group of in-patient suicide, suicide prevention should be a major goal in their treatment. More frequent suicide risk assessment is recommended particularly before granting a leave or an outing. abstract_id: PUBMED:29439776 Clinical risk factors for the development of tardive dyskinesia. 
Background: Tardive dyskinesia (TD) is a severe condition that can affect almost 1 out of 4 patients on current or previous antipsychotic treatment, including both first-generation antipsychotics (FGAs) and second-generation antipsychotics (SGAs). While two novel vesicular monoamine transporter inhibitors, deutetrabenazine and valbenazine, have shown acute efficacy for TD, the majority of patients do not remit, and TD appears to recur once treatment is withdrawn. Hence, prevention of TD remains a crucial goal. Methods: We provide a clinically oriented overview of risk factors for TD, dividing them into patient-, illness- and treatment-related variables, as well as nonmodifiable and modifiable factors. Results: Unmodifiable patient-related and illness-related risk factors for TD include older age, female sex, white and African descent, longer illness duration, intellectual disability and brain damage, negative symptoms in schizophrenia, mood disorders, cognitive symptoms in mood disorders, and gene polymorphisms involving antipsychotic metabolism and dopamine functioning. Modifiable comorbidity-related and treatment-related factors include diabetes, smoking, and alcohol and substance abuse, FGA vs SGA treatment, higher cumulative and current antipsychotic dose or antipsychotic plasma levels, early parkinsonian side effects, anticholinergic co-treatment, akathisia, and emergent dyskinesia. Discussion: Clinicians using dopamine antagonists need to consider risk factors for TD to minimize TD and its consequences. abstract_id: PUBMED:16433060 Suicide among patients with schizophrenia: a consideration of risk and protective factors. Background: The elevated rate of suicide among patients with schizophrenia is well documented and thus frequently researched. The majority of research has focused solely on the identification of risk factors that predispose patients to attempt or commit suicide. This review serves to expand on the literature pertaining to suicide risk factors by additionally outlining how protective factors may shield against suicide within this unique patient population. Methods: A literature review of English-language publications pertaining to suicide among people with schizophrenia was completed using PsychINFO and MEDLINE databases between the years of 1960-2004 and 1950-2004 respectively. Special emphasis was given to studies of risk factors and protective factors for suicide. Results: Commonly supported risk factors for suicide were identified: previous attempts, severity of illness, comorbidity, social isolation, temporal relationships, and demographic characteristics. Risk factors such as a history of violence, the presence of command hallucinations, and recent tragic loss were found to warrant future study. Social support, positive coping skills, life satisfaction, and resiliency emerged as protective factors that may mitigate suicide. Conclusions: Understanding why some patients wish to end their lives but also why some desire to live allows for more comprehensive suicide prevention. abstract_id: PUBMED:20513465 In-patients suicide: epidemiology and prevention Background: Few studies have been devoted to in-patients' suicides. This covers all suicides that occurred during hospitalisation, whatever the place (inside or outside the institution) and often, for psychiatric in-patients, suicides carried out within 24 hours after leaving the institution. 
Literature Findings: However, the incidence of suicide in hospital is high, higher than that observed in the general population. It is 250 per 100,000 admissions in psychiatric hospitals and 1.8 per 100,000 admissions in general hospitals, which is four to five times more than in general population. Five to 6.5% of suicides are committed in the hospital: 3 to 5.5% occur in psychiatric hospitals and about 2% in general hospitals. Many risk factors for suicide were identified in this context. The accessibility to one or more means of suicide (water, rail, high floor [third floor or beyond], knives, possibility of hanging...) is a recognized factor in psychiatric institutions. In the psychiatric environment, hospitalisation period also determines the risk of suicide: it is highest during the 1st week of hospitalisation and within 2 weeks after leaving. The same is true for the conditions of care: inadequate supervision, the underestimation of the risk of suicide by teams, poor communication within the teams and the lack of intensive care unit promote suicide risk. The controlled studies conducted in a psychiatric environment distinguish two periods for identifying risk factors. The first period is the time of hospitalisation. Are recognized as risk factors: the existence of suicidal personal history (but also family) and attempted suicide shortly before admission, the diagnosis of schizophrenia or mood disorder (non-controlled studies also emphasize the importance of alcoholic comorbidity), being hospitalised without consent, living alone, absence from the service without permission. The second period covers the time-period immediately following the hospitalisation. For this period, risk factors are: the existence of personal history of suicide and suicidal ideation or attempt of suicide shortly before admission (but also attempt of suicide during hospitalisation), the existence of relational difficulties, the existence of stress or loss of employment, living alone, a decision on leaving the hospital unplanned and lack of contact with nursing in the immediate postdischarge period. In general hospitals, the chronicity and severity of the somatic disease, the personality of the patient and the existence of a psychiatric comorbidity are the suicidal factors most often quoted. Furthermore, we also found only a low rate of psychiatric consultation during the hospitalisation of patient who will commit suicide. Among the countries which have a national program of suicide prevention, only England registered the question of the in-patients suicide among its priorities. The elements of a prevention policy appear however in certain scientific publications and some programs of local or regional initiative. These elements can be grouped under five items: securing the hospital environment, optimisation of the care of the patients at suicidal risk, training of the medical teams in the detection of the risk and in the care of the suicidal subjects, involvement of the families in the care and implementation of post-event procedures following a completed suicide or an attempt. abstract_id: PUBMED:11129316 Universal interventions for the primary prevention of schizophrenia. Objective: To speculate on the role of universal (population-based) interventions designed to prevent schizophrenia. 
Method: A nonsystematic review of candidate risk factors for schizophrenia, with particular emphasis on the quality of the evidence, effect size of the risk factor, population attributable risk and potential for population-based intervention. Results: In order to design population-based interventions for the prevention of schizophrenia, we need to identify weak but prevalent risk factors. Based on current knowledge, the leading risk factors for schizophrenia include genes, season and place of birth, obstetric complications, prenatal exposure to viruses and prenatal nutrition. For the nongenetic exposures, we need to consider if potential interventions are effective, safe, cheap and acceptable. Conclusions: Vaccinations and interventions related to improving prenatal nutrition and antenatal care appear to be the most plausible options for universal prevention. abstract_id: PUBMED:21180656 Primary prevention in geriatric psychiatry. Background: It is estimated that the number of older adults with mental illness will increase from 4 million in 1970 to 15 million by 2030. The cost of untreated mental illness in the United States is estimated to be >$100 billion annually and nearly half of that cost can be attributed to persons age ≥60. In this paper we present a comprehensive review of empiric data available on primary prevention for the most common psychiatric illnesses in this patient population. Methods: We did an English language literature review of published studies and selected only those with the strongest emphasis on primary prevention for the most common psychiatric illnesses in older adults. Results: Modifiable and nonmodifiable risk factors and several primary prevention strategies have been described, several of them with positive outcomes mainly for depressive disorders and suicide. In conditions such as bipolar disorder, dementias, and geriatric schizophrenia, there is either a paucity of studies or no robust primary prevention strategies identified. Conclusions: Modification of risk factors, a healthy lifestyle that includes a healthy diet, exercise, socialization, and education are important aspects of primary prevention in elderly patients. Genetic engineering and vaccine therapies may open new and exciting opportunities for prevention of many psychiatric illnesses in the near future. abstract_id: PUBMED:12126596 Suicidality in schizophrenia: a review of the evidence for risk factors and treatment options. Suicide is a major contributor to the morbidity and mortality of schizophrenia, accounting for approximately 10% of deaths in these patients. The known risk factors for suicide in schizophrenia include prior suicide attempts, substance abuse, male sex, onset during first decade of illness, social isolation, depression, and feelings of hopelessness. There is significant evidence suggesting that clozapine reduces the suicide rate in patients with schizophrenia and schizoaffective disorder. Possible factors that lead to a decrease in suicidality with clozapine include the following: a direct antidepressant action, improved cognitive function and insight, diminished negative symptoms, reduced substance abuse, and improved compliance. These effects may converge or lessen feelings of hopelessness and more of its converse optimism. The International Suicide Prevention Trial (InterSePT) is a large prospective, 2-year randomized trial of the comparative effects of clozapine and olanzapine involving 980 patients at high risk for suicide in 11 countries in 56 sites. 
The study included complete freedom to augment these treatments if needed, blinded ratings, a blinded Suicide Monitoring Board, and equivalent clinical contact. The results support the superiority of clozapine over olanzapine to reduce the risk of suicidality and suggest its use should be considered for all patients with schizophrenia with high risk for suicide. Answer: No, we should not ignore dimensional risk factors in the prevention of schizophrenia. The development of the concept of allusive thinking as a genetic marker of predisposition to schizophrenia, along with other cognitive markers, indicates that these factors are familially transmitted and may have a genetic basis (PUBMED:11129312). Although cognitive markers such as allusive thinking, psychoticism, perceptual anhedonia, and aberration are associated with only a low risk of predisposition to schizophrenia, their study is necessary to understand the genetic transmission of the illness, which could ultimately lead to more effective prevention (PUBMED:11129312). Furthermore, a comprehensive categorization scheme to describe risk factors for mental illness, including schizophrenia, has been developed, which ranges from genetic and biomedical to psychological and sociocultural factors (PUBMED:27117756). This indicates the importance of considering a broad range of risk factors in prevention policy and planning. Additionally, research on suicide among patients with schizophrenia has identified several risk factors, such as previous depressive disorders, previous suicide attempts, drug misuse, agitation or motor restlessness, fear of mental disintegration, poor adherence to treatment, and recent loss (PUBMED:15994566; PUBMED:16433060; PUBMED:12126596). These findings underscore the importance of not ignoring dimensional risk factors, as they can be crucial in identifying individuals at high risk and implementing targeted prevention strategies. In conclusion, while dimensional risk factors may be associated with a low risk of developing schizophrenia, they are still relevant in the context of understanding the complex etiology of the disorder and designing effective prevention strategies. Therefore, these factors should not be ignored in the prevention of schizophrenia (PUBMED:11129312).
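The risk factors summarised above are reported as odds ratios with 95% confidence intervals (for example, OR=3.03, 95% CI 2.06-4.46 for previous depressive disorders). The sketch below shows how such figures are derived from a 2x2 table using the Woolf (log-normal) interval; the counts are hypothetical and chosen only to illustrate the calculation, not taken from the cited studies.

```python
# Minimal sketch of an odds ratio with a Woolf 95% CI from a 2x2 table.
# The counts are hypothetical, for illustration only.
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and 95% CI for a table where a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log_or = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = exp(log(or_) - z * se_log_or)
    hi = exp(log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical: 40 of 100 suicide cases vs 18 of 100 controls had a prior depressive disorder.
or_, lo, hi = odds_ratio_ci(a=40, b=18, c=60, d=82)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```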
Instruction: Can the ABCD2 risk score predict positive diagnostic testing for emergency department patients admitted for transient ischemic attack? Abstracts: abstract_id: PUBMED:19628796 Can the ABCD2 risk score predict positive diagnostic testing for emergency department patients admitted for transient ischemic attack? Background And Purpose: We sought to determine if the ABCD2 score, typically used for risk stratification, could predict having a positive diagnostic test in patients evaluated acutely for transient ischemic attack. Methods: We performed a retrospective cohort study for patients admitted from our emergency department with a new diagnosis of transient ischemic attack confirmed by a neurologist. ABCD2 scores were calculated and patients with a score of > or = 4 were placed in the high-risk cohort. Tests evaluated included electrocardiogram, CT, MRI, MR angiography, carotid ultrasonography, and echocardiography. Specific test findings considered to signify positive diagnostic tests were created a priori. Results: We identified 256 patients with transient ischemic attack for inclusion; 167 (61%) were female, the median age was 60 years (interquartile range, 50 to 72), and 162 (63%) patients had an ABCD2 score of > or = 4. Rates of completion of diagnostic testing were electrocardiogram, 270 (100%); CT, 224 (88%); MRI, 89 (35%); MR angiography, 68 (27%); carotid ultrasonography, 125 (49%); and echocardiography, 135 (53%). Univariate analysis found a significant association only with elevated ABCD2 score and carotid duplex testing (P<0.05). Conclusion: An elevated ABCD2 score may help predict patients with severe carotid occlusive disease but does not predict positive outcome in other commonly ordered tests for patients being evaluated for transient ischemic attack. An elevated ABCD2 score cannot be recommended as a tool to guide diagnostic testing in patients presenting acutely with transient ischemic attack. abstract_id: PUBMED:36621122 Evaluation of ABCD2 score during the development of stroke in COVID-19 patients diagnosed with transient ischemic attack in the emergency department. Background: The aim of the present study is to reveal the association between the risk of stroke using ABCD2 score and COVID-19 in patients who presented to our emergency department during the pandemic and were diagnosed with TIA. Methods: According to the recommendations of the European Stroke Association, patients with an ABCD2 score of <4 were classified as low-risk, and patients with an ABCD2 score of ≥4 were classified as high-risk. Within 90 days of the patient's admission to the emergency room, the development of stroke was tracked and recorded on the system. Results: Stroke occurred in 35.78% of the patients. Regarding COVID-19, 75.34% of stroke patients were positive for COVID-19 and 65.75% had COVID-19 compatible pneumonia on 'thoracic CT'. Regarding mortality, 16.4% of the patients who were positive for COVID-19 and developed a stroke died. The presence of COVID-19 compatible pneumonia on thorax CT, PCR test result and ABCD2 score were determined as independent risk factors for the development of stroke. According to the PCR test results, the probability of having a stroke decreases 0.283 times in patients who are negative for COVID-19. According to the PCR test results, the probability of having a stroke increased 2.7 times in COVID-19 positive patients. 
Conclusions: Adding the presence of COVID-19 and the presence of COVID-19 pneumonia to the ABCD2 score, based on the information about the increased risk of stroke in TIA patients, improves the predictive power of the score. More studies are needed in this regard. abstract_id: PUBMED:24424004 The ABCD2 score: a poor predictor of stroke risk in the emergency department. Clinical Question: In patients presenting with transient ischemic attack in the emergency department, what is the accuracy of the ABCD2 score for predicting stroke? Article Chosen: Perry JJ, Sharma M, Sivilotti ML, et al. Prospective validation of the ABCD2 score for patients in the emergency department with TIA. CMAJ 2011;183:1137-45. Objective: The study collaborators sought to externally validate the ABCD2 score as a tool for identifying patients seen in the emergency department with transient ischemic attack who are at high risk for stroke within 7 (primary outcome) and 90 (one of the secondary outcomes) days. abstract_id: PUBMED:21790923 Low positive predictive value of the ABCD2 score in emergency department transient ischaemic attack diagnoses: the South Western Sydney transient ischaemic attack study. Background: The ABCD(2) stroke risk score is recommended in national guidelines for stratifying care in transient ischaemic attack (TIA) patients, based on its prediction of early stroke risk. We had become concerned about the score accuracy and its clinical value in modern TIA cohorts. Methods: We identified emergency department-diagnosed TIA at two hospitals over 3 years (2004-2006). Cases were followed for stroke occurrence and ABCD(2) scores were determined from expert record review. Sensitivity, specificity and positive predictive values (PPV) of moderate-high ABCD(2) scores were determined. Results: There were 827 indexed TIA diagnoses and record review was possible in 95.4%. Admitted patients had lower 30-day stroke risk (n = 0) than discharged patients (n = 7; 3.1%) (P < 0.0001). There was no significant difference in proportion of strokes between those with a low or moderate-high ABCD(2) score at 30 (1.2 vs 0.8%), 90 (2.0 vs 1.9%) and 365 days (2.4 vs 2.4%) respectively. At 30 days the sensitivity, specificity and PPV of a moderate-high score were 57% (95% confidence interval (CI) 25.0-84.2), 32.2% (95% CI 29.1-35.6) and 0.75% (95% CI 0.29-1.91) respectively. Conclusions: Early stroke risk was low after an emergency diagnosis of TIA and significantly lower in admitted patients. Moderate-high ABCD(2) scores did not predict early stroke risk. We suggest local validation of ABCD(2) before its clinical use and a review of its place in national guidelines. abstract_id: PUBMED:22442167 Application of the ABCD2 score to identify cerebrovascular causes of dizziness in the emergency department. Background And Purpose: Dizziness can herald a cerebrovascular event. The ABCD(2) score predicts the risk of stroke after transient ischemic attack partly by distinguishing transient ischemic attack from mimics. We evaluated whether this score would also identify cerebrovascular events among emergency department patients with dizziness. Methods: We retrospectively identified consecutive adults presenting to a university emergency department with a primary symptom of dizziness, vertigo, or imbalance. Two neurologists independently reviewed medical records to determine whether the presenting symptom was caused by a cerebrovascular event (ischemic stroke, transient ischemic attack, or intracranial hemorrhage). 
ABCD(2) scores were then assigned using clinical information from the medical record. The ability of the score to discriminate between patients with cerebrovascular events and those with other diagnoses was quantified using the c statistic. Results: Among 907 dizzy patients (mean age, 59 years; 58% female), 37 (4.1%) had a cerebrovascular cause, the majority of which were ischemic strokes (n=24). The median ABCD(2) score was 3 (interquartile range, 3-4). The ABCD(2) score predicted ultimate diagnosis of a cerebrovascular event (c statistic, 0.79; 95% CI, 0.73-0.85). Only 5 of 512 (1.0%) patients with a score of ≤ 3 had a cerebrovascular event compared to 25 of 369 patients (6.8%) with a score of 4 or 5 and 7 of 26 patients (27.0%) with a score of 6 or 7. Conclusions: The ABCD(2) score may provide useful information on dizzy emergency department patients at low-risk for having a cerebrovascular diagnosis and may aid frontline providers in acute management if validated prospectively. abstract_id: PUBMED:34325273 Expedited and Comprehensive Management of Low-Risk TIA Patients in the Emergency Department is Safe and Less Costly. Objectives: Transient ischemic attack (TIA) can be a warning sign of an impending stroke. The objective of our study is to assess the feasibility, safety, and cost savings of a comprehensive TIA protocol in the emergency room for low-risk TIA patients. Materials And Methods: This is a retrospective, single-center cohort study performed at an academic comprehensive stroke center. We implemented an emergency department-based TIA protocol pathway for low-risk TIA patients (defined as ABCD2 score < 4 and without significant vessel stenosis) who were able to undergo vascular imaging and a brain MRI in the emergency room. Patients were set up with rapid outpatient follow-up in our stroke clinic and scheduled for an outpatient echocardiogram, if indicated. We compared this cohort to TIA patients admitted prior to the implementation of the TIA protocol who would have qualified. Outcomes of interest included length of stay, hospital cost, radiographic and echocardiogram findings, recurrent neurovascular events within 30 days, and final diagnosis. Results: A total of 138 patients were assessed (65 patients in the pre-pathway cohort, 73 in the expedited, post-TIA pathway implementation cohort). Average time from MRI order to MRI end was 6.4 h compared to 2.3 h in the pre- and post-pathway cohorts, respectively (p < 0.0001). The average length of stay for the pre-pathway group was 28.8 h in the pre-pathway cohort compared to 7.7 h in the post-pathway cohort (p < 0.0001). There were no differences in neuroimaging or echocardiographic findings. There were no differences in the 30 days re-presentation for stroke or TIA or mortality between the two groups. The direct cost per TIA admission was $2,944.50 compared to $1,610.50 for TIA patients triaged through the pathway at our institution. Conclusions: This study demonstrates the feasibility, safety, and cost-savings of a comprehensive, emergency department-based TIA protocol. Further study is needed to confirm overall benefit of an expedited approach to TIA patient management and guide clinical practice recommendations. abstract_id: PUBMED:21646462 Prospective validation of the ABCD2 score for patients in the emergency department with transient ischemic attack. 
Background: The ABCD2 score (Age, Blood pressure, Clinical features, Duration of symptoms and Diabetes) is used to identify patients having a transient ischemic attack who are at high risk for imminent stroke. However, despite its widespread implementation, the ABCD2 score has not yet been prospectively validated. We assessed the accuracy of the ABCD2 score for predicting stroke at 7 (primary outcome) and 90 days. Methods: This prospective cohort study enrolled adults from eight Canadian emergency departments who had received a diagnosis of transient ischemic attack. Physicians completed data forms with the ABCD2 score before disposition. The outcome criterion, stroke, was established by a treating neurologist or by an Adjudication Committee. We calculated the sensitivity and specificity for predicting stroke 7 and 90 days after visiting the emergency department using the original "high-risk" cutpoint of an ABCD2 score of more than 5, and the American Heart Association recommendation of a score of more than 2. Results: We enrolled 2056 patients (mean age 68.0 yr, 1046 (50.9%) women) who had a rate of stroke of 1.8% at 7 days and 3.2% at 90 days. An ABCD2 score of more than 5 had a sensitivity of 31.6% (95% confidence interval [CI] 19.1-47.5) for stroke at 7 days and 29.2% (95% CI 19.6-41.2) for stroke at 90 days. An ABCD2 score of more than 2 resulted in sensitivity of 94.7% (95% CI 82.7-98.5) for stroke at 7 days with a specificity of 12.5% (95% CI 11.2-14.1). The accuracy of the ABCD2 score as calculated by either the enrolling physician (area under the curve 0.56; 95% CI 0.47-0.65) or the coordinating centre (area under the curve 0.65; 95% CI 0.57-0.73) was poor. Interpretation: This multicentre prospective study involving patients in emergency departments with transient ischemic attack found the ABCD2 score to be inaccurate, at any cut-point, as a predictor of imminent stroke. Furthermore, the ABCD2 score of more than 2 that is recommended by the American Heart Association is nonspecific. abstract_id: PUBMED:20855130 An assessment of the incremental value of the ABCD2 score in the emergency department evaluation of transient ischemic attack. Study Objective: We study the incremental value of the ABCD2 score in predicting short-term risk of ischemic stroke after thorough emergency department (ED) evaluation of transient ischemic attack. Methods: This was a prospective observational study of consecutive patients presenting to the ED with a transient ischemic attack. Patients underwent a full ED evaluation, including central nervous system and carotid artery imaging, after which ABCD2 scores and risk category were assigned. We evaluated correlations between risk categories and occurrence of subsequent ischemic stroke at 7 and 90 days. Results: The cohort consisted of 637 patients (47% women; mean age 73 years; SD 13 years). There were 15 strokes within 90 days after the index transient ischemic attack. At 7 days, the rate of stroke according to ABCD2 category in our cohort was 1.1% in the low-risk group, 0.3% in the intermediate-risk group, and 2.7% in the high-risk group. At 90 days, the rate of stroke in our ED cohort was 2.1% in the low-risk group, 2.1% in the intermediate-risk group, and 3.6% in the high-risk group. There was no relationship between ABCD2 score at presentation and subsequent stroke after transient ischemic attack at 7 or 90 days. 
Conclusion: The ABCD2 score did not add incremental value beyond an ED evaluation that includes central nervous system and carotid artery imaging in the ability to risk-stratify patients with transient ischemic attack in our cohort. Practice approaches that include brain and carotid artery imaging do not benefit by the incremental addition of the ABCD2 score. In this population of transient ischemic attack patients, selected by emergency physicians for a rapid ED-based outpatient protocol that included early carotid imaging and treatment when appropriate, the rate of stroke was independent of ABCD2 stratification. abstract_id: PUBMED:27041082 Emergency Department Management of Transient Ischemic Attack: A Survey of Emergency Physicians. Background: Patients are at high risk for stroke following a transient ischemic attack (TIA). The ideal setting for evaluating and treating patients with TIA has not been established, resulting in variability in emergency department (ED) TIA management. We conducted a survey to describe ED TIA management and factors that influence disposition determination for TIA patients. Methods: We administered a mail survey to 480 randomly selected members of the Michigan College of Emergency Physicians. Survey questions addressed current ED TIA management, the acceptability of the ABCD(2) risk-stratification tool, and disposition recommendations for a series of hypothetical TIA patients. Results: A total of 188 (39%) responses were received. Head computed tomography (96.2%) and antiplatelet therapy (88.2%) were the most commonly reported ED interventions. Over 85% of respondents reported admitting most or all TIA patients. The ABCD(2) score had low acceptability among emergency medicine physicians and was rarely incorporated into practice (10.7%). Respondents identified a short-term risk of stroke of less than 2% (95% confidence interval: 1.6-2.4) as an acceptable threshold for discharge; however, most respondents recommended admission even for low-risk TIA patients. Those with access to an outpatient TIA clinic were less likely to admit low-risk TIA patients; those with access to an observation unit were more likely to admit. Conclusions: In this survey, ED physicians preferred hospital admission for most TIA patients, including those at low risk for stroke. The ABCD(2) risk-stratification tool had low acceptability. Further research is needed to refine risk-stratification tools and define the optimal setting for TIA evaluations. abstract_id: PUBMED:37344364 Evaluating the use of the ABCD2 score as a clinical decision aid in the emergency department: Retrospective observational study. Objective: Clinical decision aids (CDAs) can help clinicians with patient risk assessment. However, there is little data on CDA calculation, interpretation and documentation in real-world ED settings. The ABCD2 score (range 0-7) is a CDA used for patients with transient ischaemic attack (TIA) and assesses risk of stroke, with a score of 0-3 being low risk. The aim of this study was to describe ABCD2 score documentation in patients with an ED diagnosis of TIA. Methods: Retrospective observational study of patients with a working diagnosis of a TIA in two Australian EDs. Data were gathered using routinely collected data from health informatics sources and medical records reviewed by a trained data abstractor. ABCD2 scores were calculated and compared with what was documented by the treating clinician. Data were presented using descriptive analysis and scatter plots. 
Results: Among the 367 patients with an ED diagnosis of TIA, clinicians documented an ABCD2 score in 45% (95% CI 40-50%, n = 165). Overall, there was very good agreement between calculated and documented scores (Cohen's kappa 0.90). The mean documented and calculated ABCD2 score were similar (3.8, SD = 1.5, n = 165 vs 3.7, SD = 1.8, n = 367). Documented scores on the threshold of low and high risk were more likely to be discordant with calculated scores. Conclusions: The ABCD2 score was documented in less than half of eligible patients. When documented, clinicians were generally accurate with their calculation and application of the ABCD2. No independent predictors of ABCD2 documentation were identified. Answer: The ABCD2 risk score is commonly used for risk stratification in patients with transient ischemic attack (TIA), but its ability to predict positive diagnostic testing outcomes is not well supported. A retrospective cohort study found that an elevated ABCD2 score may help predict patients with severe carotid occlusive disease but does not predict positive outcomes in other commonly ordered tests for patients being evaluated for TIA. Therefore, an elevated ABCD2 score cannot be recommended as a tool to guide diagnostic testing in patients presenting acutely with TIA (PUBMED:19628796). Additionally, other studies have shown that the ABCD2 score has a low positive predictive value in emergency department TIA diagnoses and does not add incremental value beyond an emergency department evaluation that includes central nervous system and carotid artery imaging (PUBMED:21790923, PUBMED:20855130). Furthermore, a multicentre prospective study found the ABCD2 score to be inaccurate, at any cut-point, as a predictor of imminent stroke (PUBMED:21646462). However, one study suggested that adding the presence of COVID-19 and the presence of COVID-19 pneumonia to the ABCD2 score, based on the information about the increased risk of stroke in TIA patients, improves the predictive power of the score (PUBMED:36621122). Another study indicated that the ABCD2 score may provide useful information on dizzy emergency department patients at low-risk for having a cerebrovascular diagnosis and may aid frontline providers in acute management if validated prospectively (PUBMED:22442167). In summary, while the ABCD2 score is a tool for risk stratification, its ability to predict positive diagnostic testing specifically for TIA in the emergency department is limited and should not be solely relied upon for guiding diagnostic testing.
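For readers who want to see how figures like those above are produced, the following minimal sketch (in Python) shows the conventional ABCD2 point assignment — age ≥60 years (1 point), blood pressure ≥140/90 mmHg (1), unilateral weakness (2) or speech disturbance without weakness (1), symptom duration ≥60 minutes (2) or 10-59 minutes (1), and diabetes (1) — and how sensitivity and specificity at the cutpoints discussed above (>2 and >5) fall out of a simple 2x2 tabulation. The point values are the standard published scoring rules rather than something stated in these abstracts, and the small cohort is invented purely for illustration; it does not reproduce any of the cited data.

from dataclasses import dataclass

@dataclass
class TIAPatient:
    age: int
    systolic_bp: int
    diastolic_bp: int
    unilateral_weakness: bool
    speech_disturbance: bool    # speech disturbance without weakness
    duration_min: int
    diabetes: bool
    stroke_within_7d: bool      # outcome, used only to build the 2x2 table

def abcd2(p: TIAPatient) -> int:
    """Conventional ABCD2 point assignment (total 0-7)."""
    score = 0
    if p.age >= 60:
        score += 1
    if p.systolic_bp >= 140 or p.diastolic_bp >= 90:
        score += 1
    if p.unilateral_weakness:
        score += 2
    elif p.speech_disturbance:
        score += 1
    if p.duration_min >= 60:
        score += 2
    elif p.duration_min >= 10:
        score += 1
    if p.diabetes:
        score += 1
    return score

def sens_spec(patients, cutpoint):
    """Sensitivity/specificity of the rule 'ABCD2 > cutpoint' for 7-day stroke."""
    tp = sum(1 for p in patients if abcd2(p) > cutpoint and p.stroke_within_7d)
    fn = sum(1 for p in patients if abcd2(p) <= cutpoint and p.stroke_within_7d)
    tn = sum(1 for p in patients if abcd2(p) <= cutpoint and not p.stroke_within_7d)
    fp = sum(1 for p in patients if abcd2(p) > cutpoint and not p.stroke_within_7d)
    return tp / (tp + fn), tn / (tn + fp)

# Invented five-patient cohort, for illustration only.
cohort = [
    TIAPatient(72, 150, 85, True, False, 90, True, True),
    TIAPatient(55, 130, 80, False, True, 30, False, False),
    TIAPatient(68, 145, 95, False, False, 15, True, False),
    TIAPatient(80, 160, 90, True, False, 120, False, True),
    TIAPatient(45, 120, 75, False, False, 5, False, False),
]
for cut in (2, 5):   # the two cutpoints discussed in the abstracts above
    sens, spec = sens_spec(cohort, cut)
    print(f"ABCD2 > {cut}: sensitivity {sens:.0%}, specificity {spec:.0%}")

In a realistic cohort the same sweep reproduces the trade-off reported above: a lower threshold catches more strokes but flags far more patients as high risk, which is why the >2 cutpoint is sensitive but nonspecific.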
Instruction: Are compulsory admissions more frequent in immigrants and ethnic minorities? Abstracts: abstract_id: PUBMED:18802877 Are compulsory admissions more frequent in immigrants and ethnic minorities? Objective: To assess the frequency of compulsory admissions to psychiatric hospitals in immigrants and members of ethnic minorities compared to German patients. Methods: The rates and quotas of compulsory admissions were assessed in the city of Frankfurt / Main during one year (2005). The data were compared to the epidemiological data of the inhabitants of the catchment area of the clinic (control group). Results: The findings show a lower rate of admission to the psychiatric hospital in immigrants and members of ethnic minorities. The QUOTA of compulsory admissions compared to the total rate of admissions to the psychiatric hospital was higher in immigrants and members of ethnic minorities. The RATE of compulsory admissions out of 100 000 population in immigrants and members of ethnic minorities was similar to the rate of compulsory admissions in German population. Conclusions: Although some methodological limitations exist the findings are in line with results from other European and non-European countries. abstract_id: PUBMED:34718522 Socioeconomic and demographic risk factors in COVID-19 hospitalization among immigrants and ethnic minorities. Background: Immigrants and ethnic minorities have been shown to be at increased risk of hospitalization from COVID-19. Our aim was to analyse the contribution of socioeconomic and demographic risk factors on hospital admissions for COVID-19 among immigrants and ethnic minorities compared to the majority population. Methods: We used nationwide register data on all hospitalized COVID-19 cases between February and June 2020 (N = 2232) and random controls from the general population (N = 498 117). We performed logistic regression analyses and adjusted for age, sex, comorbidity, and socioeconomic and demographic factors. The main outcome measure was hospitalization with COVID-19 and was estimated using odds ratios (OR) and 95% confidence intervals (95% CI). Results: Among 2232 COVID-19 cases, the OR of hospitalization with COVID-19 among immigrants and descendants of non-Western origin was 2.5 times higher (95% CI: 2.23-2.89) compared with individuals of Danish origin with most pronounced results among individuals from Iraq, Morocco, Pakistan and Somalia. The OR was largely attributed to comorbidity and socioeconomic factors, especially household size, occupation, and population density. Conclusion: There is a significantly higher OR of hospitalization with COVID-19 among non-Western immigrants and ethnic minorities compared with ethnic Danes. This knowledge is crucial for health policymakers and practitioners in both the current and future pandemics to identify more vulnerable groups and target prevention initiatives. abstract_id: PUBMED:29986547 Suicide Risk among Immigrants and Ethnic Minorities: A Literature Overview. Recent studies have demonstrated that immigrants and ethnic minorities may be at higher risk of suicidal behaviour as compared to the general population. We conducted a literature search to identify studies in English from 1980 to 2017 related to suicide risk among immigrants and ethnic minorities. Six hundred and seventy-eight reports were screened, and 43 articles were included in the qualitative synthesis of the review. 
Some studies reported lower rates of suicide attempts, while other findings suggested higher rates of suicidal behaviour and deaths among immigrants as compared to the native population. Also, a positive correlation was found between suicidal behaviour and specific countries of origin. Non-European immigrant women were at the highest risk for suicide attempts, a group which included young women of South Asian and black African origin. Risk factors among migrants and ethnic minorities were found to be: language barriers, worrying about family back home, and separation from family. The lack of information on health care system, loss of status, loss of social network, and acculturation were identified as possible triggers for suicidal behaviour. Overall, results suggest that specific migrant populations and ethnic minorities present a higher risk of suicidal behaviour than native populations, as well as a higher risk of death by suicide. abstract_id: PUBMED:31193820 Psychosocial Interventions to Reduce Compulsory Psychiatric Admissions: A Rapid Evidence Synthesis. Background: Compulsory mental health hospital admissions are increasing in several European countries but are coercive and potentially distressing. It is important to identify which mental health service models and interventions are effective in reducing compulsory admissions. Methods: We conducted a rapid evidence synthesis to explore whether there is any evidence for an effect on compulsory admissions for 15 types of psychosocial intervention, identified by an expert group as potentially relevant to reducing compulsory admission. A search for randomised controlled trials (RCTs) reporting compulsory admission as a primary or secondary outcome or adverse event was carried out using clinical guidelines, recent systematic reviews, and database searches postdating these reviews. Findings: We found 949 RCTs reporting on the interventions of interest, of which 19 reported on compulsory admission. Our narrative synthesis found some evidence for the effectiveness of crisis planning and self-management, while evidence for early intervention services was mixed. We did not find evidence to support adherence therapy, care from crisis resolution teams and assertive community treatment, but numbers of relevant studies were very small. We found no trials which tested effects on compulsory admission of the nine other intervention types. Interpretation: Crisis planning and self-management interventions with a relapse prevention element are most promising for preventing compulsory admissions. Given our broad search strategy, the lack of evidence demonstrates that there is an urgent need for more research on interventions which may reduce compulsory admissions. Funding: Independent research commissioned and funded by the National Institute for Health Research Policy Research Programme. abstract_id: PUBMED:25411892 Survey nonresponse among ethnic minorities in a national health survey--a mixed-method study of participation, barriers, and potentials. Objectives: The participation rate in the Danish National Health Survey (DNHS) 2010 was significantly lower among ethnic minorities than ethnic Danes. The purpose was to characterize nonresponse among ethnic minorities in DNHS, analyze variations in item nonresponse, and investigate barriers and incentives to participation. Design: This was a mixed-method study. Logistic regression was used to analyze nonresponse using data from DNHS (N = 177,639 and chi-square tests in item nonresponse analyses. 
We explored barriers and incentives regarding participation through focus groups and cognitive interviews. Informants included immigrants and their descendants of both sexes, with and without higher education. Results: The highest nonresponse rate was for non-Western descendants (80.0%) and immigrants 25 (72.3%) with basic education. Immigrants and descendants had higher odds ratios (OR = 3.07 and OR = 3.35, respectively) for nonresponse than ethnic Danes when adjusted for sex, age, marital status, and education. Non-Western immigrants had higher item nonresponse in several question categories. Barriers to non-participation related to the content, language, format, and layout of both the questionnaire and the cover letter. The sender and setting in which to receive the questionnaire also influenced answering incentives. We observed differences in barriers and incentives between immigrants and descendants. Conclusions: Nonresponse appears related to linguistic and/or educational limitations, to alienation generated by the questions' focus on disease and cultural assumptions, or mistrust regarding anonymity. Ethnic minorities seem particularly affected by such barriers. To increase survey participation, questions could be sensitized to reflect multicultural traditions, and the impact of sender and setting considered. abstract_id: PUBMED:37191830 COVID-19 mortality and use of intensive care among ethnic minorities - a national register-based Danish population study. Migrants and ethnic minorities are disproportionately affected by the Coronavirus Disease 2019 (COVID-19) pandemic compared to the majority population. Therefore, we studied mortality and use of mechanical ventilation (MV) by country of birth and migrant status in a nationwide cohort in Denmark. Nationwide register data on all cases hospitalized for > 24-hours with COVID-19 between February 2020 and March 2021. Main outcome measures were mortality and MV within 30 days of hospitalization for COVID-19. Odds ratios (OR) and 95% confidence intervals (95% CI) were estimated by region of origin and migrant status using logistic regression analyses, adjusting for age, sex, comorbidity and sociodemographic factors. Of 6,406 patients, 977 (15%) died and 342 (5%) were treated with mechanical ventilation. Immigrants (OR:0.55;95%CI: 0.44-0.70) and individuals of non-Western origin had a lower odds (OR: 0.49; 95% CI: 0.37-0.65) of death upon admission with COVID-19 compared to Danish born individuals. Immigrants and descendants (OR: 1.62; 95% CI: 1.22-2.15) as well as individuals of non-Western origin (OR: 1.83; 95% CI: 1.35-2.47) had a significantly higher odds of MV compared to Danish born individuals. Outcomes of individuals with Western origin did not differ. Immigrants and individuals of non-Western origin had a significantly lower COVID-19 associated mortality compared to individuals of Danish origin after adjustment for sociodemographic factors and comorbidity. In contrast, the odds of MV was higher for immigrants and individuals of non-Western origin compared to individuals of Danish origin. abstract_id: PUBMED:28259634 Determinants of compulsory admissions in a state psychiatric hospital-Case control study. Objective: Compulsory admissions are against the patient's will and are presumably due to diverse reasons. There has been a rise in compulsory admissions world over. The objective of the study was to determine the risk factors for compulsory admissions in a state psychiatric hospital. 
Methods: A case control study was conducted at the state psychiatric hospital, Trivandrum, Kerala. Cases were involuntary non-legal admissions while controls were voluntary admissions. Putative risk factors studied included social support, severity of psychopathology etc., in addition to the socio-demographic variables. Results: Risk factors for compulsory admissions were higher age, 30-49 years, OR=1.98, 95% CI [1.03-3.81]; >50years, OR=2.2, 95% CI [1.03-4.72], being from an urban locale,OR=1.99, 95% CI [1.13-3.52], living in joint & extended families OR=2.12, 95% CI [1.3-3.4], homelessness OR=2.24, 95% CI [1.32-3.79] and poor social support, OR=4.45 [2.53-7.81]. The type of illness, its duration, diagnosis, or symptom severity were not significantly related to compulsory admissions, but past compulsory admissions OR=5.36, 95% CI [2.09-13.75], poor functioning OR=2.54, 95% CI [1.31-4.91] and poor compliance to medication OR=1.78, 95% CI [1.05-3.01] were associated with compulsory admission. Poor social support, past involuntary admissions and poor functional status retained their association after multivariate analysis. Conclusions: By addressing the modifiable factors like poor social support, poor functional status, and poor compliance to medication, compulsory admissions could be prevented. Since it is found that compulsory admissions are likely to repeat, such patients form a high risk group requiring specific interventions. abstract_id: PUBMED:21628357 Factors associated with higher risks of emergency compulsory admission for immigrants: a report from the ASAP study. Objective: Specific ethnic groups appear to be disproportionally represented in emergency compulsory admissions. This may be due to a parallel higher risk of psychopathology, but different pathways to care in patients from ethnic minorities may also be an explanatory factor. In this article we concentrate on the influence of ethnic background, pathways to psychiatric emergency services and the amount of past psychiatric treatment as predictors of emergency compulsory admission. Methods: All the patients coming into contact consecutively with the Psychiatric Emergency Service Amsterdam (PESA) from September 2004 to September 2006 were included in the study. We collected socio-demographic and clinical characteristics, and data about prior use of mental health services and referral to the emergency service. Results: The risk of compulsory admission for immigrants from Surinam and the Dutch Antilles (OR 2.6), sub-Saharan Africa (OR 3.1), Morocco and other non-Western countries (each OR 1.7) was significantly higher than for native Dutch people. After controlling for socio-demographic characteristics, diagnosis, referral pattern and psychiatric treatment history, this correlation was no longer found. Conclusion: We found no evidence to substantiate the hypothesis that ethnic background plays an independent role in emergency compulsory admission. Police referral rather than referral by a GP, and being diagnosed with a psychosis, seemed to be explanatory factors for the high risk of compulsory admission for non-Western immigrants. Infrequent contact with secondary mental healthcare during the previous five years was not found to be consistently associated with a higher risk. abstract_id: PUBMED:33751555 Participation in civil society organizations and ethnic minorities' interethnic friendships in Britain. 
In Britain, civil society organizations (CSOs) have garnered much praise for promoting interethnic friendships (IEF) and strengthening community cohesion. Yet, there is very little empirical evidence to suggest that participation in CSOs promotes ethnic minorities' IEF. Using nationally representative longitudinal (2011-2019) and cross-sectional (2010) data, this article explores the association between participation in CSOs and IEF formation among five British ethnic minority groups and analyses how this relationship is affected by the ethnic composition of CSOs. Overall, fixed effects models show that participation in CSOs only significantly promotes IEF for Indians. For other minority groups it has either no effect or, in the case of Pakistanis, significantly decreases IEF. Further analyses show that compared with ethnic minorities that do not participate in any CSOs, those who participate in mostly interethnic CSOs tend to have significantly more IEF, whereas those who participate in mostly co-ethnic CSOs tend to have significantly less IEF. Taken together, these findings suggest that the association between civic participation and ethnic minorities' IEF is much more nuanced than previously thought and policy interventions seeking to improve ethnic integration should, therefore, take the ethnic background of participants and the ethnic composition of CSOs into account. abstract_id: PUBMED:30818332 Impact of acculturation on oral health among immigrants and ethnic minorities: A systematic review. Objective: Cultural changes faced by immigrants and ethnic minorities after moving to a host country may have a detrimental or beneficial influence on their oral health and oral health-related behaviors. Therefore, this paper reviews the literature to see the impact of acculturation on immigrants and ethnic minorities' oral health outcomes. Methods: We searched seven electronic databases up to January 2018. All cross-sectional and longitudinal quantitative studies that examined associations between acculturation and oral health status and/or oral health behaviors among ethnic minority and immigrant population[s] were included. Study selection, data extraction, and risk of bias assessment were completed in duplicate. The Newcastle-Ottawa checklist was used to appraise the methodological quality of the quantitative studies. A meta-analytic approach was not feasible. Results: A total of 42 quantitative studies were identified. The studies showed a positive association between acculturation and oral health status/behaviors. The most frequently used acculturation indicators were language spoken by immigrant and ethnic minorities and length of stay at the host country. High-acculturated immigrant and ethnic minority groups demonstrated better oral health outcomes, oral health behaviors, dental care utilization, and dental knowledge. Conclusions: According to existing evidence, a positive effect of acculturation on oral health status and behaviors was found. Practical Implications: Dental practitioners should be culturally competent to provide the appropriate services and treatments to immigrant and ethnic minorities. Policymakers should also be sensitive to cultural diversities and properly address the unique needs of each group in order to maintain oral health equity. Answer: The frequency of compulsory admissions to psychiatric hospitals in immigrants and ethnic minorities compared to the general population varies according to different studies. 
One study conducted in the city of Frankfurt/Main during 2005 found that the quota of compulsory admissions compared to the total rate of admissions to the psychiatric hospital was higher in immigrants and members of ethnic minorities, although the rate of compulsory admissions out of 100,000 population in these groups was similar to that of the German population (PUBMED:18802877). This suggests that while the overall number of compulsory admissions may be similar, immigrants and ethnic minorities are overrepresented in the subset of admissions that are involuntary. Another study examining the determinants of compulsory admissions in a state psychiatric hospital found that risk factors for compulsory admissions included higher age, urban locale, living in joint & extended families, homelessness, and poor social support. The type of illness, its duration, diagnosis, or symptom severity were not significantly related to compulsory admissions, but past compulsory admissions, poor functioning, and poor compliance to medication were associated with compulsory admission (PUBMED:28259634). A report from the ASAP study indicated that specific ethnic groups appeared to be disproportionately represented in emergency compulsory admissions. However, after controlling for socio-demographic characteristics, diagnosis, referral pattern, and psychiatric treatment history, the correlation with ethnic background was no longer found. Factors such as police referral rather than referral by a GP, and being diagnosed with a psychosis, were explanatory factors for the high risk of compulsory admission for non-Western immigrants (PUBMED:21628357). Overall, while some studies suggest that immigrants and ethnic minorities may be more frequently subjected to compulsory psychiatric admissions, the evidence is mixed and may be influenced by a range of factors including socio-demographic characteristics, pathways to care, and previous psychiatric treatment history.
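The answer above leans on the distinction the Frankfurt study draws between the RATE of compulsory admission per 100,000 inhabitants and the QUOTA of compulsory admissions among all psychiatric admissions. A short worked example may make that distinction concrete; the figures below are entirely hypothetical and are not the Frankfurt data — they are chosen only to show how two groups can have the same population rate while one has a markedly higher quota because its overall admission rate is lower.

def admission_metrics(population, total_admissions, compulsory_admissions):
    """Return the rate of compulsory admissions per 100,000 population and
    the quota of compulsory admissions among all admissions (in percent)."""
    rate = compulsory_admissions / population * 100_000
    quota = compulsory_admissions / total_admissions * 100
    return rate, quota

# Hypothetical figures chosen only to illustrate the rate/quota distinction;
# they are not the Frankfurt data.
groups = {
    "majority population": dict(population=500_000, total_admissions=2_000,
                                compulsory_admissions=150),
    "immigrants / minorities": dict(population=100_000, total_admissions=250,
                                    compulsory_admissions=30),
}

for name, g in groups.items():
    rate, quota = admission_metrics(**g)
    print(f"{name}: {rate:.0f} compulsory admissions per 100,000 inhabitants; "
          f"{quota:.1f}% of all admissions were compulsory")
# Output: both groups sit at 30 per 100,000, but the quota is 7.5% versus 12.0%.

This is exactly the pattern summarised in the answer: similar rates per 100,000 population, but overrepresentation of compulsory admissions within the admissions that do occur.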
Instruction: Can the Distress Thermometer be improved by additional mood domains? Abstracts: abstract_id: PUBMED:19296461 Can the Distress Thermometer be improved by additional mood domains? Part II. What is the optimal combination of Emotion Thermometers? Purpose: To examine the added value of an algorithmic combination of visual-analogue thermometers compared with the Distress Thermometer (DT) when attempting to detect depression, anxiety or distress in early cancer. Methods: We report Classification and Regression Tree and logistic regression analyses of the new five-domain Emotion Thermometers tool. This is a combination of five visual-analogue scales in the form of four mood domains (distress, anxiety, depression, anger) as well as need for help. 130 patients attending for their first chemotherapy treatment were assessed. We calculated optimal accuracy for each domain alone and in combination against several criterion standards. Results: When attempting to diagnose depression the Depression Thermometer (DepT) used alone was the optimal approach, but when attempting to detect broadly defined distress or anxiety then a combination of thermometers was most accurate. The DepT was significantly more accurate in detecting depression than the DT. For broadly defined distress a combination of depression, anger and help thermometers was more accurate than the DT alone. For anxiety, while the anxiety thermometer (AnxT) improves upon the DT alone, a combination of the DepT and AnxT are optimal. In each case the optimal strategy allowed the detection of at least an additional 9% of individuals. However, combinations are more laborious to score. In settings where the simplest possible option is preferred the most accurate single thermometer might be preferable as a first stage assessment. Conclusion: The DT can be improved by specific combinations of simple thermometers that incorporate depression, anxiety, anger and help. abstract_id: PUBMED:19296462 Can the Distress Thermometer be improved by additional mood domains? Part I. Initial validation of the Emotion Thermometers tool. Purpose: To examine the value of a new screening instrument in a visual-analogue format. Methods: We report the design and validation of a new five-dimensional tool called the Emotion Thermometers (ET). This is a combination of five visual-analogue scales in the form of four predictor domains (distress, anxiety, depression, anger) and one outcome domain (need for help). Between March and August 2007, 130 patients attending the chemotherapy suite for their first chemotherapy treatment were asked to complete several questionnaires with validation for distress, anxiety and depression. Results: Of 81 with low distress on the Distress Thermometer (DT), 51% recorded emotional difficulties on the new ET tool, suggesting added value beyond the DT alone. Of those with a broadly defined emotional complication, 93.3% could be identified using the Anxiety Thermometer (AnxT) alone, compared with 54.4% who would be recognized using the DT alone. Using a cut-off of 3v4 on all thermometers against the total Hospital Anxiety and Depression Scale (HADS) score (cut-off 14v15), the optimal thermometer was the Anger Thermometer (sensitivity 61%, specificity 92%). Against HADS anxiety scale, the optimal thermometer was AnxT (sensitivity 92%, specificity 61%) and against the HADS depression scale, the optimal thermometer was the Depression Thermometer (DepT; sensitivity 60%, specificity 78%). 
Finally, against DSM-IV major depression, the optimal thermometer was the DepT (sensitivity 80%, specificity 79%). Further improvements may be possible by using a combination of thermometers or by repeating the screen. Conclusion: The diagnostic accuracy of the DT can be improved by the inclusion of simple addition linear domains without substantially increasing the time needed to apply the test. abstract_id: PUBMED:23910979 Development and validation of the distress thermometer for parents of a chronically ill child. Objective: To develop and validate a Distress Thermometer for Parents (DT-P) for chronically ill children and to determine a cutoff score for clinical distress. Study Design: Parents of a chronically ill child (0-18 years) were recruited via announcements or were actively approached at the outpatient clinics of the Emma Children's Hospital/Academic Medical Center and Vrije Universiteit Medical Center. We modeled the development of the DT-P on the Distress Thermometer used in oncology medical care. The DT-P consists of a thermometer score from 0 (no distress) to 10 (extreme distress) and a problem list (practical, social, emotional, physical, cognitive, and parenting domains). The DT-P was validated with the Hospital Anxiety and Depression Scale (HADS) and the Parenting Stress Index. Results: The mean thermometer score of the 706 participating parents was 3.7 (SD 3.0). The thermometer score and the scores in the practical, emotional, physical, and cognitive problem domains were strongly related to anxiety, depression, and the total score of the HADS (0.55 ≤ r ≤ 0.72). The thermometer score and all problem domain scores were moderately-to-strongly related to the Parenting Stress Index (0.38 ≤ r ≤ 0.63). A cutoff-score of 4 correctly identified 86% of "clinical HADS cases" (sensitivity) and 67% of "nonclinical HADS cases" (specificity). Conclusions: We developed the DT-P and examined its diagnostic utility in a large sample. The DT-P appeared to be a valid and useful short screening-tool for identifying parental distress. abstract_id: PUBMED:25701689 Improving the Diagnostic Accuracy of the Distress Thermometer: A Potential Role for the Impact Thermometer. Context: Screening for and management of distress in clinical practice is an important issue in the field of psycho-oncology. The Distress Thermometer (DT) is a common screening tool, but other methods recently have been proposed, with the aim of improving its diagnostic accuracy. Objectives: To investigate the diagnostic accuracy of the DT alone and combined with the Impact Thermometer (IT), via the use of two possible combination methods. Methods: A heterogeneous sample of 385 adult patients with cancer completed the DT, the IT, and the Brief Symptom Inventory-18. Results: The results of the DT were comparable with those found in previous studies, indicating that the DT was adequate for "screening" but had limited value for "case finding." The DT and the IT combined (summed or paired scores) showed minor differences in accuracy indexes compared with the DT alone. Conclusion: The results of this study suggest that the addition of the IT to the DT failed to improve its accuracy for identifying distress. abstract_id: PUBMED:33936593 Utility of a modified distress thermometer in screening COVID-19 patients for psychological distress: a prospective Egyptian study. Background: The National Comprehensive Cancer Network (NCCN) has adopted the distress thermometer (DT) as one of the best-known distress-screening instruments. 
We have adopted a modified version of the NCCN distress thermometer. We questioned if this modified DT (m-DT) could be utilized for measuring the prevalence of psychological distress among COVID-19 patients. Methods: The prospective study included 2 phases; modification of the original DT and its associated problem list (PL), and evaluation of this m-DT in measuring the prevalence of psychological distress among COVID-19 patients. Egyptian adult subjects with suspected or confirmed cases of COVID-19 at 2 University Hospitals were enrolled. Binary logistic regression tests were carried out to explore the association between the m-DT cut-off scores of 4 and the clinical variables. Results: One hundred sixty-nine (60.4%) patients experienced significant distress (m-DT cut off score ≥4). Logistic regression showed that occupation, presence of special habits, length of quarantine time, worry, cough, shortness of breath, and fever, were independent factors associated with significant distress in COVID-19 patients. Conclusion: With the modified distress thermometer (m-DT), 60% of Egyptian COVID-19 patients experienced significant distress. This distress was significantly related to age, marital status, occupation, presence or absence of special habits, and length of the quarantine time. With m-DT, the current study had identified worry, being a health-care worker, shortness of breath, fever, length of quarantine time, presence of special habits, and cough as independent factors associated with significant distress in COVID-19 patients. Further studies are warranted. abstract_id: PUBMED:31623591 Assessing the validity of the clinician-rated distress thermometer in palliative care. Background: The distress of patients suffering from a terminal illness can lead to a state of despair and requests for euthanasia and assisted suicide. It is a major challenge for palliative care workers. The Distress Thermometer (DT) is recommended by the National Comprehensive Cancer Network as a means of more easily assessing distress. It is available as a Self-assessment reported Distress Thermometer, but for a wider use in palliative care it should also be implemented in the form of a clinician-reported outcome (clinRO). Clinicians need to rate patient's distress when the patient is not able to do so (subject that cannot be addressed, defensive patient…). The primary aim of the quantitative study was to assess the validity of the Clinician-Rated Distress Thermometer in palliative care. Method: The assessments were performed by teams working in three palliative care centres. The primary endpoint was concordance between the patient and clinicians' responses via Lin's concordance coefficient. Eligible patients were aged 18 years or older, suffering from a severe disease in the palliative phase, and with a sufficient level of awareness to consent to participate in the study. A total of 51 patients were recruited, 55% were male, with a mean age of 65.8 years [39-90 years]. Results: Three hundred sixty-four clinician-Rated Distress Thermometer and 467 Self-Reported Distress Thermometer were performed. Only 364 of the 467 Self-Reported Distress Thermometer were used for the study, as investigators did not systematically ask the patient to give an account of his distress. Concordance between patient and clinician responses: The Lin's concordance coefficient with a threshold (alpha) of 5% was 0.46 [0.38; 0.54]. At the first assessment, it was 0.61 [0.44; 0.79]. 
Cohen's kappa coefficient was 0.52, with a concordance rate of 79.6%. The sensitivity was 82.9% [66.4-93.4] and the specificity 71.4% [41.9-91.6]. Conclusion: The first assessment gave the best results in terms of concordance between the Clinician-Rated DT and the Self-Reported DT. In subsequent assessments, the Clinician-Rated DT was less consistent with the patients' Self-Reported DT.
abstract_id: PUBMED:30788626 Psychosocial distress in oncology: using the distress thermometer for assessing risk classes. Purpose: Cancer patients often suffer from psychological distress, which can compromise their quality of life. Our study aimed to identify risk classes for the development of psychosocial distress. Methods: Three hundred seventy-two adult cancer patients were assessed with the Distress Thermometer (DT) and Problem List at the National Cancer Research Centre "Giovanni Paolo II" of Bari. We also compiled an ad hoc socio-medical and clinical record survey for collecting socio-demographic information and clinical variables. To examine the interplay among the different variables and distinguish internally homogeneous subgroups of patients with different risks of distress, the RECursive Partitioning and Amalgamation (RECPAM) technique was used. Results: Most patients were female, and the most frequent diagnosis was breast cancer, followed by gastro-intestinal cancer and hematological cancer. Distress was present in 43% of the sample, with a total of 156 patients with a DT > 5. The RECPAM analysis identified three distinct and homogeneous patient subgroups (RECPAM classes) with different risks of distress, defined by diagnosis, marital status, and Eastern Cooperative Oncology Group Performance Status. Conclusion: The use of the distress thermometer allows clinicians to identify patients with characteristics that may increase the risk of developing psychosocial distress. This evaluation can allow timely psychological intervention and improve the patient's therapeutic program.
abstract_id: PUBMED:36367151 Evaluation of distress management in inpatients with cancer by means of the distress thermometer: A mixed methods approach. Objectives: To improve psychosocial care for oncology inpatients, we implemented screening for distress by means of the distress thermometer (DT) at the Comprehensive Cancer Centre Zurich in 2011. Since then, several screening barriers have been reported regarding the application of the DT. This study aimed to evaluate the distress screening process between 2012 and 2016 to identify barriers preventing sustainability. Methods: In this sequential explanatory mixed methods study, we synthesized the results of 2 quantitative retrospective descriptive studies and 1 qualitative focus group study. To compare and analyze the data, we used thematic triangulation. Results: Nurses screened 32% (N = 7034) of all newly admitted inpatients with the DT, and 47% of the screenings showed a distress level ≥5. Of these cases, 9.7% were referred to psycho-oncological services and 44.7% to social services. In 15.7% of these cases, nurses generated a psychosocial nursing diagnosis. In focus group interviews, nurses attributed the low screening rate to the following barriers: adaptation to patients' individual needs, patient-related barriers and resistance, timing, communication challenges, established referral practice, and lack of integration in the nursing process.
Significance Of Results: To improve distress screening performance, the screening process should be tailored to patients' needs and to nurses' working conditions (e.g., timing, knowledge, and setting-specific factors). To gain more evidence on distress management as a basis for practical improvements, further evaluations of distress screening are required. abstract_id: PUBMED:29385290 The distress thermometer provides a simple screening tool for selecting distressed childhood cancer survivors. Aim: We investigated the value of the distress thermometer, a one-item screening tool, in childhood cancer survivors. Methods: The participants were 286 childhood cancer survivors who visited an outpatient clinic at Erasmus MC University-Sophia Children's Hospital, Rotterdam, The Netherlands, for the first time from 2001 to 2008 and completed the distress thermometer and Hospital Anxiety and Depression Scale (HADS). Higher scores reflected more distress. A HADS score ≥15 was used as the cut-off point for emotional distress. We calculated the correlation between the HADS and distress thermometer, the relationship between the HADS anxiety and the HADS depression ratings, and the distress score and the sensitivity and specificity for different cut-off scores of the distress thermometer. Results: A moderate correlation was found between the HADS score and the distress thermometer (r: 0.56, p < 0.01, interclass correlation 0.40, p < 0.01). In total, 39% of the variability of distress, as measured by the distress thermometer, could be explained by the HADS anxiety and HADS depression ratings. A cut-off score of at least three on the distress thermometer resulted in a sensitivity of 92% and specificity of 79%. Conclusion: The distress thermometer provided a rapid screening tool for identifying distressed childhood cancer survivors who needed further psychological support. abstract_id: PUBMED:34258811 Screening for distress in women with gynaecological cancer: Adaptation of the distress thermometer for gynaecological oncology patients. Objective: Generic distress screening tools may not recognise the unique concerns reported in some cancer populations. The face and content validity of a screening tool derived from the National Cancer Comprehensive Network distress thermometer and problem list and adapted specifically for women with gynaecological cancer is presented. Methods: Building on existing work, panels of clinicians and researchers, and focus groups with women treated for gynaecological cancer, developed a nuanced distress screening tool. Results: The clinical reference group used an iterative process to reduce 54 items to 22 across four domains (practical/family/psychological/physical). These items were included in the draft tool, which was reviewed by two focus groups of long-term cancer survivors. Participants unanimously thought the tool was necessary though several changes were recommended. The final draft tool contained a global distress score and 25 items across the four domains. Conclusions: This measure provides a structured screening tool tailored to the concerns of women with gynaecological cancer, enhancing communication between clinicians and their patients about potentially identified and unrecognised sources of distress. Future research will focus on establishing sensitivity and specificity of this tool and further assessing its utility in clinical settings for all gynaecological cancers (including rare cancers like vulvar cancer). 
Answer: Yes, the Distress Thermometer (DT) can be improved by additional mood domains. Research has shown that the diagnostic accuracy of the DT can be enhanced by including simple additional linear domains without substantially increasing the time needed to apply the test (PUBMED:19296462). Specifically, the Emotion Thermometers (ET) tool, which adds four mood domains (distress, anxiety, depression, anger) and one outcome domain (need for help), has been found to provide added value beyond the DT alone. For instance, the Depression Thermometer (DepT) used alone was the optimal approach for diagnosing depression, while combinations of thermometers were most accurate for detecting broadly defined distress or anxiety (PUBMED:19296461). The ET tool identified emotional difficulties in 51% of patients who had low distress on the DT, suggesting that it captures a broader range of emotional complications. The Anxiety Thermometer (AnxT) alone could identify 93.3% of those with broadly defined emotional complications, compared with 54.4% recognized using the DT alone. The optimal thermometer against the Hospital Anxiety and Depression Scale (HADS) was the Anger Thermometer for sensitivity and specificity (PUBMED:19296462). In conclusion, the DT can be improved by specific combinations of simple thermometers that incorporate additional mood domains such as depression, anxiety, anger, and help. These combinations allow for the detection of at least an additional 9% of individuals with emotional difficulties, providing a more comprehensive assessment of distress in patients (PUBMED:19296461).
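Several of the abstracts above report sensitivity and specificity for a thermometer cut-off against a criterion instrument (for example, a cut-off of at least 3 giving 92% sensitivity and 79% specificity against the HADS in PUBMED:29385290, and the 3-versus-4 cut-off comparisons in the Emotion Thermometers work). The sketch below shows how such figures are typically derived by sweeping candidate cut-offs over paired screening scores and criterion classifications. The scores here are simulated for illustration only and do not reproduce any of the cited datasets.

import random

random.seed(1)

def clip(score):
    """Keep a simulated thermometer score on the 0-10 scale."""
    return min(10, max(0, round(score)))

# Simulated paired data: (thermometer score, criterion case status).
# Cases are given higher scores on average, purely for illustration.
cases = [(clip(random.gauss(6, 2)), True) for _ in range(40)]
controls = [(clip(random.gauss(2, 2)), False) for _ in range(160)]
data = cases + controls

def sens_spec_at(cutoff, pairs):
    """Screen-positive means score >= cutoff; the criterion defines true cases."""
    tp = sum(1 for s, case in pairs if case and s >= cutoff)
    fn = sum(1 for s, case in pairs if case and s < cutoff)
    tn = sum(1 for s, case in pairs if not case and s < cutoff)
    fp = sum(1 for s, case in pairs if not case and s >= cutoff)
    return tp / (tp + fn), tn / (tn + fp)

for cutoff in range(2, 6):   # candidate cut-offs, e.g. the 3-versus-4 debate above
    sens, spec = sens_spec_at(cutoff, data)
    print(f"cut-off >= {cutoff}: sensitivity {sens:.0%}, specificity {spec:.0%}")

Choosing the final cut-off is then a judgment about which error matters more in the clinic: a screening use case usually favours the more sensitive threshold and accepts the lower specificity.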
Instruction: Is the timing of post-vasectomy seminal analysis important? Abstracts: abstract_id: PUBMED:9523670 Is the timing of post-vasectomy seminal analysis important? Objective: To review the practice in two hospitals with differing protocols in the timing of seminal analysis after vasectomy. Patients And Methods: The results from 245 vasectomies carried out at Hospital A, where semen was assessed 3 months after vasectomy, were reviewed and compared with those from 100 consecutive vasectomies at Hospital B, where semen was assessed 6 months after vasectomy. The results of seminal analysis at Hospital A were also audited after changing to the 6-month protocol. The patients' preferences for the timing of seminal analysis were also obtained. Results: Of the 245 patients at Hospital A, 58 (24%) failed to provide samples, leaving 187 (76%) for evaluation; 528 samples were examined (mean 2.8 per patient, range 1-13). The first sample was positive in 36 (19.3%) and the second positive in 10 (5.3%), the first being negative. Four (2%) patients had persistent spermatozoa at 6 months, one subsequently undergoing exploration. Thirty-one (17%) patients provided further samples despite providing two consecutive clear ones. At Hospital B, 24 (24%) patients failed to provide samples; 10 (13%) patients had persistent spermatozoa at 6 months and live spermatozoa were detected in one patient's samples. All eventually produced clear samples, with none requiring exploration. After changing the protocol, 87 vasectomies were performed, with 18 (21%) patients failing to provide samples; seven (10%) of the samples collected showed occasional nonmotile spermatozoa at 6 months in either the first, second or both samples, with all samples clear by 8 months after vasectomy. Conclusions: The complete disappearance of spermatozoa after vasectomy takes longer than is generally believed and we therefore suggest that given adequate counselling, seminal analysis 6 months after vasectomy is cost-effective and in the patient's interest. abstract_id: PUBMED:1563639 Cholesterol and phospholipids in human seminal plasma before and after vasectomy Before and 2.6 months after vasectomy the alterations of secretory capacity and physiologic damage of the accessory genital glands, were evaluated by means of chemical analyses of certain constituents of seminal plasma such as free cholesterol, esterified cholesterol and phospholipids. The results of these studies showed that the ratio between the concentration of free cholesterol and esterified cholesterol is constant six months after vasectomy and the concentration of phospholipids decreases two months after vasectomy. These results are significant and may be important to know the alterations of secretory capacity of the human genital glands after vasectomy. abstract_id: PUBMED:38404558 Providing a post-vasectomy semen analysis cup at the time of vasectomy rather than post-operatively improves compliance. Background: Post-vasectomy semen analysis (PVSA) completion rates after vasectomy are poor, and minimizing the need for an additional in-person visit may improve compliance. We hypothesized that providing PVSA specimen cup at time of vasectomy instead of at a postoperative appointment might be associated with higher PVSA completion rates. Methods: We performed a retrospective cohort study with historical control using medical records of all patients seen by a single provider for vasectomy consultation between October 2016 and June 2022. 
All patients who underwent vasectomy were included. Patients who underwent vasectomy prior to 05/01/2020 had PVSA specimen cup given at postoperative appointment two weeks following vasectomy, and those who underwent vasectomy after 05/01/2020 were given PVSA specimen cup at time of vasectomy. PVSA completion, demographic, and clinical outcomes data were collected. Logistic regressions were used to investigate associations between PVSA completion rates and timing of PVSA specimen cup provision. Results: There were no significant differences among study cohorts across all patient demographics analyzed, including age, body mass index (BMI), age of primary partner, presence of children, and history of prior genitourinary infection. A total of 491 patients were seen for vasectomy consultation between October 2016 and June 2022; among these patients, 370 underwent vasectomy. Of these, 173 (46.8%) patients underwent vasectomy prior to 05/01/2020 and were given PVSA specimen cup at postoperative visit; 197 (53.2%) patients underwent vasectomy after 05/01/2020 and were given PVSA specimen cup at vasectomy. Providing PVSA specimen cup at time of vasectomy was associated with higher odds of PVSA completion than providing PVSA specimen cup at postoperative visit [62.4% vs. 49.7%; odds ratio (OR) =1.68; 95% confidence interval (CI): 1.11, 2.55]. Adjusting for all identified confounders excludes 35 (9.5%) patients without a primary partner and shows no statistically significant association in cup timing [adjusted OR (aOR) =1.53; 95% CI: 0.98, 2.39]. Adjusting for all identified confounders except age of primary partner revealed timing of specimen cup provision at time of vasectomy was associated with higher odds of PVSA completion (aOR =1.64; 95% CI: 1.08, 2.52). Conclusions: PVSA specimen cup provision at time of vasectomy versus at postoperative appointment is associated with higher rates of PVSA completion in this retrospective cohort study. abstract_id: PUBMED:6862669 Effect of vasectomy on the steroid profile of human seminal plasma. The effect of vasectomy on the steroid profile of seminal plasma was studied in 19 healthy men at 4-monthly intervals during a 2-year post-vasectomy period. The steroids analyzed by radioimmunoassays were: androstenedione (A), testosterone (T), dihydrotestosterone (DHT), oestradiol (E2) and the sulphoconjugated forms of pregnenolone (delta 5P-S), dehydroepiandrosterone (DHEA-S), testosterone (T-S), dihydrotestosterone (DHT-S), oestrone (E1-S) and oestradiol (E2-S). The quantitatively most important steroids present in the seminal plasma were DHEA-S, delta 5P-S and DHT-S. Vasectomy resulted in a significant but non-progressive reduction in DHT (40%) and T (23%) levels. The levels of E2, delta 5P-S and DHT-S also exhibited a smaller reduction following vasectomy. No significant changes were observed in the levels of the other steroids studied. The results are interpreted as suggesting that under normal conditions the bulk of the steroids present in the ejaculate is contributed by the accessory reproductive organs. In addition, a significant part of the DHT and T reach the ejaculate together with the sperm directly from the testis and/or epididymis. It is suggested that the normal steroid levels in seminal plasma established in this investigation could serve as reference values in the hormonal evaluation of male infertility. abstract_id: PUBMED:16644001 Vasectomy effect on canine seminal plasma biochemical components and their correlation with seminal parameters. 
Three semen samples were collected at 48 h intervals from 20 mature research dogs previously conditioned to manual semen collection. Vasectomy was performed in all dogs, and 15 days after surgery, another three ejaculates were similarly collected. The semen was evaluated, and centrifuged to obtain seminal plasma for measurement of pH, and concentrations of total proteins (TP), total chlorides (Cl), calcium (Ca), potassium (K), and sodium (Na). The seminal plasma protein profile was evaluated by SDS-PAGE; molecular weights and the integrated optical density (IOD) of each band were estimated. There was a negative correlation between K concentration and progressive motility (r = -0.49, P = 0.027), sperm vigor (r = -0.60, P = 0.0053), and plasma integrity, evaluated by both the hypo-osmotic swelling test (r = -0.50, P = 0.026) and a fluorescent stain (r = -0.45, P = 0.046). Positive correlations between Na and K pre- and post-vasectomy (r = 0.88, P < 0.001; r = 0.56, P < 0.01, respectively) were verified. There were a total of 37 bands pre-vasectomy and 35 post-vasectomy (range, 100.6-3.6 kDa). Bands B9 and B13 (42.6 and 29.2 kDa) were not present post-vasectomy. The IOD of band B3 (73.5 kDa) was higher (P = 0.03) pre-vasectomy, compared to post-vasectomy; conversely, the IODs of bands B29 and B37 (7.8 and 3.6 kDa) increased (P = 0.026 and 0.047). Pre-vasectomy, there was a positive correlation (r = 0.49, P = 0.029) between band B37 band (3.6 kDa) and the Na:K ratio. In conclusion, K appeared to be involved in sperm motility in dogs and could be a tool to evaluate sperm function. The prostate contributed several elements to canine seminal plasma. Vasectomy changed Ca concentrations and the protein profile of the seminal plasma. Further studies must be performed to clarify the function of these elements on the in vivo fertility of dogs. abstract_id: PUBMED:32164161 Incidence of Post-Vasectomy Pain: Systematic Review and Meta-Analysis. This is the first systematic review and meta-analysis to ascertain incidences of post-vasectomy pain following traditional scalpel, or non-scalpel vasectomy. Electronic databases PubMed, Embase and PsycINFO were searched up to 1 July 2019 for peer-reviewed articles recording post-vasectomy pain. We identified 733 publications, screened 559 after removal of duplicates and excluded 533. Of the remaining 26 full-text articles, 8 were excluded with reasons, leaving 18 for detailed analyses. Meta-analysis was performed on 25 separate datasets (11 scalpel, 11 non-scalpel, 3 other/combined). Study follow-up ranged from 2 weeks to 37 years and sample sizes from 12 to 723 patients. The overall incidence of post-vasectomy pain was 15% (95% CI 9% to 25%). The incidences of post-vasectomy pain following scalpel and non-scalpel techniques were 24% (95% CI 15% to 36%) and 7% (95% CI 4% to 13%), respectively. Post-vasectomy pain syndrome occurred in 5% (95% CI 3% to 8%) of subjects, with similar estimates for both techniques. We conclude that the overall incidence of post-vasectomy pain is greater than previously reported, with three-fold higher rates of pain following traditional scalpel, compared to non-scalpel vasectomy, whereas the incidence of post-vasectomy pain syndrome is similar. abstract_id: PUBMED:28725612 Vasectomy reversal for post-vasectomy pain syndrome. Post-vasectomy pain syndrome (PVPS) is a rare, but devastating outcome following vasectomy. 
Given the widespread utilization of vasectomy for permanent contraception, with more than 500,000 procedures performed annually in the United States, it can be a significant challenge for both patients and providers. Vasectomy reversal is a surgical option for men who fail conservative or medical management. Despite improvements in technique, vasectomy carries some inherent risks, making pre-procedure counseling regarding the risks of PVPS paramount. Chronic post-operative pain, or PVPS, occurs in 1-2% of men undergoing the procedure. This review will examine the utility of vasectomy reversal as a means of addressing PVPS.
abstract_id: PUBMED:33903633 Systematic STR analysis of old post-vasectomy seminal fluid stains to examine evidence stored for 16 years. Understanding stored evidence and its insertion into genetic databases is important in forensic investigations. Blood and pre- and post-vasectomy semen from 90 fertile male individuals, aged 24 to 45, were donated for research after informed consent. The semen samples were stored in the form of 30 µL stains on cotton fabric for 16 years at room temperature in the laboratory, as were the post-vasectomy seminal fluid stains, which were prepared after microscopy had confirmed the absence of spermatozoa. The pre-vasectomy stains contained mainly haploid spermatozoa, and the post-vasectomy stains contained diploid epithelial cells and leukocytes. DNA extraction was performed with magnetic resin, followed by quantification and analysis of DNA degradation. In this study we analyzed the genetic profiles of DNA from stains on cotton fabric using two Short Tandem Repeat multiplex systems, the PowerPlex Fusion 6C and Y23. Electrophoresis was performed on a 3500xL and analyzed using the Gene Mapper ID-X software. The genetic profiles of the 90 individuals were fully amplified in pre-vasectomy and partially amplified in post-vasectomy stain samples, using both multiplex systems. The results provide information about 0.25 cm² semen stains on cotton fabric from 90 individuals, correlating concentration, degradation, and allele analysis. The study also provides an understanding of the cells present in semen stains and of the implications of individual factors. In the post-vasectomy stain samples, the small quantity of DNA was one of the limiting factors, in addition to degradation. Considering that all evaluations were carried out in a laboratory that holds a quality control certificate and is audited as part of the national genetic profile database, the results were very consistent. Many aspects of the semen samples stored in the form of stains on cotton fabric have been clarified. The performance and sensitivity of the amplification systems used in the genotyping of azoospermic individuals were assessed. Conclusions: Genetic profiles were satisfactorily amplified in pre-vasectomy stain samples, and partially amplified in post-vasectomy stain samples, stored for almost two decades at room temperature in a tropical country. The small amount of DNA was one of the limitations in post-vasectomy stain samples, in addition to degradation and fragmentation. There are no publications in the literature on PowerPlex Fusion 6C and Y23 analyses using blood, sperm, and seminal fluids of the same individual, much less in the form of stains. This study can serve as a benchmark for the tracking analyses of stored samples. In addition, it anticipates a few social issues related to the analysis of post-vasectomy samples in forensic cases, most notably sex crimes.
abstract_id: PUBMED:23539611 microRNA signature is altered in both human epididymis and seminal microvesicles following vasectomy. Study Question: Does vasectomy impact microRNA (miRNA) expression in the epididymis and seminal microvesicles (SMVs) in a non-reversible manner? Summary Answer: The miRNA signature in the epididymis and SMVs is altered by vasectomy and only partially restored after vasovasostomy surgery. What Is Known Already: Vasectomy modifies the epididymal transcriptome and triggers non-reversible changes that affect sperm function. Some vasovasostomized men experience a reduced fertility outcome. Study Design, Size, Duration: Human epididymides provided by three control donors and three vasectomized donors were collected under artificial circulation through Transplant Quebec (Quebec, QC, Canada). Semen from three normal, three vasectomized and five vasovasostomized donors was provided by the andrology clinic. Participants/materials, Setting, Methods: Epididymides and semen were collected from donors between 26 and 50 years of age with no known pathologies that could potentially affect reproductive function. After RNA extraction, epididymal miRNA profiles were determined by microarray (Affimetrix), compared by ANOVA and confirmed by real-time PCR. The correlation between miRNA and gene expression profiles was investigated by an integrated genomic approach. miRNA signature from purified SMVs was established by microarray. Main Results And The Role Of Chance: Vasectomy significantly modified the expression of epididymal miRNAs, which were mainly correlated with mRNAs for transcription factors. Vasectomy also impacted the detection of 118 of the miRNAs found in SMVs from normal donors, including miRNAs of epididymal origin contained in epididymosomes. Among seminal miRNAs changes, 52 were reversible according to the expression levels of miRNA in the semen samples from vasovasostomized donors, while 66 were non-reversible. Limitations, Reasons For Caution: Identification of miRNAs responsive to vasectomy was determined with a limited number of samples due to the low number of human specimen samples available. Wider Implications Of The Findings: According to the critical role played by miRNAs in all biological systems, we believe that miRNA changes occurring upstream and downstream of the vasectomy site may be related to the reduced fertility outcome reported following surgically successful vasectomy reversal. This study may provide new tools for predicting vasovasostomy success and open avenues for the identification of the molecular players involved in male infertility. abstract_id: PUBMED:569891 Effect of vasectomy on thyroid function. Long-Evans rats vasectomized unilaterally or bilaterally at 2 months of age showed marked reductions in weight of testes, seminal vesicles and prostate glands 2 and 12 months post-vasectomy. Thyroid weight, thyroid 131I uptake, serum PB131I, circulating levels of serum T4 and chromatographic analysis of thyroid hydrolysates revealed no alterations due to vasectomy. These findings indicate that vasectomy, unilateral or bilateral, and whether of 2 or 12 months duration, did not exert an effect on thyroid function. Answer: Yes, the timing of post-vasectomy seminal analysis is important. A study reviewing the practice in two hospitals with differing protocols for the timing of seminal analysis after vasectomy found that the complete disappearance of spermatozoa after vasectomy takes longer than is generally believed. 
The study suggested that seminal analysis 6 months after vasectomy is cost-effective and in the patient's interest, as it allows for adequate counseling and ensures the complete clearance of spermatozoa (PUBMED:9523670). This is supported by the finding that the ratio between the concentration of free cholesterol and esterified cholesterol is constant six months after vasectomy, indicating that this time frame may be significant for assessing post-vasectomy changes in seminal plasma (PUBMED:1563639). Moreover, providing a post-vasectomy semen analysis (PVSA) cup at the time of vasectomy rather than post-operatively has been shown to improve compliance with PVSA completion rates, suggesting that the timing of when patients are given the tools for analysis can also impact the effectiveness of post-vasectomy follow-up (PUBMED:38404558). In conclusion, the timing of post-vasectomy seminal analysis is crucial to ensure that spermatozoa have been completely cleared from the semen, which is essential for the effectiveness of the vasectomy as a contraceptive method. Additionally, the timing of when patients receive materials for PVSA can influence their compliance with completing the analysis.
Instruction: Is overweight in patients with type 1 diabetes in the puberty a problem? Abstracts: abstract_id: PUBMED:14575618 Is overweight in patients with type 1 diabetes in the puberty a problem? Unlabelled: The aim of the study was to assess the body mass index (BMI) of adolescents with type 1 diabetes treated with different methods of insulin therapy, as well as the correlation of the BMI with the age of the patients, the duration of diabetes therapy, and the metabolic control. Material And Methods: The examinations included 205 patients aged 14 to 18 years (X 16.8). The examined group consisted of 112 girls (54.5%) and 93 boys (45.5%). In all the patients the body mass index (BMI) was calculated, and HbA1c, total cholesterol, HDL, LDL, triglycerides, and the duration and method of therapy of diabetes were determined. The patients were divided into 4 groups. Groups 1 and 3 included patients treated with insulin in multiple daily doses (4-5 doses). Group 1 included 49 girls and group 3 included 36 boys. In groups 2 and 4 there were patients treated with intensive therapy with insulin analogs (Humalog, NOVO Rapid) and NPH insulin. In group 2 there were 63 girls and in group 4, 57 boys. Results: In the groups of boys, statistically significantly higher levels of HbA1c were observed in those treated with multiple daily doses (gr. 3), while the body mass index was higher in boys treated with insulin analogs (gr. 4) [p<0.05 vs p<0.05]. Conclusions: 1. The therapy of type 1 diabetes in puberty with multiple daily doses as well as with the intensive method is not a risk factor for obesity. 2. Normal weight reflected proper diabetic metabolic control and good education about a rational diet and optimal insulin dose. 3. In patients in whom BMI is over the normal range, it is urgently necessary to verify the therapeutic management. 4. In patients with type 1 diabetes, strict control of body mass index and growth velocity during each outpatient visit is necessary. abstract_id: PUBMED:24443957 Puberty as an accelerator for diabetes complications. Much is written about how difficult it is to deal with diabetes during adolescence, and rightly so. Less is understood as to how puberty may be an accelerator of vascular complications. With the increase in childhood diabetes, complication risks need to be revisited in relation to puberty and the secular increase in adiposity. Recent data suggest greater risk for severe vascular complications in those with diabetes during puberty, compared with young people who develop diabetes after puberty. It is also widely recognized that higher hemoglobin A1c (HbA1c) results are often seen during the pubertal period. This article will review complication outcomes in relation to puberty and examine mechanisms by which puberty may modify risk above glycemic exposure, and possible gender disparities in the risk of complications in the adolescent period. abstract_id: PUBMED:21446926 Prospective evaluation of risk factors for the development of islet autoimmunity and type 1 diabetes during puberty--TEENDIAB: study design. Type 1 diabetes (T1D) has a peak incidence in childhood and adolescence. The TEENDIAB study investigates the period of puberty and adolescence in the natural course of T1D development. Evidence suggests that the immune phenotype of children developing autoimmunity during puberty and adolescence differs from that in childhood.
We hypothesize that these differences reflect heterogeneity in the genetic and environmental factors that influence the development of autoimmunity in puberty versus early infancy. TEENDIAB is an observational cohort study that enrols and follows 1500 children aged 8-12 and who have a first degree relative with T1D to test this hypothesis. Data collection and analyses will focus on determining the phenotype of islet autoimmunity, genotypes of T1D- and type 2 diabetes-associated genes, insulin resistance, and β-cell function, growth, obesity, and physical exercise. The findings of this study will increase the understanding of pathogenetic mechanisms behind the increasing diabetes incidence in youth and the impact of obesity on diabetes development in this age period. abstract_id: PUBMED:1600705 Body fat and overweight among children and adolescents with diabetes mellitus. Body composition was assessed in 68 children and adolescents with diabetes mellitus. Body fat was measured by skinfold thickness and the results were compared with those obtained by bioelectrical impedance methods (r = 0.76, p less than 0.001). Percentage body fat from skinfold thickness estimations was 12-38%. Girls appeared to be most at risk of obesity during puberty with mean body fat 31%, significantly greater than in prepubertal girls (22%, p = 0.001) and pubertal boys (21%, p less than 0.001). Body fat increased progressively through puberty in girls, with those in late puberty (stages 4 and 5) having significantly more percentage body fat than those in earlier puberty (stages 2 and 3, p less than 0.001). Those in late puberty also received significantly more insulin per kilogram body weight (p = 0.05) and had higher HbA1 levels (p less than 0.05) than prepubertal girls. abstract_id: PUBMED:9628271 Contributions of age, gender and insulin administration to weight gain in subjects with IDDM. Overweight in insulin-dependent diabetes mellitus (IDDM) has been repeatedly reported, especially in girls during adolescence. Potential pathophysiologic factors include tight metabolic control, insulin dose, treatment regimen, puberty and genetics. A standardized data-base from all IDDM patients treated at our institution was evaluated. IDDM patients with hypothyroidism or celiac's disease as well as all records from the first year of diabetes were excluded, resulting in a total of 427 patients (2454 patient-years) available for analysis. BMI and SD-score for BMI based on the Zurich longitudinal growth study were evaluated. Standardized BMI was higher in pubertal children ( + 1.07+/-0.06) compared to prepubertal children (+ 0.68+/-0.07; p < 0.002). This increase was present both for boys and girls. Increasing overweight during puberty was found irrespective of the age at diagnosis of diabetes (prepubertal or pubertal). The daily dose of insulin and the long-term metabolic control had only a minor impact on the development of overweight. In contrast, in pubertal children, SDS-BMI was significantly higher in patients on intensified insulin regimens (3 or 4 daily injections) compared to patients with 2 injections (p < 0.05). These data demonstrate that both boys as well as girls with IDDM develop overweight during puberty. Multiple injection therapy, not daily dose of insulin or the level of metabolic control achieved, was the main predictor of weight gain. This finding may be explained by increased caloric intake due to the flexibility allowed by intensified treatment. 
abstract_id: PUBMED:10652800 Pubertal growth, final height and weight gain in girls diagnosed with IDDM during pre-pubertal period Objective: Girls diagnosed of type 1 diabetes mellitus during the prepubertal stage may present reduced longitudinal growth and have a tendency to be overweight after puberty starts. Our objective was to evaluate the growth, pubertal growth spurt, final height and weight gain of diabetic girls diagnosed of insulin dependent diabetes during the prepubertal period. Patients And Methods: We evaluated retrospectively the longitudinal growth and body mass index (BMI) of 17 girls diagnosed of type 1 diabetes before puberty, from the time of diagnosis until final height. The effects of age at diagnosis, evolution time, insulin therapy (conventional or intensified), insulin dose and metabolic control on growth were analyzed. Results: The final height attained was 161 +/- 4 cm, which is within the target height range. The height at diagnosis (0.31 +/- 0.71) was normal in relationship to the reference population, showing a significant reduction upon reaching the final height (-0.18 +/- 0.77). The BMI was significantly increased when it was compared at final height (0.99 +/- 0.79) with the index obtained one year after the onset of diabetes (0.04 +/- 0.63). The insulin therapy regimen did not affect the evolution of the patients and no significant relationship existed between the age at diagnosis, duration of diabetes, daily insulin dosage or metabolic control and height or BMI. Conclusions: The final height of type 1 diabetic girls is reduced and adolescent diabetic girls have a tendency to be overweight. Metabolic control (for the range observed in these patients) or insulin dosage does not seem to affect the growth of diabetic girls. abstract_id: PUBMED:15601360 Ponderal gain, waist-to-hip ratio, and pubertal development in girls with type-1 diabetes mellitus. Objectives: We assessed pubertal development, height, weight, and waist-to-hip ratio (WHR), an index of central adiposity during puberty, in girls with type-1 diabetes mellitus (T1DM), compared to a contemporary control group. Methods: Pubertal development, weight, height and WHR were studied in 100 pubertal girls with T1DM, and were compared to a control group of 576 normal girls (C), recruited from schools with a similar socioeconomic level and ethnicity. The age of onset of various pubertal stages was estimated by using probit analysis. Results: Breast Tanner stage 2 (BT2) began at 8.89 +/- 0.11 and 9.10 +/- 0.28 yr in C and T1DM, respectively. A delay of 6 months was observed in T1DM for BT3 and BT4 (p < 0.05). Menarche occurred 6 months later in girls with T1DM (p = 0.03). WHR decreased during puberty in C (p < 0.001), but not in T1DM. In girls with T1DM, the body mass index standard deviation score (BMI-SDS) increased throughout puberty (p < 0.001), but it was stable in C. In T1DM girls, BMI-SDS, but not hemoglobin A1c levels (HbA1c), was a significant determinant of pubertal development. Final height was similar in T1DM and C. Conclusions: Pubertal development in girls with T1DM occurred earlier than described in historical cohorts, but a later onset of menarche and final stages of breast development were observed. The increase in BMI-SDS and the stability of WHR in girls with T1DM during puberty suggest that this period may be critical for determining later weight gain and body composition in adult women with this condition. 
abstract_id: PUBMED:8927959 Eating behavior, diabetes and weight control in girls with insulin-dependent diabetes mellitus (type 1) The object of the study was, first, to investigate whether girls suffering from insulin-dependent diabetes mellitus (IDDM) are more overweight than an age- and puberty-matched control group and, second, to study the impact of diet, calorie intake and pubertal stage on body mass index (BMI), body weight and fat content. We studied 43 girls with IDDM and controls, divided into two age groups: group 1 (n = 21; 10-13 years) and group 2 (n = 22; > 13 years, 13.1-20.7 years). Overweight was assessed by BMI, relative weight and body fat from skinfold thickness. Food consumption data were collected over a one week food and drink protocol. The diabetic girls, particularly those after puberty, were more overweight than the controls. Although the calorie intake was increased compared with their peers, the proportions of energy derived from protein, fat and carbohydrate were as recommended by the American and Swiss Diabetes Association. Most importantly, the recommended proportion of saturated fatty acids (< 10%) was not achieved by either the diabetic patients or the control girls. Insulin dose/unit body weight correlated with BMI and fat content. Therefore, the increased insulin dose may be responsible for the relatively increased energy intake and, in addition, increased intake of saturated fatty acid which has been related to poor metabolic control and obesity. The food intake of the control girls was identical to that reported in adults by the Swiss Government in 1991 in the Third Report on Food Consumption. abstract_id: PUBMED:33058582 Polycystic ovary syndrome in obese or type 1 diabetic (T1D) adolescent girls Polycystic ovary syndrome (PCOS) is frequent during adolescence (prevalence ≈ 6 %), and the prevalence increases in obese or type 1 diabetic (T1D) adolescent girls. During puberty, PCOS diagnosis is difficult because of the overlap with some pubertal physiologic signs. The 2017 international consortium suggests two required diagnostic criteria: persistent menstrual disturbances and hyperandrogenism. PCOS physiopathology is complex, including interactions between genetic, epigenetic factors, primary ovarian abnormalities, neuroendocrine alterations, hormonal and metabolic factors. Insulin seems to have a central place in obese or T1D adolescent girls. The treatment is still debated and should be monitored according to the main symptoms. abstract_id: PUBMED:12878789 Insulin resistance in newly diagnosed type 1 diabetic children and adolescents The aim of this study was to estimate insulin resistance in newly diagnosed type 1 diabetic children and adolescents and to analyse the correlation between insulin secretion and impaired insulin action. 37 patients with type 1 diabetes mellitus aged 12.9 +/- 3 years were included in the study. Duration of diabetes was 6 months. Euglycemic-hyperinsulinemic clamp was performed to estimate insulin resistance. Glucose disposal rate was calculated as index M - mg/kg/min. Insulin secretion was measured by glucagon test. The serum level of cholesterol, HDL-Ch, triglycerides and HbA1 was examined. The height, weight, skinfold, waist and hip circumference were measured. Body mass index and waist/hip ratio were calculated. In children and adolescents with type1 diabetes mellitus insulin resistance of various degree was observed. The glucose disposal rate (M index) was 3.2 - 11.8 mg/kg/min., mean 7.08 +/- 2.5 mg/kg/min. 
The insulin resistance depended on patients' age (r= - 0.3, p<0.05,) and the stage of puberty. There was no difference in insulin secretion in insulin-resistant and insulin-sensitive subjects. The insulin resistance was related to BMI (r=-0.33; p=0.04), and with skinfold thickness (r=-0.59; p=0.001). In insulin-sensitive children the insulin dose was lower (0.45: 0.67; p<0.02). No influence of insulin resistance on metabolic control was observed. Insulin resistance is observed in newly diagnosed type 1 diabetic children and adolescents. No relationship between insulin secretion and impaired insulin action was found. Insulin resistance was greater during III Tanner stage of puberty and in obese children. Answer: Overweight in patients with type 1 diabetes during puberty is indeed a problem. Adolescents with type 1 diabetes treated with different methods of insulin therapy have been observed to have varying body mass index (BMI) levels, and those with higher BMI may require a reevaluation of their therapeutic management (PUBMED:14575618). Puberty is recognized as a period when higher hemoglobin A1c (HbA1c) results are often seen, and it may accelerate the risk of severe vascular complications in individuals with diabetes (PUBMED:24443957). Girls with type 1 diabetes diagnosed before puberty are at risk of being overweight after puberty starts, and their final height may be reduced (PUBMED:10652800). Additionally, pubertal girls with type 1 diabetes have been found to have increased BMI and waist-to-hip ratio (WHR), suggesting that puberty may be critical for determining later weight gain and body composition in adult women with this condition (PUBMED:15601360). Insulin resistance has also been observed in newly diagnosed type 1 diabetic children and adolescents, and it is greater during the third Tanner stage of puberty and in obese children (PUBMED:12878789). Furthermore, girls with type 1 diabetes, especially those after puberty, were found to be more overweight than controls, and insulin dose per unit body weight correlated with BMI and fat content (PUBMED:8927959). These findings indicate that overweight during puberty in patients with type 1 diabetes is a concern that needs to be addressed through careful management of therapy and lifestyle interventions.
Instruction: Area variation in recreational cycling in Melbourne: a compositional or contextual effect? Abstracts: abstract_id: PUBMED:18791047 Area variation in recreational cycling in Melbourne: a compositional or contextual effect? Objective: To examine whether compositional and/or contextual area characteristics are associated with area socioeconomic inequalities and between-area differences in recreational cycling. Setting: The city of Melbourne, Australia. Participants: 2349 men and women residing in 50 areas (58.7% response rate). Main Outcome Measure: Cycling for recreational purposes (at least once a month vs never). Design: In a cross-sectional survey participants reported their frequency of recreational cycling. Objective area characteristics were collected for their residential area by environmental audits or calculated with Geographic Information Systems software. Multilevel logistic regression models were performed to examine associations between recreational cycling, area socioeconomic level, compositional characteristics (age, sex, education, occupation) and area characteristics (design, safety, destinations or aesthetics). Results: After adjustment for compositional characteristics, residents of deprived areas were less likely to cycle for recreation (OR 0.66; 95% CI 0.43 to 1.00), and significant between-area differences in recreational cycling were found (median odds ratio 1.48 (95% credibility interval 1.24 to 1.78). Aesthetic characteristics tended to be worse in deprived areas and were the only group of area characteristics that explained some of the area deprivation differences. Safety characteristics explained the largest proportion of between-area variation in recreational cycling. Conclusion: Creating supportive environments with respect to safety and aesthetic area characteristics may decrease between-area differences and area deprivation inequalities in recreational cycling, respectively. abstract_id: PUBMED:35047348 The emergence of recreational cycling in Hanoi during the Covid-19 pandemic. Objectives: This study examined recreational cycling in Hanoi, Vietnam, with a particular focus on changes brought about by the Covid-19 pandemic. The research questions were: (1) Has recreational cycling increased during the Covid-19 pandemic?; (2) If so, what factors have led to the increase?; and (3) What are some of the typical traits and behaviors of recreational cyclists? Method: Face-to-face surveys of 356 recreational cyclists were carried out in March-April 2021 in four areas of inner Hanoi. The survey data were modelled through two binary and one ordinal logit regression. Results: Recreational cycling has become much more popular in Hanoi during the Covid-19 pandemic, with early morning being the preferred time for this activity (to avoid heavy traffic). A quarter of the participants had starting cycling recreationally since the first lockdown in April 2020, and about three quarters reported having noticed an increase in cycling activity around them. Nearly half of the participants cycled regularly (more than four times per week). The groups that were most likely to have taken up cycling for recreations during the pandemic included men, those living with children in the home, and those not working or studying. Age and income were not significant. 
People have been embracing cycling as a way to maintain or increase physical activity levels, and to safely socialize at the time when social gatherings are limited, walking is prohibited in some zones, and social distancing rules apply everywhere. Conclusion: To sustain the growth in recreational cycling and widen the time-window for this activity while ensuring everyone's safety, Hanoi's planning and public health sectors should join forces to adopt a combination of 'hard' and 'soft' measures. The authors recommend creating multi-use paths for micromobility modes, launching a public bikesharing scheme, regulating motorised modes, and social marketing that promotes recreational cycling as trendy. abstract_id: PUBMED:36531519 The role of neighbourhood design in cycling activity during COVID-19: An exploration of the Melbourne experience. COVID-19 restrictions imposed significant changes on human mobility patterns, with some studies finding significant increases or decreases in cycling. However, to date there is little understanding on how the neighbourhood-level built environment influenced cycling behaviour during the COVID-19 restrictions. As different neighbourhood have different built environment characteristics, it is possible that cycling trends varied across different built environment settings. We aimed to answer this question by examining recreational cycling during different stages of lockdown in Melbourne, Australia. We compared self-reported recreational cycling frequency (weekly) data from 1344 respondents between pre-COVID and two different stages in lockdown. We tested whether the built environment of their residential neighbourhood and different sociodemographic characteristics influenced leisure cycling rates and whether the effect of these factors varied between different stages of COVID-19 restriction. We found that cycling declined significantly during the two stages of COVID-19 lockdown. Cycling infrastructure density and connectivity are two built environment factors that had a significant effect on limiting the decline in leisure cycling during the pandemic. Furthermore, men and younger people had higher cycling rates in comparison to other groups, suggesting that restrictions on indoor activities and travel limits were not enough to encourage women or older people to cycle more during the pandemic. abstract_id: PUBMED:34311793 Placebo response and effect in randomized clinical trials: meta-research with focus on contextual effects. Background: Contextual effects (i.e., placebo response) refer to all health changes resulting from administering an apparently inactive treatment. In a randomized clinical trial (RCT), the overall treatment effect (i.e., the post-treatment effect in the intervention group) can be regarded as the true effect of the intervention plus the impact of contextual effects. This meta-research was conducted to examine the average proportion of the overall treatment effect attributable to contextual effects in RCTs across clinical conditions and treatments and explore whether it varies with trial contextual factors. Methods: Data was extracted from trials included in the main meta-analysis from the latest update of the Cochrane review on "Placebo interventions for all clinical conditions" (searched from 1966 to March 2008). Only RCTs reported in English having an experimental intervention group, a placebo comparator group, and a no-treatment control group were eligible. Results: In total, 186 trials (16,655 patients) were included. 
On average, 54% (0.54, 95%CI 0.46 to 0.64) of the overall treatment effect was attributable to contextual effects. The contextual effects were higher for trials with blinded outcome assessor and concealed allocation. The contextual effects appeared to increase proportional to the placebo effect, lower mean age, and proportion of females. Conclusion: Approximately half of the overall treatment effect in RCTs seems attributable to contextual effects rather than to the specific effect of treatments. As the study did not include all important contextual factors (e.g., patient-provider interaction), the true proportion of contextual effects could differ from the study's results. However, contextual effects should be considered when assessing treatment effects in clinical practice. Trial Registration: PROSPERO CRD42019130257 . Registered on April 19, 2019. abstract_id: PUBMED:38030821 The effect of gaze information associated with the search items on contextual cueing effect. Previous research on the mechanisms of contextual cueing effect has been inconsistent, with some researchers showing that the contextual benefit was derived from the attentional guidance whereas others argued that the former theory was not the source of contextual cueing effect. We brought the "stare-in-the-crowd" effect that used pictures of gaze with different orientations as stimuli into a traditional contextual cueing effect paradigm to investigate whether attentional guidance plays a part in this effect. We embedded the letters used in a traditional contextual cueing effect paradigm into the gaze pictures with direct and averted orientation. In Experiment 1, we found that there was a weak interaction between the contextual cueing effect and the "stare-in-the-crowd" effect. In Experiments 2 and 3, we found that the contextual cueing effect was influenced differently when the direct gaze was combined with the target or distractors. These results suggested that attentional guidance played an important role in the generation of a contextual cueing effect and the direct gaze had a special impact on visual search. To summarize the three findings, the direct gaze on target location facilitates the contextual cueing effect, and such an effect is even greater when we compared condition with the direct gaze on target location with condition with the direct gaze on distractor location (Experiments 2 and 3). Such an effect of gaze on a contextual cueing effect is manifested even when the effect of gaze ("stare-in-the-crowd" effect) was absent in the New configurations (search trials without learning). abstract_id: PUBMED:27014175 Contextual Effect of Wealth on Independence: An Examination through Regional Differences in China. The current study disentangled two different effects of wealth on psychological tendency toward independence: one is an effect exerted at the individual level (i.e., being rich) and the other one is a contextual effect (i.e., being surrounded by rich individuals). Past research has found a stronger tendency toward independence among people in economically developed societies. This association has often been explained as a result of a greater amount of choices, and thus more opportunities to express individuality that wealth affords individuals. 
In addition to this individual-level process, theories in cultural psychology imply that the wealth-independence link also reflects social processes: living in a rich society, regardless of one's own wealth, promotes independence (a contextual effect of wealth on independence). Through a large-scale survey in China, using multilevel analyses, we found that wealth had both an individual-level effect and a contextual effect on independence as well as related psychological tendencies (influence orientation and generalized trust), suggesting that individuals are more likely to be independent with greater personal wealth and when surrounded by wealthy others. Possible processes through which independence is promoted by living in a wealthy area are discussed. abstract_id: PUBMED:28912739 Contextual Cueing Effect in Spatial Layout Defined by Binocular Disparity. Repeated visual context induces higher search efficiency, revealing a contextual cueing effect, which depends on the association between the target and its visual context. In this study, participants performed a visual search task where search items were presented with depth information defined by binocular disparity. When the 3-dimensional (3D) configurations were repeated over blocks, the contextual cueing effect was obtained (Experiment 1). When depth information was in chaos over repeated configurations, visual search was not facilitated and the contextual cueing effect was largely crippled (Experiment 2). However, when the search items were given a tiny random displacement in the 2-dimensional (2D) plane but the depth information was kept constant, the contextual cueing was preserved (Experiment 3). We concluded that the contextual cueing effect was robust in the context provided by 3D space with stereoscopic information, and more importantly, the visual system prioritized stereoscopic information in learning of spatial information when depth information was available. abstract_id: PUBMED:33210555 A field study investigating sensory manifestations in recreational female cyclists using a novel female-specific cycling pad. This randomised controlled field study aimed to design a female-specific cycling pad with reduced padding in the crotch area (half-pad) and test its effects on self-reported sensory manifestations in comparison with full-padded cycling bib shorts. Recreational female road cyclists (n = 183) participated (divided into two groups). Self-reported sensory manifestations were collected six times over 12 weeks. Sitting discomfort, wetness perception, thermal and texture sensation, and wear discomfort decreased over time for the crotch and sitting-bones areas in both groups. Irritation and tenderness in the crotch area also decreased over time in both groups. Irritation and tenderness in the sitting-bones area were only higher at week two in the half-pad compared with the full-pad group. Cycling with the half-padded shorts compared with the full-padded ones had no negative effects on sensory manifestations besides the observed transient change at week two. This suggests that foam thickness in the crotch area can be reduced in female-specific cycling pads. Practitioner's Summary: Road cycling might result in discomfort and non-traumatic injuries in the female genital area. This field study compares two different cycling pads, a half-pad and a full-pad, over a 12-week period among female recreational road cyclists. Reducing the foam thickness in the crotch area of the pad does not change sensory manifestations, i.e.
discomfort, wetness perception, texture-, and thermal-sensation as well as wear discomfort. Abbreviations: CS-Q: online Cycling bib Shorts Questionnaire; VADER: Valence Aware Dictionary and sEntiment Reasoner. abstract_id: PUBMED:35027042 An investigation of cancer survival inequalities associated with individual-level socio-economic status, area-level deprivation, and contextual effects, in a cancer patient cohort in England and Wales. Background: People living in more deprived areas of high-income countries have lower cancer survival than those in less deprived areas. However, associations between individual-level socio-economic circumstances and cancer survival are relatively poorly understood. Moreover, few studies have addressed contextual effects, where associations between individual-level socio-economic status and cancer survival vary depending on area-based deprivation. Methods: Using 9276 individual-level observations from a longitudinal study in England and Wales, we examined the association with cancer survival of area-level deprivation and individual-level occupation, education, and income, for colorectal, prostate and breast cancer patients aged 20-99 at diagnosis. With flexible parametric excess hazard models, we estimated excess mortality across individual-level and area-level socio-economic variables and investigated contextual effects. Results: For colorectal cancers, we found evidence of an association between education and cancer survival in men with Excess Hazard Ratio (EHR) = 0.80, 95% Confidence Interval (CI) = 0.60;1.08 comparing "degree-level qualification and higher" to "no qualification" and EHR = 0.74 [0.56;0.97] comparing "apprenticeships and vocational qualification" to "no qualification", adjusted on occupation and income; and between occupation and cancer survival for women with EHR = 0.77 [0.54;1.10] comparing "managerial/professional occupations" to "manual/technical," and EHR = 0.81 [0.63;1.06] comparing "intermediate" to "manual/technical", adjusted on education and income. For breast cancer in women, we found evidence of an association with income (EHR = 0.52 [0.29;0.95] for the highest income quintile compared to the lowest, adjusted on education and occupation), while for prostate cancer, all three individual-level socio-economic variables were associated to some extent with cancer survival. We found contextual effects of area-level deprivation on survival inequalities between occupation types for breast and prostate cancers, suggesting wider individual-level inequalities in more deprived areas compared to least deprived areas. Individual-level income inequalities for breast cancer were more evident than an area-level differential, suggesting that area-level deprivation might not be the most effective measure of inequality for this cancer. For colorectal cancer in both sexes, we found evidence suggesting area- and individual-level inequalities, but no evidence of contextual effects. Conclusions: Findings highlight that both individual and contextual effects contribute to inequalities in cancer outcomes. These insights provide potential avenues for more effective policy and practice. abstract_id: PUBMED:29163306 Quantifying Contextual Interference and Its Effect on Skill Transfer in Skilled Youth Tennis Players. The contextual interference effect is a well-established motor learning phenomenon. 
Most of the contextual interference effect literature has addressed simple skills, while less is known about the role of contextual interference in complex sport skill practice, particularly with respect to skilled performers. The purpose of this study was to assess contextual interference when practicing the tennis serve. Study 1 evaluated tennis serve practice of nine skilled youth tennis players using a novel statistical metric developed specifically to measure between-skill and within-skill variability as sources of contextual interference. This metric highlighted that skilled tennis players typically engaged in serve practice that featured low contextual interference. In Study 2, 16 skilled youth tennis players participated in 10 practice sessions that aimed to improve serving "down the T." Participants were stratified into a low contextual interference practice group (Low CI) and a moderate contextual interference practice group (Moderate CI). Pre- and post-tests were conducted 1 week before and 1 week after the practice period. Testing involved a skill test, which assessed serving performance in a closed setting, and a transfer test, which assessed serving performance in a match-play setting. No significant contextual interference differences were observed with respect to practice performance. However, analysis of pre- and post-test serve performance revealed significant Group × Time interactions. The Moderate CI group showed no change in serving performance (service displacement from the T) from pre- to post-test in the skill test, but did display improvements in the transfer test. Conversely, the Low CI group improved serving performance (service displacement from the T) in the skill test but not the transfer test. Results suggest that the typical contextual interference effect is less clear when practicing a complex motor skill, at least with the tennis serve skill evaluated here. We encourage researchers and applied sport scientists to use our statistical metric to measure contextual interference. Answer: The study on area variation in recreational cycling in Melbourne aimed to determine whether differences in recreational cycling were due to compositional characteristics of the individuals (such as age, sex, education, occupation) or contextual characteristics of the areas (such as design, safety, destinations, or aesthetics) (PUBMED:18791047). The results indicated that after adjusting for compositional characteristics, residents of deprived areas were less likely to cycle for recreation. Aesthetic characteristics, which were generally worse in deprived areas, partially explained the differences in recreational cycling related to area deprivation. Safety characteristics accounted for the largest proportion of between-area variation in recreational cycling. This suggests that the variation in recreational cycling in Melbourne is influenced more by contextual effects related to the characteristics of the areas rather than just the composition of the residents (PUBMED:18791047).
Instruction: Outcomes of resection of giant hepatocellular carcinoma in a tertiary institution: does size matter? Abstracts: abstract_id: PUBMED:36896301 Does size matter for resection of giant versus non-giant hepatocellular carcinoma? A meta-analysis. Background: Research on long-term survival after resection of giant (≥ 10 cm) and non-giant hepatocellular carcinoma (HCC) (< 10 cm) has produced conflicting results. Aim: This study aimed to investigate whether oncological outcomes and safety profiles of resection differ between giant and non-giant HCC. Methods: PubMed, MEDLINE, EMBASE, and Cochrane databases were searched. Studies designed to investigate the outcomes of giant vs non-giant HCC were included. The primary endpoints were overall survival (OS) and disease-free survival (DFS). The secondary endpoints were postoperative complications and mortality rates. All studies were assessed for bias using the Newcastle-Ottawa Scale. Results: 24 retrospective cohort studies involving 23747 patients (giant = 3326; non-giant = 20421) who underwent HCC resection were included. OS was reported in 24 studies, DFS in 17 studies, 30-d mortality rate in 18 studies, postoperative complications in 15 studies, and post-hepatectomy liver failure (PHLF) in six studies. The HR was significantly lower for non-giant HCC in both OS (HR 0.53, 95%CI: 0.50-0.55, P < 0.001) and DFS (HR 0.62, 95%CI: 0.58-0.84, P < 0.001). No significant difference was found for 30-d mortality rate (OR 0.73, 95%CI: 0.50-1.08, P = 0.116), postoperative complications (OR 0.81, 95%CI: 0.62-1.06, P = 0.140), and PHLF (OR 0.81, 95%CI: 0.62-1.06, P = 0.140). Conclusion: Resection of giant HCC is associated with poorer long-term outcomes. The safety profile of resection was similar in both groups; however, this may have been confounded by reporting bias. HCC staging systems should account for the size differences. abstract_id: PUBMED:35813892 Outcomes of surgical resection of super-giant (≥15 cm) hepatocellular carcinoma: Volume does matter, if not the size. Background And Aim: Resection for giant hepatocellular carcinoma (HCC) (≥10 cm) is deemed safe and feasible. However, a super-giant HCC (≥15 cm) poses unique technical complexity for hepatectomy with limited data suggesting feasibility and oncologic efficiency. This study aims to evaluate the short-term and long-term outcomes of hepatectomy in patients with super-giant HCC. Methods: A retrospective review was conducted on patients with super-giant HCC who underwent hepatectomy from 2011 to 2021. We report perioperative and oncologic outcomes such as length of stay (LOS), 30-day readmission, 90-day mortality, and cumulative survival rate. Results: Of the 18 patients, the median tumor diameter was 172.5 mm (range 150-250). The most common risk factor was chronic hepatitis B virus (HBV) infection (n=7, 38.9%). Most of the patients were Barcelona Clinic Liver Cancer (BCLC) Stage B (n=14, 77.8%) and Hong Kong Liver Cancer (HKLC) Stage IIb (n=15, 83.3%). Extended right hepatectomy was the most common procedure. The median LOS was 11 days (range 3-90). The most common post-operative complication was pneumonia (n=4, 22.2%). Fourteen patients were discharged well without any need for invasive therapy (n=7, 38.9% no complications, n=1, 5.6% Clavien Grade I, n=6, 33.3% Clavien Grade II). Thirty-day readmission rate was 5.6% (n=1) and 90-day mortality rate was 5.6% (n=1). There were 12 patients (66.7%) with microvascular invasion and three patients (16.7%) with macrovascular invasion.
Most patients had Grade III (poorly differentiated) HCC (n=9, 50%). At a median follow-up of 11 months (range 2-95), 12 (66.7%) patients had local recurrence, and 9 (50%) developed distant metastasis. The 1-, 2-, and 3-year cumulative disease-free survival (DFS) was 36%, 18%, and 18%, respectively. The 1-, 2-, and 3-year cumulative overall survival was 49%, 39%, and 29%, respectively. Conclusion: Primary hepatic resection is safe in patients with super-giant HCC. However, long-term outcomes are poor, and high tumor volume may be associated with inferior oncological outcomes in HCC. Relevance For Patients: The presentation of super-giant HCCs may be asymptomatic and some patients are diagnosed late with limited treatment options. In some centers, this group of patients is denied surgical resection and offered only locoregional therapies like TACE. This paper demonstrates that hepatic resection is safe and may be an option in patients who present at an advanced stage with a high tumor burden. abstract_id: PUBMED:26334002 Outcomes of resection of giant hepatocellular carcinoma in a tertiary institution: does size matter? Background: The surgical management of giant hepatocellular carcinoma (G-HCC), or HCC of ≥10 cm in diameter, remains controversial. The aim of this study was to compare the outcomes of surgical resection of, respectively, G-HCC and small HCC (S-HCC), or HCC measuring <10 cm. Methods: A retrospective review of all patients (n = 86) diagnosed with HCC and submitted to resection in a tertiary hospital during the period from January 2007 to June 2012 was conducted. Overall survival (OS), recurrence rates and perioperative mortality at 30 days were compared between patients with, respectively, G-HCC and S-HCC. Prognostic factors for OS were analysed. Results: The sample included 23 patients with G-HCC (26.7%) and 63 with S-HCC (73.3%) based on histological tumour size. Patient demographics and comorbidities were comparable. Median OS was 39.0 months in patients with G-HCC and 65.0 months in patients with S-HCC (P = 0.213). Although size did not affect OS in this cohort, the presence of satellite lesions [hazard ratio (HR) 3.70, P = 0.012] and perioperative blood transfusion (HR 2.85, P = 0.015) were negative predictors for OS. Conclusions: Surgical resection of G-HCC provides OS comparable with that after resection of S-HCC. abstract_id: PUBMED:30887184 Short-term outcomes of laparoscopic repeat liver resection after open liver resection: a systematic review. Background: Laparoscopic repeat liver resection (LRLR) still represents a challenge for surgeons, especially in cases with previous open liver surgery. The aim of the study is to perform a systematic review of the current literature to investigate the feasibility of LRLR after open liver resection (OLR) for liver diseases. Methods: A computerized search was performed for all English language studies evaluating LRLR. A meta-analysis was performed to evaluate the short-term outcomes in comparative studies between LRLR with previous laparoscopic liver resection (LLR) and OLR. Results: From the initial 55 manuscripts, 8 studies including 3 comparative studies between LRLR after OLR and LLR were investigated. There was a total of 108 patients. Considering initial surgery, the extent of initial liver resection was major liver resection in 20% of patients in whom it was reported. In all the patients, the most frequent primary histology was hepatocellular carcinoma, followed by colorectal liver metastasis.
Half of the reported patients had severe adhesions at the time of LRLR. The median operative time for LRLR ranged from 120 to 413 min and the median blood loss ranged from 100 to 400 mL. Conversion to open surgery, hand-assisted laparoscopic surgery, or tumor ablation was required in 11% of the patients. The overall postoperative morbidity rate was 15%, and there was no postoperative mortality. The median postoperative hospital stay ranged from 3.5 to 10 days. The meta-analysis shows that LRLR after OLR is associated with a longer operative time and greater blood loss compared to LRLR after LLR. However, no difference between LRLR after OLR and LLR was shown as far as hospital stay and morbidity rate are concerned. Conclusions: LRLR after OLR has been described in eight articles with favorable short-term outcomes in highly selected patients. abstract_id: PUBMED:30178770 Laparoscopic repeat liver resection after open liver resection: A comparative study from a single-centre. Background: Technological innovations have made it possible to use laparoscopic liver resection in cases with pre-existing adhesions or cicatricial changes. However, laparoscopic repeat liver resection (LRLR) still represents a challenge for surgeons, especially in cases of previous open liver surgery. This study evaluated the outcomes of LRLR after open liver resection (OLR) in cases of recurrent liver cancer. Materials And Methods: A total of 62 patients who underwent laparoscopic minor liver resection at our institution between September 2012 and September 2016 were retrospectively divided into an LRLR group (n = 13) and a laparoscopic primary liver resection group (LPLR; n = 49). The two groups were compared in terms of patient demographics, surgical procedures and short-term outcomes. Recurrence-free survival (RFS) and overall survival (OS) were compared for patients with hepatocellular carcinoma (HCC) or colorectal liver metastases (CRLMs). Results: There was a significant intergroup difference in the hepatitis virus background, although the two groups' primary histology and pre-operative liver function were comparable. The two groups had statistically similar values for extent of resection, operative time, estimated blood loss, transfusion requirement, conversion to laparotomy, post-operative complications, surgical margins, time to oral intake and hospital stay. No significant differences were detected when we stratified the cases according to low and intermediate difficulty. Furthermore, there were no intergroup differences in RFS or OS for patients with HCC or CRLM. Conclusions: The findings suggest that minor LRLR after OLR is safe and comparable with minor LPLR in the present study. abstract_id: PUBMED:32863931 Outcomes of resection for hepatocellular carcinoma with macroscopic bile duct tumour thrombus: A propensity score matched study. The incidence of hepatocellular carcinoma (HCC) with bile duct tumour thrombus (BDTT) is low, and related studies, especially studies on long-term survival, are uncommon. The present study aimed to evaluate the clinicopathological characteristics, prognostic factors and postoperative long-term outcomes of BDTT in patients with HCC. The clinicopathological characteristics and postoperative long-term outcomes of patients with HCC both with and without BDTT were compared before and after propensity score matching (PSM). Prognostic risk factors were assessed by Cox proportional hazards regression analyses after PSM.
Tumour stages in the BDTT group were significantly higher than those in the group without BDTT (P=0.001). Overall survival (OS) and recurrence-free survival (RFS) rates were significantly higher in the group without BDTT than in the BDTT group before PSM (P<0.001 and P=0.003, respectively). However, no significant difference in OS or RFS was found between the two groups after PSM (P=0.249 and P=0.121, respectively). Moreover, the median OS and RFS times of the BDTT patients who underwent tumour thrombectomy and bile duct resection were not significantly different (P=0.891 and P=0.787, respectively). In the multivariate analysis, macrovascular invasion (HR, 3.701; 95% CI, 1.313-9.10.437; P=0.013) was the only independent predictor of OS. Although the clinicopathological characteristics of the BDTT group suggested more advanced stage disease and poorer oncological outcomes than the group without BDTT, BDTT was not a poor prognostic factor for patients with HCC who underwent liver resection. Curative resection is recommended for patients with HCC and BDTT, even for those with poor liver function, after proper perioperative management in order to achieve good long-term survival. abstract_id: PUBMED:34702312 Improving outcomes of liver resection for hepatocellular carcinoma associated with portal vein tumor thrombosis over the evolving eras of treatment. Background: The outcomes and management of hepatocellular carcinoma (HCC) have undergone several evolutionary changes. This study aimed to analyze the outcomes of patients who had undergone liver resection for HCC with portal vein tumor thrombosis (PVTT) in terms of the evolving era of treatment. Materials And Methods: A retrospective analysis of 157 patients who had undergone liver resection for HCC associated with PVTT was performed. The outcomes and prognostic factors related to different eras were further examined. Results: Overall, 129 (82.1%) patients encountered HCC recurrence after liver resection, and the median time of recurrence was 4.1 months. Maximum tumor size ≥ 5 cm and PVTT in the main portal trunk were identified as the major prognostic factors influencing HCC recurrence after liver resection. Although the recurrence-free survival had no statistical difference between the two eras, the overall survival of patients in the second era was significantly better than that of the patients in the first era (p = 0.004). The 1-, 2-, and 3-year overall survival rates of patients in the second era were 60.0%, 45.7%, and 35.8%, respectively, with a median survival time of 19.6 months. Conclusion: The outcomes of HCC associated with PVTT remain unsatisfactory because of a high incidence of tumor recurrence even after curative resection. Although the management and outcomes of patients with HCC and PVTT have greatly improved over the years, surgical resection remains an option to achieve a potential cure of HCC in well-selected patients. abstract_id: PUBMED:26693235 Comparison of survival outcomes after anatomical resection and non-anatomical resection in patients with hepatocellular carcinoma. Backgrounds/aims: Liver resection is a curative procedure performed worldwide for hepatocellular carcinoma (HCC). Deciding on the appropriate resection range for postoperative hepatic function preservation is an important surgical consideration. This study compares survival outcomes of HCC patients who underwent anatomical or non-anatomical resection, to determine which offers the best clinical survival benefit.
Methods: One hundred and thirty-one patients underwent liver resection for HCC between January 2007 and February 2015, and were divided into two groups: those who underwent anatomical liver resection (n=88) and those who underwent non-anatomical liver resection (n=43). Kaplan-Meier survival analysis and Cox regressions were used to compare the disease-free survival (DFS) and overall survival (OS) rates between the groups. Results: The mean follow-up periods were 27 and 40 months in the anatomical and non-anatomical groups, respectively (p=0.229). The 3- and 5-year DFS rates were 70% and 60% in the anatomical group and 62% and 48% in the non-anatomical group, respectively. The 3- and 5-year OS rates were 94% and 78% in the anatomical group, and 86% and 80% in the non-anatomical group, respectively. The anatomical group tended to show better outcomes, but the findings were not significant. However, the relative risk of OS between the anatomical and non-anatomical groups was 0.234 (95% CI, 0.061-0.896; p=0.034), which is statistically significant. Conclusions: Although statistical significance was not detected in survival curves, anatomical resection showed better results. In this respect, anatomical resection is more likely to be performed in HCC patients with preserved liver function than non-anatomical resection. abstract_id: PUBMED:36672437 Positioning of Minimally Invasive Liver Surgery for Hepatocellular Carcinoma: From Laparoscopic to Robot-Assisted Liver Resection. Laparoscopic liver resection (LLR) is widely accepted in the surgical treatment of hepatocellular carcinoma (HCC) through international consensus conferences and the development of difficulty classifications. LLR has been reported to result in earlier postoperative recovery and fewer postoperative complications than open liver resection (OLR) for HCC. However, the prevalence of liver cirrhosis, obesity, the elderly, HCC recurrence (repeat liver resection), and major resection must be considered for LLR for HCC. Some systematic reviews, meta-analyses, and large cohort studies indicated that LLR is technically feasible for selected patients with HCC with these factors, leading to less intraoperative blood loss, fewer transfusions and postoperative complications, and shorter hospital stays than OLR. Furthermore, some reports indicated that LLR prevents postoperative loss of independence. No difference was reported in long-term outcomes among patients with HCC who underwent LLR and OLR; however, some recent reports indicated better long-term outcomes with LLR. In recent years, robot-assisted liver resection (RALR) has gradually become popular, and its short- and long-term results for HCC are not different from those of LLR. Additionally, RALR is expected to become the mainstay of minimally invasive surgery in the future. abstract_id: PUBMED:27515744 Hepatic resection for predominantly large size hepatocellular carcinoma: Early and long-term results from a tertiary care center in India. Background And Aim: There are no large series on hepatic resection for hepatocellular carcinoma (HCC) from India. We present the results of consecutive patients with HCC treated with hepatic resection at a tertiary care center in India. Methods: The records of all patients who underwent hepatic resection for HCC in the Department of Gastrointestinal Surgery, All India Institute of Medical Sciences (New Delhi), were reviewed. The relevant perioperative and follow up data were extracted from a prospectively maintained database.
Results: Between January 1987 and December 2013, 81 patients [71 males; mean age: 49.2 ± 15.6 years] underwent hepatic resection for HCC. Of these, 23 (28 %) were cirrhotic and 36 (49 %) had hepatitis B. Hepatitis B was significantly more common in cirrhotic (77 % vs. 37 %; p = 0.001). Most patients had locally advanced disease at presentation [tumor size ≥10 cm in 61 (75 %); vascular tumor thrombus in 10 (12 %)]. Anatomical resection was done in 61 (75 %) including 56 major hepatic resections (≥3 segments). Overall in-hospital mortality was 13 (16 %) [cirrhotic 5 (22 %) vs. noncirrhotic 8 (14 %), p = 0.503]. Grade III-V complications (modified Clavien-Dindo classification) occurred in 25 (31 %) patients (cirrhotic 48 % vs. noncirrhotic 24 %; p = 0.037). Follow up information was available for 51 (75 %) patients. The median time to recurrence was 12 months, and most (86 %) occurred within 1 year. The recurrence-free survival at 1, 3, and 5 years was 48 %, 40 %, and 36 %, respectively. Positive resection margin and vascular invasion were significantly associated with very poor prognosis. Conclusion: Majority of Indian HCC patients present with locally advanced disease. Despite this, surgical resection provides a chance for long-term recurrence-free survival in a third of them. Answer: The outcomes of resection of giant hepatocellular carcinoma (HCC) in a tertiary institution were investigated to determine if size matters. The study included 23 patients with giant HCC (G-HCC), defined as HCC of ≥10 cm in diameter, and compared their outcomes with those of 63 patients with small HCC (S-HCC), which measured <10 cm. The median overall survival (OS) was 39.0 months for patients with G-HCC and 65.0 months for patients with S-HCC, but the difference was not statistically significant (P = 0.213). The study concluded that surgical resection of G-HCC provides OS comparable with that after resection of S-HCC. However, the presence of satellite lesions and perioperative blood transfusion were identified as negative predictors for OS (PUBMED:26334002). In contrast, a meta-analysis that included 24 retrospective cohort studies involving 23747 patients (giant = 3326; non-giant = 20421) found that resection of giant HCC is associated with poorer long-term outcomes. The hazard ratio (HR) was significantly lower for non-giant HCC in both overall survival (OS) (HR 0.53) and disease-free survival (DFS) (HR 0.62). However, no significant difference was found for 30-day mortality rate, postoperative complications, and post-hepatectomy liver failure (PHLF) between giant and non-giant HCC (PUBMED:36896301). Another study focused on super-giant HCC (≥15 cm) and reported that primary hepatic resection is safe in patients with super-giant HCC, but long-term outcomes are poor, with high tumor volume possibly associated with inferior oncological outcomes (PUBMED:35813892). Overall, while some studies suggest that size may not significantly impact overall survival after resection of giant HCC, others indicate that giant HCC is associated with poorer long-term outcomes compared to non-giant HCC. The presence of satellite lesions and perioperative blood transfusion were identified as negative predictors for OS, and high tumor volume in super-giant HCC may be associated with inferior outcomes.
Instruction: Mastitis and methicillin-resistant Staphylococcus aureus (MRSA): the calm before the storm? Abstracts: abstract_id: PUBMED:28435202 Phenotypic and genotypic characterization of methicillin-resistant Staphylococcus aureus from bovine mastitis. Aim: This study was conducted to determine the occurrence of methicillin-sensitive Staphylococcus aureus (MSSA) and methicillin-resistant S. aureus (MRSA) from bovine mastitis and to characterize them with respect to the antibiotic resistance gene mecA. Materials And Methods: A total of 160 mastitic milk samples were screened for the presence of S. aureus. The presumptive positive isolates were confirmed using nuc and 23S rRNA gene-based polymerase chain reaction. All the confirmed isolates were subjected to in vitro antibiogram testing using a number of antibiotics. Isolates which showed resistance against methicillin were characterized for the presence of the mecA gene. Results: Out of the total 160 milk samples, 36 (22.5%) samples yielded S. aureus. The in vitro antibiogram revealed that 16.6% of S. aureus isolates were resistant to all antibiotics screened and 5.5% of isolates were sensitive to all of them. Furthermore, the study found 94.4%, 83.3%, 77.7%, 66.6%, 50%, and 27.7% of S. aureus isolates resistant to penicillin, ampicillin, amoxicillin-sulbactam, enrofloxacin, ceftriaxone, and methicillin, respectively. Out of the 36 S. aureus isolates, only 6 (16.6%) isolates were confirmed as MRSA while the rest were MSSA. Conclusion: The high occurrence of S. aureus-mediated mastitis was attributed to improper hygiene and poor farm management. The multiple drug resistance reflects the indiscriminate use of drugs, and the presence of the methicillin resistance gene determinant is an alarming situation, as such infections are difficult to treat. abstract_id: PUBMED:32235414 Molecular Detection of Multidrug Resistant Staphylococcus aureus Isolated from Bovine Mastitis Milk in Bangladesh. The current study was conducted to isolate and identify multidrug-resistant Staphylococcus aureus (MDR-SA) from mastitis milk samples and to determine their antimicrobial susceptibility pattern. A total of 48 bovine mastitis (BM) milk samples were collected from different parts of the Rangpur division, Bangladesh. After the collection of milk samples, mastitis was confirmed using the California mastitis test. Isolation and identification of Staphylococcus aureus were performed using conventional cultural and biochemical tests as well as molecular methods of PCR. The nucleotide sequence of the 23S rRNA gene of Staphylococcus aureus was determined. The antibiogram of the isolated bacteria was conducted using the disc diffusion method. Phylogenetic analysis of the 23S rRNA gene was done using MEGA 7, ClustalW multiple sequence alignment, and NCBI-BLAST tools, where the sequence of the isolate showed 98% to 99% identity. Antibiogram testing using 15 antimicrobial agents showed that all of the Staphylococcus aureus isolates were classified as multidrug-resistant (MDR). It was found that the isolates were resistant to tetracycline, novobiocin, methicillin, vancomycin, and cephradine, and sensitive to ciprofloxacin, azithromycin, norfloxacin, levofloxacin, gentamicin, and amoxicillin. The detection of MDR-SA in mastitis milk is alarming and represents a great public health concern.
The findings of the present study help identify Staphylococcus aureus at the molecular level using 23S rRNA gene sequencing and will help select the appropriate and effective antimicrobial agent to control BM in the northern part of Bangladesh. abstract_id: PUBMED:19762816 Short communication: methicillin-resistant Staphylococcus aureus detection in US bulk tank milk. Staphylococcus aureus is a major cause of mastitis in dairy cattle. This study estimated the herd prevalence of methicillin-resistant Staph. aureus (MRSA) among US dairy herds by testing bulk tank milk (BTM) samples using genotypic and phenotypic methods. A nationally representative sample of 542 operations had BTM cultured for Staph. aureus, and 218 BTM samples were positive upon initial culture. After 4 wk to 4 mo of frozen storage, 87% of 218 samples (n = 190) were still culture positive for Staph. aureus on blood agar, but none were positive for MRSA on the selective indicator medium CHROMagar MRSA. A duplex PCR was used to detect the Staph. aureus-specific nuc gene and the methicillin resistance gene, mecA, in mixed staphylococcal isolates from the 190 BTM samples that were positive for Staph. aureus after storage. Seven samples tested positive for nuc and mecA, and 2 samples tested positive for mecA only. MecA-positive Staphylococcus spp., but not MRSA, were subsequently isolated from 5 samples, whereas neither mecA-positive Staphylococcus spp. nor MRSA was isolated from the remaining 4 samples. Presence of methicillin-resistant, coagulase-negative Staphylococcus spp. may complicate the detection of MRSA by means of PCR on BTM. Bulk tank milk in the United States is not a common source of MRSA. abstract_id: PUBMED:29394099 Antimicrobial Resistance of Staphylococcus aureus Isolates from Dairy Cows and Genetic Diversity of Resistant Isolates. Staphylococcus aureus is a frequent and major contagious mastitis bacterial pathogen. The antibiotic treatment cure rates vary considerably from 4% to 92%. Staphylococcus aureus readily becomes resistant to antibiotics, resulting in persistent noncurable intramammary infection that usually results in culling of infected animals. Because of its notorious ability to acquire resistance to the commonly used as well as last resort antimicrobials such as methicillin and vancomycin and the development of multidrug-resistant strains, antimicrobial resistance (AMR) in S. aureus is of paramount importance in human medicine. The objective of this study was to evaluate the prevalence of AMR and genetic diversity of S. aureus isolates from milk of dairy cattle. Staphylococcus aureus isolates (n = 239) from 33 dairy farms in Tennessee were tested against 10 antimicrobials by broth microdilution method using the Sensititer system. Genetic diversity of resistant isolates was evaluated by pulsed-field gel electrophoresis (PFGE). Overall, AMR of the S. aureus isolates varied from as low as 1.3% for ceftiofur to as high as 25% for sulfadimethoxine. Out of 239 S. aureus isolates, 82 (34.3%) of them were resistant to at least 1 of the 10 antimicrobials. The AMR isolates belonged to two major PFGE types, indicating the presence of dominant clonal patterns among the resistant isolates. In general, there was a variation of prevalence of AMR within and among farms over time, with an increasing trend in tetracycline resistance. Judicious use of antimicrobials in dairy cattle farms can reduce the development of antimicrobial-resistant S. aureus. 
abstract_id: PUBMED:27837983 Short communication: Outbreak of methicillin-resistant Staphylococcus aureus (MRSA)-associated mastitis in a closed dairy herd. Cows are probably the main source of contamination of raw milk with Staphylococcus aureus. Mammary glands with subclinical mastitis can shed large numbers of Staph. aureus in milk. Because of the risk of this pathogen to human health as well as animal health, the aim of this paper was to describe an outbreak of mastitis caused by methicillin-resistant Staph. aureus (MRSA), oxacillin-susceptible mecA-positive Staph. aureus (OS-MRSA), and methicillin-susceptible Staph. aureus (MSSA) on a dairy farm. Milk samples were obtained from all quarters showing an elevated somatic cell count by the California Mastitis Test. The isolates were identified by phenotypic and genotypic methods. Staphylococcus spp. were isolated from 53% (61/115) of the milk samples, with 60 isolates identified as Staph. aureus (98.4%) and 1 isolate identified as Staphylococcus epidermidis (1.6%). The presence of the mecA gene was verified in 48.3% of Staph. aureus isolates. Of the Staph. aureus isolates, 23.3% were MRSA and 25.0% were OS-MRSA. In total, 12.2% of mastitis cases were infected with MRSA. The detection of this large percentage of mastitis cases caused by MRSA and OS-MRSA is of great concern for the animals' health, because β-lactams are still the most important antimicrobials used to treat mastitis. In addition, Staph. aureus isolates causing bovine mastitis represent a public health risk. abstract_id: PUBMED:33382154 Therapeutic applications of lysostaphin against Staphylococcus aureus. Staphylococcus aureus, an opportunistic pathogen, causes diverse community- and nosocomial-acquired human infections, including folliculitis, impetigo, sepsis, septic arthritis, endocarditis, osteomyelitis, implant-associated biofilm infections and contagious mastitis in cattle. In recent years, both methicillin-sensitive and methicillin-resistant S. aureus infections have increased. Highly effective anti-staphylococcal agents are urgently required. Lysostaphin is a 27 kDa zinc metallo antimicrobial lytic enzyme that is produced by Staphylococcus simulans biovar staphylolyticus and was first discovered in the 1960s. Lysostaphin is highly active against S. aureus strains irrespective of their drug-resistance patterns, with minimum inhibitory concentrations ranging between 0·001 and 0·064 μg ml⁻¹. Lysostaphin has activity against both dividing and non-dividing S. aureus cells and can seep through the extracellular matrix to kill biofilm-embedded S. aureus. In spite of having excellent anti-staphylococcal activity, its clinical application is hindered because of its immunogenicity and reduced bio-availability. Extensive research with lysostaphin has led to the development of several engineered lysostaphin derivatives with reduced immunogenicity and increased serum half-life. The therapeutic efficacy of both native and engineered lysostaphin derivatives has been studied by several research groups. This review provides an overview of the therapeutic applications of native and engineered lysostaphin derivatives developed to eradicate S. aureus infections. abstract_id: PUBMED:34103935 Prevalence of Mastitis and Phenotypic Characterization of Methicillin-Resistant Staphylococcus aureus in Lactating Dairy Cows of Selected Dairy Farms in and Around Adama Town, Central Ethiopia.
Background: The emergence of methicillin-resistant Staphylococcus aureus (MRSA) poses a serious public health threat. Strains of Staphylococcus aureus resistant to β-lactam antibiotics are known as MRSA. MRSA has gained attention as a community pathogen. MRSA has been increasingly reported as an emerging problem in veterinary medicine. However, little is known about it in Ethiopia. This study was, therefore, conducted to identify MRSA, determine its drug susceptibility patterns, and assess mastitis infection in dairy cattle in and around Adama town, Central Ethiopia. Methods: A cross-sectional study was conducted to estimate the occurrence of MRSA in mastitic dairy cows in and around Adama town, central Ethiopia. A total of 384 lactating cows were included from the conveniently selected dairy farms in the study area. Approximately 10 ml of milk was aseptically collected from clinical and subclinical mastitic cows into sterile universal bottles after discarding the first 3 milking streams. Then, Staphylococcus aureus was isolated using the conventional bacteriological procedure. Resistance to methicillin was detected using the Kirby-Bauer disc diffusion antibiotic susceptibility method. An oxacillin disc was used to detect methicillin-resistant Staphylococcus aureus strains. Antimicrobial susceptibility testing was conducted against MRSA strains using streptomycin (S, 10 µg), amoxicillin (Am, 25 µg), kanamycin (K, 30 µg), nalidixic acid (NA, 30 µg), oxytetracycline (OT, 30 µg), sulphonamide (S, 300 µg) and ceftriazole (CRO, 30 µg). Results: The study found that the prevalence of mastitis was 121/384 (31.5%). Of these, 37 (30.6%) were clinical mastitis and 84 (69.4%) were subclinical mastitis. Of the 121 mastitis cases, Staphylococcus aureus was isolated from 37 (30.6%) of the mastitic cow milk samples. The prevalence of mastitis was significantly affected by breed, age, floor type and hygienic status of the milkers (P < .05). Moreover, 32.4% of Staphylococcus aureus isolates were resistant to oxacillin. A total of 75% of MRSA isolates were resistant to amoxicillin, 66.7% were resistant to oxytetracycline, and 50% were resistant to sulphonamide. However, 75% of MRSA isolates were susceptible to kanamycin, 58.3% were susceptible to streptomycin, and 50% were susceptible to nalidixic acid. Conclusion: The study revealed that a relatively high number of strains are resistant to the antibiotics commonly used in the therapeutic protocol of many human and animal infections. Therefore, antimicrobial susceptibility testing should be carried out on a regular basis and proper hygienic practices should be introduced at the farm level. Creating public awareness about transmission, prevention and control of MRSA should also be considered. abstract_id: PUBMED:35906609 Prevalence, antimicrobial susceptibilities and risk factors of Methicillin resistant Staphylococcus aureus (MRSA) in dairy bovines. Staphylococcus aureus is a common mastitis pathogen in dairy cows, and methicillin-resistant S. aureus (MRSA) has been found in dairy farms all over the world. The study was carried out on bovines from three governorates in Egypt, with the goal of determining the prevalence of MRSA in milk samples positive for subclinical mastitis, performing an antibiotic susceptibility test against MRSA isolates and determining the risk factors associated with MRSA.
A total of 350 quarter milk samples (n = 200 mixed-breed cows; n = 150 water buffaloes) were collected and examined for subclinical mastitis using the California mastitis test (CMT) before being exposed to standard microbiological procedures for S. aureus isolation. The disc diffusion method was used to phenotypically analyse the positive S. aureus isolates for MRSA, which was verified by a PCR assay targeting the mecA gene. According to the findings of the study, 41.4% (145/350) of milk samples were positive based on the CMT, while 35.7% (125/350) of samples were identified as MRSA based on the PCR assay. However, the obtained results revealed a non-significant disparity between cattle and buffalo, and all predicted risk factors were strongly associated with the prevalence of subclinical mastitis. The in vitro antibiotic susceptibility test revealed that the MRSA isolates were completely resistant to cefoxitin, whereas they were sensitive to linezolid, ciprofloxacin, and trimethoprim + sulphamethoxazole. The relevance of S. aureus to public health, as well as the development of resistance to antibiotics like methicillin, necessitates ongoing testing of antimicrobial medications against MRSA isolates. abstract_id: PUBMED:28900682 Treatment of Staphylococcus aureus Infections. Staphylococcus aureus, although generally identified as a commensal, is also a common cause of human bacterial infections, including of the skin and other soft tissues, bones, bloodstream, and respiratory tract. The history of S. aureus treatment is marked by the development of resistance to each new class of antistaphylococcal antimicrobial drugs, including the penicillins, sulfonamides, tetracyclines, glycopeptides, and others, complicating therapy. S. aureus isolates identified in the 1960s were sometimes resistant to methicillin, a β-lactam antimicrobial initially active against a majority of S. aureus strains. These MRSA isolates, resistant to nearly all β-lactam antimicrobials, were first largely confined to the health care environment and the patients who attended it. However, in the mid-1990s, new strains, known as community-associated (CA-) MRSA strains, emerged. CA-MRSA organisms, compared with health care-associated (HA-) MRSA strain types, are more often susceptible to multiple classes of non-β-lactam antimicrobials. While infections caused by methicillin-susceptible S. aureus (MSSA) strains are usually treated with drugs in the β-lactam class, such as cephalosporins, oxacillin or nafcillin, MRSA infections are treated with drugs in other antimicrobial classes. The glycopeptide drug vancomycin, and in some countries teicoplanin, is the most common drug used to treat severe MRSA infections. There are now other classes of antimicrobials available to treat staphylococcal infections, including several that have been approved after 2009. The antimicrobial management of invasive and noninvasive S. aureus infections in the ambulatory and in-patient settings is the topic of this review. Also discussed are common adverse effects of antistaphylococcal antimicrobial agents, advantages of one agent over another for specific clinical syndromes, and the use of adjunctive therapies such as surgery and intravenous immunoglobulin. We have detailed considerations in the therapy of noninvasive and invasive S. aureus infections.
This is followed by sections on specific clinical infectious syndromes including skin and soft tissue infections, bacteremia, endocarditis and intravascular infections, pneumonia, osteomyelitis and vertebral discitis, epidural abscess, septic arthritis, pyomyositis, mastitis, necrotizing fasciitis, orbital infections, endophthalmitis, parotitis, staphylococcal toxinoses, urogenital infections, and central nervous system infections. abstract_id: PUBMED:25648812 Short communication: Prevalence of Staphylococcus aureus and methicillin-resistant S. aureus in bulk tank milk from dairy goat farms in Northern Italy. Staphylococcus aureus is regarded as a leading cause of mastitis in goats. However, few data are available on the presence of methicillin-resistant S. aureus (MRSA) in this species. The aim of this study was to assess the prevalence of S. aureus and MRSA in bulk tank milk samples from dairy goat farms in Northern Italy. Eighty-five out of 197 samples (43.1%) tested positive for S. aureus with counts ranging from 10 to more than 1.5 × 10(4) cfu/mL. The MRSA was screened by both direct plating followed by a disk diffusion test to evaluate methicillin resistance and a selective enrichment method. Methicillin-resistance was confirmed by mecA-specific PCR. Methicillin-resistant S. aureus was identified in 4 samples (2.0%) and multilocus sequence typing (MLST) showed the presence of livestock-associated MRSA belonging to lineages ST398 (n = 3) and ST1 (n = 1). In one case we demonstrated that the same MRSA strain was able to persist over time on the farm, being isolated from both bulk tank milk and the udder of 3 goats 1 yr after the first isolation. The high prevalence of S. aureus-positive herds detected in this study and the presence of MRSA strains belonging to livestock-associated genotypes is of concern, and represents a novel finding in the Italian dairy goat production system. The application of stringent measures for the control of S. aureus mastitis at the farm level seems appropriate to reduce the economic losses, and to minimize the risk of foodborne illness and the transmission of MRSA to humans by occupational exposure. Answer: The abstracts provided indicate that methicillin-resistant Staphylococcus aureus (MRSA) is a significant concern in the context of bovine mastitis, a common and costly infection in dairy cows. The prevalence of MRSA in mastitis cases varies across different studies and regions, but it is clear that MRSA strains are present in dairy herds and can be isolated from mastitic milk samples (PUBMED:28435202, PUBMED:32235414, PUBMED:19762816, PUBMED:27837983, PUBMED:34103935, PUBMED:35906609, PUBMED:25648812). The studies show that MRSA strains are not only present but also exhibit multidrug resistance, including resistance to methicillin and other antibiotics such as tetracycline, novobiocin, vancomycin, and cephradine (PUBMED:32235414). This resistance complicates the treatment of mastitis and poses a risk to both animal and human health. The occurrence of MRSA in mastitis cases has been linked to poor farm management and hygiene practices, as well as the indiscriminate use of antibiotics (PUBMED:28435202). The detection of MRSA in bulk tank milk and the persistence of MRSA strains on farms over time suggest that dairy herds can act as reservoirs for MRSA, potentially leading to the spread of this pathogen within and between herds, and possibly to humans through occupational exposure or consumption of contaminated milk (PUBMED:19762816, PUBMED:25648812). 
Given the public health implications and the challenges in treating MRSA-associated mastitis, there is a need for regular antimicrobial susceptibility testing, improved farm hygiene, and judicious use of antimicrobials to reduce the development of antimicrobial-resistant S. aureus strains (PUBMED:29394099, PUBMED:34103935). Additionally, alternative therapeutic strategies, such as the use of lysostaphin, are being explored to combat S. aureus infections, including those caused by MRSA (PUBMED:33382154). In summary, the presence of MRSA in dairy cows with mastitis is a growing concern that could be described as "the calm before the storm," given the potential for these strains to spread and the limited treatment options available. Proactive measures are needed to address this issue to prevent a larger public health crisis.
Instruction: Meditation: a modulator of the immune response to physical stress? Abstracts: abstract_id: PUBMED:33523434 The Roles of Stress-Induced Immune Response in Female Reproduction. Stress response plays pivotal roles in physiological process, including reproduction and embryonic development. It's long been acknowledged that stress stimulates the activation of both hormone and immune system resulting in disorders of maternal immune function and infertility. However, the stress types, biological alterations, clinical outcomes, and the potential underlying mechanisms remain largely unclear. Recent studies suggest that more stress factors and relative mechanisms are identified to be involved in female reproductive immune response stimulation, and they may lead to immune dysregulations that negatively influence maternal health. In this part, we focus on the outcomes or mechanisms of common stress factors which affect female immune response before and during pregnancy. abstract_id: PUBMED:8668232 Stress and the immune system Research in psychoneuroimmunology has demonstrated that biopsychosocial factors such as psychological stress can influence the immune system. Chronic stress has been associated with the suppression of the immune function. In contrast, acute psychological stressors and physical exercise have been shown to transiently enhance immune responses. These stress effects on immunity seem to be mediated via endocrine factors, since hormones, neurotransmitters, and neuropeptides can interact with cellular components of the immune system. In summary, experimental and clinical evidence suggests a functional relationship between stress, immunity, and diseases. abstract_id: PUBMED:27822213 Cellular Stress Response and Immune Signaling in Retinal Ischemia-Reperfusion Injury. Ischemia-reperfusion injury is a well-known pathological hallmark associated with diabetic retinopathy, glaucoma, and other related retinopathies that ultimately can lead to visual impairment and vision loss. Retinal ischemia pathogenesis involves a cascade of detrimental events that include energy failure, excitotoxic damage, calcium imbalance, oxidative stress, and eventually cell death. Retina for a long time has been known to be an immune privileged site; however, recent investigations reveal that retina, as well as the central nervous system, elicits immunological responses during various stress cues. Stress condition, such as reperfusion of blood supply post-ischemia results in the sequestration of different immune cells, inflammatory mediators including cytokines, chemokines, etc., to the ischemic region, which in turn facilitates induction of inflammatory conditions in these tissues. The immunological activation during injury or stress per se is beneficial for repair and maintenance of cellular homeostasis, but whether the associated inflammation is good or bad, during ischemia-reperfusion injury, hitherto remains to be explored. Keeping all these notions in mind, the current review tries to address the immune response and host stress response mechanisms involved in ischemia-reperfusion injury with the focus on the retina. abstract_id: PUBMED:23990626 Cellular stress response and innate immune signaling: integrating pathways in host defense and inflammation. Extensive research in the past decade has identified innate immune recognition receptors and intracellular signaling pathways that culminate in inflammatory responses. 
Besides its role in cytoprotection, the importance of cell stress in inflammation and host defense against pathogens is emerging. Recent studies have shown that proteins in cellular stress responses, including the heat shock response, ER stress response, and DNA damage response, interact with and regulate signaling intermediates involved in the activation of innate and adaptive immune responses. The effect of such regulation by cell stress proteins may dictate the inflammatory profile of the immune response during infection and disease. In this review, we describe the regulation of innate immune cell activation by cell stress pathways, present detailed descriptions of the types of stress response proteins and their crosstalk with immune signaling intermediates that are essential in host defense, and illustrate the relevance of these interactions in diseases characteristic of aberrant immune responses, such as chronic inflammatory diseases, autoimmune disorders, and cancer. Understanding the crosstalk between cellular stress proteins and immune signaling may have translational implications for designing more effective regimens to treat immune disorders. abstract_id: PUBMED:7916952 Exercise and the immune system: a model of the stress response? Exercise influences natural immunity, T- and B-cell functions, and cytokine responses, through circulatory (hemodynamic) changes and by endocrine hormones secreted in response to physical stress. The magnitude of the effects on the immune system reflects the intensity, duration and chronicity of the exercise. In this review, Laurie Hoffman-Goetz and Bente Klarlund Pedersen suggest that exercise-immune interactions can be viewed as a subset of stress immunology. abstract_id: PUBMED:8808540 Meditation: a modulator of the immune response to physical stress? A brief report. Objective: To test the hypothesis that stress-reducing techniques such as meditation alter immune responses after strenuous physical stress. Methods: The hypothesis was tested by studying six meditating and six non-meditating male runners in a concurrent, controlled design. After a period of six months with meditation for the experimental group, blood samples were taken immediately before and after a maximum oxygen uptake test (VO2max). Results: The increase in CD8+ T cells after VO2max was significantly less in the meditation group than in the control group (P = 0.04). The amount of CD2+ cells doubled after VO2max, mainly because of a rise in the CD8+ fraction. Conclusions: Meditation may modify the suppressive influence of strenuous physical stress on the immune system. abstract_id: PUBMED:25451133 Maternal stress, nutrition and physical activity: Impact on immune function, CNS development and psychopathology. Evidence suggests that maternal and fetal immune dysfunction may impact fetal brain development and could play a role in neurodevelopmental disorders, although the definitive pathophysiological mechanisms are still not completely understood. Stress, malnutrition and physical inactivity are three maternal behavioral lifestyle factors that can influence immune and central nervous system (CNS) functions in both the mother and fetus, and may therefore increase risk for neurodevelopmental/psychiatric disorders. First, we will briefly review some aspects of maternal-fetal immune system interactions and development of immune tolerance.
Second, we will discuss the bidirectional communication between the immune system and CNS and the pathways by which immune dysfunction could contribute to neurodevelopmental disorders. Third, we will discuss the effects of prenatal stress and malnutrition (over and undernutrition) on perinatal programming of the CNS and immune system, and how this might influence neurodevelopment. Finally, we will discuss the beneficial impact of physical fitness during pregnancy on the maternal-fetal unit and infant and how regular physical activity and exercise can be an effective buffer against stress- and inflammatory-related disorders. Although regular physical activity has been shown to promote neuroplasticity and an anti-inflammatory state in the adult, there is a paucity of studies evaluating its impact on CNS and immune function during pregnancy. Implementing stress reduction, proper nutrition and ample physical activity during pregnancy and the childbearing period may be an efficient strategy to counteract the impact of maternal stress and malnutrition/obesity on the developing fetus. Such behavioral interventions could have an impact on early development of the CNS and immune system and contribute to the prevention of neurodevelopmental and psychiatric disorders. Further research is needed to elucidate this relationship and the underlying mechanisms of protection. This article is part of a Special Issue entitled SI: Neuroimmunology in Health And Disease. abstract_id: PUBMED:25172396 Cellular stress response and pulmonary inflammation. Innate immunity as the first line of the immune system, provides initial protection against various pathogens and infections. Recent studies suggest a link between cell stress response and immune response upon exogenous insults in the lung. The key proteins in cellular stress responses were demonstrated to be involved in the activation and regulation of the immune signaling pathways. Further research on the function of these stress proteins in innate immunity defenses, particularly in pulmonary diseases and inflammation may help to clarify the disease pathogenesis and provide potential therapeutic treatments for various infectious and inflammatory lung diseases. abstract_id: PUBMED:32835298 Isolation, social stress, low socioeconomic status and its relationship to immune response in Covid-19 pandemic context. The coronavirus disease 2019 (COVID-19) outbreak was first reported December 2019, in Wuhan, China, and has since spread worldwide. Social distancing or isolation measures were taken to mitigate the pandemic. Furthermore, stress and low socioeconomic status in humans confer increased vulnerability to morbidity and mortality, what can be biologically observed. This condition tends to remain during the Covid-19 pandemic. Social disruption stress (SDR) raises important questions regarding the functioning of the immune system, and the release of several stress hormones. A molecular pattern, conserved transcriptional response to adversity (CTRA), is thought to have evolved to defend against physical injury during periods of heightened risk. Chronic CTRA activation could leave an organism vulnerable to viral infections, leading to increased pro-inflammatory gene expression and a suppression of anti-viral gene expression. The activation of such transcriptional status is related to conditions of social stress through either hostile human contact, or increased predatory vulnerability due to separation from the social group and also low socioeconomic status. 
This review aims to point out questions for government officials, researchers and health professionals to better target their actions during a pandemic and encourage studies for a better understanding of these characteristics. abstract_id: PUBMED:27187351 Chronic Heat Stress Induces Immune Response, Oxidative Stress Response, and Apoptosis of Finishing Pig Liver: A Proteomic Approach. Heat stress (HS) negatively affects human health, animal welfare, and livestock production. We analyzed the hepatic proteomes of finishing pigs subjected to chronic heat stress (HS), thermal neutral (TN), and restricted feed intake conditions, identifying differences between direct and indirect (via reduced feed intake) HS. Twenty-four castrated male pigs were randomly allocated to three treatments for three weeks: (1) thermal neutral (TN) (22 °C) with ad libitum feeding; (2) chronic HS (30 °C) with ad libitum feeding; and (3) TN, pair-fed to HS intake (PF). Hepatic proteome analysis was conducted using two-dimensional gel electrophoresis and mass spectrometry. Both HS and PF significantly reduced liver weight (p < 0.05). Forty-five hepatic proteins were differentially abundant when comparing HS with TN (37), PF with TN (29), and HS with PF (16). These proteins are involved in the heat shock response and immune defense, oxidative stress response, cellular apoptosis, metabolism, signal transduction, and the cytoskeleton. We also observed an increased abundance of proteins and enzymes associated with the heat shock response and immune defense, a reduced redox state, enhanced multiple antioxidant abilities, and increased apoptosis in the HS liver. Heat load, independent of reduced feed intake, induced an innate immune response, while food restriction caused stress and cellular apoptosis. Our results provide novel insights into the effects of chronic HS on the liver. Answer: Yes, meditation may modulate the immune response to physical stress. A study that investigated the effects of meditation on immune responses after strenuous physical stress found that meditation could modify the suppressive influence of strenuous physical stress on the immune system. Specifically, the study observed that the increase in CD8+ T cells after a maximum oxygen uptake test (VO2max) was significantly less in the meditation group than in the control group. This suggests that meditation may have an impact on the immune system's response to intense physical exertion (PUBMED:8808540).
Instruction: 'Are we there yet? Abstracts: abstract_id: PUBMED:30197212 Relevance of phenotypic information for the taxonomy of not-yet-cultured microorganisms. To date, far less than 1% of the estimated global species of Bacteria and Archaea have been described and their names validly published. Aside from these quantitative limitations, our understanding of the phenotypic and functional diversity of prokaryotes is also highly biased, as not a single species has been described for 85 of the 118 phyla that are currently recognized. Due to recent advances in sequencing technology and capacity, metagenomic datasets accumulate at an increasing speed and new bacterial and archaeal genome sequences become available at a faster rate than newly described species. The growing gap between the diversity of Bacteria and Archaea held in pure culture and that detected by molecular methods has led to the proposal to establish a formal nomenclature for not-yet-cultured taxa primarily based on sequence information. According to this proposal, the concept of Candidatus species would be extended to groups of closely related genome sequences and their names validly published following established rules of bacterial nomenclature. The corresponding sequences would be deposited in public databases as the type. The suggested alterations of the International Code of Nomenclature of Prokaryotes raise concerns regarding (1) the reliability and stability of nomenclature, (2) the technological and conceptual limitations as well as availability of reference genomes, (3) the information content of in silico functional predictions, and (4) the recognition of evolutionary units of microbial diversity. These challenges need to be overcome to arrive at a meaningful taxonomy of not-yet-cultured prokaryotes with so far poorly understood phenotypes. abstract_id: PUBMED:31238973 Mining traits for the enrichment and isolation of not-yet-cultured populations. Background: The lack of pure cultures limits our understanding of 99% of bacteria. Proper interpretation of genetic and transcriptional datasets can reveal clues for the enrichment and even isolation of the not-yet-cultured populations. Unraveling such information requires a proper mining method. Results: Here, we present a method to infer the hidden traits for the enrichment of not-yet-cultured populations. We demonstrate this method using Candidatus Accumulibacter. Our method constructs a whole picture of the carbon, electron, and energy flows in the not-yet-cultured populations from the genomic datasets. Then, it decodes the coordination across the three flows from the transcriptional datasets. Based on this, our method diagnoses the status of the not-yet-cultured populations and provides strategies to optimize the enrichment systems. Conclusion: Our method could shed light on the exploration of the bacterial dark matter in the environment. abstract_id: PUBMED:34796698 "Porous and Yet Dense" Electrodes for High-Volumetric-Performance Electrochemical Capacitors: Principles, Advances, and Challenges. With the ever-rapid miniaturization of portable, wearable electronics and the Internet of Things, volumetric performance is becoming a much more pertinent figure-of-merit than the conventionally used gravimetric parameters for evaluating the charge-storage capacity of electrochemical capacitors (ECs). Thus, it is essential to design ECs that can store as much energy as possible within a limited space.
As the most critical component in ECs, "porous and yet dense" electrodes with large ion-accessible surface area and optimal packing density are crucial to realizing the desired high volumetric performance, which has proven to be rather challenging. In this review, the principles and fundamentals of ECs are first reviewed, focusing on the key understandings of the different charge storage mechanisms in porous electrodes. The latest advances in high-volumetric-performance ECs, developed through the rational design and fabrication of "porous and yet dense" electrodes, are then examined. Particular emphasis is then placed on the key factors impacting the volumetric performance of porous carbon-based electrodes. Finally, the current challenges, further perspectives, and opportunities for such purposely engineered porous electrodes for high-volumetric-performance ECs are presented, aiming to provide a set of guidelines for the further design of next-generation energy storage devices. abstract_id: PUBMED:37428219 Temperature-Mediated Phase Separation Enables Strong yet Reversible Mechanical and Adhesive Hydrogels. Hydrogels with strong yet reversible mechanical and adhesive properties fabricated in a facile and friendly manner are important for engineering and intelligent electronics applications but are challenging to create and control. Existing approaches for preparing hydrogels involve complicated pretreatments and produce hydrogels that suffer from limited skin applicability. Copolymerized hydrogels are expected to present an intriguing target in this field by means of thermoresponsive features, while the perceived intrinsic flaws of brittleness, easy fracture, and weak adhesion enervate the development prospects. Herein, we report a hydrogel with strong yet reversible mechanical and adhesive properties using cellulose nanofibrils to simultaneously address multiple dilemmas, inspired by a temperature-mediated phase separation strategy. This strategy applies temperature-driven formation and dissociation of hydrogen bonds between common copolymers and cellulose nanofibrils to trigger the onset and termination of phase separation for dynamically reversible on-demand properties. The resulting hydrogel exhibits up to 96.0% (117.2 J/m² vs 4.8 J/m² for interfacial toughness) and 85.7% (0.02 MPa vs 0.14 MPa for mechanical stiffness) adhesive and mechanical tunability when applied on skin, respectively. Our strategy offers a promising, simple, and efficient way to directly achieve robust adhesion performance in one step using common copolymers and biomass resources, with implications that could go beyond strong yet adhesive hydrogels. abstract_id: PUBMED:23727053 SYFSA: a framework for systematic yet flexible systems analysis. Although technological or organizational systems that enforce systematic procedures and best practices can lead to improvements in quality, these systems must also be designed to allow users to adapt to the inherent uncertainty, complexity, and variations in healthcare. We present a framework, called Systematic Yet Flexible Systems Analysis (SYFSA), that supports the design and analysis of Systematic Yet Flexible (SYF) systems (whether organizational or technical) by formally considering the tradeoffs between systematicity and flexibility. SYFSA is based on analyzing a task using three related problem spaces: the idealized space, the natural space, and the system space.
The idealized space represents the best practice-how the task is to be accomplished under ideal conditions. The natural space captures the task actions and constraints on how the task is currently done. The system space specifies how the task is done in a redesigned system, including how it may deviate from the idealized space, and how the system supports or enforces task constraints. The goal of the framework is to support the design of systems that allow graceful degradation from the idealized space to the natural space. We demonstrate the application of SYFSA for the analysis of a simplified central line insertion task. We also describe several information-theoretic measures of flexibility that can be used to compare alternative designs, and to measure how efficiently a system supports a given task, the relative cognitive workload, and learnability. abstract_id: PUBMED:24194484 The "as-yet situation" in winnicott's "fragment of an analysis": your father "never did you the honor of"… yet. The author proposes a new hypothesis in relation to Winnicott's "Fragment of an Analysis": that as early as 1955, in the case described in this text, Winnicott is creating the paternal function in his patient's psychic functioning by implicitly linking his interpretations regarding the father to the Freudian concept of Nachträglichkeit. The author introduces an original clinical concept, the as-yet situation, which she has observed in her own clinical work, as well as in Winnicott's analysis of the patient described in "Fragment of an Analysis" (1955). abstract_id: PUBMED:29659479 Oral Dysbiotic Communities and Their Implications in Systemic Diseases. The human body supports the growth of a wide array of microbial communities in various niches such as the oral cavity, gastro-intestinal and urogenital tracts, and on the surface of the skin. These host associated microbial communities include yet-un-cultivable bacteria and are influenced by various factors. Together, these communities of bacteria are referred to as the human microbiome. Human oral microbiome consists of both symbionts and pathobionts. Deviation from symbiosis among the bacterial community leads to &amp;ldquo;dysbiosis&amp;rdquo;, a state of community disturbance. Dysbiosis occurs due to many confounding factors that predispose a shift in the composition and relative abundance of microbial communities. Dysbiotic communities have been a major cause for many microbiome related systemic infections. Such dysbiosis is directed by certain important pathogens called the &amp;ldquo;keystone pathogens&amp;rdquo;, which can modulate community microbiome variations. One such persistent infection is oral infection, mainly periodontitis, where a wide array of causal organisms have been implied to systemic infections such as cardio vascular disease, diabetes mellitus, rheumatoid arthritis, and Alzheimer&amp;rsquo;s disease. The keystone pathogens co-occur with many yet-cultivable bacteria and their interactions lead to dysbiosis. This has been the focus of recent research. While immune evasion is one of the major modes that leads to dysbiosis, new processes and new virulence factors of bacteria have been shown to be involved in this important process that determines a disease or health state. This review focuses on such dysbiotic communities, their interactions, and their virulence factors that predispose the host to other systemic implications. 
abstract_id: PUBMED:35467387 Two-Dimensional Cell Separation: a High-Throughput Approach to Enhance the Culturability of Bacterial Cells from Environmental Samples. Culture-independent sequence data from various environmental samples have revealed an immense microbial diversity of environmental, clinical, and industrial importance that has not yet been cultured. Cultivation is imperative to validate findings emerging from cultivation-independent molecular data and exploit the isolated organisms for biotechnological purposes. Efforts have been made to boost the cultivability of microbes from environmental samples by use of a range of techniques and instrumentation. The manuscript presents a novel yet simple and innovative approach to improving the cultivability of natural microorganisms without sophisticated instrumentation. By employing gradient centrifugation combined with serial dilution ("two-dimensional cell separation"), significantly higher numbers of genera (&gt;2-fold higher) and species (&gt;3-fold higher) were isolated from environmental samples, including soil, anaerobic sludge, and landfill leachate, than from using serial dilution alone. This simple and robust protocol can be modified for any environment and culture medium and provides access to untapped microbial diversity. IMPORTANCE In the manuscript, we have developed a novel yet simple and innovative approach to improving the cultivability of natural microorganisms without sophisticated instrumentation. The method used gradient centrifugation combined with serial dilution (two-dimensional cell separation) to improve taxum recovery from samples. This simple and robust protocol can be modified for any environment and culture medium and provides access to untapped microbial diversity. This approach can be incorporated with less labor and complexity in laboratories with minimal instrumentation. As cultivation is a workflow that is well suited to lower-resource microbiology labs, we believe improvements in cultivability can increase opportunities for scientific collaborations between low-resource labs and groups focused on high-resource cultivation-independent methodologies. abstract_id: PUBMED:21954394 Cultivating uncultured bacteria from northern wetlands: knowledge gained and remaining gaps. Northern wetlands play a key role in the global carbon budget, particularly in the budgets of the greenhouse gas methane. These ecosystems also determine the hydrology of northern rivers and represent one of the largest reservoirs of fresh water in the Northern Hemisphere. Sphagnum-dominated peat bogs and fens are the most extensive types of northern wetlands. In comparison to many other terrestrial ecosystems, the bacterial diversity in Sphagnum-dominated wetlands remains largely unexplored. As demonstrated by cultivation-independent studies, a large proportion of the indigenous microbial communities in these acidic, cold, nutrient-poor, and water-saturated environments is composed of as-yet-uncultivated bacteria with unknown physiologies. Most of them are slow-growing, oligotrophic microorganisms that are difficult to isolate and to manipulate in the laboratory. Yet, significant breakthroughs in cultivation of these elusive organisms have been made during the last decade. This article describes the major prerequisites for successful cultivation of peat-inhabiting microbes, gives an overview of the currently captured bacterial diversity from northern wetlands and discusses the unique characteristics of the newly discovered organisms. 
abstract_id: PUBMED:36149751 Heterogeneous Nanostructures Cause Anomalous Diffusion in Lipid Monolayers. The diffusion and mobility in biomembranes are crucial for various cell functions; however, the mechanisms involved in such processes remain ambiguous due to the complex membrane structures. Herein, we investigate how the heterogeneous nanostructures cause anomalous diffusion in dipalmitoylphosphatidylcholine (DPPC) monolayers. By identifying the existence of condensed nanodomains and clarifying their impact, our findings renew the understanding of the hydrodynamic description and the statistical feature of the diffusion in the monolayers. We find a universal characteristic of the multistage mean square displacement (MSD) with an intermediate crossover, signifying two membrane viscosities at different scales: the short-time scale describes the local fluidity and is independent of the nominal DPPC density, and the long-time scale represents the global continuous phase taking into account nanodomains and increases with DPPC density. The constant short-time viscosity reflects a dynamic equilibrium between the continuous fluid phase and the condensed nanodomains in the molecular scale. Notably, we observe an "anomalous yet Brownian" phenomenon exhibiting an unusual double-peaked displacement probability distribution (DPD), which is attributed to the net dipolar repulsive force from the heterogeneous nanodomains around the microdomains. The findings provide physical insights into the transport of membrane inclusions that underpin various biological functions and drug deliveries. Answer: Based on the provided abstracts, the question "Are we there yet?" in the context of various scientific fields suggests an ongoing journey towards understanding and achieving certain goals, but with the acknowledgment that there is still progress to be made. In the field of microbiology, the taxonomy of not-yet-cultured microorganisms is still being developed, with less than 1% of estimated global species described and validly published, and no species described for 85 of the 118 recognized phyla (PUBMED:30197212). Similarly, the enrichment and isolation of not-yet-cultured populations remain a challenge, with new methods being proposed to infer hidden traits from genomic and transcriptional datasets to optimize enrichment systems (PUBMED:31238973). In materials science, the design of "porous and yet dense" electrodes for high-volumetric-performance electrochemical capacitors is a challenge that researchers are actively working to overcome (PUBMED:34796698). Likewise, the development of strong yet reversible mechanical and adhesive hydrogels for engineering and intelligent electronics applications is an ongoing endeavor (PUBMED:37428219). In systems analysis, the SYFSA framework represents an approach to designing systems that balance systematic procedures with the flexibility needed to adapt to healthcare's inherent uncertainty and complexity (PUBMED:23727053). In psychoanalysis, the concept of the "as-yet situation" reflects an evolving understanding of the paternal function in psychic functioning (PUBMED:24194484). In the study of the human microbiome, understanding the implications of oral dysbiotic communities in systemic diseases is an area where knowledge is still being gained, with the interactions between keystone pathogens and yet-cultivable bacteria being a focus of recent research (PUBMED:29659479). 
In environmental microbiology, efforts to improve the cultivability of bacteria from northern wetlands and other environments continue, with significant progress made but gaps remaining in our understanding of the indigenous microbial communities (PUBMED:21954394). The novel approach of two-dimensional cell separation is one such effort to enhance the cultivability of bacterial cells from environmental samples (PUBMED:35467387). Finally, in biophysics, the study of anomalous diffusion in lipid monolayers due to heterogeneous nanostructures is an area where new insights are being discovered, but the full understanding of membrane transport mechanisms is not yet complete (PUBMED:36149751).
Instruction: Balanced orbital decompression combined with fat removal in Graves ophthalmopathy: do we really need to remove the third wall? Abstracts: abstract_id: PUBMED:12644756 Balanced orbital decompression combined with fat removal in Graves ophthalmopathy: do we really need to remove the third wall? Purpose: To compare the reduction of proptosis and the incidence of new-onset diplopia after 3-wall (medial, lateral, and inferior) orbital decompression versus balanced medial and lateral wall decompression combined with orbital fat excision in patients with Graves ophthalmopathy. Methods: Three-wall orbital decompression including medial, inferior, and lateral walls was performed in 13 eyes of 7 patients (group 1), and balanced medial and lateral wall decompression combined with fat removal was performed in 18 eyes of 11 patients (group 2). A transnasal endoscopic approach was used for medial wall removal. A lateral canthotomy incision combined with a short upper eyelid incision was used for extended lateral wall removal, and this was combined with an inferior conjunctival fornix incision when floor decompression was performed. Results: The mean reduction of proptosis was 6.9+/-1.6 mm and 6.5+/-1.3 mm in the first and second groups, respectively; the difference was not statistically significant (P=0.37). After 3-wall decompression, 57.1% of the patients had permanent new-onset diplopia (group 1), whereas none of the patients had permanent postoperative diplopia after balanced medial and lateral wall decompression combined with fat removal (group 2). The difference in permanent new-onset postoperative diplopia between two groups was statistically significant (P&lt;0.001). Conclusions: Balanced medial and lateral wall decompression combined with orbital fat removal provides an effective reduction in proptosis and reduces the incidence of postoperative permanent diplopia when compared with 3-wall decompression. This technique may eliminate the need for orbital floor excision. abstract_id: PUBMED:25835075 Endoscopical Orbital Fat Decompression with Medial Orbital Wall Decompression for Dysthyroid Optic Neuropathy. Objective: To describe a novel approach to orbital decompression for dysthyroid optic neuropathy (DON). Methods: An augmented endoscopic transethmoid medial orbital wall decompression (ETMOWD) was performed on 43 consecutive patients (72 eyes) with DON in order to access the orbital apex adequately. Concurrently, endoscopic transethmoid fat decompression (ETFD) using a novel aspiration/cutting instrument to remove orbital fat was performed for further reduction of proptosis. All patients were followed up periodically for at least 6 months. Outcomes such as improvement of visual acuity (VA), color vision, degree of proptosis reduction as well as the incidence of surgical induced diplopia were analyzed at the final review. Results: Sixty-nine out of 72 eyes (95.8%) had a statistically significant improvement in VA from -0.65 ± 0.30 to -0.25 ± 0.22 postoperatively, with a mean improvement of 0.55 ± 0.17 (p &lt; 0.001). Thirty-four out of 45 eyes had an improvement in color vision (p &lt; 0.001). The range of proptosis reduction was 4 to 9 mm (mean 6.2 ± 1.2 mm). Postoperative symmetry was achieved to within 2 mm using an exophthalmometer in 39 of 43 patients (90.7%). Five patients developed diplopia in the postoperative phase, but had complete resolution within 3 months. Two patients had further deterioration in their diplopia following surgery. 
Conclusion: Our technique of a combined ETFD with ETMOWD appears to be effective in managing patients with DON, with minimal morbidities and a low incidence of postoperative diplopia. abstract_id: PUBMED:29618929 Original endoscopic orbital decompression of lateral wall through hairline approach for Graves' ophthalmopathy: an innovation of balanced orbital decompression. Background: Orbital decompression is an important surgical procedure for treatment of Graves' ophthalmopathy (GO), especially in women. It is reasonable for balanced orbital decompression of the lateral and medial wall. Various surgical approaches, including endoscopic transnasal surgery for medial wall and eye-side skin incision surgery for lateral wall, are being used nowadays, but many of them lack the validity, safety, or cosmetic effect. Patients And Methods: Endoscopic orbital decompression of lateral wall through hairline approach and decompression of medial wall via endoscopic transnasal surgery was done to achieve a balanced orbital decompression, aiming to improve the appearance of proptosis and create conditions for possible strabismus and eyelid surgery afterward. From January 29, 2016 to February 14, 2017, this surgery was performed on 41 orbits in 38 patients with GO, all of which were at inactive stage of disease. Just before surgery and at least 3 months after surgery, Hertel's ophthalmostatometer and computed tomography (CT) were used to check proptosis and questionnaires of GO quality of life (QOL) were completed. Findings: The postoperative retroversion of eyeball was 4.18±1.11 mm (Hertel's ophthalmostatometer) and 4.17±1.14 mm (CT method). The patients' QOL was significantly improved, especially the change in appearance without facial scar. The only postoperative complication was local soft tissue depression at temporal region. Obvious depression occurred in four cases (9.76%), which can be repaired by autologous fat filling. Interpretation: This surgery is effective, safe, and cosmetic. Effective balanced orbital decompression can be achieved by using this original and innovative surgery method. The whole manipulation is safe and controllable under endoscope. The postoperative scar of endoscopic surgery through hairline approach is covered by hair and the anatomic structure of anterior orbit is not impacted. abstract_id: PUBMED:24793198 Surgical effects of various orbital decompression methods in thyroid-associated orbitopathy: computed tomography-based comparative analysis. Objectives: To evaluate the surgical effects of orbital fat decompression and bony decompression in each orbital wall using computed tomography (CT) in thyroid-associated orbitopathy (TAO). Methods: In 27 TAO patients (48 orbits) with exophthalmos who underwent orbital wall decompression combined with fatty decompression, we recorded the resected orbital fat volume intraoperatively and estimated the decompression volume of the orbital wall in the deep lateral, medial and inferior walls using postoperative orbit CT images. Then, the correlation between exophthalmos reduction by Hertel reading and decompression volume in each area was analyzed to validate the surgical predictability, surgical efficiency and contribution level to total exophthalmos reduction. Results: The decompression volume in orbital fat and the deep lateral wall showed relatively high correlation with exophthalmos reduction (surgical predictability) compared to medial and inferior wall. 
The surgical efficiency was highest at deep lateral wall (2.704 ± 0.835 mm/cm(3)), followed by medial wall (0.892 ± 0.527 mm/cm(3)), orbital fat (0.638 ± 0.178 mm/cm(3)) and inferior wall (0.405 ± 0.996 mm/cm(3)). The actual contribution level to total exophthalmos reduction was highest in fatty decompression, followed by deep lateral decompression. Conclusion: In TAO patients with exophthalmos, orbital fat and deep lateral orbital wall are more predictable and contributory surgical targets for postsurgical exophthalmos reduction. abstract_id: PUBMED:26932949 Orbital Decompression in the Endoscopic Age: The Modified Inferomedial Orbital Strut. Objective: Postoperative diplopia occurs in up to 45% of patients following orbital decompression for exophthalmos associated with Graves' orbitopathy. We sought to describe outcomes of our balanced orbital decompression strategy that includes the preservation of a modified inferomedial orbital strut (mIOS). Study Design: Case series with chart review. Setting: Academic medical center. Subjects And Methods: A total of 115 consecutive orbital decompressions were performed on 73 patients (42 bilateral) with Graves' orbitopathy. All patients underwent (1) a balanced decompression technique incorporating an endoscopic medial and external lateral decompression and (2) a mIOS technique with preservation of the anterior half of the inferomedial orbital strut. A periorbital periosteal (orbital) sling was utilized in patients (n = 54) without threatened vision loss, proptosis >28 mm, or periorbital disruption to prevent prolapse of the medial rectus muscle. Results: Utilization of the mIOS technique with or without a sling did not adversely affect the reduction in proptosis (5.1 mm with sling vs 5.0 mm without sling; P = .85). The incidence of new-onset postoperative diplopia was 17% (n = 6). The sling was not associated with postoperative diplopia (odds ratio = 0.54, 95% confidence interval: 0.08-3.40, P = .51), while it was associated with resolution of preexisting diplopia (odds ratio = 6.67, 95% confidence interval: 1.06-42.06, P = .04). No intraoperative complications occurred, and no patients suffered a decrement in visual acuity. Conclusion: Balanced orbital decompression utilizing a mIOS in patients with Graves' orbitopathy provides a safe and effective reduction in proptosis with a low rate of new-onset diplopia as compared with historical values. Utilization of an orbital sling may be beneficial in reducing postoperative diplopia in select patients. abstract_id: PUBMED:31691668 Bony orbital decompression in the treatment of endocrine ophthalmopathy. The article reviews the problems of bony orbital decompression for thyroid-related orbitopathy. Indications for this kind of surgery and different approaches are discussed. Advantages and limitations of each method are analyzed. Special attention is paid to complications of different bony orbital decompression approaches, the complication rate and its efficacy. abstract_id: PUBMED:28255288 Orbital Decompression for Thyroid Eye Disease. Although most cases of thyroid eye disease (TED) can be managed medically, some refractory or severe cases are treated surgically with orbital decompression. Due to a lack of randomized controlled trials comparing surgical techniques for orbital decompression, none have been deemed superior. Thus, each case of TED is managed based on patient characteristics and surgeon experience.
Surgical considerations include the extent of bony wall removal, the surgical approach, the choice of incision, and the use of fat decompression. Outcomes vary based on surgical indications and techniques; hence, vision can improve or worsen after the surgery. abstract_id: PUBMED:34164724 Extraocular muscle expansion after deep lateral orbital wall decompression: influence on proptosis reduction and its predictive factors. Purpose: To examine the relationship between extraocular muscle expansion and proptosis reduction (Δproptosis) in patients with thyroid eye disease who underwent deep lateral orbital wall decompression and to analyze the factors that contribute to extraocular muscle expansion. Methods: This retrospective, observational study included 133 sides from 77 patients with thyroid eye disease. The cross-sectional areas of the greater wing of the sphenoid bone (trigone), extraocular muscles, and superior ophthalmic vein were measured on computed tomographic images. Variables influencing Δproptosis were analyzed using multivariate linear regression analyses with stepwise variable selection. Predictive factors for the rate of postoperative increase in the cross-sectional extraocular muscle areas (Δextraocular muscle) were analyzed using the same statistical method. Results: The amount of orbital fat removed (P < 0.001) and rate of Δlateral rectus muscle (P < 0.001) were positively and negatively correlated with Δproptosis, respectively (r = 0.425; adjusted r² = 0.168; P < 0.001). The cross-sectional trigone area (P < 0.001) was positively correlated with the rate of Δlateral rectus muscle, whereas the preoperative cross-sectional lateral rectus muscle area (P < 0.001) and amount of orbital fat removed (P = 0.036) were negatively correlated with the rate of Δlateral rectus muscle (r = 0.551; adjusted r² = 0.288; P < 0.001). Conclusion: Lateral rectus muscle expansion was negatively correlated with proptosis reduction and proved to be predictable before surgery. The results of this study will help predict proptosis reduction after deep lateral orbital wall decompression and to preoperatively plan additional orbital bony and fat decompression. abstract_id: PUBMED:32627588 Orbital anatomical parameters affecting outcome of deep lateral orbital wall decompression. Purpose: To determine orbital anatomical parameters that affect surgical outcome of deep lateral orbital wall decompression. Methods: Twenty orbits of 20 patients with moderate proptosis (maximum 25 mm) due to thyroid eye disease who were stable for at least 6 months were included in this prospective cohort study. Four parameters including lateral orbital wall distance (LOWD), removable surface area (RSA), removable surface length (RSL), and sphenoid door jamb thickness (SDJ) were evaluated by computed tomography (CT) scan prior to surgery in these patients. Deep lateral orbital wall decompression without fat removal was done in all patients. Relationship between all these factors and proptosis reduction, 6 months after surgery, was assessed. Results: Mean proptosis before decompression surgery (mean ± SD) was 23.22 ± 1.19 mm and reduction in Hertel exophthalmometry (mean ± SD) was -3.27 ± 1.03 mm (p = 0.006). Pearson correlation showed that LOWD, RSL, and SDJ had statistically significant correlation with proptosis reduction (p = 0.017, 0.002, and 0.001, respectively). Linear multivariate regression analysis showed SDJ as the only independent factor in predicting surgical outcome.
Conclusion: Several orbital anatomical factors have a significant role in predicting surgical outcome following deep lateral wall orbital decompression. Among these parameters, SDJ is an independent factor, whereas LOWD and RSL are dependent factors. Preoperative assessment of the bony sphenoid anatomy may be helpful in predicting the amount of potential decompression. abstract_id: PUBMED:27051255 Surgical Outcomes of Balanced Deep Lateral and Medial Orbital Wall Decompression in Korean Population: Clinical and Computed Tomography-based Analysis. Purpose: To evaluate the clinical outcomes of balanced deep lateral and medial orbital wall decompression and to estimate surgical effects using computed tomography (CT) images in Korean patients with thyroid-associated ophthalmopathy (TAO). Methods: Retrospective chart review was conducted in TAO patients with exophthalmos who underwent balanced deep lateral and medial orbital wall decompression. Exophthalmos was measured preoperatively and postoperatively at 1 and 3 months. Postoperative complications were evaluated in all study periods. In addition, decompressed bone volume was estimated using CT images. Thereafter, decompression volume in each decompressed orbital wall was analyzed to evaluate the surgical effect and predictability. Results: Twenty-four patients (48 orbits) with an average age of 34.08 ± 7.03 years were evaluated. The mean preoperative and postoperative exophthalmos at 1 and 3 months was 18.91 ± 1.43, 15.10 ± 1.53, and 14.91 ± 1.49 mm, respectively. Bony decompression volume was 0.80 ± 0.29 cm(3) at the medial wall and 0.68 ± 0.23 cm(3) at the deep lateral wall. Postoperative complications included strabismus (one patient, 2.08%), upper eyelid fold change (four patients, 8.33%), and dysesthesia (four patients, 8.33%). Postsurgical exophthalmos reduction was more highly correlated with the deep lateral wall than the medial wall. Conclusions: In TAO patients with exophthalmos, balanced deep lateral and medial orbital wall decompression is a good surgical method with a low risk of complications. In addition, deep lateral wall decompression has higher surgical predictability than medial wall decompression, as seen with CT analysis. Answer: Balanced orbital decompression combined with fat removal in Graves ophthalmopathy has been shown to be an effective method for reducing proptosis and minimizing the incidence of postoperative permanent diplopia. The question of whether the removal of the third wall is necessary has been addressed in several studies. One study compared the outcomes of three-wall decompression (medial, lateral, and inferior walls) with balanced medial and lateral wall decompression combined with orbital fat excision. The results indicated that the mean reduction of proptosis was similar between the two groups, but the incidence of permanent new-onset diplopia was significantly higher in the three-wall decompression group (57.1%) compared to none in the group that underwent balanced decompression with fat removal. This suggests that balanced decompression with fat removal may reduce the need for orbital floor excision while still providing effective proptosis reduction and a lower risk of diplopia (PUBMED:12644756). Other studies have also supported the effectiveness of balanced decompression techniques.
For instance, a novel approach combining endoscopic transethmoid medial orbital wall decompression with endoscopic transethmoid fat decompression showed significant improvement in visual acuity and color vision, with a mean proptosis reduction of 6.2 ± 1.2 mm and a low incidence of postoperative diplopia (PUBMED:25835075). Similarly, an original endoscopic orbital decompression of the lateral wall through a hairline approach, combined with medial wall decompression, resulted in significant improvement in patients' quality of life and appearance without facial scarring, with a mean retroversion of the eyeball of 4.18±1.11 mm (PUBMED:29618929). Furthermore, a study evaluating the surgical effects of various orbital decompression methods found that the decompression volume in orbital fat and the deep lateral wall showed a relatively high correlation with exophthalmos reduction, suggesting that these areas are more predictable and contributory surgical targets for postsurgical exophthalmos reduction (PUBMED:24793198). In conclusion, the evidence suggests that balanced medial and lateral wall decompression combined with orbital fat removal can effectively reduce proptosis and minimize the risk of new-onset diplopia, potentially eliminating the need to remove the third wall (orbital floor) in patients with Graves ophthalmopathy.
Instruction: Is there an incremental rise in the risk of obstetric intervention with increasing maternal age? Abstracts: abstract_id: PUBMED:9800928 Is there an incremental rise in the risk of obstetric intervention with increasing maternal age? Objective: To determine whether increasing maternal age increases the risk of operative delivery and to investigate whether such a trend is due to fetal or maternal factors. Design: Analysis of prospectively collected data on a maternity unit database. Setting: A postgraduate teaching hospital. Population: 6410 nulliparous women with singleton cephalic pregnancies delivering at term (37-42 weeks of gestation) between 1 January 92 and 31 December 95. Main Outcome Measures: Mode of delivery, rates of prelabour caesarean section, induction of labour and epidural usage. Results: There was a positive, highly significant association between increasing maternal age and obstetric intervention. Prelabour (P < 0.001) and emergency (P < 0.001) caesarean section, instrumental vaginal delivery (spontaneous labour P < 0.001; induced labour P = 0.001), induction of labour (P < 0.001) and epidural usage in spontaneous labour (P = 0.005) all increased with increasing age. In the second stage of labour, fetal distress and failure to advance, requiring instrumental delivery, were both more likely with increasing maternal age (in both P < 0.001). Epidural usage in induced labour and the incidence of small for gestational age newborns did not increase with increasing maternal age (P = 0.68 and P = 0.50, respectively). Conclusions: This study demonstrates that increasing maternal age is associated with an incremental increase in obstetric intervention. Previous studies have demonstrated a significant effect in women older than 35 years of age, but these data show changes on a continuum from teenage years. This finding may reflect a progressive, age-related deterioration in myometrial function. abstract_id: PUBMED:27131580 The association between maternal age at first delivery and risk of obstetric trauma. Background: There are a number of poor birth outcomes with advancing maternal age. Although there is some evidence of a higher risk of trauma to the obstetric anal sphincter and the levator ani muscle with advancing age, findings to date are inconclusive. Objective: The aim of this study was to assess the risk of pelvic floor injury using translabial 3- and 4-dimensional ultrasound relative to advancing maternal age in primiparous women after a singleton vaginal delivery at term and to determine any association between maternal age and obstetric trauma, including obstetric anal sphincter injuries, levator avulsion, and irreversible overdistension of the levator hiatus. Study Design: This is a subanalysis of a perinatal intervention trial conducted in a specialist urogynecology referral unit at 2 tertiary units. All primiparous women with singleton birth at term underwent 3- and 4-dimensional translabial pelvic floor ultrasound both ante- and postnatally for the assessment of the obstetric trauma including levator ani muscle avulsion, hiatal overdistension to 25 cm(2) or more, and obstetric anal sphincter injuries. A multivariate logistic regression analysis was performed to examine the association between maternal age and obstetric trauma diagnosed on 3- and 4-dimensional translabial ultrasound. Multiple confounders were included, and the most significant (forceps and vacuum delivery) were used for probability modeling.
Results: Of 660 women recruited for the original study, a total of 375 women who had a vaginal delivery with complete data sets were analyzed. A total of 174 women (46.4%) showed evidence of at least 1 form of major pelvic floor trauma. Advancing maternal age at first delivery carries with it a significant incremental risk of major pelvic floor trauma, with an odds ratio of 1.064 for overall risk of injury for each increasing year of age past age 18 years (P = .003). The probability of any type of trauma appears to be substantially higher for forceps delivery. Vacuum delivery appears to increase the risk of obstetric anal sphincter injuries but not of levator avulsion. Conclusion: There is a significant association between the risk of major pelvic floor injury and increasing maternal age at first delivery. abstract_id: PUBMED:31736510 Influence of Maternal Age on Selected Obstetric Parameters. Introduction: In recent decades, there has been a continuous rise in the average age at which women give birth. A maternal age of 35 years and above is considered an independent risk factor in pregnancy and birth, due to higher rates of intervention. This study investigates the influence of maternal age on birth procedure, gestational age, and rate of interventions during delivery. The influence of maternal parity is also analyzed. Material and Methods: Data from the Austrian Register of Births were retrospectively collected and evaluated. The collected data comprised all singleton live births in Austria between January 1, 2008 and December 31, 2016 (n = 686 272). Multiple births and stillbirths were excluded from the study. Maternal age and parity were analyzed in relation to predefined variables (birth procedure, gestational age, episiotomy in cases of vaginal delivery, epidural anesthesia in both vaginal and cesarean deliveries, and intrapartum micro-blood gas analysis). Statistical data were evaluated using (1) descriptive univariate analysis, (2) bivariate analysis, and (3) multinomial regression models. Results: The cesarean section rate and the rate of surgically-assisted vaginal deliveries increased with advancing maternal age, especially in primiparous women, while the rate of spontaneous deliveries decreased with increasing maternal age. A parity of ≥ 2 had a protective effect on the cesarean section rate. The rate of premature births also increased with increasing maternal age, particularly among primiparous women. Discussion: Although higher maternal age has a negative effect on various obstetric parameters, it was nevertheless not possible to identify a causal connection. Maternal age should not be assessed as an independent risk factor; other factors such as lifestyle or prior chronic disease and parity must be taken into consideration. abstract_id: PUBMED:15512544 The rise in obstetric intervention with maternal age: a continuous phenomenon. Population-based data from the Cardiff Births Survey has been used to examine the trends in obstetric intervention with increasing maternal age. As maternal age increases, spontaneous vaginal delivery rates fall, while instrumental delivery rates, elective and emergency caesarean section, induction of labour and epidural rates rise. Induction of labour and epidural in labour are both associated with decreased spontaneous vaginal delivery rates and an increase in instrumental delivery rates and emergency caesarean section. abstract_id: PUBMED:15863534 Impact of maternal age on obstetric outcome.
Objective: To estimate the effect of maternal age on obstetric outcomes. Methods: A prospective database from a multicenter investigation of singletons, the FASTER trial, was studied. Subjects were divided into 3 age groups: 1) less than 35 years, 2) 35-39 years, and 3) 40 years and older. Multivariable logistic regression analysis was used to assess the effect of age on outcomes after adjusting for race, parity, body mass index, education, marital status, smoking, medical history, use of assisted conception, and patient's study site. Results: A total of 36,056 women with complete data were available: 28,398 (79%) less than 35 years of age; 6,294 (17%) 35-39 years; and 1,364 (4%) 40 years and older. Increasing age was significantly associated with miscarriage (adjusted odds ratio [adjOR] 2.0 and 2.4 for ages 35-39 years and age 40 years and older, respectively), chromosomal abnormalities (adjOR 4.0 and 9.9), congenital anomalies (adjOR 1.4 and 1.7), gestational diabetes (adjOR 1.8 and 2.4), placenta previa (adjOR 1.8 and 2.8), and cesarean delivery (adjOR 1.6 and 2.0). Patients aged 35-39 years were at increased risk for macrosomia (adjOR 1.4). Increased risk for abruption (adjOR 2.3), preterm delivery (adjOR 1.4), low birth weight (adjOR 1.6), and perinatal mortality (adjOR 2.2) was noted in women aged 40 years and older. Conclusion: Increasing maternal age is independently associated with specific adverse pregnancy outcomes. Increasing age is a continuum rather than a threshold effect. abstract_id: PUBMED:36897311 Increases in Obstetric Interventions and Changes in Gestational Age Distributions of U.S. Births. Objective: To examine how changes in induction of labor (IOL) and cesarean deliveries between 1990 and 2017 affected gestational age distributions of births in the United States. Materials and Methods: Singleton first births were drawn from the National Vital Statistics System Birth Data for years 1990-2017. Separate analytic samples were created (1) by maternal race/ethnicity (Hispanic, non-Hispanic Black, non-Hispanic Asian, and non-Hispanic white), (2) by maternal age (15-19, 20-24, 25-29, 30-34, 35-39, 40-49), (3) by U.S. states, and (4) for women at low risk for obstetric interventions (e.g., age 20-34, no hypertension, no diabetes, no tobacco use). Gestational age was measured in weeks, and obstetric intervention status was measured as: (1) no IOL, vaginal delivery; (2) no IOL, cesarean delivery; and (3) IOL, all deliveries. The joint probabilities of birth at each gestational week by obstetric intervention status for years 1990-1991, 1998-1999, 2007-2008, and 2016-2017 were estimated. Results: Between 1990 and 2017, the percent of singleton first births occurring between 37 and 39 weeks of gestation increased from 38.5% to 49.5%. The changes were driven by increases in IOL and a shift in the use of cesarean deliveries toward earlier gestations. The changes were observed among all racial/ethnic groups and all maternal ages, and across all U.S. states. The same changes were also observed among U.S. women at low risk for interventions. Conclusion: Changes in gestational age distributions of U.S. births and their underlying causes are likely national-level phenomena and do not appear to be responding to increases in maternal risk for interventions. abstract_id: PUBMED:27716789 Risk of Adverse Obstetric and Neonatal Outcomes by Maternal Age: Quantifying Individual and Population Level Risk Using Routine UK Maternity Data.
Objective: The objective of this study was to investigate whether moderately increased maternal age is associated with obstetric and neonatal outcome in a contemporary population, and to consider the possible role of co-morbidities in explaining any increased risk. Study Design: Secondary analysis of routinely collected data from a large maternity unit in London, UK. Data were available on 51,225 singleton deliveries (≥22 weeks) occurring to women aged ≥20 between 2004 and 2012. Modified Poisson regression was used to estimate risk ratios for the association between maternal age and obstetric and neonatal outcome (delivery type, postpartum haemorrhage, stillbirth, low birthweight, preterm birth, small for gestational age, neonatal unit admission), using the reference group 20-24 years. Population attributable fractions were calculated to quantify the population impact. Results: We found an association between increasing maternal age and major postpartum haemorrhage (≥1000 ml blood loss) (RR 1.36 95% CI 1.18-1.57 for age 25-29 rising to 2.41 95% CI 2.02-2.88 for age ≥40). Similar trends were observed for caesarean delivery, most notably for elective caesareans (RR 1.64 95% CI 1.36-1.96 for age 25-29 rising to 4.94 95% CI 4.09-5.96 for age ≥40). There was evidence that parity modified this association, with a higher prevalence of elective caesarean delivery in older nulliparous women. Women aged ≥35 were at increased risk of low birthweight and preterm birth. We found no evidence that the risk of stillbirth, small for gestational age, or neonatal unit admission differed by maternal age. Conclusions: Our results suggest a gradual increase in the risk of caesarean delivery and postpartum haemorrhage from age 25, persisting after taking into account maternal BMI, hypertension and diabetes. The risk of low birthweight and preterm birth was elevated in women over 35. Further research is needed to understand the reasons behind the high prevalence of elective caesarean delivery in nulliparous older mothers. abstract_id: PUBMED:28915858 Risk of obstetric anal sphincter injury increases with maternal age irrespective of parity: a population-based register study. Background: Obstetric anal sphincter injury (OASI) is a rare but serious outcome of vaginal birth. Based on concerns about the increasing number of women who commence childbearing later than previous generations, this study aimed at investigating age-related risk of OASI in women of different parity. Methods: A population-based register study including 959,559 live singleton vaginal births recorded in the Swedish Medical Birth Register 1999 to 2011. In each parity group risks of OASI at age 25-29 years, 30-34 years, and ≥35 years compared with age < 25 years were investigated by logistic regression analyses, adjusted for year of birth, education, region of birth, smoking, Body Mass Index, infant birthweight and fetal presentation; and in parous women, history of OASI and cesarean section. Additional analyses also adjusted for mediating factors, such as epidural analgesia, episiotomy, and instrumental delivery, and maternal age-related morbidity. Results: Rates of OASI were 6.6%, 2.3% and 0.9% in first, second and third births respectively. Age-related risk increased from 25-29 years in first births (Adjusted OR 1.66; 95% CI 1.59-1.72) and second births (Adjusted OR 1.78; 95% CI 1.58-2.01), and from 30-34 years in third births (Adjusted OR 1.60; 95% CI 1.00-2.56).
In all parity groups the risk was doubled at age ≥ 35 years, compared with the respective reference group of women under 25 years. Adding mediating factors and maternal age-related morbidity only marginally reduced these risk estimates. Conclusion: Maternal age is an independent risk factor for OASI in first, second and third births. Although age-related risks by parity are relatively similar, more nulliparous than parous women will be exposed to OASI due to the higher baseline rate. abstract_id: PUBMED:29458905 Obstetric outcomes of twin pregnancies at advanced maternal age: A retrospective study. Objective: To evaluate obstetric outcomes in twin pregnancies of advanced maternal age (≥35 years). Materials And Methods: A retrospective study involved 470 twin pregnancies in a single center from Sep. 1, 2012 to Mar. 31, 2015. Clinical characteristics and obstetric outcomes were recorded and compared among twin pregnancies who were classified as follows: age 20-29, 30-34, 35-39 and ≥40 years. Results: The incidence of gestational diabetes (age 20-29 years 15.8%; 30-34 years 24.3%; 35-39 years 30.4%; ≥40 years 57.1%; p = 0.004) and premature delivery (20-29 years 58.6%; 30-34 years 69.1%; 35-39 years 72.2%; ≥40 years 85.7%; p = 0.001) significantly increased with increasing age whereas spontaneous abortion (20-29 years 27.6%; 30-34 years 11.6%; 35-39 years 11.4%; ≥40 years 0.0%; p = 0.021) decreased in twin pregnancies of advanced maternal age. In addition, the rate of postpartum hemorrhage increased almost continuously by age and advanced maternal age was described as a risk factor for postpartum hemorrhage (age 35-39, adjusted OR 3.377; 95% confidence interval 1.729-6.598; p < 0.001; age ≥ 40, adjusted OR 10.520; 95% CI 1.147-96.492; p = 0.037). However, there was no significant difference between advanced maternal age and adverse neonatal outcomes. Conclusion: In twin pregnancies, advanced maternal age was associated with a significantly higher risk of postpartum hemorrhage, gestational diabetes and premature delivery. Neither adverse neonatal outcomes nor stillbirth was significantly associated with maternal age. abstract_id: PUBMED:31586783 Combined effects of increasing maternal age and nulliparity on hypertensive disorders of pregnancy and small for gestational age. Background: The mean age of women delivering for the first time is increasing, and this combination could lead to an increased risk of perinatal complications. Objectives: The objective was to evaluate the potential combined effects of nulliparity and increasing maternal age on small for gestational age (SGA < 10th percentile) and hypertensive disorders of pregnancy (HDP). Study Design: A population-based cohort study was conducted using data routinely collected on all births in 11 hospitals in the Burgundy perinatal network between 2007 and 2016. Pregnant women with singleton deliveries aged 20 years or older were included at delivery and divided into groups according to maternal age (20 to 24-year-old group as a reference). Multivariate logistic regression models, adjusted on smoking, body mass index, chronic high blood pressure and birth date, were performed. Results: A total of 137,791 women were included. Whatever the parity, the risks of SGA and HDP increased with maternal age, but the increase began earlier in nulliparous women. Compared to multiparous women, the risk of SGA in nulliparous women increased with maternal age (aOR = 1.5 95% CI [1.4-1.7] for age 20-24 rising to 2.2 [1.8-2.8] for age 40-49).
We found evidence that parity modified the association between maternal age and SGA (test for interaction p < 0.001). The risk of HDP was constantly higher in nulliparous women, whatever the maternal age. Conclusion: The combination of increasing maternal age and nulliparity has a more negative impact on the occurrence of SGA than either risk factor alone. Answer: Yes, there is an incremental rise in the risk of obstetric intervention with increasing maternal age. Studies have shown a positive and highly significant association between increasing maternal age and various forms of obstetric intervention. This includes an increase in prelabour and emergency caesarean sections, instrumental vaginal delivery, induction of labour, and epidural usage with increasing age (PUBMED:9800928). Additionally, the risk of major pelvic floor trauma, including obstetric anal sphincter injuries and levator avulsion, also increases significantly with each additional year of age past 18 years (PUBMED:27131580). Further research indicates that the cesarean section rate and the rate of surgically-assisted vaginal deliveries increase with advancing maternal age, especially in primiparous women, while the rate of spontaneous deliveries decreases with increasing maternal age (PUBMED:31736510). This trend is consistent across different studies, which also report that as maternal age increases, spontaneous vaginal delivery rates fall, while rates of instrumental delivery, elective and emergency caesarean section, induction of labour, and epidural use rise (PUBMED:15512544). Moreover, increasing maternal age is independently associated with specific adverse pregnancy outcomes, including miscarriage, chromosomal abnormalities, congenital anomalies, gestational diabetes, placenta previa, and cesarean delivery (PUBMED:15863534). The risk of obstetric anal sphincter injury has also been shown to increase with maternal age irrespective of parity (PUBMED:28915858). Additionally, advanced maternal age in twin pregnancies is associated with a higher risk of postpartum hemorrhage, gestational diabetes, and premature delivery (PUBMED:29458905). Lastly, the combined effects of increasing maternal age and nulliparity have been found to have a more negative impact on the occurrence of small for gestational age (SGA) and hypertensive disorders of pregnancy (HDP) than either risk factor alone (PUBMED:31586783).
Instruction: Back-carrying infants to prevent developmental hip dysplasia and its sequelae: is a new public health initiative needed? Abstracts: abstract_id: PUBMED:24942071 Back-carrying infants to prevent developmental hip dysplasia and its sequelae: is a new public health initiative needed? Background: Developmental dysplasia of the hip (DDH) is rarely encountered in the native sub-Saharan African population. We present a retrospective review of the incidence of symptomatic DDH in Malawi and a systematic review of the role of back-carrying as a potential influence on prevalence in this population group. Methods: We retrospectively reviewed the diagnosis and management of all infants seen at the Beit CURE International Hospital, Malawi and its mobile clinics, from November 2002 to September 2012. In addition, methodical review of the literature using the Preferred Reporting Items for Systematic Reviews and Meta-analyses checklist and algorithm was performed. Results: A total of 40,683 children aged less than 16 years were managed at our institute over a 10-year period, of which 9842 children underwent surgery. No infant presented with, or underwent surgical intervention, for symptomatic DDH. Conclusions: The majority of mothers in Malawi back-carry their infants during the first 2 to 24 months of life, in a position that is similar to that of the Pavlik harness. We believe this to be the prime reason for the low incidence of DDH in the country. In addition, there is established evidence indicating that swaddling, the opposite position to back-carrying, causes an increase in the incidence of DDH. There is a need for the establishment of a large clinical trial into back-carrying and prevention of DDH in non-African population groups. Level Of Evidence: Level II. abstract_id: PUBMED:34909975 Epidemiology and costs of surgical treatment of developmental dysplasia of hip in the Brazilian Public Health System in a decade. Objective: To describe and analyze the epidemiology and costs of surgical treatment of hip dysplasia in the Brazilian Public Health System. Methods: An ecological analytical study that evaluated a time series and the geographic distribution of surgical treatment of hip dysplasia in Brazil. Frequencies of cases, number of cases and associated factors were analyzed. Correlations, frequency maps and flow maps are presented and discussed. Results: During the study, 14,584 patients with dysplasia were admitted to hospitals according to the Information Technology Department of the Public Health System. Patients underwent hospital treatment specific for dysplasia in 8,592 cases (at an average cost of R$ 2.225,50, total cost of R$ 19.124.086,25 - updated values). In this group, mortality rate was 0.046% and mean hospitalization time was 4.41 days (standard deviation of 2.39 days). Age between 1 and 4 years (37.7%), female sex (64.5%) and white race (46%) were more frequent. Greater rates of specialists (R²=0.82; p<0.001), greater proportion of counties with high/very high human development index (R²=0.79; p<0.001), and higher per capita income (R²=0.68; p<0.001) correlated to greater rates of treatments undertaken per 1,000 live births (as per State of treatment). The factor most related to treatment rate per 1,000 live births (as per State of residence) was white race (R²=0.90; p<0.001). Southern states had higher treatment rates (as per State of residence, rate of 0.73/1,000), and Southeast states had greater absolute frequency of cases (46.7%) and greater flow of patients.
Conclusion: The surgical treatment of hip dysplasia in Brazil occurs frequently, at relevant costs, and is distributed in a heterogeneous and unequal fashion in the Public Health System. Southern states have a higher incidence of cases, and there is an association with racial and socioeconomic factors. There was no large variation in the incidence of cases over time. abstract_id: PUBMED:27698744 Clinical evaluation of ultrasound screening in follow-up visits of infants with cerebral palsy at high risk for developmental dysplasia of the hip. The aim of the study was to assess the clinical value of ultrasound screenings for the developmental dysplasia of the hip (DDH) and explore its etiology in high-risk infants with cerebral palsy in follow-up visits. A group of 98 cases of infants at high-risk of cerebral palsy who received rehabilitation treatment between July, 2009 and July, 2010 were selected. Infants included 58 men and 40 women, aged <6 months and not lost to follow-up visits. Ultrasound (using Graf static inspection) screening of hips was performed and the infants with abnormalities were given clinical intervention, and 1- to 2-year-old infants were given outpatient follow-ups. The results were analyzed and there were 40 abnormal cases among the 98 cases of infants at high risk of cerebral palsy, including 18 cases of unstable hip joint, and 22 cases of DDH (12 cases of hip dysplasia, 3 cases of hip subluxation and 7 cases of hip dislocation). Early clinical intervention for infants with hip dysplasia and outpatient follow up for infants aged 1-2 years was carried out, and none had ischemic necrosis of the femoral head, with the exception of 1 case of femoral detorsion that was poorly restored. In conclusion, the probability of DDH was higher in infants at high-risk of cerebral palsy compared to the normal infants. Hip ultrasound is a safe, simple, and effective screening method for these infants, which is of great clinical significance for an earlier diagnosis and treatment of DDH in infants with cerebral palsy. abstract_id: PUBMED:26644785 Chiropractic Management of Low Back Pain in a 75-Year-Old Man With Bilateral Developmental Hip Dysplasia. Objective: The purpose of this case report is to describe chiropractic management of an elderly man with untreated bilateral hip joint dysplasia presenting with mild acute mechanical low back pain. Clinical Features: A 75-year-old man presented with an insidious-onset intermittent low back pain of 3 days' duration. Physical examination findings supported a mechanical cause for mild acute low back pain. Plain radiography revealed dysplasia of hip joints with absence of femoral heads and necks and bilateral high dislocation. Intervention And Outcome: Chiropractic management included vibration, mobilization, light drop-piece adjustments of the lower lumbar and sacroiliac joints, and recommendation of the use of heat at home. Treatments were given 3 times over the course of 1 week. The low back pain intensity over this period dropped from 5 to 0 on an 11-point numerical rating scale, and the patient was discharged. Conclusion: This patient with substantial postural and gait abnormalities as a result of severe bilateral hip dysplasia associated with an unusual pattern of osteoarthritic change in the spine responded favorably to a short course of chiropractic care. abstract_id: PUBMED:29168577 Screening for developmental dysplasia of the hip in a rural health district: An analysis of practice.
Objective: To describe hip dysplasia screening practices in a local rural health district. Design: Cross-sectional study of hospital birth records, local physiotherapy records, public community health electronic medical record (Community Health Information Management Enterprise database) and a survey of local clinicians who work with infants and children. Setting: Three rural public hospitals and community health centres in a New South Wales health district. Participants: Birth records (n = 196) from March 2012 to May 2012; attendance at child and family nurse checks (n = 788) May 2013-April 2014; 13 cases of managed developmental dysplasia of the hip (DDH) 2012 and local clinicians (n = 49). Results: At birth, the majority of infants (91%, 179/196) had documented hip screening. Community health records show this dropped to 75% (587/788) at 1-4 weeks and 29% (227/788) at 6-8 weeks. A survey of local clinicians (54% response rate; 49/91) revealed most (78%) screen for DDH and less than half (43%) use guidelines. Almost all (97%) clinicians reported screening for DDH at 6-8 weeks of age. Only 51% of clinicians reported having specific training for DDH screening and 76% would like further training. The rate of late DDH requiring management in 2012 was 0.87% (7/806) and the rate of late DDH requiring surgery was 0.25% (2/806). Conclusion: DDH screening practices are well established at birth in the rural health district. There is variability in DDH screening practices beyond 8 weeks of age. Clinicians report variations in their knowledge and training. Training in DDH screening and hip screening prompts added to the personal health record might improve rates of DDH screening beyond 8 weeks of age. abstract_id: PUBMED:37484038 Remote diagnostic imaging using artificial intelligence for diagnosing hip dysplasia in infants: Results from a mixed-methods feasibility pilot study. Objectives: Infant hip dysplasia or Developmental Dysplasia of the Hip (DDH) occurs in 1-2% of births worldwide and leads to hip arthritis if untreated. We sought to evaluate the feasibility of implementing an artificial intelligence-enhanced portable ultrasound tool for infant hip dysplasia (DDH) screening in primary care, through determining its effectiveness in practice and evaluating patient and provider feedback. Methods: A US-FDA-cleared artificial intelligence (AI) screening device for DDH (MEDO-Hip) was added to routine well-child visits from age 6 to 10 weeks. A total of 306 infants were screened during a 1-year pilot study within three family medicine clinics in Alberta, Canada. Patient and provider satisfaction were quantified using the System Usability Survey (SUS), while provider perceptions were further investigated through semi-structured interviews. Results: Provider and user surveys commonly identified best features of the tool as immediate diagnosis, offering reassurance/knowledge and avoiding travel, and noted technical glitches most frequently as a barrier. A total of 369 scans of 306 infants were performed from Feb 1, 2021 until Mar 31, 2022. Eighty percent of hips scanned were normal on initial scans, 14% of scans required a follow-up study in the primary care clinic, and DDH cases were identified and treated at the expected 2% rate (6 infants). Conclusions: It is feasible to implement a point-of-care ultrasound AI screening tool in primary care to screen for infants with DDH. 
Beyond improved screening and detection, this innovation was well accepted by patients and fee-for-service providers with a culture and history of innovation. abstract_id: PUBMED:34807750 Return to Sports and Minimum 2-Year Outcomes of Hip Arthroscopy in Elite Athletes With and Without Coexisting Low Back Pain: A Propensity-Matched Comparison. Background: Patient-reported outcomes (PROs) and return to sports (RTS) after hip arthroscopy for femoroacetabular impingement syndrome (FAIS) have not been established in elite athletes with coexisting low back pain (LBP). Purpose: (1) To report minimum 2-year PROs and RTS rates after primary hip arthroscopy for FAIS in elite athletes with coexisting LBP and (2) to compare clinical results with a propensity-matched control group of elite athletes without back pain. Study Design: Cohort study; Level of evidence, 3. Methods: Data were reviewed for elite athletes (college and professional) who underwent hip arthroscopy for FAIS and had coexisting LBP between October 2009 and October 2018. Inclusion criteria were preoperative and minimum 2-year follow-up for the modified Harris Hip Score, Nonarthritic Hip Score, Hip Outcome Score-Sports Specific Subscale (HOS-SSS), and visual analog scale for pain. Exclusion criteria were Tönnis grade >1, hip dysplasia (lateral center-edge angle <18°), and previous ipsilateral hip or spine surgery or conditions. Rates of achieving the minimal clinically important difference (MCID), patient acceptable symptomatic state (PASS), and maximum outcome improvement satisfaction threshold were recorded in addition to RTS. For the subanalysis, the elite athlete study group was propensity matched to an elite athlete control group without back pain. Results: A total of 48 elite athletes with LBP who underwent primary hip arthroscopy met inclusion criteria, and follow-up was available for 42 (87.5%) at 53.2 ± 31.6 months (mean ± SD). Elite athletes with coexisting LBP demonstrated significant improvements in all recorded PROs and achieved the MCID and PASS for the HOS-SSS at rates of 82.5% and 67.5%, respectively. They also returned to sports at a high rate (75.8%), and 79% of them did not report LBP postoperatively. PROs, rates of achieving the MCID and PASS for the HOS-SSS, and RTS rates were similar between the study group and propensity-matched control group. Conclusion: Elite athletes with coexisting LBP who undergo primary hip arthroscopy for FAIS may expect favorable PROs, rates of achieving the MCID and PASS for the HOS-SSS, and RTS rates at minimum 2-year follow-up. These results were comparable to those of a propensity-matched control group of elite athletes without back pain. In athletes with hip-spine syndrome, successful treatment of their hip pathology may help resolve their back pain. abstract_id: PUBMED:38097782 Greater pelvic obliquity in adolescent idiopathic scoliosis combined with hip dysplasia. Background: Adolescent idiopathic scoliosis (AIS) is a spinal deformity that affects adolescents and poses a challenging public health problem. Compared to the general population, adolescents with AIS have a higher prevalence of hip dysplasia. However, the mechanisms underlying the impact of hip dysplasia on the coronal balance of the spine remain poorly understood. We hypothesized that the combination of AIS with hip dysplasia would exacerbate coronal imbalance. Methods: We retrospectively analyzed the medical records and radiographs of adolescents diagnosed with AIS between 2015 and 2020.
Participants were divided into two groups: those with hip dysplasia and those without. We recorded parameters related to the coronal deformity of the spine, sacral and pelvic obliquity, and center edge angle (CEA). We investigated differences in these parameters in those with and without hip dysplasia and analyzed their relationships in those with combined AIS and hip dysplasia. Results: A total of 103 adolescents were included, 36 with hip dysplasia and 67 without. Those with hip dysplasia had significantly higher sacroiliac discrepancy (SID) compared to those without (t = -2.438, P = 0.017). In adolescents with hip dysplasia, only iliac obliquity angle (IOA) was significantly correlated with SID (r = -0.803, P < 0.001), with a linear relationship between them (r² = 0.645, P < 0.001). Conclusions: The incidence of hip dysplasia is higher in the AIS population. In adolescents with combined AIS and hip dysplasia, pelvic obliquity is greater, potentially contributing to the increased prevalence of low back pain. abstract_id: PUBMED:28629805 Association of Hip Radiograph Findings With Pain and Function in Patients Presenting With Low Back Pain. Background: Relationships between low back pain (LBP) and the hip in patient cohorts have been described primarily in patients with moderate to severe hip osteoarthritis (OA). Less is known about the links of LBP with hip radiographic findings of hip deformity and minimal OA. Objective: To describe the incidence of radiographic hip deformity or hip OA; to describe and compare spine- and hip-related pain and function in the subset of patients who were found to have radiographic hip deformity or hip OA; and to compare patients with evidence of radiographic hip deformity or hip OA to patients without hip radiographic findings. Design: Prospective cohort study with cross-sectional design. Setting: Tertiary university. Patients: A total of 63 patients (40 women, 23 men) with a mean age of 48.5 ± 14 years with LBP and a minimum of one positive provocative hip test. Methods: Hip radiographs were assessed by an independent examiner for hip OA and deformity. Main Outcome Measurements: Comparisons of hip and lumbar spine pain and function were completed for patients with radiographic findings of hip OA or deformity. Results: Moderate to severe hip OA was found in 12 of 60 patients (20.0%). At least one measurement of femoroacetabular impingement (FAI) was found in 14 of 60 patients (23.3%) to 33 of 45 patients (73.3%). At least one measurement of developmental hip dysplasia (DDH) was found in 7 of 60 patients (11.6%) to 11 of 63 patients (17.4%). Greater pain and reduced hip and lumbar spine function were found in the patients with moderate to severe hip OA. Patients with LBP and FAI were found to have significantly greater extremes of pain and reduced lumbar spine function. Conclusion: Links between the hip and the spine affecting pain and function may be found in patients with LBP and hip deformity and before the onset of radiographic hip OA, and may be associated with hip deformity. Further investigation is needed to better understand these links and their potential impact on prognosis and treatment of LBP. Level Of Evidence: II. abstract_id: PUBMED:36275488 Surgical procedures in infants with early diagnoses of developmental dysplasia of the hip. A prospective 4-year follow-up study. Background: Dysplastic hips in infants may normalize spontaneously without any intervention due to natural growth.
However, some infants need one or more surgical interventions to achieve stable and non-dysplastic hips. The purpose of this study is to determine the proportion of infants diagnosed with DDH before the age of 6 months undergoing surgical procedures of the hip(s) before the age of four years and to determine the number and types of surgical procedures in each infant. Methods: A prospective and consecutive study of all infants aged 0-6 months diagnosed with dysplasia of the hips in the combined paediatric orthopaedic and radiologic examination in the Region of Southern Denmark 2013-2017. From medical records, we obtained information about all surgical procedures in the hips, including open or closed reductions, arthrographies, tenotomies, and pelvic osteotomies before the age of 4 years. Results: Overall, 281 infants with hip dysplasia were included. The median age at first examination was 48 days. In 254 (90%) of the infants, the hips resolved spontaneously, and 27 (10%) needed one or more surgical interventions. Overall, the 27 infants had 47 surgical interventions, as 12 infants had more than one intervention. One infant had five surgical interventions. The most frequent surgical procedures were closed reduction and arthrography with or without adductor tenotomy (58%) and pelvic osteotomy (27%). Among infants with surgical interventions, 23 (8.2%) had unstable hips, and four (1.4%) had stable hips. All four infants with stable hips had an arthrography and none required a pelvic osteotomy. Discussion: This study supports the propensity for spontaneous normalization of early dysplasia of the hips in infants. Only a small proportion of the infants needed surgical interventions to achieve stable and non-dysplastic hips. Answer: The practice of back-carrying infants, which is common in Malawi, has been associated with a low incidence of developmental dysplasia of the hip (DDH) in that country. A retrospective review of children managed at the Beit CURE International Hospital in Malawi over a 10-year period revealed that no infant presented with, or underwent surgical intervention for, symptomatic DDH (PUBMED:24942071). This suggests that the position in which infants are carried on the back, similar to that of the Pavlik harness, may be protective against DDH. The authors of this study argue that there is a need for a large clinical trial to investigate the potential of back-carrying as a preventive measure for DDH in non-African population groups, indicating that a new public health initiative could be beneficial (PUBMED:24942071). In contrast, other studies have focused on the epidemiology, costs, and treatment outcomes of DDH in different settings. For example, in Brazil, the surgical treatment of hip dysplasia in the public health system is frequent, costly, and distributed in a heterogeneous and unequal fashion, with an association with racial and socioeconomic factors (PUBMED:34909975). Additionally, ultrasound screening has been shown to be a safe, simple, and effective method for early diagnosis and treatment of DDH in infants at high risk of cerebral palsy (PUBMED:27698744). Furthermore, the feasibility of implementing an artificial intelligence-enhanced portable ultrasound tool for DDH screening in primary care has been demonstrated, with the tool being well accepted by patients and providers (PUBMED:37484038).
Given the evidence of the protective effect of back-carrying against DDH observed in Malawi, and the challenges and costs associated with the treatment of DDH in other regions, it seems reasonable to consider a public health initiative that promotes back-carrying or similar protective practices. Such an initiative could potentially reduce the incidence of DDH and its associated healthcare burdens. However, before implementing such an initiative, further research would be needed to confirm the protective effect of back-carrying in diverse populations and to develop guidelines for safe and effective infant carrying practices.
Instruction: 13(2)-hydroxy-bacteriopheophorbide a methyl ester pharmacokinetics measurements with fluorescence versus absorption spectroscopy. Is there a difference? Abstracts: abstract_id: PUBMED:9796488 13(2)-hydroxy-bacteriopheophorbide a methyl ester pharmacokinetics measurements with fluorescence versus absorption spectroscopy. Is there a difference? Objective: Quantification of photosensitizer concentration in tissue improves the planning and, subsequently, the outcome of photodynamic therapy. This study was designed to determine if the method of photosensitizer evaluation influences the accuracy of pharmacokinetic results. Materials And Methods: In vivo 13(2)-Hydroxy-bacteriopheophorbide a methyl ester (13(2)-OH-BPME) pharmacokinetics in mice bearing LEWIS lung carcinoma was studied using fluorescence in situ and absorption spectroscopy (following photosensitizer chemical extraction) as photosensitizer quantification methods. The correlation of 13(2)-OH-BPME fluorescence intensity and its concentration using the absorption spectroscopy were determined for each tissue. Results: The parenchymatous organs showed maximum 13(2)-OH-BPME concentration and fluorescence intensity at 2 h post-injection (lung, liver, spleen), and at 2 h post-injection in the kidney with both quantification methods (identical correlation). There was a difference in the time of maximum photosensitizer fluorescence intensity and its concentration in tumor, muscle, and skin (low correlation). The time of maximum fluorescence intensity in muscle, skin, and tumor was at 4 h, 12 h, and 12 h post injection respectively while its maximum concentration was at 2 h, 4 h, and 8 h post-injection. Conclusion: The method of photosensitizer evaluation affects the accuracy of its pharmacokinetic results. abstract_id: PUBMED:20084798 Characterization of a new photosensitizer (13(2)-hydroxy-bacteriopheophorbide-a methyl ester) for future treatment of ovarian carcinoma (an experimental study). Unlabelled: The photosensitizer 13(2)-hydroxy-bacteriopheophorbide-a methyl ester (13(2)-OH-BPME) is characterized by a high absorption coefficient at the far red wavelength 750 nm and a good singlet oxygen quantum yield. Methods & Results: The pharmacokinetics of 13(2)-OH-BPME were studied in ovarian carcinoma on mice after i.v. administration of 7.8 micromole/kg body weight at different incubation intervals. The accumulated dye was chemically extracted from selected tissues and the concentrations were measured by absorption spectroscopy. The parenchymatous organs (liver, spleen and kidney) showed maximum 13(2)-OH-BPME concentrations after 2 hours incubation (liver, spleen), and 4 hours post injection (kidney). A high uptake was detected in the lung with maximum concentration at 2 hours. The malignant tissue accumulated high 13(2)-OH-BPME concentrations between 2-12 hours post injection, peaking at 8 hours. The 13(2)-OH-BPME concentrations in muscle tissue, representing the normal tumour surroundings, and in the skin were very low. Conclusion: The results of our study suggest that PDT using 13(2)-OH-BPME could be effective at 8 h post injection, where the tumour 13(2)-OH-BPME uptake is maximum and the muscle and skin uptake will be minimum. abstract_id: PUBMED:15741112 Solvatochromism of 3-[2-(4-diphenylaminophenyl)benzoxazol-5-yl]alanine methyl ester. A new fluorescence probe.
The photophysical properties of 3-[2-(4-diphenylaminophenyl)benzoxazol-5-yl]alanine methyl ester (1b) and its Boc derivative (1a) were studied in a series of solvents. Its UV-Vis absorption spectra are less sensitive to the solvent polarity than the corresponding fluorescence spectra, which show a pronounced solvatochromic effect leading to large Stokes shifts. Using an efficient solvatochromic method, based on the molecular-microscopic empirical solvent polarity parameter E(T)(N), a large change of the dipole moment on excitation has been found. From an analysis of the solvatochromic behaviour of the UV-Vis absorption and fluorescence spectra in terms of bulk solvent polarity functions, f(epsilon(r),n) and g(n), a large excited-state dipole moment (μe = 11 D), almost perpendicular to the smaller ground-state dipole moment, was observed. This demonstrates the formation of an intramolecular charge-transfer excited state. Large changes of the fluorescence quantum yields as well as the fluorescence lifetimes with an increase of solvent polarity mean that the new non-proteinogenic amino acid, 3-[2-(4-diphenylaminophenyl)benzoxazol-5-yl]-alanine methyl ester, is a useful new fluorescence probe for biophysical studies of peptides and proteins. abstract_id: PUBMED:34298065 Cocaine binding to the Fab fragment of a humanized anti-cocaine mAb quantitated by dye absorption and fluorescence spectroscopy. In this work, we establish that cocaine binding to the Fab fragment of a recombinant humanized anti-cocaine mAb (h2E2) can be directly and easily quantitated using simple and inexpensive absorption and fluorescence measurements, employing dyes typically used for differential scanning fluorimetry, DASPMI and SYPRO Orange. For concentrated samples of the Fab fragment, absorbance spectroscopy employing these dyes reveals the number of cocaine sites present, using either DASPMI (by measuring the increase in dye absorbance) or SYPRO Orange (by measuring the change in dye maximal absorbance wavelength). Interestingly, we observed that cocaine binding to the Fab fragment had a much different effect on the SYPRO Orange dye absorbance than previously reported for the intact h2E2 mAb, resulting in a large decrease in the total dye absorbance for the Fab fragment, in contrast to previous results with the intact h2E2 mAb. For dilute samples of Fab fragment, a dye fluorescence emission spectroscopy assay was developed to quantitate the number of cocaine (and other high affinity cocaine metabolites) binding sites via the ligand-induced decrease in fluorescence emission of both of these extrinsic dyes. The difference in the cocaine titrations for the high affinity (Kd < 30 nM) ligands, cocaine, cocaethylene and benzoylecgonine and the low affinity (Kd > 30 μM) ligands, norcocaine, ecgonine methyl ester, and ecgonine were obvious using this assay. These simple, direct, and inexpensive techniques should prove useful for evaluation of other small molecule antigen binding Fab fragments, enabling quantitation and rapid biochemical assessments necessary for determining Fab fragment suitability for in vivo uses and other assays and experiments. abstract_id: PUBMED:31141783 Effect of reabsorption of fluorescence on transient absorption measurements. Elimination of fluorescence reabsorption effects is necessary to obtain reliable kinetic data in fluorescence spectroscopy. This effect must also be considered in transient absorption spectroscopy. We devised two methods to achieve this goal.
The first was use of a thin optical cell (<10 μm) rather than a much thicker conventional cuvette in the experimental setup. The second was use of an equation to correct data obtained using a conventional cuvette when there were fluorescence reabsorption effects. These methods were successfully used in sub-nanosecond transient absorption spectroscopy to obtain the kinetics of excimer formation by perylene in toluene. abstract_id: PUBMED:15570387 Determination of the in vivo pharmacokinetics of palladium-bacteriopheophorbide (WST09) in EMT6 tumour-bearing Balb/c mice using graphite furnace atomic absorption spectroscopy. Palladium-bacteriopheophorbide (WST09), a novel bacteriochlorophyll derivative, is currently being investigated for use as a photodynamic therapy (PDT) drug due to its strong absorption in the near-infrared region and its ability to efficiently generate singlet oxygen when irradiated. In this study, we determined the pharmacokinetics and tissue distribution of WST09 in female EMT6 tumour-bearing Balb/c mice in order to determine if selective accumulation of this drug occurs in tumour tissue. A total of 41 mice were administered WST09 by bolus injection into the tail vein at a dose level of 5.0 +/- 0.8 mg kg(-1). Three to six mice were sacrificed at each of 0.08, 0.25, 0.5, 1.0, 3.0, 6.0, 9.0, 12, 24, 48, 72, and 96 h post injection, and an additional three control mice were sacrificed without having been administered WST09. Terminal blood samples as well as liver, skin, muscle, kidney and tumour samples were obtained from each mouse and analyzed for palladium content (from WST09) using graphite furnace atomic absorption spectroscopy (GFAAS). The representative concentration of WST09 in the plasma and tissues was then calculated. Biphasic kinetics were observed in the plasma, kidney, and liver with clearance from each of these tissues being relatively rapid. Skin, muscle and tumour did not show any significant accumulation at all time points investigated. No selective drug accumulation was seen in the tumour and normal tissues, relative to plasma. Thus the results of this study indicate that vascular targeting resulting from WST09 in the circulation, as opposed to selective WST09 accumulation in tumour tissues, may be responsible for PDT effects in tumours that have been observed in other WST09 studies. abstract_id: PUBMED:23466325 Fluorescence quenching of bovine serum albumin by NNMB. A new type of fluorophore 2-(naphthalen-1-yl)-1-((naphthalen-1-yl)methyl)-1H-benzimidazole (NNMB) has been prepared and characterized by (1)H NMR, (13)C NMR, mass and IR spectral analysis. Absorption, fluorescence and synchronous fluorescence spectral studies have been made for the mutual interaction of NNMB with bovine serum albumin (BSA). Absorption spectroscopy proved the formation of a ground state BSA…NNMB complex. Fluorescence spectrum of BSA in the presence of NNMB clearly shows that NNMB acts as a quencher. Based on the theory of Förster's non-radiation energy transfer (FRET), binding distance has been deduced. The Stern-Volmer quenching constant (KSV), binding site number (n), apparent binding constant (KA) and corresponding thermodynamic parameters (ΔG, ΔH and ΔS) were determined. abstract_id: PUBMED:31316803 Solving self-absorption in fluorescence. One of the most common types of experiment in X-ray absorption spectroscopy (XAS) measures the secondary inelastically scattered fluorescence photon.
This widespread approach has a dominant systematic of self-absorption of the fluorescence photon. The large impact of self-absorption compromises accuracy, analysis and insight. Presented here is a detailed self-consistent method to correct for self-absorption and attenuation in fluorescence X-ray measurements. This method and the resulting software package can be applied to any fluorescence data, for XAS or any other experimental approach detecting fluorescence or inelastically scattered radiation, leading to a general solution applicable to a wide range of experimental investigations. The high intrinsic accuracy of the processed data allows these features to be well modelled and yields deeper potential insight. abstract_id: PUBMED:18546062 Synthesis, crystal, absorption and fluorescence spectroscopy of nitro-stilbene derivatives with benzophenones. In this paper, we have presented a range of new nitro-stilbene derivatives with benzophenones via ether or ester bridged bond. These nitro-stilbene derivatives with benzophenones have been conveniently obtained by condensation reaction. The linked benzophenones were efficiently introduced to nitro-stilbene dyes. The structures of these compounds have been characterized with NMR and element analysis. The single crystals of two target compounds (11 and 12) have been obtained, and their X-ray crystallographic data have been determined and discussed. Remarkably different absorption and fluorescence spectroscopy was observed for nitro-stilbene derivatives with benzophenones via different linked bonds. The results show that electron-donating or electron withdrawing bridged bonds have significant influence on the absorption and fluorescence spectroscopy, which makes it possible for the development of ideal nitro-stilbene dyes with benzophenones through chemical strategy. abstract_id: PUBMED:16965430 Topical application of 5-aminolaevulinic acid, methyl 5-aminolaevulinate and hexyl 5-aminolaevulinate on normal human skin. Background: 5-Aminolaevulinic acid (ALA) and its ester derivatives are used in photodynamic therapy. Despite extensive investigations, the differences in biodistribution and pharmacokinetics of protoporphyrin IX (PpIX) induced by ALA and its derivatives are still not well understood, notably for humans. Objectives: To study porphyrin accumulation after topical application of ALA and two of its ester derivatives in normal human skin. Methods: Creams containing 0.2%, 2% and 20% (w/w) of ALA, methyl 5-aminolaevulinate (MAL) and hexyl 5-aminolaevulinate (HAL) were applied on normal human skin of six volunteers. The amount and distribution of porphyrins formed in the skin was investigated noninvasively by means of fluorescence spectroscopy. Results: Fluorescence emission and excitation spectra exhibited similar spectral shapes for the all drugs, indicating that mainly PpIX was formed. Low concentrations (0.2% and 2%) of MAL induced considerably less PpIX in normal human skin than similar concentrations of ALA and HAL. A high concentration (20%) of ALA gave higher PpIX fluorescence in normal human skin than was found for MAL and HAL. Conclusions: The concentrations inducing half of the maximal PpIX fluorescence are around 2% for ALA, 8% for MAL and 1% for HAL. Answer: Yes, there is a difference in the pharmacokinetics measurements of 13(2)-hydroxy-bacteriopheophorbide a methyl ester (13(2)-OH-BPME) when using fluorescence versus absorption spectroscopy. 
The study designed to determine the influence of the method of photosensitizer evaluation on the accuracy of pharmacokinetic results found that the parenchymatous organs showed maximum 13(2)-OH-BPME concentration and fluorescence intensity at similar times post-injection when using both quantification methods. However, there was a discrepancy in the time of maximum photosensitizer fluorescence intensity and its concentration in tumor, muscle, and skin, indicating a low correlation between the two methods for these tissues. Specifically, the time of maximum fluorescence intensity in muscle, skin, and tumor was at 4 h, 12 h, and 12 h post-injection respectively, while the maximum concentration was at 2 h, 4 h, and 8 h post-injection. This suggests that the method of photosensitizer evaluation does affect the accuracy of pharmacokinetic results (PUBMED:9796488).
Instruction: The influence of teaching setting on medical students' clinical skills development: is the academic medical center the "gold standard"? Abstracts: abstract_id: PUBMED:16306293 The influence of teaching setting on medical students' clinical skills development: is the academic medical center the "gold standard"? Purpose: Many medical schools have revised their curricula to include longitudinal clinical training in the first and second years, placing an extra burden on academic teaching faculty and expanding the use of community-based preceptors for clinical teaching. Little is known about the impact of different learning settings on clinical skills development. Method: In 2002-03 and 2003-04, the authors evaluated the clinical skills of two sequential cohorts of second-year medical students at Dartmouth Medical School (n = 155) at the end of a two-year longitudinal clinical course designed to prepare them for their clerkship year. Students' objective structured clinical examination (OSCE) scores were compared on a cardiopulmonary and an endocrine case according to precepting sites (academic medical center [AMC] clinics, AMC-affiliated office-based clinics, or community-based primary care offices) and core communication, history taking, physical examination, and patient education skills were assessed. Study groups were compared using descriptive statistics and analysis of variance (mixed model). Results: Ninety-five students (61%) had community-based preceptors, 31 (20%) AMC clinic-based preceptors, and 29 (19%) AMC-affiliated office-based preceptors. Students' performances did not differ among clinical learning sites with overall scores in the cardiopulmonary case of 61.2% in AMC clinics, 63.3% in office-based AMC-affiliated clinics, and 64.9% in community-based offices (p = .20). Scores in the endocrine case similarly did not differ with overall scores of 65.5% in AMC clinics, 68.5% in office-based AMC-affiliated clinics, and 66.4% in community-based offices (p = .59). Conclusions: Students' early clinical skill development is not influenced by educational setting. Thus, using clinicians for early clinical training in any of these settings is appropriate. abstract_id: PUBMED:30245099 Differences in expectations of passing standards in communication skills for pre-clinical and clinical medical students. Objective: Communication and interpersonal skills (CIS) are essential elements of competency-based education. We examined defensible CIS passing levels for medical students completing basic sciences (second-year students) and clinical training (fourth-year students), using five standard setting methods. Methods: A 14-item CIS scale was used. Data from second-year (n = 190) and fourth-year (n = 170) students were analyzed using descriptive statistics and generalizability studies. Fifteen judges defined borderline CIS performance. Cut scores and fail rates from five standard setting methods (Angoff, Borderline-Group, Borderline-Regression, Contrasting-Groups, and Normative methods) were examined. Results: CIS performance was similar during second-year (Mean = 74%, SD = 6%) and fourth-year (Mean = 72%, SD = 5%) students. Judges using the Angoff method expected greater competence at the fourth-year level, as reflected in the Angoff cut scores (second-year = 53% with 0% fail, fourth-year = 66% with 10% fail). Cut scores from the remaining methods did not differentiate between training levels. We found evidence of case specificity. Conclusion: Performance on CIS may be case specific. 
Passing standards for communication skills may require employing approaches such as the Angoff method that are sensitive to expectations of learner performance for different levels of training, competencies, and milestone levels. Practice Implications: Institutions that want to encourage continued growth in CIS should apply appropriate standard setting methods. abstract_id: PUBMED:33402444 Suturing Skills for Medical Students: A Systematic Review. Aim: This systematic review aimed to identify all published evidence on teaching suturing skills for medical students. We aimed to outline significant positive teaching outcomes and devise a comprehensive framework for the optimal teaching of suturing skills for medical students. Materials And Methods: We searched MEDLINE® (via Ovid), EMBASE and SCOPUS databases until July 2019 with no language restriction using predefined 'Population, Intervention, Comparison, Outcome (PICO)' criteria. Data were summarised in discrete thematic axes using a qualitative synthesis approach. Results: Our search yielded a total of 2,562 articles, out of which 25 were included in the final data synthesis. We provide a structured breakdown of educational interventions including participants, instructors and nature of teaching intervention. We also describe discrete means for assessment of performance and retention of suturing skills. Based on those we propose a standardised framework on teaching suturing skills for novices. Conclusion: To our knowledge this is the first systematic review investigating teaching interventions used to teach suturing skills in medical students. After extraction of individual positive teaching outcomes and utilising widely known learning theories and principles, we devised a comprehensive framework for more efficient and cost-effective teaching of suturing skills to medical students in the future. abstract_id: PUBMED:20711483 Ready or not? Expectations of faculty and medical students for clinical skills preparation for clerkships. Background: Preclerkship clinical-skills training has received increasing attention as a foundational preparation for clerkships. Expectations among medical students and faculty regarding the clinical skills and level of skill mastery needed for starting clerkships are unknown. Medical students, faculty teaching in the preclinical setting, and clinical clerkship faculty may have differing expectations of students entering clerkships. If students' expectations differ from faculty expectations, students may experience anxiety. Alternately, congruent expectations among students and faculty may facilitate integrated and seamless student transitions to clerkships. Aims: To assess the congruence of expectations among preclerkship faculty, clerkship faculty, and medical students for the clinical skills and appropriate level of clinical-skills preparation needed to begin clerkships. Methods: Investigators surveyed preclinical faculty, clerkship faculty, and medical students early in their basic clerkships at a North American medical school that focuses on preclerkship clinical-skills development. Survey questions assessed expectations for the appropriate level of preparation in basic and advanced clinical skills for students entering clerkships. Results: Preclinical faculty and students had higher expectations than clerkship faculty for degree of preparation in most basic skills. Students had higher expectations than both faculty groups for advanced skills preparation. 
Conclusions: Preclinical faculty, clerkship faculty, and medical students appear to have different expectations of clinical-skills training needed for clerkships. As American medical schools increasingly introduce clinical-skills training prior to clerkships, more attention to alignment, communication, and integration between preclinical and clerkship faculty will be important to establish common curricular agendas and increase integration of student learning. Clarification of skills expectations may also alleviate student anxiety about clerkships and enhance their learning. abstract_id: PUBMED:16807173 Nursing faculty teaching basic skills to medical students. Medical education often uses a 'see one, do one, teach one' approach to teaching basic skills, whereas nursing education uses a more intense, competency based approach. Many nursing faculty become experts in teaching skills; however, there is little literature evaluating medical student skills training led by nursing faculty. The purpose of this paper is to describe and report initial evaluation data on an innovative teaching strategy using nursing faculty to teach specific skills to medical students. Nursing faculty designed a hands-on four hour clinical skills training laboratory for medical students. All (176) medical students completed and evaluated this skills laboratory and their own preparation for these skills. Medical students felt the laboratory was outstanding or good (94%), the content was at an appropriate level (94%) and the relevance was high or medium (96%). Medical students felt more prepared to perform the skills after completing the skills laboratory than before, and when compared to previous medical students without the skills laboratory. Many medical students commented positively about the nursing faculty. Nursing faculty teaching specific skills to medical students is acceptable and effective and provides medical students with positive exposure to nurses as experts. abstract_id: PUBMED:38188384 Teaching Skills Training for Pre-clinical Medical Students Through Weekly Problem-Based Learning Teaching Topic Presentations and Directed Feedback. Problem: Medical students commonly encounter scenarios in which they are charged with teaching medical content, but studies find a paucity of teaching skills training especially in the pre-clerkship phase of undergraduate medical programs. Intervention: Videos lessons were created to instruct on five teaching skills identified as useful for presenting short lessons on medical topics: effective learning objectives, appropriate lesson complexity, audience engagement, relevance to practice, and resource selection. A rubric was generated to assess the performance level of each teaching skill. Context: First-year medical students viewed the video lessons and were instructed to implement these teaching skills for the creation and delivery of weekly learning issue (LI) presentations within a problem-based learning (PBL) course. PBL facilitators assessed students by using the rubric to assign a score of 0-2 corresponding to the level of skill performance. Impact: Scores in every dimension of our LI assessment rubric showed significant improvement above week 1 at the end of the initial 4 weeks of practice and assessment. Follow-up assessment showed durable performance and significant improvement for 3 out of 5 at weeks 8 and 12. 
Lessons Learned: Our novel framework was effective in fostering the adoption and implementation of five teaching skills among first-year medical students over a 4-week period, with most skills remaining durable over 12 weeks. Furthermore, end-of-course surveys showed that students found feedback received using the framework helpful in improving their LIs, and faculty reported that student LI presentation quality improved overall. Supplementary Information: The online version contains supplementary material available at 10.1007/s40670-023-01912-x. abstract_id: PUBMED:36685146 Teaching Clinical Skills Using Online Modality through Modified Peyton's Framework: An Experience from a Medical University in Pakistan. Introduction: Medical institutions worldwide faced an unprecedented situation during COVID-19 of identifying alternative strategies to sustain the continuum of learning process. This led to several innovations in the traditional medical curriculum. This study explored the effectiveness and feasibility of using the Peyton's framework modified for a virtual platform (Microsoft Teams) for teaching clinical skills to first and second-year medical students at The Aga Khan University, Karachi, Pakistan. Methods: In 2020-2021, the modified Peyton's framework was integrated in the clinical skills (CS) curriculum for all first- and second-year students (N=200). For evaluation, a mixed-method design was used, with pre- and post-session questionnaires. Students' satisfaction was obtained through the standard session evaluation tool of the university. For the qualitative arm, to explore the instructors' experiences, purposive sampling was used (n=8) and a focused group discussion (FGD) was conducted. Finally, performance of the students at the end of year summative Objective Structured Clinical Examination (OSCE) was compared with the students of the previous year. Quantitative data were analysed using STATA® version 15.1, using paired t-test to compare the differences in OSCE scores in selected CS stations. A p-value of <0.05 was considered significant for all tests. The FGD was transcribed and analysed through manual content analysis. Results: Nine clinical skills (that included history and examination skills) were taught using the virtual platform. There was a significant improvement in post-session questionnaires in seven of these skills (p value <0.01). Session evaluations showed that most students were satisfied with the learning experience. The instructors showed that the online teaching offered a promising platform for teaching history taking skills. The OSCE scores showed mixed results, with significant improvement in two out of four repeated stations by using paired t-test [abdominal exam (87.33 ± 8.99, <0.001); and precordial examination (88.45 ± 8.36, 0.001)]. Conclusions: Modifying Peyton's framework to a virtual platform allowed us to sustain the continuum of clinical education during the COVID-19 pandemic. The results support the use of a blended learning environment for teaching clinical skills. abstract_id: PUBMED:28178910 MedTalks: developing teaching abilities and experience in undergraduate medical students. Objectives: According to the CanMEDS' Scholar competency, physicians are expected to facilitate the learning of colleagues, patients and other health professionals. However, most medical students are not provided with formal opportunities to gain teaching experience with objective feedback.
Methods: To address this, the University's Medical Education Interest Group (MEIG) created a pilot teaching program in January 2015 entitled 'MedTalks'. Four 3-hour sessions were held at the University Faculty of Medicine, where first and second year medical students taught clinically oriented topics to undergraduate university students. Each extracurricular session included three 30-minute content lectures, and a 90-minute small group session on physical examination skills. Each medical student-teacher received formal feedback from undergraduate students and from faculty educators regarding teaching style, communication abilities, and professionalism. In addition, medical student-teachers self-evaluated their own teaching experience. Results: Over 50 medical students from the University participated as medical student-teachers. Based on quantitative and qualitative evaluation surveys, 100% of medical students agreed that MedTalks was a useful way to develop teaching skills and 92% gained a greater confidence in individual teaching capabilities, based largely on the opportunity to gain experience (with feedback) in teaching roles. Conclusions: A program designed to give medical students multi-source teaching experience (lecture- and small group-based) and feedback on their teaching (from learners and Faculty observers, in addition to their own self-reflection) can improve medical student confidence and enthusiasm towards teaching. Future studies will clarify if medical student self-perceived enhancements in teaching ability can be corroborated by independent (Faculty, learner) observations of future teaching activity. abstract_id: PUBMED:28359724 Tutor-led teaching of procedural skills in the skills lab: Complexity, relevance and teaching competence from the medical teacher, tutor and student perspective. Background: In current medical curricula, the transfer of procedural skills has received increasing attention. Skills lab learning and tutor-led teaching have become an inherent part of all medical curricula at German medical faculties. In 2011, the initial basis for the classification of clinical skills in medical school was created by the German Association for Medical Education (GMA) Committee's consensus statement on procedural skills. As a recommendation for medical curricula, the National Competency-based Catalogue of Learning Objectives (NKLM, 2015) lists procedural skills according to their curriculum integration and competency level. However, classification in regard to the perceived complexity, relevance, or teaching competency is still lacking. Objective: The present study aimed to investigate procedural skills taught at the Medical Faculty of Heidelberg in regard to their complexity, relevance, and required teaching skills. To achieve this aim (1) the specific procedural skills in terms of complexity, that is, the degree of difficulty, and (2) the perceived relevance of taught procedural skills for studying and subsequent medical profession as well as (3) the personal preparation and required teaching skills were assessed in medical teachers, tutors and students. Method: During the winter semester 2014/2015, the evaluations of all medical teachers, student tutors, and medical students in the skills lab teaching departments of internal medicine, surgery, pediatrics, gynecology, and otorhinolaryngology at the Medical Faculty of Heidelberg were assessed via a quantitative cross-sectional questionnaire survey using 7-point Likert scales. 
The questionnaire comprised four item sets concerning 1) demographic details, 2) procedural skill complexity, 3) practical relevance, and 4) required preparation and teaching skills. Descriptive, quantitative analysis was used for questionnaire data. Results: The survey included the data from 17 of 20 physicians (return rate: 85 %), 10 of 10 student tutors (return rate: 100 %) and a total of 406 of 691 students (return rate: 58.8 %). In terms of complexity and relevance, no major differences between medical teachers, tutors, and students were found. Procedural skills, assigned to the competence level of final year medical education in the NKLM, were also perceived as more complex than other skills. All skills were considered equally relevant, and student tutors were seen to have equally competent teaching skills as experienced medical teachers. Conclusion: This study largely underpins the NKLM's classification of procedural skills. The complexity assessment allows for conclusions to be drawn as to which skills are perceived to require particularly intensive training. Finally, our study corroborates extant findings that student tutors are apt at teaching procedural skills if they have been properly trained. abstract_id: PUBMED:23972052 Using peer-assisted learning to teach basic surgical skills: medical students' experiences. Standard medical curricula in the United Kingdom (UK) typically provide basic surgical-skills teaching before medical students are introduced into the clinical environment. However, these sessions are often led by clinical teaching fellows and/or consultants. Depending on the roles undertaken (e.g., session organizers, peer tutors), a peer-assisted learning (PAL) approach may afford many benefits to teaching surgical skills. At the University of Keele's School of Medicine, informal PAL is used by the Surgical Society to teach basic surgical skills to pre-clinical students. As medical students who assumed different roles within this peer-assisted model, we present our experiences and discuss the possible implications of incorporating such sessions into UK medical curricula. Our anecdotal evidence suggests that a combination of PAL sessions--used as an adjunct to faculty-led sessions--may provide optimal learning opportunities in delivering a basic surgical skills session for pre-clinical students. Answer: The influence of teaching setting on medical students' clinical skills development has been a topic of interest, particularly as medical schools have expanded the use of community-based preceptors for clinical teaching. A study conducted at Dartmouth Medical School evaluated the clinical skills of two cohorts of second-year medical students at the end of a two-year longitudinal clinical course. The study compared students' objective structured clinical examination (OSCE) scores across different precepting sites: academic medical center (AMC) clinics, AMC-affiliated office-based clinics, and community-based primary care offices. The results showed that students' performances did not differ significantly among clinical learning sites, with overall scores in the cardiopulmonary case being 61.2% in AMC clinics, 63.3% in office-based AMC-affiliated clinics, and 64.9% in community-based offices (p = .20). Similarly, scores in the endocrine case did not differ significantly among the sites (PUBMED:16306293). 
These findings suggest that the educational setting, whether it be an AMC, AMC-affiliated office, or community-based office, does not influence early clinical skill development in medical students. Therefore, it is appropriate to use clinicians for early clinical training in any of these settings. This challenges the notion that the academic medical center is the "gold standard" for clinical skills development, indicating that a variety of clinical learning environments can effectively contribute to the education of medical students.
Instruction: Do nurses have worse pregnancy outcomes than non-nurses? Abstracts: abstract_id: PUBMED:24935873 Do nurses have worse pregnancy outcomes than non-nurses? Background: Nurses encounter multiple occupational exposures at work which may harm their reproductive health. The purpose of the study was to compare pregnancy complications and outcomes including cesarean deliveries, tocolysis, miscarriage, and preterm labor between female nurses and comparable women who were not nurses in Taiwan. Methods: This nationwide population-based study was performed using the National Health Insurance Research Database from 1997 to 2008. We identified 3,656 pregnancies among 2,326 nurses and 111,889 pregnancies among 74,919 non-nurses. A generalized estimating equation was used to compare risks between the two groups. Results: The rates of tocolysis (28.6 vs 22.3%), miscarriage (6.0 vs 5.3%), and preterm labor (8.1 vs 4.4%) were significantly higher among nurses than non-nurses. After adjustment for background differences, nurses had significantly higher risks for cesarean section (adjusted OR 1.12 [95% confidence interval (CI) 1.03-1.22]), tocolysis (OR 1.18 [95% CI 1.09-1.29]), and preterm labor (OR 1.46 [95% CI 1.28-1.67]) than non-nurses. Conclusions: Nurses are at higher risk for cesarean section, tocolysis, and preterm labor than non-nurses. Occupational exposure related to these adverse pregnancy outcomes should be examined. Strategies to decrease the risks should be developed to improve reproductive health among nurses. abstract_id: PUBMED:35751907 International Nurses Day: Nurses can change the world, given the investment and support they deserve. Every year on International Nurses Day (IND), the International Council of Nurses (ICN) releases a report with resources that can be used by nurses around the world. This year's report responds to the World Health Organization's (WHO) latest strategy for nursing and midwifery, and the findings of the State of the World's Nursing report. In this article, ICN Chief Executive Officer Howard Catton describes the main points of the IND report and recommends that nurses everywhere read it, alongside the WHO strategy, and use that information to increase their knowledge of the global nursing workforce and influence global healthcare policy. It is a toolkit that can assist nurses, other healthcare providers, governments and international organisations to turn global strategy into meaningful local action and improved clinical practice on the ground. abstract_id: PUBMED:37907929 Associations between maternal occupational exposures and pregnancy outcomes among Chinese nurses: a nationwide study. Background: Several studies have provided evidence about adverse pregnancy outcomes of nurses involved in occupational exposure. However, the pregnancy outcomes among nurses in middle-income countries are not well demonstrated. The main aim of this study is to present the prevalence and influencing factors of pregnancy outcomes among female nurses in China. Methods: We included 2243 non-nurse health care workers, and 4230 nurses in this national cross-sectional study in China. Information on occupational exposures and pregnancy outcomes was collected using a face-to-face investigation. Odds ratios (ORs) were estimated through logistic regression. Results: The proportion of threatened abortion, spontaneous abortion, and stillbirth of female nurses was 2.6%, 7%, and 2.1%, respectively. 
We found an increased risk of threatened abortion among nurses with overtime work (OR = 1.719, 95% CI 1.158-2.550). The risk of threatened abortion and spontaneous abortion was elevated among nurses handling disinfectant (OR = 2.293 and 1.63, respectively). We found a nearly twofold increased risk of premature birth (OR = 2.169, 95% CI 1.36-3.459) among nurses handling anti-cancer drugs. Conclusions: Our findings suggested that maternal occupational exposures might be associated with the risk of adverse pregnancy outcomes among female nurses in China. We recommend that policy-markers and hospital managers work together to reduce exposure to occupational hazards and improve pregnancy outcomes among female nurses. abstract_id: PUBMED:17099323 Nurses with dermal exposure to antineoplastic drugs: reproductive outcomes. Background: Nurses and other hospital workers are exposed to antineoplastic drugs during daily activities. Previous studies suggest that antineoplastic drugs at occupational exposure levels may be toxic to reproduction, but these studies are not consistent or conclusive. Methods: Self-administered questionnaires were completed by 4393 exposed and nonexposed nurses employed between 1990 and 1997 (79% response). Questions were asked about pregnancy outcome, work-related exposures, and lifestyle. Exposure to antineoplastic drugs was estimated using task-based dermal exposure measurements and self-reported task frequencies. Time to pregnancy was modeled using survival analysis, and odds ratios (ORs) with 95% confidence intervals (CIs) were calculated for other reproductive outcomes using multiple logistic regression analysis. Associations were further explored by nonparametric regression modeling. Results: Nurses highly exposed to antineoplastic drugs took longer to conceive than referent nurses (adjusted hazard ratio = 0.8; CI = 0.6-0.9). Exposure to antineoplastic drugs was associated with premature delivery (OR per unit increase in ln[exposure] = 1.08; CI = 1.00-1.17) and low birth weight (OR per unit increase in ln[exposure] = 1.11; 1.01-1.21). Penalized smoothed spline plots corroborated these log-linear relations. Spontaneous abortion, stillbirth, congenital anomalies, and sex of offspring appeared not to be related to exposure to antineoplastic drugs. Conclusion: Antineoplastic drugs may reduce fertility and increase poor neonatal outcomes among occupationally exposed oncology nurses. abstract_id: PUBMED:37494686 Factors Related to Burnout Among Perinatal Home Visiting Nurses. Purpose: Risks for burnout among nurses have been understudied in public and community health settings. This qualitative descriptive study aimed to describe the experiences of nurses working in a perinatal community health program for high-risk families with a focus on challenges and risks for burnout. Methods: We conducted field observation and semistructured individual interviews in 2019 with nurses at 2 sites implementing Prenatal Care Coordination (PNCC), a Medicaid benefit for high-risk pregnant people and infants in Wisconsin. A larger parent study assessed the implementation of PNCC overall through a thematic analysis process. Initial deductive coding was guided by the Consolidated Framework for Implementation Research. 
Findings: Emergent themes demonstrated that PNCC nurses face several significant barriers and known risks for burnout, including role strain related to scope of practice and training, discordant racial and socioeconomic identities leading to role conflict, and low control combined with high psychological demand. Conclusions: Given that the COVID-19 pandemic has exacerbated burnout risk among nurses in hospital settings, and that nurses addressing social determinants of health may be at increased risk for burnout, these findings represent a critical perspective on the experiences and needs of perinatal nurses in public and community health settings. abstract_id: PUBMED:36999769 Barriers to nurses health advocacy role. Background: Speaking up to safeguard patients is a crucial ethical and moral obligation for nurses, but it is also a difficult and potentially dangerous component of nursing work. Health advocacy is gaining impetus in the medical literature, despite being hampered by barriers resulting in many nurses in Ghana remaining mute when faced with advocacy-required situations. We explored situations that thwart nurses from performing their health advocacy role. Research Question: What would cause nurses to take no action when they witness situations that require them to act as health advocates for their clients or communities? Research Design: An inductive, descriptive qualitative design was used to collect and analyse data on barriers that prevent nurses from practising their health advocacy role in Ghana. Individual one-on-one in-depth interviews were conducted using a semi-structured interview guide. The data were analysed using qualitative content analysis. Participants And Research Context: Twenty-four professional nurses and midwives registered with the Nursing and Midwifery Council were recruited from three regional hospitals in Ghana. These public hospitals were chosen from the upper, middle and coastal regions. Ethical Considerations: The UKZN Ethics Review Committee in South Africa and the GHS Ethics Review Committee in Ghana both gave their approval for this study. Findings: Intrapersonal barriers, interpersonal barriers, and structural barriers emerged as major obstacles that nurses experience when performing their health advocacy role. Conclusions: Barriers to health advocacy have undermined nurses' ability to function as health advocates and are preventing them from utilising their health advocacy position in nursing practise. Giving nursing students positive role models in the classroom and in the clinic can help them become more effective health advocates. abstract_id: PUBMED:34019846 Experiences of Nurses Who Support Parents During Perinatal Death. Objective: To describe the experience of nurses who support parents during perinatal death, particularly how perinatal death influences the nurse, how the nurse feels when caring for a suffering parent, and how the perinatal death contributes to the nurse's understanding of self. Design: Descriptive qualitative. Setting: Four regions of Quebec, Canada. Participants: Twenty-five nurses from different perinatal clinical and community backgrounds who worked with parents who experienced perinatal death. Methods: We conducted individual, semistructured interviews during which the participants were given the opportunity to describe what they felt and experienced when they supported parents who experienced perinatal death. 
Results: Analysis of the data showed three main themes related to the nurse's experience of perinatal death: Unrealistic Self-Expectations, Self-Denial, and Negative Self-Awareness. Conclusion: Our results suggest that during perinatal death, nurses want to feel useful and to relieve the suffering of parents. A clear understanding of this experience can help nurses to better understand their own experiences. abstract_id: PUBMED:32886235 The impact of nurses' and physicians' shift change on obstetrical outcomes. Purpose: To evaluate the effect of the time around nurses' shift change and on-call physicians' shift change on obstetrical outcomes. Methods: A retrospective study of women who had an attempt of labor in a single medical center, January 2006-December 2017. Obstetrical outcomes were compared between the time around nurses' shift change (6:00-8:00, 14:00-16:00, and 22:00-00:00) and the rest of the day, and between the time around on-call physicians' shift change (6:00-8:00, 14:00-16:00) and the rest of the day. Results: 32,861 women were included, 7826 deliveries occurred during nurses' shift-change, and 25,035 deliveries occurred during the rest of the day. The groups had similar general and obstetrical characteristics, with no statistical difference in cesarean delivery rate (10% vs. 9.8%, P = 0.45) (Table 1). Nurses' shift change had no measurable effect on obstetrical outcomes, including induction of labor, preterm labor, 5-min-Apgar score and cord pH value, except PPH which was less likely to occur during nurses' shift change period (3.8% vs. 4.4%, P = 0.045) (Table 2). From 32,861 deliveries, 5155 deliveries occurred during on-call physicians' shift-change, and 27,706 deliveries occurred during the rest of the day. Induction/augmentation of labor and epidural analgesia were less likely to happen during on-call physicians' shift change (34.4% vs. 38%, P < 0.0001, 59.6% vs. 61.8%, P = 0.003, respectively) (Table 3). The two groups had similar obstetrical outcomes, without statistical difference in cesarean delivery rate (10% vs. 9.8%, P = 0.63) (Table 4).
Table 1. General and obstetric characteristics of women giving birth during the time of nurses' shift change (n = 7826) versus during the rest of the day (n = 25,035). Data presented as mean ± SD or N (%).
Maternal age, y: 30.3 ± 5.1 vs. 30.2 ± 5.2; P = 0.09
Gestational age at birth (weeks): 39.7 ± 1.09 vs. 39.8 ± 1.10; P = 0.55
Nulliparity: 2077 (35%) vs. 7067 (37%); P = 0.01
Induction/augmentation of labor: 2905 (37) vs. 9368 (38); P = 0.62
Epidural analgesia: 4746 (61) vs. 15,396 (62); P = 0.16
Neonatal birth weight, g: 3340 ± 422 vs. 3330 ± 423; P = 0.06
Table 2. Maternal and neonatal adverse outcomes of women giving birth during the time of nurses' shift change (n = 7826) versus during the rest of the day (n = 25,035). Data presented as N (%).
Vacuum assisted delivery: 615 (7.9) vs. 2002 (8.0); P = 0.69
Cesarean delivery: 788 (10) vs. 2443 (9.8); P = 0.45
Postpartum hemorrhage: 294 (3.8) vs. 1089 (4.4); P = 0.045
Third- and fourth-degree perineal laceration: 106 (1.4) vs. 372 (1.5); P = 0.51
5-min Apgar score < 7: 39 (0.5) vs. 139 (0.6); P = 0.65
Umbilical pH < 7.2: 170 (23) vs. 580 (23); P = 0.96
Prolonged second stage: 190 (2.5) vs. 559 (2.2); P = 0.22
Maternal and fetal composite adverse outcome*: 1309 (16.7) vs. 4219 (16.9); P = 1.00
*Defined as the presence of any of the following: vacuum delivery, cesarean delivery, prolonged second stage, postpartum hemorrhage, third- and fourth-degree perineal laceration, 5-min Apgar score < 7, and umbilical cord pH < 7.2.
Table 3. General and obstetric characteristics of women giving birth during the time of the on-call physicians' shift change (n = 5155) versus during the rest of the day (n = 27,706). Data presented as mean ± SD or N (%).
Maternal age, years: 30.3 ± 5.1 vs. 30.2 ± 5.2; P = 0.38
Gestational age at birth (weeks): 39.8 ± 1.09 vs. 39.8 ± 1.10; P = 0.95
Nulliparity: 1303 (33.4) vs. 7841 (37); P < 0.0001
Induction/augmentation of labor: 1769 (34.3) vs. 10,504 (38); P < 0.0001
Epidural analgesia: 3067 (59.6) vs. 17,075 (61.8); P = 0.003
Neonatal birth weight, g: 3345 ± 416 vs. 3330 ± 424; P = 0.019
Table 4. Maternal and neonatal adverse outcomes of women giving birth during the time of the on-call physicians' shift change (n = 5155) versus during the rest of the day (n = 27,706). Data presented as N (%).
Vacuum assisted delivery: 397 (7.7) vs. 2220 (8.0); P = 0.45
Cesarean delivery: 517 (10.0) vs. 2714 (9.8); P = 0.63
Postpartum hemorrhage: 209 (4.1) vs. 1174 (4.3); P = 0.54
Third- and fourth-degree perineal laceration: 67 (1.3) vs. 411 (1.5); P = 0.31
5-min Apgar score < 7: 22 (0.5) vs. 156 (0.6); P = 0.30
Umbilical pH < 7.2: 94 (20.3) vs. 656 (23.3); P = 0.15
Prolonged second stage: 127 (2.5) vs. 622 (2.3); P = 0.36
Maternal and fetal composite adverse outcome*: 852 (16.5) vs. 4676 (16.9); P = 1.00
*Defined as the presence of any of the following: vacuum delivery, cesarean delivery, prolonged second stage, postpartum hemorrhage, third- and fourth-degree perineal laceration, 5-min Apgar score < 7, and umbilical cord pH < 7.2.
CONCLUSION: Nurses' shift change and on-call physicians' shift change does not appear to be associated with an increase in adverse maternal or neonatal outcomes. abstract_id: PUBMED:34324400 Perinatal Loss: Experiences of Midwives and Nurses. The aim was to determine the experiences of Turkish midwives and nurses when caring for families with perinatal loss.
It was a descriptive and qualitative study. One-to-one interviews were conducted with ten midwives and five nurses using a semi-structured interview form. It was found that the nurses and midwives experienced pain and grief while caring for women who had perinatal losses, and they tried to relieve these feelings by using positive thinking and prayer. The nurses and midwives stated that they were unsure how to approach women or families experiencing perinatal loss, and they needed mentoring and training. The results showed that it would be beneficial to plan training for nurses and midwives and to conduct mentoring for those experiencing difficulties. For nurses and midwives frequently facing perinatal loss, there will be benefit in planning grief care training at regular intervals and mentoring for those experiencing difficulties. abstract_id: PUBMED:32920100 Prepregnancy handling of antineoplastic drugs and risk of miscarriage in female nurses. Purpose: To examine the association between antineoplastic drug (AD) handling and risk of miscarriage. Methods: Nurses' Health Study-3 participants self-reported AD administration and engineering controls (ECs) and personal protective equipment (PPE) use at baseline. We estimated the hazard ratio (HR) of miscarriage in relation to baseline AD handling using multivariable Cox proportional regression. Results: Overall, 2440 nurses reported 3327 pregnancies, with 550 (17%) ending in miscarriage. Twelve percent of nurses self-reported currently handling AD and 28% previously handling AD. Compared with nurses who never handled AD, nurses who handled AD at baseline had an adjusted HR of miscarriage of 1.26 (95% confidence interval [CI], 0.97-1.64). This association was stronger after 12 weeks' gestation (HR=2.39 [95% CI, 1.13-5.07]). Nurses who did not always use gloves had HR of 1.51 (95% CI, 0.91-2.51) compared with 1.19 (95% CI, 0.89-1.60) for those always using gloves; nurses who did not always use gowns had HR of 1.32 (95% CI, 0.95-1.83) compared with 1.19 (95% CI, 0.81-1.75) for nurses always using gowns. Conclusions: We observed a suggestive association between AD handling and miscarriage, particularly among nurses who did not consistently use PPE and EC, with stronger associations for second trimester losses. Answer: Yes, nurses appear to have worse pregnancy outcomes than non-nurses. Studies have shown that nurses encounter multiple occupational exposures at work which may harm their reproductive health. A nationwide population-based study in Taiwan found that nurses had significantly higher risks for cesarean section, tocolysis, and preterm labor than non-nurses (PUBMED:24935873). Similarly, a nationwide study in China suggested that maternal occupational exposures might be associated with the risk of adverse pregnancy outcomes among female nurses, with increased risks of threatened abortion, spontaneous abortion, stillbirth, and premature birth associated with certain occupational exposures such as handling disinfectants and anti-cancer drugs (PUBMED:37907929). Another study indicated that nurses highly exposed to antineoplastic drugs took longer to conceive and had associations with premature delivery and low birth weight (PUBMED:17099323).
Furthermore, a study within the Nurses' Health Study-3 cohort found a suggestive association between antineoplastic drug handling and miscarriage, especially among nurses who did not consistently use personal protective equipment and engineering controls, with stronger associations for second-trimester losses (PUBMED:32920100). These findings highlight the need for strategies to decrease the risks and improve reproductive health among nurses.
Instruction: Early thromboembolic prophylaxis in patients with blunt solid abdominal organ injuries undergoing nonoperative management: is it safe? Abstracts: abstract_id: PUBMED:24928334 Early thromboembolic prophylaxis in patients with blunt solid abdominal organ injuries undergoing nonoperative management: is it safe? Background: The aim of this study was to compare the safety of early (≤48 hours), intermediate (48 to 72 hours), and late (≥72 hours) venous thromboembolism prophylaxis in patients with blunt abdominal solid organ injury managed nonoperatively. Methods: We performed a 6-year (2006 to 2011) retrospective review of all trauma patients with blunt abdominal solid organ injuries. Patients were matched using propensity score matching in a 2:1:1 (early:intermediate:late) for age, gender, systolic blood pressure, Glasgow Coma Scale, Injury Severity Score, and type and grade of organs injured. Our primary outcome measures were: hemorrhage complications and need for intervention (operative intervention and/or angioembolization). Results: A total of 116 patients (58 early, 29 intermediate, and 29 late) were included. There were no differences in age (P = .5), Injury Severity Score (P = .6), type (P = .1), and grade of injury of the organ (P = .6) between the 3 groups. There were 67 liver (43.2%), 63 spleen (40.6%), 49 kidney (31.6%), and 24 multiple solid organ (15.4%) injuries. There was no difference in operative intervention (P = .8) and postprophylaxis blood transfusion (P = .3) between the 3 groups. Conclusions: Early enoxaparin-based anticoagulation may be a safe option in trauma patients with blunt solid organ injury. This study showed no significant correlation between early anticoagulation and development of bleeding complications. abstract_id: PUBMED:36332306 Thromboprophylaxis Timing After Blunt Solid Organ Injury: A Systematic Review and Meta-analysis. Introduction: Trauma patients with blunt abdominal solid organ injuries are at high risk for venous thromboembolism (VTE), but the optimal time to safely administer chemical thromboprophylaxis is controversial, especially for patients who are managed nonoperatively due to increased risk of hemorrhage. We sought to compare failure of nonoperative management (NOM) and VTE events based on timing of chemical thromboprophylaxis initiation. Methods: A systematic review was conducted in PubMed and Embase databases. Studies were included if they evaluated timing of initiation of chemical thromboprophylaxis in trauma patients who underwent NOM of blunt solid organ injuries. Outcomes included failure of NOM and incidence of VTE. A random-effects meta-analysis was performed comparing patients who received late (>48 h) versus early thromboprophylaxis initiation. Results: Twelve retrospective cohort studies, comprising 21,909 patients, were included. Three studies, including 6375 patients, provided data on adjusted outcomes. Pooled adjusted analysis demonstrated no difference in failure of NOM in patients receiving late versus early thromboprophylaxis (odds ratio [OR] 0.92, 95% confidence interval [CI]:0.4-2.14). When including all unadjusted studies, even those at high risk of bias, there remained no difference in failure of NOM (OR 1.16, 95% CI:0.72-1.86). In the adjusted analysis for VTE events, which had 6259 patients between two studies, patients receiving late chemical thromboprophylaxis had a higher risk of VTE compared with those who received early thromboprophylaxis (OR 1.89, 95% CI:1.15-3.12).
Conclusions: Based on current observational evidence, initiation of prophylaxis before 48 h is associated with lower VTE rates without higher risk of failure of NOM. abstract_id: PUBMED:21217492 Thromboembolic prophylaxis with low-molecular-weight heparin in patients with blunt solid abdominal organ injuries undergoing nonoperative management: current practice and outcomes. Background: Low-molecular-weight heparins (LMWHs) are effective in preventing thromboembolic complications after trauma. In the nonoperative management (NOM) of blunt solid abdominal organ injuries, the timing of the administration of LMWH remains controversial because of the unknown risk for bleeding. Methods: Retrospective study including patients aged 15 years or older who sustained blunt splenic, liver, and/or kidney injuries from January 2005 to December 2008. Patients were stratified according to the type and severity of organ injuries. NOM failure rates and blood transfusion requirements were compared between patients who got LMWH early (≤3 days), patients who got LMWH late (>3 days), and patients who did not receive LMWH. Results: Overall, 312 (63.8%) patients with solid organ injuries had NOM attempted. There were 154 splenic, 144 liver, and 65 kidney injuries (1.2 organs injured per patient). Forty-one patients (13.2%) received LMWH early, 70 patients (22.4%) received LMWH late, and 201 (64.4%) patients did not receive LMWH. The early LMWH group was less severely injured compared with the late LMWH group. However, the distribution of the risk factors for failure of NOM (high-grade injury, large amount of hemoperitoneum, and contrast extravasation) was similar between the three LMWH groups. Overall, 17 of 312 patients (5.4%) failed NOM (7.8% spleen, 2.1% liver, and 3.1% kidney). All but one failure occurred before LMWH administration. After adjustment for demographic differences, the overall blood transfusion requirements for the early LMWH group were significantly lower when compared with patients with late LMWH administration (3.0±5.3 units vs. 6.4±9.9 units; adjusted p=0.027). Pulmonary embolism and deep venous thrombosis occurred in four patients. The mortality rate for patients with splenic, liver, and kidney injuries was 3.2% and did not differ with LMWH application. Conclusion: In patients with solid abdominal organ injuries undergoing NOM, early use of LMWH does not seem to increase failure rates or blood transfusion requirements. abstract_id: PUBMED:33048907 Timing of thromboprophylaxis in patients with blunt abdominal solid organ injuries undergoing nonoperative management. Background: Decision making regarding the optimal timing for initiating thromboprophylaxis in patients with blunt abdominal solid organ injuries (BSOIs) remains ill-defined, with no guidelines defining optimal timing. In this study, we aimed to evaluate the relationship of the timing of thromboprophylaxis with thromboembolic and bleeding complications in the setting of BSOIs. Methods: A retrospective analysis of the Trauma Quality Improvement Program database was performed between 2013 and 2016. All patients with isolated BSOIs (liver, spleen, pancreas, or kidney, Abbreviated Injury Scale score <3 in other regions) who underwent initial nonoperative management (NOM) were included. Patients were divided into three groups (early, <48 hours; intermediate, 48-72 hours; and late, >72 hours) based on timing of thromboprophylaxis initiation.
Primary outcomes were rates of thromboembolism and bleeding after thromboprophylaxis initiation. Results: Of the 25,118 patients with isolated BSOIs, 3,223 met the inclusion criteria (age, 38.7 ± 17.3 years; males, 2,082 [64.6%]), among which 1,832 (56.8%) received early thromboprophylaxis, 703 (21.8%) received intermediate thromboprophylaxis, and 688 (21.4%) received late thromboprophylaxis. Late thromboprophylaxis initiation was independently associated with a higher likelihood of both deep vein thrombosis (odds ratio [OR], 3.15; 95% confidence interval [CI], 1.68-5.91, p < 0.001) and pulmonary embolism (OR, 4.29; 95% CI, 1.95-9.42; p < 0.001). Intermediate thromboprophylaxis initiation was independently associated with a higher likelihood of deep venous thrombosis (OR, 2.38; 95% CI, 1.20-4.74; p = 0.013), but not pulmonary embolism (p = 0.960) compared with early initiation. Early (but not intermediate) thromboprophylaxis initiation was independently associated with a higher likelihood of bleeding (OR, 2.05; 95% CI, 1.11-2.18; p = 0.023), along with a history of diabetes mellitus, splenic, and high-grade liver injuries. Conclusion: Early thromboprophylaxis should be considered in patients with BSOIs undergoing nonoperative management who are at low likelihood of bleeding. An intermediate delay (48-72 hours) of thromboprophylaxis should be considered for patients with diabetes mellitus, splenic injuries, and Grades 3 to 5 liver injuries. Level Of Evidence: Therapeutic, Level IV. abstract_id: PUBMED:26820318 Very early initiation of chemical venous thromboembolism prophylaxis after blunt solid organ injury is safe. Background: The optimal timing of initiating low-molecular weight heparin (LMWH) in patients who have undergone nonoperative management (NOM) of blunt solid organ injuries (SOIs) remains controversial. We describe the safety of early initiation of chemical venous thromboembolism (VTE) prophylaxis among patients undergoing NOM of blunt SOIs. Methods: We retrospectively studied severely injured adults who sustained blunt SOI without significant intracranial hemorrhage and underwent an initial NOM at a Canadian lead trauma hospital between 2010 and 2014. Safety was assessed based on failure of NOM, defined as the need for operative intervention, in patients who received early (< 48 h) or late LMWH (≥ 48 h, or early discharge [< 72 h] without LMWH). Results: We included 162 patients in our analysis. Most were men (69%), and the average age was 42 ± 18 years. The median injury severity score was 17, and splenic injuries were most common (97 [60%], median grade 2), followed by liver (57 [35%], median grade 2) and kidney injuries (31 [19%], median grade 1). Combined injuries were present in 14% of patients. A total of 78 (48%) patients received early LMWH, while 84 (52%) received late LMWH. The groups differed only in percent of high-grade splenic injury (14% v. 32%). Overall, 2% of patients failed NOM, none after receiving LMWH. Semielective angiography was performed in 23 (14%) patients. The overall rate of confirmed VTE on imaging was 1.9%. Conclusion: Early initiation of medical thromboembolic prophylaxis appears safe in select patients with isolated SOI following blunt trauma. A prospective multicentre study is warranted. abstract_id: PUBMED:27942848 Thromboembolic Prophylaxis with Heparin in Patients with Blunt Solid Organ Injuries Undergoing Non-operative Treatment.
Background: Patients with blunt solid organ injuries (SOI) are at risk for venous thromboembolism (VTE), and VTE prophylaxis is crucial. However, little is known about the safety of early prophylactic administration of heparin in these patients. Methods: This is a retrospective study including adult trauma patients with SOI (liver, spleen, kidney) undergoing non-operative management (NOM) from 01/01/2009 to 31/12/2014. Three groups were distinguished: prophylactic heparin (low molecular weight heparin or low-dose unfractionated heparin) ≤72 h after admission ('early heparin group'), >72 h after admission ('late heparin group'), and no heparin ('no heparin group'). Patient and injury characteristics, transfusion requirements, and outcomes (failed NOM, VTE, and mortality) were compared between the three groups. Results: Overall, 179 patients were included; 44.7% in the 'early heparin group,' 34.6% in the 'late heparin group,' and 20.8% in the 'no heparin group.' In the 'late heparin group,' the ISS was significantly higher than in the 'early' and 'no heparin groups' (median 29.0 vs. 17.0 vs. 19.0; p < 0.001). The overall NOM failure rate was 3.9%. Failed NOM was significantly more frequent in the 'no heparin group' compared to the 'early' and 'late heparin groups' (10.8 vs. 3.2 vs. 1.3%; p = 0.043). In the 'early heparin group' 27.5% of patients suffered from a high-grade SOI; none of these patients failed NOM. Mortality did not differ significantly. Although not statistically significant, VTE were more frequent in the 'no heparin group' compared to the 'early' and 'late heparin groups' (10.8 vs. 4.8 vs. 1.3%; p = 0.066). Conclusion: In patients with SOI, heparin was administered early in a high percentage of patients and was not associated with an increased NOM failure rate or higher in-hospital mortality. abstract_id: PUBMED:28941544 Management Issues in Critically Ill Pediatric Patients with Trauma. The management of critically ill pediatric patients with trauma poses many challenges because of the infrequency and diversity of severe injuries and a paucity of high-level evidence to guide care for these uncommon events. This article discusses recent recommendations for early resuscitation and blood component therapy for hypovolemic pediatric patients with trauma. It also highlights the specific types of injuries that lead to severe injury in children and presents challenges related to their management. abstract_id: PUBMED:38393797 Managing Selected Chronic Conditions in Hospitalized Patients. The management of chronic illnesses should continue during hospitalization. Some chronic conditions require immediate intervention, whereas intensification of therapy for other conditions may be delayed until after discharge. Factors such as pain, anxiety, poor sleep hygiene, and concurrent illness can result in a transient elevation of blood pressure. Acute lowering of blood pressure in hospitalized patients who do not have target-organ damage is not recommended and may lead to harm. If treatment is needed, intravenous antihypertensive agents should be avoided. Patients with diabetes mellitus require continued management of their blood glucose while hospitalized. Noninsulin agents are typically discontinued. Blood glucose levels should be managed using basal, prandial, and/or correction insulin. During hospitalization, conservative blood glucose targets (140 to 180 mg per dL) are preferred vs.
lower targets to reduce length of stay, mortality, and the risk of hypoglycemic events in critically ill patients. Alcohol use disorder is common and hospitalization for other conditions necessitates identification and management of alcohol withdrawal syndrome. The mainstay of therapy for alcohol withdrawal syndrome is benzodiazepines; however, phenobarbital is an alternative treatment option. The risk of venous thromboembolic disease is significantly increased for hospitalized patients. Venous thromboprophylaxis is recommended for all but low-risk patients. Pharmacologic prophylaxis with subcutaneous low-molecular-weight heparin is preferred; mechanical prophylaxis is an alternative for patients who are at high risk of bleeding or have contraindications to anticoagulation. abstract_id: PUBMED:27841110 Pre-treatment clinical assessment in head and neck cancer: United Kingdom National Multidisciplinary Guidelines. This is the official guideline endorsed by the specialty associations involved in the care of head and neck cancer patients in the UK. This paper provides recommendations on the pre-treatment clinical assessment of patients presenting with head and neck cancer. Recommendations • Comorbidity data should be collected as it is important in the analysis of survival, quality of life and functional outcomes after treatment as well as for comparing results of different treatment regimens and different centres. (R) • Patients with hypertension of over 180/110 or associated target organ damage should have antihypertensive medication started pre-operatively as per British Hypertension Society guidelines. (R) • Rapidly correcting pre-operative hypertension with beta blockade appears to cause higher mortality due to stroke and hypotension and should not be used. (R) • Patients with poorly controlled or unstable ischaemic heart disease should be referred for cardiology assessment pre-operatively. (G) • Patients within one year of drug eluting stents should be discussed with the cardiologist who was responsible for their percutaneous coronary intervention pre-operatively with regard to cessation of antiplatelet medication due to risk of stent thrombosis. (G) • Patients with multiple recent stents should be managed in a centre with access to interventional cardiology. (G) • Surgery after myocardial infarction should be delayed if possible to reduce mortality risk. (R) • Patients with critical aortic stenosis (AS) should be considered for pre-operative intervention. (G) • Clopidogrel should be discontinued 7 days pre-operatively; warfarin should be discontinued 5 days pre-operatively. (R) • Patients with thromboembolic disease or artificial heart valves require heparin therapy to bridge peri-operative warfarin cessation; this should start 2 days after the last warfarin dose. (R) • Cardiac drugs other than angiotensin-converting enzyme inhibitors and angiotensin II antagonists should be continued including on the day of surgery. (R) • Angiotensin-converting enzyme inhibitors and angiotensin II antagonists should be withheld on the day of surgery unless they are for the treatment of heart failure. (R) • Post-operative care in a critical care area should be considered for patients with heart failure or significant diastolic dysfunction. (R) • Patients with respiratory disease should have their peri-operative respiratory failure risk assessed and critical care booked accordingly. (G) • Patients with severe lung disease should be assessed for right heart disease pre-operatively.
(G) • Patients with pulmonary hypertension and right heart failure will be at extraordinarily high risk and should have the need for surgery re-evaluated. (G) • Perioperative glucose readings should be kept within 4-12 mmol/l. (R) • Patients with a high HbA1C facing urgent surgery should have their diabetes management assessed by a diabetes specialist. (G) • Insulin-dependent diabetic patients must not omit insulin for more than one missed meal and will therefore require an insulin replacement regime. (R) • Patients taking more than 5 mg of prednisolone daily should have steroid replacement in the peri-operative period. (R) • Consider proton pump therapy for patients taking steroids in the peri-operative phase if they fit higher risk criteria. (R) • Surgery within three months of stroke carries high risk of further stroke and should be delayed if possible. (R) • Patients with rheumatoid arthritis should have flexion/extension views assessed by a senior radiologist pre-operatively. (R) • Patients at risk of post-operative cognitive dysfunction and delirium should be highlighted at pre-operative assessment. (G) • Patients with Parkinson's disease (PD) must have enteral access so drugs can be given intra-operatively. Liaison with a specialist in PD is essential. (R) • Intravenous iron should be considered for anaemia in the urgent head and neck cancer patient. (G) • Preoperative blood transfusion should be avoided where possible. (R) • Where pre-operative transfusion is essential it should be completed 24-48 hours pre-operatively. (R) • An accurate alcohol intake assessment should be completed for all patients. (G) • Patients considered to have a high level of alcohol dependency should be considered for active in-patient withdrawal at least 48 hours pre-operatively in liaison with relevant specialists. (R) • Parenteral B vitamins should be given routinely on admission to alcohol-dependent patients. (R) • Smoking cessation, commenced preferably six weeks before surgery, decreases the incidence of post-operative complications. (R) • Antibiotics are necessary for clean-contaminated head and neck surgery, but unnecessary for clean surgery. (R) • Antibiotics should be administered up to 60 minutes before skin incision, as close to the time of incision as possible. (R) • Antibiotic regimes longer than 24 hours have no additional benefit in clean-contaminated head and neck surgery. (R) • Repeat intra-operative antibiotic dosing should be considered for longer surgeries or where there is major blood loss. (R) • Local antibiotic policies should be developed and adhered to due to local resistance patterns. (G) • Individual assessment for venous thromboembolism (VTE) risk and bleeding risk should occur on admission and be reassessed throughout the patients' stay. (G) • Mechanical prophylaxis for VTE is recommended for all patients with one or more risk factors for VTE. (R) • Patients with additional risk factors of VTE and low bleeding risk should have low molecular weight heparin at prophylactic dose or unfractionated heparin if they have severe renal impairment. (R). abstract_id: PUBMED:34802415 An Insight to ncovid-19 associated coagulopathy. Several current studies have highlighted the high occurrence of coagulopathy in nCOVID-19 infection. 
The coronavirus often prompts hypercoagulability along with both microangiopathy and local thrombus development, and systemic coagulation dysfunction which causes large vessel thrombosis and key thromboembolic issues such as pulmonary embolism in seriously ill hospitalized patients. Based on recent reports, the most severely ill patients present with coagulopathy, and disseminated intravascular coagulation (DIC)-like massive intravascular clot formation is frequently seen in this cohort. Therefore, coagulation tests may be considered useful to discriminate severe cases of nCOVID-19. The clinical presentation of nCOVID-19-associated coagulopathy is primarily organ dysfunction, while hemorrhagic events are less frequent. Changes in hemostatic biomarkers, represented by increases in D-dimer and fibrin/fibrinogen degradation products, indicate that the essence of the coagulopathy is massive fibrin formation. Overall, the patients have an increase in venous and arterial thrombotic events, especially ICU patients. Routine thromboprophylaxis with low molecular weight heparin is recommended in all hospitalized patients to reduce the incidence of thrombosis. However, the importance of thromboembolic complications has not been extensively spotlighted, so the rationale of this article is to provide recent information about this severe complication. In this article the mechanism of coagulopathy, associated problems and possible therapeutics have been reviewed. Answer: Based on the provided abstracts, early thromboembolic prophylaxis in patients with blunt solid abdominal organ injuries undergoing nonoperative management appears to be safe. Several studies have investigated the timing of venous thromboembolism (VTE) prophylaxis initiation and its association with bleeding complications and failure of nonoperative management (NOM). A retrospective review of trauma patients with blunt abdominal solid organ injuries showed that early enoxaparin-based anticoagulation did not significantly correlate with the development of bleeding complications, suggesting that it may be a safe option in these patients (PUBMED:24928334). A systematic review and meta-analysis comparing the failure of NOM and VTE events based on the timing of chemical thromboprophylaxis initiation found that early initiation of prophylaxis (before 48 hours) was associated with lower VTE rates without a higher risk of failure of NOM (PUBMED:36332306). Another retrospective study indicated that early use of low-molecular-weight heparins (LMWHs) does not seem to increase failure rates or blood transfusion requirements in patients with blunt solid abdominal organ injuries undergoing NOM (PUBMED:21217492). A study analyzing the Trauma Quality Improvement Program database found that late thromboprophylaxis initiation was associated with a higher likelihood of both deep vein thrombosis and pulmonary embolism, while early initiation was associated with a higher likelihood of bleeding. However, early thromboprophylaxis should be considered in patients with a low likelihood of bleeding (PUBMED:33048907). Research on the very early initiation of chemical VTE prophylaxis after blunt solid organ injury found it to be safe in select patients with isolated solid organ injury following blunt trauma (PUBMED:26820318). A retrospective study including adult trauma patients with blunt solid organ injuries undergoing NOM found that early administration of heparin was not associated with an increased NOM failure rate or higher in-hospital mortality (PUBMED:27942848).
In summary, the evidence from these studies suggests that early thromboembolic prophylaxis in patients with blunt solid abdominal organ injuries undergoing nonoperative management is safe and does not increase the risk of bleeding complications or failure of NOM. However, individual patient risk factors should be considered when deciding on the timing of thromboprophylaxis initiation.
Instruction: Testicular prostheses in children: Is earlier better? Abstracts: abstract_id: PUBMED:1450853 Testicular prostheses: the patient's perception. There is a paucity of data to determine if the insertion of a testicular prosthesis is effective in overcoming the psychological effects of an absent testis. A review of 25 patients who had had testicular prostheses showed a high overall level of satisfaction in the 19 patients who were traced. This satisfaction was subject to some qualification. abstract_id: PUBMED:33959629 3D Printing Improved Testicular Prostheses: Using Lattice Infill Structure to Modify Mechanical Properties. Patients often opt for implantation of testicular prostheses following orchidectomy for cancer or torsion. Recipients of testicular prostheses report issues regarding firmness, shape, size, and position, aspects of which relate to current limitations of silicone materials used and manufacturing methods for soft prostheses. We aim to create a 3D printable testicular prosthesis which mimics the natural shape and stiffness of a human testicle using a lattice infill structure. Porous testicular prostheses were engineered with relative densities from 0.1 to 0.9 using a repeating cubic unit cell lattice inside an anatomically accurate testicle 3D model. These models were printed using a multi-jetting process with an elastomeric material and compared with current market prostheses using Shore hardness tests. Additionally, standard sized porous specimens were printed for compression testing to verify and match the stiffness to human testicle elastic modulus (E-modulus) values from literature. The resulting 3D printed testicular prosthesis of relative density between 0.3 and 0.4 successfully achieved a reduction of its bulk compressive E-modulus from 360 kPa to that of a human testicle at 28 kPa. Additionally, this is the first study to quantitatively show that current commercial testicular prostheses are too firm compared to native tissue. 3D printing allows us to create metamaterials that match the properties of human tissue to create customisable patient specific prostheses. This method expands the use cases for existing biomaterials by tuning their properties and could be applied to other implants mimicking native tissues. abstract_id: PUBMED:31501057 Surgical Outcomes of Neoscrotal Augmentation with Testicular Prostheses in Transgender Men. Introduction: Testicular prosthesis implantation may be used for neoscrotal augmentation in transgender men. Aim: Assess the clinical outcomes and risk factors for postoperative complications of this procedure in transgender men. Methods: All transgender men who underwent neoscrotal augmentation with testicular implants between January 1992 and December 2018 were retrospectively identified. A retrospective chart study was performed that recorded surgical characteristics and postoperative complications. Risk factors for complications were identified using uni- and multivariate analyses. Main Outcome Measure: Surgical outcomes included explantation due to infection, extrusion, discomfort, or leakage. Results: We identified 206 patients, and the following prostheses were placed: Dow Corning (n = 22), Eurosilicone (n = 2), Nagor (n = 205), Polytech (n = 10), Promedon (n = 105), Prometel (n = 22), Sebbin (n = 44), and unknown (n = 2). The mean clinical follow-up time was 11.5 ± 8.3 years. In 43 patients (20.8%), one or both prostheses were explanted due to infection, extrusion, discomfort, prosthesis leakage, or urethral problems.
Currently, scrotoplasty according to Hoebeke is the most frequently performed technique. Our review found that for this technique explantation occurred in 6 of 52 patients (11.5%). A history of smoking was a risk factor for postoperative infections and prosthesis explantation. In earlier years, larger prostheses were immediately placed at scrotal reconstruction; however, a trend can be seen toward smaller and lighter testicular prostheses and delayed implantation. Clinical Implications: Patients wanting to undergo this procedure can be adequately informed on postoperative outcomes. Strengths & Limitations: Strengths of this study include the number of patients, long clinical follow-up time, and completeness of data. Weaknesses of this study include its retrospective nature and the high variability of prostheses and surgical techniques used. Conclusion: Over the years, scrotoplasty techniques and testicular prosthesis preferences have changed. Explantation rates have dropped over the last decade. Pigot GLS, Al-Tamimi M, Ronkes B, et al. Surgical Outcomes of Neoscrotal Augmentation with Testicular Prostheses in Transgender Men. J Sex Med 2019;16:1664-1671. abstract_id: PUBMED:27264050 Testicular prostheses in children: Is earlier better? Introduction: The absence of a testis occurs for various reasons in children, but testicular prosthesis implantation in children is uncommon. The optimal time for prosthesis placement is still unclear, and its complication rate has been poorly studied in children. Objective: The aim of this study was to determine the risk factors of complications in cases of testicular prosthesis implantation in children. Study Design: A monocentric, retrospective review was performed of children implanted with a testicular prosthesis between 2008 and 2014. All implantations were performed through an inguinal incision with a standardized procedure. Children were divided into two groups depending on the interval after orchiectomy: (A) early implantation (delay between surgeries <1 year); and (B) delayed surgeries (delay ≥1 year). Statistical analysis was performed with Student and Fisher tests. Results: Twenty-six patients (A, 15; B, 11) had a total of 38 testicular prosthesis placements. Mean follow-up was 36.2 months. First surgery was performed at the mean age of 11.8 years (range 0-17.9) (A, 14.1; B, 8.1; P = 0.01) and testicular prosthesis implantation at the mean age of 14.7 years (range 9-18) (A, 14.3; B, 14.6) with a mean delay of 36.1 months (A, 1.3; B, 80.3). Indications were mainly spermatic cord torsion (27%), bilateral anorchia (27%), and testicular atrophy after cryptorchidism surgery (19.2%). Complications (10.5%) included two cases of extrusion, one infection and one migration. Patient 1 had a history of acute lymphoblastic leukemia with testicle relapse 2 years after induction therapy. High-dose chemotherapy, total body irradiation and bilateral orchiectomies were performed, and bilateral prostheses were implanted 12 years after the end of chemotherapy. Complications happened 85 days after surgery. Patient 2 was followed up for a proximal hypospadias. The tunica vaginalis flap, which was used during a redo urethroplasty, led to testicular atrophy. Thirteen years after the last penile surgery, a testicular prosthesis was placed through an inguinal incision, and extrusion occurred 203 days after surgery. Bacterial cultures of the prostheses were sterile and histological review showed no sign of granuloma or graft rejection.
The complication rate was significantly higher if the delay between the two surgeries exceeded 1 year (P = 0.01). Indications for orchiectomy, prior scrotal incision, and prosthesis size were not risk factors. Conclusions: Testicular prosthesis implantation was relatively safe in a pediatric cohort. The complication rate was significantly higher if the delay between the orchiectomy and the prosthetic placement exceeded 1 year. These results suggest that reducing the delay between orchiectomy and prosthesis implantation may lead to fewer complications. abstract_id: PUBMED:38228573 Trends in the use of testicular prostheses in Germany: a total population analysis from 2006-2021. Introduction: Testicular tumors are the most common malignancies in young adults and their incidence is growing. The implantation of a testicular prosthesis, for example, during orchiectomy is a standard procedure but its frequency in Germany is unknown. This study aims to analyze trends of testicular prosthesis implantation in recent years in Germany. Material And Methods: The nationwide German hospital billing database and the German hospital quality reports from 2006 to 2021 were studied. Results: A total of 12,753 surgical procedures with implantation of testicular prosthesis and 1,244 procedures with testicular prosthesis explantation were included. Testicular prosthesis implantation increased in total from 699 cases in 2006 to 870 cases in 2020 (+11.4 cases/year; p < 0.001). The share of implantation of testicular prosthesis due to testicular tumor decreased from 72.6% in 2006 to 67.5% in 2020 (p < 0.001). The share of implantation due to gender affirming surgery increased from 6.8% in 2006 to 23.3% in 2020 (p < 0.001). The share of implantation due to testicular atrophy decreased from 11.4% in 2006 to 3.4% in 2020 (p < 0.001). Simultaneous implantation of testicular prosthesis during orchiectomy for testicular cancer increased from 7.8% in 2006 to 11.4% in 2020 (p < 0.001). In 2006, 146 hospitals (85%) performed < 5 testicular prosthesis implantations, while 20 hospitals (12%) performed 5-15 implantation procedures and 6 hospitals (3%) performed > 15 testicular implantation surgeries. In 2021, 115 hospitals (72%) performed < 5 testicular prosthesis implantations, while 39 hospitals (25%) performed 5-15 implantation procedures and 5 hospitals (3%) performed > 15 testicular implantation surgeries. Conclusion: This study shows that implantation of testicular prostheses is steadily increasing. Explantation rates are low. Besides testicular cancer, transgender surgeries were the main driver for increasing case numbers in recent years. abstract_id: PUBMED:21207647 Modern types and usage of testicular prostheses. Agenesis or hypoplasia of the testicle creates not only physiological complications but also aesthetic and psychological problems. They can be effectively solved by implantation of testicular implants. Implantation of testicular prostheses was initiated in the 1940s, and for years there has been a search for the best material for their production. In the present paper we compared currently used testicular prostheses and controversies related to their usage. We discussed indications for the procedure and technical aspects. Particular attention was paid to complications of the procedure, methods to prevent them, and results of the treatment. abstract_id: PUBMED:2566689 Infant testicular prostheses.
Traditionally, the parents of young boys with an absent testis are advised to let the child decide whether he wants a testicular prosthesis after he enters puberty. Unfortunately, the decision by an adolescent to undergo prosthesis insertion often is made only after experiencing ridicule and embarrassment about the genital appearance. Accordingly, we have offered to implant an infant size prosthesis in patients with cryptorchidism when inguinal and abdominal exploration demonstrates that the testis is absent. A total of 41 boys less than 5 years old with an absent testis underwent simultaneous insertion of a testicular prosthesis. The only complication was a superficial wound infection that did not necessitate removal of the prosthesis. Two boys report mild scrotal discomfort. In long-term followup 91 per cent of the families rated the cosmetic appearance as "good" and 94 per cent were content with the decision regarding placement of a prosthesis. Although many assume that these boys will desire a larger prosthesis after puberty, this consideration seems to be irrelevant. Longer followup will be necessary to ascertain whether this approach alters the psychosexual development of these patients. We believe that young boys with an absent testis should be considered for simultaneous testicular prosthesis insertion at inguinal exploration or orchiectomy. abstract_id: PUBMED:10571632 Testicular prostheses: body image and sexual functioning. Objectives: To evaluate retrospectively the body image and sexual functioning in patients who have a testicular prosthesis (TP). Patients And Methods: The medical charts were evaluated for 30 adult patients who received 32 TPs between 1985 and 1997. A questionnaire was mailed to each patient, asking about body image, satisfaction with the implant and with sexual functioning. Results: Of the 30 patients, 22 (73%) replied; their mean (range) age was 30 (18-47) years. Most of the patients had a silicone gel-filled TP. No complications were reported in 25 (83%) patients and 80% reported no sexual problems; 20% felt uncomfortable in intimate contacts, 68% found their body image improved after having a TP, 58% were satisfied with their current sexual life, 90% had no erectile dysfunction and 45% reported premature ejaculation. Conclusions: Patients who have lost a testis for any reason should be informed about the availability of a TP. Prostheses were well accepted and no systemic disease was reported. Almost all patients reported an improvement in their body image. Sexual life and performance were apparently not compromised by having a TP. New surgical procedures are recommended to improve the cosmetic appearance of a TP in the scrotum. abstract_id: PUBMED:17535609 Testicular prostheses: development and modern usage. Introduction: Testicular prostheses produced from various materials have been in use since 1941. The absence of a testicle has been shown to be a psychologically traumatic experience for males of all ages. The indications for insertion of a prosthesis include absence or following orchidectomy from a number of causes such as malignancy, torsion and orchitis. The most common substance used around the world in the manufacture of these implants is silicone; however, in the US, this material is currently banned because of theoretical health risks. This has led to the development of saline-filled prostheses as an alternative. Patients And Methods: A Medline search was carried out on all articles on testicular prosthesis between 1966 and 2006. 
Conclusions: This review highlights the controversies regarding prosthetic materials, the complications of insertion and the potential benefits of this commonly performed procedure. abstract_id: PUBMED:3787899 Potential problems with testicular prostheses. In a survey of members of the Western Section of the American Urological Association who had testicular prostheses inserted within the preceding ten years, the responses indicated that the overall complication rate for this procedure is low. Generally, the scrotal cavity distends easily to a size adequate to accommodate the prosthesis. However, in cases of delayed secondary scrotal surgery, or when a previous inflammatory process has existed in the scrotum, the limited scrotal distensibility may lead to further complications. Answer: The question of whether earlier implantation of testicular prostheses in children is better has been explored in the literature. A study aimed at determining the risk factors of complications in cases of testicular prosthesis implantation in children found that the complication rate was significantly higher if the delay between the orchiectomy and the prosthetic placement exceeded 1 year. This suggests that reducing the delay between orchiectomy and prosthesis implantation may lead to fewer complications (PUBMED:27264050). In contrast, traditionally, parents of young boys with an absent testis have been advised to let the child decide whether he wants a testicular prosthesis after he enters puberty. However, this approach may lead to the child experiencing ridicule and embarrassment about the genital appearance. One study reported that implanting an infant size prosthesis in patients with cryptorchidism when the testis is absent resulted in a high satisfaction rate among families, with 91% rating the cosmetic appearance as "good" and 94% content with the decision regarding placement of a prosthesis. The study suggested that young boys with an absent testis should be considered for simultaneous testicular prosthesis insertion at inguinal exploration or orchiectomy (PUBMED:2566689). These findings indicate that earlier implantation of testicular prostheses in children may be associated with better outcomes in terms of reducing complications and improving satisfaction with cosmetic appearance. However, longer follow-up is necessary to ascertain whether this approach alters the psychosexual development of these patients.
Instruction: Ki-67 immunocytochemistry in liquid based cervical cytology: useful as an adjunctive tool? Abstracts: abstract_id: PUBMED:12944552 Ki-67 immunocytochemistry in liquid based cervical cytology: useful as an adjunctive tool? Aims: To test the ability of Ki-67 to detect cytological lesions in a screening setting and its use as a surrogate marker of human papillomavirus (HPV) infection. Methods: A study of liquid based cytology, HPV DNA testing by MY09/MY11 consensus polymerase chain reaction (PCR), type specific PCRs, and Ki-67 immunocytochemistry on a randomly selected series of 147 patients. Results: Comparison of the number of Ki-67 immunoreactive cells/1000 cells in the different cytological groups showed that the HSIL group yielded a significantly higher mean count than did the other groups. The number of Ki-67 immunoreactive cells/1000 cells was significantly higher in HPV-16 positive samples than in samples containing infections with other high risk types. Receiver operating characteristic curves indicated a test accuracy (area under curve) of 0.68, 0.72, and 0.86 for atypical squamous cells of undetermined significance (ASCUS), low grade squamous intraepithelial lesions (LSIL), and high grade squamous intraepithelial lesions (HSIL), respectively. Thresholds for 95% sensitivity were 0.07, 0.08, and 0.15 Ki-67 immunopositive cells/1000 cells for ASCUS, LSIL and HSIL, respectively. The threshold for 95% specificity was 1.9 Ki-67 immunopositive cells/1000 cells. Conclusions: Ki-67 immunocytochemistry can be applied to liquid based cytology. The accuracy and diagnostic indices of the test are good when compared with those of other techniques. As part of a panel of screening procedures, it could be used as an adjunct to liquid based cytology to identify HSIL, and as a surrogate marker of HPV-16 infection. abstract_id: PUBMED:29492226 Associations between apparent diffusion coefficient (ADC) and KI 67 in different tumors: a meta-analysis. Part 2: ADCmin. The purpose of this part of the meta-analysis was to summarize data regarding associations between minimum apparent diffusion coefficient (ADCmin) and KI 67 in different tumors. MEDLINE library was screened for associations between ADCmin and KI 67 in different tumors up to April 2017. Overall, 23 studies with 944 patients were identified. Associations between ADC and KI 67 were analyzed by Spearman's correlation coefficient. The pooled correlation coefficient between ADCmin and KI 67 for all included tumors was ρ = -0.47. In detail, the correlation coefficients for separate tumors were as follows: cerebral lymphoma: ρ = -0.61 (95% CI = [-0.82; -0.41]); cervical cancer: ρ = -0.56 (95% CI = [-0.68;-0.43]); pituitary adenoma: ρ = -0.55 (95% CI = [-1.31; 0.22]); glioma: ρ = -0.40 (95% CI = [-0.55; -0.24]); breast cancer: ρ = -0.37 (95% CI = [-0.74; -0.01]); meningioma, ρ = -0.15 (95% CI = [-0.38; 0.07]). abstract_id: PUBMED:34241929 Cytomorphological features of cervical small cell neuroendocrine carcinoma in SurePath™ liquid-based cervical samples. Small cell neuroendocrine carcinoma (SCNEC) of the cervix is a rare, highly aggressive tumour with poor prognosis and high propensity for distant metastases. The cytological features of SCNEC have rarely been described in cervical samples, and to the best of our knowledge, there are no previous reports using SurePath™ liquid-based cytology. In the present report we present the cytomorphological features of histopathologically confirmed cases of cervical SCNEC in SurePath preparations. 
On cytological examination, all three cases demonstrated variable numbers of tumour cells, ranging from a few dispersed cells and tiny micro-biopsies to large aggregates of small tumour cells with a high nucleus-to-cytoplasmic ratio, stippled chromatin, inconspicuous nucleoli, and scant cytoplasm. Immunocytochemistry for CD56 on the cervical preparation confirmed the diagnosis in one case. The presence of small tumour cells with characteristic stippled/salt-and-pepper type nuclear chromatin was the most consistent cytological feature in these cases. Knowledge of these characteristic cytological features can help in suggesting a diagnosis of SCNEC in cervical samples which can then be confirmed by immunocytochemistry. abstract_id: PUBMED:35928527 Role of immunocytochemistry in cervical cancer screening. Cervical cancer screening has conventionally been based on cytologic analysis. With advances in understanding the role of human papillomavirus, cotesting has been applied. However, most of the patients subjected to colposcopy did not benefit, except in cases with HSIL [high-grade squamous intraepithelial lesion] cytology. Because of this, a step that increases the sensitivity for detecting cancers and pre-cancers while also improving specificity, minimizing overdiagnosis, and preventing unindicated cervical biopsies is highly desirable. Such a triaging step in cases with abnormal screening results is expected to minimize invasive interventions because of its low false positivity. With the availability of methodologies leading to quantitatively and qualitatively enhanced cell-blocks from residual liquid based cytology specimens, immunostaining can be performed for multiple immunomarkers with potential objectivity to triage initial screening test results. This is enhanced further with inclusion of AV marker in the cell-blocks and application of the SCIP (subtractive coordinate immunoreactivity pattern) approach. The cell-blocks are also a resource for performing other ancillary studies including molecular pathology and proteomics/metabolomics as potential tests in the future. This review explores the application of residual liquid based cytology specimens for cell-blocking, with ancillary studies applied in an algorithmic manner as an adjunct to ASCCP management guidelines for improved patient care. abstract_id: PUBMED:29088879 Associations between apparent diffusion coefficient (ADC) and KI 67 in different tumors: a meta-analysis. Part 1: ADCmean. Diffusion weighted imaging (DWI) is a magnetic resonance imaging (MRI) technique based on the measurement of water diffusion in tissues. This diffusion can be quantified by the apparent diffusion coefficient (ADC). Some reports indicated that ADC can reflect tumor proliferation potential. The purpose of this meta-analysis was to provide evidence regarding associations between ADC and KI 67 in different tumors. Studies investigating the relationship between ADC and KI 67 in different tumors were identified. The MEDLINE library was screened for associations between ADC and KI 67 in different tumors up to April 2017. Overall, 42 studies with 2026 patients were identified. The following data were extracted from the literature: authors, year of publication, number of patients, tumor type, and correlation coefficients. Associations between ADC and KI 67 were analyzed by Spearman's correlation coefficient. The reported Pearson correlation coefficients in some studies were converted into Spearman correlation coefficients.
The pooled correlation coefficient between ADCmean and KI 67 for all included tumors was ρ = -0.44. Furthermore, the correlation coefficient for every tumor entity was calculated. The calculated correlation coefficients were as follows: ovarian cancer: ρ = -0.62, urothelial carcinomas: ρ = -0.56, cerebral lymphoma: ρ = -0.55, neuroendocrine tumors: ρ = -0.52, glioma: ρ = -0.51, lung cancer: ρ = -0.50, prostatic cancer: ρ = -0.43, rectal cancer: ρ = -0.42, pituitary adenoma: ρ = -0.44, meningioma: ρ = -0.43, hepatocellular carcinoma: ρ = -0.37, breast cancer: ρ = -0.22. abstract_id: PUBMED:31168266 Expression of E cadherin and Ki 67: Emerging Prognostic Markers in Triple-Negative Breast Cancer. Triple-negative breast cancer (TNBC) is an aggressive subgroup of breast cancer which lacks effective target therapy. Expression of biomarkers is one of the important deciding factors for treatment strategies. The aim of this study was to evaluate expression of E cadherin and Ki 67 in relation to clinicopathological features. This prospective observational study included 141 cases of TNBC. Immunohistochemical staining was employed to analyze two biomarkers: E cadherin and Ki 67 on formalin-fixed paraffin-embedded tumor samples obtained from patients defined as TNBC. The age of the patients ranged from 26 to 84 years. Positive lymph nodes were found in 82 (58.1%). The tumor was grade 3 in 105 (74.4%). The E cadherin receptor was positive in 83 (58.8%). The Ki Index was > 10% in 89 (63.12%). The Ki 67 expression was significantly associated with a high nuclear grade (p = 0.000). A significant association was noted between loss of E cadherin expression and positive lymph nodes (p = 0.0296). According to the results, TNBCs are frequently associated with the younger age groups, and the majority are poorly differentiated. The majority of these have high expression of Ki 67, which is significantly associated with higher nuclear grade. Loss of E cadherin was significantly associated with positive lymph nodes. Hence, evaluating the expression of E cadherin and Ki 67 routinely would be helpful for evaluating prognostic implications. abstract_id: PUBMED:28074107 Frequency of Chlamydia trachomatis infection in cervical intraepithelial lesions and the status of cytological p16/Ki-67 dual-staining. Background: Chlamydia trachomatis (Ct) is not a disease subject to mandatory reporting in Brazil, and the prevalence rate of this genital infection varies according to the region in which studies are conducted, as well as by the detection technique employed. Ct has been associated with persistence of human papillomavirus (HPV) infection and the facilitation of cervical carcinoma development. We evaluated Chlamydia trachomatis infection and its association with cytology, p16/Ki-67 dual-stained cytology and cervical intraepithelial lesion status in a screening cohort in Brazil. Methods: This was a cross-sectional study of 1481 cervical samples from asymptomatic women aged 18 to 64. Samples were collected for liquid-based cytology and Ct detection by polymerase chain reaction. p16/Ki-67 double staining was performed on samples with abnormal cytology. Statistical analysis was by chi-square and likelihood-ratio tests. Odds ratio (OR) and 95% confidence intervals (95% CI) were determined. Results: The frequency of Ct was 15.6% and its presence was not associated with detection of p16/Ki-67 [OR = 1.35 (0.5-3.4)]. There was also no association between abnormal cervical cytology and Ct-positivity [OR = 1.21 (0.46-3.2)].
Associations were observed between p16/Ki-67 and high-grade lesions detected by cytology and in biopsies [OR = 3.55 (1.50-8.42) and OR = 19.00 (0.6-7.2), respectively]. Conclusions: The asymptomatic women in our study had a high frequency of Ct infection but this was not associated with p16/Ki-67 detection in samples with abnormal cytology. The expression of p16/Ki-67 was highest in women with high-grade CIN (p = 0.003). abstract_id: PUBMED:16295759 Expression of proliferation marker Ki 67 in chronic lymphocytic leukemia. Chronic lymphocytic leukemia (CLL) is characterized by a clonal expansion of low proliferating mature B and T lymphocytes in the bone marrow and peripheral blood. The nuclear antigen Ki 67 is a protein detected in G1, S, G2 and M phases of the cell cycle, but not in G0, and thus is a widely accepted proliferation marker of human tumors. The aim of this study was to evaluate Ki 67 monoclonal antibody in CLL. We studied 48 patients diagnosed with CLL based on the presence of clinical signs, over 4 × 10⁹/L circulating lymphoid cells, and immunophenotyping by flow cytometry using CD19, CD5, CD22, CD23, FMC7 and immunoglobulin light chain monoclonal antibodies. Ki 67 immunostaining was determined by the avidin-biotin complex method. Our results allowed us to distinguish three groups of CLL: one group whose proliferation rate (percentage of Ki 67 positive cells) was equal to or less than 2%, represented by 14 cases (29.2%) with the morphological aspect of typical CLL; one group whose proliferation rate was between 3% and 9%, represented by 32 cases (66.6%) with the morphological aspect of polymorphic CLL or prolymphocytic leukemia; and a last group with a proliferation rate equal to or above 10%, corresponding to two cases (4.2%) of transformation of CLL to high grade non-Hodgkin lymphoma. There was no correlation between the Matutes immunological score and the proliferation rate, as this rate was 2.9% for scores < 3 and 2.7% for scores > 3. This study confirms the usefulness of Ki 67 in studying cellular proliferation and underlines that CLL with polymorphic cytology is more proliferative than typical CLL. These data reinforce the notion that CLL is a disease with heterogeneity in clinical behavior, immunophenotype, cytogenetic and molecular aspects, and thus prognosis. abstract_id: PUBMED:31554354 Modified Liquid-Based Cytology Technique for Immunocytochemistry in Effusion Specimen. Objective: Immunocytochemistry (ICC) of serous effusion is an important tool for the diagnosis of benign and malignant cells. Our aim was to develop a modified liquid-based cytological technique for ICC (i.e., a modified LBC). Methods: Serous effusions of 110 cases were collected for cytological examination: 50 were negative for malignancy although benign mesothelium was found, and 60 were confirmed metastatic adenocarcinoma according to the modified LBC preparation. The latter were stained for EMA, Ber-EP4, Calretinin, and p63, then interpreted by both a cytotechnologist and a pathologist. A comparative analysis of the diagnostic results was conducted. Results: The results of the metastatic adenocarcinoma were 100% (60/60) positive for EMA and 91.7% (55/60) positive for Ber-Ep4 but negative for calretinin and p63. Cases negative for malignancy were 100% (50/50) positive for calretinin but negative for carcinoma markers. The difference between ‘positive for metastatic adenocarcinoma’ and ‘negative for malignancy’ in ICC was statistically significant (p < 0.001).
Conclusion: The current study demonstrated that a marker panel comprising EMA, Ber-EP4, and calretinin can be used for differentiating between cases of metastatic adenocarcinoma and benign mesothelium. The serous effusion specimen collected by the modified LBC technique is an effective preparation method for ICC. abstract_id: PUBMED:21617785 The impact of liquid-based cytology in decreasing the incidence of cervical cancer. Major advances in screening have lowered the death rate from cervical cancer in the United States. One of the first major advances in cervical cancer screening was the Papanicolaou (Pap) test. The second major advance was liquid-based cytology (LBC). This review presents a wide range of data, discusses the strengths and weaknesses of the available information regarding Pap technologies, and reviews the meta-analyses, which have examined the differences in clinical performance. The review concludes with information on new and future developments to further decrease cervical cancer deaths. Answer: Yes, Ki-67 immunocytochemistry in liquid-based cervical cytology can be useful as an adjunctive tool. A study has shown that Ki-67 immunocytochemistry can be applied to liquid-based cytology and that the accuracy and diagnostic indices of the test are good when compared with those of other techniques. It could be used as part of a panel of screening procedures to identify high-grade squamous intraepithelial lesions (HSIL) and as a surrogate marker of HPV-16 infection (PUBMED:12944552). This suggests that Ki-67, which is a proliferation marker, can provide additional information regarding the presence of cytological lesions and may help in the detection of HPV-related abnormalities.
Instruction: Postinfarction ventricular septal defects: towards a new treatment algorithm? Abstracts: abstract_id: PUBMED:19231370 Postinfarction ventricular septal defects: towards a new treatment algorithm? Background: We reviewed our experience at the Montreal Heart Institute with early surgical and percutaneous closure of postinfarction ventricular septal defects (VSD). Methods: Between May 1995 and November 2007, 51 patients with postinfarction VSD were treated. Thirty-nine patients underwent operations, and 12 were treated with percutaneous closure of the VSD. Results: Half of the patients were in systemic shock, and 88% were supported with an intraaortic balloon pump before the procedure. Before the procedure, 14% of patients underwent primary percutaneous transluminal coronary angioplasty. The mean left ventricular ejection fraction was 0.44 +/- 0.11, and mean Qp/Qs was 2.3 +/- 1. Time from acute myocardial infarction to VSD diagnosis was 5.4 +/- 5.1 days, and the mean delay from VSD diagnosis to treatment was 4.0 +/- 4.0 days. A moderate to large residual VSD was present in 10% of patients after correction. Early overall mortality was 33%. Residual VSD, time from myocardial infarction to VSD diagnosis, and time from VSD diagnosis to treatment were the strongest predictor of mortality. Twelve patients were treated with a percutaneous occluder device, and the hospital or 30-day mortality in this group was 42%. Conclusion: Small or medium VSDs can be treated definitively with a ventricular septal occluder or initially to stabilize patients and allow myocardial fibrosis, thus facilitating delayed subsequent surgical correction. abstract_id: PUBMED:28329049 Surgical treatment of pulmonary atresia with major aortopulmonary collateral arteries in 83 consecutive patients. Objectives: This article reports the safety and efficacy of a morphology-based algorithm for the surgical management of pulmonary atresia and ventricular septal defect with major aortopulmonary collateral arteries. Methods: A total of 83 patients were operated from 2007 to 2014 using 3 surgical approaches: one-stage total repair, one-stage unifocalization with central shunt and delayed total repair and multistage unifocalization with subsequent total repair. Patients were divided into 2 groups depending on the surgical strategy used to choose the surgical approach. In Group I (2007-09), the surgeon-preferred procedure (71%-one-stage total repair) was used. In Group II (2010-14), a surgical algorithm based on pulmonary arterial tree morphology was applied. Results: Median follow-up was 5.04 years and 98% complete. The overall survival rate was 92.59%. Cumulative operative mortality decreased from 10.7% in Group I to 0% in Group II ( P = 0.036). Successful complete repair was performed in 23 (82%) of 28 patients in Group I (median follow-up, 7.99 years) and in 33 (60%) of 55 patients in Group II (median follow-up, 3.85 years). The difference in total survival was not significant between Groups I and II but was markedly influenced by the presence of the 22q11 deletion. The survival of 23 patients with confirmed 22q11 deletion was 73.91%; the survival of 28 patients with confirmed absence of the 22q11 deletion was 100%; and the survival of 32 patients not tested for the 22q11 deletion was 100% ( P = 0.0001). Conclusions: Total survival was significantly lower in patients with the 22q11 deletion. Surgical management based on preoperative pulmonary arterial anatomical features improves early surgical results. 
abstract_id: PUBMED:17021573 New developments in the transcatheter treatment of congenital heart disease. Purpose Of Review: This review summarizes recent developments in the field of interventional paediatric cardiology. This is a rapidly evolving field with many new techniques that have been optimized or introduced during the past few years. Recent Findings: Important new data have emerged on the device closure of atrial septal defects, ventricular septal defects and patent ductus arteriosus. The technology has been improved further and more results on complications and problems have become available. New technical developments in the treatment of stenosis of the pulmonary arteries and the aorta, such as the use of cutting balloons and (covered) stents, have resulted in new treatment options for congenital disorders that were not considered suitable for percutaneous treatment. Fascinating emerging technologies such as percutaneous aortic and pulmonary valve implantations have been introduced and the first results have been reported. Changes in imaging technology such as intracardiac echocardiography and magnetic resonance imaging affect interventional cardiology. These changes result in a new approach to congenital heart disease in which hybrid surgical and interventional procedures are used to treat complex disease. Summary: Interventional congenital cardiology is a rapidly growing field that offers new treatment options for patients with congenital heart disease. abstract_id: PUBMED:11716026 Evaluation of canine congenital heart disease using an echocardiographic algorithm. Evaluation of canine congenital heart disease presents a diagnostic challenge to many ultrasonographers. To assist clinicians attempting to examine these patients, an echocardiographic algorithm containing the six most common canine congenital heart diseases (i.e., patent ductus arteriosus, subaortic stenosis, pulmonic stenosis, ventricular septal defect, tricuspid dysplasia, and tetralogy of Fallot) is presented. The algorithm focuses on the underlying disease pathogenesis and the resultant changes in cardiac structure and function that can be readily identified during echocardiographic examination. Use of this algorithm provides a framework from which the ultrasonographer gains both experience and confidence in diagnosing congenital heart disease via echocardiography. This algorithm is supported by a number of still figures within the article as well as real-time echocardiographic images available for viewing at AAHA's website, www.aahanet.org. abstract_id: PUBMED:11732145 Congenital heart disease in Papua New Guinean children. The aim of the study was to analyse critically the programme for surgical management of children in Papua New Guinea (PNG) with congenital heart disease. A hospital record-based analysis was undertaken to document the pattern, management and short-term outcome of surgery in PNG children referred with a diagnosis of congenital heart disease to the Royal Alexandra Hospital for Children in Sydney, Australia. On admission, physical examination, chest radiogram, electrocardiogram, cross-sectional echocardiogram and, in most cases, cardiac catheterization were performed. Of the 170 children referred over the 17-year period, 1978-1994, 165 were confirmed to have congenital heart disease and were included in the study. Their ages ranged from 2 months to 16 years (median 5.5) and the male to female ratio was 1:1. One-sixth had delayed milestones and one-fifth long-term wasting. 
A large number were tachypnoeic, in heart failure or had pulmonary hypertension on admission. Ventricular septal defect, 34%, tetralogy of Fallot, 23%, and patent ductus arteriosus, 16.4%, were the predominant defects. Lesions such as aortic stenosis, coarctation of the aorta and transposition of the great arteries are under-represented. Altogether, 133 children (81%) had surgery; 75% were open- and 25% closed-heart operations. The complications were unremarkable and the mortality rate (6%) acceptable for the era. The programme was therefore very successful for a small proportion of children born in PNG with congenital heart disease. abstract_id: PUBMED:10771965 Non surgical treatment in congenital heart disease. Dilation of blood vessels transluminally was demonstrated by Dr Andreas Gruntzig in 1978. In 1982, Kan demonstrated that a congenitally stenosed valve can be dilated with the use of a cylindrical balloon and opened up new vistas in the non-surgical treatment of congenital heart lesions. Rapid progress has now been made in (1) understanding of the mechanism of success or failure of balloon dilation, (2) development of newer hardware like low profile balloons, high pressure balloons and extra support wires, which have contributed to an increase in the success rate of the procedures, and (3) understanding the unnatural history of various congenital heart lesions from the vast data bank of registries of balloon dilation. As a consequence balloon dilation has become the first procedure of choice in the treatment of valvar pulmonary stenosis and post surgical recurrent coarctation. Although an essentially palliative procedure for the treatment of valvar aortic stenosis and native coarctation beyond the neonatal period, non-randomised studies have shown results similar to those of surgical correction. Balloon dilation of pulmonary artery, pulmonary vein and venous baffle stenosis has shown poor overall results. Balloon dilation for sub-pulmonary stenosis like tetralogy of Fallot is controversial. The development of stents to keep the dilated vessel open has also found a place in the treatment of congenital heart lesions. Thus pulmonary artery stenosis, recurrent coarctation in adults, venous baffle obstruction and conduit stenosis, which respond poorly to balloon dilation alone, have shown promising early and mid term results with stents. The use of stents to keep the ductus open, in recurrent coarctation in children, and in right ventricular outflow tract obstruction is controversial. Development of devices to achieve closure of cardiac defects is relatively new. Patent arterial duct was the first defect to be closed with the help of devices and has shown good results on long term follow up. Recently stainless steel coils have been used to close small ducti (up to 3 mm) with very promising medium term results. Their use has also been extended to larger ducti (up to 7 mm). Devices used for closing atrial septal defects and ventricular septal defects are undergoing rapid development. In atrial septal defects their use is restricted to small defects in the fossa ovalis area and in ventricular defects, to only those which are in the muscular septum. Most of the devices in use today for atrial septal closure are relatively new and await long term results. The results achieved by non surgical treatment for many congenital heart lesions are essentially palliative and may require repeat interventions at some point in the unnatural history, which increases the cost of treatment substantially.
The reuse of balloons, guide wires in developing countries helps reduce the overall cost of the procedure. abstract_id: PUBMED:15929625 Eisenmenger syndrome in the adult--experience with new drugs for the treatment of pulmonary hypertension. Background: The last few years have seen significant progress in the treatment of advanced pulmonary arterial hypertension (PAH). The efficacy of new drugs has been proved mainly in idiopathic PAH or PAH associated with connective tissue diseases. As the pathophysiologic patterns are similar, it is reasonable to use these drugs also in Eisenmenger syndrome or in other congenital heart defects with PAH related to initial high pulmonary flow. Objectives: To evaluate our early experience with new drugs for PAH in patients with Eisenmenger syndrome. Methods: A retrospective study of five patients, aged 28 to 51 years (39.6 +/- 9.94), four female, with Eisenmenger syndrome due to atrial septal defect (n = 2), patent ductus arteriosus (n = 2) or ventricular septal defect (n = 1), who began therapy with iloprost (n = 4, later associated with sildenafil in one patient) and bosentan (n = 1), between April 2001 and May 2003. The existence of severe and fixed PAH, with predominant right-to-left shunt, was confirmed by hemodynamic study in all cases. The patients were evaluated by clinical examination, Doppler echocardiography and the six-minute walk test before treatment and throughout follow-up (9 to 34 months, 19.8 +/- 9.04). Before treatment two patients were in NYHA class III and three in class III with periods in class IV. By Doppler echocardiography the right ventricle-right atrium (RV-RA) gradient was 74 to 111 mmHg (90.6 +/- 15.73) and the Tei index was 0.53 to 2.05 (1.13 +/- 0.62). In the six-minute walk test the distance was 214 to 500 meters (296.8 +/- 115.27). Results: All patients improved clinically, though three are still in class III. One patient is in class II and one patient died. At the latest evaluation the RV-RA gradient was 60 to 112 mmHg (84.8 +/- 19.11) and the Tei index was 0.5 to 1.33 (0.85 +/- 80.32). In the six-minute walk test a net increase in the distance covered was evident: 376 to 520 meters (420 +/- 57.89). The treatment was well tolerated in all cases, without serious adverse effects. Conclusions: Though the number of patients was small, our initial experience with the new specific drugs for PAH in Eisenmenger syndrome showed promising results, with clinical and functional improvement and without adverse effects. abstract_id: PUBMED:29476814 The Right Axillary Incision: A Potential New Standard of Care for Selected Congenital Heart Surgery. Although the median sternotomy has been the traditional approach for congenital heart surgery, young patients and their families often find the midline scar to be cosmetically unappealing. At our center, a right transverse axillary incision has become the standard approach for many congenital cardiac lesions because of its safety, versatility, and unsurpassed aesthetic result. We present our experience with the axillary approach for a diverse array of congenital defects. A retrospective review of patients receiving a right transverse axillary incision for congenital cardiac surgery between 2005 and 2016 was conducted. The right transverse axillary incision was performed in 358 patients for 24 unique procedures. Median age was 5 years (range 1 month-60 years) and 225 patients (63%) were female. Median weight was 17 kg (range 4-124 kg), with 19 patients (5%) weighing less than 6 kg. 
The most common lesions were atrial septal defects (n = 244, 68%) and ventricular septal defects (n = 72, 20%). As experience with this approach increased, other repairs included subvalvular aortic membrane resection (n = 10, 3%), tetralogy of Fallot repair (n = 7, 2%), ventricular assist device placement (n = 3, 1%), and mitral valve repair (n = 2, 1%). There were no intraoperative deaths or conversions to sternotomy. In-hospital complications included mortality (n = 1, 0.3%), reoperations for bleeding (n = 5, 1%), pneumothorax or pleural effusion (n = 6, 2%), and permanent pacemaker (n = 4, 1%). The right axillary incision allows a safe and effective repair for a broad range of congenital heart defects and is a potential new standard of care for many patients. abstract_id: PUBMED:35694277 Adoption of Compound Echocardiography under Artificial Intelligence Algorithm in Fetal Congenial Heart Disease Screening during Gestation. This research was aimed at exploring the diagnostic and screening effect of composite echocardiography based on the artificial intelligence (AI) segmentation algorithm on fetal congenital heart disease (CHD) during pregnancy, so as to reduce the birth rate of newborns with CHD. A total of 204 fetuses with abnormal heart conditions were divided into group II, group C (optimized with the AI algorithm), and group W (not optimized with the AI algorithm). In addition, 9,453 fetuses with normal heart conditions were included in group I. The abnormal distribution of fetal heart and the difference of cardiac Z score between group II and group I were analyzed, and the diagnostic value of group C and group W for CHD was compared. The results showed that the segmentation details of the proposed algorithm were better than those of the convolutional neural network (CNN), and the Dice coefficient, precision, and recall values were higher than those of the CNN. In fetal CHD, the incidence of abnormal ultrasonic manifestations was ventricular septal defect (98/48.04%), abnormal right subclavian artery (29/14.22%), and persistent left superior vena cava (25/12.25%). The diagnostic sensitivity (75.0% vs. 51.5%), specificity (99.6% vs. 99.2%), accuracy (99.0% vs. 98.2%), negative predictive value (88.5% vs. 78.5%), and positive predictive value (99% vs. 57.7%) of echocardiography segmentation in group C were significantly higher than those in group W. To sum up, echocardiography segmented by the AI algorithm could obviously improve the diagnostic efficiency of fetal CHD during gestation. Cardiac ultrasound parameters of children with CHD changed greatly. abstract_id: PUBMED:32787993 Hybrid perventricular muscular ventricular septal defect closure using the new multi-functional occluder. The hybrid perventricular approach for the closure of trabecular ventricular septal defects is an attractive treatment modality for small children. Worldwide experience has shown that procedure success is influenced by the defect anatomical accessibility, operators' expertise, and device technical features. In May 2018, a new promising device, the KONAR-Multi-functional™ ventricular septal defect occluder (Lifetech, Shenzhen, China), obtained CE-marking for septal defect transcatheter closure after the first-in-man implantation in 2013. Herein, this is the first report of successful perventricular closure of ventricular septal defect using this new device in a child with significant co-morbidities. 
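For reference, the segmentation and diagnostic performance figures quoted in PUBMED:35694277 (Dice coefficient, precision, recall, sensitivity, specificity, accuracy, and the predictive values) are not defined within the abstract itself; they follow the standard formulas in terms of true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN):

    Dice = 2*TP / (2*TP + FP + FN)
    Precision (positive predictive value) = TP / (TP + FP)
    Recall (sensitivity) = TP / (TP + FN)
    Specificity = TN / (TN + FP)
    Negative predictive value = TN / (TN + FN)
    Accuracy = (TP + TN) / (TP + FP + TN + FN)

In the segmentation setting these counts are taken per pixel or voxel against the reference annotation; in the screening setting they are taken per fetus against the confirmed diagnosis.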
Answer: The treatment of postinfarction ventricular septal defects (VSD) has evolved with the introduction of both surgical and percutaneous closure techniques. A study at the Montreal Heart Institute reviewed the experience with early surgical and percutaneous closure of postinfarction VSDs. Between May 1995 and November 2007, 51 patients with postinfarction VSD were treated, with 39 undergoing operations and 12 treated with percutaneous closure. The study found that small or medium VSDs could be treated definitively with a ventricular septal occluder or initially to stabilize patients and allow myocardial fibrosis, thus facilitating delayed subsequent surgical correction. The strongest predictors of mortality were residual VSD, time from myocardial infarction to VSD diagnosis, and time from VSD diagnosis to treatment. The hospital or 30-day mortality for those treated with a percutaneous occluder device was 42% (PUBMED:19231370). This suggests that a new treatment algorithm for postinfarction VSD could include the use of percutaneous occluder devices for small to medium defects, possibly as a bridge to surgery or as definitive treatment. The timing of the intervention and the presence of residual defects are critical factors influencing outcomes. The development of such an algorithm would need to consider individual patient factors, including the size of the VSD, hemodynamic stability, and the presence of other comorbidities. As the field of interventional cardiology continues to evolve with new techniques and devices, treatment algorithms will likely continue to be refined to improve patient outcomes (PUBMED:17021573).
Instruction: Female military medical school graduates entering surgical internships: are we keeping up with national trends? Abstracts: abstract_id: PUBMED:25129429 Female military medical school graduates entering surgical internships: are we keeping up with national trends? Background: Ratios of women graduating from the only US military medical school and entering surgical internships were reviewed and compared with national trends. Methods: Data were obtained from the Uniformed Services University of the Health Sciences graduation announcements from 2002 to 2012. Results: There were 1,771 graduates from 2002 to 2012, with 508 female (29%) and 1,263 male (71%) graduates. Female graduates increased over time (21% to 39%; P = .014). Female general surgery interns increased from 3.9% to 39% (P = .025). Female overall surgical subspecialty interns increased from 20% in 2002 to 36% in 2012 (P = .046). Women were represented well in obstetrics (57%), urology (44%), and otolaryngology (31%), but not in neurosurgery, orthopedics, and ophthalmology (0% to 20%). Conclusions: The sex disparity between military and civilian medical students occurs before entry. Once in medical school, women are just as likely to enter general surgery or surgical subspecialty as their male counterparts. Increased ratio of women in the class is unlikely to lead to a shortfall except in specific subspecialties. abstract_id: PUBMED:12953000 Trends in career choice by US medical school graduates. Context: Trends in career choice among specialties have varied greatly. Most notable is the recent decrease in the percentage of US medical student graduates choosing a primary care career, which has important implications for the US physician workforce. Objective: To review temporal trends in career choice by graduates of allopathic US medical schools, focusing on US medical doctors entering residencies since 1987. Data Sources: Three databases, the Association of American Medical Colleges Graduation Questionnaire (AAMC GQ), the National Resident Matching Program, and the national Graduate Medical Education census, were used to review temporal trends in the number of US medical doctors entering residencies in primary care, general or subspecialty surgical, and non-primary care and nonsurgical specialties from 1987 to 2002. Data Synthesis: In 1987, 49.2% of all medical school graduates matched to one of the generalist residencies (internal medicine, pediatrics, or family medicine). The percentage of students matching to primary care specialties declined in the early 1990s, peaked at 53.2% in 1998, and declined to 44.2% in 2002. Concurrent with the latter decline, AAMC GQ data showed a decrease in medical student interest in primary care careers (35.6% in 1999 to 21.5% in 2002). The total percentage of US medical doctors matching to general or subspecialty surgical residencies remained stable at 11% to 12% from 1987 to 2002. During this same period, emergency medicine and plastic surgery increased as a match choice, while anesthesiology, pathology, and psychiatry were more variable over time. Conclusions: Distribution of medical students' career choices among specialties varied considerably from 1987 to 2002. The debate will continue regarding the appropriate specialty mix within the physician workforce. abstract_id: PUBMED:37201483 Relationship Between Gender and Academic and Military Achievement Among USU Medical School Graduates. 
Introduction: Despite increases in the number of female matriculants in medical school, civilian data demonstrate that women still struggle to reach parity in attainment of leadership positions. In military medicine, we have seen a major increase in the number of women graduating from the USU. Yet, we still know little about the representation of female military physicians in leadership positions. The aim of this study is to examine the relationship between gender and academic and military achievement among USU School of Medicine graduates. Methods: Utilizing the USU alumni survey sent to graduates from the classes of 1980 to 2017, items of interest, such as highest military rank, leadership positions held, academic rank, and time in service, were used to evaluate the relationship between gender and academic and military achievement. Contingency table statistical analysis was conducted to compare the gender distribution on the survey items of interest. Results: Pairwise comparison demonstrated significant differences between genders in the O-4 (P = .003) and O-6 (P = .0002) groups, with females having a higher-than-expected number of O-4 officers and males having a higher-than-expected number of O-6 officers. These differences persisted in a subsample analysis that excluded those who separated from active duty prior to 20 years of service. There was a significant association between gender and holding the position of commanding officer (χ2(1) = 6.61, P < .05), with fewer females than expected. In addition, there was a significant association between gender and the highest academic rank achieved (χ2(3) = 9.48, P < 0.05), with a lower-than-expected number of females reaching the level of full professor, in contrast to males, who exceeded the expected number. Conclusions: This study suggests that female graduates of USU School of Medicine have not achieved promotion to the highest levels of rank, military, or academic leadership at the projected rate. Efforts to explore what barriers may impact military medicine's desire to have more equal representation of women in higher ranks and positions should be undertaken, with specific attention to what drives retention versus separation of medical officers and whether systematic changes are needed to help promote equity for women in military medicine. abstract_id: PUBMED:38413964 Attractiveness and gender dynamics in surgical specialties: a comparative analysis of French medical graduates (2017-2022). Background: French medical graduates undertake a national examination at the end of their studies with a subsequent national ranking. Specialty is then chosen by each candidate according to their ranking. This study aims to describe the attractiveness of surgical specialties and the evolution of the male-female distribution among French medical graduates (FMG) from 2017 to 2022. Methods: Our database included the candidates' ranking, sex and choice of specialty from 2017 to 2022. It included all French medical graduates from 2017 to 2022 and all French medical schools. A linear regression was performed to predict future trends. Dependent variables were mean rankings and the percentage of women. The independent variable was year of application. A Pearson correlation was performed to examine any relationship with mean workweek. Results: A total of 5270 residents chose a surgical programme between 2017 and 2022. The number of residents who were assigned their desired surgical programme held stable at 878 surgical residents per year.
Plastic and reconstructive surgery remained the most frequently chosen surgical programme. Thoracic and cardiovascular surgery was the least frequently chosen surgical programme between 2017 and 2022. The mean ranking for a candidate choosing a surgical programme rose significantly by 9% from 2017 to 2022 (p < 0.01). Neurosurgery exhibited the greatest fall as a surgical specialty as its rankings decreased by 163.6% (p < 0.01). Maxillo-facial surgery was the only specialty with a statistically significant increase in its rankings, by 35.9% (p < 0.05). The overall proportion of women was 51.1%. Obstetrics-and-gynecology was the highest represented specialty among female candidates, with a mean of 83.9% of women. Orthopedic surgery was the lowest represented, being composed of a mean of 28.6% of women. The number of female surgical residents increased significantly over the six-year period, by 7.6% (p < 0.01). Conclusions: More and more medical school graduates decide not to choose surgery for their residency programme. Some specialties continue to be attractive while many are losing their appeal. While there does appear to be progress towards gender equity, further investigation is necessary to assess its actual implementation. abstract_id: PUBMED:18619095 Military medical graduates' perceptions of organizational culture in Turkish military medical school. Organizational culture is the term used to describe the shared beliefs, perceptions, and expectations of individuals in organizations. In the healthcare environment, organizational culture has been associated with several elements of organizational experience that contribute to quality, such as nursing care, job satisfaction, and patient safety. A range of tools have been designed to measure organizational culture and applied in industrial, educational, and health care settings. This study was conducted to investigate the perceptions of military medical graduates on organizational culture at Gülhane Military Medical School (GMMS). A measurement of organizational culture, which was developed by researchers from Akdeniz University, was applied to all military medical graduates in 2004. This was a Likert-type scale that included 31 items. Designers of the measurement had grouped all these items into five main dimensions in their previous study. The items were scored on a five-point scale anchored by 1: strongly agree and 5: strongly disagree. Study participants included all military physicians who were in the clerkship training period at Gulhane Military Medical Academy in 2004. A total of 106 graduates agreed to respond to the questionnaire. The mean age of participants was 25.2 +/- 1.1. At the time of the study only 8 (7.5%) graduates were married. The study results showed that the measurement tool with 31 items had sufficient reliability, with a Cronbach's alpha value of 0.91. Factor analysis resulted in a final measurement tool of 24 items with five factors. Total scores and the scores of the five subdimensions were estimated and compared between groups based on living city and marital status. The study showed that the dimension of symbol received positive perceptions while the dimension of organizational structure and efficiency received the most negative perceptions. GMMS has a unique organizational culture with its weak and strong aspects. Conducting this kind of study contributes to improving organizational culture in order to increase educational and research capability.
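The reliability figure reported in PUBMED:18619095 (a Cronbach's alpha of 0.91 for the 31-item Likert scale) can be computed from a respondents-by-items score matrix with the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). The following sketch is purely illustrative of that calculation; it is not code from the study, and the randomly generated demo matrix only stands in for the real survey responses.

    import numpy as np

    def cronbach_alpha(scores: np.ndarray) -> float:
        """Cronbach's alpha for a (respondents x items) matrix of Likert scores."""
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]                          # number of items (31 in the study)
        item_vars = scores.var(axis=0, ddof=1)       # per-item variance
        total_var = scores.sum(axis=1).var(ddof=1)   # variance of each respondent's total score
        return k / (k - 1) * (1 - item_vars.sum() / total_var)

    # Demo call on fabricated 5-point responses (106 respondents x 31 items);
    # random data will not reproduce the study's 0.91, it only shows the usage.
    rng = np.random.default_rng(0)
    print(round(cronbach_alpha(rng.integers(1, 6, size=(106, 31))), 2))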
abstract_id: PUBMED:34938080 Stakeholders' Views and Confidence Towards Indonesian Medical Doctor National Competency Examination: A Qualitative Study. Introduction: Indonesia is a huge country consisting of 33 provinces with different characteristics. There are 83 medical schools across Indonesia with different accreditation statuses. The Indonesia Medical Doctor National Competency Examination (IMDNCE) has been established to control the quality of medical school graduates. The implementation of IMDNCE needed to be evaluated to determine its impact. To date, there has not been any research in Indonesia that explores the stakeholders' perceptions toward IMDNCE. This study aimed to explore how the stakeholders in Indonesia perceived the impact of IMDNCE towards performances of medical school graduates in clinical practice. Methods And Study Participants: A qualitative study with phenomenological approach was conducted to investigate perceptions of stakeholders including representatives from consumer organizations, the National Health Coverage, the Ministry of Health, the Indonesian Medical Association, employers (hospital and health center directors), clinical supervisors as well as patients across Indonesia. Data were obtained through focus group discussions (FGDs) and interviews. The study used thematic analysis methods to obtain the results. Results: A total of 90 study participants participated in the study including 10 representatives of consumer watchdog organizations, the National Health Coverage, the Ministry of Health, the Indonesian Medical Association, 31 employers, 32 professionals, and 17 patients. The study found three general themes which represent the perceptions of the stakeholders towards performances of medical school graduates in clinical practice: IMDNCE as an effort to standardize doctor graduates in Indonesia, the results of IMDNCE as a mean to reflect the quality of medical education in Indonesia, and IMDNCE as an effort to improve health services in Indonesia through the quality of graduates. Conclusion: In general, the stakeholders perceived that the IMDNCE was able to standardize medical school graduates from various medical schools across Indonesia. However, the IMDNCE needs to be further developed to maximize its potential in improving the competences of Indonesian medical students. abstract_id: PUBMED:24136695 Entry of US medical school graduates into family medicine residencies: 2012--2013. Background: Analyzing the US medical school origin of family medicine residents highlights schools, states, or regions that have higher entrance rates into family medicine. Methods: The American Academy of Family Physicians (AAFP) 2013 Residency Census has a 100% response rate and lists information for family medicine residents who entered training July 2012. MD graduates are verified through medical school registrars or the American Medical Association's Physicians Masterfile data. The American Association of Colleges of Osteopathic Medicine provides data on DO graduates. Three-year rolling averages of graduates entering family medicine are calculated for Liaison Committee of Medical Education (LCME)-accredited medical schools. Results: In July 2012, 3,523 first-year residents entered Accreditation Council for Graduation Medical Education (ACGME)-accredited family medicine residencies. Medical students from LCME-accredited schools account for less than half of the family medicine residents (46%). 
Public MD-granting medical schools graduate almost threefold more students into family medicine residencies than do private schools (1,101 versus 380). The Mountain, West North Central, and Pacific regions of the United States have the highest percentage of MD graduates (13.5%, 12.3%, and 11.4%, respectively) entering family medicine. Forty-five percent of MD medical students enter a family medicine residency in the state in which they attended medical school. Conclusions: LCME-accredited medical schools with lower percentages of graduates entering family medicine should examine the economic, environmental, and academic factors that may be causing low numbers of their students graduating and entering family medicine residencies. abstract_id: PUBMED:8884249 Entry of US medical school graduates into family practice residencies: 1995-1996 and 3-year summary. This is the 15th report prepared by the American Academy of Family Physicians (AAFP) on the percentage of each US medical school's graduates entering family practice residency programs. Approximately 14.6% of the 16,124 graduates of US medical schools between July 1994 and June 1995 were first-year family practice residents in October 1995, compared with 13.4% in 1994 and 12.3% in 1993. This is the highest percentage since 1980-1981 (12.8%), when this series of studies began. Medical school graduates from publicly funded medical schools were almost twice as likely to be first-year family practice residents in October 1995 than were residents from privately funded schools, 18% compared with 9.7%. The West North Central region reported the highest percentage of medical school graduates who were first-year residents in family practice programs in October 1995 at 24.8%; the Middle Atlantic and New England regions continued with the lowest percentages at 7.6% and 9.3%, respectively. Approximately one in two medical school graduates (46.7%) entering a family practice residency program as first-year residents in October 1995 entered a program in the same state where they graduated from medical school. The percentages for each medical school have varied substantially from year to year since the AAFP began reporting this information. The average percentage for each medical school for the last 3 years is reported. In addition, the number and percentage of graduates from colleges of osteopathic medicine who entered Accreditation Council for Graduate Medical Education-accredited family practice residency programs are reported. abstract_id: PUBMED:35257164 The Impact of Medical School on Military Physicians' Readiness for their First Deployment. Introduction: Military physicians receive their undergraduate medical training primarily by either attending civilian medical school, through the Armed Forces Health Professions Scholarship Program (HPSP), or by attending the Uniformed Services University (USU), a federal medical school with a military unique curriculum. The purpose of this study was to explore the perceptions of graduates from these two educational pathways regarding the impact of their medical school training on their readiness for their first deployment. Materials And Methods: We conducted 18 semi-structured interviews with military physicians who attended civilian medical schools and USU and who had deployed within the past 2 years. The participants also completed emailed follow-up questions. The interviews were recorded and transcribed. 
The research team coded the interviews, extracted meaning units, and identified themes that emerged from the data. Results: The following themes emerged from the data: (1) medical readiness; (2) operational readiness; (3) command interactions; and (4) role as a military physician. All of the participants perceived themselves to be prepared medically. However, the USU graduates were more confident in their ability to navigate the operational aspects of deployment. In addition, they described their ability to naturally build positive working relationships with their commanding officers and navigate their combined roles as both a physician and military officer. Conclusions: These perceptions of both the civilian medical graduates and USU graduates provide important insight to the military medical education community regarding the ways in which civilian medical schools and USU prepare students for their first deployment. This insight will help to identify any training gaps that should be filled in order to ensure that military physicians are ready for deployment. abstract_id: PUBMED:26473564 Entry of US Medical School Graduates Into Family Medicine Residencies: 2014-2015. Background And Objectives: This is the 34th national study conducted by the American Academy of Family Physicians (AAFP) that reports retrospectively the percentage of graduates from US MD-granting and DO-granting medical schools who entered Accreditation Council for Graduate Medical Education (ACGME)-accredited family medicine residency programs as first-year residents in 2014. Approximately 8.5% of the 18,241 students graduating from US MD-granting medical schools between July 2013 and June 2014 entered a family medicine residency. Of the 1,458 graduates of the US MD-granting medical schools who entered a family medicine residency in 2014, 80% graduated from 69 of the 131 schools. Eleven schools lacking departments or divisions of family medicine produced only a total of 26 students entering family medicine. In aggregate, medical schools west of the Mississippi River represent less than a third of all US MD-granting schools but have an aggregate rate of students selecting family medicine that is two-thirds higher than schools to the east of the Mississippi. A rank order list of US MD-granting medical schools was created based on the last 3 years' average percentage of graduates who became family medicine residents, using the 2014 and prior AAFP census data. US MD schools continue to fail to produce a primary care workforce, a key measure of social responsibility as measured by their production of graduates entering into family medicine. DO-granting and international medical school graduates filled the majority of ACGME-accredited family medicine first-year resident positions in 2014. Answer: The data from the abstracts suggest that female military medical school graduates entering surgical internships have seen an increase over time. Specifically, the number of female graduates from the Uniformed Services University of the Health Sciences (USU) entering general surgery internships increased from 3.9% to 39% over a decade (PUBMED:25129429). This trend is consistent with the national trend of increasing female representation in surgical fields. However, despite this increase, there are still disparities in certain surgical subspecialties, with women being underrepresented in neurosurgery, orthopedics, and ophthalmology, ranging from 0% to 20% representation. 
The overall percentage of US medical doctors matching to general or subspecialty surgical residencies remained stable at 11% to 12% from 1987 to 2002 (PUBMED:12953000), indicating that the proportion of medical graduates entering surgical fields has not seen significant changes nationally. This stability in surgical residency matches may provide a backdrop against which the increases in female military medical school graduates entering surgical internships can be compared. Furthermore, the study on the relationship between gender and academic and military achievement among USU medical school graduates found that female graduates have not achieved promotion to the highest levels of rank, military, or academic leadership at the projected rate (PUBMED:37201483). This suggests that while there may be an increase in female graduates entering surgical internships, there are still barriers to their advancement in military medicine. In summary, while there has been an increase in female military medical school graduates entering surgical internships, mirroring national trends of increased female participation in surgery, disparities remain in certain subspecialties and leadership positions. The data indicate that efforts are needed to promote equity for women in military medicine and address the barriers that may impact their retention and advancement (PUBMED:25129429; PUBMED:37201483).
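The gender-by-outcome associations cited from PUBMED:37201483 in the answer above (for example, χ2(1) = 6.61, P < .05 for holding a commanding officer position) come from contingency table analysis of survey counts. A minimal sketch of that type of test is shown below; the counts are hypothetical placeholders chosen only to illustrate the call, not data from the study.

    from scipy.stats import chi2_contingency

    # Hypothetical 2x2 table: rows = gender (female, male),
    # columns = held a commanding officer position (yes, no).
    observed = [[12, 188],
                [58, 342]]
    chi2, p, dof, expected = chi2_contingency(observed)
    print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")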
Instruction: Are outcomes of surgical versus transcatheter balloon valvotomy equivalent in neonatal critical aortic stenosis? Abstracts: abstract_id: PUBMED:11568048 Are outcomes of surgical versus transcatheter balloon valvotomy equivalent in neonatal critical aortic stenosis? Background: For neonates with critical aortic valve stenosis who are selected for biventricular repair, valvotomy can be achieved surgically (SAV) or by transcatheter balloon dilation (BAV). Methods And Results: Data regarding 110 neonates with critical aortic valve stenosis were evaluated in a study by the Congenital Heart Surgeons Society from 1994 to 1999. Reduced left ventricular function was present in 46% of neonates. The initial procedure was SAV in 28 patients and BAV in 82 patients. Mean percent reduction in systolic gradient was significantly greater with BAV (65+/-17%) than SAV (41+/-32%; P<0.001). Higher residual median gradients were present in the SAV versus BAV group (36 mm Hg [range, 10 to 85 mm Hg] versus 20 mm Hg [0 to 85 mm Hg], P<0.001). Important aortic regurgitation was more often present after BAV (18%) than SAV (3%; P=0.07). Time-related survival after valvotomy was 82% at 1 month and 72% at 5 years, with no significant difference for SAV versus BAV, even after adjustment for differences in patient and disease characteristics. Independent risk factors for mortality were mechanical ventilation before valvotomy, smaller aortic valve annulus (z score), smaller aortic diameter at the sinotubular junction (z score), and a smaller subaortic region. A second procedure was performed in 46 survivors. Estimates for freedom from reintervention were 91% at 1 month and 48% at 5 years after the initial valvotomy and did not differ significantly between groups. Conclusions: SAV and BAV for neonatal critical aortic stenosis have similar outcomes. There is a greater likelihood of important aortic regurgitation with BAV and of residual stenosis with SAV. abstract_id: PUBMED:32090726 Short- and intermediate-term results of balloon aortic valvuloplasty and surgical aortic valvotomy in neonates. Background: Balloon aortic valvuloplasty and open surgical valvotomy are procedures to treat neonatal aortic stenosis, and there is controversy as to which method has superior outcomes. Methods: We reviewed the records of patients at our institution since 2000 who had a balloon aortic valvuloplasty or surgical valvotomy via an open commissurotomy prior to 2 months of age. Results: Forty patients had balloon aortic valvuloplasty and 15 patients had surgical valvotomy via an open commissurotomy. There was no difference in post-procedure mean gradient by transthoracic echocardiogram, which was 25.8 mmHg for balloon aortic valvuloplasty and 26.2 mmHg for surgical valvotomy, p = 0.87. Post-procedure, 15% of balloon aortic valvuloplasty patients had moderate aortic insufficiency and 2.5% of patients had severe aortic insufficiency, while no surgical valvotomy patients had moderate or severe aortic insufficiency. The average number of post-procedure hospital days was 14.2 for balloon aortic valvuloplasty and 19.8 for surgical valvotomy (p = 0.52). Freedom from re-intervention was 69% for balloon aortic valvuloplasty and 67% for surgical valvotomy at 1 year, and 43% for balloon aortic valvuloplasty and 67% for surgical valvotomy at 5 years (p = 0.60). Conclusions: Balloon aortic valvuloplasty and surgical valvotomy provide similar short-term reduction in valve gradient.
Balloon aortic valvuloplasty has a slightly shorter but not statistically significant hospital stay. Freedom from re-intervention is similar at 1 year. At 5 years, it is slightly higher in surgical valvotomy, though not statistically different. Balloon aortic valvuloplasty had a higher incidence of significant aortic insufficiency. Long-term comparisons cannot be made given the lack of long-term follow-up with surgical valvotomy. abstract_id: PUBMED:30997128 Surgical versus balloon valvotomy in neonates and infants: results from the UK National Audit. Objective: There are conflicting data on choosing balloon aortic valvoplasty (BAV) or surgical aortic valvotomy (SAV) in neonates and infants requiring intervention for aortic valve stenosis. We aim to report the outcome of both techniques based on results from the UK national registry. Methods: This is a retrospective study, including all patients under 1 year undergoing BAV/SAV between 2000 and 2012. A modulated renewal approach was used to examine the effect of reinterventions on outcomes. Results: A total of 647 patients (488 BAV, 159 SAV, 292 neonates) undergoing 888 aortic valve procedures were included, with a median age of 40 days. Unadjusted survival at 10 years was 90.6% after initial BAV and 84.9% after initial SAV. Unadjusted aortic valve replacement (AVR) rate at 10 years was 78% after initial BAV and 80.3% after initial SAV. Initial BAV and SAV had comparable outcomes at 10 years when adjusted by covariates (p>0.4). AVR rates were higher after BAV and SAV reinterventions compared with initial valvoplasty without reinterventions (reference BAV, HR=3 and 3.8, respectively, p<0.001). Neonates accounted for 29/35 of early deaths after the initial procedure, without significant differences between BAV and SAV, with all late outcomes being worse compared with infants (p<0.005). Conclusions: In a group of consecutive neonates and infants, BAV and SAV had comparable survival and freedom from reintervention as initial procedures and when performed as reinterventions. These findings support a treatment choice based on patient characteristics and centre expertise, and further research into the best patient profile for each choice. abstract_id: PUBMED:16473786 Neonatal isolated critical aortic valve stenosis: balloon valvuloplasty or surgical valvotomy. Background: Open surgical valvotomy and transcatheter balloon valvuloplasty are recognised treatments for neonatal critical aortic stenosis. Methods: A retrospective analysis was undertaken of all newborns with critical aortic valve stenosis between 1990 and 2000 presenting to a tertiary centre and who required intervention. The initial catheter and surgical intervention was generally based on the preference of the attending cardiologist and the anatomy of the aortic valve and in consultation with the cardiothoracic surgeon. The two groups were therefore not strictly comparable. Twelve were subjected to balloon valvuloplasty and thirteen to surgical valvotomy at a median age of 11 days (2-42 days) and 3.5 days (1-19 days) respectively. There was no significant difference in the timing of the procedure, weight of the infant, aortic annulus or left ventricular dimensions in either group. Results: There was one unrelated hospital death in the balloon group compared to two in the surgical group both of whom had endocardial fibroelastosis. Mild to moderate aortic regurgitation was seen after both procedures.
Four patients in the balloon valvuloplasty group developed femoral artery thrombosis and two had cardiac perforation that resolved with non operative management. The mean Doppler gradient was reduced from 44+/-14 mmHg to 13.4+/-5 mmHg (p<0.01) in the valvuloplasty group compared to a reduction from 42+/-15 mmHg to 27+/-8 mmHg (p<0.05) in the surgical group. Five patients in the balloon group required re-intervention within 3 weeks to 21 months after the initial procedure. Two patients in the surgical group required a pulmonary autograft and Konno Procedure 3 and 5 years following surgical valvotomy. Conclusion: Both aortic valvuloplasty and valvotomy offered effective short and medium term palliation. Balloon valvuloplasty patients had a higher re-intervention rate but shorter hospital and intensive care stay, reduced immediate morbidity and were associated with less severe aortic regurgitation. abstract_id: PUBMED:2306149 Management of neonatal critical pulmonic stenosis in the balloon valvotomy era. We evaluated our recent experience with management of neonatal critical pulmonic stenosis and intact ventricular septum between 1982 and 1988. Thirty-nine patients (aged less than 3 months) were treated initially by operation (group A, n = 19) or with balloon pulmonary valvotomy (group B, n = 20). Patients in group A were younger (5 +/- 1.3 versus 18 +/- 4 days in group B) (mean +/- standard error of the mean) and had a greater degree of hypoxia (oxygen tension, 55 +/- 4 versus 80 +/- 6 mm Hg) (p less than 0.05 for all variables). Ten patients in group A and 8 patients in group B had right ventricular hypoplasia, based on an angiographically determined index. Balloon pulmonary valvotomy was attempted in 20 patients at the time of the initial catheterization but was unsuccessful in 9 owing to inability to catheterize the hypoplastic right ventricular outflow tract (n = 8) and to recurrent infundibular stenosis (n = 1). Patients with failed balloon valvotomy were subsequently operated on within 24 hours. The early operative mortality (less than 30 days) was 25% (7 of 28); one death (9%) occurred after successful balloon valvotomy owing to associated critical aortic stenosis. The early postoperative gradient was 20 +/- 2 mm Hg; the post-balloon valvotomy gradient was 18 +/- 3 mm Hg. We conclude that balloon pulmonary valvotomy yields good results in patients with critical pulmonary stenosis with essentially normal-sized right ventricle, whereas surgical pulmonary valvotomy is required for patients with right ventricular hypoplasia. abstract_id: PUBMED:32880254 Role of echocardiographic scoring systems in predicting successful biventricular versus univentricular palliation in neonates with critical aortic stenosis. Background: There are several published echo-derived scores to help predict successful biventricular versus univentricular palliation in neonates with critical aortic stenosis. This study aims to determine whether any published scoring system accurately predicted outcomes in these neonates. Methods: Single centre, retrospective cohort study including neonates who underwent aortic valve intervention (surgical valvotomy or balloon valvuloplasty) with the intention of biventricular circulation. Primary outcome was survival with biventricular circulation at hospital discharge. Data from their initial neonatal echocardiogram were used to compute the following scores - Rhodes, CHSS 1, Discriminant, CHSS 2, and 2 V.
Results: Between 01/1999 and 12/2017, 68 neonates underwent aortic valve intervention at a median age of 4 days (range 1-29 days); 35 surgical valvotomy and 33 balloon valvuloplasty. Survival with biventricular circulation was maintained in 60/68 patients at hospital discharge. Of the remaining eight patients, three were converted to univentricular palliation, four died, and one underwent heart transplant prior to discharge. None of the binary score predictions of biventricular versus univentricular (using that score's proposed cut-offs) were significantly associated with the observed outcome in this cohort. A high percentage of those predicted to need univentricular palliation had successful biventricular repair: 89.4% by Rhodes, 79.3% by CHSS 1, 85.2% by Discriminant, and 66.7% by CHSS 2 score. The 2 V best predicted outcome and agreed with the local approach in most cases. Conclusion: This study highlights the limitations of and need for alternative scoring systems/cut-offs for consistently accurate echocardiographic prediction of early outcome in neonates with critical aortic stenosis. abstract_id: PUBMED:31158809 Outcomes Following Balloon Aortic Valvuloplasty Versus Surgical Valvotomy in Congenital Aortic Valve Stenosis: A Meta-Analysis. Background: The optimal treatment for congenital aortic stenosis (AS) has been debated over the past three decades of experience with both balloon aortic valvuloplasty (BAV) and surgical aortic valvotomy (SAV). While BAV has been the mainstay of therapy for children with AS in most centers, recent single-center reports suggest superior results following SAV. Methods: We queried Medline, EMBASE and Web of Science for eligible studies. Results: A total of 18 studies were included in our meta-analysis: SAV alone (n = 3), BAV alone (n = 10), and both (n = 5). The mean follow-up duration of BAV patients was 6.5 years, while the mean follow-up duration for SAV patients was 7.2 years. Mortality rates following BAV and SAV were 11% (95% CI, 8-14) and 10% (95% CI, 7-15), respectively. Reintervention following initial procedure for treatment of AS was higher following BAV (37% [95% CI, 30-44]) compared with SAV (25% [95% CI, 20-31]). The predominant reintervention for both the BAV and SAV groups was surgery (SAV or aortic valve replacement [AVR]); the surgical reintervention rate was 59% for BAV (95% CI, 51-66) and 75% for SAV (95% CI, 48-91). Mean time to reintervention was shorter for BAV (2.7 years [95% CI, 1.4-4.1]) compared with SAV (6.9 years [95% CI, 4.4-9.4]). AVR following BAV was 20% (95% CI, 17-23) and following SAV was 17% (95% CI, 12-25). Long-term and mid-term follow-up in these studies showed moderate to severe aortic insufficiency (AI) was present in 28% (95% CI, 20-37) and 19% (95% CI, 12-27) in BAV and SAV patients, respectively. Conclusions: The rate of reintervention following BAV is higher than following SAV. However, survival rates, AVR, and development of late AI following BAV and SAV are equivalent. The costs associated with the two therapies in terms of hospital days and other morbidities should be considered in future comparative studies. abstract_id: PUBMED:25647389 Transcatheter management of neonatal aortic stenosis. Neonatal aortic valvar stenosis can be challenging to treat because of the varied morphology of the valve, the association with hypoplasia of other left heart structures, and the presence of left ventricular systolic dysfunction or endomyocardial fibroelastosis. 
Balloon valvuloplasty and surgical valvotomy have been well described in the literature for the treatment of neonatal aortic stenosis. Transcatheter therapy for neonatal aortic stenosis is the preferred method at many centres; however, some centres prefer a surgical approach. Balloon valvuloplasty for neonatal aortic stenosis is reviewed in this manuscript, including the history of the procedure, technical aspects, and acute and long-term outcomes. abstract_id: PUBMED:36062928 Surgical valvotomy versus balloon dilatation for children with severe aortic valve stenosis: a systematic review. Aim: To evaluate outcomes of interventions for severe aortic valve stenosis (AS), whether it is done by surgical aortic valvotomy (SAV) or balloon aortic dilatation (BAD). Results: Eleven studies with a total of 1733 patients were included; 743 patients had SAV, while 990 patients received BAD. There was no significant difference in early mortality (odds ratio [OR]: 0.96, p = 0.86), late mortality (OR: 1.28, p = 0.25), total mortality (OR: 1.10, p = 0.56), and freedom from aortic valve replacement (OR: 1.00, p = 1.00). Reduction of aortic systolic gradient was significantly higher in the SAV group (OR: 2.24, p = 0.00001), and the postprocedural AR rate was lower in the SAV group (OR: 0.21, p = 0.00001). Conclusion: SAV is associated with better reduction of the aortic systolic gradient and less postprocedural AR when compared with BAD. abstract_id: PUBMED:24608018 Percutaneous aortic balloon valvotomy in the United States: a 13-year perspective. Background: We determined the contemporary trends of percutaneous aortic balloon valvotomy and its outcomes using the nation's largest hospitalization database. There has been a resurgence in the use of percutaneous aortic balloon valvotomy in patients at high surgical risk because of the development of less-invasive endovascular therapies. Methods: This is a cross-sectional study with time trends using the Nationwide Inpatient Sample database between the years 1998 and 2010. We identified patients using the International Classification of Diseases, 9th Revision, Clinical Modification procedure code for valvotomy. Only patients aged more than 60 years with aortic stenosis were included. Primary outcome included in-hospital mortality, and secondary outcomes included procedural complications and length of hospital stay. Results: A total of 2127 percutaneous aortic balloon valvotomies (weighted n = 10,640) were analyzed. The use rate of percutaneous aortic balloon valvotomy increased by 158% from 12 percutaneous aortic balloon valvotomies per million elderly patients in 1998-1999 to 31 percutaneous aortic balloon valvotomies per million elderly patients in 2009-2010 in the United States (P < .001). The hospital mortality decreased by 23% from 11.5% in 1998-1999 to 8.8% in 2009-2010 (P < .001). Significant predictors of in-hospital mortality were the presence of increasing comorbidities (P = .03), unstable patient (P < .001), any complication (P < .001), and weekend admission (P = .008), whereas increasing operator volume was associated with significantly reduced mortality (P = .03). Patients who were admitted to hospitals with the highest procedure volume and the highest volume operators had a 51% reduced likelihood (P = .05) of in-hospital mortality when compared with those in hospitals with the lowest procedure volume and lowest volume operators.
Conclusion: This study comprehensively evaluates trends for percutaneous aortic balloon valvotomy in the United States and demonstrates the significance of operator and hospital volume on outcomes. Answer: The outcomes of surgical aortic valvotomy (SAV) and transcatheter balloon valvotomy (BAV) in neonatal critical aortic stenosis have been compared in several studies, and the results suggest that both procedures have similar outcomes in certain aspects, but they also have distinct differences. A study by the Congenital Heart Surgeons Society found that while both SAV and BAV resulted in similar time-related survival rates, BAV was associated with a greater likelihood of important aortic regurgitation, and SAV was more likely to leave residual stenosis (PUBMED:11568048). Another study reported that both procedures provided similar short-term reduction in valve gradient, but BAV had a higher incidence of significant aortic insufficiency (PUBMED:32090726). Similarly, a UK national registry study found comparable survival and freedom from reintervention between BAV and SAV, supporting a treatment choice based on patient characteristics and center expertise (PUBMED:30997128). A retrospective analysis of neonates with critical aortic valve stenosis indicated that balloon valvuloplasty patients had a higher re-intervention rate but shorter hospital and intensive care stay, reduced immediate morbidity, and were associated with less severe aortic regurgitation compared to surgical valvotomy (PUBMED:16473786). A meta-analysis also found that the rate of reintervention following BAV is higher than following SAV, but survival rates, aortic valve replacement (AVR), and development of late aortic insufficiency (AI) following both procedures were equivalent (PUBMED:31158809). Another systematic review concluded that SAV is associated with better reduction of aortic systolic gradient and lesser post-procedural AR compared to BAV (PUBMED:36062928). However, a study on percutaneous aortic balloon valvotomy in the United States showed an increase in the use rate of the procedure and a decrease in hospital mortality over time, highlighting the importance of operator and hospital volume on outcomes (PUBMED:24608018). In summary, while both SAV and BAV have similar survival outcomes, there are differences in the likelihood of residual stenosis, incidence of aortic regurgitation, and rates of reintervention. The choice of procedure may depend on the specific circumstances of the patient, the expertise of the center, and the preference of the attending cardiologist in consultation with the cardiothoracic surgeon.
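As a cross-check on the trend figures cited from PUBMED:24608018, the reported relative changes follow directly from the underlying rates: the rise in use from 12 to 31 valvotomies per million elderly patients is (31 - 12) / 12 ≈ 1.58, i.e., the quoted 158% increase, and the fall in hospital mortality from 11.5% to 8.8% is (11.5 - 8.8) / 11.5 ≈ 0.23, i.e., the quoted 23% decrease.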
Instruction: Are there disadvantages in administering 131I ablation therapy in patients with differentiated thyroid carcinoma without a preablative diagnostic 131I whole-body scan? Abstracts: abstract_id: PUBMED:26028188 Timing of post 131I ablation diagnostic whole body scan in differentiated thyroid cancer patients. Less than four months post ablation may be too early. Aim: to determine whether the first three months after 131I ablation is too early to perform radioiodine diagnostic whole body scintigraphy (dxWBS) in differentiated thyroid carcinoma patients. Patients, Material, Methods: The files of 462 patients who were treated for DTC in our hospital were reviewed. All patients underwent surgical thyroidectomy. 146 patients had data available on a. a dxWBS which was performed less than four months (max 120 days) after 131I ablation with concurrent TSH-stimulated thyroglobulin (Tg) measurement, without further therapeutic measures between ablation and dxWBS, and b. a second dxWBS or 131I therapy (rxWBS) within 1.5 years after ablation. Results: A discordance between the initial and follow-up scan was found in 25/129 (19%) patients: of 54 patients with a positive initial dxWBS, scan results of a second dxWBS or rxWBS obtained with a suitable distance to the initial scan contradicted the initial one in 15 patients (27%). New lesions were discovered in 10/74 negative first dxWBS cases (14%). A discordance between the initial and follow-up stimulated Tg was found in 5/129 (4%) patients: 2/90 (2%) of patients with a negative stimulated Tg at initial dxWBS subsequently showed a positive result whereas 3/29 (10%) patients with an initially positive Tg showed a negative Tg level at the second procedure. Conclusion: Less than four months after 131I ablation is too early to perform radioiodine diagnostic whole body scintigraphy with concurrent TSH stimulated Tg measurement. The identification of the right, later timepoint, however, requires further research. abstract_id: PUBMED:31736875 Prognostic Impact of Direct 131I Therapy After Detection of Biochemical Recurrence in Intermediate or High-Risk Differentiated Thyroid Cancer: A Retrospective Cohort Study. Background: Patients treated for intermediate- or high-risk differentiated thyroid carcinoma (DTC) and Thyroglobulin (TG) elevation during follow-up require a diagnostic whole-body scan (DWBS) and, if positive, 131I treatment. This approach can lead to a delay in treatment and increased costs. The purpose of this study is to compare the oncologic outcomes associated with administration of direct therapy with 131I at first biochemical recurrence. Methods: Retrospective cohort study of patients with intermediate- or high-risk DTC treated with total thyroidectomy and 131I ablation and who developed TG elevation during follow-up, between January 2007 and December 2015. Cohort A included patients who underwent a DWBS with 5 mCi of 131I, and if negative an MRI and/or 18FDG PET-CT prior to the therapeutic dosage, and cohort B included those who only received a therapeutic dosage of 131I, without a DWBS or extensive image studies. Main outcomes were second recurrence (SR) and disease-free survival (DFS). The diagnostic accuracy of DWBS was analyzed. Results: Cohorts A and B had 74 and 41 patients, respectively.
By multivariate analysis, age, differentiation grade, TN classification, ablation dose, and performed DWBS (odds ratio 55.1; 95% CI 11.3-269) were associated with SR (p < 0.0001); age, male gender, ablation dose and performed DWBS (hazard ratio 7.79; 95% CI 3.67-16.5) were independent factors associated with DFS (p < 0.0001). DWBS diagnostic accuracy was 36.48%. Conclusion: 131I treatment in patients with DTC biochemical recurrence and no DWBS or extensive image studies is associated with a significantly lower frequency of SR and an increased DFS. The diagnostic accuracy of DWBS is low, and its clinical efficiency should be defined in prospective phase III studies. abstract_id: PUBMED:33354183 30 mCi exploratory scan for two-step dosimetric 131I therapy in differentiated thyroid cancer patients: A novel approach and case report. Differentiated thyroid cancer patients with significantly elevated or rapidly rising serum thyroglobulin (Tg) levels and negative diagnostic radioiodine scans (DxScan) often present a therapeutic dilemma in deciding whether or not to administer an 131I treatment. In this report, we describe a novel two-step approach of a 30 mCi 131I exploratory scan before a dosimetric 131I therapy to help "un-blind" the treating physician of the benefit/risk ratio of a further "blind" 131I treatment. A 51-year-old man presented with rising Tg levels, a negative DxScan, and a history of widely metastatic follicular thyroid cancer. He had undergone total thyroidectomy, remnant ablation with 3.8 GBq (103.5 mCi) of 131I, Gammaknife®, and treatment with 12.1 GBq (326 mCi) of 131I for multiple metastases. However, at 19 months after the treatments, his Tg levels continued to rise, and scans demonstrated no evidence of radioiodine-avid metastatic disease. In anticipation of a "blind" 131I treatment, the medical team and the patient opted for a 30 mCi exploratory scan. The total dosimetrically guided prescribed activity (DGPA) was decided based on the whole-body dosimetry. The patient was first given 30 mCi of 131I, and the exploratory scan was performed 22 h later, which demonstrated 131I uptake in the left lung, left humeral head, T10, and right proximal thigh muscle. Based on the positive exploratory scan, the remainder of the DGPA was administered within several hours after the scan. On the post-DGPA treatment scan performed at 5-7 days, the lesions seen on the ~22 h exploratory scan were confirmed, and an additional lesion was observed in the left kidney. The 30 mCi exploratory scan suggested the potential for a response in the radioiodine-avid lesions despite a negative diagnostic scan. This method allows 131I treatment to be administered to patients who may have a greater potential for a therapeutic response while avoiding unwarranted side effects in those patients with nonavid disease. abstract_id: PUBMED:30911535 Thyroid Cancer Radiotheragnostics: the case for activity adjusted 131I therapy. Radiotheragnostics represents the systematic integration of diagnostic imaging and therapeutics using radionuclides targeting specific characteristics of tumor biology. Radioiodine (131I) is the classic radiotheragnostic agent used for the diagnosis and treatment of differentiated thyroid cancer (DTC) based on sodium-iodine symporter expression in normal and neoplastic thyroid tissue.
Application of radiotheragnostics principles in thyroid cancer involves using pre-ablation diagnostic scans (Dx Scans) for detection of iodine-avid regional and distant metastatic disease and patient-individualized targeted 131I therapy with goal of maximizing the benefits of the first therapeutic 131I administration. Clinically available nuclear medicine imaging technology has significantly evolved over the past 10 years with the introduction of hybrid SPECT/CT and PET/CT systems, as well as advances in iterative image reconstruction with modeling of image degrading physical factors. This progress makes possible the acquisition of accurate diagnostic radioiodine scintigraphy capable of identifying regional and distant metastatic disease, which can be used for 131I treatment planning and delivery of activity adjusted 131I therapy for achieving intended treatment goals (e.g. remnant ablation, adjuvant131I treatment and targeted 131-I treatment). The overarching aim of thyroid cancer radiotheragnostics is to optimize the balance between 131I therapeutic efficacy and potential side-effects on non-target tissues. abstract_id: PUBMED:15579184 Are there disadvantages in administering 131I ablation therapy in patients with differentiated thyroid carcinoma without a preablative diagnostic 131I whole-body scan? Objective: To evaluate the risk of performing inappropriate (131)I ablative therapies for thyroid carcinoma in patients lacking thyroid remnants or metastases, using a strategy of treatment without a preliminary iodine-131 diagnostic whole-body scan (DxWBS). Design: Retrospective evaluation of post-therapy whole-body scans to assess the prevalence of thyroid remnants or metastases after total thyroidectomy. Comparison of (131)I uptake test and thyroglobulin (Tg) off levothyroxine (L-T4) performed before therapy with post-therapy scans, in order to evaluate the ability to predict inappropriate treatments. Patients: A group of 875 consecutive patients with previous total or near-total thyroidectomy for differentiated thyroid carcinoma underwent (131)I ablative therapy without a preliminary (131)I-DxWBS. All patients were clinically free of distant metastases and macroscopic residual tumour. Measurements: Whole-body scans were performed 2-5 days after the treatment as gold standard for thyroid remnants and metastases; 24-h (131)I quantitative neck uptake test and Tg off L-T4 were performed before (131)I therapy. Results: The majority of patients (94%) were found to have thyroid remnants or metastases at post-therapy scans, in most cases (91.2%) with detectable Tg off L-T4 and positive 24-h neck uptake. 14 patients (1.6%) with tiny lymph-node metastases positive at post-therapy scans showed undetectable Tg off L-T4. In 30 patients (3.6%) faint positive post-therapy images for thyroid remnants have been classified as false-positive results on the basis of both negative 24-h neck uptake and undetectable Tg off L-T4. Conclusions: This study confirms that most patients have residual thyroid tissue after total thyroidectomy and that it seems reasonable to omit routine diagnostic whole-body scans before (131)I treatment with clinical, managerial and economic advantages. abstract_id: PUBMED:25299469 Prediction of the success of thyroid remnant ablation using preablative 99mTc pertechnetate scintigraphy and postablative dual 131I scintigraphy. 
Objective: The aim of this study was to predict the success of 131I ablation using preablative 99mTc pertechnetate salivary scintigraphy and a postablative dual 131I scan in differentiated thyroid cancer (DTC). Patients And Methods: A total of 168 DTC patients who underwent 131I ablation with preablative salivary scintigraphy and a postablative dual (early and delayed) 131I scan were enrolled. For salivary scintigraphy, the thyroid remnant uptake was visually assessed. For the dual 131I scan, the thyroid remnant to background uptake ratios (TBRs) on early and delayed scans were measured and the percentage change in TBR (TBRΔ) was calculated. Results: Thyroid remnant uptake was seen in 69 (41%) patients on salivary scintigraphy and in 162 (96%) patients on the dual 131I scan. The success rate of ablation was higher in patients with negative remnant uptake on salivary scintigraphy (86%) than in patients with positive remnant uptake (58%, P=0.0001). The success rate of ablation was 100% in patients with no remnant uptake on both salivary scintigraphy and the dual 131I scan. The success rate of ablation was higher in patients with TBRΔ 0 or more than in patients with TBRΔ less than 0, irrespective of remnant uptake on salivary scintigraphy (91 vs. 70%, P=0.03, for patients without remnant uptake on salivary scintigraphy; 74 vs. 48%, P=0.05, for patients with remnant uptake on salivary scintigraphy). Conclusion: The success of thyroid remnant ablation in DTC can be predicted by the presence of remnant uptake on preablative 99mTc pertechnetate scintigraphy and change in remnant uptake on the postablative dual 131I scan. abstract_id: PUBMED:30930852 Outcome of Patients With Differentiated Thyroid Cancer Treated With 131-Iodine on the Basis of a Detectable Serum Thyroglobulin Level After Initial Treatment. Background: In patients with differentiated thyroid cancer (DTC) and rising serum thyroglobulin (Tg) after total or near-total thyroidectomy and 131I remnant ablation, an empiric 131I therapy may be considered. However, outcome data after empiric therapy did not show clear evidence of improved survival. We assessed the efficacy of such empiric 131I therapy in patients with DTC and evaluated the long-term outcome. Methods: A total of 100 patients with DTC showing raised Tg level during follow-up after thyroidectomy and 131I ablation were treated with a further 131I therapy (6.1 ± 1.7 GBq). Whole-body scan (WBS) was performed 5-7 days after therapy. Tg value at 12 months after 131I therapy was considered as an indicator of treatment response: ≤1.5 ng/ml complete remission (CR), >50% decrease partial remission (PR), higher than pre-therapy progressive disease (PD), all other cases stable disease (SD). Patients were followed up for 96 ± 75 months. Results: After 12 months, 62% of patients were in CR, 16% in PR, 8% in SD, and 14% in PD. WBS was positive in 41% of patients and negative in 59% (P = NS). Among patients with local recurrences at WBS, 89% showed either CR or PR, while 71% of patients with distant metastases were in SD or PD (P < 0.001). Distant metastases at WBS (P < 0.05), CR (P < 0.0001), and CR + PR (P < 0.0001) were predictors of both progression-free survival and overall survival. Conclusion: There is a beneficial effect of 131I therapy on outcome of patients with DTC treated on the basis of elevated Tg value.
In these patients, survival is affected by achievement of CR or PR at 12 months evaluation after 131I therapy and by the presence of distant metastases at WBS. abstract_id: PUBMED:36268911 Nasolacrimal Duct Obstruction on 131I SPECT/CT: Atypical False-positive Paranasal Radioiodine Uptake as a Complication of Single-dose RAI Treatment. Nasolacrimal duct obstruction (NLDO) is a rare complication after radioiodine therapy and may cause false positive 131I uptake at the point of obstruction in 131I whole body scan. Here, we report a 59-year-old female patient with papillary thyroid cancer treated with total thyroidectomy followed by 131I therapy. 131I whole body scan revealed focal uptake in the head. Single photon emission computed tomography/computed tomography (CT) showed focal uptake at the right proximal nasolacrimal duct. The ophthalmologic examination and the diagnostic maxillofacial CT confirmed the diagnosis of NLDO. abstract_id: PUBMED:36531486 Radioiodine adjuvant therapy in differentiated thyroid cancer: An update and reconsideration. Radioiodine (131I) therapy (RAI) has been utilized for treating differentiated thyroid cancer (DTC) for decades, and its uses can be characterized as remnant ablation, adjuvant therapy (RAT) or treatment for known diseases. Compared with the definite 131I treatment targets for remnant ablation and known disease, 131I adjuvant therapy (RAT) aims to reduce the risk of recurrence by destroying potential subclinical disease. Since it is merely given as a risk with no imaging confirmation of persistence/recurrence/metastases, the evidence is uncertain. With limited knowledge and substance, the indication for RAT remains poorly defined for everyday clinical practice, and the benefits of RAT remain controversial. This ambiguity results in a puzzle for clinicians seeking clarity on whether patients should receive RAT, and whether patients are at risk of recurrence/death from undertreatment or adverse events from overtreatment. Herein, we clarified the RAT indications in terms of clinicopathological features, postoperative disease status and response to therapy evaluation, and retrospectively examined the clinical outcomes of RAT as reported in current studies and guidelines. Furthermore, given the evolution of nuclear medicine imaging techniques, it can be expected that the future of RAT may be advanced by nuclear medicine theranostics (i.e., 131I whole-body scan, PET/CT) by accurately revealing the biological behaviors, as well as the underlying molecular background. abstract_id: PUBMED:36093082 A semiquantitative study of the optimal whole-body imaging time after 131I therapy for differentiated thyroid cancer. Objective: We compared the efficacy of post-therapy whole-body scintigraphy (Tx-WBS) in terms of detecting lesions in patients with differentiated thyroid cancer (DTC) on days 3, 7, and 10 after 131I treatment, and we determined the optimal imaging time. Methods: Clinical data from 161 DTC patients treated with 131I were collected. All patients underwent day 3 imaging, but only 98 patients underwent day 3 and day 7 imaging, and 63 patients underwent day 3 and day 10 imaging at the same time. And the thyroid bed uptake was visually graded. The radioactivity ratios of the thyroid bed, neck lymph nodes, lungs, and liver (to the background) were calculated to allow a semiquantitative analysis. 
Results: Visual analysis showed that delayed imaging revealed more lymph node and lung radioactivity, early imaging showed more residual thyroid tissue, and significant differences in uptake were apparent at days 3, 7, and 10 (P < 0.001). Semiquantitative analysis revealed significant differences in the target-to-background ratios of the residual thyroid bed, lungs, and liver at days 3, 7, and 10. On these days, the imaging sensitivities in terms of detecting metastatic lymph nodes were 29.58%, 39.02%, and 19.35%, and the specificities were 75.56%, 75.86%, and 75% (P = 0.465, 0.154, and 0.763, respectively). In terms of lung metastasis detection, the sensitivities were 29.58%, 38.46%, and 13.33%, respectively, and the specificities were 98.33%, 100%, and 95.83% (P < 0.001, < 0.001, and P = 0.238). Conclusion: More residual thyroid tissue can be detected by imaging on day 3; imaging on day 7 more effectively detects lung metastases than does imaging on day 3 or 10. Answer: Yes, there are potential disadvantages in administering 131I ablation therapy in patients with differentiated thyroid carcinoma without a preablative diagnostic 131I whole-body scan. One study suggests that most patients have residual thyroid tissue after total thyroidectomy, and it seems reasonable to omit routine diagnostic whole-body scans before 131I treatment, as this approach has clinical, managerial, and economic advantages (PUBMED:15579184). However, this study also acknowledges that a small percentage of patients with tiny lymph-node metastases positive at post-therapy scans showed undetectable thyroglobulin off levothyroxine, indicating that the preablative scan might have helped identify these cases. Another study indicates that the timing of post 131I ablation diagnostic whole body scan is crucial, as performing it less than four months after 131I ablation may be too early, leading to discordant results between initial and follow-up scans (PUBMED:26028188). This suggests that the absence of a preablative scan could result in premature or inappropriate treatment decisions. Furthermore, the use of preablative diagnostic scans can help predict the success of thyroid remnant ablation, as shown by the correlation between remnant uptake on preablative 99mTc pertechnetate scintigraphy and the change in remnant uptake on the postablative dual 131I scan (PUBMED:25299469). Without this predictive information, there may be a risk of either undertreating or overtreating patients. Additionally, the use of radioiodine adjuvant therapy without imaging confirmation of persistence/recurrence/metastases is controversial, and the benefits remain uncertain (PUBMED:36531486). This highlights the potential disadvantage of not using diagnostic scans to guide therapy. In summary, while there may be some advantages to omitting preablative diagnostic scans, there are also potential disadvantages, such as the risk of missing small metastases, making premature treatment decisions, and the inability to accurately predict the success of ablation therapy. Therefore, the decision to administer 131I ablation therapy without a preablative diagnostic scan should be made with careful consideration of these factors.
Instruction: Are bereaved family members a valid proxy for a patient's assessment of dying? Abstracts: abstract_id: PUBMED:29945455 Validity and Reliability of the Dying Care Process and Outcome Scales Before and After Death From the Bereaved Family Members' Perspective. Objective: There are no instruments evaluating the processes and outcomes of dying care right before and after death. Therefore, we developed and examined the validity and reliability of 2 scales for evaluating dying care processes and outcomes before and after death. Methods: A cross-sectional, anonymous questionnaire was administered to bereaved family members of patients with cancer who had died in 5 facilities. We evaluated the Dying Care Process Scale for Bereaved Family Members (DPS-B) and the Dying Care Outcome Scale for Bereaved Family Members (DOS-B) with 345 bereaved family members. Results: A factor analysis revealed that DPS-B and DOS-B each consisted of 4 subscales. For the DPS-B, they were "symptom management," "respect for the patient's dignity before and after death," "explanation to the family," and "family care." For the DOS-B, they were "peaceful dying process for the patient," "being respected as a person before and after death," "good relationship between the patient and family," and "peaceful dying process for the family." Both DPS-B and DOS-B had sufficient convergent and discriminative validity, sufficient internal consistency (DPS-B: α = 0.91 and subscales' αs = 0.78-0.91; DOS-B: α = 0.91 and subscales' αs = 0.78-0.94), and sufficient test-retest reliability (DPS-B: intraclass correlation coefficient [ICC] of total score = 0.79 and subscales = 0.55-0.79; DOS-B: ICC of total score = 0.88 and subscales = 0.70-0.88). Significance Of Results: Both DPS-B and DOS-B are valid and reliable scales for evaluating the dying care processes and outcomes before and after death from the bereaved family members' perspectives. abstract_id: PUBMED:7514313 Are bereaved family members a valid proxy for a patient's assessment of dying? Objective: To compare assessments made retrospectively by bereaved family members (or the nearest carer to the patient) with assessments made before death by palliative staff and, where available, by patients themselves or the family member. Methods: SETTING: two palliative care support teams. ASSESSMENTS: were recorded prospectively by team staff, patients and their family members for consecutive patients referred, and then were recorded retrospectively by family members during interview seven months after bereavement. MEASURES: seven items each rated 0 (best) to 4 (worst) using standard definitions. The rater was asked to average the severity over one week. ANALYSIS: ratings were tested for percentage agreement, for Cohen's Kappa (which controls for chance agreement) and for Spearman correlations. Results: Staff ratings and family members' retrospective ratings, which described the last week of life, were available for 35 patients. Six patients and seven family members had also been interviewed shortly before the patient's death. The main problems identified by all raters were similar: family anxiety, symptom control, patient anxiety and pain control. For three items, practical aid, wasted time and communication, agreement was good: all cases except one were equal or within one score. However, problems were rarely identified for these items.
For the other four items (pain control, other symptom control, family anxiety and patient anxiety), there was little agreement; Cohen's Kappa ranged from 0.05 to 0.22. Agreement for one item (patient anxiety) was significantly improved if a patient had died at home. Comparison of ratings made by the family members before the death and seven months after bereavement suggests that family members alter their assessments during bereavement. Conclusion: Retrospective assessments by bereaved family members may be valid for some items related to service provision, but not as the sole assessment of a patient's pain, symptoms or anxiety. We suggest that studies which rely on these retrospective ratings should assess the validity of their responses and record more information about the mood and grief of the family member. abstract_id: PUBMED:29164552 The Application of Grief Theories to Bereaved Family Members Loss is an inevitable experience for humans for which grief is a natural response. Nurses must have an adequate understanding of grief and bereavement in order to be more sensitive to these painful emotions and to provide appropriate care to families who have lost someone they love deeply. This article introduces four important grief theories: Freud's grief theory, Bowlby's attachment theory, Stroebe and Schut's dual process model, and Neimeyer's meaning reconstruction model. Freud's grief theory holds that the process of grief adaptation involves a bereaved family adopting alternative ways to connect with the death of a loved one and to restore their self-ego. Attachment theory holds that individuals who undergo grieving that is caused by separation from significant others and that triggers the process of grief adaptation will fail to adapt if they resist change. The dual process model holds that bereaved families undergo grief adaptation not only as a way to face their loss but also to restore normality in their lives. Finally, the meaning reconstruction model holds that the grief-adaptation strength of bereaved families comes from their meaning reconstruction in response to encountered events. It is hoped that these theories offer nurses different perspectives on the grieving process and provide a practical framework for grief assessment and interventions. Additionally, specific interventions that are based on these four grief theories are recommended. Furthermore, theories of grief may help nurses gain insight into their own practice-related reactions and healing processes, which is an important part of caring for the grieving. Although the grieving process is time consuming, nurses who better understand grief will be better able to help family members prepare in advance for the death of a loved one and, in doing so, help facilitate their healing, with a view to the future and to finally returning to normal daily life. abstract_id: PUBMED:32040370 Insights from Bereaved Family Members about End-of-Life Care and Bereavement. Background: Bereavement programs provide institutions with an avenue for obtaining feedback from family members about their experiences during a patient's illness and end-of-life (EOL) period that can be used to improve both patient care and the care of bereaved individuals. Objective: We examined family members' experiences of the clinical care their loved one received at EOL and the perceived effect this care had on their subsequent bereavement. Design: Survey.
Setting/Subjects: One hundred forty bereaved family members from our cancer institute completed a bereavement survey. Of these family members, 67% were female, 66% were 60 years of age or older, and 81% were widowed. Measurement: Using NVivo 11 Plus©, we analyzed open-ended responses that asked bereaved family members about the ways the clinical (oncology) team was helpful or not in dealing with their loss. Results: The findings showed that compassionate care, competency, receiving honest facts, and outreach after the death favorably influenced the bereavement experience. Conversely, impersonal contact, lack of contact, including lack of caregiver support, and lack of information about EOL and death were identified as actions taken by the clinical team that were unhelpful in dealing with their loss. Conclusions: The feedback from bereaved family members highlights two areas that could benefit from quality improvement efforts: (1) communication skills that focus on enhancing compassionate connection, including conveying empathy, and providing reassurance and guidance to patients and their families and (2) communication skills that focus on delivering information about prognosis and the EOL period in an honest and direct way. abstract_id: PUBMED:25852202 Japanese Bereaved Family Members' Perspectives of Palliative Care Units and Palliative Care: J-HOPE Study Results. The study purpose was to understand the perspectives of bereaved family members regarding palliative care unit (PCU) and palliative care and to compare perceptions of PCU before admission and after bereavement. A cross-sectional questionnaire survey was conducted, and the perceptions of 454 and 424 bereaved family members were obtained regarding PCU and palliative care, respectively. Family members were significantly more likely to have positive perceptions after bereavement (ranging from 73% to 80%) compared to before admission (ranging from 62% to 71%). Bereaved family members who were satisfied with medical care in the PCU had a positive perception of the PCU and palliative care after bereavement. Respondents younger than 65 years of age were significantly more likely to have negative perceptions of PCU and palliative care. abstract_id: PUBMED:29609469 Bereaved Family Members' Perceptions of the Distressing Symptoms of Terminal Patients With Cancer. Background: Few data are available on bereaved family members' perspective on the frequency of symptoms and degree of distress among terminal patients with cancer. Methods: We sent a questionnaire to 1472 bereaved family members of terminal patients with cancer in 20 general hospitals. The questionnaire asked them (1) to indicate which symptoms the patients had, (2) to rate on a 4-point scale the extent to which the symptom was distressing, as follows: 1 = not distressing, 2 = slightly distressing, 3 = quite distressing, and 4 = very distressing, as of 2 weeks before the patient died. Results: We analyzed 805 questionnaires for this study. Anorexia was the commonest symptom among terminal patients with cancer experienced by bereaved family members, followed by somnolence, weight loss, fatigue, and pain. Anorexia was the most distressing symptom among terminal patients with cancer experienced by bereaved family members, followed by weight loss, pain, edema, and dyspnea. Conclusions: Anorexia and weight loss were frequent symptoms that bereaved family members found very distressing. Furthermore, no effective treatments are currently available.
Thus, we think that further study in this field is necessary. abstract_id: PUBMED:34060705 Good Quality Care for Cancer Patients Dying in Hospitals, but Information Needs Unmet: Bereaved Relatives' Survey within Seven Countries. Background: Recognized disparities in quality of end-of-life care exist. Our aim was to assess the quality of care for patients dying from cancer, as perceived by bereaved relatives, within hospitals in seven European and South American countries. Materials And Methods: A postbereavement survey was conducted by post, interview, or via tablet in Argentina, Brazil, Uruguay, U.K., Germany, Norway, and Poland. Next of kin to cancer patients were asked to complete the international version of the Care Of the Dying Evaluation (i-CODE) questionnaire 6-8 weeks postbereavement. Primary outcomes were (a) how frequently the deceased patient was treated with dignity and respect, and (b) how well the family member was supported in the patient's last days of life. Results: Of 1,683 potential participants, 914 i-CODE questionnaires were completed (response rate, 54%). Approximately 94% reported the doctors treated their family member with dignity and respect "always" or "most of the time"; similar responses were given about nursing staff (94%). Additionally, 89% of participants reported they were adequately supported; this was more likely if the patient died on a specialist palliative care unit (odds ratio, 6.3; 95% confidence interval, 2.3-17.8). Although 87% of participants were told their relative was likely to die, only 63% were informed about what to expect during the dying phase. Conclusion: This is the first study assessing quality of care for dying cancer patients from the bereaved relatives' perspective across several countries on two continents. Our findings suggest many elements of good care were practiced but improvement in communication with relatives of imminently dying patients is needed. (ClinicalTrials.gov Identifier: NCT03566732). Implications For Practice: Previous studies have shown that bereaved relatives' views represent a valid way to assess care for dying patients in the last days of their life. The Care Of the Dying Evaluation questionnaire is a suitable tool for quality improvement work to help determine areas where care is perceived well and areas where care is perceived as lacking. Health care professionals need to sustain high quality communication into the last phase of the cancer trajectory. In particular, discussions about what to expect when someone is dying and the provision of hydration in the last days of life represent key areas for improvement. abstract_id: PUBMED:35743451 Depression and Complicated Grief, and Associated Factors, of Bereaved Family Members of Patients Who Died of Malignant Pleural Mesothelioma in Japan. Objectives: we investigated the prevalence and associated factors of depression and complicated grief (CG) among bereaved family members of malignant pleural mesothelioma (MPM) patients in Japan. Methods: Bereaved family members of MPM patients (n = 72) were surveyed. The Japanese version of the Patient Health Questionnaire-9 (PHQ-9) and the Japanese version of the Brief Grief Questionnaire (BGQ) were used to assess depression and complicated grief (CG), respectively. Socio-economic factors, anger toward asbestos, care satisfaction, achievement of good death, and quality of end-of-life care were assessed in relation to depression and CG. 
Results: In the family members of MPM patients, the frequencies of depression and CG were 19.4% and 15.3%, respectively. The bereaved family members who were not compensated by the asbestos-related health-damage relief system (p = 0.018) and who felt the financial impacts of the patient's MPM on the family (p = 0.006) had a higher likelihood of depression. The bereaved family members who were not satisfied with the care given when the patient became critical (p = 0.034), who were not compensated by the asbestos-related health-damage relief system (p = 0.020), who felt the financial impact of the patient's MPM on the family (p = 0.016), and whose deceased relative underwent surgery (p = 0.030) had a higher likelihood of CG. Conclusions: For bereaved family members of MPM patients, routine screening for depression and CG and the provision of grief care are suggested. In addition, for family members of MPM patients, financial support, including the promotion of the asbestos-related health-damage relief system, and improved care for patients who undergo surgery and when patients become critical, are recommended. abstract_id: PUBMED:28450216 Effects of End-of-Life Discussions on the Mental Health of Bereaved Family Members and Quality of Patient Death and Care. Context: End-of-life discussions are crucial for providing appropriate care to patients with advanced cancer at the end of their lives. Objectives: The objective of this study was to explore associations between end-of-life discussions and bereaved families' depression and complicated grief and the quality of patient death and end-of-life care. Methods: A nationwide questionnaire survey of bereaved family members was conducted between May and July 2014. A total of 13,711 bereaved family members of cancer patients who were cared for by specialist palliative care services at 75 institutions throughout Japan and died before January 2014 participated. We evaluated the prevalence of depression (defined as the Patient Health Questionnaire-9 ≥ 10) and complicated grief (defined as the Brief Grief Questionnaire ≥ 8) in bereaved family members. Moreover, we evaluated the quality of death and end-of-life care with the Good Death Inventory and the Care Evaluation Scale, respectively. Results: A total of 9123 questionnaires were returned (response rate 67%), and 80.6% of the respondents reported that they had end-of-life discussions. After propensity score-weighted adjustment, the results showed that bereaved family members who had end-of-life discussions had a lower frequency of depression (17.3% vs. 21.6%; P < 0.001) and complicated grief (13.7% vs. 15.9%; P = 0.03). End-of-life discussions were associated with better quality of death (the Good Death Inventory score, 47.2 ± 8.5 vs. 46.1 ± 9.4; P < 0.001) and end-of-life care (the Care Evaluation Scale score, 84.1 ± 11.4 vs. 78.9 ± 14.3; P < 0.001). Conclusion: End-of-life discussions may contribute to reducing depression and complicated grief in bereaved family members and enable patients to experience quality end-of-life care and a good death. abstract_id: PUBMED:27239421 Eating-related distress and need for nutritional support of families of advanced cancer patients: a nationwide survey of bereaved family members. Background: A number of advanced cancer patients are suffering from physical and psychosocial burdens because of cancer cachexia, and these burdens also greatly impact their family members and relationships between patients and family members.
It is necessary to consider the psychosocial impact of cancer cachexia on family members of advanced cancer patients. Methods: A cross-sectional anonymous nationwide survey was conducted involving 925 bereaved family members of cancer patients who had been admitted to 133 inpatient hospices throughout Japan. Results: A total of 702 bereaved family members returned the questionnaires (response rate, 75.9%). Concerning eating-related distress, 'I served what the patient wanted without consideration of calories and nutritional composition' was highest (75.1%), and 'I tried making many kinds of meals for the patient' and 'I was concerned about planning meals for the patient every day' followed (63.0% and 59.4%, respectively). The top 5 of the 19 items were categorized as 'fighting back'. Need for nutritional support was high (72.2%), and need for explanations about the reasons for anorexia and weight loss of patients was moderate (41.4%). Explanatory factor analysis of eating-related distress identified the following four domains: (factor 1) feeling that family members forced the patient to eat to avoid death, (factor 2) feeling that family members made great efforts to help the patient eat, (factor 3) feeling that eating was a cause of conflicts between the patient and family members, and (factor 4) feeling that correct information was insufficient. Results of multiple logistic regression analysis showed that spouse, fair/poor mental status, factors 1 and 4 were identified as independent determinants of major depression {odds ratio [OR] 3.27 [95% confidence interval (CI) 1.24-8.60], P = 0.02; OR 4.50 [95% CI 2.46-8.25], P < 0.001; OR 2.51 [95% CI 1.16-5.45], P = 0.02; OR 2.33 [95% CI 1.13-4.80], P = 0.02, respectively}. Conclusions: A number of family members of advanced cancer patients experienced high levels of eating-related distress and had a need for nutritional support. Answer: The validity of bereaved family members as proxies for a patient's assessment of dying is mixed and depends on the aspect of care being evaluated. Bereaved family members' retrospective assessments may be valid for some items related to service provision, but they are not considered reliable as the sole assessment of a patient's pain, symptoms, or anxiety. A study comparing assessments made by palliative staff, patients, and family members before death with those made retrospectively by family members found that while there was good agreement on items such as practical aid, wasted time, and communication, there was little agreement on pain control, other symptom control, family anxiety, and patient anxiety. The study suggests that family members may alter their assessments during bereavement, indicating that their retrospective assessments may not always accurately reflect the patient's experience (PUBMED:7514313). However, bereaved family members' perspectives can provide valuable insights into the processes and outcomes of dying care. The development of the Dying Care Process Scale for Bereaved Family Members (DPS-B) and the Dying Care Outcome Scale for Bereaved Family Members (DOS-B) has shown that these scales are valid and reliable for evaluating the dying care processes and outcomes before and after death from the bereaved family members' perspectives (PUBMED:29945455).
Furthermore, bereaved family members' feedback can highlight areas of end-of-life care that could benefit from quality improvement efforts, such as communication skills and providing information about prognosis and the end-of-life period (PUBMED:32040370). Their perceptions can also change after bereavement, often becoming more positive regarding palliative care units and palliative care (PUBMED:25852202). In summary, while bereaved family members' assessments may not always align with the patient's own assessments, especially regarding pain and symptoms, their perspectives are still valuable for evaluating certain aspects of dying care and identifying areas for improvement in end-of-life care. It is important to consider the context and specific aspects of care when determining the validity of bereaved family members as proxies for a patient's assessment of dying.
Instruction: Is laparoscopic donor nephrectomy here to stay? Abstracts: abstract_id: PUBMED:34499549 Procedure: Laparoscopic Donor Nephrectomy. Minimally invasive approaches for laparoscopic donor nephrectomy are necessary to limit surgical morbidity, and technical challenges differ from those encountered during other laparoscopic renal surgeries. Presented here is a step-by-step guide for laparoscopic donor nephrectomy-focusing on pure laparoscopic and hand-assisted techniques. Both straight laparoscopic and hand-assisted nephrectomies were performed in healthy donors who met transplantation criteria in terms of global health and psychologic well-being. Patient positioning, trocar placement, surgical steps, incision closure, and postoperative care are reviewed. Standard equipment used to complete this procedure is itemized. This guide outlines indications, preoperative preparation, and procedural steps for laparoscopic donor nephrectomy. The techniques and the evolution thereof represent our experience since 2002 for 510 cases. The attached videos demonstrate a high-volume surgeon's typical approach while factoring in anatomical variation. In both cases, the donor nephrectomies were without incident and the patient's postoperative courses were without complication. A basic framework for donor nephrectomy is presented highlighting surgical steps we believe to be essential for graft preservation and ultimately effective transplantation. Although no two cases are the same, systematic approaches will allow for timely case completion, fewer complications, and better donor/recipient outcomes. abstract_id: PUBMED:36549407 The Effect of Preoperative Carbohydrate Intake on Length of Stay and Postoperative Recovery Following Laparoscopic Living Donor Nephrectomy. Objective: Enhanced Recovery After Surgery (ERAS) protocols are applied in many surgical procedures and often involve preoperative carbohydrate intake. Research surrounding the utility of ERAS in living donor nephrectomy is limited. The objective of this study was to identify whether living kidney donors who received preoperative oral carbohydrates experienced a difference in length of hospital stay (LOS), duration of time required to resume regular oral food and fluid intake, and incidence of gastrointestinal (GI) complications following laparoscopic nephrectomy compared to historical control donors who underwent preoperative fasting. Methods: This study was a retrospective analysis of data from adult subjects at one transplant center who underwent laparoscopic living donor nephrectomy. A total of 55 ERAS subjects who received preoperative carbohydrates and 93 historical control subjects who underwent preoperative fasting were included in the final analysis. The following variables were compared between groups: LOS, time to tolerating a regular oral diet postoperatively, time to meeting 50% of estimated fluid needs by oral intake postoperatively, and incidence of postoperative GI complications. Results: No significant differences between the ERAS and historical control groups in age, weight, body mass index, sex distribution, or estimated fluid needs were identified. Both groups consisted of predominantly female subjects. 
ERAS subjects experienced a shorter LOS (2.8 days versus 3.9 days, P < .001), time to tolerating a regular oral diet (36.5 hours versus 68.2 hours, P < .001), and time to meeting 50% of estimated fluid needs (25.3 hours versus 44.6 hours, P < .001) after laparoscopic nephrectomy compared to historical control subjects. No significant difference between groups in the incidence of postoperative GI complications (nausea, vomiting, or ileus) was identified. Conclusion: Our findings demonstrate the advantages of ERAS in living kidney donors undergoing laparoscopic nephrectomy and support ERAS implementation within this patient population. abstract_id: PUBMED:28101451 Laparoscopic vs open donor nephrectomy: Lessons learnt from single academic center experience. Aim: To compare laparoscopic and open living donor nephrectomy, based on the results from a single center during a decade. Methods: This is a retrospective review of all living donor nephrectomies performed at the Massachusetts General Hospital, Harvard Medical School, Boston, between 1/1998 - 12/2009. Overall there were 490 living donors, with 279 undergoing laparoscopic living donor nephrectomy (LLDN) and 211 undergoing open donor nephrectomy (OLDN). Demographic data, operating room time, the effect of the learning curve, the number of conversions from laparoscopic to open surgery, donor preoperative glomerular filtration rate and creatinine (Cr), donor and recipient postoperative Cr, delayed graft function and donor complications were analyzed. Statistical analysis was performed. Results: Overall there was no statistically significant difference between the LLDN and the OLDN groups regarding operating time, donor preoperative renal function, donor and recipient postoperative kidney function, delayed graft function or the incidence of major complications. When the last 100 laparoscopic cases were analyzed, there was a statistically significant difference regarding operating time in favor of the LLDN, pointing out the importance of the learning curve. Furthermore, another significant difference between the two groups was the decreased length of stay for the LLDN (2.87 d for LLDN vs 3.6 d for OLDN). Conclusion: Recognizing the importance of the learning curve, this paper provides evidence that LLDN has a safety profile comparable to OLDN and decreased length of stay for the donor. abstract_id: PUBMED:36807636 Laparoscopic donor nephrectomy-Technique and peri-operative outcomes in an Australian transplant center. Introduction: Demand for donor kidneys far exceeds the availability of organs from deceased donors. Living donor kidneys are an important part of addressing this shortfall, and laparoscopic nephrectomy is an important strategy to reduce donor morbidity and increase the acceptability of living donation. Aim: To retrospectively review the intraoperative and postoperative safety, technique, and outcomes of patients undergoing donor nephrectomy at a single tertiary hospital in Sydney, Australia. Method: Retrospective capture and analysis of clinical, demographic, and operative data for all living donor nephrectomies performed between 2007 and 2022 at a single University Hospital in Sydney, Australia. Results: Four hundred and seventy-two donor nephrectomies were performed: 471 were laparoscopic, two of which were converted from laparoscopic to open and hand-assisted nephrectomy, respectively, and one (0.2%) underwent primary open nephrectomy.
The mean warm ischemia time was 2.8 min (±1.3 SD, median 3 min, range 2-8 min) and the mean length of stay (LOS) was 4.1 days (±1.0 SD). The mean renal function on discharge was 103 μmol/L (±23.0 SD). Seventy-seven (16%) patients had a complication with no Clavien Dindo IV or V complications seen. Outcomes demonstrated no impact of donor age, gender, kidney side, relationship to the recipient, vascular complexity; or surgeon experience, on complication rate or LOS. Conclusion: Laparoscopic donor nephrectomy is a safe and effective procedure with minimal morbidity and no mortality in this series. abstract_id: PUBMED:28247669 Laparoscopic living donor nephrectomy for kidney transplantation Globally, the number of patients with end-stage renal disease (ESRD) has been steadily increasing every year. These patients need renal replacement therapy which includes peritoneal dialysis, programmed hemodialysis and kidney transplantation, which is the ultimate form of replacement. Living donor kidney transplantation has significant advantages over cadaveric kidney transplantation regarding the long-term functional results of renal transplant and the survival of recipients. Open nephrectomy, previously used to retrieve a healthy kidney from a donor is associated with traumatic access and a long recovery period of the donor. With the development of minimally invasive technologies, laparoscopic living donor nephrectomy has been developed, which currently is the best method for retrieving transplant kidney. The review outlines the characteristics and outcomes of laparoscopic living donor nephrectomy for kidney transplantation. abstract_id: PUBMED:25097308 Transperitoneal laparoscopic left versus right live donor nephrectomy: Comparison of outcomes. Introduction: Although laparoscopic donor nephrectomy (LDN) is being performed at many centers, there are reservations on the routine use of laparoscopy for harvesting the right kidney due to a perception of technical complexity and increased incidence of allograft failure, renal vein thrombosis and the need for more back-table reconstruction along with increased operative time. Materials And Methods: We performed a prospective non-randomized comparison of transperitoneal laparoscopic left donor nephrectomy (LLDN) with laparoscopic right donor nephrectomy (RLDN) from August 2008 to May 2013. The operative time, warm ischemia time, intraoperative events, blood loss and post-operative parameters were recorded. The renal recipient parameters, including post-operative creatinine, episodes of acute tubular necrosis (ATN) and delayed graft function were also recorded. Results: A total of 188 LDN were performed between August 2008 and May 2013, including 164 LLDN and 24 RLDN. The demographic characteristics between the two groups were comparable. The operative duration was in favor of the right donor group, while warm ischemia time, estimated blood loss and mean length of hospital stay were similar between the two groups. Overall renal functional outcomes were comparable between the two donor groups, while the recipient outcomes including creatinine at discharge were also comparable. Conclusions: RLDN has a safety profile comparable with LLDN, even in those with complex vascular anatomy, and can be successfully performed by the transperitoneal route with no added morbidity. RLDN requires lesser operative time with comparable morbidity. abstract_id: PUBMED:37453116 Laparoscopic Donor Nephrectomy in the Republic of North Macedonia. 
Introduction: Hand-assisted laparoscopic living donor nephrectomy has become the technique of choice for living donor kidney donations. Since 2018, 30 procedures have been performed at our clinic using this technique. The goal of this comparative analysis was to determine how surgical technique, specifically hand-assisted laparoscopic living donor nephrectomy, may affect early graft function when compared to open classical nephrectomy. Material and methods: Retrospective analyses were performed, comparing the two techniques of kidney donation. Kidney transplantation was performed with the open standard technique in both groups. The primary outcome was early graft function; levels of urine output and plasma creatinine were analyzed at three time points. A secondary outcome was the quality of the operative technique, which was determined by the time of warm ischemia, blood loss, and duration of surgery. Additionally, we noted all complications, length of hospital stay, and patient satisfaction. Results: In terms of warm ischemia time, there was no statistically significant difference between donors in both groups. It is important to note that in 2 recipients from Group II we did not observe diuresis at the conclusion of the operation. The recipients' diuresis was 515 ml ± 321 SD in group I and 444 ml ± 271 SD in group II. At 3, 12, and 36 hours postoperatively, there were statistically significant differences in the average serum creatinine values (p < 0.05) in favor of group I. Similar results were observed in the second time measurement at 12 h and the third time measurement at 36 h for serum urea levels in recipients. The difference in serum urea values between the recipients in the groups at the first measurement (3 h) following surgery was not statistically significant. Conclusion: Hand-assisted laparoscopic donor nephrectomy is recognized as a safe and effective treatment. Donors in this situation have a different profile from other surgical patients; hence, they do not undergo surgery due to their own medical condition but for an altruistic reason, and with hand-assisted living donor nephrectomy such patients receive all the advantages of minimally invasive surgery. The two main objectives of a donor nephrectomy are to give the recipient the best possible kidney and to ensure the donor's complete safety.
There was no significant difference in the pain scores, graft function, or quality of life between the two groups. Conclusions: MDN compares well with the laparoscopic approach in terms of post-operative pain, graft function and quality of life of donors. Significantly less operative time along with the reduced cost makes it a better option in our predominantly lower BMI patient population. abstract_id: PUBMED:12482153 Hand-assisted laparoscopic living-donor nephrectomy as an alternative to traditional laparoscopic living-donor nephrectomy. The benefits of laparoscopic living-donor nephrectomy (LDN) are well described, while similar data on hand-assisted laparoscopic living-donor nephrectomy (HALDN) are lacking. We compare hand-assisted laparoscopic living-donor nephrectomy with open donor nephrectomy. One hundred consecutive hand-assisted laparoscopic living-donor nephrectomy (10/98-8/01) donor/recipient pairs were compared to 50 open donor nephrectomy pairs (8/97-1/00). Mean donor weights were similar (179.6 +/- 40.8 vs. 167.4 +/- 30.3 lb; p = NS), while donor age was greater among hand-assisted laparoscopic living-donor nephrectomy (38.2 +/- 9.5 vs. 31.2 +/- 7.8 years; p < 0.01). Right nephrectomies were fewer in hand-assisted laparoscopic living-donor nephrectomy [17/100 (17%) vs. 22/50 (44%); p < 0.05]. Operative time for hand-assisted laparoscopic living-donor nephrectomy (3.9 +/- 0.7 vs. 2.9 +/- 0.5 h; p < 0.01) was longer; however, return to diet (6.9 +/- 2.8 vs. 25.6 +/- 6.1 h; p < 0.01), narcotics requirement (17.9 +/- 6.3 vs. 56.3 +/- 6.4 h; p < 0.01) and length of stay (51.7 +/- 22.2 vs. 129.6 +/- 65.7 h; p < 0.01) were less than open donor nephrectomy. Costs were similar ($11072 vs. $10840). Graft function and 1-week Cr of 1.4 +/- 0.9 vs. 1.6 +/- 1.1 mg/dL (p = NS) were similar. With the introduction of HALDN, our laparoscopic living-donor nephrectomy program has increased by 20%. Thus, similar to traditional laparoscopic donor nephrectomy, hand-assisted laparoscopic living-donor nephrectomy provides advantages over open donor nephrectomy without increasing costs. abstract_id: PUBMED:33194009 Long-term outcomes of laparoscopic versus open donor nephrectomy for kidney transplantation: a meta-analysis. Laparoscopic surgery is widely used for living donor nephrectomy and has demonstrated superiority over open surgery by improving several outcomes, such as length of hospital stay and morphine requirements. The purpose of the present study was to compare the long-term outcomes of open donor nephrectomy (ODN) versus laparoscopic donor nephrectomy (LDN) using meta-analytical techniques. The Web of Science, PubMed and Cochrane Library databases were searched for relevant articles published between 1980 and January 20, 2020. Lists of reference articles retrieved in primary searches were manually screened for potentially eligible studies. Outcome parameters were explored using Review Manager version 5.3. The evaluated outcomes included donor serum creatinine levels, incidence of hypertension or proteinuria at 1 year postoperative, donor health-related quality of life, donation attitude, and graft survival. Thirteen of the 111 articles fulfilled the inclusion criteria.
The LDN group demonstrated similar 1 year outcomes compared with ODN with respect to serum creatinine levels (weighted mean difference [WMD] -0.02 mg/dL [95% confidence interval (CI) -0.18-0.13]; P=0.77); hypertension (odds ratio [OR] 1.21 [95% CI 0.48-3.08]; P=0.68); proteinuria (OR 0.28 [95% CI 0.02-3.11]; P=0.30); and donation attitude (OR 4.26 [95% CI 0.06-298.27]; P=0.50). Donor health-related quality of life and recipient graft survival were also not significantly different between the groups analyzed. Thus, the long-term outcomes between LDN and ODN for living donor kidney transplantation are similar. Answer: Yes, laparoscopic donor nephrectomy appears to be here to stay. The procedure has become increasingly popular due to its minimally invasive nature, which is associated with several benefits over traditional open donor nephrectomy. The literature suggests that laparoscopic donor nephrectomy offers advantages such as reduced surgical morbidity, shorter hospital stays, quicker recovery times, and comparable long-term outcomes to open donor nephrectomy (PUBMED:34499549, PUBMED:28101451, PUBMED:36807636, PUBMED:28247669, PUBMED:33194009). For instance, studies have shown that laparoscopic living donor nephrectomy (LLDN) has a safety profile comparable to open donor nephrectomy (OLDN) with decreased length of stay for the donor (PUBMED:28101451). Enhanced Recovery After Surgery (ERAS) protocols, which often involve preoperative carbohydrate intake, have been applied to laparoscopic living donor nephrectomy and have been shown to result in shorter lengths of hospital stay and faster postoperative recovery compared to historical controls who underwent preoperative fasting (PUBMED:36549407). Furthermore, laparoscopic donor nephrectomy has been demonstrated to be a safe and effective procedure with minimal morbidity and no mortality in a series of cases (PUBMED:36807636). The technique has also been adapted to include hand-assisted approaches, which have been shown to be safe and effective, providing all the advantages of minimally invasive surgery to donors who are undergoing the procedure for altruistic reasons (PUBMED:37453116). Additionally, a meta-analysis comparing long-term outcomes of laparoscopic versus open donor nephrectomy found that the two approaches had similar outcomes in terms of donor serum creatinine levels, incidence of hypertension or proteinuria at 1 year postoperative, donor health-related quality of life, donation attitude, and graft survival (PUBMED:33194009). In conclusion, the evidence supports the continued use and potential growth of laparoscopic donor nephrectomy as a standard practice for kidney donation due to its favorable outcomes and the benefits it provides to donors.
Instruction: Physician impairment: is it relevant to academic psychiatry? Abstracts: abstract_id: PUBMED:18270279 Physician impairment: is it relevant to academic psychiatry? Objective: This article examines the relevance of physician impairment to the discipline of academic psychiatry. Method: The author reviewed the scientific literature and the proceedings of previous International Conferences on Physician Health, and held discussions with experts in the physician health movement, department chairs, program directors, and residents. Results: Psychiatric illness and impairment in physicians impact academic psychiatry in several ways. Mental illnesses in physicians are being studied by some researchers, but the subject requires more scholarly attention. Training directors are interested in resident well-being and illness and how to reach out to symptomatic residents in a more timely way. Leaders in psychiatry are eager to learn the first steps in identifying colleagues at risk and the route to assessment and care. They are especially concerned about disruptive behavior in the workplace, including harassment and boundary transgressions in doctor-patient and supervisor-supervisee relationships. Academic psychiatrists wish to be more responsive to nonpsychiatrists appealing to them for guidance with impaired members of their departments. Conclusions: Physician impairment is an emerging field of study and interest to psychiatrists in academic settings. abstract_id: PUBMED:25990762 Assisting Undergraduate Physician Assistant Training in Psychiatry: The Role of Academic Psychiatry Departments. Physician assistants (PAs) are medical professionals who practice medicine with the supervision of a physician through delegated autonomy. PA school accreditation standards provide limited guidance for training PAs in psychiatry. As a result, PA students may receive inconsistent and possibly inadequate exposure to psychiatry. Providing broad and in-depth exposure to the field of psychiatry is important to attract PA students to pursue careers in psychiatry and provide a possible solution to the shortage of psychiatrists nationwide. Additionally, this level of exposure will prepare PA students who pursue careers in other fields of medicine to recognize and address their patients' psychiatric symptoms in an appropriate manner. This training can be provided by an academic department of psychiatry invested in the education of PA students. We describe a training model implemented at our university that emphasizes psychiatrist involvement in the preclinical year of PA school and full integration of PA students into the medical student psychiatry clerkship during the clinical years. The benefits and challenges to implementing this model are discussed as well. abstract_id: PUBMED:31705505 Theoretical Psychiatry as a Link Between Academic and Clinical Psychiatry. From its beginning, psychiatry has always been characterized by different orientations in the form of "schools", "fields", "branches", etc., some of which were dominant during different periods. Today psychiatry seems to be in the cul-de-sac of a serious scientific crisis and in the midst of paradigm clashes. Academic psychiatry has been increasingly criticized as being more or less irrelevant to clinical practice. The new field called theoretical psychiatry is fundamental for further scientific and professional maturation of psychiatry in the twenty-first century.
Theoretical psychiatry pursues knowledge and understanding of mental disorders, and it does so through the formulation, testing, and evaluation of theories. The digital revolution is significantly changing all fields of science and medicine, including psychiatry, altering the regimes and methods of knowledge production. The big data approach promises to provide the scientific holy grail in psychiatry, a single overarching theory or multiple theories and models that unify all the scientific disciplines. The brain is the place where biological, psychological, social, and spiritual mechanisms meet each other and interact. Theoretical psychiatry should give all psychiatrists a common language, build bridges over academic gaps, and creatively export insights across disciplinary borders. abstract_id: PUBMED:16908606 The tripartite mission of an academic psychiatry department and the roles of the chair. Within academic medical centers, psychiatry, pediatrics, and certain other specialties of internal medicine carry a disproportionate responsibility for teaching but are least able to support teaching and research through clinical income. This article reviews the educational, research and clinical missions of an academic psychiatry department and the major challenges facing chairs as they seek to lead their departments. Though the leadership role that the chair plays in psychiatric education, research, and clinical services can be tremendously satisfying, it is also true that being a chair in these times has much in common with "middle management" in other corporate structures. abstract_id: PUBMED:11793568 Child and adolescent psychiatry. Exploring the option to be a manager in an academic medical center. Traditionally, few child psychiatrists have opted for careers as senior managers in academic medical centers, preferring to emphasize their clinical role and hoping their clinical work will be valued and reimbursed despite economic pressures. Yet the career of a hospital-wide manager in an academic medical center provides opportunities to promote an individual's career, the mission of the whole organization, and child psychiatry itself. Drawing on some of the issues discussed in the general literature on the clinician-executive, this article outlines the basic qualities and skills required to be a successful child psychiatrist executive and delineates the essential steps and potential pitfalls in building such a career. This delineation looks at the developmental stage of both the psychiatrist and the organization involved. The authors emphasize the importance of child psychiatrists assuming active roles within academic medical centers to protect and enhance the field of child psychiatry. abstract_id: PUBMED:29942717 Influences for Gender Disparity in Academic Psychiatry in the United States. Introduction Academic undertakings, including research, lead to career progression. However, the career paths of female psychiatrists appear to diverge significantly from those of their male counterparts. This article reviews the pervasiveness of the trend of women being less likely to pursue active research in psychiatry. In addition, we examine the correlation between academic rank and research productivity. Methods We searched the American Medical Association's (AMA) Fellowship and Residency Electronic Interactive Database (FREIDA) to identify training programs for psychiatry. A total of 5234 psychiatrists met our inclusion criteria. The gender, academic rank, research work, and h-index of faculty members were compared.
The ratio of women reaching senior ranks as compared to men was also calculated. The Scopus database was used to determine the h-index of the individuals included in this study. Data analysis was done with SPSS 22.0 Release 2013 (IBM SPSS Statistics for Windows, IBM, Armonk, NY, USA). Kruskal-Wallis and Mann-Whitney U tests were used where required, with the P-value set at less than 0.05. Results In our study sample, 2181 (42%) of the psychiatrists were women. However, according to the information obtained from the websites of 23 programs, few women reached higher ranks, full professorship, or positions such as the chairperson of a program, and only 9% of women achieved the designation of chairperson of the psychiatry department, with men representing the other 91%. Higher academic rank correlated with higher h-index. A statistically-significant difference between the genders in terms of h-index was found for the assistant professor rank as well. However, this difference was not observed at the level of an associate professor. Conclusions Despite adequate representation of women in the academic workforce in psychiatry, there appears to be a discrepancy in the research productivity of the two genders. This study highlights the need for targeted interventions to address gender disparities in academic psychiatry. abstract_id: PUBMED:33228828 Gender equality in academic psychiatry in the UK in 2019. Aims And Method: To investigate whether gender balance in academic psychiatry in the UK has improved since a 2005 initiative to encourage career progression for female academics in UK universities. We surveyed the gender of academic psychiatrists across the UK and compared our findings with our previous 2003 London-wide survey and with the Royal College of Psychiatrists' 2001 workforce census. Results: The percentage of women in academic psychiatry posts in the UK more than doubled, from 20% in 2001 to 40% in 2019, with increases at senior lecturer (from 25 to 50%), reader/associate professor (from 29 to 48%) and professor level (from 11 to 21%). Outside London, men occupy 72% of all posts and 89% of professorial posts. Within London, men occupy 45% of all posts and 74% of professorial posts. Clinical Implications: The representation of women in academic psychiatry has improved but men continue to dominate at professorial level. Gender equality appears worse outside London. The situation is exacerbated by the diminishing availability of posts across the UK. abstract_id: PUBMED:18020190 Co-opting psychiatry: the alliance between academic psychiatry and the pharmaceutical industry. The editorial presents the arguments that an alliance between academic psychiatry and the pharmaceutical industry is harmful through a critical review of the academic literature and media coverage of activities of the pharmaceutical industry. The industry and the psychiatric profession both gain advantages from promoting biomedical models of psychiatric disturbance and pharmacological treatment. This confluence of interests has lead to the exaggeration of the efficacy of psychiatric drugs and neglect of their adverse effects and has distorted psychiatric knowledge and practice. Academic psychiatry has helped the industry to colonise more and more areas of modern life in order to expand the market for psychotropic drugs. Persuading people to understand their problems as biological deficiencies obscures the social origin and context of distress and prevents people from seeking social or political solutions. 
Psychiatry has the power to challenge the dominance of the pharmaceutical industry and should put its efforts into developing alternatives to routine drug treatment. Psychiatry needs to disengage from the industry if it wants to make genuine advances in understanding psychiatric disorder and help reverse the harmful social consequences of the widening medicalisation of human experience. abstract_id: PUBMED:26307364 Demographic and Personal Characteristics of Male and Female Chairs in Academic Psychiatry. Objective: Despite the strong representation of female psychiatrists in residency and early-career positions, the number of female faculty sharply decreases in tenured or executive leadership positions. Why there exists a marked change in representation at the level of senior leadership within academic psychiatry is unclear. The authors investigated the causative factors contributing to this observation and gathered information about the personal characteristics of women in executive leadership roles. Methods: The authors surveyed psychiatry chairs at academic institutions. They identified all female chairs and randomly selected a group of male chairs to serve as a control group. The survey assessed perceived barriers, strengths, and weaknesses and differences in demographics and leadership styles between female and male chairs. Results: Ten percent of psychiatry chairs were female. Male chairs were more likely than female chairs to head large departments (p = 0.02, confidence interval (CI) -17.1-69.1) and had a higher H-index (p = 0.001, CI 6.6-37.2). Female chairs were more likely than male chairs to perceive barriers in their career development (p = 0.01, CI 0.7-2.2), citing little or no mentorship (p = 0.04), gender discrimination (p = 0.0001), and family obligations (p = 0.001) more often. Conclusion: Academic institutions must incorporate programs to decrease barriers and work to achieve equitable representation of women in upper-level leadership positions. Moreover, academic medicine must evolve to cultivate a family-friendly environment that successfully supports both genders. abstract_id: PUBMED:15715778 Partnerships between academic psychiatry and the pharmaceutical industry: the Lilly MAP Initiative. Objective: To examine the relationship between academic psychiatry and the pharmaceutical industry, focusing on a partnership between academics in Melbourne and Eli Lilly and Company (Lilly Melbourne Academic Psychiatry). Conclusions: Relationships between the pharmaceutical industry and the medical profession (including psychiatry) are under scrutiny as never before. Despite the complex nature of the relationship, the present paper argues that partnerships with external corporations such as pharmaceutical companies are of increasing importance for academic departments of psychiatry and research institutes, in environments in which core funding for tertiary institutes is being reduced. The partnership between Melbourne psychiatric academics and Eli Lilly and Company shows that benefits accrue to both parties, and suggests that there is a worthwhile place for other industry-academic collaborations of a similar nature in Australia. Answer: Physician impairment is indeed relevant to academic psychiatry. The article "Physician impairment: is it relevant to academic psychiatry?" (PUBMED:18270279) discusses the impact of psychiatric illness and impairment in physicians on the field of academic psychiatry.
It highlights that mental illnesses in physicians are being studied by researchers, but more scholarly attention is needed. The article also notes that training directors are interested in the well-being of residents and how to reach out to symptomatic residents in a timely manner. Additionally, leaders in psychiatry are looking for ways to identify colleagues at risk and the route to assessment and care, with particular concerns about disruptive behavior in the workplace. This indicates that physician impairment is an emerging field of study and interest within academic psychiatry settings.
Instruction: Do Italian women prefer cesarean section? Abstracts: abstract_id: PUBMED:23530472 Do Italian women prefer cesarean section? Results from a survey on mode of delivery preferences. Background: About 20 million cesareans occur each year in the world and rates have steadily increased in almost all middle- and high-income countries over the last decades. Maternal request is often argued as one of the key forces driving this increase. Italy has the highest cesarean rate of Europe, yet there are no national surveys on the views of Italian women about their preferences on route of delivery. This study aimed to assess Italian women's preference for mode of delivery, as well as reasons and factors associated with this preference, in a nationally representative sample of women. Methods: This cross sectional survey was conducted between December 2010-March 2011. An anonymous structured questionnaire asked participants what was their preferred mode of delivery and explored the reasons for this preference by assessing their agreement to a series of statements. Participants were also asked to what extent their preference was influenced by a series of possible sources. The 1st phase of the study was carried out among readers of a popular Italian women's magazine (Io Donna). In a 2nd phase, the study was complemented by a structured telephone interview. Results: A total of 1000 Italian women participated in the survey and 80% declared they would prefer to deliver vaginally if they could opt. The preference for vaginal delivery was significantly higher among older (84.7%), more educated (87.6%), multiparous women (82.3%) and especially among those without any previous cesareans (94.2%). The main reasons for preferring a vaginal delivery were not wanting to be separated from the baby during the first hours of life, a shorter hospital stay and a faster postpartum recovery. The main reasons for preferring a cesarean were fear of pain, convenience to schedule the delivery and because it was perceived as being less traumatic for the baby. The source which most influenced the preference of these Italian women was their obstetrician, followed by friends or relatives. Conclusion: Four in five Italian women would prefer to deliver vaginally if they could opt. Factors associated with a higher preference for cesarean delivery were youth, nulliparity, lower education and a previous cesarean. abstract_id: PUBMED:24639760 Why do some pregnant women prefer cesarean delivery in first pregnancy? Background: The increasing number of cesarean section is a great concern in many countries. In Iran cesarean section rate has been steadily rising from 35% in 2000 to 40% in 2005. Preferences for cesarean are often associated with some factors. Objective: To investigate factors associated with preference for cesarean delivery, with special emphasis on pregnant women's preferences in first pregnancy in Neyshabur (Northeast of Iran). Materials And Methods: In this cross-sectional study, written questionnaires were completed via face to face interview with 797 pregnant women in first pregnancy. Socio-demographic data, preference toward mode of delivery and factors associated with it were assessed by applying questionnaire. Univariate and multivariate analysis were performed to identify the independent variables associated with preference for cesarean delivery. Results: In this study observed that 18.6% of pregnant women preferred caesarean delivery in first pregnancy. 
The mean age of pregnant women who preferred cesarean delivery was higher than that of women who preferred vaginal delivery, and this difference was statistically significant (p=0.006). There was a statistically significant relation between preference for cesarean delivery and the following variables: educational level (p<0.001), gestational age (p=0.003), spouse's age of pregnant women (p=0.001), physician's advice (p<0.001), and fear of delivery (p<0.001). Conclusion: The results of this study show that the majority of pregnant women do not prefer caesarean delivery to vaginal delivery. Nevertheless, the preference rate for cesarean delivery exceeded the 15% threshold suggested by the WHO, and the most important factors in pregnant women's preference for cesarean delivery were fear of delivery and physician's advice. abstract_id: PUBMED:10869787 The changing HIV epidemic in Italian pregnant women. Objective: To describe changes in the characteristics of HIV-pregnant women in Italy and the impact of strategies for prevention of HIV vertical transmission. Study Design: Since 1985, HIV-infected women and their children have been followed in 23 European centres in the European Collaborative Study (ECS), according to a standard protocol. Eight Italian obstetric units participating in the ECS enrolled 815 patients. Results: Overall use of zidovudine to reduce HIV vertical transmission has increased significantly since 1994, and between 1995 and 1997, 57% of Italian women were treated. However, 27% of babies received the infant component of the 076 regimen. Over the years, age at delivery has increased and their CD4 count at delivery decreased, most likely reflecting heterosexually infected women with a longer duration of infection. The increasing rate of elective caesarean section (42%) is not related to maternal, foetal or obstetrical indications, but rather reflects its use as an intervention to reduce HIV vertical transmission. Conclusions: The identification of HIV-infected women during pregnancy or before delivery ensures the appropriate management of the woman and her child, and clinicians should be aware of the increasing number of women with heterosexual acquisition of HIV-infection who may be less easily identified. abstract_id: PUBMED:20630634 Behind the myth--few women prefer caesarean section in the absence of medical or obstetrical factors. Objective: to describe the prevalence of women's preference for caesarean section as expressed in mid pregnancy, late pregnancy and one year post partum. An additional aim was to identify associated factors and investigate reasons for the preference. Design: mixed methods. Data were collected from 2007 to 2008 through questionnaires distributed to a Swedish regional cohort of women. The survey was part of a longitudinal study of women's attitudes and beliefs related to childbirth. One open question regarding the reasons for the preferred mode of birth was analysed using content analysis. Setting: three hospitals in the county of Västernorrland in the middle of Sweden. Participants: 1506 women were recruited at the routine ultrasound screening during weeks 17 to 19 of their pregnancy. Findings: a preference for caesarean section was stated by 7.6% of women during mid pregnancy and by 7.0% in late pregnancy. One year post partum 9.8% of the women stated that they would prefer a caesarean section if they were to have another baby. This was related to their birth experience. There were more multiparous women who wished for a caesarean section.
Associated factors irrespective of parity were fear of giving birth and a 'strongly disagree' response to the statement regarding that the preferred birth should be as natural as possible. Among multiparous women the strongest predictors were previous caesarean sections, particularly those that were elective, and a previous negative birth experience. Women's comments on their preferred mode of birth revealed five categories: women described caesarean section as their only option relating to obstetrical and/or medical factors; several women stated ambivalent feelings and almost as many described their previous birthing experiences as a reason to prefer a caesarean birth; childbirth-related fear and caesarean section as a safe option were the remaining categories. Key Conclusions: rising caesarean section rates seem to be related to factors other than women's preferences. Ambivalence towards a way of giving birth is common during pregnancy. This should be of concern for midwives and obstetricians during antenatal care. Information and counselling should be frequent and comprehensive when a discussion on caesarean section is initiated by the pregnant woman. A negative birth experience is related to a future preference for caesarean section and this should be considered by caregivers providing intrapartum care. abstract_id: PUBMED:24093064 Gestational diabetes mellitus: screening and outcomes in southern italian pregnant women. Recent Italian guidelines exclude women &lt;35 years old, without risk factors for gestational diabetes mellitus (GDM), from screening for GDM. To determine the effectiveness of these measures with respect to the International Association of the Diabetes and Pregnancy Study Groups (IADPSG) criteria, we evaluated 2,448 pregnant women retrospectively enrolled in Calabria, southern Italy. GDM was diagnosed following the IADPSG 2010 criteria. Among 538 women &lt;35 years old, without risk factors, who would have not been tested according to the Italian guidelines, we diagnosed GDM in 171 (31.8%) pregnants (7.0% of total pregnants). Diagnosis was made at baseline (55.6%), 1 hour (39.8%), or 2 hours (4.7%) during OGTT. Despite of appropriate treatment, GDM represented a risk factor for cesarean section, polyhydramnios, increased birth weight, admission to neonatal intensive care units, and large for gestational age. These outcomes were similar to those observed in GDM women at high risk for GDM. In conclusion, Italian recommendations failed to identify 7.0% of women with GDM, when compared to IADPSG criteria. The risk for adverse hyperglycaemic-related outcomes is similar in low-risk and high-risk pregnants with GDM. To limit costs of GDM screening, our data suggest to restrict OGTT to two steps (baseline and 1 hour). abstract_id: PUBMED:12752165 Do Italian mothers prefer cesarean delivery? Background: In Italy the proportion of births by cesarean section rose from 11.2 percent in 1980 to 27.9 percent in 1996 and 33.2 percent in 2000. The aim of this study was to identify factors, other than medical and obstetrical risk, that may influence the method of delivery and to analyze mother's preference for vaginal versus cesarean delivery among women after the birth of their first baby in university hospitals in Italy. Methods: Primiparous women were selected from 100 consecutive deliveries in 23 university hospitals in 1999. To determine antenatal, delivery, and postnatal history, and women's preference for method of delivery, trained health personnel interviewed 1986 women. 
Results: Of the 1986 women who were interviewed (response rate 95%), 1023 primiparas comprised the study sample. The cesarean section rate was 36 percent. Ninety-one percent of the women who delivered spontaneously and 73 percent of those who underwent a cesarean section would have preferred a vaginal delivery. Conclusions: Most of the interviewed women in this study preferred, or were satisfied with, vaginal birth. abstract_id: PUBMED:33951101 Do women prefer caesarean sections? A qualitative evidence synthesis of their views and experiences. Background: Caesarean sections (CS) continue to increase worldwide. Multiple and complex factors are contributing to the increase, including non-clinical factors related to individual women, families and their interactions with health providers. This global qualitative evidence synthesis explores women's preferences for mode of birth and factors underlying preferences for CS. Methods: Systematic database searches (MEDLINE, EMBASE, CINAHL, PsycINFO) were conducted in December 2016 and updated in May 2019 and February 2021. Studies conducted across all resource settings were eligible for inclusion, except those from China and Taiwan which have been reported in a companion publication. Phenomena of interest were opinions, views and perspectives of women regarding preferences for mode of birth, attributes of CS, societal and cultural beliefs about modes of birth, and right to choose mode of birth. Thematic synthesis of data was conducted. Confidence in findings was assessed using GRADE-CERQual. Results: We included 52 studies, from 28 countries, encompassing the views and perspectives of pregnant women, non-pregnant women, women with previous CS, postpartum women, and women's partners. Most of the studies were conducted in high-income countries and published between 2011 and 2021. Factors underlying women preferences for CS had to do mainly with strong fear of pain and injuries to the mother and child during labour or birth (High confidence), uncertainty regarding vaginal birth (High confidence), and positive views or perceived advantages of CS (High confidence). Women who preferred CS expressed resoluteness about it, but there were also many women who had a clear preference for vaginal birth and those who even developed strategies to keep their birth plans in environments that were not supportive of vaginal births (High confidence). The findings also identified that social, cultural and personal factors as well as attributes related to health systems impact on the reasons underlying women preferences for various modes of birth (High confidence). Conclusions: A wide variety of factors underlie women's preferences for CS in the absence of medical indications. Major factors contributing to perceptions of CS as preferable include fear of pain, uncertainty with vaginal birth and positive views on CS. Interventions need to address these factors to reduce unnecessary CS. abstract_id: PUBMED:17014679 Chilean women's preferences regarding mode of delivery: which do they prefer and why? Objective: Caesarean section rates in Chile are reported to be as high as 60% in some populations. The purpose of this study was to determine pregnant Chilean women's preferences towards mode of delivery. Design: Interviewer-administered cross-sectional survey. Setting: Prenatal clinics in Santiago, Chile. Population Pregnant women in Santiago, Chile. 
Methods: Of 180 women completing the questionnaire, 90 were interviewed at a private clinic (caesarean delivery rate 60%) and 90 were interviewed at a public clinic (caesarean delivery rate 22%). Data collected included demographics, preferred mode of delivery, and women's attitudes towards vaginal and caesarean deliveries. Main Outcome Measures: Mode of delivery preferences, perceptions of mode of delivery measured on a 1-7 Likert scale. Results: The majority of women (77.8%) preferred vaginal delivery, 9.4% preferred caesarean section, and 12.8% had no preference. There was no statistical difference in preference between the public clinic (11% preferred caesarean) and the private clinic (8% preferred caesarean, P= 0.74). Overall, women preferring caesarean birth were slightly older than other groups (31.6 years, versus 28.4 years for women who preferred vaginal and 27.3 years for women who had no preference, P= 0.05), but there were otherwise no differences in parity, income, or education. On a scale of 1-7, women preferring caesarean birth rated vaginal birth as more painful, while women preferring vaginal birth rated it as less painful (5.8 versus 3.7, P= 0.003). Whether vaginal or caesarean, each group felt that their preferred mode of delivery was safer for their baby (P < 0.001). Conclusions: Chilean women do not prefer caesarean section to vaginal delivery, even in a practice setting where caesarean delivery is more prevalent. Thus, women's preferences are unlikely to be the most significant factor driving the high caesarean rates in Chile. abstract_id: PUBMED:19652752 Preference of Ghanaian women for vaginal or caesarean delivery postpartum. Objective: To determine Ghanaian women's preferred mode of delivery and opinion of caesarean section after caesarean delivery. Design: A cross-sectional study of women who recently delivered by caesarean section prior to hospital discharge. Setting: Two teaching hospitals in Ghana: Komfo Anokye Teaching Hospital, Kumasi, and Korle-Bu Teaching Hospital, Accra. Respondents: 154 patients who delivered by caesarean section between the 1st and 31st of August 2003 were interviewed. Of the 154 initiating the interview, 151 completed, and 145 had complete data. Main Outcome Measures: Delivery preference and general opinion of caesarean delivery. Results: The majority of women interviewed indicated that they preferred vaginal delivery (55%). Despite preference for vaginal delivery among these women who had delivered by caesarean section, the majority had a generally positive opinion of caesarean section (53%). Conclusion: Ghanaian women with experience of caesarean delivery prefer vaginal delivery. abstract_id: PUBMED:20955097 Obstetric hospitalizations among Italian women, regular and irregular immigrants in North-Eastern Italy. Objective: Italy has become an important host country for economic immigrants. The study is aimed at providing a descriptive analysis of obstetric hospitalizations among Italian and immigrant women in North-Eastern Italy. Design: Population-based registry descriptive study. Setting: Veneto Region, Italy. Methods: All obstetric hospitalizations in 2006-2007 were extracted from the regional archive of hospital discharge records (n = 144,698). Discharges for vaginal delivery, cesarean section, threatened abortion and other antepartum diagnoses, miscarriages, and induced abortions were identified among residents with Italian or foreign citizenship, and irregular immigrants.
Hospitalization rates for the above diagnostic categories were computed for Italian and foreign Veneto residents. Main Outcome Measures: Delivery rates, proportion of cesarean sections, hospitalization rates for antepartum hospitalizations, miscarriage, induced abortion, and hospitalization rate ratios of immigrants versus Italian women. Results: Among Italian women, regular and irregular immigrants, the percentages of teenage deliveries were 0.7, 2.9, and 8.4%; the ratios of miscarriages to deliveries were 0.16, 0.15 and 0.35; the ratios of induced abortions to deliveries were 0.13, 0.24 and 0.81, respectively. Regular immigrants accounted for 10% of the population aged 15-49 and for 20% of deliveries. The age-related increase in miscarriage risk was steeper among regular immigrants. The induced abortions to deliveries ratio peaked among Italians aged <25 and regular immigrants aged ≥35 years. Forty percent of Italians and 30% of regular immigrants sought care outside their nearest hospitals. Conclusions: Wide differences in reproductive behavior, health status, and patterns in the access to health services exist between Italians, regular and irregular immigrants even though they represent three connected populations. Answer: Based on the provided abstracts, the majority of Italian women do not prefer cesarean section over vaginal delivery. In a survey conducted among 1000 Italian women, 80% declared they would prefer to deliver vaginally if they could opt, with a higher preference for vaginal delivery among older, more educated, multiparous women, and especially among those without any previous cesareans (PUBMED:23530472). Another study involving primiparous women in Italian university hospitals found that 91% of women who delivered spontaneously and 73% of those who underwent a cesarean section would have preferred a vaginal delivery (PUBMED:12752165). These findings suggest that most Italian women prefer, or are satisfied with, vaginal birth over cesarean section. However, certain factors are associated with a higher preference for cesarean delivery among Italian women, including youth, nulliparity, lower education, and a previous cesarean (PUBMED:23530472). Fear of pain, convenience to schedule the delivery, and perception of cesarean as being less traumatic for the baby were among the main reasons for preferring a cesarean (PUBMED:23530472). The influence of obstetricians, as well as friends or relatives, also played a significant role in shaping these preferences (PUBMED:23530472). In conclusion, while there are some factors that influence a subset of Italian women to prefer cesarean sections, the overall preference among Italian women leans towards vaginal delivery.
Instruction: Comparison of size classification of primary spontaneous pneumothorax by three international guidelines: a case for international consensus? Abstracts: abstract_id: PUBMED:18789858 Comparison of size classification of primary spontaneous pneumothorax by three international guidelines: a case for international consensus? Objective: The aim of this study was to compare classification of pneumothoraces into size groups for treatment using the British Thoracic Society [BTS], American College of Chest Physicians [ACCP] and Belgian Society of Pulmonology [BSP] guidelines and the range of pneumothorax sizes in each group calculated using the volumetrically-derived Collins' method. Method: This was a retrospective cohort study. Participants were patients with primary spontaneous pneumothorax [PSP] attending emergency departments of two teaching hospitals between 1996 and 2005. Participants were identified from a pre-existing database. Data collected included demographics, side of PSP and interpleural distances for size classification based on BTS, ACCP, BSP and Collins' method requirements measured on inspiratory X-rays. The outcome of interest was comparison of classification into size groups according to each guideline. Analysis was by descriptive statistics, overall agreement and Kappa analysis for agreement between guideline pairs. Results: Forty-nine episodes [44 patients] were studied. Median age was 22 years; 66% of patients were male. Median PSP size [Collins' method; inspiratory films] was 24%; range 5-100%. Based on inspiratory films, the BTS guideline classified 10% of PSP as large compared with 47% by the BSP guideline and 49% by the ACCP guideline. The three guidelines agreed in their classification in only 47% of cases. Conclusion: Size classification of PSP based on available treatment guideline definitions shows poor agreement. This goes some way to explain management variation between regions and limits comparability of reported outcomes. There is a strong case for international agreement in size classification/estimation in order to facilitate high-quality studies into optimal management approaches. abstract_id: PUBMED:28224348 A need to reconsider guidelines on management of primary spontaneous pneumothorax? Background: The key guidelines in the management of primary spontaneous pneumothorax (PSP) include the 2010 British Thoracic Society (BTS) Pleural Disease guideline and 2001 American College of Chest Physicians (ACCP) Consensus Statement. Current recommendations are dependent on radiographic measures which differ between these two guidelines. The aim of this study is to compare size classification of PSP cases, according to BTS and ACCP guidelines, and to evaluate guideline compliance. Findings: We conducted a retrospective evaluation of all PSP episodes presenting to St Thomas' Hospital, London, between February 2013 and December 2014. Data was recorded from review of chest X-rays and patient records. Eighty-seven episodes of PSP in 72 patients were identified (median age 25 years, IQR 22-32.25). Classification of "large" and "small" showed the greatest disparity in those managed conservatively (12/27, 44%) or with aspiration only (11/23, 48%). In this UK study, BTS guidelines were followed in 70% of episodes with adherence to ACCP guidelines in 32% of episodes. Conclusions: There is a poor agreement in size classification between BTS and ACCP guidelines, resulting in conflicting recommendations for management of PSP. 
Robust clinical trial evidence is required to achieve international consensus on the management of PSP. abstract_id: PUBMED:19373088 Treatment of primary spontaneous pneumothorax. Purpose Of Review: Initial treatment of primary spontaneous pneumothorax poses dilemmas for clinicians as size classification systems and current treatment guidelines differ in their approaches, and underlying evidence is weak. The purpose of this review is to summarize recent evidence and highlight remaining evidence gaps. Recent Findings: Recent studies confirm significant variation in practice. New evidence suggests that conservative management may be a viable option in a wider range of patients than previously thought (including some large pneumothoraces), with a reported clinical success rate of 79%. New data regarding aspiration report similar treatment success rates and shorter hospitalizations than chest tube drainage; however, methodological issues challenge these findings. There are conflicting data about the prognostic implications of computed tomography-identified pulmonary dystrophia. Summary: Initial management of primary spontaneous pneumothorax is controversial, and there remains little high-quality evidence to guide decision-making. International differences in classification systems and management recommendations make meaningful pooling of clinical trials difficult. This makes a strong case for international agreement about study methodology in order to facilitate meaningful comparisons. Recent evidence suggests that conservative management may be a viable option and argues for its place in much-needed randomized trials. abstract_id: PUBMED:35305198 Which pneumothorax volume/size measurement method can best predict surgical indication in primary spontaneous pneumothorax patients? A comparison of six different methods. Objectives: We determined whether patients requiring surgery could be identified using pneumothorax volume/size in the first episode of primary spontaneous pneumothorax (PSP) and confirmed which of six different measurement methods of pneumothorax volume/size best predicted surgical indication (prolonged air leak, PAL, or ipsilateral recurrence). Methods: Between 2013 and 2017, 282 patients with PSP were analysed. The volume of the pneumothorax (%) was calculated using volumetric methods (Collins, Rhea, Kircher, Light), while its size (large or small) was determined using the British Thoracic Society (BTS) and American College of Chest Physicians (ACCP) guidelines for the first PSP episode. The optimal cut-off points of pneumothorax volume in patients requiring surgery were based on the most sensitive and specific scores, calculated using receiver operating characteristic analysis. Results: Surgery was indicated in 162 patients (57.4%) because of PAL (n = 92) or ipsilateral recurrence (n = 70). Kircher's method had a better area under the curve (AUC) value than the others (AUC = 0.731, 95% confidence interval [CI] 0.676-0.782). Comparing the calculated cut-off points for each volumetric method and the known thresholds from the BTS/ACCP, the best predictor for surgical indications (for both PAL and ipsilateral recurrence) was Kircher's method (odds ratio = 5.636, 95% CI 3.338-9.516, p < 0.001). The four volumetric methods were significantly correlated with pneumothorax volume (rho ≥ 0.8, p < 0.001, for all comparisons). Conclusions: Pneumothorax volume/size calculated in the first PSP episode may help predict surgical candidates, with Kircher's method being the best predictor.
There was a correlation between all the volumetric methods of estimating pneumothorax volume. abstract_id: PUBMED:28096829 Is it possible to standardize the treatment of primary spontaneous pneumothorax? Part 1: etiology, symptoms, diagnostics, minimally invasive treatment. The authors of this report present the history of primary spontaneous pneumothorax (PSP) treatment, its etiology, clinical symptoms, and diagnostic methodology. Further, they discuss minimally invasive methods of treating PSP such as thoracentesis and chemical pleurodesis. They discuss the pros and cons of each method, emphasizing that, according to the international recommendations, they should be used as the first line of treatment for PSP. abstract_id: PUBMED:32642128 Interpleural distance predicts persistent air leak after initial primary spontaneous pneumothorax. Background: Initial surgical intervention for a first episode of primary spontaneous pneumothorax (PSP) is controversial. However, if air leak persists after initial drainage, surgical treatment is recommended. Therefore, we investigated risk factors for persistent air leak (PAL) in patients with a first episode of PSP. Methods: We retrospectively analyzed 122 patients with a first episode of PSP between January 2011 and April 2019. PAL was defined as air leak lasting 72 hours or longer. Early admission was defined as hospital admission within 24 hours of symptom onset. Three methods were used to estimate pneumothorax size on chest X-rays taken at admission: interpleural distance, apex-cupola distance, and Light index. Results: Among 122 patients, 55 developed PAL (PAL group) and 67 did not (non-PAL group). The size of pneumothorax was significantly larger in the PAL group than in the non-PAL group in all three methods of assessment (P<0.001). Early hospital admission was significantly associated with PAL (P=0.026). Logistic regression analysis revealed that the odds ratio for PAL per unit increase in pneumothorax size evaluated with the interpleural distance was 1.304 (P<0.001). Multivariate logistic regression analysis showed that interpleural distance at the hilum and early admission (P<0.001, P=0.008, respectively) were independent predictors of PAL in patients with a first episode of PSP. Conclusions: In our study, we demonstrated that the interpleural distance at the hilum is a simple and effective predictor of PAL in patients with a first episode of PSP. Our data may help decision-making for initial surgical treatment in these patients. abstract_id: PUBMED:31858124 Epidemiology and management of primary spontaneous pneumothorax: a systematic review. Primary spontaneous pneumothorax (PSP) is one of the most common thoracic diseases affecting adolescents and young adults. Despite the high incidence of PSP and the availability of several international guidelines for its diagnosis and treatment, a significant behavioural heterogeneity can be found among those management recommendations. A working group of the Italian Society of Thoracic Surgery summarized the best evidence available on PSP management with the methodological tool of a systematic review assessing the quality of previously published guidelines with the Appraisal of Guidelines for Research and Evaluation (AGREE) II.
Concerning PSP physiopathology, the literature seems to be equally divided between those who support and those who dispute the hypothesis of a direct correlation between changes in atmospheric pressure and temperature and the incidence of PSP, so it is not currently possible to confirm or reject this theory with reasonable certainty. Regarding the choice between conservative treatment and chest drainage in the first episode, there is no evidence on whether one option is superior to the other. Video-assisted thoracic surgery represents the most common and preferred surgical approach. A primary surgical approach to patients with their first PSP seems to guarantee a lower recurrence rate than that of a primary approach consisting of chest drain positioning; conversely, the percentage of futile surgical interventions that this aggressive approach would entail must be carefully evaluated. Surgical pleurodesis is recommended and frequently performed to limit recurrences; talc poudrage offers efficient pleurodesis, but a considerable number of surgeons are concerned about administering this inert material to young patients. Clinical Trial Registration Number: International Prospective Register of Systematic Reviews (PROSPERO): CRD42018084247. abstract_id: PUBMED:34774177 Pneumothorax: Classification and Etiology. Pneumothorax is a common problem worldwide. Pneumothorax develops secondary to diverse aetiologies; in many cases, there may be no recognizable lung abnormality. The pathogenetic mechanism(s) causing spontaneous pneumothorax may be related to an interplay between lung-related abnormalities and environmental factors such as smoking. Tobacco smoking is a major risk factor for primary spontaneous pneumothorax; chronic obstructive pulmonary disease is most frequently associated with secondary spontaneous pneumothorax. This review article provides an overview of the historical perspective, epidemiology, classification, and aetiology of pneumothorax. It also aims to highlight current knowledge and understanding of underlying risks and pathophysiological mechanisms in pneumothorax development. abstract_id: PUBMED:28743478 Comparison of different methods of size classification of primary spontaneous pneumothorax. N/A abstract_id: PUBMED:36856744 Cold coagulation in thoracoscopic treatment of primary pneumothorax: a comparison with apicectomy. Objectives: Primary spontaneous pneumothorax is a common disease, whose surgical treatment is still enigmatic in terms of timing and technique. Herein, we reported our experience with the parenchymal-sparing technique via cold coagulation (CC), in comparison to stapler apicectomy (SA). Methods: We retrospectively collected data of all patients with apical blebs or <2 cm bullae treated with minimally invasive surgery for recurrent or persistent spontaneous pneumothorax, from 2010 to 2020. Two different surgical techniques were used: SA and the parenchymal-sparing CC of the apex. Perioperative and long-term results were analysed and compared. Results: Out of 177 patients enrolled, 77 patients (CC group) underwent cold-coagulation of the apex while 100 patients (SA group) were treated with SA. Two groups were comparable in terms of age, surgical indication, intraoperative findings and affected side. CC group had a mean operative time of 43.2 min (standard deviation ± 19.5), shorter than SA group with 49.3 min (standard deviation ± 20.1, P-value: 0.050).
Complication rate was significantly different between the 2 groups, 5 (7%) and 16 (16%), for the CC and SA groups, respectively (P: 0.048), even if not in terms of prolonged postoperative air leak (P: 0.16). During the follow-up, 13 homolateral recurrences were reported: 2 (3%) in group CC and 11 (11%) in group SA, with a significant difference (P: 0.044). All reinterventions (postoperative prolonged air leak and recurrences) required an SA. Conclusions: The parenchymal-sparing technique through CC of apical blebs and bullae is an effective treatment for primary spontaneous pneumothorax and guarantees good immediate lung sealing, although stapling still represents the treatment of choice in complex cases. Answer: The comparison of size classification of primary spontaneous pneumothorax (PSP) by three international guidelines reveals significant discrepancies, which have implications for treatment decisions and outcomes. The British Thoracic Society (BTS), American College of Chest Physicians (ACCP), and Belgian Society of Pulmonology (BSP) guidelines all offer different criteria for classifying the size of a pneumothorax, leading to poor agreement in classification. For instance, one study found that based on inspiratory films, the BTS guideline classified only 10% of PSP as large, compared with 47% by the BSP guideline and 49% by the ACCP guideline, with agreement in classification in only 47% of cases (PUBMED:18789858). Another study highlighted that the disparity in classification between BTS and ACCP guidelines resulted in conflicting management recommendations, with BTS guidelines being followed in 70% of episodes and ACCP guidelines in 32% of episodes in a UK study (PUBMED:28224348). The lack of consensus on size classification and management recommendations is problematic because it leads to variation in practice and makes it difficult to compare outcomes across different regions and studies. This variation is further confirmed by evidence suggesting that conservative management may be a viable option for a wider range of patients, including some with large pneumothoraces, challenging the traditional approach of intervention based on size alone (PUBMED:19373088). Moreover, different methods for measuring pneumothorax volume/size, such as those by Collins, Rhea, Kircher, and Light, have been compared to determine the best predictor for surgical indication in PSP patients. Kircher's method was found to be the best predictor, suggesting that a standardized volumetric method could aid in identifying surgical candidates (PUBMED:35305198). Given these findings, there is a strong case for international consensus on the size classification and estimation of PSP to facilitate high-quality studies and optimal management approaches. This would help standardize treatment, reduce management variation, and improve patient outcomes (PUBMED:18789858; PUBMED:28224348; PUBMED:19373088; PUBMED:35305198).
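To make the size-classification disagreement concrete, the following minimal sketch classifies one hypothetical set of chest X-ray measurements under the thresholds commonly attributed to the BTS guideline (interpleural distance of at least 2 cm at the level of the hilum counted as large) and the ACCP statement (apex-to-cupola distance of at least 3 cm counted as large), and estimates percentage volume with the Collins regression. The thresholds, the Collins coefficients, and the measurements themselves are assumptions used for illustration, not values reported in the abstracts above.

```python
# Illustrative sketch only: guideline thresholds and Collins coefficients are
# assumptions drawn from commonly cited versions of the BTS/ACCP guidance and
# the Collins regression, not from the studies summarized above.

def collins_volume_percent(a_cm: float, b_cm: float, c_cm: float) -> float:
    """Collins estimate: % volume = 4.2 + 4.7 * (A + B + C), where A, B and C
    are interpleural distances (cm) at the apex and at the midpoints of the
    upper and lower halves of the collapsed lung."""
    return 4.2 + 4.7 * (a_cm + b_cm + c_cm)

def bts_size(hilar_interpleural_cm: float) -> str:
    """BTS-style rule: 'large' if the interpleural rim at the hilum is >= 2 cm."""
    return "large" if hilar_interpleural_cm >= 2.0 else "small"

def accp_size(apex_cupola_cm: float) -> str:
    """ACCP-style rule: 'large' if the apex-to-cupola distance is >= 3 cm."""
    return "large" if apex_cupola_cm >= 3.0 else "small"

if __name__ == "__main__":
    # Hypothetical measurements (cm) from a single inspiratory chest X-ray.
    apex_to_cupola = 3.5     # apex of collapsed lung to thoracic cupola
    hilar_rim = 1.5          # interpleural distance at the level of the hilum
    a, b, c = 3.5, 1.5, 1.0  # the three Collins interpleural distances

    print(f"Collins volume estimate: {collins_volume_percent(a, b, c):.0f}%")
    print(f"BTS classification: {bts_size(hilar_rim)}")         # -> small
    print(f"ACCP classification: {accp_size(apex_to_cupola)}")  # -> large
```

With these assumed numbers the same film is classified as small under the BTS rule but large under the ACCP rule, which is exactly the kind of discordance, and the resulting conflict in management advice, that the studies above report.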
Instruction: Is the wait-for-patient-to-come approach suitable for African newcomers to Alberta, Canada? Abstracts: abstract_id: PUBMED:22548343 Is the wait-for-patient-to-come approach suitable for African newcomers to Alberta, Canada? Objectives: A qualitative study was conducted to identify psychosocial barriers to providing and obtaining preventive dental care for preschool children among African recent immigrants. Methods: Seven focus groups were conducted with 48 mothers of 3- to 5-year-old children from Ethiopian, Eritrean, and Somali communities in Edmonton. Participants had lived in Canada for 5 years or less. Three debriefing interviews were conducted with the community health workers who facilitated the focus groups in participants' first languages. Data analysis consisted of assigning codes, grouping codes into existing or new categories of barriers, grouping identified categories into domains, and organizing categories and domains around a general perspective of psychosocial barriers to prevention of caries. Results: Barriers to prevention of early childhood caries (ECC) were associated with home-based prevention, early detection, and access to professional care. Barriers to parental prevention were related to health beliefs, knowledge, oral health approach, and skills. Barriers to early detection included perceived role of caregivers and dentists, perceived identity of ECC, ways of detecting cavities, and parental self-efficacy. Access barriers were related to parental knowledge of preventive services, attitudes toward dentists and dental services, English skills, and external constraints concerned dental insurance, social support, time, and transportation. Conclusions: Preventive interventions should be aimed at assisting primary caregivers with providing and obtaining adequate dental care for their children through enhancing oral health literacy, developing new set of oral health-related skills, reducing environmental constraints, and strengthening their intention of obtaining professional preventive dental services. abstract_id: PUBMED:34100262 Recruitment Strategies Used in a Survey of African Immigrant Maternal Mental Health in Alberta, Canada. African immigrant women are underrepresented in health research on maternal mental health. Thus, there is a need to highlight successful recruitment strategies to engage African women in health-oriented research. This paper offers insights on recruitment strategies utilized in recruiting African immigrant women in Alberta (Canada) with infants 2 years of age or under for a survey study on maternal mental health. We recruited 136 African immigrant women. Most participants were recruited by using already established social networks in the community. Other successful strategies included referral from community partners (i.e., immigrant organizations, cultural association, religious institutions), participants, utilizing an online survey tool (i.e., Qualtrics), and through family and friend networks (i.e., word-of-mouth). This study evidently highlights the importance of utilizing multiple recruitment strategies to successfully meet the desired sample size for a survey study. We believe the lessons learned during the process of recruitment will be helpful for others working with other African immigrant women populations in Canada and in other Western societies. abstract_id: PUBMED:30292095 Transnationalism, parenting, and child disciplinary practices of African immigrants in Alberta, Canada. 
Child discipline remains a topic of public health interest across the globe. Despite this enduring interest, very little is known about the child disciplinary practices of African immigrants in Canada. This paper explores the disciplinary practices of African immigrant parents in Alberta, a Canadian province with a recent surge in the population of African immigrants. Employing a critical ethnographic methodology, informed by transnational theory, we collected data through in-depth qualitative interviews with a purposive sample of African community leaders (n = 14), African immigrant parents (n = 32), policymakers (n = 2), and health and immigrant settlement workers (n = 10). As members of the African immigrant community, we were deeply immersed in the research settings, which afforded us the opportunity to collect pertinent observational data in the form of reflexive notes. Thematic analysis of the data revealed child disciplinary approaches that incorporate Canadian and African parenting practices, as well as practices that appear somewhat unique to this demographic. We found that African immigrant parents used corporal discipline, persuasive discipline, and a hybrid of the two, as well as emerging practices involving transnational fostering and emotional isolation of children who persistently misbehaved. These practices, in their totality, appeared to be influenced by the transnational experiences of parents and precepts that are traceable to Canada's legal and educational systems. We present theoretical, policy, and service implications of our findings, including a recommendation to incorporate sociocultural dimensions of child discipline into Canadian child welfare policies and practices. abstract_id: PUBMED:36341034 HIV treatment outcomes among newcomers living with HIV in Manitoba, Canada. Background: Despite the overrepresentation of immigrants and refugees (newcomers) in the HIV epidemic in Canada, research on their HIV treatment outcomes is limited. This study addressed this knowledge gap by describing treatment outcomes of newcomers in comparison with Canadian-born persons living with HIV in Manitoba. Methods: Clinical data from 1986 to 2017 were obtained from a cohort of people living with HIV and receiving care from the Manitoba HIV Program. Retrospective cohort analysis of secondary data was completed using univariate and multivariate statistics to compare differences in socio-demographic and clinical characteristics and treatment outcomes among newcomers, Canadian-born Indigenous persons, and Canadian-born non-Indigenous persons on entry into HIV care. Results: By the end of 2017, 86 newcomers, 259 Canadian-born Indigenous persons, and 356 Canadian-born non-Indigenous persons were enrolled in the cohort. Newcomers were more likely than Canadian-born Indigenous and non-Indigenous cohort participants to be younger and female and to self-report heterosexual contact as their HIV risk exposure. Average CD4 counts at entry into care did not differ significantly between groups. A higher proportion of newcomers was also diagnosed with tuberculosis within 6 months of entry into care (21%), compared with 6% and 0.6% of Canadian-born Indigenous and non-Indigenous persons, respectively. Newcomers and Canadian-born non-Indigenous persons had achieved viral load suppression (< 200 copies/mL) at a similar proportion (93%), compared with 82% of Canadian-born Indigenous participants (p < 0.05).
Conclusions: The distinct demographic and clinical characteristics of newcomers living with HIV requires a focused approach to facilitate earlier diagnosis, engagement, and support in care. abstract_id: PUBMED:30352005 Healthcare provider perspectives on the uptake of the human papillomavirus vaccine among newcomers to Canada: a qualitative study. Human papillomavirus is among the most common sexually transmitted infections in the world. Newcomers, defined in Canada as foreign-born individuals who are either immigrants or refugees, but may also include students and undocumented migrants, face numerous barriers to HPV vaccination. This study sought to understand, from the perspective of healthcare providers, barriers and facilitators to HPV vaccination, and recommendations to improve HPV vaccine uptake among newcomers. Semi-structured interviews were conducted with 10 healthcare providers between March and April 2018. Data were analyzed at the manifest level using a Qualitative Content Analysis approach. Categories of barriers to vaccination included: access, communication, knowledge, culture, and provider-related factors. Facilitators included targeted health promotion; understanding the relevance of HPV vaccination; trusting the healthcare system; and cultural sensitivity. Two overarching recommendations were to publicly fund the HPV vaccine, and enhance language- and culturally-appropriate health promotion activities. Further research should explore informational desires and needs from the perspective of newcomers to inform strategies to promote equitable HPV vaccine coverage. abstract_id: PUBMED:34493550 Patient navigation programs in Alberta, Canada: an environmental scan. Background: Patient navigation is a complex intervention that has garnered substantial interest and investment across Canada. We conducted an environmental scan to understand the landscape of patient navigation programs within the health care system in Alberta, Canada. Methods: We included patient navigation programs within Alberta Health Services (AHS) and Alberta's Primary Care Networks (PCNs). Key informants were asked in October 2016 to identify existing programs and their corresponding program contacts. These program contacts were invited to complete a telephone-based survey from October 2016 to July 2017, to provide program descriptions and eligibility criteria, and to identify gaps in navigation. Programs were included if they engaged patients on an individual basis, and either facilitated continuity of care or promoted patient and family empowerment. We tabulated results and calculated summary statistics for program characteristics. Results: Ninety-five potentially eligible programs were identified by key informants. The response rate to the study survey was 73% (n = 69). After excluding programs not meeting inclusion criteria, we included a total of 58 programs in the study: 43 AHS programs and 15 PCN programs. Nearly all programs (93%, n = 54) delivered navigation via an individual acting as a navigator. A minority of programs also included nonnavigator components, such as Web-based resources (7%, n = 4) and process or structural changes to facilitate navigation (22%, n = 13). Certain patient subgroups were particularly well-served by patient navigation; these included patients with cancer, substance use disorders or mental health concerns, and pediatric patients. Gaps identified in navigation fell under 4 domains: awareness, resources, geographic distribution and integration. 
Interpretation: Patient navigation programs are common and have extended beyond cancer care, from which the construct originated; however, gaps include a lack of awareness and inequitable access to the programs. These findings will be of interest to those developing and implementing patient navigation interventions in Alberta and other jurisdictions. abstract_id: PUBMED:35482228 About time! A scoping review of public-facing emergency department wait time displays in Canada. Purpose: Waiting is a common experience for patients during an emergency department (ED) visit. While high acuity patients are seen with little delay, low acuity patients may experience dissatisfaction from prolonged wait times. One strategy to improve patient experience involves changing the perception of the wait by providing realistic expectations of wait times using public-facing wait time displays. The primary objective of this study is to quantify the number of Canadian EDs with online wait time displays and describe the features and type of information provided. Methods: A systematic online search of all Canadian EDs was completed to identify EDs with public-facing wait time displays. A scoping review was then performed to assess their message characteristics, translations offered, availability of multi-site information, and accessibility features. Data were summarized using descriptive statistics. Results: Sixty (9.3%) of the 647 Canadian EDs identified provide public-facing real-time wait time displays. Thirteen of these (21.7%) were associated with a single proprietary system. Distribution of wait time displays differs across Canada, with a range of zero to 100% of EDs within each province utilizing this communication tool. Common characteristics include "average" wait time (95%), graphical trend data (32%), number of patients waiting (33%), longest wait time (12%), and expected length of stay (10%). Sixty-two percent of wait time displays provide a combination of these methods to inform wait times. Important accessibility features include language translation, compliance with Canadian National Institute for the Blind (CNIB) accessibility guidelines and availability on a mobile application. Conclusion: Currently, there is emerging use of wait time displays in Canada with considerable variability in the information communicated through these tools. Effectiveness of these displays and their content needs to be determined. abstract_id: PUBMED:35960750 Economic burden of chronic pain in Alberta, Canada. Background: Although chronic pain (CP) is common, little is known about its economic burden in Alberta, Canada. Aims: To estimate incremental (as compared to the general population or people without CP) societal (healthcare and lost productivity) costs of CP in Alberta. Methods: We applied the prevalence estimated from the Canadian Community Health Survey data to the population retrieved from Statistics Canada to estimate the number of people with CP in Alberta in 2019. We analyzed the Alberta Health administrative databases to estimate the healthcare costs per person with CP. Finally, we multiplied the number of people by the cost per person. Results: The prevalence of any CP was 20.1% and of activity-preventing CP was 14.5% among people aged ≥ 12 years. Incremental cost per person with CP per year was CA$2,217 for healthcare services (among people aged ≥ 12 years) and CA$8,412 for productivity losses (among people aged 18-64 years).
Of the healthcare cost, prescription drugs accounted for the largest share (32.8%), followed by inpatient services (31.0%), outpatient services (13.1%), physician services (9.8%), other services (7.4%), and diagnostic imaging (5.8%). Provincially, total incremental cost of CP ranges from CA$1.2 to 1.7 billion for healthcare services (6% to 8% of total provincial health expenditure); and CA$3.4 to 4.7 billion for productivity losses. Considering costs for long-term care services, the total societal cost of CP in Alberta was CA$6.3 to 8.3 billion per year, reflecting 2.0% to 2.7% of Alberta's GDP. Conclusions: Interventions improving CP prevention and management to reduce this substantial economic burden are urgently needed. abstract_id: PUBMED:34258667 Reduce Patient Treatment wait time in a Proton Beam Facility - A Gatekeeper Approach. Patient wait time can negatively impact treatment quality in a proton therapy center, where multiple treatment rooms share one proton beam. Wait time increases patient discomfort that can lead to patient motion, dissatisfaction, and longer treatment delay. This study was to develop a patient call-back model that reduced patient wait while efficiently utilizing the proton beam. A "Gatekeeper" logic allowing therapists to adjust the time of a patient's call-back to the treatment room was developed. It uses a two-pronged approach to minimize overlap of long treatment and the possibility of excessive wait in the queue to receive the proton beam. The goal was to reduce the maximum wait time to less than eight minutes per field for a four-room facility. The effectiveness of this logic was evaluated through simulation, and five scenarios were compared. Four scenarios implementing various levels of gatekeeper logic were compared with the original scenario without the logic. The best performing model provided a reduction of the maximum field wait by 26% and met the predefined goal. Adjusting call-back extended the treatment day length by an average of 6 min and a maximum of 12 min in total. The use of this gatekeeper logic significantly reduces patient field wait with minimal impact on treatment day length for a four-room proton facility. A sample interface that adopts this logic for therapists to make informed decision on patient call-back time is demonstrated. abstract_id: PUBMED:25908909 Patient-expressed perceptions of wait-time causes and wait-related satisfaction. Background: This study set out to identify patterns in the causes of waits and wait-related satisfaction. Methods: We conducted qualitative interviews with urban, semi-urban, and rural patients (n = 60) to explore their perceptions of the waits they experienced in the detection and treatment of their breast, prostate, lung, or colorectal cancer. We asked participants to describe their experiences from the onset of symptoms to the start of treatment at the cancer clinic and their satisfaction with waits at various intervals. Interview transcripts were coded using a thematic approach. 
Results: Patients identified five groups of wait-time causes: Patient-related (beliefs, preferences, and non-cancer health issues); Treatment-related (natural consequences of treatment); System-related (the organization or functioning of groups, workforce, institution, or infrastructure in the health care system); Physician-related (a single physician responsible for a specific element in the patient's care); and Other causes (disruptions to normal operations of a city or community as a whole). With the limited exception of physician-related absences, the nature of the cause was not linked to overall satisfaction or dissatisfaction with waits. Conclusions: Causes in themselves do not explain wait-related satisfaction. Further work is needed to explore the underlying reasons for wait-related satisfaction or dissatisfaction. Although our findings shed light on patient experiences with the health system and identify where interventions could help to inform the expectations of patients and the public with respect to wait time, more research is needed to understand wait-related satisfaction among cancer patients. Answer: The "wait for patient to come" approach may not be entirely suitable for African newcomers to Alberta, Canada, based on the findings from the provided abstracts. A qualitative study conducted with African recent immigrants in Alberta identified several psychosocial barriers to obtaining preventive dental care for preschool children among this population. These barriers included issues related to health beliefs, knowledge, oral health approach, skills, early detection, access to professional care, English language skills, and external constraints such as dental insurance, social support, time, and transportation (PUBMED:22548343). Additionally, the recruitment strategies used in a survey of African immigrant maternal mental health in Alberta highlighted the importance of utilizing established social networks, community partners, and online tools to engage African women in health-oriented research (PUBMED:34100262). This suggests that proactive outreach and engagement strategies may be more effective than a passive approach in reaching African newcomers. Moreover, the study on transnationalism, parenting, and child disciplinary practices among African immigrants in Alberta revealed that their disciplinary approaches are influenced by both Canadian and African parenting practices, as well as by their transnational experiences (PUBMED:30292095). This indicates that African newcomers may have unique needs and perspectives that require tailored approaches rather than a generic "wait for patient to come" strategy. Furthermore, research on HIV treatment outcomes among newcomers in Manitoba, another Canadian province, showed that newcomers living with HIV have distinct demographic and clinical characteristics that necessitate a focused approach to facilitate earlier diagnosis, engagement, and support in care (PUBMED:36341034). This implies that a more proactive and culturally sensitive approach may be needed to effectively address the health needs of African newcomers. In summary, the evidence suggests that a more proactive, culturally sensitive, and tailored approach to healthcare engagement and service provision may be more suitable for African newcomers to Alberta, Canada, rather than a passive "wait for patient to come" approach.
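The cost figures in the chronic pain abstract above (PUBMED:35960750) follow from a simple prevalence-times-cost aggregation: estimate the number of people with chronic pain from the survey prevalence and the provincial population, then multiply by the incremental cost per person. A minimal Python sketch of that arithmetic is below; the prevalence and per-person cost come from the abstract, while the population denominator is an assumed illustrative figure, not a value reported in the abstract.

# Minimal sketch of the prevalence-times-cost aggregation described in PUBMED:35960750.
# The population figure below is an assumed illustrative value, not taken from the abstract.
population_12_plus = 3_700_000       # assumed Alberta residents aged 12 or older
prevalence_any_cp = 0.201            # 20.1% prevalence of any chronic pain (from the abstract)
healthcare_cost_per_person = 2_217   # CA$ incremental healthcare cost per person per year

people_with_cp = population_12_plus * prevalence_any_cp
total_healthcare_cost = people_with_cp * healthcare_cost_per_person

print(f"Estimated people with chronic pain: {people_with_cp:,.0f}")
print(f"Estimated incremental healthcare cost: CA${total_healthcare_cost / 1e9:.2f} billion")
# With these inputs the total lands around CA$1.6 billion, inside the CA$1.2-1.7 billion
# range reported in the abstract; productivity losses would be aggregated the same way
# using the 18-64 population and CA$8,412 per person.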
Instruction: Can preoperative MRI accurately evaluate nodal and parametrial invasion in early stage cervical cancer? Abstracts: abstract_id: PUBMED:17556538 Can preoperative MRI accurately evaluate nodal and parametrial invasion in early stage cervical cancer? Objective: To evaluate the diagnostic performance of magnetic resonance imaging (MRI) in the pretreatment evaluation of invasive cervical cancer especially for the parametrial invasion and lymph node (LN) involvement. Methods: We retrospectively recruited consecutive patients with biopsy-confirmed cervical cancer who had undergone preoperative MRI and were scheduled for surgery based on clinical assessment between January 2004 and May 2006. We evaluated the diagnostic performance of MRI for the parametrial invasion and LN involvement using surgicopathologic findings as the reference standard. Results: A total of 119 eligible patients completed preoperative and intra-operative survey, of whom 34 (28.6%) had pelvic LN metastasis and four (3.4%) had para-aortic LN metastasis histologically. The sensitivity, specificity and accuracy of MRI in detecting LN involvement by region-specific analysis were 40.5, 91.3 and 86.8% respectively. The sensitivity, specificity and accuracy of MRI in detecting parametrial invasion were 44.4, 89.1 and 88.3% respectively. The positive predictive value (PPV) of preoperative MRI for detecting region-specific LN involvement and parametrial invasion was 31.3 and 61.2%, respectively. Imaging findings of suspected parametrial invasion were not to influence the treatment decision in the study. Conclusion: Preoperative MRI showed low PPV for detecting LN involvement and parametrial invasion in cervical cancer. Further studies are necessary to determine the cost-effectiveness of using MRI in place of conventional clinical staging tests according to clinical indication and also its use in comparison with that of integrated positron emission tomography/computed tomography. abstract_id: PUBMED:29898732 Identification of a low risk population for parametrial invasion in patients with early-stage cervical cancer. Background: Recent studies have challenged radical procedures for less extensive surgery in selected patients with early-stage cervical cancer at low risk of parametrial invasion. Our objective was to identify a subgroup of patients at low risk of parametrial invasion among women having undergone surgical treatment. Methods: Data of 1447 patients with cervical cancer treated between 1996 and 2016 were extracted from maintained databases of 10 French University hospitals. Patients with early-stage (IA2-IIA) disease treated by radical surgery including hysterectomy and trachelectomy, were selected for further analysis. The Kaplan-Meier method was used to estimate the survival distribution. A Cox proportional hazards model including all the parameters statistically significant in univariate analysis, was used to account for the influence of multiple variables. Results: Out of the 263 patients included for analysis, on final pathology analysis 28 (10.6%) had parametrial invasion and 235 (89.4%) did not. Factors significantly associated with parametrial invasion on multivariate analysis were: age > 65 years, tumor > 30 mm in diameter measured by MRI, lymphovascular space invasion (LVSI) on pathologic analysis. Among the 235 patients with negative pelvic lymph nodes, parametrial disease was seen in only 7.6% compared with 30.8% of those with positive pelvic nodes (p < 0.001).
In a subgroup of patients presenting tumors < 30 mm, negative pelvic status and no LVSI, the risk of parametrial invasion fell to 0.6% (1/173 patients). Conclusion: Our analysis suggests that there is a subgroup of patients at very low risk of parametrial invasion, potentially eligible for less radical procedures. abstract_id: PUBMED:35860149 What are the determinants of parametrial invasion in patients with early stage cervical cancer: A cross sectional study. Introduction: There has been an increase in the number of patients presenting with early-stage cervical cancer (CC). Tumors with favorable pathological features might be candidates for less radical surgery. Methods: We retrospectively reviewed 700 patients with histologically confirmed CC between January 2011 and March 2020. Chi-square, Fisher's exact tests and multivariate logistic regression analysis were used to assess relations between parametrial involvement (PI) and clinic-pathological variables. Results: A total of 132 patients with stage IA to IIA were eligible to participate. Squamous cell carcinoma was reported in 100 (75.8%) patients, adenocarcinoma and other tumor pathologies were found in 24 (18.2%) and 8 (6.1%), respectively. Considering the 2018 FIGO stage, 11 (8.4%) patients had IA, 111 (83%) IB and 10 (7.6%) IIA. Nine patients (6.8%) had PI on permanent pathologic report. Univariate analysis demonstrated that the following variables were statistically different between patients with and without PI: age ≥50, tumor size ≥ 3 cm, lower segment involvement, poorly differentiated pathology, deep stromal invasion, pelvic lymph node, lympho-vascular involvement and positive surgical margin (all p values < 0.05). Among these variables only tumor size ≥3 cm (OR: 2.1, 95% CI: 1.11-4.16, p value: 0.02), deep stromal invasion (OR: 2.2, 95% CI: 1.9-7.43, p value: 0.02) and positive surgical margin (OR: 5.1, 95% CI: 3.97-11.15, p value: 0.008) were independent risk factors for PI in multivariate analysis. Conclusions: Early stage CC might be surgically approached in a more conservative manner if patients have tumor size < 3 cm and do not have deep stromal invasion in conization. abstract_id: PUBMED:35571337 The Relationship Between Parametrial Involvement and Parametrial Tissue Removed in Radical Surgery in Early-Stage Cervical Cancer. Background: The study aims to evaluate the effect of parametrial dimensions on the prognosis of cases who underwent type 3 radical hysterectomy or radical trachelectomy in early-stage cervical cancer (stage I - IIa). Methods: Medical reports of patients with early-stage cervical cancer who have undergone surgery between 1998 and 2020 in Akdeniz University Faculty of Medicine Gynecological Oncology Clinic were reviewed retrospectively. A total of 292 cases were identified and included in the study. Demographic characteristics, preoperative examination findings, operation records, and pathology results of the cases were reviewed. Results: Parametrial involvement was found histopathologically negative in 244 out of 292 patients included in our study, the remaining 48 (16.4%) patients were found to be positive. The mean length of the right and left parametrium in the group with negative parametrium invasion, who had an average follow-up of 131.2 (0.57 - 268.2) months, was 3 cm, while the mean volume of the right and left parametrium was 7.2 (0.52 - 32) cm3 and 6 (0.48 - 34) cm3, respectively.
On the other hand, the mean length of the right and left parametrium was 3 (1.5 - 5.5) cm and 3 (1.4 - 7) cm, respectively, while the mean volume of the right parametrium was 5.55 (1.37 - 22) cm3, and the mean volume of the left parametrium was 7.5 (1.35 - 24) cm3 in 48 patients with positive parametrial invasion. No statistically significant difference was detected between the two groups when compared in terms of parametrial sizes and volumes (P values of 0.061, 0.262, 0.391, and 0.468, respectively). Conclusions: Radical surgical approach is necessary to obtain a tumor-free surgical margin in the surgical treatment of early cervical cancer, but the complications leading to morbidity and mortality are also increasing with this radicality. For this reason, we consider that it is important to adapt the dimensions of the removed parametrium according to the factors affecting recurrence in cervical cancer to obtain more appropriate surgical margins with the least complications. abstract_id: PUBMED:28211804 Comparison of elastography based on transvaginal ultrasound and MRI in assessing parametrial invasion of cervical cancer. Objective: The purpose of this study was to evaluate the clinical value of transvaginal elastography (TVES) combined with high-resolution transvaginal ultrasound (TVS) in the detection of parametrial invasion in cervical cancer and to compare the diagnostic performance with magnetic resonance imaging (MRI). Materials And Methods: 52 women with histologically confirmed cervical cancer over a 2-year period were staged using International Federation of Gynecology and Obstetrics (FIGO) criteria and underwent MRI and TVES combined with TVS according to a standardized protocol before treatment. When assessing parametrial involvement with TVS, MRI, and combination of TVES and TVS, the findings were recorded and compared with histopathological results after surgery in early-stage disease (stage⩽IIa). Sensitivity, specificity, accuracy, positive predictive value (PPV) and negative predictive value (NPV) were calculated for each method independently; subsequently, a matched-sample analysis was performed by using McNemar's test or chi-square test. Results: Of 52 patients, 39 were early-stage disease (stage⩽IIa), and 13 were advanced-stage disease (stage⩾IIb) according to conventional FIGO staging. For the detection of parametrial infiltration, both the diagnostic sensitivity of MRI and the combination of TVS and TVES were statistically higher than TVS alone in early-stage cervical cancer (P = 0.03 < 0.05). Both MRI and the combination of TVS and TVES had a sensitivity of 72.73%; specificity rates of 82.14% for MRI and 78.57% for the combination of TVES and TVS; and the diagnostic accuracy rates of 79.49% for MRI and 76.92% for the combination of TVES and TVS. A matched sample analysis revealed no statistically significant difference between the diagnostic performance of MRI and the combination of TVES and TVS in the assessment of parametrial invasion (all P values > 0.05). Conclusion: TVES combined with TVS performed by a dedicated gynecologic radiologist should be considered a promising and economic method for pre-operative work-up for cervical cancer.
Methods: This study is an ancillary analysis of a nation-wide retrospective cohort examining 6,003 clinical stage IB-IIB cervical cancers. Women with clinical stage IIB disease who underwent primary radical hysterectomy with lymphadenectomy were examined (n = 714). Multivariate analysis was performed to identify independent clinico-pathological factors for pathological parametrial invasion and to identify independent prognostic factors in pathological stage IIB disease. Results: Parametrial invasion was identified on the surgical specimen in 400 cases (56.0%, 95% confidence interval 52.4-59.7). On multivariate analysis, deep stromal invasion (DSI, adjusted-OR 3.922), multiple pelvic nodal metastases (adjusted-OR 3.266), lympho-vascular space invasion (adjusted-OR 2.333), and uterine corpus invasion (adjusted-OR 1.656) remained independent tumor factors for pathological parametrial invasion. In classification-tree models, tumors with DSI and multiple pelvic nodal metastases had the highest incidence of pathological parametrial invasion (75.0-87.7%); contrary, tumors without DSI had the lowest incidence (21.9%). Among patients with pathological stage IIB disease, the absolute difference in 5-year disease-free survival rates was 57.2%, ranging between 80.9% in those with squamous histology with none/single pelvic nodal metastasis and 23.7% in those with non-squamous histology with multiple pelvic nodal metastases. Conclusion: In clinical stage IIB cervical cancer, accuracy for pathological parametrial invasion is low-modest. With absence of DSI, only one in five clinical stage IIB diseases has pathological stage IIB disease. Survival of pathological stage IIB varies widely and is largely dependent on nodal factors. abstract_id: PUBMED:31675631 Which factors predict parametrial involvement in early stage cervical cancer? A Turkish multicenter study. Objective: To evaluate the clinical and pathological factors for predicting the parametrial involvement (PI) in early stage cervical cancer. Study Design: This study included 406 patients with type III radical hysterectomy + pelvic ± para-aortic lymphadenectomy and FIGO stage I and II cervical adenocarcinoma, squamous type, and adenosquamous type cervical cancer. Results: The entire cohort of patients had lymphadenectomy performed. Early stage cervical cancer patients were evaluated. FIGO 2014 stage, uterine invasion, LVSI, surgical border involvement, vaginal metastasis, stromal invasion and lymph node metastasis were found to be effective for PI on univariate analyses. However; age, tumor type and tumor size did not determine the parametrial invasion. LVSI (HR: 4.438, 95%CI: 1.771-11.121; p = 0.001), lymph node metastases (HR: 2.418, 95%CI: 1.207-4.847; p = 0.013) and vaginal involvement (HR: 4.109, 95%CI: 1.674-10.087; p = 0.02) are independent prognostic factors on multivariate analysis. Conclusion: Lymph node metastases, LVSI and surgical border involvement are independent prognostic factors for PI in early stage cervical cancer patients. Therefore, less radical surgical approaches for early stage tumors with no nodal spread, negative LVSI and no surgical border involvement are applicable. abstract_id: PUBMED:32065284 Preoperative prediction of parametrial invasion in early-stage cervical cancer with MRI-based radiomics nomogram. Purpose: To develop and identify a MRI-based radiomics nomogram for the preoperative prediction of parametrial invasion (PMI) in patients with early-stage cervical cancer (ECC). 
Materials And Methods: All 137 patients with ECC (FIGO stages IB-IIA) underwent T2WI and DWI scans before radical hysterectomy surgery. The radiomics signatures were calculated with the radiomics features which were extracted from T2WI and DWI and selected by the least absolute shrinkage and selection operator (LASSO) regression. The support vector machine (SVM) models were built using radiomics signatures derived from T2WI and joint T2WI and DWI respectively to evaluate the performance of radiomics signatures for distinguishing patients with PMI. A radiomics nomogram was drawn based on the radiomics signatures with a better performance, patient's age, and pathological grade; its discrimination and calibration performances were estimated. Results: For T2WI and joint T2WI and DWI, the radiomics signatures yielded an AUC of 0.797 (95% CI, 0.682-0.911) vs 0.946 (95% CI, 0.899-0.994), and 0.780 (95% CI, 0.641-0.920) vs 0.921 (95% CI, 0.832-1) respectively in the primary and validation cohorts. The radiomics nomogram, integrating the radiomics signatures from joint T2WI and DWI, patient's age, and pathological grade, showed excellent discrimination, with C-index values of 0.969 (95% CI, 0.933-1) and 0.941 (95% CI, 0.868-1) in the primary and validation cohorts, respectively. The calibration curve showed a good agreement. Conclusions: The radiomics nomogram performed well for the preoperative prediction of PMI in patients with ECC and may be used as a supplementary tool to provide individualized treatment plans for patients with ECC. Key Points: • No previously reported study has utilized a radiomics nomogram to preoperatively predict PMI for patients with ECC. • Radiomics model involves radiomics features extracted from joint T2WI and DWI which characterize the heterogeneity between tumors in patients with ECC. • Radiomics nomogram can assist clinicians with individualized treatment decision-making for patients with ECC. abstract_id: PUBMED:32596230 Diagnosis of Cervical Cancer With Parametrial Invasion on Whole-Tumor Dynamic Contrast-Enhanced Magnetic Resonance Imaging Combined With Whole-Lesion Texture Analysis Based on T2-Weighted Images. Purpose: To evaluate the diagnostic value of the combination of whole-tumor dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) and whole-lesion texture features based on T2-weighted images for cervical cancer with parametrial invasion. Materials and Methods: Sixty-two patients with cervical cancer (27 with parametrial invasion and 35 without invasion) preoperatively underwent routine MRI and DCE-MRI examinations. DCE-MRI parameters (Ktrans, Kep, and Ve) and texture features (mean, skewness, kurtosis, uniformity, energy, and entropy) based on T2-weighted images were acquired by two observers. All parameters of parametrial invasion and non-invasion were analyzed by one-way analysis of variance. The diagnostic efficiency of significant variables was assessed using receiver operating characteristic analysis. Results: The invasion group of cervical cancer demonstrated significantly higher Ktrans (0.335 ± 0.050 vs. 0.269 ± 0.079; p < 0.001), lower energy values (0.503 ± 0.093 vs. 0.602 ± 0.087; p < 0.001), and higher entropy values (1.391 ± 0.193 vs. 1.24 ± 0.129; p < 0.001) than those in the non-invasion group. Optimal diagnostic performance [area under the curve (AUC), 0.925; sensitivity, 0.935; specificity, 0.829] could be obtained by the combination of Ktrans, energy, and entropy values.
The AUC values of Ktrans (0.788), energy (0.761), entropy (0.749), the combination of Ktrans and energy (0.814), the combination of Ktrans and entropy (0.727), and the combination of energy and entropy (0.619) were lower than those of the combination of Ktrans, energy, and entropy values. Conclusion: The combination of DCE-MRI and texture analysis is a promising method for diagnosis cervical cancer with parametrial infiltration. Moreover, the combination of Ktrans, energy, and entropy is more valuable than any one alone, especially in improving diagnostic sensitivity. abstract_id: PUBMED:29177590 Risk of parametrial invasion in women with early stage cervical cancer: a meta-analysis. Purpose: To first evaluate the predictive value of lymph-vascular space invasion (LVSI) and pelvic lymph node metastases (LNM) to parametrial invasion (PI) in patients with early stage cervical cancer. Methods: A systematic search of electronic databases was conducted to identify publications exploring the association between LVSI, LNM, and PI. The following databases were searched: PubMed/Medline and Web of Science. Twenty relevant studies were included. Pooling of results was done based on these studies and a diagnostic meta-analysis was performed. Results: 20 articles and a total of 7373 cases were included in the meta-analysis. 17 studies analyzed the correlation between LVSI and PI and 18 studies evaluated the correlation between LNM and PI. The existence of LVSI accelerated PI with OR being 7.37 (95% CI 5.70-9.54) in fixed-effect model and 7.32 (95% CI 4.65-10.39) in random effects model. LVSI shows high sensitivity of 0.80 and the AUC was 0.82. The existence of LNM accelerated PI. LNM shows the highest specificity of 0.90 and the AUC was 0.77. Conclusions: Both LVSI and LNM are associated with PI. Conization and lymphadenectomy might be helpful to predict the parametrial status of patients with early stage cervical cancer. In addition, the results of this meta-analysis are helpful for designing further prospective clinical trials. Answer: Preoperative MRI has been used to evaluate nodal and parametrial invasion in early-stage cervical cancer, but its accuracy and predictive value have been a subject of investigation. According to the studies, the sensitivity, specificity, and accuracy of MRI in detecting lymph node (LN) involvement are 40.5%, 91.3%, and 86.8%, respectively, while for detecting parametrial invasion, these values are 44.4%, 89.1%, and 88.3%, respectively (PUBMED:17556538). However, the positive predictive value (PPV) of preoperative MRI for detecting LN involvement and parametrial invasion is relatively low, at 31.3% and 61.2%, respectively, indicating that MRI may not be highly reliable for these assessments. Other studies have explored the identification of low-risk populations for parametrial invasion using clinical factors. For instance, in patients with tumors less than 30 mm in diameter, negative pelvic status, and no lymphovascular space invasion (LVSI), the risk of parametrial invasion fell to 0.6% (PUBMED:29898732). Additionally, factors such as tumor size greater than or equal to 3 cm, deep stromal invasion, and positive surgical margin have been identified as independent risk factors for parametrial invasion (PUBMED:35860149). Comparative studies have also been conducted to evaluate the diagnostic performance of MRI against other methods. 
For example, transvaginal elastography (TVES) combined with high-resolution transvaginal ultrasound (TVS) was found to have similar sensitivity, specificity, and diagnostic accuracy rates for the detection of parametrial infiltration in early-stage cervical cancer when compared to MRI (PUBMED:28211804). In conclusion, while preoperative MRI can provide valuable information in the pretreatment evaluation of invasive cervical cancer, particularly for parametrial invasion and LN involvement, its predictive value is not definitive. The accuracy of MRI may be enhanced when combined with other clinical factors or imaging techniques, and it may be more beneficial for certain subgroups of patients at low risk for parametrial invasion. Further studies are necessary to determine the cost-effectiveness and optimal use of MRI in comparison with other staging tests and imaging modalities.
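The sensitivity, specificity, accuracy, and predictive values quoted throughout this answer all derive from a 2x2 comparison of the imaging call against the surgicopathologic reference standard. The short Python sketch below shows how those metrics are computed; the counts used are illustrative placeholders, not data from any of the cited studies.

# Diagnostic-test metrics from a 2x2 confusion matrix; the counts are placeholders,
# not figures from the cited abstracts.
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Return sensitivity, specificity, accuracy, PPV, and NPV as fractions."""
    return {
        "sensitivity": tp / (tp + fn),                 # positives detected among diseased
        "specificity": tn / (tn + fp),                 # negatives detected among non-diseased
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "ppv": tp / (tp + fp),                         # probability of disease given a positive call
        "npv": tn / (tn + fn),
    }

for name, value in diagnostic_metrics(tp=14, fp=10, fn=20, tn=75).items():
    print(f"{name}: {value:.1%}")
# PPV also depends on how common the finding is in the scanned population, so a test can
# combine high specificity with a modest PPV, the pattern reported for preoperative MRI above.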
Instruction: Are mental health problems associated with use of Accident and Emergency and health-related harm? Abstracts: abstract_id: PUBMED:17065176 Are mental health problems associated with use of Accident and Emergency and health-related harm? Background: Previous findings indicate that mental health problems are common in Emergency departments; however, there are few studies of the extent of health-related problems and emergency service use in mental health populations as a whole. Methods: Record linkage methods were used to map the association between mental health, age, gender, and health-related harm across total health and mental health care populations in one geographical area, over three years. By examining patterns of health-related harm, an accurate profile of mentally ill Emergency patients was generated enabling identification of factors that increased vulnerability to harm. Results: Of the total population of 625 964 individuals, 10.7% contacted Accident and Emergency (A&E) over three years; this proportion rose to 28.6% among the total secondary care mental health population. Young men and older women were more likely to contact A&E, both overall and within mental health populations and were also more likely to be frequent attendees at A&E. Four distinct groups (typologies) of mental health patients attending A&E emerged: young, male frequent attendees with self-inflicted and other traumatic injuries; young females also presenting with self-harm; older patients with multiple medical conditions; and very old patients with cardiac conditions and fractures. Conclusion: The study indicates increased A&E service use and unmet health-related need within a total mental health population. It identifies specific 'care populations' particularly vulnerable to accidents and self-harm and highlights the need for targeted services for mentally ill groups who may not access traditional health and social care services effectively. abstract_id: PUBMED:36293777 Presentations to the Emergency Department for Problems Related to Mental Health: Sex Differences in Adolescents. Background: Adolescents aged sixteen to eighteen years are mostly treated in adult emergency departments. In a study at our tertiary adult emergency department (ED) at the University Hospital in Bern (Inselspital), Switzerland, we found that adolescents significantly more often present with psychiatric problems than adults. The study at hand aimed to characterise those adolescents presenting to the ED triaged with a chief complaint regarding mental health. Furthermore, the goal was to assess sex differences in terms of diagnosis, suicidal ideation, substance use, as well as social impact. Methods: We conducted a single-centre, retrospective review of presentations to our adult ED related to the mental health of adolescents aged 16 to 18 years, covering the period from January 2013 to July 2017. Anonymised data were extracted from medical reports referring to the ED visits that were triaged as mental-health-related, and we assessed these for diagnosis, acute and previous suicidal ideation, history of self-harm, external aggression, substance use and social problems. We focused on patient characterisation and defining sex differences. Results: Data were analysed for a total of 612 consultations by adolescents who presented to our ED with problems related to mental health. Women accounted for 61.1% of cases.
The most frequent diagnoses were reactions to severe stress and adjustment disorders (19.1%), followed by alcohol use disorders (17.6%), intentional self-harm (17.3%), and affective disorders (13.7%). Males had lower odds for intentional self-harm (OR 0.10, 95% CI 0.05-0.21, p < 0.001) as well as disorders of personality and behaviour (OR 0.09, 95% CI 0.21-0.37, p < 0.001), whereas they had higher odds of being admitted due to use of alcohol (OR 2.51, 95% CI 1.65-3.83, p < 0.001). Of all cases, 31.7% reported acute suicidal ideation, with a significantly lower odds ratio in males (OR 0.58, 95% CI 0.41-0.84, p = 0.004). The most common source for referral to the ED was family members (25.2%). Males had twice the odds of reporting alcohol consumption as well as use of cannabis (in both cases p < 0.001). In 27.9% of all cases, familial problems were reported, with males having significantly lower odds of being exposed to these (OR 0.64, 95% CI 0.44-0.94, p = 0.021), whereas they had higher odds of reporting educational problems (OR 1.68, 95% CI 1.04-2.72, p = 0.035). Conclusions: Adolescents aged sixteen to eighteen years presenting to the ED with problems related to mental health are an important subgroup of ED presentations and should be thoroughly assessed for suicidal ideation, substance use, as well as familial and educational problems. Assessment and establishment of post-ED care are of particular importance in this vulnerable patient group. abstract_id: PUBMED:26876459 Emergency Responses and Health Consequences after the Fukushima Accident; Evacuation and Relocation. The Fukushima accident was a compounding disaster following the strong earthquake and huge tsunami. The direct health effects of radiation were relatively well controlled considering the severity of the accident, not only among emergency workers but also residents. Other serious health issues include deaths during evacuation, collapse of the radiation emergency medical system, increased mortality among displaced elderly people and public healthcare issues in Fukushima residents. The Fukushima mental health and lifestyle survey disclosed that the Fukushima accident caused severe psychological distress in the residents from evacuation zones. In addition to psychiatric and mental health problems, there are lifestyle-related problems such as an increased proportion of those overweight, an increased prevalence of hypertension, diabetes mellitus and dyslipidaemia and changes in health-related behaviours among evacuees; all of which may lead to an increased cardiovascular disease risk in the future. The effects of a major nuclear accident on societies are diverse and enduring. The countermeasures should include disaster management, long-term general public health services, mental and psychological care, behavioural and societal support, in addition to efforts to mitigate the health effects attributable to radiation. abstract_id: PUBMED:37670954 Correlates of child mental health and substance use related emergency department visits in Ontario: A linked population survey and administrative health data study. Introduction: Knowledge of the sociodemographic, behavioural, and clinical characteristics of children visiting emergency departments (EDs) for mental health or substance use concerns in Ontario, Canada is lacking.
Objectives: Using data from a population-based survey linked at the individual level to administrative health data, this study leverages a provincially representative sample and quasi-experimental design to strengthen inferences regarding the extent to which children's sociodemographic, behavioural, and clinical characteristics are associated with the risk of a mental health or substance use related ED visit. Methods: 9,301 children aged 4-17 years participating in the 2014 Ontario Child Health Study were linked retrospectively (6 months) and prospectively (12 months) with administrative health data on ED visits from the National Ambulatory Care Reporting System. Modified Poisson regression was used to examine correlates of mental health and substance use related ED visits among children aged 4-17 years over a 12-month period following their survey completion date, adjusting for ED visits in the 6 months prior to their survey completion date. Subgroup analyses of youths aged 14-17 years who independently completed survey content related to peer victimisation, substance use, and suicidality were also conducted. Results: Among children aged 4-17 years, older age, parental immigrant status, internalising problems, and perceived need for professional help were statistically significant correlates that increased the risk of a mental health or substance use related ED visit; low-income and suicidal ideation with attempt were statistically significant only among youths aged 14-17 years. Conclusions: Knowledge of the sociodemographic, behavioural, and clinical characteristics of children visiting EDs for mental health and substance use related concerns is required to better understand patient needs to coordinate effective emergency mental health care that optimises child outcomes, and to inform the development and targeting of upstream interventions that have the potential to prevent avoidable ED visits. Highlights: Growing rates of child mental health and substance use related ED visits have been observed internationally. A population-based survey linked at the individual level to administrative health data was used to examine the extent to which children's sociodemographic, behavioural, and clinical characteristics are associated with the risk of a mental health or substance use related ED visit in Ontario, Canada. Older age, low-income, parental immigrant status, perceived need for professional help, internalising problems, and suicidality increase the risk of an ED visit. Knowledge of the characteristics of children visiting EDs can be used to coordinate effective emergency mental health care that optimises child outcomes, and to inform the development and targeting of upstream interventions that have the potential to prevent avoidable ED visits. abstract_id: PUBMED:26954910 Prevalence and Sociodemographic Correlates of Mental Health Problems Among Emergency Department Patients With High-Risk and Dependent Alcohol Use Patterns. Objective: This study reports the prevalence of co-occurring mental health problems in a large convenience sample of emergency department (ED) patients with high-risk or dependent alcohol use patterns, as well as investigates associations between sociodemographic variables with co-occurring alcohol and mental health problems. Methods: Health educators screened adult patients from 12 large Southern California hospital ED/trauma units, collecting data about alcohol use behavior, sociodemographic data, and past 30-day problems with depression, anxiety, and cognitive function.
Participants (N = 1,369) scored within either high-risk (hazardous, nondependent) or severe-risk (likely dependent) alcohol use categories. Analyses examined associations between age, sex, annual family income, employment status, and race/ethnicity with past 30-day mental health problems. Results: Overall, prevalence rates of 65%, 67%, and 50% were found for past 30-day problems with depression, anxiety, and cognitive function, respectively. Lower age was associated with anxiety problems only. Female gender was strongly associated with depression and anxiety. Higher income was associated with anxiety; lower income was associated with cognitive function problems. Employment status was strongly associated with all three mental health problems, with the unemployed having about double the risk of the employed. Race was associated with both depression and anxiety. Conclusions: Results suggest that emergency department patients with high-risk drinking patterns experience high rates of depression, anxiety, and cognitive function problems. Vulnerable groups may be identified based on sociodemographic characteristics. Incorporating standardized screening for mental health problems in the emergency department may be appropriate, especially among those presenting with alcohol-related problems. abstract_id: PUBMED:33840399 Enhancing mental health awareness in emergency services (the ENHANcE I project): cross-sectional survey on mental health stigma among emergency services staff. Background: The number of mental health-related 999 calls to emergency services has increased in recent years. However, emergency services staff have an unfavourable reputation when it comes to supporting people experiencing mental health problems. Aims: To assess the levels of explicit and implicit mental health stigma among accident and emergency, ambulance and police staff, and draw comparisons with the general population. Additional analyses sought to identify which variables predict mental health stigma among emergency services staff. Method: A cross-sectional survey of 1837 participants, comprising four independent groups (accident and emergency, ambulance and police staff, and the general population). Results: Levels of mental health stigma across all four groups were lower than those reported in recent surveys of the general population by the 'Time to Change' campaign. Within this study, explicit levels of mental health stigma were lower among the general population compared with emergency services staff. There was no difference between emergency service professions, nor were there any between-group differences in terms of implicit mental health stigma. The only consistent predictors of mental health stigma were attitudes and future behavioural intentions, whereby increased stigma was predicted by increased fear, reduced sympathy and greater intended discrimination. Conclusions: Our findings suggest that levels of mental health stigma have improved over time, but there is room for improvement in emergency services staff. Interventions to improve mental health stigma may be most effective if, in line with the cognitive-behavioural model of stigma, they target attitudes and behavioural intentions. abstract_id: PUBMED:37442454 Clusters of substance use and mental health variables with emergency room visits in U.S. adults: The 2020 National Survey on Drug Use and Health. Background: This study evaluated the prevalence of emergency room (ER) visits, given numerous substance use and mental health variables in the past year. 
Methods: Data from 5206 emergency room visits out of 27,170 adults were extracted from the 2020 National Survey on Drug Use and Health. Oblique principal component cluster analysis was used to classify 39 substance use and mental health variables into disjoint clusters. Results: In 2020, the overall prevalence of ER visits was 21.9 %. Being female, age above 65 years, with insurance, low income and low education levels, and being African American increased the risk of ER visit. Nine clusters were made out of 39 substance use and mental health variables. Multivariate stepwise logistic regression analysis showed 15 substance use and mental health variables were significantly associated with ER visits including heavy alcohol use past 30 days in cluster 3, nicotine dependence and cigarettes use in cluster 4, major depressive episode, serious psychological distress, and suicidal plans in past year in cluster 5, any psychotherapeutics use in cluster 7, tranquilizers use and lorazepam products use in cluster 8, and any pain reliever, pain reliever misuse, hydrocodone products use, oxycodone products use, tramadol products use, and codeine products use in cluster 9. Conclusions: Several substance use and mental health problems, including nicotine dependence, illicit drugs, and serious mental health problems were among the common reasons for ER visits. These findings suggest the effective use of ER as the venue to implement interventions for substance use and mental health. abstract_id: PUBMED:38277798 Predicting suicide death after emergency department visits with mental health or self-harm diagnoses. Objective: Use health records data to predict suicide death following emergency department visits. Methods: Electronic health records and insurance claims from seven health systems were used to: identify emergency department visits with mental health or self-harm diagnoses by members aged 11 or older; extract approximately 2500 potential predictors including demographic, historical, and baseline clinical characteristics; and ascertain subsequent deaths by self-harm. Logistic regression with lasso and random forest models predicted self-harm death over 90 days after each visit. Results: Records identified 2,069,170 eligible visits, 899 followed by suicide death within 90 days. The best-fitting logistic regression with lasso model yielded an area under the receiver operating curve of 0.823 (95% CI 0.810-0.836). Visits above the 95th percentile of predicted risk included 34.8% (95% CI 31.1-38.7) of subsequent suicide deaths and had a 0.303% (95% CI 0.261-0.346) suicide death rate over the following 90 days. Model performance was similar across subgroups defined by age, sex, race, and ethnicity. Conclusions: Machine learning models using coded data from health records have moderate performance in predicting suicide death following emergency department visits for mental health or self-harm diagnosis and could be used to identify patients needing more systematic follow-up. abstract_id: PUBMED:26620364 Co-occurrence of substance use related and mental health problems in the Finnish social and health care system. Aim: Many studies have noted that substance abuse and mental health problems often occur simultaneously. The aim of the work reported here was to study the co-occurrence of mental health problems and problems related to substance use in a sample of clients visiting the Finnish social and health care services for issues related to substance use. 
We collected background information on the clients and considered the parts of the treatment system in which these clients were treated. Data And Methods: Survey data on intoxicant-related cases in the Finnish health care and social services were gathered on a single day in 2011. During the 24 hours of data collection, all intoxicant-related cases were reported and data were obtained for 11,738 intoxicant-related cases. In this analysis we took into account the clients' background variables, mental health variables, information on the treatment type and the main reasons for the client being in treatment. The χ(2) test, Fisher's exact test and binary logistic regression analysis were used. Results: Half of the visiting clients had both substance use related and mental health problems. The strongest factors associated with the co-occurrence of substance use related and mental health problems were female sex, younger age and single marital status. Clients with co-occurring problems were more often treated in the health care services, whereas clients with only substance use related problems were primarily treated in specialized services for the treatment of substance abuse. Conclusions: It is important to identify clients with co-occurring substance use related and mental health problems. In this study, half of the clients presenting to the Finnish social and health care treatment system had both these problems. abstract_id: PUBMED:20378281 Failure to treat tobacco use in mental health and addiction treatment settings: a form of harm reduction? In mental health and addiction treatment settings, failure to treat tobacco dependence has been rationalized by some as a clinical approach to harm reduction. That is, tobacco use is viewed as a less harmful alternative to alcohol or illicit drug use and/or other self-harm behaviors. This paper examines the impact of providers' failure to treat tobacco use on patients' alcohol and illicit drug use and associated high-risk behaviors. The weight of the evidence in the literature indicates: (1) tobacco use is a leading cause of death in patients with psychiatric illness or addictive disorders; (2) tobacco use is associated with worsened substance abuse treatment outcomes, whereas treatment of tobacco dependence supports long-term sobriety; (3) tobacco use is associated with increased (not decreased) depressive symptoms and suicidal risk behavior; (4) tobacco use adversely impacts psychiatric treatment; (5) tobacco use is a lethal and ineffective long-term coping strategy for managing stress, and (6) treatment of tobacco use does not harm mental health recovery. Failure to treat tobacco dependence in mental health and addiction treatment settings is not consistent with a harm reduction model. In contrast, emerging evidence indicates treatment of tobacco dependence may even improve addiction treatment and mental health outcomes. Providers in mental health and addiction treatment settings have an ethical duty to intervene on patients' tobacco use and provide available evidence-based treatments. Answer: Yes, mental health problems are associated with the use of Accident and Emergency (A&E) services and health-related harm. The study mapped in abstract PUBMED:17065176 found that mental health problems increased A&E service use and unmet health-related need within a total mental health population. 
It identified specific 'care populations' particularly vulnerable to accidents and self-harm, indicating a need for targeted services for mentally ill groups who may not access traditional health and social care services effectively. Additionally, the study in abstract PUBMED:36293777 highlighted that adolescents aged sixteen to eighteen years presenting to the ED with mental health-related problems are an important subgroup of ED presentations. They should be thoroughly assessed for suicidal ideation, substance use, as well as familial and educational problems, suggesting that mental health issues are a significant factor in ED visits among adolescents. Furthermore, abstract PUBMED:37670954 discusses the sociodemographic, behavioural, and clinical characteristics associated with the risk of mental health or substance use related ED visits among children, indicating that mental health problems and substance use are relevant factors for ED utilization in this population. Abstract PUBMED:26954910 reports high rates of depression, anxiety, and cognitive function problems among emergency department patients with high-risk drinking patterns, suggesting a strong association between mental health problems and the use of emergency services. Lastly, abstract PUBMED:33840399 indicates that emergency services staff have an unfavourable reputation when it comes to supporting people experiencing mental health problems, which could reflect the challenges faced by these services in dealing with mental health-related presentations. In summary, mental health problems are indeed associated with increased use of A&E services and health-related harm, as evidenced by the studies referenced above.
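Several of the abstracts in this example report group differences as odds ratios with 95% confidence intervals (for instance the sex differences in PUBMED:36293777). A brief Python sketch of the standard 2x2-table calculation (log odds ratio with a normal approximation) is below; the counts are made up for illustration and are not taken from the studies.

import math

# Odds ratio and 95% CI from a 2x2 table; counts are illustrative only.
def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """a, b = outcome present/absent in group 1; c, d = outcome present/absent in group 2."""
    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(odds_ratio) - z * se_log_or)
    upper = math.exp(math.log(odds_ratio) + z * se_log_or)
    return odds_ratio, lower, upper

# Example: an outcome recorded in 10 of 238 males versus 96 of 374 females.
or_value, lower, upper = odds_ratio_ci(a=10, b=228, c=96, d=278)
print(f"OR {or_value:.2f} (95% CI {lower:.2f}-{upper:.2f})")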
Instruction: Is cholecystectomy a reasonable treatment option for simple gallbladder polyps larger than 10 mm? Abstracts: abstract_id: PUBMED:25892875 Is cholecystectomy a reasonable treatment option for simple gallbladder polyps larger than 10 mm? Aim: To determine the relevance of the 10-mm size criterion of the generally accepted surgical indication for gallbladder polyps (GBPs). Methods: We collected data of patients who were confirmed to have GBPs through cholecystectomy at Samsung Medical Center between January 1997 and December 2012. Among the patients who underwent cholecystectomy for GBP, those with definite evidence of malignancy such as adjacent organ invasion, metastasis on preoperative imaging studies, polyp larger than 20 mm, absence of preoperative imaging study results, and patients having gallstones were excluded. We retrospectively collected and analyzed information on patient's clinical characteristics, symptoms, ultrasonographic findings, and blood laboratory tests. Results: A total of 836 patients who had undergone cholecystectomy were retrospectively analyzed. Seven hundred eighty patients (93%) had benign polyps, whereas 56 patients (7%) had malignant polyps. Of the 56 patients with malignancy, 4 patients (7%) had borderline GBP (10-12 mm) and a patient had small GBP (< 10 mm) with T2 stage. We conducted an ROC curve analysis to verify the 10-mm size criterion (AUC = 0.887, SD = 0.21, P < 0.001). In the ROC curve for polyp size and malignancy, sensitivity and specificity of the 10-mm size criterion were 98.2% and 19.6%, respectively. The specificity of the 11-mm and 12-mm size criteria was 44.6% and 56%, respectively, whereas the sensitivity of these two size criteria was similar. We defined GBPs of 10 to 12 mm as borderline-sized GBPs, which were found in 411 patients (49%). In this group, there was a significant difference in age between patients with benign and malignant GBPs (47 years vs 60 years, P < 0.05). Conclusion: GBPs larger than 13 mm need immediate excision whereas for borderline-sized GBPs detected in young patients, careful medical observation can be a rational decision. abstract_id: PUBMED:29843968 Reasonable cholecystectomy of gallbladder polyp - 10 years of experience. Background And Objective: Although the incidence of carcinoma is not high in gallbladder polyps, it is essential to diagnose gallbladder cancer at an early stage to achieve a good therapeutic outcome. Therefore, the aim of this study was to define the characteristics of gallbladder polyps to establish sound criteria for surgical indications. Methods: In the current study, data from 516 patients with gallbladder polyps who underwent cholecystectomy were reviewed to correlate clinical features with histopathologic findings and identify risk factors with receiver-operating characteristic curves (ROCs). Results: Among the 516 patients who underwent cholecystectomy, 24 patients (4.6%) had cancerous change. The cancer group was significantly older (65.5 years (median, range 35-85)) than the non-cancer group (42 years (median, range 23-82)) (p < 0.001). Among the cancer group, the preoperative polyp size on ultrasonography was significantly larger (14 mm (median, range 9-30)) than the polyps in the non-cancer group (10.4 mm (median, range 1.9-45)) (p < 0.001). Using the ROC curve and considering the sensitivity and specificity for predicting malignant polyps, 12 mm may be a reasonable cutoff for considering a malignant polyp.
Conclusions: Gallbladder polyps with 10-11 mm in asymptomatic young patients (less than 50 years old) have low risk of malignancy, and therefore, a careful "wait and see with follow up by using ultrasonography strategy" might be more appropriate than immediate cholecystectomy. abstract_id: PUBMED:19160315 Cholecystectomy for gallbladder polyp. Background: The management of gallbladder polyps is controversial. Cholecystectomy has been recommended for gallbladder polyps larger than 10 mm because of the association with gallbladder cancer. Cholecystectomy has also been suggested for gallbladder polyps smaller than 10 mm in patients with biliary type of symptoms. Objectives: The aim of this review is to compare the benefits (relief of symptoms, decreased incidence of gallbladder cancer) and harms (surgical morbidity) of cholecystectomy in patients with gallbladder polyp(s). Search Strategy: We searched The Cochrane Hepato-Biliary Group Controlled Trials Register, the Cochrane Central Register of Controlled Trials (CENTRAL) in The Cochrane Library, MEDLINE, EMBASE, and Science Citation Index Expanded until July 2008 to identify the randomised trials. Selection Criteria: Only randomised clinical trials (irrespective of language, blinding, or publication status) comparing cholecystectomy and no cholecystectomy were considered for the review. Data Collection And Analysis: We planned to collect the data on the characteristics, methodological quality, mortality, number of patients in whom symptoms were improved or cured from the one identified trial. We planned to analyse the data using the fixed-effect and the random-effects models using RevMan Analysis. For each outcome we planned to calculate the risk ratio (RR) with 95% confidence intervals based on intention-to-treat analysis. Main Results: We were unable to identify any randomised clinical trials comparing cholecystectomy versus no cholecystectomy in patients with a gallbladder polyp. Authors' Conclusions: There are no randomised trials comparing cholecystectomy versus no cholecystectomy in patients with gallbladder polyps. Randomised clinical trials with low bias -risk are necessary to address the question of whether cholecystectomy is indicated in gallbladder polyps smaller than10 mm. abstract_id: PUBMED:26188659 US characteristics for the prediction of neoplasm in gallbladder polyps 10 mm or larger. Purpose: To evaluate the characteristics of gallbladder polyps 10 mm or larger to predict a neoplasm in US examinations. Materials And Methods: Fifty-three patients with gallbladder polyps ≥ 10 mm with follow-up images or pathologic diagnosis were included in the retrospective study. All images and reports were reviewed to determine the imaging characteristics of gallbladder polyps. Univariate and multivariate analyses were used to evaluate predictors for a neoplastic polyp. Results: A neoplastic polyp was verified in 12 of 53 patients and the mean size was 13.9 mm. The univariate analysis revealed that adjacent gallbladder wall thickening, larger size (≥15 mm), older age (≥57 years), absence of hyperechoic foci in a polyp, CT visibility, sessile shape, a solitary polyp, and an irregular surface were significant predictors for a neoplastic polyp. In the multivariate analysis, larger size (≥15 mm) was a significant predictor for a neoplastic polyp. Conclusion: A polyp size ≥15 mm was the strongest predictor for a neoplastic polyp with US. 
The hyperechoic foci in a polyp and CT visibility would be useful indicators for the differentiation of a neoplastic polyp, in addition to the established predictors. Key Points: • A polyp size ≥15 mm is the strongest predictor for a neoplastic polyp with US. • Hyperechoic foci in a polyp and CT visibility are new predictors. • The rate of malignancy is low in polyps even 10 mm or larger (15.1 %). abstract_id: PUBMED:1643988 Polypoid lesions of the gallbladder--preventive cholecystectomy? During nine years (1979-1988) there was carried out cholecystectomy in 21 patients for gallbladder polyps of unknown histology. The sex ratio female to male was 9 to 12, median age 51 (29-76) years. Most of the polyps were smaller than 5 mm, all proved to be cholesterol polyps, showing multiple localisations. Two benign adenomas had a size of 6 resp. 9 mm. Among the polyps with a diameter of more than 10 mm we found only two cholesterol polyps, but one heterotopia, one adenoma and one adenocarcinoma. Cholecystectomy is indicated for polyps with a diameter of more than 10 mm or suspicious sonographic criteria, especially sessile lesions or concomittant thickening of the gallbladder wall. abstract_id: PUBMED:33590795 Malignancy in elective cholecystectomy due to gallbladder polyps or thickened gallbladder wall: a single-centre experience. Introduction: Gallbladder cancer is a rare but aggressive malignancy. Surgical resection is recommended for gallbladder polyps ≥10 mm. For gallbladder wall thickening, resection is recommended if malignancy cannot be excluded. The incidence of gallbladder malignancy after cholecystectomy with indications of polyps or wall thickening in the Swedish population is not known. Material/methods: A retrospective study was performed at Linköping University Hospital and included patients who underwent cholecystectomy 2010 - 2018. All cholecystectomies performed due to gallbladder polyps or gallbladder wall thickening without other preoperative malignant signs were identified. Preoperative radiological examinations were re-analysed by a single radiologist. Medical records and histopathology reports were analysed. Results: In all, 102 patients were included, of whom 65 were diagnosed with gallbladder polyps and 37 with gallbladder wall thickening. In each group, one patient (1.5% and 2.7% in each group) had gallbladder malignancy ≥ pT1b.Two (3.1%) and three (8.1%) patients with gallbladder malignancy &lt; T1b were identified in each group. Discussion/conclusion: This study indicates that the incidence of malignancy is low without other malignant signs beyond gallbladder polyps and/or gallbladder wall thickening. We propose that these patients should be discussed at a multidisciplinary tumour board. If the polyp is 10-15 mm or if the gallbladder wall is thickened but no other malignant signs are observed, cholecystectomy can be safely performed by an experienced general surgeon at a general surgery unit. If the histopathology indicates ≥ pT1b, the patient should be referred immediately to a hepatobiliary centre for liver and lymph node resection. abstract_id: PUBMED:8153864 Role of laparoscopic cholecystectomy in treating gallbladder polyps. Since the application of laparoscopic cholecystectomy (Lap C) to gallbladder polyps has not yet been fully evaluated, we performed Lap C on 26 patients with gallbladder polyps. Pathological examinations showed adenocarcinoma in three patients, adenoma in two, and cholesterol polyp in 21. 
Preoperative diagnoses of the cases with adenocarcinoma were a cholesterol polyp in one patient and an adenoma in two. Adenocarcinoma was confirmed to reside in the mucosa without any invasion of lymphatic ducts or small vessels in the three patients. This procedure was considered to be sufficient for this grade of cancer, and, therefore, no additional operations were performed. At present, our policy is to resect by Lap C a gallbladder polyp having a maximum size larger than 10 mm and a tendency to grow or presenting with suspicion of adenoma. When cancer is suspected by preoperative examinations, however, traditional surgery may be recommended. abstract_id: PUBMED:28682049 Upper Gastrointestinal Conditions: Gallbladder Conditions. Asymptomatic patients with gallstones typically do not require treatment. Surgical intervention is indicated in most symptomatic patients with gallstones. For patients with uncomplicated cholecystitis, laparoscopic cholecystectomy performed within 24 hours of admission is associated with significantly decreased morbidity, hospital length of stay, and cost. Most gallbladder polyps are found incidentally during ultrasonography. Patients with gallbladder polyps are at increased risk of malignant transformation if they are older than 50 years; the gallbladder wall is thickened; or the polyp is adenomatous, solitary, larger than 10 mm, sessile in appearance, or has a rapid increase in size. It is crucial but sometimes difficult to differentiate between benign and malignant polyps using transabdominal ultrasonography. Cholecystectomy is recommended for all patients with a polyp larger than 1 cm. abstract_id: PUBMED:11525368 Role of laparoscopic cholecystectomy in the management of polypoid lesions of the gallbladder. This retrospective clinicohistopathologic study was performed to delineate the role of laparoscopic cholecystectomy in the management of polypoid lesions of the gallbladder. One hundred forty-three consecutive patients who had a preoperative sonographic diagnosis of polypoid lesions of the gallbladder with a diameter less than 1.5 cm and who underwent laparoscopic cholecystectomy at Cathay General Hospital were included in the analysis. Histopathologic study showed that 22 (15.4%) patients had true tumors, including adenoma (16), adenoma with focal adenocarcinoma (2), adenocarcinoma (3), and carcinoid tumor (1). Tumorlike lesions were found in 121 (84.6%) patients and included cholesterol polyp (106), adenomyomatous hyperplasia (10), inflammatory polyp (3), and papillary hyperplasia (2). The mean diameter of malignant polypoid lesions of the gallbladder was 1.35 +/- 0.42 cm, which was significantly larger than that of cholesterol polyps (0.66 +/- 0.40 cm, P = 0.0001) but not significantly larger than that of adenomyomatous hyperplasias (1.12 +/- 0.42 cm) and adenomas (1.08 +/- 0.47 cm). The mean age of patients with malignant polypoid lesions of the gallbladder (61.2 +/- 13.3 years old) was significantly older than that of patients with adenomyomatous hyperplasia (46.6 +/- 13.4 years, P = 0.03), cholesterol polyps (44.5 +/- 10.5 years, P = 0.0003), and adenomas (41.4 +/- 9.4 years, P = 0.0008). Clinical follow-up showed that most (98.6%) patients benefited from the minimal invasiveness of laparoscopic cholecystectomy with satisfactory surgical results. We conclude that laparoscopic cholecystectomy is a reliable, safe, and minimally invasive biopsy procedure and definite management of polypoid lesions of the gallbladder with a diameter less than 1.5 cm. 
abstract_id: PUBMED:31481314 Clinical relevance of gallbladder polyps; is cholecystectomy always necessary? Background: Gallbladder polyps are common incidental findings during abdominal ultrasonography. Cholecystectomy is recommended for polyps equal or greater than 10 mm on ultrasound due to their malignant potential. However, the majority of lesions appear to be pseudopolyps with no malignant potential. Our aim was to determine the correlation between ultrasonographic findings and histopathological findings after cholecystectomy for gallbladder polyps in two institutions. Method: A retrospective analysis was performed at two Dutch institutions of patients who underwent cholecystectomy. All cholecystectomies for suspected gallbladder polyps between January 2010 and August 2017 were included. Ultrasonographic and histopathological reports were analyzed. Results: A total of 108 patients underwent cholecystectomy for gallbladder polyps. At abdominal ultrasound sixty-five patients (60.2%) were diagnosed with multiple gallbladder polyps. The mean diameter of the polyps was 11 mm. On pathological examination after cholecystectomy, only three specimens harbored true polyps. No anomalies were found in 48 (44%) patients and 51 (47%) had cholesterolosis. Conclusion: The prevalence of true gallbladder polyps was much lower in this study than reported in literature. After cholecystectomy for gallbladder polyps diagnosed by ultrasound, 97% of patients had non-neoplastic or not identifiable lesions in the gallbladder. These findings question the usefulness of current guidelines for management of suspected gallbladder polyps. Answer: Cholecystectomy is considered a reasonable treatment option for simple gallbladder polyps larger than 10 mm, as polyps of this size have a higher potential for malignancy. Studies have shown that gallbladder polyps larger than 10 mm are associated with an increased risk of cancerous change, and the risk of malignancy increases with the size of the polyp (PUBMED:29843968, PUBMED:1643988, PUBMED:33590795). For instance, one study found that gallbladder polyps larger than 13 mm should be excised immediately, while for borderline-sized polyps (10-12 mm), careful medical observation may be a rational decision, especially in younger patients (PUBMED:25892875). Another study suggested that a "wait and see with follow-up by using ultrasonography strategy" might be more appropriate than immediate cholecystectomy for asymptomatic young patients (less than 50 years old) with gallbladder polyps measuring 10-11 mm (PUBMED:29843968). Furthermore, a retrospective study indicated that a polyp size ≥15 mm was the strongest predictor for a neoplastic polyp with ultrasound, suggesting that larger polyps are more likely to be neoplastic (PUBMED:26188659). Additionally, guidelines recommend cholecystectomy for all patients with a polyp larger than 1 cm due to the increased risk of malignant transformation (PUBMED:28682049). However, it is also important to note that the majority of gallbladder polyps, even those larger than 10 mm, may be non-neoplastic or pseudopolyps with no malignant potential (PUBMED:31481314). Therefore, the decision to perform cholecystectomy should be individualized based on factors such as the patient's age, the presence of symptoms, polyp size, growth rate, and ultrasonographic characteristics. 
In some cases, a conservative approach with regular follow-up may be considered, particularly for smaller polyps in younger, asymptomatic patients (PUBMED:29843968, PUBMED:25892875). In conclusion, while cholecystectomy is a reasonable option for gallbladder polyps larger than 10 mm due to the potential for malignancy, the decision should be tailored to the individual patient's risk factors and polyp characteristics.
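The size-cutoff reasoning that runs through these abstracts (for example, 98.2% sensitivity but only 19.6% specificity for the 10-mm criterion in PUBMED:25892875, versus better specificity at 11-13 mm) can be made concrete with a small worked example. The sketch below is purely illustrative: the cutoff_metrics helper, the toy polyp sizes and the pathology labels are assumptions introduced for demonstration and are not data or code from any of the cited studies.

    # Illustrative only: evaluate a polyp-size cutoff as a crude malignancy classifier.
    def cutoff_metrics(sizes_mm, is_malignant, cutoff_mm):
        """Sensitivity, specificity, PPV and NPV for the rule 'size >= cutoff predicts malignancy'."""
        tp = sum(1 for s, m in zip(sizes_mm, is_malignant) if s >= cutoff_mm and m)
        fn = sum(1 for s, m in zip(sizes_mm, is_malignant) if s < cutoff_mm and m)
        fp = sum(1 for s, m in zip(sizes_mm, is_malignant) if s >= cutoff_mm and not m)
        tn = sum(1 for s, m in zip(sizes_mm, is_malignant) if s < cutoff_mm and not m)
        sensitivity = tp / (tp + fn) if tp + fn else float("nan")
        specificity = tn / (tn + fp) if tn + fp else float("nan")
        ppv = tp / (tp + fp) if tp + fp else float("nan")
        npv = tn / (tn + fn) if tn + fn else float("nan")
        return sensitivity, specificity, ppv, npv

    # Toy example: ultrasound sizes (mm) and pathology labels for eight polyps.
    sizes = [6, 8, 9, 11, 12, 14, 16, 22]
    malignant = [False, False, False, False, False, True, True, True]
    for cutoff in (10, 12, 13):
        sens, spec, ppv, npv = cutoff_metrics(sizes, malignant, cutoff)
        print(f"cutoff {cutoff} mm: sens={sens:.2f} spec={spec:.2f} ppv={ppv:.2f} npv={npv:.2f}")

Raising the cutoff in this toy data buys specificity at little cost in sensitivity, which mirrors the pattern reported for the 11-13 mm criteria above; in real cohorts the trade-off is read off the full ROC curve rather than a handful of cases.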
Instruction: Is post-event processing a social anxiety specific or transdiagnostic cognitive process in the anxiety spectrum? Abstracts: abstract_id: PUBMED:25413026 Is post-event processing a social anxiety specific or transdiagnostic cognitive process in the anxiety spectrum? Background: Research on post-event processing (PEP), where individuals conduct a post-mortem evaluation of a social situation, has focused primarily on its relationship with social anxiety. Aims: The current study examined: 1) levels of PEP for a standardized event in different anxiety disorders; 2) the relationship between peak anxiety levels during this event and subsequent PEP; and 3) the relationship between PEP and disorder-specific symptom severity. Method: Participants with primary DSM-IV diagnoses of social anxiety disorder (SAD), obsessive compulsive disorder (OCD), panic disorder with/without agoraphobia (PD/A), or generalized anxiety disorder (GAD) completed diagnosis specific symptom measures before attending group cognitive behavioural therapy (CBT) specific to their diagnosis. Participants rated their peak anxiety level during the first group therapy session, and one week later rated PEP in the context of CBT. Results: The results indicated that all anxiety disorder groups showed heightened and equivalent PEP ratings. Peak state anxiety during the first CBT session predicted subsequent level of PEP, irrespective of diagnostic group. PEP ratings were found to be associated with disorder-specific symptom severity in SAD, GAD, and PD/A, but not in OCD. Conclusions: PEP may be a transdiagnostic process with relevance to a broad range of anxiety disorders, not just SAD. abstract_id: PUBMED:36398063 Post-Event Processing After Embarrassing Situations: Comparing Experience Sampling Data of Depressed and Socially Anxious Individuals. Background: Post-event processing (PEP) after social interactions (SIs) contributes to the persistence of social phobia (SP). This study investigated whether PEP as a transdiagnostic process also occurs in major depressive disorder (MDD) and controls. We also tested to what extent PEP was explained by trait levels of social anxiety (SA) or depression. Method: For seven days, a total of n = 165 patients (n = 47 SP, n = 118 MDD) and n = 119 controls completed five surveys per day on their smartphones. Event-based experience sampling was used. PEP was assessed following subjective embarrassment in SIs with two reliable items from the Post-Event Processing Questionnaire. Data were analysed via multilevel regression analyses. Results: Individuals with SP or MDD experienced more embarrassing SIs than controls and, accordingly, more PEP. The relative frequency of PEP after embarrassing SIs was equally high in all groups (86-96%). The groups did not differ regarding the amount of time PEP was experienced. After controlling trait depression, embarrassment occurred more frequently only in SP compared to controls. When controlling trait SA, between-group differences in indications of embarrassment, and consequently in PEP, dissipated. Conclusions: PEP could be interpreted as a common coping strategy among all individuals, while more frequent embarrassment might be specific for clinical groups. Embarrassment was primarily driven by SA. The alleviation of SA could lead to the reduction of embarrassment and, further, of PEP. On this basis, a model describing PEP in MDD is proposed, while current models of PEP in SP are complemented. 
abstract_id: PUBMED:30043718 Effects of a Brief Transdiagnostic Cognitive Behavioural Group Therapy on Disorder Specific Symptoms. Background: In recent years, cognitive behavioural group therapies (CBGT) have been increasingly deployed as a strategy to increase the efficiency and cost-effectiveness in treatment of common mental health problems. The vast majority of these therapies are disorder specific, but in the last few years there has been growing interest in transdiagnostic CBGT. Aims: The aim of this study was twofold: to evaluate the treatment effects of transdiagnostic CBGT on disorder specific symptoms and what (if any) differences would be observed in the treatment effects with regard to general as opposed to disorder specific symptoms measured pre- and post-treatment. Method: The participants were 233 adult patients diagnosed with depression and/or anxiety disorders. They underwent a 6-week transdiagnostic CBGT. To compare treatment effects on general and disorder specific symptoms, raw scores on all measures were converted to standardized scores. Results: Pre-post differences were significant and there was no evidence that treatment was differentially effective for general and disorder specific symptoms. Effect sizes ranged from medium to large. Conclusion: The 6-week transdiagnostic CBGT is feasible for a wide range of mood and anxiety disorders. The results indicate that low-intensity transdiagnostic group therapies may have similar effects on both general and disorder specific symptoms. abstract_id: PUBMED:25673923 Transdiagnostic Case Conceptualization of Emotional Problems in Youth with ASD: An Emotion Regulation Approach. Youth with autism spectrum disorder often struggle to cope with co-occurring anxiety, depression, or anger, and having both internalizing and externalizing symptoms is a common clinical presentation. A number of authors have designed cognitive-behavioral interventions to address transdiagnostic factors related to multiple emotional problems, although none have applied this focus to youth with ASD. The current review article describes how a transdiagnostic emotion regulation framework may inform cognitive-behavioral interventions for youth with ASD, which until now have focused almost exclusively on anxiety. Research is needed to empirically test how a transdiagnostic intervention can address the processes of emotion regulation and assist youth with ASD to cope with their emotional disorders. abstract_id: PUBMED:29298554 Examining the Relationships Among Self-Compassion, Social Anxiety, and Post-Event Processing. Post-event processing refers to negative and repetitive thinking following anxiety provoking social situations. Those who engage in post-event processing may lack self-compassion in relation to social situations. As such, the primary aim of this research was to evaluate whether those high in self-compassion are less likely to engage in post-event processing and the specific self-compassion domains that may be most protective. In study 1 ( N = 156 undergraduate students) and study 2 ( N = 150 individuals seeking help for social anxiety and shyness), participants completed a battery of questionnaires, recalled a social situation, and then rated state post-event processing. Self-compassion negatively correlated with post-event processing, with some differences depending on situation type. Even after controlling for self-esteem, self-compassion remained significantly correlated with state post-event processing. 
Given these findings, self-compassion may serve as a buffer against post-event processing. Future studies should experimentally examine whether increasing self-compassion leads to reduced post-event processing. abstract_id: PUBMED:32755909 Can I Really Do This? An Examination of Anticipatory Event Processing in Social Anxiety Disorder. Clark and Wells (1995) posit that anticipatory processing before a social situation serves to maintain social anxiety. More specifically, ruminative processes similar to post-event processing (PEP) may occur in anticipation of anxiety provoking social events, and in this article, we have labelled this type of anticipatory rumination anticipatory event processing (AnEP). Participants (n = 75) with social anxiety disorder (SAD) completed measures of anticipatory event processing, trait anxious rumination, social anxiety, state anxiety, and PEP, in the context of completing videotaped exposures twice as part of manual-based group cognitive behavioral therapy. AnEP was significantly positively associated with trait anxious rumination and social anxiety and was associated with state anxiety during the first videotaping. AnEP at the two time points was significantly correlated and decreased across the two taped exposures. Greater AnEP at the first taping was associated with greater PEP the following week. PEP after the first videotaped exposure then significantly related to AnEP for the second videotaped exposure several weeks later. Discussion focuses on the similarities between PEP and AnEP as well as implications for cognitive models and treatment of SAD. abstract_id: PUBMED:33410382 Decentring as a moderator of the associations of anticipatory and post-event processing with social anxiety. Recent studies have shown that decentring protects against social anxiety, but no research to date has explored the way it interacts with cognitive risk factors for social anxiety. The present study aimed to examine decentring as a moderator of the association of anticipatory and post-event processing with social anxiety. An unselected student sample (N = 444) completed questionnaires assessing anticipatory/post-event processing, decentring, and social anxiety. The data were analysed with structural equation modelling and the latent moderated structural equations (LMS) method. Results supported the moderating role of decentring in the relationship of anticipatory processing and social anxiety, but did not find evidence of moderation for the association of post-event processing and social anxiety, after accounting for the role of anticipatory processing. Limitations and clinical implications for the protective effects of decentring on social anxiety are discussed. abstract_id: PUBMED:32326738 Evaluating Structural Models of Cognitive Vulnerabilities: Transdiagnostic and Specific Pathways to Internalizing Symptoms. The structure of cognitive vulnerabilities to anxiety and depression was examined via a hierarchical approach to examine the usefulness of a bifactor model for identifying a broad transdiagnostic (i.e., common core) factor versus disorder-specific variables (i.e., unique dimensions) in predicting internalizing psychopathology. Several models (i.e., single factor, correlated factor, single hierarchical, and bifactor models) were evaluated in undergraduate (n = 351) and adolescent (n = 385) samples. Across both samples, the bifactor model exhibited comparable good fit as the correlated and single hierarchical models. 
This model comprised a core transdiagnostic vulnerability factor and six specific factors (i.e., negative cognitive style, dysfunctional attitudes, ruminative style, intolerance of uncertainty, anxiety sensitivity, and fear of negative evaluation). Although the transdiagnostic factor predicted a general internalizing symptom factor, unique fear-related specific-level associations between individual vulnerability and symptom remained significant. Moreover, the transdiagnostic vulnerability factor predicted internalizing symptoms, even after controlling for personality and gender. These findings highlight the importance of advancing an integrative etiologic model of internalizing psychopathology. abstract_id: PUBMED:31030874 Repetitive Thinking in Social Anxiety Disorder: Are Anticipatory Processing and Post-Event Processing Facets of an Underlying Unidimensional Construct? Existing literature suggests that anticipatory processing and post-event processing-two repetitive thinking processes linked to social anxiety disorder (SAD)-might be better conceptualized as facets of an underlying unidimensional repetitive thinking construct. The current study tested this by examining potential factor structures underlying anticipatory processing and post-event processing. Baseline data from two randomized controlled trials, consisting of 306 participants with SAD who completed anticipatory processing and post-event processing measures in relation to a speech task, were subjected to confirmatory factor analysis. A bifactor model with a General Repetitive Thinking factor and two group factors corresponding to anticipatory processing and post-event processing best fit with the data. Further analyses indicated an optimal model would include only the General Repetitive Thinking factor (reflecting anticipatory processing and a specific aspect of post-event processing) and Post-event Processing group factor (reflecting another specific aspect of post-event processing that is separable), providing evidence against a unidimensional account of repetitive thinking in SAD. Analyses also indicated that the General Repetitive Thinking factor had moderately large associations with social anxiety and life interference (rs = .43 to .47), suggesting its maladaptive nature. The separable Post-event Processing group factor only had small associations with social anxiety (rs = .16 to .27) and was not related to life interference (r = .11), suggesting it may not, in itself, be a maladaptive process. Future research that further characterises the bifactor model components and tests their utility has the potential to improve the conceptualisation and assessment of repetitive thinking in SAD. abstract_id: PUBMED:34165065 The relationship between cognitive functioning and psychopathology in patients with psychiatric disorders: a transdiagnostic network analysis. Background: Patients with psychiatric disorders often experience cognitive dysfunction, but the precise relationship between cognitive deficits and psychopathology remains unclear. We investigated the relationships between domains of cognitive functioning and psychopathology in a transdiagnostic sample using a data-driven approach. Methods: Cross-sectional network analyses were conducted to investigate the relationships between domains of psychopathology and cognitive functioning and detect clusters in the network. 
This naturalistic transdiagnostic sample consists of 1016 psychiatric patients who have a variety of psychiatric diagnoses, such as depressive disorders, anxiety disorders, obsessive-compulsive and related disorders, and schizophrenia spectrum and other psychotic disorders. Psychopathology symptoms were assessed using various questionnaires. Core cognitive domains were assessed with a battery of automated tests. Results: Network analysis detected three clusters that we labelled: general psychopathology, substance use, and cognition. Depressive and anxiety symptoms, verbal memory, and visual attention were the most central nodes in the network. Most associations between cognitive functioning and symptoms were negative, i.e. increased symptom severity was associated with worse cognitive functioning. Cannabis use, (subclinical) psychotic experiences, and anhedonia had the strongest total negative relationships with cognitive variables. Conclusions: Cognitive functioning and psychopathology are independent but related dimensions, which interact in a transdiagnostic manner. Depression, anxiety, verbal memory, and visual attention are especially relevant in this network and can be considered independent transdiagnostic targets for research and treatment in psychiatry. Moreover, future research on cognitive functioning in psychopathology should take a transdiagnostic approach, focusing on symptom-specific interactions with cognitive domains rather than investigating cognitive functioning within diagnostic categories. Answer: Post-event processing (PEP) is not specific to social anxiety but is a transdiagnostic cognitive process relevant to a broad range of anxiety disorders. Research has shown that individuals with various anxiety disorders, including social anxiety disorder (SAD), obsessive-compulsive disorder (OCD), panic disorder with/without agoraphobia (PD/A), and generalized anxiety disorder (GAD), exhibit heightened and equivalent levels of PEP. Furthermore, peak anxiety levels during an event predicted subsequent levels of PEP across different anxiety disorders. PEP was also found to be associated with disorder-specific symptom severity in SAD, GAD, and PD/A, but not in OCD (PUBMED:25413026). Additionally, studies have indicated that PEP occurs in major depressive disorder (MDD) as well as in socially anxious individuals, suggesting that PEP could be a common coping strategy among all individuals. The frequency of PEP after embarrassing social interactions was equally high across groups with SP, MDD, and controls. The study also found that embarrassment and subsequent PEP were primarily driven by social anxiety (PUBMED:36398063). Moreover, transdiagnostic cognitive-behavioral group therapies (CBGT) have been shown to be effective for a wide range of mood and anxiety disorders, indicating that such therapies may impact both general and disorder-specific symptoms, including PEP (PUBMED:30043718). In summary, PEP appears to be a transdiagnostic process that is not exclusive to social anxiety but is relevant across the anxiety spectrum and potentially other disorders such as MDD. This suggests that interventions targeting PEP could have broad applicability in the treatment of various anxiety-related conditions (PUBMED:25413026; PUBMED:36398063).
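Several of these studies (e.g., PUBMED:36398063) analyse repeated, event-sampled PEP ratings with multilevel regression because surveys are nested within participants. The sketch below shows one conventional way such a random-intercept model is specified; it uses simulated data and assumed variable names (subject, group, pep_minutes) and is not an analysis from any cited study.

    # Illustrative multilevel model: repeated PEP ratings nested within participants.
    # Simulated data only; the group effects below are arbitrary, not study estimates.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    groups = ["control", "SP", "MDD"]
    rows = []
    for subject in range(90):
        group = groups[subject % 3]
        subject_intercept = rng.normal(0, 5)           # between-person variation
        for survey in range(10):                        # repeated surveys per person
            base = {"control": 10, "SP": 18, "MDD": 16}[group]
            rows.append({
                "subject": subject,
                "group": group,
                "pep_minutes": max(0.0, base + subject_intercept + rng.normal(0, 4)),
            })
    df = pd.DataFrame(rows)

    # Random-intercept model: fixed effect of diagnostic group, random effect of subject.
    model = smf.mixedlm("pep_minutes ~ C(group, Treatment('control'))", data=df, groups=df["subject"])
    result = model.fit()
    print(result.summary())

A binary outcome (PEP present or absent after an embarrassing interaction) would call for a mixed-effects logistic model instead, but the nesting logic is the same.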
Instruction: Best prognostic factor of neuroendocrine tumors: Grade or Stage? Abstracts: abstract_id: PUBMED:27852541 Best prognostic factor of neuroendocrine tumors: Grade or Stage? A multidisciplinary single-center study. Background/aims: Currently, there is no definite consensus about the prognostic factors of neuroendocrine tumors (NETs). We evaluate epidemiology, survival and especially prognostic factors in NETs. Materials And Methods: Patients who had a NET and were diagnosed between 2000 and 2014 at a tertiary care center were included. Demographic data, tumor characteristics and survival rates were evaluated, retrospectively. Results: Two-hundred and thirty-three patients (123 male, 110 female; median age, 55 years [16-92 years]) took part in the study. Primary NET sites were the lung (n=56), stomach (n=50), pancreas (n=39), colorectal (n=21), small intestine (n=19), and appendix (n=19). According to the NET classification by the WHO in 2010, 60% (n=140) of patients were grade-1, 15% (n=35) were grade-2, and 25% (n=58) were grade-3. According to TNM staging, 88 patients (37.8%) were stage I, 30 patients (12.8%) were stage II, 22 patients (9.5%) were stage III, and 93 patients (39.9%) were stage IV. Univariate analysis revealed significant associations between gender, age, grade, lymph node metastasis, distant metastasis, stage, and the number of organs impacted by metastases and overall survival. However, with multivariate analysis only age greater than 55 years, advancing grade, and inoperable tumors were significantly associated with shortened survival. Five-year survival was 81% in grade-1, 34% in grade-2, and 9% in grade-3 NETs. Conclusion: This study is the most comprehensive study in Turkey that has evaluated NETs using a multidisciplinary approach. Also, we suggest that age, operability and especially grade rather than stage are the most important prognostic factors in NETs. abstract_id: PUBMED:34970848 Clinical characteristics and prognostic model for extensive-stage small cell lung cancer: A retrospective study over an 8-year period. Background: Small cell lung cancer (SCLC) is a highly aggressive neuroendocrine tumor with a short replication time and a rapid growth rate. Prognostic factors for SCLC in clinical practice are scarce. Retrospective analysis of 8-year extensive-stage SCLC data from the Department Respiratory and Intensive Care Unit, Peking Union Medical College Hospital (Beijing, China) was performed to develop a risk prediction model that can facilitate the identification of extensive-stage SCLC with differing prognosis in clinical practice. Methods: A retrospective analysis of data from patients with extensive-stage SCLC at a single-center from January 2013 to January 2021, including age, sex, ECOG physical score, immunohistochemistry (CgA, Syn, CD56, TTF1, and Ki67), staging, treatment regimen, laboratory examinations, and survival period, was performed. Clinical variables with potential prognostic significance were screened by univariate Cox analysis. Next, multifactor Cox risk prediction regression analysis was performed to establish an extensive-stage SCLC risk prognostic model. Survival curves and ROC curves for high and low risk groups were plotted according to risk scores. Nomogram and calibration curves were developed to assess the accuracy of the risk prediction model. Results: This study included 300 patients who were diagnosed with extensive-stage SCLC at our center from January 2013 to January 2021. 
The most common first presentation was respiratory symptoms, especially cough (162, 54%). The most common extra-thoracic metastatic organs were bone (36.3%), liver (24.7%), brain (15.7%), and adrenal glands (15.7%). A total of 99% of patients received first-line systemic therapy, with 86.3% of patients treated with platinum-etoposide and 10.7% of patients treated with immune checkpoint inhibitor combined with platinum-etoposide backbone. First-line progression-free survival was up to 198 days, and the median OS was 439 days. After Cox regression screening and backward stepwise selection, "time from initial therapy to relapse or progression (PFS1), liver metastases, adrenal metastases, M stage and first-line treatment pattern" were retained to establish a prognostic model with an AUC value of 0.763. The prognostic model was shown as a nomogram with good agreement between predicted and observed outcomes. Conclusions: The first-line treatment of SCLC patients admitted to our hospital in the past 8 years was relatively standardized, and the progression-free survival and OS were slightly longer than those reported in the literature. We developed a prognostic risk score model for extensive-stage SCLC to calculate individual survival in clinical practice. abstract_id: PUBMED:34964186 Relevant prognostic factors in patients with stage IV small intestine neuroendocrine neoplasms. There are few, but controversial data on the prognostic role of upfront primary tumour resection and mesenteric lymph node dissection (PTR) in patients with diffuse metastatic small intestinal neuroendocrine neoplasia (SI-NEN). Therefore, the prognostic role of PTR and other factors was determined in this setting. This retrospective cohort study included patients with stage IV SI-NETs with unresectable distant metastases without clinical and radiological signs of acute bowel obstruction or ischaemia. Patients diagnosed from January 2002 to May 2020 were retrieved from a prospective SI-NEN database. Disease specific overall survival (OS) was analysed with regard to upfront PTR and a variety of other clinical (e.g., gender, age, Hedinger disease, carcinoid syndrome, diarrhoea, laboratory parameters, metastatic liver burden, extrahepatic and extra-abdominal metastasis) and pathological (e.g., grading, mesenteric gathering) parameters by uni- and multivariate analysis. A total of 138 patients (60 females, 43.5%) with a median age of 60 years, of whom 101 (73%) underwent PTR and 37 (27%) did not, were included in the analysis. Median OS was 106 (95% CI: 72.52-139.48) months in the PTR group and 52 (95% CI: 30.55-73.46) in the non-PTR group (p = 0.024), but the non-PTR group had more advanced metastatic disease (metastatic liver burden ≥50% 32.4% vs. 13.9%). There was no significant difference between groups regarding the rate of surgery for bowel complications during a median follow-up of 51 months (PTR group 10.9% and non-PTR group 16.2%, p = 0.403). Multivariate analysis revealed age < 60 years, normal C-reactive protein (CRP) at baseline, absence of diarrhoea, less than 50% of metastatic liver burden, and treatment with PRRT as independent positive prognostic factors, whereas PTR showed a strong tendency towards better OS, but the level of significance was missed (p = 0.067). However, patients who underwent both PTR and peptide radioreceptor therapy (PRRT) had the best survival compared to the rest (137 vs. 73 months, p = 0.013). PTR in combination with PRRT significantly prolongs survival in patients with stage IV SI-NEN.
Prophylactic PTR also does not result in a lower reoperation rate for bowel complications compared to the non-PTR approach. abstract_id: PUBMED:29559553 A classification prognostic score to predict OS in stage IV well-differentiated neuroendocrine tumors. No validated prognostic tool is available for predicting overall survival (OS) of patients with well-differentiated neuroendocrine tumors (WDNETs). This study, conducted in three independent cohorts of patients from five different European countries, aimed to develop and validate a classification prognostic score for OS in patients with stage IV WDNETs. We retrospectively collected data on 1387 patients: (i) patients treated at the Istituto Nazionale Tumori (Milan, Italy; n = 515); (ii) European cohort of rare NET patients included in the European RARECAREnet database (n = 457); (iii) Italian multicentric cohort of pancreatic NET (pNETs) patients treated at 24 Italian institutions (n = 415). The score was developed using data from patients included in cohort (i) (training set); external validation was performed by applying the score to the data of the two independent cohorts (ii) and (iii) evaluating both calibration and discriminative ability (Harrell C statistic). We used data on age, primary tumor site, metastasis (synchronous vs metachronous), Ki-67, functional status and primary surgery to build the score, which was developed for classifying patients into three groups with differential 10-year OS: (I) favorable risk group: 10-year OS ≥70%; (II) intermediate risk group: 30% ≤ 10-year OS < 70%; (III) poor risk group: 10-year OS <30%. The Harrell C statistic was 0.661 in the training set, and 0.626 and 0.601 in the RARECAREnet and Italian multicentric validation sets, respectively. In conclusion, based on the analysis of three 'field-practice' cohorts collected in different settings, we defined and validated a prognostic score to classify patients into three groups with different long-term prognoses. abstract_id: PUBMED:37554699 Value of a preoperative prognostic nutritional index for the prognostic evaluation of gastric neuroendocrine carcinoma patients. Objective: To study the value of Onodera's prognostic nutrition index (PNI) in patients with gastric neuroendocrine cancer (G-NEC). Methods: The clinical data on 148 cases of G-NEC presented between March 2010 and April 2022 were retrospectively analyzed. The relationship between the clinical characteristics of the patients and PNI was analyzed. Optimal PNI cutoff values for G-NEC prognosis prediction were calculated using the X-tile software. The survival curves were created using the Kaplan-Meier method. A Cox proportional hazards model was also established to identify independent prognostic factors that impact the prognosis of patients with G-NEC. Results: The median overall survival (OS) rate was 30 months (range 6-127 months), and the OS rates at 1, 3 and 5 years were 89.2, 71.6 and 68.2%, respectively. The mean PNI of the 148 patients before the operation was 49.5 ± 8.0. The mean PNI of patients with anemia (p < 0.001) and abnormal carcinoembryonic antigen (p = 0.039) was significantly lower than that of patients without such comorbidities. The mean PNI of patients with Stage III tumors (p < 0.001) and postoperative complications was significantly lower (p = 0.005). PNI optimal cutoff values were 50 (p < 0.001).
Based on the cut-off value of the PNI, these patients were divided into a PNI-high group (PNI ≥ 50.0, n = 77) and a PNI-low group (PNI < 50.0, n = 71). The PNI-high group had a significantly better 5-year OS rate compared with the PNI-low group (76.6% vs. 59.2%, χ² = 14.7, p < 0.001). Multivariate analysis demonstrated that PNI and pathological stage were independent prognostic factors for patients with G-NEC. In the subgroup analysis, OS rates were significantly lower in the PNI-low group than in the PNI-high group among patients with stage I and stage III of the disease. Conclusion: The PNI is a simple and useful marker for predicting long-term outcomes in G-NEC patients regardless of tumor stage. Based on our results, we suggest that PNI should be included in routine assessments of patients with G-NEC. abstract_id: PUBMED:38172499 Prognostic significance of grade of malignancy based on histopathological differentiation and Ki-67 in pancreatic ductal adenocarcinoma. Objective: Tumor cell malignancy is indicated by histopathological differentiation and cell proliferation. Ki-67, an indicator of cellular proliferation, has been used for tumor grading and classification in breast cancer and neuroendocrine tumors. However, its prognostic significance in pancreatic ductal adenocarcinoma (PDAC) remains uncertain. Methods: Patients who underwent radical pancreatectomy for PDAC were retrospectively enrolled, and relevant prognostic factors were examined. Grade of malignancy (GOM), a novel index based on histopathological differentiation and Ki-67, is proposed, and its clinical significance was evaluated. Results: The optimal threshold for Ki-67 was determined to be 30%. Patients with a Ki-67 expression level > 30% rather than ≤ 30% had significantly shorter 5-year overall survival (OS) and recurrence-free survival (RFS). In multivariate analysis, both histopathological differentiation and Ki-67 were identified as independent prognostic factors for OS and RFS. The GOM was used to independently stratify OS and RFS into 3 tiers, regardless of TNM stage and other established prognostic factors. The tumor-node-metastasis-GOM stage was used to stratify survival into 5 distinct tiers, and surpassed the predictive performance of TNM stage for OS and RFS. Conclusions: Ki-67 is a valuable prognostic indicator for PDAC. Inclusion of the GOM in the TNM staging system may potentially enhance prognostic accuracy for PDAC. abstract_id: PUBMED:34268005 Prognostic factors and treatment of neuroendocrine tumors of the uterine cervix based on the FIGO 2018 staging system: a single-institution study of 172 patients. Objective: This study aimed to explore the prognostic factors and outcomes of patients with neuroendocrine tumors (NETs) of the cervix and to determine appropriate treatment. Methods: A single-institution retrospective analysis of 172 patients with NETs was performed based on the new International Federation of Gynecology and Obstetrics (FIGO 2018) staging system. Results: Among the 172 eligible patients, 161 were diagnosed with small cell neuroendocrine carcinoma (SCNEC), six with large cell neuroendocrine carcinoma, four with typical carcinoid tumors and one with SCNEC combined with an atypical carcinoid tumor. According to the FIGO 2018 staging guidelines, 31 were stage I, 66 were stage II, 57 were stage III, and 18 were stage IV. The 5-year survival rates of patients with stage I-IV disease were 74.8%, 56.2%, 41.4% and 0%, respectively.
The 5-year progression-free survival rates of patients with stage I-IV disease were 63.8%, 54.5%, 30.8% and 0%, respectively. In the multivariate analysis, advanced FIGO stage, large tumor and older age were identified as independent variables for 5-year survival in patients with stage I-IV disease. FIGO stage, tumor size and para-aortic lymph node metastasis were independent prognostic factors for 5-year progression-free survival in patients with stage I-IV disease. For the patients receiving surgery (n = 108), tumor size and pelvic lymph node metastasis were independent prognostic factors for 5-year survival, and pelvic lymph node metastasis for 5-year progression-free survival. In stage IVB, at least six cycles of chemotherapy (n = 7) was associated with significantly better 2-year OS (83.3% vs. 9.1%, p < 0.001) and 2-year PFS (57.1% vs. 0%, p = 0.01) than fewer than six cycles of chemotherapy (n = 11). Conclusion: Advanced FIGO stage, large tumor, older age and lymph node metastasis are independent prognostic factors for NETs of the cervix. The TP/TC and EP regimens were the most commonly used regimens, with similar efficacies and toxicities. Standardized and complete multimodality treatment may improve the survival of patients with NETs. abstract_id: PUBMED:35301675 Prognostic significance of laterality in lung neuroendocrine tumors. Purpose: Well-differentiated lung neuroendocrine tumors (Lu-NET) are classified as typical (TC) and atypical (AC) carcinoids, based on mitotic counts and necrosis. However, prognostic factors, other than tumor node metastasis (TNM) stage and the histopathological diagnosis, are still lacking. The current study aimed to identify potential prognostic factors to better stratify lung NET, thus improving patients' treatment strategy and follow-up. Methods: A multicentric retrospective study, including 300 Lung NET, all surgically removed, from Italian and Spanish Institutions. Results: Median age 61 years (13-86), 37.7% were males, 25.0% were AC, 42.0% were located in the lung left parenchyma, 80.3% presented a TNM stage I-II. Mitotic count was ≥2 per 10 high-power field (HPF) in 24.7%, necrosis in 13.0%. Median overall survival (OS) was 46.1 months (0.6-323), median progression-free survival (PFS) was 36.0 months (0.3-323). Female sex correlated with a more indolent disease (T1; N0; lower Ki67; lower mitotic count and the absence of necrosis). Left-sided primary tumors were associated with higher mitotic count and necrosis. In the Cox multivariate regression model, age, left-sided tumors, nodal (N) positive status and the diagnosis of AC were independent negative prognostic factors for PFS and OS. Conclusions: This study highlights that laterality is an independent prognostic factor in Lu-NETs, with left tumors being less frequent but showing a worse prognosis than right ones. A wider spectrum of clinical and pathological prognostic factors, including TNM stage, age and laterality is suggested. These parameters could help clinicians to personalize the management of Lu-NET. abstract_id: PUBMED:28610828 Revised nodal stage for pancreatic neuroendocrine tumors. Background: Previously we have proposed a modified European Neuroendocrine Tumor Society (mENETS) staging system for pNETs, which is more suitable than either the American Joint Committee on Cancer (AJCC) or the European Neuroendocrine Tumor Society (ENETS) systems. However, it is necessary to revise the nodal stage of the mENETS system owing to the underrepresentation of stage III diseases.
Methods: Nodal substages of the upper gastrointestinal organs (N0: 0 node, N1: 1-2 nodes; N2: ≥3 nodes) or the lower gastrointestinal organs (N0: 0 node, N1: 1-3 nodes, and N2: ≥4 nodes) were incorporated into the mENETS system and evaluated using the Surveillance, Epidemiology, and End Results (SEER) registry series. Results: The mENETS classification with the upper gastrointestinal N-stage revision (stage III, 17.1%) had better proportional distribution than the mENETS classification (stage III, 8.7%) or the lower gastrointestinal N-stage revision (stage III, 14.5%). N-stage revision (N0: 0 node, N1: 1-2 nodes; N2: ≥3 nodes) was incorporated in the mENETS staging definition for further analysis. Survival curves were well separated by nodal substages. HRs of stage IIA (T3N0M0) and IIB (T1-3N1M0) of the mENETS classification with N-stage revision were similar, indicating these two substages should be attributed to stage II. Survival curves were well separated by stage using the mENETS classification with N-stage revision. Conclusions: The mENETS classification with N-stage revision (N0: 0 node, N1: 1-2 nodes; N2: ≥3 nodes) had better prognostic value and proportional distribution than the mENETS classification for pNETs and can be used in clinical practice. abstract_id: PUBMED:34350109 Biliary Neuroendocrine Neoplasms: Analysis of Prognostic Factors and Development and Validation of a Nomogram. Background: For this study, we explored the prognostic profiles of biliary neuroendocrine neoplasms (NENs) patients and identified factors related to prognosis. Further, we developed and validated an effective nomogram to predict the overall survival (OS) of individual patients with biliary NENs. Methods: We included a total of 446 biliary NENs patients from the SEER database. We used Kaplan-Meier curves to determine survival time. We employed univariate and multivariate Cox analyses to estimate hazard ratios to identify prognostic factors. We constructed a predictive nomogram based on the results of the multivariate analyses. In addition, we included 28 biliary NENs cases from our center as an external validation cohort. Results: The median survival time of biliary NENs from the SEER database was 31 months, and the value of gallbladder NENs (23 months) was significantly shorter than that of the bile duct (45 months) and ampulla of Vater (33.5 months, p=0.023). Multivariate Cox analyses indicated that age, tumor size, pathological classification, SEER stage, and surgery were independent variables associated with survival. The constructed prognostic nomogram demonstrated good calibration and discrimination C-index values of 0.783 and 0.795 in the training and validation dataset, respectively. Conclusion: Age, tumor size, pathological classification, SEER stage, and surgery were predictors for the survival of biliary NENs. We developed a nomogram that could determine the 3-year and 5-year OS rates. Through validation of our central database, the novel nomogram is a useful tool for clinicians in estimating individual survival among biliary NENs patients. Answer: The best prognostic factor for neuroendocrine tumors (NETs) appears to be grade rather than stage. A study conducted on patients diagnosed with NETs between 2000 and 2014 at a tertiary care center found that with multivariate analysis, age greater than 55 years, advancing grade, and inoperability of tumors were significantly associated with shortened survival.
The study concluded that age, operability, and especially grade are the most important prognostic factors in NETs (PUBMED:27852541). This finding is supported by other studies that have identified various prognostic factors for different types of NETs. For instance, in pancreatic ductal adenocarcinoma (PDAC), a novel index based on histopathological differentiation and Ki-67, called the Grade of Malignancy (GOM), was proposed. This index was used to independently stratify overall survival (OS) and recurrence-free survival (RFS) into three tiers, regardless of TNM stage and other established prognostic factors (PUBMED:38172499). Similarly, a study on biliary neuroendocrine neoplasms developed a nomogram that included age, tumor size, pathological classification, SEER stage, and surgery as predictors for survival, indicating that a combination of factors, including grade, is important for prognosis (PUBMED:34350109). In the case of small intestine neuroendocrine neoplasms, factors such as age, normal C-reactive protein (CRP) at baseline, absence of diarrhea, less than 50% metastatic liver burden, and treatment with peptide radioreceptor therapy (PRRT) were identified as independent positive prognostic factors, while primary tumor resection (PTR) showed a strong tendency towards better OS (PUBMED:34964186). For lung neuroendocrine tumors, laterality was identified as an independent prognostic factor, with left-sided primary tumors associated with higher mitotic count and necrosis, and female sex correlated with a more indolent disease (PUBMED:35301675). Overall, while stage is an important factor in the prognosis of NETs, the grade of the tumor, which reflects the biological aggressiveness and proliferative activity, seems to be a more critical determinant of prognosis across various types of NETs.
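The survival comparisons summarised here (for example, five-year survival of 81%, 34% and 9% for grade 1-3 tumours in PUBMED:27852541, and the multivariable Cox models used in several abstracts) follow a standard workflow: Kaplan-Meier curves by grade, then a Cox proportional hazards model to test whether grade remains prognostic after adjusting for age and stage. The sketch below assumes the lifelines library and an invented data layout (time_months, event, grade, stage, age); it is a generic illustration, not a reanalysis of any cited cohort.

    # Generic survival-analysis sketch (toy data; not from the cited studies).
    import pandas as pd
    from lifelines import KaplanMeierFitter, CoxPHFitter

    # Assumed layout: one row per patient with follow-up time, event flag and covariates.
    df = pd.DataFrame({
        "time_months": [60, 48, 12, 30, 72, 9, 24, 55, 18, 40, 36, 66],
        "event":       [0,  1,  1,  1,  0,  1, 1,  0,  1,  0,  1,  0],   # 1 = death observed
        "grade":       [1,  1,  3,  2,  1,  3, 2,  1,  3,  2,  2,  1],
        "stage":       [2,  1,  4,  3,  1,  3, 2,  2,  4,  3,  4,  1],
        "age":         [48, 52, 66, 60, 45, 71, 58, 50, 68, 62, 57, 44],
    })

    # Kaplan-Meier curves by grade.
    kmf = KaplanMeierFitter()
    for grade, sub in df.groupby("grade"):
        kmf.fit(sub["time_months"], sub["event"], label=f"grade {grade}")
        print(f"grade {grade}: median survival = {kmf.median_survival_time_}")

    # Multivariable Cox model: does grade remain prognostic after adjusting for age and stage?
    cph = CoxPHFitter()
    cph.fit(df, duration_col="time_months", event_col="event")
    cph.print_summary()

In a real cohort, the hazard ratios reported by cph.print_summary() are what support statements such as "grade remains prognostic after adjusting for stage and age".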
Instruction: Is there a link between blastomere contact surfaces of day 3 embryos and live birth rate? Abstracts: abstract_id: PUBMED:22963278 Is there a link between blastomere contact surfaces of day 3 embryos and live birth rate? Background: Cell-cell communication and adhesion are essential for the compaction process of early stage embryos. The aim of this study was to develop a non-invasive objective calculation system of embryo compaction in order to test the hypothesis that embryos with a larger mean contact surface result in a higher live birth rate compared to embryos with a lower mean contact surface. Methods: Multilevel images of 474 embryos transferred on day 3 were evaluated by the Cellify software. This software calculates the contact surfaces between the blastomeres. The primary outcome of this study was live birth. An ideal range of contact surface was determined and the positive and negative predictive value, the sensitivity, the specificity and the area under the curve for this new characteristic were calculated. Results: In total, 115 (24%) transferred embryos resulted in a live birth. Selection of an embryo for transfer on its mean contact surface could predict live birth with a high sensitivity (80%) and high negative predicting value (83%) but with a low positive predictive value (27%), a low specificity (31%) and low area under the ROC curve (0.56). The mean contact surface of embryos cultured in a single medium was significantly higher compared to the mean contact surface of embryos cultured in a sequential medium (p = 0.0003). Conclusions: Neither the mean contact surface nor the number of contact surfaces of a day 3 embryo had an additional value in the prediction of live birth. The type of culture medium, however, had an impact on the contact surface of an embryo. Embryos cultured in a single medium had a significant larger contact surface compared to embryos cultured in the sequential medium. abstract_id: PUBMED:35277132 Increased blastomere number is associated with higher live birth rate in day 3 embryo transfer. Purpose: To study the relationship between blastomere number and pregnancy outcomes of day 3 embryo transfers. Methods: This retrospective cohort study included 2237 fresh single day 3 embryo transfer cycles from October 2013 to November 2020. Patients were divided into six groups according to the blastomere number on day 3: ≤ 6-cell (n = 100), 7-cell (n = 207), 8-cell (n = 1522), 9-cell (n = 187), 10-cell (n = 91) and ≥ 11-cell (n = 130). Generalized estimating equation analysis based on multivariate logistic regression model was performed to adjust for potential confounders. Results: The live birth rate (LBR) was 19.0%, 27.1%, 38.9%, 32.1%, 44.0% and 53.8% for the ≤ 6-cell, 7-cell, 8-cell, 9-cell, 10-cell and ≥ 11-cell groups, respectively (P &lt; 0.001). Specifically, the ≤ 6-cell group was associated with reduced LBR compared with the 8-cell group (aOR 0.50, 95% CI 0.29-0.86; P = 0.013). Conversely, the odds of live birth were significantly increased in patients transferred with 10-cell embryos (aOR 1.62, 95% CI 1.03-2.53; P = 0.035) and ≥ 11-cell embryos (aOR 2.14, 95% CI 1.47-3.11; P &lt; 0.001) when using the 8-cell embryo group as reference. Similar trends were also observed in the rates of positive hCG test and clinical pregnancy, while no significant differences were detected in miscarriage risk. Conclusion: Increased blastomere number was associated with higher LBR in fresh single day 3 embryo transfer cycles. 
This finding questions the consensus on the reduced developmental potential of fast-cleaving embryos. Further large prospective studies are warranted for confirmation. abstract_id: PUBMED:35857255 Impact of cell loss after warming of human vitrified day 3 embryos on obstetric outcome in single frozen embryo transfers. Purpose: Does cell loss (CL) after vitrification and warming (V/W) of day 3 embryos have an impact on live birth rate (LBR) and neonatal outcomes? Method: This retrospective analysis includes cleavage stage day 3 embryos vitrified/warmed between 2011 and 2018. Only single vitrified/warmed embryo transfers were included. Pre-implantation genetic screening, oocyte donation, and age banking were excluded from the analysis. The sample was divided into two groups: group A (intact embryo after warming) and group B (≤ 50% blastomere loss after warming). Results: On the total embryos (n = 2327), 1953 were fully intact (83.9%, group A) and 374 presented cell damage (16.1%, group B). In group B, 62% (232/374) of the embryos had lost only one cell. Age at cryopreservation, cause of infertility, insemination procedure, and semen origin were comparable between the two groups. The positive hCG rate (30% and 24.3%, respectively, for intact vs CL group, p = 0.028) and LBR (13.7% and 9.4%, respectively, for intact vs CL group, p = 0.023) per warming cycle were significantly higher for intact embryos. However, LBR per positive hCG was equivalent between intact and damaged embryos (45.6% vs 38.5%, respectively, p = 0.2). Newborn measurements (length, weight, and head circumference at birth) were comparable between the two groups. Multivariate logistic regression showed that the presence of CL is not predictive for LB when adjusting for patients' age. Conclusions: LBR is significantly higher after transfer of an intact embryo compared to an embryo with CL after warming; however, neonatal outcomes are comparable between the two groups. abstract_id: PUBMED:27491644 Live birth rate and number of blastomeres on day 2 transfer. Purpose: To investigate whether the presence of large fragment (LF) and abnormal cell divisions (ACDs) has influenced the correlation between live birth rate and number of blastomeres detected on day 2 by conventional scoring. Methods: This study included 578 embryos cultured in time lapse and selected for transfer by conventional scoring on day 2. By time-lapse recordings, embryos were reassessed to identify ACDs and/or LFs mistaken as blastomeres. The latter identifications were used to recalculate fragmentation rate and the number of blastomeres. Live birth rate according to number of blastomeres was compared in (a) embryos selected by conventional scoring and (b) embryos reassessed by time lapse. Results: After conventional scoring, embryos with four cells had a significantly higher pregnancy rate than embryos with less than four cells and embryos with more than four cells. By time-lapse assessment, ACDs and/or recalculated fragmentation >25 % was recognized in 106/578 (18.3 %) of transferred embryos. None of them resulted in a live birth. After exclusion of these embryos, the number of blastomeres on the day of transfer did not have any impact on live birth rate. Conclusion: Conventional scoring on day 2 did not detect ACDs and LFs mistaken as blastomeres. LFs can lead to a recalculated fragmentation rate of >25 %.
No significant correlation between live birth rate and number of blastomeres in day 2 embryos was observed when embryos with ACDs and fragmentation >25 % were excluded. Recognition of ACDs and fragmentation >25 % is more predictive of live birth than number of blastomeres. abstract_id: PUBMED:37076577 Development speed of sibling embryo positively reflects live birth rate after fresh day 3 embryo transfer. The ability of sibling embryos to form blastocysts may reflect the developmental potential of the embryos that were transferred into the uterus. The purpose of the study was to investigate whether the development speed of sibling embryos positively reflects the live birth rate following fresh embryo transfer. We examined 1262 cycles of women who underwent day 3 (D3) cleavage embryo transfer in the Peking Union Medical College Hospital in 2015-2020, who were divided into three groups (D5, D5 + D6, and D6) according to blastocyst formation. The live birth rate in patients with blastocysts that formed on D6 was significantly lower than the other two groups (36.1%, 45.6% and 44.7%, P < 0.05). For women with blastocysts that formed on D6, the live birth rate was higher in those with more good quality blastocysts than poor-quality blastocysts (42.4 vs 32.3%, P < 0.05). Multiple regression analysis showed that the blastocyst development speed of sibling embryos was an independent factor affecting live birth after fresh embryo transfer (P < 0.05). We concluded that the blastocyst development speed of sibling embryos may reflect live birth rate following the transfer of D3 cleavage embryos. abstract_id: PUBMED:29458903 Day 4 good morula embryo transfer provided compatible live birth rate with day 5 blastocyst embryo in fresh IVF/ET cycles. Objective: Embryo transfers during cleavage stage (day 2 or day 3) and blastocyst stages (day 5 or day 6) are common in current daily practice in fresh IVF/ET cycles. Data regarding transferring day 4 embryos, morula/compact stage, is still restricted and the grading system is also inconsistent, as between IVF clinics. This study provided a new detailed classification system for morula/compact stage embryos and compared success rates between day 4 and day 5 ET. Materials And Methods: This was a retrospective study. A review of medical records from January 1st, 2013, to December 31st 2015, was performed for all conventional insemination and ICSI cycles with a GnRH-antagonist protocol at the Infertility Division of MacKay Memorial Hospital in Taipei City, Taiwan. Results: There were 427 cycles included in our study, 107 in study group (day 4 MET) and 320 in control group (day 5 BET). Pregnancy rates and live birth rate were compatible, as between morula embryo transfer (MET) and blastocyst embryo transfer (BET). The implantation rate (36.3% vs. 39.6%, respectively, p = 0.500), clinical pregnancy rate (49.5% vs. 51.9%, respectively, p = 0.737), and live birth rate (42.1% vs. 45.6%, respectively, p = 0.574) were statistically insignificant between groups. The term birth rate was statistically higher in the MET group than in the BET group (95.7% vs. 79.5%, respectively, p = 0.006). When the clinical outcomes between day 4 good MET and day 5 good BET were compared, the results were compatible. The implantation rate (48.8% vs. 41.1%, respectively, p = 0.335), clinical pregnancy rate (55.0% vs. 53.2%, respectively, p = 0.867), and live birth rate (47.5% vs. 47.1%, respectively, p = 1.000) showed no significant difference.
The term birth rate was also higher in day 4 good MET group than in day 5 good BET group (100% vs. 78.3%, respectively, p = 0.025). Conclusion: In this study, we performed day 4 MET to avoid BET on Sunday. The grading system we provided was more detailed for embryo selection and it was easier to remember. Our data showed that morula embryo transfer might be a flexible, easier and applicable method for embryo transfer in daily routine. abstract_id: PUBMED:26722995 The impact of blastomere survival rates on developmental competence of cryo-thawed Day 2 embryos. Objectives: The aim of the present study is to investigate the effect of embryonic blastomere loss, following cryopreservation and thaw of Day 2 embryos in an assisted reproductive technology (ART) setting, on pregnancy outcome and fetal development. Study Design: This is a retrospective analysis performed on 3553 slow frozen-thawed Day 2 embryos, of all IVF/ICSI thawing cycles carried out during the 11 year study period. Of these thawed embryos, 628 underwent SET on Day 3 of embryo development. We measured the influence of several laboratory parameters on blastomere loss after thaw including: cell stage at cryopreservation, blastomere loss post-thaw, ability to resume mitosis and the rate of overnight cleavage. Results: There is an association between cell number on day of freeze and embryonic survival post-thaw; 3 cell (77.4%), 4 cell (92.1%), 5 cell (81.4%) and 6 cell (86.5%) embryos (p < 0.05). We found a significant association between the rate of overnight cleavage and positive hCG and implantation rate (p-value < 0.05), while there is no association with live birth rate (p-value 0.242). Embryos with 100% blastomere survival have significantly higher cleavage rates, positive hCG, implantation and live birth rates than embryos which experienced blastomere loss (p < 0.05). However, blastomere survival has no impact on miscarriage rate or the observed newborn birth weight (3.85 ± 0.77 kg). Conclusions: In the present study we demonstrate, for the first time, that although it is optimal to select an embryo with 100% blastomere survival, transfer of an embryo with ≥ 50% blastomeres intact post-thaw does not influence the development of the baby, as indicated by weight at birth. abstract_id: PUBMED:25374657 Cumulative live birth rate and assisted reproduction: impact of female age and transfer day. Background: Many studies on assisted reproductive technology examine live birth rate per cycle. However, after a cycle fails, couples often want to know what their chances are of having a live birth if they continue treatment. From a patients' perspective, the cumulative probability of live birth is more informative. Materials And Methods: This study includes patients who underwent fresh, frozen and non-donor ICSI cycles at our IVF unit between 2006-2012. Patients were divided into two groups; Group 1 represented those who underwent only Day 5 transfers, Group 2 represented only Day 3 transfers. Patients who underwent both were excluded. Cycles were analyzed until the first live birth or the end of the 3rd cycle. Using Kaplan-Meier analysis, we estimated the cumulative live birth rates for each group and according to female age. Results: The mean age for Group 1 was significantly lower than for Group 2. After 3 cycles, Group 1's CLBR was 79% versus 66% in Group 2.
When analyzing the live births by age and group, there was a significant difference in the CLBR after 3 cycles with the women less than 35 years having the highest CLBR and the women 40 years or older having the lowest CLBR. Conclusion: In women less than 35 years, excellent CLBR can be achieved irrespective of the transfer day. For women 40 years and above, better results of CLBR are observed with Day 5 transfers. Our findings may impact the counseling of couples considering IVF treatment. abstract_id: PUBMED:35829764 Comparison of pregnancy and live birth rates between fresh day 5 morula transfer and fresh day 6 blastocyst transfer following extended culture for slow growing embryos. Purpose: The aim of this study is to evaluate and compare the outcomes of fresh day 5 morula transfers and fresh day 6 blastocyst transfers after extended culture, in women with a whole cohort of slow growing embryos. Methods: Fresh embryo transfer cycles of patients that underwent intracytoplasmic sperm injection between 2013 and 2020 with a whole cohort of slow developing embryos on 5th day of fertilization were evaluated for this retrospective cohort study. Outcomes of day 5 morula transfers and day 6 blastocyst transfers after extended culture were compared. Results: Out of 479 patients, day 5 morula transfers were performed to 194 and embryo culturing was extended to day 6 in 285 women. Blastocyst formation was observed in 129 of 285 patients in extended culture group. Implantation rate of day 6 blastocyst transfer group was significantly higher than day 5 morula transfer group (27% vs. 12%, p &lt; 0.001). However, clinical pregnancy (17% vs 15.4%) and live birth rates (14.4% vs 13%) were found similar in day 5 morula transfer and extended culture group per initiated cycle. Conclusion: Although implantation capacity of day 6 blastocysts seems higher comparing to day 5 morulas, clinical pregnancy and live birth rates are similar among intention to treat population in fresh day 5 morula transfers and day 6 blastocyst transfers after extended culture for women that cohorts consist of only slow developing embryos. abstract_id: PUBMED:32944763 The effect of Day 3 cell number on pregnancy outcomes in vitrified-thawed single blastocyst transfer cycles. Study Question: Does cell number on Day 3 have an impact on pregnancy outcomes in vitrified-thawed single blastocyst transfer cycles? Summary Answer: A low Day 3 cell number (≤5 cells) was independently associated with decreased live birth rate (LBR) during single blastocyst transfer cycles in young women. What Is Known Already: Day 3 cell number is an effective predictor of IVF success rates when transferring cleavage stage embryos. However, the association between Day 3 blastomere number and pregnancy outcomes after blastocyst transfer is still unknown. Study Design, Size, Duration: A retrospective cohort study of 3543 patients who underwent frozen-thawed single blastocyst transfers from January 2013 to June 2018 at a tertiary-care academic medical center. Participants/materials, Setting, Methods: Patients were grouped into six groups according to the Day 3 cell number: ≤4 cells, 5 cells, 6 cells, 7 cells, 8 cells and &gt;8 cells. The primary outcome measure was LBR. A logistic regression analysis was performed to explore the independent association between Day 3 blastomere number and LBR after adjustment for some potential confounders. 
Main Results And The Role Of Chance: In women <35 years old, the LBR varied significantly according to Day 3 cell number, with the rate of 31.2%, 34.4%, 41.9%, 45.1%, 48.1% and 48.2% for the ≤4-cell, 5-cell, 6-cell, 7-cell, 8-cell and >8-cell groups, respectively (P < 0.001). This significant difference was also observed in the high- and low-quality blastocyst subgroups of young women. However, for women ≥35 years old, the rate of live birth was similar between groups. Furthermore, after accounting for confounding factors, the LBR was significantly decreased in the ≤4-cell (adjusted odds ratio (aOR): 0.62, 95% CI: 0.48-0.80, P < 0.001) and 5-cell (aOR: 0.73, 95% CI: 0.57-0.92, P = 0.009) groups as compared to the 8-cell group. Likewise, the blastocysts arising from ≤4-cell (aOR: 0.73, 95% CI: 0.57-0.93, P = 0.010) or 5-cell (aOR: 0.77, 95% CI: 0.61-0.97, P = 0.024) embryos were associated with lower clinical pregnancy rate than those from 8-cell embryos. No significant differences were observed in biochemical pregnancy rate and miscarriage rate. Limitations, Reasons For Caution: A limitation of the current study was its retrospective design. Future prospective studies are needed to confirm our findings. Wider Implications Of The Findings: Our observations suggested that a low Day 3 cell number was related to decreased LBR after blastocyst transfer in young women, which provided vital information for clinicians in selecting blastocyst during IVF treatment. Study Funding/competing Interest(s): This study was supported by the National Natural Science Foundation of China (NSFC) (31770989 to Y.W.; 81671520 to Q.C.) and the Shanghai Ninth People's Hospital Foundation of China (JYLJ030 to Y.W.). The authors have no conflicts of interest to declare. Trial Registration Number: N/A. Answer: According to the study with PUBMED:22963278, there is no additional value in predicting live birth rate based on the mean contact surface or the number of contact surfaces of day 3 embryos. The study aimed to test the hypothesis that embryos with a larger mean contact surface would result in a higher live birth rate compared to embryos with a lower mean contact surface. However, the results indicated that selection of an embryo for transfer based on its mean contact surface could predict live birth with high sensitivity and high negative predictive value but with low positive predictive value, low specificity, and low area under the ROC curve. The study concluded that neither the mean contact surface nor the number of contact surfaces of a day 3 embryo had an additional value in the prediction of live birth. Additionally, the type of culture medium was found to have an impact on the contact surface of an embryo, with embryos cultured in a single medium having a significantly larger contact surface compared to those cultured in a sequential medium.
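The sensitivity (80%), specificity (31%), positive predictive value (27%) and negative predictive value (83%) quoted for PUBMED:22963278 are all derived from a standard 2x2 classification table. The short Python sketch below only illustrates the definitions; the counts in the example are hypothetical placeholders chosen to roughly reproduce the reported percentages, since the study's actual 2x2 table is not given in the abstract.

def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 table."""
    sensitivity = tp / (tp + fn)  # transfers with a live birth that tested "positive"
    specificity = tn / (tn + fp)  # transfers without a live birth that tested "negative"
    ppv = tp / (tp + fp)          # positive predictive value
    npv = tn / (tn + fn)          # negative predictive value
    return sensitivity, specificity, ppv, npv

# Hypothetical counts for illustration only (not the study's actual data):
print(diagnostic_metrics(tp=92, fp=248, fn=23, tn=111))  # ~ (0.80, 0.31, 0.27, 0.83)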
Instruction: Is there a role for insulin resistance in nonobese patients with idiopathic hirsutism? Abstracts: abstract_id: PUBMED:19298279 Is there a role for insulin resistance in nonobese patients with idiopathic hirsutism? Background: Hirsutism is the presence of terminal hairs in women in a male-like pattern. It may result from various causes of androgen excess or may be idiopathic. Controversies exist concerning the presence of insulin resistance in idiopathic hirsutism (IH) or if it is a manifestation of a high body mass index (BMI). Objectives: To assess insulin resistance in nonobese patients with IH. Methods: The study included three groups of age- and BMI-matched nonobese women: 30 patients with IH (group 1), 20 patients with hirsutism associated with polycystic ovary syndrome (PCOS) (group 2) and 20 healthy controls (group 3). The pattern of obesity based on waist to hip ratio (WHR), and insulin resistance based on fasting insulin levels and the homeostasis model assessment of insulin resistance (HOMA-IR) were assessed in all the groups. Results: Sixteen patients with IH and 17 with PCOS had insulin resistance with statistically significant differences in fasting insulin levels and HOMA-IR between the three groups, between patients with IH and healthy controls and between patients with PCOS and healthy controls; there were no significant differences between patients with IH and patients with PCOS. When classified according to the pattern of obesity, 23 patients in group 1, 17 in group 2 and two in group 3 had a WHR ≥ 0.85 (android obesity) with highly significant higher values of fasting insulin levels and HOMA-IR in patients with a WHR ≥ 0.85 when compared with those with a WHR < 0.85. Conclusions: Insulin resistance occurs in nonobese patients with IH and appears to be related to android obesity. abstract_id: PUBMED:22662009 Insulin resistance in women with hirsutism. Introduction: There are still not enough data showing whether patients with idiopathic hirsutism (IH) also have insulin resistance. The association between polycystic ovary syndrome (PCOS) and insulin resistance is well documented in the literature, but the Rotterdam Consensus has concluded that principally obese women with PCOS should be screened for the metabolic syndrome. We intended to investigate the presence/absence of insulin resistance in non-obese women with hirsutism. Material And Methods: Twenty-eight women with PCOS (14 non-obese and 14 obese), 12 non-obese with IH, and 16 non-obese healthy women were included in the study. The presence of insulin resistance was investigated by using basal insulin levels and the homeostasis model assessment (HOMA) score in the study group. Results: Patients with obese and nonobese PCOS had significantly (p < 0.05) higher basal insulin levels and HOMA scores than IH and control subjects. Insulin levels and HOMA scores did not differ between obese and non-obese PCOS patients. Patients with IH did not show any difference from the control group. Conclusions: Insulin resistance exists in non-obese women with PCOS as well as obese women with PCOS. The PCOS is associated with insulin resistance independent of obesity. Insulin resistance should be assessed in all hirsute women with PCOS regardless of their body mass index. More studies in larger numbers of patients should be performed to investigate the role of insulin resistance in women with IH. abstract_id: PUBMED:24228029 Idiopathic hirsutism and insulin resistance. Background and Objectives.
Polycystic ovary syndrome (PCOS) and idiopathic hirsutism (HI) are the two most common causes of hirsutism. Insulin resistance plays a key role in PCOS, but there are not enough data showing that patients with HI also have insulin resistance. This study was designed to evaluate the presence of insulin resistance in women with HI. Methods. Based on a cross-sectional study, two groups of age-BMI matched, hirsute women were compared to age-BMI matched, nonhirsute women. Sixty nonobese women with PCOS, thirty nonobese women with HI, and sixty nonobese control women were included in the study. Samples of hormones including androgens were measured. Insulin resistance based on homeostasis model assessment of insulin resistance (HOMA-IR) was compared between three groups by the Kruskal-Wallis test. Results. Patients with PCOS had significantly higher basal insulin level (16.04 ± 1.4 versus 7.32 ± 6.85 μ Iu/mL) and HOMA-IR score (3.7 ± 3.36 versus 1.75 ± 1.67) than patients with HI (P 0.001). Patients with HI also had significantly higher basal insulin level and HOMA-IR score than control group (P 0.001). Conclusion. Our data suggest that both PCOS and HI are associated with insulin resistance and these patients are more insulin resistant than healthy control people. abstract_id: PUBMED:25496462 Evaluation of insulin resistance in idiopathic hirsutism compared with polycystic ovary syndrome patients and healthy individuals. Introduction: Hirsutism is defined as the excessive male-pattern growth of hair in women. Hirsutism is often idiopathic or the consequence of polycystic ovary syndrome (PCOS). Insulin resistance is common in PCOS (especially in obese patients) but the association between insulin resistance and idiopathic hirsutism (IH) is not clear. The aim of this study was to investigate the rate of insulin resistance in IH, compared with healthy individuals and patients with PCOS. Methods: The study included three groups, patients with idiopathic hirsutism, PCOS and healthy women. Each group included 30 non-obese women. Fasting blood sugar (FBS), insulin level and insulin resistance (estimated by the homeostasis model assessment [HOMA-IRIR]) were compared in the three groups. Results: There was a significant difference between the age of the women with IH compared with two other groups. There were no significant difference in levels of serum insulin (P = 0.49, HOMA-IR (P = 0.47) and prevalence of insulin resistance (P = 0.07) in the three groups. The age-adjusted prevalence of insulin resistance was similar in the three groups. Conclusion: Insulin resistance was no more frequent in IH patients than in healthy control groups. abstract_id: PUBMED:15181051 The investigation of insulin resistance in patients with idiopathic hirsutism. Hirsutism, which is characterized by excessive growth of terminal hair in a male pattern, is a common clinical condition in women. It may result from various causes including polycystic ovary syndrome, nonclassic adrenal hyperplasia, adrenal or ovarian tumors, or it may be idiopathic. Idiopathic hirsutism (IH) is considered to be one of the most common forms of hirsutism. Although not universal, insulin resistance and hyperinsulinemia have been demonstrated in women with polycystic ovary syndrome. Because there are not enough data showing whether patients with IH also have insulin resistance, we intended to investigate the presence/absence of insulin resistance in women with IH. 
Thirty-two women with IH [mean age, 24.8 +/- 1.2 yr; body mass index (BMI), 24.6 +/- 0.8 kg/m2] and 17 healthy women (mean age, 25.8 +/- 0.6 yr; BMI, 22.5 +/- 0.6 kg/m2) were included in the study. Eight of 32 patients with IH had BMI higher than 30 kg/m2. The presence of insulin resistance was investigated by using basal insulin levels, the oral glucose tolerance test, the i.v. insulin tolerance test, and the homeostasis model assessment (HOMA) score in both groups. Six (18.7%) patients had impaired glucose tolerance (IGT). Overall, patients with IH had significantly (P &lt; 0.05) higher basal insulin levels (10.5 +/- 1.1 mU/liter vs. 5.7 +/- 0.9 mU/liter) and HOMA scores (2.0 +/- 0.2 vs. 1.1 +/- 0.2) and lower plasma glucose disappearance rate values (5.2 +/- 0.2 vs. 6.0 +/- 0.3) than control subjects. However, patients with IGT were notably more obese than the patients with a normal glucose tolerance test. Analyses after omitting the patients with IGT showed that there was still a significant (P &lt; 0.05) difference in terms of basal insulin levels and HOMA scores. Six of eight (75%) obese patients with IH showed IGT. These data suggest that IH is associated with insulin resistance and an increased prevalence of IGT in obese patients. abstract_id: PUBMED:3397043 Role of obesity and hyperinsulinemia in the insulin resistance of obese subjects with the clinical triad of polycystic ovaries, hirsutism and acanthosis nigricans. The insulin resistance of 4 nonobese and 8 obese patients with polycystic ovaries, hirsutism and benign acanthosis nigricans, and of 6 'obese normal' apart from obesity and 10 normal female subjects was evaluated by means of an intravenous insulin tolerance test and by measuring basal and insulin responses to an oral glucose load. The patients with polycystic ovaries, hirsutism and acanthosis had a decreased hypoglycemic response to exogenous insulin. The subjects with polycystic ovaries presented a significantly greater mean glucose response area for the same or greater mean insulin response area than the obese or nonobese normal subjects. The insulin resistance in the patient with polycystic ovaries, hirsutism and acanthosis nigricans could not be exclusively ascribed to a reduced receptor number, but also appeared to be due to a simultaneous postbinding defect probably related to the high insulin levels in patients with polycystic ovaries be they obese or not. The elevated plasma androgens and the presence of acanthosis found in these patients are likely also related to the hyperinsulinemia. To evaluate the influence of obesity, obese and nonobese patients with acanthosis nigricans and polycystic ovaries were compared. Higher insulin levels were found in the thin subjects, which could explain their greater insulin resistance and more severe hyperandrogenism. The comparison between obese patients with and those without acanthosis nigricans and polycystic ovaries suggested that, despite similar insulin levels, the greater known duration of obesity (probably also of the hyperinsulinemia) of the former was a possible explanation for their more intense insulin resistance and higher testosterone levels. abstract_id: PUBMED:25010619 Are insulin resistance and serum resistin levels increased in women with idiopathic hirsutism? Objective: To investigate the insulin resistance and serum resistin levels in women with idiopathic hirsutism compared to controls and women with polycystic ovary syndrome (PCOS). 
Patients And Methods: Three groups of women including 23 women with idiopathic hirsutism, 28 women with PCOS and 28 non-hirsute women serving as controls were included into the study. The Homeostasis Model Assessment of Insulin Resistance (HOMA-IR), serum fasting insulin and resistin levels were compared between the groups. Results: There were no statistically significant differences regarding the age, BMI and waist circumferences between the groups. Mean and median fasting blood glucose, fasting insulin, HOMA-IR, serum resistin levels were statistically similar between the groups (p = 0.966, p = 0.378, p = 0.409 and p = 0.784, respectively). There were no correlations between the resistin, HOMA-IR, fasting insulin levels and BMI in any of the three groups. Conclusions: Insulin resistance and serum resistin levels do not appear to be increased in women with idiopathic hirsutism compared to controls at similar BMI's and waist circumferences. abstract_id: PUBMED:35531788 Insulin resistance and idiopathic hirsutism: A systematic review, meta-analysis, and meta-regression. Objectives: This study aimed to evaluate the Standardized Mean Difference (SMD) of insulin resistance parameters in women with IH, compared to healthy and polycystic ovary syndrome (PCOS) controls. Materials And Methods: PubMed, Scopus, Web of Sciences, and Embase were searched for retrieving studies published up to November 2021 investigating the insulin resistance parameters in women with IH, compared to control groups. Meta-regression and subgroup analysis were conducted to evaluate the effect of potential confounders, such as age, BMI, and study design. Results: A meta-analysis of 20 studies revealed higher SMDs of fasting insulin (SMD: 0.58; 95% CI: 0.10, 1.06), HOMA-IR (SMD: 0.53; 95% CI: 0.09, 0.97), and FBS levels (SMD: 0.11; 95% CI: 0.03, 0.19) in women with IH than healthy. It also showed that the SMD of HOMA-IR was significantly lower in women with IH than PCOS patients (SMD: -0.49; 95% CI: -0.88, -0.09). A subgroup analysis of cross-sectional studies showed higher SMDs of fasting insulin (SMD: 0.86; 95% CI: 0.05, 1.68), HOMA-IR (SMD: 0.83; 95% CI: 0.01, 1.64), and FBS levels (SMD: 0.14; 95% CI: 0.00, 0.28) in women with IH than healthy, whereas there was no difference in the SMD of these metabolic parameters between IH and PCOS groups, except for SMD of HOMA-IR (SMD: -0.22; 95% CI: -0.42, -0.02). Conclusions: The results of the study demonstrate that insulin resistance parameters are related to IH, although insulin resistance values in women with IH are not as high as in patients with PCOS. According to the results of the study, measuring these metabolic parameters can be beneficial to evaluate all hirsute women with IH. abstract_id: PUBMED:14667885 Relationship between insulin resistance and gonadotropin dissociation in obese and nonobese women with polycystic ovary syndrome. Objective: To test the interdependence between insulin resistance (IR) and gonadotropin dissociation (GD) in polycystic ovary syndrome (PCOS). Design: Cross-sectional prospective study. Setting: Clinical research center. Patient(s): Thirty-two PCOS patients aged 19-34 years; 16 obese (BMI ≥ 27) and 16 nonobese (BMI < 27). Intervention(s): A 75-g oral glucose tolerance test (OGTT) and a 100-microg i.v. GnRH test were performed on different days. Blood was taken at 0, 30, 60, 90, 120, and 180 minutes in each test. Serum glucose, insulin, LH, and FSH were measured.
Main Outcome Measure(s): Area under the curve was calculated for glucose, insulin, and glucose-to-insulin ratio (GIR), and for LH, FSH, and LH-FSH ratio. Result(s): Glucose, insulin, and GIR were not modified significantly during the GnRH test, nor LH, FSH and LH-FSH ratio throughout the OGTT. There were no significant differences in GIR response of patients with and without GD, nor in LH-FSH ratio of patients with and without IR, after OGTT and GnRH test. However, obese patients with IR had a significantly larger (P&lt;.04) area under the curve for LH-FSH ratio than those without IR after GnRH test, but not after OGTT test. Conclusion(s): Insulin resistance and GD do not appear to be related events in PCOS, suggesting that each one might be determined by different genetic disorders. However, IR can affect GD after chronic stimulation in obese patients. abstract_id: PUBMED:12050266 Additive effects of insulin-sensitizing and anti-androgen treatment in young, nonobese women with hyperinsulinism, hyperandrogenism, dyslipidemia, and anovulation. The endocrine-metabolic hallmarks of polycystic ovary syndrome are hyperinsulinism, hyperandrogenism, dyslipidemia, and anovulation. We hypothesized that dyslipidemia and anovulation in nonobese women with polycystic ovary syndrome are essentially secondary to the concerted effects of hyperandrogenism and insulin resistance. We tested this hypothesis by comparing the efficacy of anti-androgen (flutamide) or insulin-sensitizing (metformin) monotherapy to that of combined therapy in normalizing the endocrine-metabolic and anovulatory status of nonobese, young women with hyperinsulinemic hyperandrogenism. Thirty-one young women (mean age, 18.7 yr; body mass index, 21.9 kg/m(2); hirsutism score, 16; monthly ovulation rate monitored by weekly serum progesterone, 10%) were randomly assigned to receive once daily flutamide (250 mg; n = 10), metformin (1275 mg; n = 8), or combined flutamide- metformin therapy (n = 13) for 9 months. At baseline, there were no endocrine-metabolic differences among treatment groups. Compared with monotherapy, combined flutamide-metformin therapy resulted in greater improvements in insulin sensitivity, in testosterone, androstenedione, dehydroepiandrosterone sulfate, and triglyceride levels, and in low-density lipoprotein/high-density lipoprotein-cholesterol ratio (all P &lt; 0.005). Monthly ovulation rates increased after 9 months to 75 and 92%, respectively, with metformin alone or with combined therapy, but were unimproved with flutamide alone. All treatments were well tolerated. In conclusion, combined anti-androgen and insulin-sensitizing treatment in young, nonobese women with hyperinsulinemic hyperandrogenism had additive benefits on insulin sensitivity, hyperandrogenemia, and dyslipidemia. The data from this small study suggest that dyslipidemia is secondary to excess androgen action in concert with the hyperinsulinemia associated with insulin resistance. In contrast, anovulation seems to be mainly attributable to insulin resistance and hyperinsulinemia. Answer: Yes, there is evidence suggesting a role for insulin resistance in nonobese patients with idiopathic hirsutism (IH). Several studies have investigated the presence of insulin resistance in nonobese women with IH and have found varying results. One study included nonobese women with IH, nonobese women with hirsutism associated with polycystic ovary syndrome (PCOS), and healthy controls. 
It found that both nonobese patients with IH and those with PCOS had insulin resistance, with statistically significant differences in fasting insulin levels and HOMA-IR between the groups. The study concluded that insulin resistance occurs in nonobese patients with IH and appears to be related to android obesity (PUBMED:19298279). Another study, which included non-obese women with PCOS, non-obese women with IH, and non-obese healthy women, found that patients with PCOS had significantly higher basal insulin levels and HOMA scores than IH and control subjects. However, patients with IH did not show any difference from the control group, suggesting that insulin resistance may not be as prevalent in non-obese women with IH as it is in those with PCOS (PUBMED:22662009). A cross-sectional study compared nonobese women with PCOS, nonobese women with IH, and nonobese control women and found that patients with PCOS had significantly higher basal insulin levels and HOMA-IR scores than patients with IH. However, patients with IH also had significantly higher basal insulin levels and HOMA-IR scores than the control group, indicating that both PCOS and IH are associated with insulin resistance (PUBMED:24228029). Another study evaluating insulin resistance in IH compared with PCOS patients and healthy individuals found no significant difference in serum insulin levels, HOMA-IR, and prevalence of insulin resistance among the three groups, suggesting that insulin resistance was not more frequent in IH patients than in healthy control groups (PUBMED:25496462). A systematic review and meta-analysis also indicated that insulin resistance parameters are related to IH, although insulin resistance values in women with IH are not as high as in patients with PCOS. The study recommended measuring these metabolic parameters to evaluate all hirsute women with IH (PUBMED:35531788).
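Several of the abstracts above assess insulin resistance with HOMA-IR but do not restate the formula. For reference, the standard homeostasis model assessment is fasting insulin (µU/mL) multiplied by fasting glucose (mmol/L), divided by 22.5 (equivalently, glucose in mg/dL divided by 405). A minimal Python helper follows; the example values are purely hypothetical and are not taken from the cited studies.

def homa_ir(fasting_insulin_uU_ml, fasting_glucose_mmol_l):
    """HOMA-IR = fasting insulin (µU/mL) x fasting glucose (mmol/L) / 22.5."""
    return fasting_insulin_uU_ml * fasting_glucose_mmol_l / 22.5

# Hypothetical illustration only:
print(round(homa_ir(10.5, 4.8), 2))  # -> 2.24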
Instruction: Colorectal cancer surgical care and survival: do private health insurance, socioeconomic and locational status make a difference? Abstracts: abstract_id: PUBMED:16336380 Colorectal cancer surgical care and survival: do private health insurance, socioeconomic and locational status make a difference? Background: The purpose of the present paper was to examine patterns of surgical care and the likelihood of death within 5 years after a diagnosis of colorectal cancer, including the effects of demographic, locational and socioeconomic disadvantage and the possession of private health insurance. Methods: The Western Australian Data Linkage System was used to extract all hospital morbidity, cancer and death records for people with a diagnosis of colorectal cancer from 1982 to 2001. Demographic, hospital and private health insurance information was available for all years and measures of socioeconomic and locational disadvantage from 1991. A logistic regression model estimated the probability of receiving colorectal surgery. A Cox regression model estimated the likelihood of death from any cause within 5 years of diagnosis. Results: People were more likely to undergo colorectal surgery if they were younger, had less comorbidity and were married/defacto or divorced. People with a first admission to a private hospital (odds ratio (OR) 1.31, 95% confidence interval (CI): 1.16-1.48) or with private health insurance (OR 1.27, 95% CI: 1.14-1.42) were more likely to undergo surgery. Living in a rural or remote area made little difference, but a first admission to a rural hospital reduced the likelihood of surgery (OR 0.76, 95% CI: 0.66-0.87). Residency in lower socioeconomic areas also made no difference to the likelihood of having surgical treatment. The likelihood of death from any cause was lower in those who were younger, had less comorbidity, were elective admissions and underwent surgery. Residency in lower socioeconomic status and rural areas, admission to a rural hospital or a private hospital and possession of private health insurance had no effect on the likelihood of death. Conclusions: The present study demonstrates that socioeconomic and locational status and access to private health care had no significant effects on surgical patterns of care in people with colorectal cancer. However, despite the higher rates of surgery in the private hospitals and among those with private health insurance, their survival was no better. abstract_id: PUBMED:19716780 Provider recommendation for colorectal cancer screening: examining the role of patients' socioeconomic status and health insurance. Background: Provider recommendation for colorectal cancer (CRC) screening test has been shown as a strong predictor of patients' decision to be screened. Evidence of factors associated with provider recommendation remains limited and inconsistent. The present study sought to examine the association between provider recommendation for colorectal cancer screening and patients' socioeconomic status (SES) and insurance status using national survey data for 2005. Methods: Analyses were based on 2948 adult aged 50 and older who participated in the 2005 Health Information National Trend Survey (HINTS). Multivariate logistic regression models were used to determine whether the indicators of SES (income and educational level) and insurance status have any impact on provider recommendation. 
Results: Our study found a strong association between recall of health care provider recommendation and reported recent screening testing after controlling for other patient characteristics. When all the study population were included in the analysis, those who had lower than high school education and high school graduates were less likely to have received provider recommendation than those with higher than high school education (OR=0.49; 95%CI=0.32-0.73 and OR=0.60; 95%CI=0.47-0.78 respectively). Income and insurance were not significant predictors. Education remained significantly associated with provider recommendation when only those who had made one or more medical visits in the past year were included in the analyses. Conclusions: Patient's educational level--but not income or insurance status--was related to provider recommendation for CRC screening. To increase awareness of colorectal cancer risks and the benefit of screening, health care providers need to make a concerted effort to recommend colorectal cancer screening to all relevant patients, regardless of socioeconomic status and other personal characteristics. abstract_id: PUBMED:34831918 Impact of Socioeconomic Status on Cancer Incidence Risk, Cancer Staging, and Survival of Patients with Colorectal Cancer under Universal Health Insurance Coverage in Taiwan. This study examined the impact of socioeconomic status on colorectal cancer risk, staging, and survival under the National Health Insurance (NHI) system in Taiwan. Monthly salary and education level were used as measures of socioeconomic status to observe the risk of colorectal cancer among individuals aged 40 years or above in 2006-2015 and survival outcomes of patients with colorectal cancer until the end of 2016. Data from 286,792 individuals were used in this study. Individuals with a monthly salary ≤Q1 were at a significantly lower incidence risk of colorectal cancer than those with a monthly salary &gt;Q3 (HR = 0.80, 95% CI = 0.74-0.85), while those with elementary or lower education were at a significantly higher risk than those with junior college, university, or higher education (HR = 1.18, 95% CI = 1.06-1.31). The results show that socioeconomic status had no significant impact on colorectal cancer stage at diagnosis. Although salary was not associated with their risk of mortality, patients with colorectal cancer who had elementary or lower education incurred a significantly higher risk of mortality than those who had junior college, university, or higher education (HR = 1.39, 95% CI = 1.07-1.77). Education level is a significant determinant of the incidence risk and survival in patients with colorectal cancer, but only income significantly impacts incidence risk. abstract_id: PUBMED:22951313 Differences in mortality for surgical cancer patients by insurance and hospital safety net status. Recent research suggests hospitals serving low-income patients have poorer outcomes. However, safety net hospitals (SNHs) offering access to care regardless of insurance coverage may provide better care than low-income patients would otherwise receive. This study considers the association between insurance and mortality among surgical cancer patients and the role of SNHs. We estimate models of 1- and 5-year mortality on insurance, SNH status, patient characteristics, and hospital surgical volume for colorectal and breast cancer patients. Interaction terms between insurance and SNH status estimate how mortality differs by insurance source at SNHs. 
Medicaid and uninsurance are associated with significantly higher mortality for colorectal cancer patients. There is a statistically significant improvement in mortality for Medicaid colorectal cancer patients treated in SNHs relative to non-SNHs and a marginally significant improvement for uninsured breast cancer patients treated in SNHs. The results suggest a survival benefit for low-income patients treated in SNHs. abstract_id: PUBMED:33084023 Access denied: The relationship between patient insurance status and access to high-volume hospitals. Background: Underinsured patients face significant barriers in accessing high-quality care. Evidence of whether access to high-volume surgical care is mediated by disparities in health insurance coverage remains wanting. Methods: The authors used the National Cancer Data Base to identify all adult patients who had a confirmed diagnosis of breast, prostate, lung, or colorectal cancer during 2004 through 2016. The odds of receiving surgical care at a high-volume hospital were estimated according to the type of insurance using multivariable logistic regression analyses for each malignancy. Then, the interactions between study period and insurance status were assessed. Results: In total, 1,279,738 patients were included in the study. Of these, patients with breast cancer who were insured by Medicare (odds ratio [OR], 0.75; P < .001), Medicaid (OR, 0.55; P < .001), or uninsured (OR, 0.50; P < .001); patients with prostate cancer who were insured by Medicare (OR, 0.87; P = .003), Medicaid (OR, 0.58; P = .001), or uninsured (OR, 0.36; P < .001); and patients with lung cancer who were insured by Medicare (OR, 0.84; P = .020), Medicaid (OR, 0.74; P = .001), or uninsured (OR, 0.48; P < .001) were less likely to receive surgical care at high-volume hospitals compared with patients who had private insurance. For patients with colorectal cancer, the effect of insurance differed by study period, and improved since 2011. For those on Medicaid, the odds of receiving care at a high-volume hospital were 0.51 during 2004 through 2007 and 0.99 during 2014 through 2016 (P for interaction = .001); for uninsured patients, the odds were 0.45 during 2004 through 2007 and 1.19 during 2014 through 2016 (P for interaction < .001) compared with patients who had private insurance. Conclusions: Uninsured, Medicare-insured, and Medicaid-insured patients are less likely to receive surgical care at high-volume hospitals. For uninsured and Medicaid-insured patients with colorectal cancer, the odds of receiving care at high-volume hospitals have improved since implementation of the Patient Protection and Affordable Care Act of 2010.
Logistic regression and Cox proportional hazards models were used to estimate associations and cancer-specific survival. Results: A total of 364,507 patients aged 18 to 64 years were identified (134,105 with breast cancer, 106,914 with prostate cancer, 62,606 with lung cancer, and 60,882 with colorectal cancer). Overall, patients from the most disadvantaged communities (median household income, $42,885; patients below the poverty level, 22%; patients completing college, 17%) were more likely to present with distant disease (odds ratio, 1.6; P < .001) and were less likely to receive cancer-directed surgery (odds ratio, 0.8; P < .001) than the least disadvantaged communities (median income, $78,249; patients below the poverty level, 9%; patients completing college, 42%). The differences persisted across quintiles regardless of the insurance status. The effect of having insurance on cancer-specific survival was more pronounced in disadvantaged communities (relative benefit at 3 years, 40% vs 31%). However, it did not fully mitigate the effect of social determinants on mortality (hazard ratio, 0.75 vs 0.68; P < .001). Conclusions: Cancer patients from disadvantaged communities benefit most from health insurance, and there is a reduction in disparities in outcome. However, the gap produced by social determinants of health cannot be bridged by insurance alone. Cancer 2017;123:1219-1227. © 2016 American Cancer Society. abstract_id: PUBMED:30989212 Increased Healthcare Utilization by Patients With Inflammatory Bowel Disease Covered by Medicaid at a Tertiary Care Center. Background: Low socioeconomic status has been linked with numerous poor health outcomes, but data are limited regarding the impact of insurance status on inflammatory bowel disease (IBD) outcomes. We aimed to characterize utilization of healthcare resources by IBD patients based on health insurance status, using Medicaid enrollment as a proxy for low socioeconomic status. Methods: We retrospectively identified adult patients with IBD engaged in a colorectal cancer surveillance colonoscopy program from July 2007 to June 2017. Our primary outcomes included emergency department (ED) visits, inpatient hospitalizations, biologic infusions, and steroid exposure, stratified by insurance status. We compared patients who had ever been enrolled in Medicaid with all other patients. Results: Of 947 patients with IBD, 221 (23%) had been enrolled in Medicaid. Compared with patients with other insurance types, patients with Medicaid had higher rates of ever being admitted to the hospital (77.6% vs 42.6%, P < 0.0001) or visiting the ED (90.5% vs 38.4%, P < 0.0001). When adjusted for sex, age at first colonoscopy, and ethnicity, patients with Medicaid had a higher rate of inpatient hospitalizations (Rate ratio [RR] 2.95; 95% CI 2.59-3.36) and ED visits (RR 4.24; 95% CI 3.82-4.70) compared to patients with other insurance. Patients with Medicaid had significantly higher prevalence of requiring steroids (62.4% vs 37.7%, P < 0.0001), and after adjusting for the same factors, the odds of requiring steroids in the patients with Medicaid was increased (OR 3.77; 95% CI 2.53-5.62). Conclusions: Medicaid insurance was a significant predictor of IBD care and outcomes. Patients with Medicaid may have less engagement in IBD care and seek emergency care more often. abstract_id: PUBMED:23930216 Cancer survival disparities by health insurance status.
Previous studies found that uninsured and Medicaid insured cancer patients have poorer outcomes than cancer patients with private insurance. We examined the association between health insurance status and survival of New Jersey patients 18-64 diagnosed with seven common cancers during 1999-2004. Hazard ratios (HRs) with 95% confidence intervals for 5-year cause-specific survival were calculated from Cox proportional hazards regression models; health insurance status was the primary predictor with adjustment for other significant factors in univariate chi-square or Kaplan-Meier survival log-rank tests. Two diagnosis periods by health insurance status were compared using Kaplan-Meier survival log-rank tests. For breast, colorectal, lung, non-Hodgkin lymphoma (NHL), and prostate cancer, uninsured and Medicaid insured patients had significantly higher risks of death than privately insured patients. For bladder cancer, uninsured patients had a significantly higher risk of death than privately insured patients. Survival improved between the two diagnosis periods for privately insured patients with breast, colorectal, or lung cancer and NHL, for Medicaid insured patients with NHL, and not at all for uninsured patients. Survival from cancer appears to be related to a complex set of demographic and clinical factors of which insurance status is a part. While ensuring that everyone has adequate health insurance is an important step, additional measures must be taken to address cancer survival disparities. abstract_id: PUBMED:24204073 Health insurance is associated with preventive care but not personal health behaviors. Background: Economists posit 2 mechanisms increasing financial risk to insurers after health insurance gain: ex ante moral hazard (riskier behavior because of reduced personal costs) and ex post moral hazard (increased use of care because of lower care costs). In contrast, the Health Belief Model (HBM), would anticipate no increase in risk behaviors while also predicting increased health care utilization following insurance gain (because of reduced financial barriers to accessing care). Empirical studies examining the association of insurance change with changes in preventive care and health behaviors have been limited and yielded mixed findings. The objective of this study was to examine the association of health insurance change (gain or loss of coverage) with changes in preventive care and health behaviors in a large, nationally representative sample. Methods: We analyzed data from adults ≥18 years old and enrolled for 2 years in the 2000 to 2009 Medical Expenditure Panel Surveys (n = 76,518). Conditional logistic regression analyses modeled year-to-year individual changes in preventive care and health behaviors associated with individual changes in insurance status, adjusting for characteristics varying year to year (income, employment, total health care expenditures, office visits, prescriptions, availability of usual source of care, and health status). Preventive care included adherence to influenza vaccination, colorectal cancer screening, mammography, and Papanicolaou and prostate-specific antigen testing. Health behaviors examined were becoming nonobese, quitting smoking, and adopting consistent use of seatbelts. 
Results: Insurance gain (loss) was associated with increases (decreases) in preventive care (adjusted odds ratios [95% confidence intervals]: influenza vaccine, 1.27 [1.04-1.56]; colorectal cancer screening, 1.48 [0.96-2.29]; Papanicolaou testing, 1.56 [1.22-2.00]; mammography, 1.70 [1.21-2.38]; prostate-specific antigen, 1.42 [0.98-2.05]). Insurance change was not associated with significant changes in health behaviors. Conclusions: Consistent with both economic theory and the HBM, preventive care increased (decreased) after gaining (losing) coverage. In contrast, health behaviors changed little after insurance change, consistent with the HBM but not with the potential for decreased personal health care costs (ex ante moral hazard). abstract_id: PUBMED:16774689 The effects of socioeconomic status, accessibility to services and patient type on hospital use in Western Australia: a retrospective cohort study of patients with homogenous health status. Background: This study aimed to investigate groups of patients with a relatively homogenous health status to evaluate the degree to which use of the Australian hospital system is affected by socio-economic status, locational accessibility to services and patient payment classification. Method: Records of all deaths occurring in Western Australia from 1997 to 2000 inclusive were extracted from the WA mortality register and linked to records from the hospital morbidity data system (HMDS) via the WA Data Linkage System. Adjusted incidence rate ratios of hospitalisation in the last, second and third years prior to death were modelled separately for five underlying causes of death. Results: The independent effects of socioeconomic status on hospital utilisation differed markedly across cause of death. Locational accessibility was generally not an independent predictor of utilisation except in those dying from ischaemic heart disease and lung cancer. Private patient status did not globally affect utilisation across all causes of death, but was associated with significantly decreased utilisation three years prior to death for those who died of colorectal, lung or breast cancer, and increased utilisation in the last year of life in those who died of colorectal cancer or cerebrovascular disease. Conclusion: It appears that the Australian hospital system may not be equitable since equal need did not equate to equal utilisation. Further it would appear that horizontal equity, as measured by equal utilisation for equal need, varies by disease. This implies that a 'one-size-fits-all' approach to further improvements in equity may be over simplistic. Thus initiatives beyond Medicare should be devised and evaluated in relation to specific areas of service provision. Answer: The relationship between colorectal cancer surgical care and survival and factors such as private health insurance, socioeconomic status, and locational status has been explored in various studies. According to the findings from these studies, there are several points to consider: 1. Private health insurance seems to increase the likelihood of undergoing colorectal surgery. Patients with a first admission to a private hospital or with private health insurance were more likely to undergo surgery. However, survival rates were not significantly better among those with private health insurance compared to those without it (PUBMED:16336380). 2. Socioeconomic status (SES) and locational status appear to have mixed effects. 
For instance, one study found that SES, as measured by education level, was a significant determinant of the incidence risk and survival in patients with colorectal cancer, but only income significantly impacted incidence risk (PUBMED:34831918). Another study indicated that residency in lower socioeconomic areas and rural areas, admission to a rural hospital, or a private hospital and possession of private health insurance had no effect on the likelihood of death (PUBMED:16336380). 3. Insurance status can influence access to high-volume hospitals, which are often associated with better outcomes. Uninsured, Medicare-insured, and Medicaid-insured patients were less likely to receive surgical care at high-volume hospitals compared to those with private insurance. However, for colorectal cancer patients on Medicaid or uninsured, access to high-volume hospitals improved after the implementation of the Patient Protection and Affordable Care Act of 2010 (PUBMED:33084023). 4. Health insurance can mitigate some effects of social determinants of health on cancer care, but it does not fully bridge the gap produced by these determinants. Cancer patients from disadvantaged communities benefit most from health insurance, but disparities in outcome persist (PUBMED:27859019). 5. Provider recommendation for colorectal cancer screening is associated with patients' educational level, but not income or insurance status. This suggests that efforts to increase awareness and screening should be made across all socioeconomic groups (PUBMED:19716780). 6. The effects of socioeconomic status on hospital use in Western Australia showed that equal need did not equate to equal utilization, suggesting that the hospital system may not be equitable (PUBMED:16774689). In summary, while private health insurance and socioeconomic factors can influence the likelihood of receiving colorectal cancer surgery, they do not necessarily translate into better survival outcomes.
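The adjusted odds ratios cited in this answer (for example, OR 1.31 for a first admission to a private hospital in PUBMED:16336380) come from multivariable logistic regression. As a generic reference only, not specific to any single study above, the model is \operatorname{logit}(p) = \ln\frac{p}{1-p} = \beta_0 + \beta_1 x_1 + \cdots + \beta_p x_p, with \mathrm{OR}_j = e^{\beta_j}. On that reading, an OR of 1.31 corresponds to a coefficient of roughly ln(1.31) ≈ 0.27 on the private-hospital indicator, i.e. about 31% higher odds of undergoing surgery, holding the other covariates (age, comorbidity, marital status, and so on) fixed.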
Instruction: Can Dysphagia Following Anterior Cervical Fusions With rhBMP-2 Be Reduced With Local Depomedrol Application? Abstracts: abstract_id: PUBMED:27018896 Can Dysphagia Following Anterior Cervical Fusions With rhBMP-2 Be Reduced With Local Depomedrol Application?: A Prospective, Randomized, Placebo-Controlled, Double-Blind Trial. Study Design: A prospective, randomized, placebo-controlled, double-blind trial. Objective: The aim of this study was to investigate whether the local administration of depomedrol decreases the severity of dysphagia after anterior cervical discectomy and fusion (ACDF) surgery using bone morphogenetic protein (BMP). Summary Of Background Data: Although recombinant human BMP-2 is effective in promoting arthrodesis, many physicians avoid using it in anterior cervical spine fusions due to concern for increased incidence of dysphagia, significant pre-vertebral swelling, and airway compromise. Pilot studies have shown that the local application of depomedrol may decrease the incidence of postoperative dysphagia. We performed a prospective, randomized trial to evaluate the efficacy of local depomedrol application in reducing the severity of postoperative dysphagia following anterior cervical fusions using low-dose rhBMP-2. We hypothesized that locally administered depomedrol reduces dysphagia following such surgeries. Methods: Fifty patients between 18 and 70 years of age, undergoing 1, 2, and 3-level ACDFs, were randomized to 1 of 2 groups: BMP-2 with depomedrol or BMP-2 with saline. Patients were followed for 4 weeks postoperatively by the study administrator. Dysphagia was measured at 5 time intervals (postoperative days 1, 4, 7, 14, and 28) using a 4-point Modified Dysphagia Scoring System. Additional data regarding overall length of hospital stay and the administration of dysphagia-directed treatments were also recorded. Results: Twenty-seven patients were randomized to the treatment (depomedrol) group and 23 were randomized to the control (saline) group. The 2 groups were nearly identical in terms of their demographic and operative characteristics. Patients receiving depomedrol experienced decreased dysphagia incidence and magnitude at all time intervals, with differences reaching statistical significance on postoperative days 4, 7, 14, and 28 (P < 0.05). Conclusion: This study provides Level 1 evidence that locally administered depomedrol on a collagen sponge significantly decreases postoperative dysphagia incidence and magnitude following anterior cervical spine fusion using low-dose rhBMP-2. Level Of Evidence: 1. abstract_id: PUBMED:33982193 The effect of local intraoperative corticosteroid application on postoperative dysphagia following anterior cervical spine surgery. Dysphagia is a common complication following anterior cervical spine surgery (ACSS). Although several studies have reported the potential benefit of local corticosteroid application on dysphagia, its safety and efficacy are still unclear. A systematic review was performed aiming to evaluate the evidence of local corticosteroid application in prevention or treatment of postoperative dysphagia following ACSS. A systematic search was performed in September 2018 in the PubMed and Embase databases.
The following information was extracted: study investigator, year of publication, number of patients, study design, inclusion/exclusion criteria, administration protocol of steroid, type of surgical procedure, number of levels performed, assessment methodology of dysphagia, radiologic assessment of prevertebral soft tissue swelling (PSTS), follow-up time points, outcome of dysphagia, and corticosteroid-related complications. Qualitative synthesis was performed. Finally, 5 studies met the inclusion/exclusion criteria. Four studies found that local corticosteroid application could decrease the incidence and magnitude of postoperative dysphagia, while 1 study showed no significant effect on dysphagia at 6 weeks and 3 months of follow-up. A total of 2325 patients received local corticosteroid intraoperatively; no early corticosteroid-related complication was reported. In total, 4 adverse events occurred during long-term follow-up, including 2 cases of bone nonunion at 1.5 and 2.5 years postoperatively and 2 esophageal perforations at 2 months and 11 months of follow-up, respectively. Local corticosteroid application can reduce the incidence and severity of dysphagia following ACSS without increasing early corticosteroid-related complications. However, further high-quality studies are necessary to analyze potential delayed complications. abstract_id: PUBMED:28315798 Bone Morphogenetic Proteins in Anterior Cervical Fusion: A Systematic Review and Meta-Analysis. Objective: Bone morphogenetic proteins (BMPs) have been commonly used as a graft substitute in spinal fusion. Although the U.S. Food and Drug Administration issued a warning on life-threatening complications of recombinant human BMPs (rhBMPs) in cervical spine fusion in 2008, their off-label use has continued. This investigation aimed to review the evidence for the use of rhBMP-2 and rhBMP-7 in anterior cervical spine fusions. Methods: A comprehensive search was performed through Ovid (MEDLINE), PubMed, and Embase. The risk of bias assessment was performed according to the criteria recommended by the Cochrane Back and Neck group and MINORS (Methodological Index for Non-Randomized Studies). A wide array of radiographic and clinical outcomes, including adverse events, was collated. Results: Eighteen articles (1 randomized and 17 nonrandomized) were eligible for inclusion. The fusion rate was higher with use of rhBMP in most studies and our meta-analysis of the pooled data from 4782 patients confirmed this finding (odds ratio, 5.45; P < 0.00001). Altogether, the rhBMP and control groups were comparable in patient-reported outcomes. However, most studies tended to show a significantly higher incidence of overall complication rate, dysphagia/dysphonia, cervical swelling, readmission, wound complications, neurologic complications, and ossification. Conclusions: Application of rhBMPs in cervical spine fusion yields a significantly higher fusion rate with similar patient-reported outcomes, yet increased risk of life-threatening complications. Thus, we do not recommend the use of rhBMP in anterior cervical fusions. abstract_id: PUBMED:35796834 Impact of local steroid application on dysphagia after anterior cervical spine surgery: a meta-analysis. Introduction: Dysphagia is one of the most common complications of anterior cervical spine surgery. Local steroids have been widely used to reduce postoperative swallowing pain. However, the effect of local steroid application on dysphagia after anterior cervical spine surgery remains uncertain.
Materials And Methods: We searched Medline (PubMed), Embase and the Cochrane Library on July 27, 2021 for studies investigating the effect of local steroid application on dysphagia after anterior cervical spine surgery from their date of inception to 2021. The relative risk or weighted mean difference with 95% confidence interval was recorded as the summary statistic for postoperative dysphagia, swallowing VAS scores, SWAL-QOL scores, PSTSI, and steroid-related complications. Results: This meta-analysis included 7 RCT studies involving 254 patients in the steroid group and 232 patients in the placebo group. Results showed that the local steroid group had fewer patients with dysphagia, lower swallowing VAS scores, and less severe prevertebral soft-tissue edema on the fourth day after surgery. No significant difference in non-fusion rate between the two groups was observed. None of the included studies reported serious steroid-related complications. Conclusions: The use of local steroids in anterior cervical spine surgery can reduce early postoperative dysphagia without serious steroid-related complications. However, the safety of local steroid application still needs confirmation in further studies with larger samples. abstract_id: PUBMED:24740657 rhBMP use in cervical spine surgery: associated factors and in-hospital complications. Background: Widespread use of recombinant human bone morphogenetic protein (rhBMP) in cervical spine surgery has continued despite the U.S. Food and Drug Administration's 2008 notification regarding its adverse effects. Our study goals were to analyze how patient, surgical, and institutional factors influenced rhBMP use in cervical spinal fusion surgery and to examine the cervical-spine-specific in-hospital complications associated with rhBMP use. Methods: The Nationwide Inpatient Sample database was used to identify 1,064,372 patients eighteen years or older who had undergone cervical spinal fusion surgery from 2003 through 2010. Of these patients, 84,726 (7.96%) received rhBMP. Multivariate logistic regression models were used to analyze patient, surgical, and institutional factors associated with rhBMP use, and the relationship between rhBMP use and the development of in-hospital complications. Results: On multivariate analysis, patient age and sex, insurance type, surgical approach, use of autograft bone, and hospital teaching status, size, and region were significant predictors of rhBMP use. Use of rhBMP was a significant predictor of complications on univariate analysis and on multivariate analysis adjusted for patient age and sex, Charlson comorbidity score, insurance status, surgical approach, autograft bone use, and hospital teaching status, size, and region. Use of rhBMP was significantly associated with the development of dysphagia (prevalence, 2.0%; adjusted odds ratio [OR], 1.53), dysphonia (prevalence, 0.28%; adjusted OR, 1.48), hematoma/seroma formation (prevalence, 0.7%; adjusted OR, 1.24), and neurological complications (prevalence, 0.84%; adjusted OR, 2.0). These complications were most commonly found after anterior cervical fusion surgery. Wound infections and neurological complications were most commonly found after posterior cervical fusion surgery. Dysphagia was most commonly found after circumferential fusion surgery. Conclusions: Indications for rhBMP use for cervical spinal fusion are multifactorial. Its use is associated with a significantly higher likelihood of cervical-spine-specific complications.
abstract_id: PUBMED:19531997 Is it safe to use recombinant human bone morphogenetic protein in posterior cervical fusion? Study Design: A retrospective chart review of all patients who underwent posterior cervical fusion during a 1-year time period. Objective: To analyze the safety profile of recombinant human bone morphogenetic protein (rhBMP)-2 in posterior cervical fusion. Summary Of Background Data: Use of rhBMP-2 in anterior cervical spine surgery has been associated with complications including postoperative edema, dysphagia, and hematoma formation. No literature exists regarding the safety of rhBMP-2 use for posterior cervical fusion. Methods: We reviewed all posterior cervical fusions performed during 1 year at our hospital following institutional review board permission. Results: A total of 83 patients underwent posterior cervical fusion. Sixty-seven (81%) did not receive rhBMP (control group), whereas 16 (19%) underwent fusion with rhBMP. The most common reason for the use of rhBMP was a failure of an anterior cervical fusion resulting in persistent neck pain, myelopathy, or radiculopathy. The average dose of rhBMP used was 1.3 mL per level. Immediate postoperative medical complications occurred in 5 (7%) in the control group and 2 (13%) in the rhBMP group. A new neurologic deficit after surgery was found in 3 (4%) in the control group and 1 (6%) in the rhBMP group. Wound infection requiring further surgery occurred in 8 (12%) in the control group, and in none in the rhBMP group. One (6%) patient in the rhBMP group had significant postoperative neck swelling, but improved with steroid treatment over 1 week, and without any compromise of vital neck structures. No postoperative hematomas were seen in either group. Chi-square analysis revealed that no complication variable reached significance (P < 0.05) when the 2 groups were compared. Conclusion: rhBMP can safely be used in patients undergoing posterior cervical fusion, as it does not produce complications at the rate previously seen with its use in the anterior cervical spine. abstract_id: PUBMED:30506289 Intravenous and local steroid use in the management of dysphagia after anterior cervical spine surgery: a systematic review of prospective randomized controlled trials (RCTs). Purpose: To conduct a high-level systematic review of the literature to evaluate intravenous and local steroid use in the management of dysphagia after anterior cervical spine surgery. Methods: We searched the databases PubMed, EMBASE, Clinical Key, the Cochrane Library and Wiley Online Library without time restriction using the terms 'dysphagia' or 'odynophagia' or 'swallowing disorder,' 'steroid,' 'anterior cervical spine surgery.' Randomized controlled trials (RCTs) were selected, and effects of intravenous and local steroids were investigated from these studies. Results: The initial search yielded 67 citations. Six of these studies met the inclusion and exclusion criteria. All of them were prospective RCTs which were evaluated as level 1 evidence. One study found that perioperative intravenous steroid use is not effective in reducing severity of early postoperative dysphagia after one-level ACDF, while five studies suggested that perioperative intravenous and local steroid use seemed to be more effective in reducing incidence and severity of early dysphagia after multilevel ACSS.
One double-blinded study still found that locally administered depomedrol on a collagen sponge significantly decreases early dysphagia incidence and severity after ACSS using low-dose BMP. Conclusions: Perioperative intravenous and local steroid use could reduce incidence and severity of early dysphagia after ACSS postoperatively, especially for multilevel surgeries. For the patients underwent revision surgery or at high risk for pseudarthrosis having to use BMP during surgery, locally administered depomedrol on a collagen sponge was suggested to decrease early dysphagia incidence and severity after ACSS. These slides can be retrieved under Electronic Supplementary Material. abstract_id: PUBMED:31220039 Contained-Delivery Route and the Administration of Postoperative Steroids Following Anterior Cervical Spinal Fusion With Low-dose rhBMP-2 Reduces the Magnitude of Respiratory Compromise. Study Design: Retrospective chart review of patients who underwent anterior cervical discectomy and fusion (ACDF) using low-dose recombinant human bone morphogenic protein (rhBMP)-2. Objective: Determine whether severity and incidence of respiratory complications after ACDF surgery are decreased when using a low-dose BMP-infused sponge within a constrained carrier and postoperative IV and oral steroids. Summary Of Background Data: Many physicians avoid using BMP in anterior cervical spine fusions because of concern for an increased incidence of dysphagia, significant prevertebral swelling, and airway edema compromise. Pilot studies have shown that the local application of steroids may decrease the incidence of postoperative airway edema and dysphagia. We performed a retrospective study to evaluate the safety of immediate postoperative tapered steroid use following low-dose rhBMP-2 completely contained inside either an allograft or PEEK cage in reducing the severity and incidence of respiratory complications following ACDF. Materials And Methods: Forty-seven patients between 33 and 74 years of age, undergoing 1-, 2-, 3-, or 4-level ACDFs augmented with a fraction of a small sponge of rhBMP-2 (0.525 mg/level) within an allograft or PEEK cage and prescribed IV and oral postoperative steroids between January 1, 2008 and November 1, 2016. The incidence of complications such as dyspnea, dysphagia, airway issues, edema ectopic bone osteolysis, radiculitis, and nonunion were collected using medical charts. Additional data regarding length of hospital stay and readmissions were also recorded. Results: No life-threatening respiratory events, such as prolonged intubation, re-intubation, or readmission for labored breathing, were observed. No complications referable to steroids such as delayed healing, uncontrollable blood sugar, or diabetes were encountered in this series. Conclusions: This study provides evidence that a tapered dose of steroids and a contained delivery route significantly decreases postoperative respiratory compromise incidence and magnitude following anterior cervical spine fusion using low-dose rhBMP-2. abstract_id: PUBMED:26217385 Complications of Anterior Cervical Fusion using a Low-dose Recombinant Human Bone Morphogenetic Protein-2. Objective: There are several reports, which documented a high incidence of complications following the use of recombinant human bone morphogenetic protein-2 (rhBMP-2) in anterior cervical fusions (ACFs). The objective of this study is to share our experience with low-dose rhBMP-2 in anterior cervical spine. 
Methods: We performed a retrospective analysis of 197 patients who underwent anterior cervical fusion (ACF) with the use of recombinant human bone morphogenetic protein-2 (rhBMP-2) during 2007-2012. A low-dose rhBMP-2 (0.7 mg/level) sponge was placed exclusively within the cage. In 102 patients, demineralized bone matrix (DBM) was packed around the BMP sponge. Incidence and severity of dysphagia were determined with a 5-point SWAL-QOL scale. Results: Two patients had prolonged hospitalization due to BMP-unrelated causes. Following discharge, 13.2% (n=26) of patients developed dysphagia and 8.6% (n=17) complained of neck swelling. More than half of the patients (52.9%, n=9) with neck swelling also had associated dysphagia; however, only 2 of these patients required readmission. Both of these patients responded well to intravenous dexamethasone. The use of DBM did not affect the incidence and severity of complications (p>0.05). Clinico-radiological evidence of fusion was not observed in 2 patients. Conclusion: Low-dose rhBMP-2 in ACFs is not without risk. However, the incidence and severity of complications seem to be lower with low-dose BMP placed exclusively inside the cage. Packing DBM putty around the BMP sponge does not affect the safety profile of rhBMP-2 in ACFs. abstract_id: PUBMED:38468654 Perspective: Cervical laminoforaminotomy (CLF) is safer than anterior cervical diskectomy/fusion (ACDF) for lateral cervical disease. Background: The literature documents that laminoforaminotomy (CLF), whether performed open, minimally invasively, or microendoscopically, is safer than anterior cervical diskectomy/fusion (ACDF) for lateral cervical disease. Methods: ACDF for lateral cervical disc disease and/or spondylosis exposes patients to multiple major surgical risk factors not encountered with CLF. These include carotid artery or jugular vein injuries, esophageal tears, dysphagia, recurrent laryngeal nerve injuries, and tracheal injuries. CLF is also associated with lower rates of vertebral artery injury, dural tears (DT)/cerebrospinal fluid fistulas, instability warranting fusion, adjacent segment disease (ASD), and cord and/or nerve root injuries. Results: Further, CLF vs. ACDF for lateral cervical pathology offers reduced tissue damage, operative time, estimated blood loss (EBL), length of stay (LOS), and cost. Conclusion: CLF, whether performed open, minimally invasively, or microendoscopically, offers greater safety, major pros with few cons, and decreased costs vs. ACDF for lateral cervical disease. Answer: Yes, dysphagia following anterior cervical fusions with rhBMP-2 can be reduced with local depomedrol application. A prospective, randomized, placebo-controlled, double-blind trial provided Level 1 evidence that locally administered depomedrol on a collagen sponge significantly decreases postoperative dysphagia incidence and magnitude following anterior cervical spine fusion using low-dose rhBMP-2 (PUBMED:27018896). Additionally, a systematic review found that local corticosteroid application can reduce the incidence and severity of dysphagia following anterior cervical spine surgery (ACSS) without increasing early corticosteroid-related complications (PUBMED:33982193).
Another meta-analysis supported these findings, showing that local steroid application resulted in fewer patients with dysphagia, lower swallowing VAS scores, and less severe prevertebral soft-tissue edema on the fourth day after surgery, without serious steroid-related complications (PUBMED:35796834). Furthermore, a systematic review of prospective randomized controlled trials (RCTs) concluded that perioperative intravenous and local steroid use could reduce the incidence and severity of early dysphagia after ACSS, especially for multilevel surgeries. It also suggested that for patients undergoing revision surgery or at high risk for pseudarthrosis requiring BMP, locally administered depomedrol on a collagen sponge was recommended to decrease early dysphagia incidence and severity (PUBMED:30506289). In summary, the evidence from multiple studies indicates that the application of local depomedrol can effectively reduce the severity and incidence of dysphagia following anterior cervical fusions with rhBMP-2.
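As a supplement to the meta-analytic findings cited above (PUBMED:35796834), the following Python sketch illustrates how dysphagia counts from individual trials can be pooled into a summary relative risk using fixed-effect, inverse-variance weighting of log relative risks. The trial counts are fabricated placeholders, and the published analysis may have used a different estimator (e.g., Mantel-Haenszel or a random-effects model), so this is a schematic of the general approach rather than a reproduction of that study.

import math

def rr_log_var(events_t, n_t, events_c, n_c):
    """Log relative risk and its variance for one trial (treatment vs. control)."""
    rr = (events_t / n_t) / (events_c / n_c)
    var = 1/events_t - 1/n_t + 1/events_c - 1/n_c
    return math.log(rr), var

def pooled_rr(trials, z=1.96):
    """Fixed-effect, inverse-variance pooled relative risk with 95% CI."""
    num = den = 0.0
    for trial in trials:
        log_rr, var = rr_log_var(*trial)
        weight = 1.0 / var
        num += weight * log_rr
        den += weight
    pooled = num / den
    se = math.sqrt(1.0 / den)
    return tuple(math.exp(v) for v in (pooled, pooled - z * se, pooled + z * se))

# Hypothetical (events, n) pairs for the steroid and placebo arms of three trials.
trials = [(8, 40, 15, 38), (5, 30, 11, 32), (10, 45, 18, 44)]
print(pooled_rr(trials))  # a pooled RR below 1 favours the steroid group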
Instruction: Does bone SPECT actually have lower sensitivity for detecting vertebral metastasis than MRI? Abstracts: abstract_id: PUBMED:8683325 Does bone SPECT actually have lower sensitivity for detecting vertebral metastasis than MRI? Unlabelled: We compared the ability of bone SPECT and MRI to detect vertebral metastasis. Methods: Skeletal scintigraphy, including planar and SPECT imaging, and spinal MRI examinations, were performed in 22 cancer patients in whom a total of 88 metastatic foci and 12 degenerative joint disease lesions were detected. Metastatic foci were defined as lesions that suggested metastasis on MRI and/or bone destruction on radiographs or CT and/or aggravation of increased tracer uptakes on serial bone scans. Image reconstruction of axial, coronal and sagittal sections was processed in a 128 X 128 matrix. MRI studies were performed with a 1.5 tesla signal scanner using fast spin-echo sequences. T1- and T2-weighted images were obtained in the sagittal plane. Results: Twenty patients had at least one vertebral metastasis. MRI diagnosed 86 of the 88 (97.7%) metastatic foci; bone SPECT correctly diagnosed 81 of 88 (92.0%); and planar imaging detected 62 of 88 (70.4%). The two vertebrae with metastasis not detected by MRI were clearly seen by bone SPECT. Extra-vertebral body metastases (e.g. in the pedicle, lamina, transverse and spinous processes) were, however, most often detected by SPECT, followed by MRI and planar imaging (40 versus 32 versus 4). Conclusion: Vertebral SPECT, using high-resolution SPECT equipment, produced excellent results that were comparable to and complementary with MRI in detecting vertebral metastasis. Our data suggest that vertebral SPECT is superior to MRI in detecting extra-vertebral body metastasis. abstract_id: PUBMED:27752772 Detection of vertebral metastases: a meta-analysis comparing MRI, CT, PET, BS and BS with SPECT. Objective: To perform a meta-analysis to compare the diagnostic value of magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET), bone scintigraphy (BS) and BS with single-photon emission computed tomography (SPECT) in detecting vertebral metastases. Methods: Relevant original articles published from January 1995 to December 2015 were searched. Two reviewers independently extracted data. Software called "META-DiSc" was used to obtain pooled estimates of sensitivity, specificity, diagnostic odds ratio (DOR) and summary receiver operating characteristic (SROC) curves. Results: Twenty-three articles consisting of 33 studies fulfilled all inclusion criteria. On a per-patient basis, for sensitivity, MRI = PET = SPECT > CT = BS ("=" indicated no significant difference, P > 0.05; ">" indicated significantly higher, P < 0.05). For specificity, MRI = CT = BS > SPECT > PET. For DOR, MRI > SPECT > BS > CT = PET. SROC curves for SPECT and MRI showed better diagnostic accuracy than others. On a per-lesion basis, for sensitivity, PET = SPECT = MRI > BS > CT. For specificity, MRI = CT > PET = SPECT = BS. For DOR, MRI > SPECT > CT = PET > BS. SROC curves showed MRI had the best while CT had the lowest diagnostic accuracy. Conclusion: For diagnosis of vertebral metastases, MRI was found to be the best modality and also better than other techniques on both a per-patient and a per-lesion basis. abstract_id: PUBMED:21188062 Efficacy of SPECT over planar bone scan in the diagnosis of solitary vertebral lesions in patients with low back pain.
Background: The purpose of our study was to evaluate the efficacy of single photon emission computed tomography (SPECT) over planar bone scan in identifying solitary vertebral lesions in patients with low backache and its ability to differentiate various pathologies according to the uptake pattern. Materials And Methods: The study included twenty patients, of whom six presented with known carcinoma and fourteen with low back pain. SPECT was done in all patients following the planar skeletal survey. Benign and malignant lesions were identified according to the uptake pattern in vertebral elements, based on Gary F. Gates' observations. Final diagnosis was obtained by means of biopsy or correlation with radiograph or computed tomography (CT) or magnetic resonance imaging (MRI), and/or follow-up. Results: SPECT detected an additional 30% of solitary vertebral lesions that were obscured on the planar scan. Seven out of twenty were localized in the anterior vertebral body and were diagnosed as benign osteophytes in six and osteoma in one, substantiating the previous observations. Out of six cases of known carcinoma, three had solitary metastases and showed posterior vertebral body uptake with pedicle involvement. SPECT could localize specific lesions as the source of pain in eleven patients with low back pain (78%) and identified various etiologies including benign tumors (osteoid osteoma and osteoma), facet arthritis, discitis, transverse process fractures and spondylolysis. Conclusion: Our study highlighted the higher diagnostic value of SPECT over planar skeletal scintigraphy in localizing solitary vertebral lesions in low backache patients. Based on the SPECT pattern, malignant and benign lesions could be differentiated in the given clinical context. abstract_id: PUBMED:11456373 Bone scintigraphy and the added value of SPECT (single photon emission tomography) in detecting skeletal lesions. Skeletal metastases are one of the major clinical problems for the oncologist. Over the last several decades bone scintigraphy has been used extensively in detecting bone involvement since it can provide information about disease location, prognosis and the effectiveness of treatment. Bone scan offers the advantage of total body examination, and images bone lesions earlier than other techniques. In this paper the main clinical problems related to the most common applications of bone scan in breast, prostate, lung cancer and other tumours are discussed. The experience carried out at the National Cancer Institute of Milan by using bone SPECT to detect single bone metastases is reported. One hundred and eighteen patients with bone metastases (from different tumour types: breast, lung, prostate, lymphomas, etc.) were studied by planar scintigraphy, SPECT and other radiological modalities (CT, MRI or X-rays). The overall performance of bone SPECT was as follows: sensitivity 90.5% (19/21), specificity 92.8% (90/97), positive predictive value 73% (19/26), negative predictive value 97.8% (90/92), accuracy 92.4% (109/118). Considering breast cancer, the most frequent pathology in our series, and the lumbar spinal tract, the most common skeletal segment involved, the figures of merit of SPECT were: sensitivity 100% (4/4), specificity 95.3% (41/43), positive predictive value 66.7% (4/6), negative predictive value 100% (41/41), accuracy 95.7% (45/47). In conclusion, bone SPECT showed very good performance, in particular improving the predictive value of the planar scan in the diagnosis of vertebral metastases.
abstract_id: PUBMED:25680341 Bone metastasis imaging with SPECT/CT/MRI: a preclinical toolbox for therapy studies. Bone is one of the most common metastatic target sites in breast cancer, with more than 200 thousand new cases of invasive cancer diagnosed in the US alone in 2011. We set out to establish a multimodality imaging platform for bone metastases in small animals as a tool to non-invasively quantify metastasis growth, imaging the ensuing bone lesions and possibly the response to treatment. To this end, a mouse model of osteolytic metastatic bone tumors was characterized with SPECT/CT and MRI over time. A cell line capable of forming bone metastases, MDA-MB-231, was genetically modified to stably express the reporter gene herpes simplex virus-1 thymidine kinase (hsv-1 tk). The intracellular accumulation of the radiolabeled tracer [(123)I]FIAU promoted by HSV-1 TK specifically pinpoints the location of tumor cells which can be imaged in vivo by SPECT. First, a study using tumors implanted subcutaneously was performed. The SPECT/MRI overlays and the ex vivo γ-counting showed a linear correlation in terms of %ID/cm³ (R²=0.93) and %ID/g (R²=0.77), respectively. Then, bone metastasis growth was imaged weekly by SPECT/CT and T2-weighted MRI over a maximum of 40 days post-intracardiac injection of tumor cells. The first activity spots detectable with SPECT, around day 20 post-cell injection, were smaller than 2 mm³ and not yet visible by MRI and increased in volume and in %ID over the weeks. Osteolytic bone lesions were visible by CT (in vivo) and μCT (ex vivo). The SPECT/MRI overlays also showed a linear correlation in terms of %ID/cm³ (R²=0.86). In conclusion, a new multimodality imaging platform has been established that non-invasively combines images of active tumor areas (SPECT), tumor volume (MRI) and the corresponding bone lesions (CT and μCT). To our knowledge this is the first report where the combination of soft tissue information from MRI, bone lesions by CT, and reporter gene imaging by SPECT is used to non-invasively follow metastatic bone lesions. abstract_id: PUBMED:34693502 Whole-body MRI: detecting bone metastases from prostate cancer. Whole-body magnetic resonance imaging (WB-MRI) is currently used worldwide for detecting bone metastases from prostate cancer. The 5-year survival rate for prostate cancer is > 95%. However, an increase in survival time may increase the incidence of bone metastasis. Therefore, detecting bone metastases is of great clinical interest. Bone metastases are commonly located in the spine, pelvis, shoulder, and distal femur. Bone metastases from prostate cancer are well-known representatives of osteoblastic metastases. However, other types of bone metastases, such as mixed or inter-trabecular type, have also been detected using MRI. MRI does not involve radiation exposure and has good sensitivity and specificity for detecting bone metastases. WB-MRI has undergone gradual developments since the last century, and in 2004, Takahara et al. developed diffusion-weighted imaging (DWI) with background body signal suppression (DWIBS). Since then, WB-MRI, including DWI, has continued to play an important role in detecting bone metastases and monitoring therapeutic effects. An imaging protocol that allows complete examination within approximately 30 min has been established. This review focuses on WB-MRI standardization and the automatic calculation of tumor total diffusion volume (tDV) and mean apparent diffusion coefficient (ADC) value.
In the future, artificial intelligence (AI) will enable shorter imaging times and easier automatic segmentation. abstract_id: PUBMED:25257661 Single photon emission computed tomography (SPECT) and SPECT/low-dose computerized tomography did not increase sensitivity or specificity compared to planar bone scintigraphy for detection of bone metastases in advanced breast cancer. Purpose: To evaluate and compare the diagnostic performance of whole-body planar bone scintigraphy (WBS), single photon emission computed tomography (SPECT), SPECT/low-dose computerized tomography (SPECT/ldCT) and SPECT/contrast enhanced diagnostic CT (SPECT/cdCT) in the staging of patients with advanced breast cancer. Methods: Seventy-eight patients with recurrence of biopsy-proven breast cancer and suspicion of disseminated disease were investigated with WBS, SPECT, SPECT/ldCT, SPECT/cdCT and MRI performed on the same day in this prospective study. Images were separately analysed in a blinded fashion by radiologists and nuclear medicine physicians regarding the presence of pathological findings. MRI served as reference standard. Results: According to reference standard, 38 of 73 patients had bone metastases. The sensitivity was 87%, 87%, 79%, and 84% and specificity 63%, 71%, 63% and 83% for WBS, SPECT, SPECT/ldCT and SPECT/cdCT. A significantly increased specificity of SPECT/cdCT compared to WBS and SPECT/ldCT was found, and other parameters did not differ significantly between modalities. Additional two patients had bone metastases solely located outside the MRI scan field and seven patients had soft tissue metastases, but no skeletal changes on MRI. Conclusion: WBS, SPECT and SPECT/ldCT were less sensitive than MRI and equally specific for the detection of bone metastases in patients with advanced breast cancer. Based on our findings, we suggest that initial staging include WBS, MRI of the spine and CT for soft tissue evaluation. Further studies may clarify the potential benefits of whole-body MRI and 18F-NaF PET/CT or 18F-FDG PET/CT. abstract_id: PUBMED:28798033 A Prospective Study Comparing 99mTc-Hydroxyethylene-Diphosphonate Planar Bone Scintigraphy and Whole-Body SPECT/CT with 18F-Fluoride PET/CT and 18F-Fluoride PET/MRI for Diagnosing Bone Metastases. We prospectively evaluated and compared the diagnostic performance of 99mTc-hydroxyethylene-diphosphonate (99mTc-HDP) planar bone scintigraphy (pBS), 99mTc-HDP SPECT/CT, 18F-NaF PET/CT, and 18F-NaF PET/MRI for the detection of bone metastases. Methods: One hundred seventeen patients with histologically proven malignancy referred for clinical pBS were prospectively enrolled. pBS and whole-body SPECT/CT were performed followed by 18F-NaF PET/CT within 9 d. 18F-NaF PET/MRI was also performed in 46 patients. Results: Bone metastases were confirmed in 16 patients and excluded in 101, which was lower than expected. The number of equivocal scans was significantly higher for pBS than for SPECT/CT and PET/CT (18 vs. 5 and 6, respectively; P = 0.004 and 0.01, respectively). When equivocal readings were excluded, no statistically significant difference in sensitivity, specificity, positive predictive value, negative predictive value, or overall accuracy were found when comparing the different imaging techniques. In the per-patient analysis, equivocal scans were either assumed positive for metastases ("pessimistic analysis") or assumed negative for metastases ("optimistic analysis"). 
The percentages of misdiagnosed patients for the pessimistic analysis were 21%, 15%, 9%, and 7% for pBS, SPECT/CT, PET/CT, and PET/MRI, respectively. Corresponding figures for the optimistic analysis were 9%, 12%, 5%, and 7%. In those patients identified as having bone metastases according to the reference standard, SPECT/CT, 18F-NaF PET/CT, and PET/MRI detected additional lesions compared with pBS in 31%, 63%, and 71%, respectively. Conclusion: 18F-NaF PET/CT and whole-body SPECT/CT resulted in a significant reduction of equivocal readings compared with pBS, which implies improved diagnostic confidence. However, the clinical benefit of using, for example, 18F-NaF PET/CT or PET/MRI as compared with SPECT/CT and pBS in this patient population with a relatively low prevalence of bone metastases (14%) is likely limited. This conclusion is influenced by the low prevalence of patients with osseous metastases. There may well be significant differences in the sensitivity of SPECT/CT, PET/CT, and PET/MRI compared with pBS, but a larger patient population or a patient population with a higher prevalence of bone metastases would have to be studied to demonstrate this. abstract_id: PUBMED:15788608 Comparison of FDG PET and SPECT for detection of bone metastases in breast cancer. Objective: The purpose of our study was to evaluate the efficacy of FDG PET and bone SPECT for diagnosing bone metastases in breast cancer. Subjects And Methods: The study was a prospective series of 15 patients with breast cancer who underwent both PET and bone scanning with SPECT. Comparison was performed on a lesion-by-lesion analysis. MDCT, MRI, and the patient's clinical course were used as references. Results: In the lesion-by-lesion analysis (n = 900), the sensitivity for diagnosing bone metastases was 85% for SPECT and 17% for PET, specificity was 99% for SPECT and 100% for PET, and accuracy was 96% for SPECT and 85% for PET. In the statistical analysis, bone SPECT was significantly superior to FDG PET for its sensitivity (p < 0.0001) and accuracy (p < 0.0001). No statistically significant difference was seen with regard to specificity. When classifying the bone metastases as osteoblastic or osteolytic, bone scanning classified 92% of metastases as osteoblastic and 35% of metastases as osteolytic, whereas PET classified 6% of metastases as osteoblastic and 90% of metastases as osteolytic. Conclusion: Bone SPECT is superior to FDG PET in detecting bone metastases in breast cancer. The sensitivity for osteoblastic lesions is limited with FDG PET. Surveillance of metastatic spread to the skeleton in breast cancer patients based on FDG PET alone is not possible. abstract_id: PUBMED:26450096 Vertebral metastases of urogenital carcinomas: Diagnosis and conservative therapy. The high incidence of bone metastases of urologic neoplasms and their morbidity, especially of vertebral metastases, requires exact diagnosis and consequent therapy. Conventional radiography plays an important role in the diagnosis of symptomatic bone lesions. Computed tomography can evaluate the stability of metastatic lesions and is indispensable for therapy planning. MRI and PET-CT have the highest diagnostic accuracy for the detection of bone metastases and MRI can evaluate their intra- and extraosseous components. PET-CT, PET-MRI, or SPECT-CT in combination with specific tracers - due to their high specificity and sensitivity - have the potential to replace conventional methods in the future.
Conservative treatment basically consists of analgesic therapy, the administration of calcium and vitamin D3 and bisphosphonates or inhibitors of RANKL (denosumab). Moreover radium-223-dichloride can improve overall survival and the time to the first symptomatic skeletal event in castration-resistant prostate cancer patients with bone metastases. Answer: Based on the provided abstracts, bone SPECT does not have lower sensitivity for detecting vertebral metastasis than MRI. In fact, several studies suggest that bone SPECT is either comparable to or better than MRI in certain aspects of detecting vertebral metastasis. For instance, one study found that MRI diagnosed 97.7% of metastatic foci, while bone SPECT correctly diagnosed 92.0% of the same foci, indicating that SPECT's sensitivity is slightly lower than that of MRI for vertebral metastasis detection. However, the same study noted that bone SPECT was superior to MRI in detecting extra-vertebral body metastasis (PUBMED:8683325). A meta-analysis comparing various imaging modalities, including MRI, CT, PET, BS, and BS with SPECT, found that on a per-patient basis, the sensitivity of MRI, PET, and SPECT was significantly higher than that of CT and BS. On a per-lesion basis, PET, SPECT, and MRI had higher sensitivity than BS and CT. The study concluded that MRI was the best modality for diagnosing vertebral metastases, but it did not suggest that SPECT had lower sensitivity than MRI (PUBMED:27752772). Another study highlighted the higher diagnostic value of SPECT over planar skeletal scintigraphy in localizing solitary vertebral lesions in patients with low back pain, suggesting that SPECT can provide more detailed information than planar imaging (PUBMED:21188062). Furthermore, a study on bone scintigraphy and the added value of SPECT in detecting skeletal lesions reported that SPECT showed very good performance, particularly in improving the predictive value of planar scan in the diagnosis of vertebral metastases (PUBMED:11456373). In conclusion, while MRI is considered the best modality for the diagnosis of vertebral metastases, bone SPECT is also a highly sensitive and valuable tool that can complement MRI, especially in detecting extra-vertebral body metastasis. The abstracts do not support the notion that bone SPECT has lower sensitivity than MRI for detecting vertebral metastasis.
Instruction: Can concept sorting provide a reliable, valid and sensitive measure of medical knowledge structure? Abstracts: abstract_id: PUBMED:17072769 Can concept sorting provide a reliable, valid and sensitive measure of medical knowledge structure? Context: Evolution from novice to expert is associated with the development of expert-type knowledge structure. The objectives of this study were to examine reliability and validity of concept sorting (ConSort) as a measure of static knowledge structure and to determine the relationship between concepts in static knowledge structure and concepts used during diagnostic reasoning. Method: ConSort was used to identify static knowledge concepts and analysis of think-aloud protocols was used to identify dynamic knowledge concepts (used during diagnostic reasoning). Intra- and inter-rater reliability, and correlation across cases, were evaluated. Construct validity was evaluated by comparing proportions of nephrologists and students with expert-type knowledge structure. Sensitivity and specificity of static knowledge concepts as a predictor of dynamic knowledge concepts were estimated. Results: Thirteen first-year medical students and 19 nephrologists participated. Intra- and inter-rater agreement for determination of static knowledge concepts were 1.0 and 0.90, respectively. Reliability across cases was 0.45. The proportions of nephrologists and students identified as having expert-type knowledge structure were 82.9% and 55.8%, respectively (p=0.001). Sensitivity and specificity of ConSort((c)) in predicting concepts that were used during diagnostic reasoning were 96.8% and 27.8% for nephrologists and 87.2% and 55.1% for students. Conclusions: ConSort is a reliable, valid and sensitive tool for studying static knowledge structure. The applicability of tools that evaluate static knowledge structure should be explored as an addition to existing tools that evaluate dynamic tasks such as diagnostic reasoning. abstract_id: PUBMED:19811201 Attitudes to patient safety amongst medical students and tutors: Developing a reliable and valid measure. Background: Patient safety education is an increasingly important component of the medical school curricula. Aims: This study reports on the development of a valid and reliable patient safety attitude measure targeted at medical students, which could be used to compare the effectiveness of different forms of patient safety education delivery. Methods: The Attitudes to Patient Safety Questionnaire (APSQ) was developed as a 45-item measure of attitudes towards five patient safety themes. In Study 1, factor analysis conducted on the responses of 420 medical students and tutors, revealed nine interpretable factors. The revised 37-item APSQ-II was then administered to 301 students and their tutors at two further medical schools. Results: Good stability of factor structure was revealed with reliability coefficients ranging from 0.64 to 0.82 for the nine factors. The questionnaire also demonstrated good criterion validity, being able to distinguish between tutors and students across a range of domains. Conclusions: This article reports on the first attempt to develop a valid and reliable measure of patient safety attitudes which can distinguish responses between different groups. The predictive validity of the measure is yet to be assessed. The APSQ could be used to measure patient safety attitudes in other healthcare contexts in addition to evaluating changes in undergraduate curricula. 
abstract_id: PUBMED:19202500 Measuring knowledge structure: reliability of concept mapping assessment in medical education. Purpose: To test the reliability of concept map assessment, which can be used to assess an individual's "knowledge structure," in a medical education setting. Method: In 2004, 52 senior residents (pediatrics and internal medicine) and fourth-year medical students at the University of California-Davis School of Medicine created separate concept maps about two different subject domains (asthma and diabetes) on two separate occasions each (four total maps). Maps were rated using four different scoring systems: structural (S; counting propositions), quality (Q; rating the quality of propositions), importance/quality (I/Q; rating importance and quality of propositions), and a hybrid system (H; combining elements of S with I/Q). The authors used generalizability theory to determine reliability. Results: Learners (universe score) contributed 40% to 44% to total score variation for the Q, I/Q, and H scoring systems, but only 10% for the S scoring system. There was a large learner-occasion-domain interaction effect (19%-23%). Subsequent analysis of each subject domain separately demonstrated a large learner-occasion interaction effect (31%-37%) and determined that administration on four to five occasions was necessary to achieve adequate reliability. Rater variation was uniformly low. Conclusions: The Q, I/Q, and H scoring systems demonstrated similar reliability and were all more reliable than the S system. The findings suggest that training and practice are required to perform the assessment task, and, as administered in this study, four to five testing occasions are required to achieve adequate reliability. Further research should focus on whether alterations in the concept mapping task could allow it to be administered over fewer occasions while maintaining adequate reliability. abstract_id: PUBMED:16848903 Using a conceptual framework during learning attenuates the loss of expert-type knowledge structure. Background: During evolution from novice to expert, knowledge structure develops into an abridged network organized around pathophysiological concepts. The objectives of this study were to examine the change in knowledge structure in medical students in one year and to investigate the association between the use of a conceptual framework (diagnostic scheme) and long-term knowledge structure. Methods: Medical students' knowledge structure of metabolic alkalosis was studied after instruction and one year later using concept-sorting. Knowledge structure was labeled 'expert-type' if students shared &gt;or= 2 concepts with experts and 'novice-type' if they shared &lt; 2 concepts. Conditional logistic regression was used to study the association between short-term knowledge structure, the use of a diagnostic scheme and long-term knowledge structure. Results: Thirty-four medical students completed the concept-sorting task on both occasions. Twenty-four used a diagnostic scheme for metabolic alkalosis. Short-term knowledge structure was not a correlate of long-term knowledge structure, whereas use of a diagnostic scheme was associated with increased odds of expert-type long-term knowledge structure (odds ratio 12.6 [1.4, 116.0], p = 0.02). There was an interaction between short-term knowledge structure and the use of a diagnostic scheme. 
In the group who did not use a diagnostic scheme the number of students changing from expert-type to novice-type was greater than vice versa (p = 0.046). There was no significant change in the group that used the diagnostic scheme (p = 0.6). Conclusion: The use of a diagnostic scheme by students may attenuate the loss of expert-type knowledge structure. abstract_id: PUBMED:36846491 Investigating online learners' knowledge structure patterns by concept maps: A clustering analysis approach. A deep understanding of the learning level of online learners is a critical factor in promoting the success of online learning. Using knowledge structures as a way to understand learning can help analyze online students' learning levels. The study used concept maps and clustering analysis to investigate online learners' knowledge structures in a flipped classroom's online learning environment. Concept maps (n = 359) constructed by 36 students during one semester (11 weeks) through the online learning platform were collected as analysis objects of learners' knowledge structures. Clustering analysis was used to identify online learners' knowledge structure patterns and learner types, and a non-parametric test was used to analyze the differences in learning achievement among learner types. The results showed that (1) there were three online learners' knowledge structure patterns of increasing complexity, namely, spoke, small-network, and large-network patterns. Moreover, online learners with novice status mostly had spoke patterns in the context of flipped classrooms' online learning. (2) Two types of online learners were found to have different distributions of knowledge structure patterns, and the complex knowledge structure type of learners exhibited better learning achievement. The study explored a new way for educators to analyze knowledge structures by data mining automatically. The findings provide evidence in the online learning context for the relationship between complex knowledge structures and better learning achievement while suggesting the existence of inadequate knowledge preparedness for flipped classroom learners without a special instructional design. abstract_id: PUBMED:27649242 A Systematic Review of Athletes' and Coaches' Nutrition Knowledge and Reflections on the Quality of Current Nutrition Knowledge Measures. Context: Nutrition knowledge can influence dietary choices and impact on athletic performance. Valid and reliable measures are needed to assess the nutrition knowledge of athletes and coaches. Objectives: (1) To systematically review the published literature on nutrition knowledge of adult athletes and coaches and (2) to assess the quality of measures used to assess nutrition knowledge. Data Sources: MEDLINE, CINAHL, SPORTDiscuss, Web of Science, and SCOPUS. Study Selection: 36 studies that provided a quantitative measure of nutrition knowledge and described the measurement tool that was used were included. Data Extraction: Participant description, questionnaire description, results (mean correct and responses to individual items), study quality, and questionnaire quality. Data Synthesis: All studies were of neutral quality. Tools used to measure knowledge did not consider health literacy, were outdated with regards to consensus recommendations, and lacked appropriate and adequate validation. The current status of nutrition knowledge in athletes and coaches is difficult to ascertain. 
Gaps in knowledge also remain unclear, but it is likely that energy density, the need for supplementation, and the role of protein are frequently misunderstood. Conclusions: Previous reports of nutrition knowledge need to be interpreted with caution. A new, universal, up-to-date, validated measure of general and sports nutrition knowledge is required to allow for assessment of nutrition knowledge. abstract_id: PUBMED:36524171 Influence of multi-dimensional environmental knowledge on residents' waste sorting intention: Moderating effect of environmental concern. With the rapid increase in household waste, environmental degradation becomes more serious. It is imperative to promote waste sorting in China. This study proposes an extended KAB model to explore the impact mechanism of different dimensions of subjective environmental knowledge on urban residents' waste sorting intention. The study also explores the moderating role of environmental concern in the relationship between three types of subjective environmental knowledge and attitude toward waste sorting. Based on 308 valid questionnaires, through structural equation model, multiple regression analysis, and simple slope test, we found that system knowledge, action-related knowledge, and effectiveness knowledge all have positive impacts on residents' attitudes toward waste sorting, and effectiveness knowledge has the most significant impact. Meanwhile, environmental concern positively moderates the relationship between system knowledge, effectiveness knowledge, and attitude toward waste sorting. This study makes an important theoretical contribution to enrich the existing literature on residents' waste sorting behavior and provides theoretical insights for governmental waste sorting policy formulation at the practical level. abstract_id: PUBMED:12877751 The design of a valid and reliable questionnaire to measure osteoporosis knowledge in women: the Osteoporosis Knowledge Assessment Tool (OKAT). Background: Osteoporosis knowledge is an important contributor to improving exercise and calcium intake behaviour. However, there are few validated instruments for measuring osteoporosis knowledge levels. The aim of this study was to design a valid and reliable instrument to measure osteoporosis knowledge in Australian women. Methods: A 20 item instrument with true, false and don't know responses was drafted, based on the Osteoporosis Australia Osteoporosis Prevention and Self-management course and the information leaflet "Understanding Osteoporosis". The scoring range was 1 to 20. This was administered to a 467 randomly-selected, healthy women aged 25-44 years. Questionnaire performance was assessed by Flesch reading ease, index of difficulty, Ferguson's sigma, inter-item and item-total correlations, Cronbach's alpha and principal component factor analysis. Results: Flesch reading ease was higher than desirable at 45, but this was due to the use of the word osteoporosis in many items. Of the individual items 17 had an index of difficulty less than 0.75. The questionnaire had a Ferguson's sigma of 0.96, a Cronbach's alpha of 0.70 and factor analysis consistent with only one factor (osteoporosis knowledge) being measured. Levels of osteoporosis knowledge were low with a mean score of 8.8 out of 20 which suggests the OKAT may be sensitive to change. Conclusions: The OKAT for measuring osteoporosis knowledge has good psychometric properties in Australian 25-44 year old females. 
While it should be applicable to other Caucasian populations, this will require confirmation by further research. abstract_id: PUBMED:28681636 Group concept mapping: An approach to explore group knowledge organization and collaborative learning in senior medical students. Introduction: Group concept mapping may be used as a learning strategy that can potentially foster collaborative learning and assist instructors to assess the development of knowledge organization in medical students. Methods: Group concept maps were created by 39 medical students rotating through a fourth year medicine rotation. The group maps were developed based on a clinical vignette. Descriptive statistics and thematic analysis of students' evaluations were performed. Results: Evaluations indicated that students enjoyed the collaborative nature of the exercise and the knowledge sharing activities associated with it. Group maps can demonstrate different knowledge organization Discussion: Group concept mapping can be used to explore students' organization and integration of knowledge structures in a collaborative setting. Additional research should focus on how group mapping and learning progresses over time and, whether group mapping can help identify curricular strengths and needs. abstract_id: PUBMED:19107567 Qualitative differences in knowledge structure are associated with diagnostic performance in medical students. Diagnosing is a knowledge-based skill: to diagnose one must retrieve knowledge from long-term memory and then apply this to a new clinical problem. Prior research on expertise found differences in knowledge structure between experts and novices, and it is assumed that the superior diagnostic performance of experts is somehow related to their superior knowledge structure. Here our objective was to study knowledge structure in final year medical students and to examine the association between knowledge structure and diagnostic performance. Ninety-one students participated. We used concept sorting to assess knowledge structures for four clinical problems. We performed qualitative analysis of knowledge structures, categorizing these as either problem-specific, where knowledge was predominantly structured around concepts specific to that clinical problem, or generic, where knowledge was structured around general concepts that could apply to all clinical problems. We evaluated diagnostic performance using problem-solving questions. Knowledge structure varied between different problems, but for each problem most students had problem-specific knowledge structure. These students had better diagnostic performance than those with generic structure (68.5 vs. 55.3%, d = 0.45, P = 0.004). This difference persisted after adjusting for overall medical knowledge (performance on the Medical Council of Canada Part 1 examination) and clinical problem. We found that most students organize their knowledge around problem-specific concepts, and that this type of knowledge was associated with better diagnostic performance. This may be due to easier knowledge retrieval if there is congruence between how knowledge is stored and how it is applied when diagnosing. Answer: Yes, concept sorting can provide a reliable, valid, and sensitive measure of medical knowledge structure. The study described in abstract PUBMED:17072769 found that concept sorting (ConSort) is a reliable tool, with intra- and inter-rater agreement for determination of static knowledge concepts being 1.0 and 0.90, respectively. 
The reliability across cases was 0.45. The study also established the validity of ConSort by showing that a higher proportion of nephrologists (82.9%) compared to medical students (55.8%) were identified as having an expert-type knowledge structure, which is consistent with expectations based on their levels of experience (p=0.001). Furthermore, the sensitivity of ConSort in predicting concepts used during diagnostic reasoning was high for both nephrologists (96.8%) and students (87.2%), with specificities of 27.8% and 55.1%, respectively. These findings suggest that ConSort is not only reliable and valid but also sensitive for studying static knowledge structure in the context of medical education and practice.
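Several of the instruments in this item (the OKAT and ConSort among them) are summarised through internal consistency (Cronbach's alpha) and through sensitivity/specificity against a reference classification. The Python sketch below is a minimal illustration of how those two summaries are computed; the item-response matrix and the 2x2 classification counts are invented for the example and are not taken from any of the cited studies.

```python
# Minimal sketch of the two psychometric summaries quoted above: Cronbach's
# alpha for internal consistency, and sensitivity/specificity of a screen
# against a reference diagnosis. All numbers are invented for illustration.

def cronbach_alpha(responses):
    """responses: one list of item scores per respondent (equal lengths)."""
    k = len(responses[0])                       # number of items
    def var(xs):                                # sample variance (n - 1)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([r[i] for r in responses]) for i in range(k)]
    total_var = var([sum(r) for r in responses])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

def sensitivity_specificity(tp, fn, tn, fp):
    """Screen-positive above a cut-off vs. interview diagnosis, from a 2x2 table."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical 5-item binary questionnaire answered by 6 respondents.
responses = [
    [1, 1, 0, 1, 1],
    [0, 1, 0, 0, 1],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 0, 1],
    [1, 0, 1, 1, 0],
    [0, 0, 0, 1, 0],
]
print("Cronbach's alpha:", round(cronbach_alpha(responses), 2))

sens, spec = sensitivity_specificity(tp=48, fn=12, tn=70, fp=20)
print("sensitivity:", round(sens, 2), "specificity:", round(spec, 2))
```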
Instruction: Do stand-alone interbody spacers with integrated screws provide adequate segmental stability for multilevel cervical arthrodesis? Abstracts: abstract_id: PUBMED:24462812 Do stand-alone interbody spacers with integrated screws provide adequate segmental stability for multilevel cervical arthrodesis? Background Context: Some postoperative complications after anterior cervical fusions have been attributed to anterior cervical plate (ACP) profiles and the necessary wide operative exposure for their insertion. Consequently, low-profile stand-alone interbody spacers with integrated screws (SIS) have been developed. Although SIS constructs have demonstrated similar biomechanical stability to the ACP in single-level fusions, their role as a stand-alone device in multilevel reconstructions has not been thoroughly evaluated. Purpose: To evaluate the acute segmental stability afforded by an SIS device compared with the traditional ACP in the setting of a multilevel cervical arthrodesis. Study Design: In vitro human cadaveric biomechanical analysis. Methods: Thirteen human cadaveric cervical spines (C2-T1) were nondestructively tested with a custom 6 df spine simulator under axial rotation, flexion-extension, and lateral bending loading. After intact analysis, eight single-levels (C4-C5/C6-C7) from four specimens were instrumented and tested with ACP and SIS. Nine specimens were tested with C5-C7 SIS, C5-C7 ACP, C4-C7 ACP, C4-C7 ACP+posterior fixation, C4-C7 SIS, and C4-C7 SIS+posterior fixation. Testing order was randomized with each additional level instrumented. Full range of motion (ROM) data were obtained and analyzed by each loading modality, using mean comparisons with repeated measures analysis of variance. Paired t tests were used for post hoc analysis with Sidak correction for multiple comparisons. Results: No significant difference in ROM was noted between the ACP and SIS for single-level fixation (p>.05). For multisegment reconstructions (two and three levels), the ACP proved superior to SIS and intact condition, with significantly lower ROM in all planes (p<.05). When either the three-level SIS or ACP constructs were supplemented with posterior lateral mass fixation, there was a greater than 80% reduction in ROM under all testing modalities (p<.05), with no significant difference between the ACP and SIS constructs (p>.05). Conclusions: The SIS device may be a reasonable option as a stand-alone device for single-level fixation. However, SIS devices should be used with careful consideration in the setting of multilevel cervical fusion. However, when supplemented with posterior fixation, SIS devices are a sound biomechanical alternative to ACP for multilevel fusion constructs. abstract_id: PUBMED:37401066 Biomechanical Comparison of Multilevel Stand-Alone Lumbar Lateral Interbody Fusion With Posterior Pedicle Screws: An In Vitro Study. Objective: Lumbar lateral interbody fusion (LLIF) allows placement of large interbody cages while preserving ligamentous structures important for stability. Multiple clinical and biomechanical studies have demonstrated the feasibility of stand-alone LLIF in single-level fusion. We sought to compare the stability of 4-level stand-alone LLIF utilizing wide (26 mm) cages with bilateral pedicle screw and rod fixation. Methods: Eight human cadaveric specimens of L1-5 were included. Specimens were attached to a universal testing machine (MTS 30/G).
Flexion, extension, and lateral bending were attained by applying a 200 N load at a rate of 2 mm/sec. Axial rotation of ± 8° of the specimen was performed at 2°/sec. Three-dimensional specimen motion was recorded using an optical motion-tracking device. Specimens were tested in 4 conditions: (1) intact, (2) bilateral pedicle screws and rods, (3) 26-mm stand-alone LLIF, (4) 26-mm LLIF with bilateral pedicle screws and rods. Results: Compared to the stand-alone LLIF, bilateral pedicle screws and rods had 47% less range of motion in flexion-extension (p < 0.001), 21% less in lateral bending (p < 0.05), and 20% less in axial rotation (p = 0.1). The addition of bilateral posterior instrumentation to the stand-alone LLIF resulted in decreases of all 3 planes of motion: 61% in flexion-extension (p < 0.001), 57% in lateral bending (p < 0.001), 22% in axial rotation (p = 0.002). Conclusion: Despite the biomechanical advantages associated with the lateral approach and 26 mm wide cages, stand-alone LLIF for 4-level fusion is not equivalent to pedicle screws and rods. abstract_id: PUBMED:28989848 Biomechanical Stability of a Stand-Alone Interbody Spacer in Two-Level and Hybrid Cervical Fusion Constructs. Study Design: In vitro human cadaveric biomechanical analysis. Objective: To evaluate the segmental stability of a stand-alone spacer (SAS) device compared with the traditional anterior cervical plate (ACP) construct in the setting of a 2-level cervical fusion construct or as a hybrid construct adjacent to a previous 1-level ACP construct. Methods: Twelve human cadaveric cervical spines (C2-T1) were nondestructively tested with a custom 6-degree-of-freedom spine simulator under axial rotation (AR), flexion-extension (FE), and lateral bending (LB) at 1.5 N m loads. After intact analysis, each specimen underwent instrumentation and testing in the following 3 configurations, with each specimen randomized to the order of construct: (A) C5-7 SAS; (B) C5-6 ACP, and C6-7 SAS (hybrid); (C) C5-7 ACP. Full range of motion (ROM) data at C5-C7 was obtained and analyzed by each loading modality utilizing mean comparisons with repeated measures analysis of variance with Sidak correction for multiple comparisons. Results: Compared with the intact specimen, all tested constructs had significantly increased segmental stability at C5-C7 in AR and FE ROM, with no difference in LB ROM. At C5-C6, all test constructs again had increased segmental stability in FE ROM compared with intact (10.9° ± 4.4° Intact vs SAS 6.6° ± 3.2°, P < .001; vs Hybrid 2.9° ± 2.0°, P = .005; vs ACP 2.1° ± 1.4°, P < .001), but had no difference in AR and LB ROM. Analysis of C6-C7 ROM demonstrated all test groups had significantly greater segmental stability in FE ROM compared with intact (9.6° ± 2.7° Intact vs SAS 5.0° ± 3.0°, P = .018; vs Hybrid 5.0° ± 2.7°, P = .018; vs ACP 4.4° ± 5.2°, P = .005). Only the hybrid and 2-level ACP constructs had increased stability at C6-C7 in AR ROM compared with intact, with no difference for all test groups in LB ROM. Comparison between test constructs demonstrated no difference in C5-C7 and C6-C7 segmental stability in all planes of motion. However, at C5-C6 comparison between test constructs found the 2-level SAS had significantly less segmental stability compared to the hybrid (6.6° ± 3.2° vs 2.9° ± 2.0°, P = .025) and ACP (6.6° ± 3.2° vs 2.1° ± 1.4°, P = .004).
Conclusions: Our study found the currently tested SAS device may be a reasonable option as part of a 2-level hybrid construct, when used below an adjacent 1-level ACP, but should be used with careful consideration as a 2-level SAS construct. Consequences of decreased segmental stability in FE are unknown; however, optimal immediate fixation stability is an important surgical principle to avoid loss of fixation, segmental kyphosis, interbody graft subsidence, and pseudarthrosis. abstract_id: PUBMED:24948039 Biomechanical analysis of an interbody cage with three integrated cancellous lag screws in a two-level cervical spine fusion construct: an in vitro study. Background Context: Despite an increase in the clinical use of no-profile anchored interbody cages (AIC) for anterior cervical discectomy and fusion (ACDF) procedures, there is little published biomechanical data describing its stabilizing effect relative to the traditional anterior plating technique over two contiguous levels. Purpose: To biomechanically compare the acute stability conferred by a stand-alone interbody fusion device with three integrated fixation screws ("anchored cage") with a traditional six-hole rigid anterior plate in a two contiguous levels (C4-C5+C5-C6) fusion construct. We hypothesized that the anchored cage would confer comparable segmental rigidity to the cage and anterior plate construct. Study Design: A biomechanical laboratory study using cadaveric human cervical spines. Methods: Seven (n=7) cadaveric human cervical spines (C3-C7) were subjected to quasistatic, pure-moment loading (±1.5 Nm) in flexion-extension (flex/ext), right/left lateral bending (RB/LB), and right/left axial rotation (RR/LR) for the following test conditions: intact; after discectomy and insertion of the AIC at C4-C5 and C5-C6 with anchoring screws engaged; after the removal of the integrated anchoring screws and instrumentation of an anterior locking plate (ALP) over both levels; and cage-only (CO) configuration with screws and anterior plate removed. Intervertebral range of motion (ROM) at the instrumented levels was the primary biomechanical outcome. Results: Flex/ext, RB/LB, and RR/LR ROMs were significantly reduced (p<.001) over both levels by AIC and ALP constructs relative to the CO construct. Significant reduction in flex/ext motion was achieved with the ALP (6.8±3.7) relative to the AIC (10.2°±4.6°) (p=.041) construct. No significant differences were seen in ROM reductions over the two levels between the AIC and ALP groups in lateral bending or axial rotation (p>.826). Conclusions: The anchored cage fusion construct conferred similar acute biomechanical stability in lateral bending and axial rotation ROMs relative to rigid anterior plating. We identified a statistically significant reduction (Δ=3.4°, combined over two levels) in sagittal plane ROM conferred by the ALP relative to the AIC construct. Our biomechanical findings may support the clinical use of no-profile integrated interbody devices over two contiguous levels in ACDF. abstract_id: PUBMED:11979162 In vitro stability of FRA spacers with integrated crossed screws for anterior lumbar interbody fusion. Study Design: Mechanical testing of human lumbar functional spine units was carried out after instrumenting the disc space with femoral ring allografts (FRAs) with and without integrated crossed anterior screws applied into the adjacent bodies.
Objectives: To assess the stability of FRA construct with and without the integrated crossed anterior screws and to compare that with the intact specimen. Summary Of Background Data: Most modern methods of achieving anterior lumbar intervertebral fusion rely on the use of interbody spacers to restore and maintain intervertebral height, overall alignment, and stability while facilitating arthrodesis. The FRAs have the advantage of biologic compatibility but may not have enough stability when used as stand-alone devices. FRA spacers alone are less stiff in torsion and extension compared with other instrumented constructs. Increased motion could lead to higher failure rates because of graft migration and pseudarthrosis. This makes the use of supplementary anterior or posterior fixation necessary. The current authors hypothesized that the addition of anterior integrated crossed screws applied through the FRA spacer into the adjacent vertebral bodies would increase the stability of the FRA spacer in extension and torsion in the absence of further posterior instrumentation. Methods: Seven fresh-frozen human cadaveric lumbar spine functional spine units were tested applying cantilever bending moments in flexion, lateral bending, torsion, and extension. The specimens were tested in the following sequence: intact, with FRA spacer alone and with FRA spacer and integrated crossed screws. The stiffness of each construct was then compared with the intact specimen. Specimens with obvious deformity on radiographs or dual-energy radiograph absorptiometry t score values < -1.2 were excluded. Results: The addition of the integrated crossed metal screw system improved the stiffness of the construct by 53% over the intact (P = 0.02) and by 31% over the FRA alone in extension (P = 0.01), whereas it improved the stiffness by 40% over the intact (P = 0.03) and by 18% over the FRA alone in torsion (P = 0.21). The crossed screw system did not improve the stiffness compared with intact in either flexion or lateral bending modes. Conclusions: Although there is a trend toward improved stability of the FRA spacers in torsion with the addition of the metal screws, this is not statistically significant. The integrated crossed anterior metal screw system significantly improves the stability of the FRA spacers in extension when used for anterior lumbar interbody fusion. abstract_id: PUBMED:28676461 Cervical Burst Fracture in a Patient with Contiguous 2-Level Cervical Stand-Alone Cages. Background: Cervical stand-alone interbody cages have seen increasingly wider use over the plate-and-screw construct in single-level anterior cervical discectomy and fusion (ACDF) in the treatment of cervical disc herniation and degenerative spondylotic conditions. Despite positive clinical outcomes, the efficacy and safety of using these devices in contiguous multilevel ACDF has remained controversial. This report discusses a burst fracture seen as a complication in multilevel cervical stand-alone cage use. Case Description: A 39-year-old woman with a history of C5-C6 and C6-C7 ACDF with contiguous stand-alone interbody cages 2 years prior, presented with recurrent arm and neck pain with myelopathy. Computed tomography scan revealed a burst fracture of the C6 vertebral body with retropulsion of fragments compressing the spinal cord.
Conclusions: This case suggests that use of cervical stand-alone cages in contiguous levels may cause late complications despite good instrumentation and illustrates the need for more careful consideration of technique selection in multilevel ACDF. abstract_id: PUBMED:28960818 Biomechanical Stability Analysis of a Stand-alone Cage, Static and Rotational-dynamic Plate in a Two-level Cervical Fusion Construct. Objective: To test the following hypotheses: (i) anterior cervical discectomy and fusion (ACDF) using stand-alone interbody spacers will significantly reduce the range of motion from intact spine; and (ii) the use of a static or a rotational-dynamic plate will significantly augment the stability of stand-alone interbody spacers, with similar beneficial effect when compared to each other. Methods: Eleven human cadaveric subaxial cervical spines (age: 48.2 ± 5.4 years) were tested under the following sequence: (i) intact spine; (ii) ACDF at C4-C5 using a stand-alone interbody spacer; (iii) ACDF at C5-C6 and insertion of an interbody spacer (two-level construct); and (iv) randomized placement of either a two-level locking static plate or a rotational-dynamic plate. Results: Insertion of stand-alone cage at C4-C5 and C5-C6 caused a significant decrease in the range of motion compared to intact spine (P < 0.05). Placement of both the locking and the rotational dynamic plate further reduced the range of motion at C4-C5 and C5-C6 compared to stand-alone cage (P < 0.01). No significant differences in range of motion restriction at either C4-C5 or C5-C6 were found when the two plating systems were compared (P > 0.05). Conclusions: Cervical stand-alone interbody spacers caused significant restriction in the range of motion. Both plates significantly augmented the stability of stand-alone interbody spacers, with similar stabilizing effect. abstract_id: PUBMED:27123025 Stand-Alone Cages for Anterior Cervical Fusion: Are There No Problems? Objective: There are complications in stand-alone cage assisted anterior cervical discectomy and fusion (ACDF), such as cage subsidence and kyphosis. Here we report our clinical result on ACDF, comparing with stand-alone cages and with cervical plate system for degenerative cervical spine diseases. Methods: Patients with degenerative cervical disease who were diagnosed and treated in Konyang University Hospital between January 2004 and December 2014 were included in this study. Patients who had operation in single level ACDF were selected. Patients scored the degree of pain using visual analog scale before and after the surgery. Subsidence was defined as ≥3-mm decrease of the segmental height, and cervical kyphosis was defined as progression of ≥5° at 12 months after postoperative follow-up compared to that measured at the immediate postoperative period. Results: A total of 81 patients were enrolled for this study. Forty-five patients were included in a cervical plate group and the others were in stand-alone cage group. There was no statistical difference in pain score between the 2 groups. Segmental subsidence was observed in 7 patients (15.6%) in plate-assisted cervical fusion group, and 13 patients (36.1%) in stand-alone cage group. Segmental kyphosis was observed in 4 patients (8.9%) in plate-assisted cervical fusion group, and 10 patients (27.8%) in stand-alone cage group. There was statistical difference between the 2 groups. Conclusion: There was no difference in pain between 2 groups.
But the stand-alone cage group showed a higher incidence rate than the plate-assisted cervical fusion group in segmental subsidence and cervical kyphosis. When designing cervical fusion, more attention should be given to selecting the surgical technique. abstract_id: PUBMED:35940637 Evaluation of Anterior Lumbar Interbody Fusion Performed Using a Stand-Alone, Integrated Fusion Cage. Background: Anterior lumbar interbody fusion (ALIF) has been performed for many years. Often, posterior supplemental fixation has been used to provide additional stability to the operated segment. Interbody implants have evolved to incorporate unique designs, polyetheretherketone, integrated screws, and surface texture. With these changes, the need for supplemental posterior fixation has been debated. The purpose of this study was to evaluate the clinical outcome of stand-alone ALIF. Methods: A surgery log was reviewed to identify the consecutive series of 58 patients undergoing ALIF using a STALIF stand-alone cage from March 2011 (first case) to December 2018 (minimum 24 months postoperative) with a mean follow-up of 30.6 months. All patients were treated for symptomatic degenerative conditions. Charts were reviewed to collect general patient information, operative data, and patient-reported outcomes, including the Oswestry Disability Index (ODI), visual analog scales (VAS) separately assessing back pain and leg pain, and re-operations. For patients who were not seen recently in clinic for follow-up, current outcome data were collected through mailings. Results: The mean operative blood loss was 52.1 mL. There was a statistically significant improvement in mean ODI scores from 41.7 preoperatively to 21.0 at follow-up (P < 0.01). There was also significant improvement (P < 0.01) in VAS back pain (6.0-2.5) and leg pain (4.1-1.3). Subsequent surgery was performed on 9 patients. Reasons for re-operation were pseudoarthrosis (n = 3), progressive cage subsidence (n = 1), foraminal stenosis at the index level (n = 1), metal allergy reaction (n = 2), adjacent segment degeneration (n = 1), and ongoing pain (n = 1). There were no cases of device failure, vertebral body fracture, or screws backing out of the implant. Discussion: Stand-alone ALIF was associated with statistically significant improvements in ODI scores, back pain, and leg pain. The re-operation rate for clear pseudoarthrosis or cage subsidence was 6.8%. These results support that stand-alone ALIF produces good outcomes in patients treated for symptomatic disc degeneration while avoiding the use of posterior fixation and its complication risk and cost. Clinical Relevance: The results of this study support that stand-alone ALIF is a viable procedure for the treatment of symptomatic disc degeneration unresponsive in patients who have failed nonoperative care and who do not have specific indications for supplemental posterior instrumentation. Level Of Evidence: 4. abstract_id: PUBMED:28067466 Anterior Lumbar Interbody Fusion Integrated Screw Cages: Intrinsic Load Generation, Subsidence, and Torsional Stability. Objective: To perform a repeatable idealized in vitro model to evaluate the effects of key design features and integrated screw fixation on unloaded surface engagement, subsidence, and torsional stability. Methods: We evaluated four different stand-alone anterior lumbar interbody fusion (ALIF) cages with two, three, and four screw designs. Polyurethane (saw-bone) foam blocks were used to simulate the vertebral bone.
Fuji Film was used to measure the contact footprint, average pressure, and load generated by fixating the cages with screws. Subsidence was tested by axially loading the constructs at 10 N/s to 400 N and torsional load was applied ±1 Nm for 10 cycles to assess stability. Outcome measures included total subsidence and maximal torsional angle range. Results: Cages 1, 2, and 4 were symmetrical and produced similar results in terms of contact footprint, average pressure, and load. The addition of integrated screws into the cage-bone block construct demonstrated a clear trend towards decreased subsidence. Cage 2 with surface titanium angled ridges and a keel produced the greatest subsidence with and without screws, significantly more than all other cages (P < 0.05). Angular rotation was not significantly affected by the addition of screws (P < 0.066). A statistically significant correlation existed between subsidence and reduced angular rotation across all cage constructs (P = 0.018). Conclusion: Each stand-alone cage featured unique surface characteristics, which resulted in differing cage-foam interface engagement, influencing the subsidence and torsional angle. Increased subsidence significantly reduced the torsional angle across all cage constructs. Answer: Stand-alone interbody spacers with integrated screws (SIS) have been developed as an alternative to anterior cervical plates (ACP) to reduce postoperative complications and the need for wide operative exposure. While SIS constructs have shown similar biomechanical stability to ACP in single-level fusions, their efficacy in multilevel cervical arthrodesis is less clear. In a study comparing the acute segmental stability of SIS devices with traditional ACP in multilevel cervical arthrodesis, it was found that for single-level fixation, there was no significant difference in range of motion (ROM) between ACP and SIS. However, for multisegment reconstructions (two and three levels), ACP provided superior stability compared to SIS, with significantly lower ROM in all planes. When supplemented with posterior lateral mass fixation, both three-level SIS and ACP constructs showed a reduction in ROM under all testing modalities, with no significant difference between the two constructs (PUBMED:24462812).
They appear to offer less stability compared to ACP in multilevel constructs, and when used in two-level constructs, they may be more suitable as part of a hybrid construct rather than as a stand-alone solution.
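The biomechanical abstracts in this item repeatedly report range-of-motion (ROM) comparisons analysed with repeated-measures ANOVA or paired t tests under a Sidak correction for multiple comparisons. The sketch below illustrates that analysis pattern only; the per-specimen ROM values (degrees) for a plated versus a stand-alone construct are invented, and scipy is assumed to be available.

```python
# Sketch of the comparison pattern described above: paired t tests on
# per-specimen range of motion (ROM) for two constructs, evaluated against a
# Sidak-adjusted alpha for the three planes of motion. ROM values (degrees)
# are invented for illustration.
from scipy.stats import ttest_rel

alpha = 0.05
planes = {
    "flexion-extension": ([8.1, 7.4, 9.0, 6.8, 7.9], [10.2, 9.5, 11.1, 9.0, 10.4]),
    "lateral bending":   ([6.0, 5.2, 6.8, 5.5, 6.1], [6.3, 5.6, 7.0, 5.9, 6.5]),
    "axial rotation":    ([4.9, 4.4, 5.3, 4.6, 5.0], [5.1, 4.8, 5.6, 4.7, 5.2]),
}
m = len(planes)
sidak_alpha = 1 - (1 - alpha) ** (1 / m)        # per-comparison threshold
print(f"Sidak-adjusted alpha for {m} comparisons: {sidak_alpha:.4f}")

for plane, (plated, stand_alone) in planes.items():
    t, p = ttest_rel(plated, stand_alone)       # paired within each specimen
    verdict = "significant" if p < sidak_alpha else "not significant"
    print(f"{plane}: t = {t:.2f}, p = {p:.4f} -> {verdict}")
```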
Instruction: Common mental disorder symptom counts in populations: are there distinct case groups above epidemiological cut-offs? Abstracts: abstract_id: PUBMED:12420889 Common mental disorder symptom counts in populations: are there distinct case groups above epidemiological cut-offs? Background: At the lower end of IQ distributions in general populations, there is a clear excess of cases, representing the distinct pathology of severe learning disability. This study aimed to establish whether such a subpopulation exists in distributions of common mental disorder and depression symptom scores, above epidemiological 'case' cut-offs. Method: Data from 9556 non-psychotic respondents to the 1993 OPCS (Office of Population Censuses and Surveys) National Household Psychiatric Morbidity Survey were analysed. The distribution of total neurotic symptom and depression scores from the revised Clinical Interview Schedule were examined. Automated least squares methods were used to fit the best single statistical distribution to the data. Results: A single exponential curve provided the best fit for the whole population, but floor effects produced deviations at symptom counts of 0-3. After truncation, exponential distributions fitted excellently. Proportions of the population above conventional cut-offs of ≥ 12 symptoms differed by < 12% from expected for a range of low and high prevalence groups. The single exponential model also fitted the depression score. Conclusions: Symptom counts for the common mental disorders fall within single population distributions, with little apparent numerical excess in the case range. High and low prevalences of these disorders appear to be population characteristics, with shifts in exponential means predicting proportions above case cut-offs. abstract_id: PUBMED:38156726 Somatic symptom severity, depression and anxiety associations with pancreatitis and undifferentiated abdominal pain in surgical inpatients. Background: Somatic Symptom Disorder is a psychiatric diagnosis that describes the experience of physical symptoms and associated distress that is disproportionate to recognized organic pathology. Somatic symptom severity (SSS) may be associated with some surgical diagnoses; particularly the complex pain associated with pancreatitis, or the diagnostic ambiguity of undifferentiated abdominal pain (UAP). We aimed to estimate the prevalence of SSS in different diagnostic groups in surgical inpatients with abdominal pain; and to estimate the magnitude and direction of any association of SSS, anxiety and depression. Methods: Cross sectional analysis (n = 465) of adult admissions with non-traumatic abdominal pain, at a tertiary hospital in Australia. We estimated SSS with the Patient Health Questionnaire-15 (PHQ-15), depression with the Patient Health Questionnaire (PHQ-9) and anxiety with the General Anxiety Disorder (GAD-7), at standard cut-points ≥ 10; comparing acute pancreatitis (n = 20), chronic pancreatitis (n = 18) and UAP (n = 64) versus other causes of abdominal pain. Results: Somatic symptoms were common, 52% having moderate and 19.6% severe SSS. There was an association between moderate SSS and pancreatitis (OR 2.11, 95% CI 1.05-4.25) and depressive symptoms and chronic pancreatitis (OR = 3.47, 95% CI 1.31-9.24). There was no significant association between the four mental health categories and UAP.
Conclusions: SSS and psychological comorbidity were common in surgical inpatients admitted for abdominal pain and equally represented across most diagnostic sub-groups. However, the pancreatitis sub-group had greater proportions with clinically significant SSS and depression, suggesting that they have a higher requirement for psychological assessment and intervention. abstract_id: PUBMED:31258106 Association of PTSD symptom groups with functional impairment and distress in trauma-exposed disaster survivors. Background: In the nearly quarter of a century since the addition of the clinically significant distress/impairment criterion to the definition of PTSD in DSM-IV, little research has been done to examine the association of this criterion with symptom group criteria and with the numbing subgroup specifically. This study was conducted to examine these relationships in a large database of disaster survivors consistently studied across 12 different incidents of the full range of disaster typology. Methods: Analysis was conducted on a merged database representing 1187 trauma-exposed survivors of 12 different disasters studied systematically. DSM-IV-TR criteria for disaster-related PTSD were assessed with the Diagnostic Interview Schedule. Results: PTSD Group C (avoidance/numbing) and numbing specifically were less common and more associated than other symptom groups with criterion F (distress/impairment). Consistently in multivariable models, group C and numbing were independently associated with criterion F. Group D (hyperarousal) was less strongly associated with criterion F. Neither group B (intrusion) nor avoidance were associated with criterion F. Conclusions: In this and other studies, group C and numbing specifically have been shown to be associated with criterion F, which is consistent with the demonstration that group C and the numbing component specifically are central to the psychopathology of PTSD. The addition of the distress/impairment requirement broadly across the psychiatric diagnoses in DSM-IV added little value to PTSD symptom criteria. Future revisions of diagnostic criteria may benefit by carefully considering these findings to possibly re-include a prominent numbing symptom section. abstract_id: PUBMED:9242841 The Shona Symptom Questionnaire: the development of an indigenous measure of common mental disorders in Harare. The objective of this study was to develop an indigenous measure of common mental disorders (CMD) in the Shona language in Zimbabwe. Ethnographic and qualitative studies elicited idioms of distress of mental disorder leading to the preliminary Shona Symptom Questionnaire (SSQ), and items from the Self-Report Questionnaire (SRQ) were added to it. The 56-item Preliminary Shona Symptom Questionnaire (PSSQ) was administered to 302 randomly selected primary care attenders, of whom 100 were classified as cases on the basis of agreement between care provider assessment and a psychiatric interview. Discriminant analysis identified 14 items that were the strongest predictors of mental disorder. The 14-item SSQ has a high level of internal consistency (Cronbach's alpha = 0.85). The items are a mixture of emic and etic phenomena. The total score correlates strongly with patients' self-assessment of the emotional nature of their illness. Satisfactory specificity and sensitivity occurred at a cut-off point of 7/8.
The SSQ is the first indigenous measure of mental disorder developed in sub-Saharan Africa to have included idioms of distress of primary care attenders and involved patients consulting traditional medical practitioners. It promises to be a useful instrument for epidemiological and clinical research. The methodology used is an innovative way of combining etic and emic methods in the evaluation of CMD. abstract_id: PUBMED:15106055 Symptom Checklist SCL-27. The Symptom-Checklist-27 (SCL-27) forms a modification of the widely used Symptom-Checklist-90-R, with the latter demonstrating considerable shortcomings in psychometric properties. The SCL-27 is designed to screen for psychiatric symptoms in patients presenting somatic complaints. It contains the six subscales depressive, dysthymic, vegetative, agoraphobic, sociophobic symptoms and symptoms of mistrust. Additionally, a global severity index (GSI-27), similar to the GSI in the SCL-90-R is available. The subscales are short, the number of items varies between four and six. Psychometric properties and reference values based on a representative German sample of more than 2000 subjects are presented for the various subscales. All subscales show sufficient internal consistency with Cronbach's alpha ≥ 0.70 and for the GSI-27 alpha is 0.93. The correlation between GSI-27 and GSI is r = 0.93. Sex-specific cut-offs are presented to identify those patients who need further examination. abstract_id: PUBMED:19318883 Simple identification of complex ADHD subtypes using current symptom counts. Objective: New attention-deficit/hyperactivity disorder (ADHD) subtypes identified through latent class analysis have been recently proposed. Here, we assess the accuracy of simple rules based on symptom counts for the assignment of youths to clinically relevant population-derived ADHD subtypes: severe inattentive (SI) and severe combined (SC). Method: Data from 9,675 twins and siblings from Missouri and Australia aged 7 to 19 years were analyzed using continuous and categorical models of ADHD symptoms using principal components analysis and subtyping by DSM-IV and by latent class criteria. Cut points were derived for classifying SI and SC subtypes by positive predictive value, negative predictive value, percent positive agreement, and Matthew coefficient of agreement. Results: Principal components analysis suggested two underlying factors: total number of symptoms and symptom type, with SI and SC latent class subtypes clearly mapping to distinct areas on a plot of these factors. Having six or more total symptoms and fewer than three hyperactive-impulsive symptoms accurately predicts the latent class SI subtype. The latent class SC subtype was best identified by 11 or more total symptoms and 4 or more hyperactive-impulsive. The DSM-IV ADHD subtype criteria accurately identified the SC subtype but only poorly for the SI subtype. Conclusions: Symptom counts criteria allow the simple and accurate identification of subjects with severe ADHD subtypes defined by latent class analysis. Such simple symptom counts corresponding to screening cut points selected latent class-derived SI subtype subjects with greater precision than DSM-IV criteria. abstract_id: PUBMED:37934963 Screening for depression in the Serbian general population sample: an alternative to the traditional patient health questionnaire-9 cut-off score.
Background: The Patient Health Questionnaire (PHQ-9) score ≥ 10 balances best sensitivity and specificity when detecting probable depression in patients. In the general population, different cut-offs are suggested. European studies on general populations validating the PHQ-9 against a diagnostic interview to detect depression are rare. Methods: This was a cross-sectional observational epidemiological survey using multistage household probabilistic sampling to recruit a representative adult sample (N = 1203; age = 43.7 ± 13.6; 48.7% male). Mental disorders including current major depressive episode (MDE) were observer-rated (Mini International Neuropsychiatric Interview). The PHQ-9, quality of life (QoL), and loneliness were self-assessed. We performed validity and reliability tests of the PHQ-9 and receiver operating curve (ROC) analysis. Results: The Serbian PHQ-9 was internally consistent and correlated in the expected directions with QoL and loneliness. At the cut-off score ≥ 8, sensitivity was .85 and specificity was .91. ROC analysis showed that the area under the curve was .95, indicating that the Serbian PHQ-9 can discriminate very well between persons with/without MDE. Conclusions: When the PHQ-9 is assessed against the structured diagnostic interview in the general population to detect depression, the cut-off of ≥8 balances best sensitivity and specificity. abstract_id: PUBMED:34265564 Multimorbidity networks of mental disorder symptom domains across psychopathology severity levels in community youth. Our aim was to scrutinize multimorbidity in a community sample of youths (Philadelphia Neurodevelopmental Cohort) in form of co-occurrences of DSM-IV disorder symptom domains, elucidating if and when specific symptom domain interrelations emerge as mental disorder severity levels increase. We estimated four multimorbidity networks based on four severity cut-offs ('at least symptomatic', 'at least subthreshold', 'at least threshold', and 'impaired') and compared them pairwise on two measures: global network strength and network structure. We further computed community clusters for each network to detect symptom domain interrelations. Pairwise comparisons of the multimorbidity networks based on data from 9410 probands showed significant differences in global strength of the networks with the two highest severity cut-offs ('impaired' and 'at least threshold') with the at least symptomatic networks (p < .05). The networks with the three highest severity cut-offs ('impaired', 'at least threshold', and 'at least subthreshold') differed significantly (p < .001) from the at least symptomatic network regarding global network structure but did not significantly differ from each other (p > .05). We identified four common clusters in the impaired, at least threshold, and at least subthreshold networks consisting of i) domains associated with behavioral disorders; ii) domains associated with anxiety disorders (agoraphobia, social anxiety and specific phobia); iii) domains associated with anxiety/mood/eating and; iv) domains associated with mood/eating disorders. We found that major mental disorder symptom domain interrelations become consistent from a subthreshold level onwards. Findings suggest that specific multimorbidity patterns emerge as psychopathology severity levels increase.
abstract_id: PUBMED:33314419 The validity of the fifth and the 10th Body Mass Index percentile as weight cut-offs for anorexia nervosa in adolescence: No evidence from quantitative and network investigation of psychopathology. Objective: Although the fifth Body Mass Index (BMI) percentile is the Diagnostic and Statistical Manual of Mental Disorders-5 weight cut-off criterion to diagnose anorexia nervosa (AN) in children and adolescents, its validity has not been proved, and the 10th percentile value is often applied. We aimed to investigate the diagnostic validity of these weight cut-offs. Method: We compared general and eating-disorder (ED) specific psychopathology in 380 adolescents with AN or atypical AN. They were grouped first with respect to the fifth BMI percentile and then with respect to the 10th BMI percentile and differences between groups were analysed. Network analyses on psychopathological symptoms were also conducted. Results: Adolescents with BMI above the fifth and the 10th percentile reported more severe ED specific symptomatology compared to those with BMI below these cut-offs. No significant differences emerged between groups either in general psychopathology or in the network structure of psychopathology. Conclusions: The fifth BMI percentile does not discriminate psychopathology severity in adolescents with AN. From the psychopathology perspective, our findings suggest that adolescents with atypical AN deserve the same clinical and research attention as those with full AN. Future studies are needed to identify a more accurate definition of underweight in adolescents. abstract_id: PUBMED:23586345 Relation of symptom-induced impairment with other illness parameters in clinic-referred youth. Objective: To examine the relation of caregiver ratings of psychiatric symptom-induced impairment with number and severity of symptoms and informant agreement in consecutive child psychiatry outpatient referrals. Methods: Parents and teachers completed a broadband DSM-IV-referenced rating scale with disorder-specific impairment for 636 youth (6-18 years). Illness parameters included impairment, number and severity of symptoms, and their combination (symptom + impairment) as well as categorical (cut-off) and dimensional scoring. Results: Agreement between impairment and other illness parameters showed considerable variation as a function of type of parameter, disorder, and informant, but to lesser extent age and gender. Many youth who met impairment cut-off for specific disorders did not meet symptom cut-off. Conversely, most youth who met symptom cut-off were impaired. Symptom cut-off evidenced greater convergence with impairment cut-off than combined symptom + impairment cut-offs. Severity of impairment was moderately to highly correlated with number and severity of symptoms. Parents' and teachers' ratings indicated little disorder-specific agreement about youth who met impairment cut-off, symptom cut-off, or combined symptom + impairment cut-off. Therefore, sole reliance on one informant greatly underestimates the pervasiveness of impairment. Conclusion: Findings are consistent with the notion that each illness parameter represents a unique conceptual construct, which has important clinical and research implications. Answer: The study by Brugha et al.
(PUBMED:12420889) aimed to determine if there are distinct subpopulations with common mental disorder (CMD) and depression symptom scores above epidemiological 'case' cut-offs, similar to the clear excess of cases at the lower end of IQ distributions representing severe learning disability. The analysis of data from 9556 non-psychotic respondents to the 1993 OPCS National Household Psychiatric Morbidity Survey showed that symptom counts for CMDs fall within single population distributions, with little numerical excess in the case range. The study concluded that high and low prevalences of these disorders appear to be population characteristics, with shifts in exponential means predicting proportions above case cut-offs. This suggests that there are not distinct case groups above epidemiological cut-offs for CMD symptom counts in the general population. Instead, the distribution of symptoms is more continuous, with no clear separation between 'cases' and 'non-cases' above the cut-offs used in the study.
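The reasoning in this answer rests on a property of the exponential distribution: if total symptom counts follow a single exponential distribution with mean mu, the proportion of the population above a case cut-off c is exp(-c / mu), so a modest shift in the population mean changes the proportion scoring as a "case" without any separate case subpopulation. The short sketch below works through that arithmetic for the conventional cut-off of 12 symptoms, treating counts as continuous for simplicity; the example means are illustrative values, not estimates from the survey.

```python
# Arithmetic behind the answer above: for an exponential distribution of total
# symptom counts with mean mu, the proportion above a cut-off c is exp(-c / mu).
# Counts are treated as continuous for simplicity; the means below are
# illustrative values, not estimates from the 1993 OPCS survey.
import math

cutoff = 12  # conventional CIS-R "case" threshold cited in the first abstract

for mu in (3.0, 4.0, 5.0, 6.0):
    prop_above = math.exp(-cutoff / mu)
    print(f"mean count {mu:.1f} -> {100 * prop_above:4.1f}% of the population above {cutoff}")
```

Under these assumptions, a shift in the mean count from 3 to 6 symptoms moves the proportion above the cut-off from roughly 2% to roughly 13%, which is the sense in which prevalence tracks the population mean rather than a distinct case group.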
Instruction: Resection of pancreatic ductal adenocarcinoma with synchronous distant metastasis: is it worthwhile? Abstracts: abstract_id: PUBMED:37909951 CA19-9 With Two-stage Resection Is Useful for Conversion Surgery in PDAC With Synchronous Oligometastases. Background/aim: Pancreatic adenocarcinoma (PDAC) with synchronous oligometastases may indicate a surgical benefit after chemotherapy. We investigated whether primary and metastatic resection of PDAC with oligometastases can improve the survival and then explored prognostic factors to identify indications for conversion surgery. Patients And Methods: We reviewed 425 patients with PDAC who underwent pancreatic resection from 2005 to 2019. Clinical characteristics and outcomes were analyzed. Two-stage resection was defined as preceding metastasectomy and subsequent primary resection after chemotherapy. Results: Fifteen patients (3.5%) had synchronous oligometastases. We evaluated the overall survival of the patients with oligometastases and those without metastases. The survival curves almost completely overlapped (median survival time: 35.9 vs. 32.1 months). The univariate Cox regression analysis revealed a normal level of preoperative CA19-9 (p=0.075), two-stage resection (p=0.072), and R0 resection (p=0.064) were likely promising prognostic factors. The combination of a normal level of preoperative CA19-9 with two-stage resection was a significant prognostic factor (p=0.038). In addition, patients with a normal preoperative CA19-9 level and two-stage resection had better survival (46.1 vs. 28.1 months, p=0.026). Conclusion: The combination of normal preoperative CA19-9 with two-stage resection can be a useful way to identify patients with PDAC and oligometastases for surgical indication. abstract_id: PUBMED:17941009 Is resection of periampullary or pancreatic adenocarcinoma with synchronous hepatic metastasis justified? Background: To date, no consensus has been reached regarding which primary tumor subtypes are managed appropriately with hepatic metastectomy. Specifically, the role of hepatic resection for metastatic periampullary or pancreatic adenocarcinoma remains controversial. Methods: Between 1995 and 2005, 1563 patients underwent surgical resection for periampullary carcinoma (n=608 patients) or pancreatic adenocarcinoma (head, n=905 patients; tail, n=50 patients). Data on demographics, operative details, primary tumor status, and-when indicated-extent of hepatic metastasis were collected. Results: Of the 1563 patients who underwent resection of periampullary or pancreatic adenocarcinoma, 22 patients (1.4%) underwent simultaneous hepatic resection for synchronous liver metastasis. The primary tumor site was ampullary (n=1 patient), duodenal (n=2 patients), distal bile duct (n=2 patients), or pancreas (head, n=10 patients; tail, n=7 patients). The majority of patients (86.4%) had a solitary hepatic metastasis, and the median size of the largest lesion was 0.6 cm. Hepatic metastectomy included wedge resection (n=20 patients), segmentectomy (n=1 patient), and hemihepatectomy (n=1 patient). After matching patients on primary tumor histology and location, the median survival of patients who underwent hepatic resection of synchronous metastasis was 5.9 months compared with 5.6 months for patients who underwent palliative bypass alone (P=.46) and 14.2 months for patients with no metastatic disease who underwent primary tumor resection only (P<.001).
Pancreatic (median, 5.9 months) versus nonpancreatic (median, 9.9 months) primary tumor histology was not associated with a difference in survival in patients who underwent resection of synchronous liver metastasis (P=.43). Conclusions: Even in well selected patients with low-volume metastatic liver disease, simultaneous resection of periampullary or pancreatic carcinoma with synchronous liver metastases did not result in long-term survival in the overwhelming majority of patients. abstract_id: PUBMED:37555114 Resection of isolated liver oligometastatic disease in pancreatic ductal adenocarcinoma: Is there a survival benefit? A systematic review. Background: Presence of liver metastatic disease in pancreatic ductal adenocarcinoma (PDAC), either synchronous or metachronous after pancreatic resection, is a terminal diagnosis that warrants management with palliative intent as per all international practice guidelines. However, there is an increasing interest on any potential value of surgical treatment of isolated oligometastatic disease in selected cases. Aim: To present the published evidence on surgical management of PDAC liver metastases, synchronous and metachronous, and compare the outcomes of these treatments to the current standard of care. Methods: A systematic review was performed in line with the Preferred Reporting Items for Systematic Review and Meta-Analyses guidelines to compare the outcomes of both synchronous and metachronous liver metastases resection to standard care. Results: 356 studies were identified, 31 studies underwent full-text review and of these 10 were suitable for inclusion. When synchronous resection of liver metastases was compared to standard care, most studies did not demonstrate a survival benefit with the exception of one study that utilised neoadjuvant treatment. However, resection of metachronous disease appeared to confer a survival advantage when compared to treatment with chemotherapy alone. Conclusion: A survival benefit may exist in resection of selected cases of metachronous liver oligometastatic PDAC disease, after disease biology has been tested with time and systemic treatment. Any survival benefit is less clear in synchronous cases; however an approach with neoadjuvant treatment and consideration of resection in some selected cases may confer some benefit. Future studies should focus on pathways for selection of cases that may benefit from an aggressive approach. abstract_id: PUBMED:32399273 Surgical resection of metastatic pancreatic cancer: is it worth it?-a 15-year experience at a single Chinese center. Background: The significance of surgical resection in the treatment of metastatic pancreatic ductal adenocarcinoma (PDAC) is currently unclear. This study aimed to summarize and clarify the experience of surgical treatment of M1 PDAC in our center and evaluate whether it may offer benefits to some metastatic PDAC patients. Methods: We analyzed the data of the patients with M1 PDAC who underwent synchronous tumor resection between 2003 and 2014 at Ruijin Hospital. Simultaneously, clinical data for M1 PDAC patients who underwent palliative bypass or exploratory laparotomy only was also collected during the same period as control. Every patient was followed up at least 3 years after hospitalization. The follow-up endpoint was December 31, 2017. Results: A total of 36 patients underwent synchronous tumor resection; of them, 7 received postoperative adjuvant chemotherapy. 
Their 1-, 2-, and 3-year survival rates were 27.3%, 21.2%, and 7.1%, respectively, and the overall survival was 7.9 months. The overall survival of the palliative bypass and exploratory laparotomy groups was only 4 and 3.7 months, respectively (P<0.05). Conclusions: It is unclear whether synchronous tumor resection can benefit M1 PDAC patients due to tumor heterogeneity and differences in tumor burden. However, our current experience indicated that synchronous tumor resection can be safely performed and might be appropriate for some highly selected patients due to the relatively longer survival time. Combined (neo)adjuvant chemotherapy would further prolong overall survival in these patients. abstract_id: PUBMED:27832445 Pancreatic cancer and liver metastases: state of the art. Pancreatic cancer is still one of the most aggressive oncological diseases with a 5-year survival rate below 10%. Surgery remains the only curative treatment; however, most patients present with late-stage disease deemed unresectable, either due to extensive local vascular involvement or the presence of distant metastasis. In the detection of hepatic metastases, the current standard is palliative chemotherapy with fluorouracil, leucovorin, irinotecan, and oxaliplatin (FOLFIRINOX) or nab-paclitaxel with gemcitabine. Once hepatic metastases are diagnosed, the guidelines do not recommend resection of the primary tumor. Recent findings suggest that some patients with non-resectable diseases initially have survival rates as good as those with initially resectable disease when they are able to undergo surgical resection. Synchronous resection of both the primary tumour as well as the liver metastases may be beneficial and improves the outcome. abstract_id: PUBMED:31943690 Pancreatic resection in patients with synchronous extra-pancreatic malignancy: outcomes and complications. Background: Patients may present with a resectable pancreatic tumour in the context of a concurrent primary extra-pancreatic malignancy. These patients pose a dilemma regarding their suitability for surgery. We evaluated our experience with such patients who underwent pancreatic resection with curative intent and detailed their outcomes and rationale for surgical decision-making. Methods: A retrospective review of patients with pancreatic concurrent extra-pancreatic primary malignancy who underwent pancreatic resection at our institution over a 12-year period (2005-2016) was performed. Clinical, histopathological and perioperative outcomes were reviewed. Results: Ten patients with a median age of 74 years (40-85 years) were identified. Secondary primary tumours included thyroid (n = 2), gastrointestinal (n = 4), small bowel neuroendocrine (n = 1), renal (n = 1) and haematological malignancies (n = 2). Pancreatic tumours included pancreatic ductal adenocarcinomas (n = 6), solid pseudopapillary neoplasms (n = 2) and ampullary carcinomas (n = 2). After a median follow up of 41.3 months (31.3-164 months), 8 of 10 patients were still alive. Two patients died due to metastatic disease from the secondary malignancy (small bowel neuroendocrine tumour and sigmoid colon adenocarcinoma). The post-operative complication rate was 30% with no perioperative 90-day mortality. Conclusion: Selected patients with a pancreatic and concurrent primary extra-pancreatic malignancy may undergo curative pancreatic resection with favourable outcomes.
abstract_id: PUBMED:31346576 Oligometastases in pancreatic cancer (Synchronous resections of hepatic oligometastatic pancreatic cancer: Disputing a principle in a time of safe pancreatic operations in a retrospective multicenter analysis). The aim of the present review was to analyze the current data on surgery of synchronous liver metastases in pancreatic ductal adenocarcinoma (PDAC) with curative intent. A review of the literature was carried out to identify the current international concepts regarding surgery of liver metastases of PDAC and, furthermore, we addressed the current challenges of resection of liver metastases of PDAC. Resection of liver metastases in PDAC may provide survival benefit without compromising safety and quality of life in a highly selected group of patients. abstract_id: PUBMED:31997233 A systematic review of surgical resection of liver-only synchronous metastases from pancreatic cancer in the era of multiagent chemotherapy. Recent studies considered surgery as a treatment option for patients with pancreatic ductal adenocarcinoma (PDAC) and synchronous liver metastases. The aim of this study was to evaluate systematically the literature on the role of surgical resection in this setting as an upfront procedure or following primary chemotherapy. A systematic search was performed of PubMed, Embase and the Cochrane Library in accordance with PRISMA guidelines. Only studies that included patients with synchronous liver metastases published in the era of multiagent chemotherapy (after 2011) were considered, excluding those with lung/peritoneal metastases or metachronous liver metastases. Median overall survival (OS) was the primary outcome. Six studies with 204 patients were analyzed. 63% of patients underwent upfront pancreatic and liver resection, 35% had surgery after primary chemotherapy with strict selection criteria and 2% had an inverse approach (liver surgery first). 38 patients (18.5%) did not undergo any liver resection since metastases disappeared after chemotherapy. Postoperative mortality was low (< 2%). Median OS ranged from 7.6 to 14.5 months after upfront pancreatic/liver resection and from 34 to 56 months in those undergoing preoperative treatment. This systematic review suggests that surgical resection of pancreatic cancer with synchronous liver oligometastases is safe, and it can be associated with improved survival, providing a careful selection of patients after primary chemotherapy. abstract_id: PUBMED:28105159 Small bowel metastasis from pancreatic cancer in a long-term survival patient with synchronous advanced malignant pleural mesothelioma: A case report and literature review. Diffuse malignant pleural mesothelioma (MPM) is an aggressive tumor that originates from the surface of the pleura. Approximately 70% of cases are associated with chronic asbestos exposure. MPM is regarded as an incurable disease, with a median survival of ~2 years following intensive multimodality treatment. Pancreatic cancer is a malignancy also associated with a poor prognosis, with only 2% of patients surviving for 5 years. The majority of patients with pancreatic cancer are diagnosed with an advanced stage of disease and experience a poor response to therapy. The development of synchronous MPM and other types of cancer is rare. The present study describes a patient with synchronous, biphasic MPM and pancreatic adenocarcinoma, who was treated with a multimodal therapeutic approach with stereotactic body radiation therapy.
Due to a suspected diagnosis of 'acute abdomen', an emergency small intestine resection was performed and a subsequent diagnosis of moderately-differentiated adenocarcinoma was confirmed. During a further immunohistochemical examination, pathologists determined that the small bowel metastasis descended from pancreatic cancer. The onset of bowel metastasis is an event rarely associated with MPM, and has not been previously described in the literature for cases of pancreatic cancer. Therefore, to the best of our knowledge, the present study describes the first case of intestinal metastasis from pancreatic cancer in a long-term survival patient with biphasic MPM. abstract_id: PUBMED:36359552 Synchronous Pancreatic Ductal Adenocarcinoma in the Head and Tail, a Double Trouble: A Case Report and Literature Review. Synchronous primary pancreatic ductal adenocarcinoma (PDAC) is very rare and can be formed either through multicentric carcinogenesis or intrapancreatic metastasis. We report the case of an 80-year-old man with a history of type 2 diabetes mellitus who presented with abdominal pain and weight loss. Laboratory tests showed elevated levels of blood glucose and CA 19-9, and Computed Tomography revealed two hypoenhancing lesions in the head and tail of the pancreas. Endoscopic ultrasound, which is the imaging method of choice for pancreatic cancer, was performed with a fine needle biopsy, and the cytological analysis diagnosed PDAC in both lesions. The patient underwent total pancreatectomy, and pathologic evaluation revealed synchronous primary PDAC with moderate to poor differentiation in the head and tail in the setting of IPMN (intraductal papillary mucinous neoplasia) and chronic pancreatitis. After his recovery from postoperative pulmonary embolism, the patient was discharged home with sufficient glycemic control. Multifocal PDAC occurs more often when precursor lesions, such as IPMN, pre-exist. The optimal treatment for multiple lesions spread all over the pancreas is total pancreatectomy. Diabetes mellitus is a serious complication of total pancreatectomy (new-onset or type 3c), but overall, long-term survival has been significantly improved. Answer: The resection of pancreatic ductal adenocarcinoma (PDAC) with synchronous distant metastasis is a subject of ongoing debate and research. The literature presents mixed findings on the survival benefits and justifications for such surgical interventions. Some studies suggest that resection may be beneficial in highly selected patients. For instance, a study found that the combination of normal preoperative CA19-9 levels with two-stage resection (preceding metastasectomy followed by primary resection after chemotherapy) could be a significant prognostic factor for improved survival in patients with PDAC and oligometastases (PUBMED:37909951). Another study indicated that resection of liver metastases in PDAC might provide a survival benefit without compromising safety and quality of life in a highly selected group of patients (PUBMED:31346576). However, other studies have reported less optimistic outcomes. One study concluded that simultaneous resection of periampullary or pancreatic carcinoma with synchronous liver metastases did not result in long-term survival for the majority of patients (PUBMED:17941009). 
Similarly, a systematic review suggested that while resection of metachronous liver oligometastatic PDAC disease might confer a survival advantage, the benefit is less clear in synchronous cases, although an approach with neoadjuvant treatment and consideration of resection in selected cases may offer some benefit (PUBMED:37555114). A 15-year experience at a single Chinese center suggested that synchronous tumor resection might be appropriate for some highly selected patients due to the relatively longer survival time, especially when combined with (neo)adjuvant chemotherapy (PUBMED:32399273). Another systematic review indicated that surgical resection of pancreatic cancer with synchronous liver oligometastases is safe and can be associated with improved survival, provided careful selection of patients after primary chemotherapy (PUBMED:31997233). In summary, while some evidence supports the potential benefit of resection in selected patients with PDAC and synchronous distant metastasis, particularly when combined with chemotherapy, the overall consensus is that such an approach should be considered with caution and on a case-by-case basis. The decision to proceed with resection should involve careful patient selection, consideration of the disease biology, and the use of multidisciplinary treatment strategies.
Instruction: Evaluation of new treatments in radiation oncology: are they better than standard treatments? Abstracts: abstract_id: PUBMED:27974927 Development of new immunotherapy treatments in different cancer types. Cancer immunotherapy involves the use of therapeutic modalities that manipulate the immune system using immune agents such as cytokines, vaccines, cell therapies and humoral or transfection agents. Cancer immunotherapy aims to stimulate the host's anti-tumor response by increasing effector cell numbers and the production of soluble mediators, and to decrease the host's suppressor mechanisms by inducing a tumor-killing environment and by modulating immune checkpoints. Immunotherapy seems to work better in more immunogenic tumors. Reviewing the literature, the article presents new immunologic treatments in cancers that have received less attention at recent conferences and in which immunotherapy is still under investigation. Bladder cancer was the first indication for which immunotherapy was used in 1970. A promising line of clinical research in bladder cancer is the use of immune checkpoint inhibitors. Although breast cancer is considered immunologically silent, several preclinical and clinical studies suggested that immunotherapy has the potential to improve the clinical outcomes for patients with breast cancer. Cervical cancer, brain cancer, head and neck cancer and colorectal and esophageal cancers are cancer types for which new immune-based cancer treatments are currently under development. Recent agents used in clinical trials for the aforementioned cancers are described. abstract_id: PUBMED:35076514 Loco-Regional Treatments for Hepatocellular Carcinoma in People Living with HIV. Hepatocellular carcinoma (HCC) accounts for approximately 75-90% of primary liver cancers and is the sixth most common cancer and the third leading cause of cancer-related deaths worldwide. In the HIV-positive population, the risk of HCC is approximately four times higher than in the general population, with higher cancer-specific mortality than in HIV-negative patients. In most cases, HCC is diagnosed at a younger age than in the HIV-negative population and at an intermediate-advanced stage, thus limiting the therapeutic possibilities. Treatment choice in HIV-positive patients with HCC is subject to cancer staging, liver function and health status, as it is for HIV-negative HCC patients. To date, there are relatively few studies on the efficacy and safety of loco-regional treatments for HCC in HIV-positive patients. So far, the literature shows that curative treatments such as radiofrequency ablation (RFA) show no significant differences in overall survival between HIV-positive and HIV-negative patients, as opposed to palliative treatments such as TACE, where there is a significant difference in overall survival. Although it can be assumed that the most recently discovered loco-regional therapies are applicable to HIV-positive patients with HCC in the same way as to HIV-negative patients, further studies are needed to confirm this hypothesis. The purpose of our review is to evaluate these treatments, their efficacy, effectiveness, safety and their applicability to HIV-positive patients. abstract_id: PUBMED:31527477 Targeting Autophagy for Overcoming Resistance to Anti-EGFR Treatments. Epidermal growth factor receptor (EGFR) plays critical roles in cell proliferation, tumorigenesis, and anti-cancer drug resistance.
Overexpression and somatic mutations of EGFR result in enhanced cancer cell survival. Therefore, EGFR can be a target for the development of anti-cancer therapy. Patients with cancers, including non-small cell lung cancers (NSCLC), have been shown to respond to EGFR-tyrosine kinase inhibitors (EGFR-TKIs) and anti-EGFR antibodies. However, resistance to these anti-EGFR treatments has developed. Autophagy has emerged as a potential mechanism involved in the acquired resistance to anti-EGFR treatments. Anti-EGFR treatments can induce autophagy and thereby result in resistance to anti-EGFR treatments. Autophagy is a programmed catabolic process stimulated by various stimuli. It promotes cellular survival under these stress conditions. Under normal conditions, EGFR-activated phosphoinositide 3-kinase (PI3K)/AKT serine/threonine kinase (AKT)/mammalian target of rapamycin (mTOR) signaling inhibits autophagy while EGFR/rat sarcoma viral oncogene homolog (RAS)/mitogen-activated protein kinase kinase (MEK)/mitogen-activated protein kinase (MAPK) signaling promotes autophagy. Thus, targeting autophagy may overcome resistance to anti-EGFR treatments. Inhibitors targeting autophagy and EGFR signaling have been under development. In this review, we discuss crosstalk between EGFR signaling and autophagy. We also assess whether autophagy inhibition, along with anti-EGFR treatments, might represent a promising approach to overcome resistance to anti-EGFR treatments in various cancers. In addition, we discuss new developments concerning anti-autophagy therapeutics for overcoming resistance to anti-EGFR treatments in various cancers. abstract_id: PUBMED:36366823 New and Emerging Treatments for Inflammatory Bowel Disease. Background: The specific etiopathogenesis of inflammatory bowel disease (IBD) is still unknown. Although conventional anti-inflammatory or immunomodulatory drugs, which are relatively nonspecific to pathogenesis, have been quite useful in many cases, elucidating the pathogenesis has gradually facilitated the development of disease-specific therapies for refractory cases in the last 2 decades. Summary: With a greater understanding of the multiple overactive signaling pathways of the gut mucosal immune response and enhanced leukocyte trafficking, several biological agents or small molecule drugs following the first novel biologic, anti-tumor necrosis factor α (anti-TNFα), have been developed with several modes of action, directed against adhesion molecules, sphingosine-1-phosphate receptors, cytokines (IL-12/23, TL1A, and IL-36), Janus kinase (JAK), and phosphodiesterase. Although preceding biological agents have dramatically changed the IBD treatment strategy, many patients still require alternative therapies due to failure or side effects. Newer treatments are now expected to provide better efficacy with an improved adverse event profile. In addition, translational studies have highlighted the new therapeutic concepts' potential, including modulation of host-microbiome interactions, stem cell therapy for perianal fistula, regulation of fibrosis, regulation of the gut-brain axis, and control of previously less targeted immune cells (B cells and innate lymphoid cells). This paper comprehensively reviews not only the latest therapies that are already available or will be shortly, but also emerging promising treatments that will hopefully be established for IBD in the future.
Key Messages: Many kinds of new treatments are available, and promising treatments with new perspectives are expected to emerge for refractory IBD in the future. abstract_id: PUBMED:34986970 Biomarkers with Potential Predictive Value for Cardiotoxicity in Anticancer Treatments. The rapid development of anticancer treatments in recent years has greatly improved the prognosis of cancer patients. However, with the extension of survival time of cancer patients, various short-term and long-term side effects brought about by anticancer treatments, especially cardiotoxicity, have become increasingly prominent. Nonetheless, at present, there are few diagnostic methods with extremely high sensitivity and specificity to detect and accurately predict whether patients receiving anticancer treatment will experience cardiovascular complications. Inflammation, fibrosis and oxidative stress are considered to be important mechanisms involved in the cardiotoxicity of anticancer treatments. Cardiovascular biomarkers that can predict and detect cardiovascular dysfunction earlier than clinical symptoms or than the left ventricular ejection fraction monitored by echocardiography are of great value for timely treatment adjustment and prognosis evaluation. Cardiac troponin T/I and brain natriuretic peptide/N-terminal prohormone of brain natriuretic peptide have been routinely used in clinical practice to monitor cardiotoxicity, and some new biomarkers such as soluble suppression of tumorigenicity-2, myeloperoxidase, growth differentiation factor-15, galectin-3 and endothelin-1 have potential in this area. In the future, larger-scale experimental studies are needed to provide sufficient evidence, and how to detect these biomarkers quickly and at low cost is also a problem to be dealt with. abstract_id: PUBMED:35948279 Doubly robust estimation of optimal dynamic treatment regimes with multicategory treatments and survival outcomes. Patients with chronic diseases, such as cancer or epilepsy, are often followed through multiple stages of clinical interventions. Dynamic treatment regimes (DTRs) are sequences of decision rules that assign treatments at each stage based on measured covariates for each patient. A DTR is said to be optimal if the expectation of the desirable clinical benefit reaches a maximum when applied to a population. When there are three or more options for treatments at each decision point and the clinical outcome of interest is a time-to-event variable, estimating an optimal DTR can be complicated. We propose a doubly robust method to estimate optimal DTRs with multicategory treatments and survival outcomes. A novel blip function is defined to measure the difference in expected outcomes among treatments, and a doubly robust weighted least squares algorithm is designed for parameter estimation. Simulations using various weight functions and scenarios support the advantages of the proposed method in estimating optimal DTRs over existing approaches. We further illustrate the practical value of our method by applying it to data from the Standard and New Antiepileptic Drugs study. In this analysis, the proposed method supports the use of the new drug lamotrigine over the standard option carbamazepine. When the actual treatments match the estimated optimal treatments, survival outcomes tend to be better. The newly developed method provides a practical approach for clinicians that is not limited to cases of binary treatment options. abstract_id: PUBMED:37460727 Image-Guided Precision Treatments.
Chemotherapy, radiotherapy, and surgery are traditional cancer treatments, which usually produce unpredictable side effects and potential risks to normal healthy organs/tissues. Thus, safe and reliable treatment strategies are urgently required to maximize therapeutic efficiency at lesions and minimize risks to healthy regions. To this end, molecular imaging makes specifically targeted therapy possible. Besides that, image guidance, as a precision visualized approach for real-time in situ evaluation as well as for intraoperative navigation, has attracted considerable attention in the past decade. Along with the rapid development of multifunctional micro-/nanobiomaterials, versatile cutting-edge and advanced therapy strategies (e.g., thermal therapy, dynamic therapy, gas therapy, etc.) have been achieved and have greatly contributed to image-guided precision treatments in every aspect. Therefore, this chapter aims to discuss both traditional and advanced cancer treatments and especially to elucidate the important roles that visualized medicine has been playing in image-guided precision treatments. abstract_id: PUBMED:29568412 Impact of physical exercise in cancer survivors during and after antineoplastic treatments. Cancer patients experience symptoms and adverse effects of treatments that may last even after the end of treatments. Exercise is a safe, non-pharmacological and cost-effective therapy that can provide several health benefits in cancer patients and survivors, reducing cancer symptoms and cancer treatment side effects. The purpose of this review is to describe how physical exercise can reduce cancer symptoms and cancer treatment side effects. We developed a pragmatic classification of symptoms, dividing them into physical, psychological and psycho-physical aspects. For each symptom we discuss causes and therapies, analyse the effects of physical exercise, and summarize the most effective type of exercise for reducing the symptom. This review also points out the difficulties that patients and survivors face when practising physical activity and provides some solutions to overcome these barriers. For each specific cancer, it emerges that the type, frequency and intensity of physical exercise could be prescribed and supervised as a therapeutic program, just as the type, dose and duration of a drug treatment are. abstract_id: PUBMED:34025489 Personality Traits and Cardiotoxicity Arising From Cancer Treatments: An Hypothesized Relationship. Thanks to the evolution of medical and pharmaceutical research, the number of cancer treatments is steadily rising. Despite this, several side effects related to cancer treatments, such as cardiotoxicity, can exacerbate patients' physical and psychological conditions. Over the years, researchers have explored the possible relationship between psychological variables and physical diseases. Even though some authors examined the relationship between personality and specific diseases, no scientific attention has been paid to the role of personality in the development of cardiotoxicity arising from cancer treatments. Yet this is an important objective, given that determining whether personality influences cardiac toxicity of anticancer treatments could inform the processes by which stable psychological factors influence health.
This contribution summarizes and analyzes the available scientific evidence about the association between personality and main cardiotoxicity-related-diseases of anticancer therapies, including cancer and cardiovascular diseases, in order to sketch a hypothetical model of the relationship between personality traits and cardiotoxicity. abstract_id: PUBMED:15728168 Evaluation of new treatments in radiation oncology: are they better than standard treatments? Context: The superiority of innovative over standard treatments is not known. To describe accurately the outcomes of innovations that are tested in randomized controlled trials (RCTs) 3 factors have to be considered: publication rate, quality of trials, and the choice of the adequate comparator intervention. Objective: To determine the success rate of innovative treatments by assessing preferences between experimental and standard treatments according to original investigators' conclusions, determining the proportion of RCTs that achieved primary outcomes' statistical significance, and performing meta-analysis to examine if the summary point estimate favored innovative vs standard treatments. Data Sources: Randomized controlled trials conducted by the Radiation Therapy Oncology Group (RTOG). Study Selection: All completed phase 3 trials conducted by the RTOG since its creation in 1968 until 2002. For multiple publications of the same study, we used the one with the most complete primary outcomes and with the longest follow-up information. Data Extraction: We used the US National Cancer Institute definition of completed studies to determine the publication rate. We extracted data related to publication status, methodological quality, and treatment comparisons. One investigator extracted the data from all studies and 2 independent investigators extracted randomly about 50% of the data. Disagreements were resolved by consensus during a meeting. Data Synthesis: Data on 12,734 patients from 57 trials were evaluated. The publication rate was 95%. The quality of trials was high. We found no evidence of inappropriateness of the choice of comparator. Although the investigators judged that standard treatments were preferred in 71% of the comparisons, when data were meta-analyzed innovations were as likely as standard treatments to be successful (odds ratio for survival, 1.01; 99% confidence interval, 0.96-1.07; P = .5). In contrast, treatment-related mortality was worse with innovations (odds ratio, 1.76; 99% confidence interval, 1.01-3.07; P = .008). We found no predictable pattern of treatment successes in oncology: sometimes innovative treatments are better than the standard ones and vice versa; in most cases there were no substantive differences between experimental and conventional treatments. Conclusion: The finding that the results in individual trials cannot be predicted in advance indicates that the system and rationale for RCTs is well preserved and that successful interventions can only be identified after an RCT is completed. Answer: The evaluation of new treatments in radiation oncology compared to standard treatments reveals a complex picture. According to a study that assessed the outcomes of innovations tested in randomized controlled trials (RCTs) conducted by the Radiation Therapy Oncology Group (RTOG), the publication rate of these trials was high at 95%, and the quality of the trials was generally good. 
However, the study found that while investigators judged standard treatments to be preferred in 71% of the comparisons, meta-analysis showed that innovative treatments were as likely as standard treatments to be successful in terms of survival outcomes (odds ratio for survival, 1.01; 99% confidence interval, 0.96-1.07; P = .5). Interestingly, treatment-related mortality was worse with innovative treatments (odds ratio, 1.76; 99% confidence interval, 1.01-3.07; P = .008). The study concluded that there is no predictable pattern of treatment successes in oncology, indicating that sometimes innovative treatments are better, sometimes they are not, and often there are no substantive differences between experimental and conventional treatments. This suggests that the success of interventions can only be identified after an RCT is completed, and it underscores the importance of conducting RCTs to evaluate the efficacy and safety of new radiation oncology treatments compared to standard ones (PUBMED:15728168).
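A brief note on reading these figures (an interpretive aside, not part of the cited abstract): under the usual interpretation of confidence intervals, an odds ratio whose interval contains 1 indicates no demonstrable difference between arms. For survival, the summary odds ratio of 1.01 with a 99% confidence interval of 0.96-1.07 includes 1, so innovative and standard treatments cannot be distinguished; for treatment-related mortality, the odds ratio of 1.76 with a 99% confidence interval of 1.01-3.07 excludes 1, which is why the excess harm seen with innovations (P = .008) counts as statistically significant.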
Instruction: Polypectomy practices in a real life setting. Do we do enough for our patients? Abstracts: abstract_id: PUBMED:22568270 Polypectomy practices in a real life setting. Do we do enough for our patients? A review of 1061 colonoscopies. Unlabelled: Endoscopic removal of colon polyps is the main tool in colorectal cancer prevention programs. Although several quality indicators and guidelines have been proposed, polypectomy practices are still subject to great variation among endoscopists, and few data are available regarding polypectomy practices in real life settings. Methods: The records of the 1061 screening colonoscopies performed in 2010 in a tertiary care teaching hospital in Bucharest were reviewed and all colonoscopies where at least one polyp was detected were selected for analysis. The number of detected polyps, the resection rate and the method used for polypectomy were studied and compared to colonoscopy quality indicators previously reported in the literature. Results: A total of 941 polyps were detected in 395 patients. Invasive colorectal cancer was found in 42 patients. Of these polyps, 548 (58.23%) were removed endoscopically, with at least one polyp being resected in 283 patients (71.5%), resulting in a polypectomy rate of 26.67% in the entire study population. Cold forceps resection was the most commonly used method for the resection of polyps less than 5 mm in size, while for larger polyps hot snare was the preferred method. Concomitant invasive carcinoma and a larger number of polyps were predictive of incomplete removal of all detected polyps. Conclusion: Most quality indicators were met in our study group, with suboptimal performance regarding histological documentation of detected polyps and establishing a polyp-free colon. abstract_id: PUBMED:37466144 Skeletal response to teriparatide in real-life setting: effects of age, baseline bone density and prior denosumab use. Objectives: Teriparatide (TPD) is an osteoanabolic agent used in patients with high osteoporotic fracture risk. Predictors of therapeutic response to TPD in a real-life setting are not well characterised. This study investigated the influence of previous antiresorptive therapy, age and other patient characteristics on the skeletal response to TPD. Methods: Retrospective study at the metabolic bone clinic, University Hospitals Leuven, Belgium. Patients with osteoporosis and a high fracture burden received TPD for 9-18 months. Bone mineral density (BMD) was measured at baseline, 9 and 18 months at lumbar spine (LS), femoral neck (FN) and total hip (TH). Results: BMD at LS increased at 9 months (change mean (standard error) 6.8 % (0.7), p < 0.001) and at 18 months (8.0 % (0.9), p < 0.001), while BMD at FN and TH did not change significantly. Non-response in BMD change at the LS was seen with prior denosumab use (odds ratio 0.21, 95% confidence interval (CI) 0.049-0.912, p = 0.037). Changes in BMD at TH were significantly greater in younger patients and in patients with a lower baseline BMD. Conclusion: TPD-induced changes in BMD at TH might depend on age and baseline BMD, and at LS on prior denosumab use. The results suggest that these factors may be relevant for clinical decision making when initiating TPD treatment, although larger studies are needed to confirm these findings. abstract_id: PUBMED:31921126 Let's Talk About BiTEs and Other Drugs in the Real-Life Setting for B-Cell Acute Lymphoblastic Leukemia.
Background: Current therapies for acute lymphoblastic leukemia (ALL) are initially efficient, but even if a high percentage of patients have an initial complete remission (CR), most of them relapse. Recent data show that immunotherapy with either bispecific T-cell engagers (BiTEs) or chimeric antigen receptor (CAR) T cells can eliminate residual chemotherapy-resistant B-ALL cells. Objective: The objective of the manuscript is to present improvements in the clinical outcome for chemotherapy-resistant ALL in the real-life setting, by describing Romania's experience with bispecific antibodies for B-cell ALL. Methods: We present the role of novel therapies for relapsed B-cell ALL, including the drugs under investigation in phase I-III clinical trials, as a potential bridge to transplant. Blinatumomab is presented in a critical review, covering both the advantages and the limitations of this drug. Results: Bispecific antibodies are discussed, describing the clinical trials that resulted in their approval by the FDA and EMA. The real-life setting for relapsed B-cell ALL is described, and we present the patients treated with blinatumomab in Romania. Conclusion: In the current manuscript, we present blinatumomab as a therapeutic alternative in the bridge-to-transplant setting for refractory or relapsed ALL, to gain a better understanding of the available therapies and evidence-based data for these patients in 2019. abstract_id: PUBMED:29141265 HBV infections in Germany - health care costs in a real-life setting. Background: An estimated 500 000 people are infected with hepatitis B in Germany, imposing an enormous burden on infected patients and the health care system. The aim of our study was to estimate the real-life costs of treating hepatitis B and to analyze sociodemographic factors. Methods: We conducted a retrospective, non-interventional, single-center study from 07/2009 to 12/2012. Information on health care delivery was extracted from patient records. Besides that, a questionnaire survey regarding sociodemographic parameters and quality of life of HBV-infected patients was performed. Results: A total of 117 patients were included in our study and grouped into six different disease stages. The response rate of our survey was 80 %. We determined annual total costs of € 3509. Costs in the different groups ranged between € 221 and € 5618. The main costs (80 %) were caused by the antiviral therapy. Costs of co-medication and hospitalizations were of minor importance. Laboratory costs were primarily caused by determination of virological parameters. The route of transmission of HBV infection was unknown in 2/3 of all cases. Restrictions in quality of life due to the HBV infection were reported by 60 % of the patients. Patients receiving interferon treatment reported the highest restrictions. In an extrapolation, we estimated total annual hepatitis B treatment costs of 430 million € in Germany. Conclusion: This is the first study estimating real-life treatment costs of hepatitis B infections in Germany. Further research should follow in the context of newly introduced generic antivirals. abstract_id: PUBMED:30083012 Can the Communion of Saints Help the Search for Justice in Dying well (Enough), "In Abraham's Arms, Where Lazarus is Poor no Longer"? How can we practice dying in community? What communal practices nourish positive acts that make dying well (enough)-more than prudent caution in avoiding causing death (killing)?
What part of our common ecclesial life as Catholics enables us to accompany dying persons as a gracious return to God? We are twice haunted: most first world deaths result from foreseen end-stage illnesses; mortality is less feared than burdensomeness for caregivers from vulnerable dependence. Of those who die in the hospital, many deaths result from decisions to withhold and withdraw treatments. From ten kinds of desolation about dying, ten parallel consolations are discerned in hopes and outcomes-based best practices. abstract_id: PUBMED:36157862 Real-life multi-center retrospective analysis on nivolumab in difficult-to-treat patients with advanced hepatocellular carcinoma. Background: Hepatocellular carcinoma (HCC) is one of the leading causes of cancer-related death worldwide. The landscape of the systemic treatment for advanced HCC is changing quickly, and recently, the standard of care became either atezolizumab plus bevacizumab or tremelimumab plus durvalumab in the single tremelimumab regular interval durvalumab regimen. Nivolumab monotherapy has proven to be effective sometimes for advanced HCC and could be a valuable treatment option for patients outside current treatment indications and reimbursement criteria for the standard of care. This is a particular population of interest. Aim: To evaluate the real-world effectiveness of nivolumab monotherapy in patients with advanced HCC who are not eligible for other treatment. Methods: We conducted a retrospective, multicentric study including 29 patients with advanced HCC from 3 Belgian tertiary hospitals. All patients had had prior chemotherapy or were intolerant or ineligible for treatments. All study subjects received nivolumab 3 mg/kg in monotherapy, administered once every two weeks intravenously. Treatment continued until disease progression, severe adverse events or death. Data were retrieved from patients' medical records. The outcome parameters such as radiological response according to response evaluation criteria in solid tumors (RECIST) criteria, the biological response through the evolution of the alpha-fetoprotein (AFP) level, and clinical response considering both the Child-Pugh (CP) score and the World Health Organization (WHO) performance status (PS) were reported. A safety profile was also reported. Statistical analysis was performed using the SPSS Statistics 27 statistical software package. Results: The radiological overall response rate (defined as complete or partial response according to the immune RECIST and modified RECIST criteria) to nivolumab monotherapy was 24.1%. The biological overall response rate (defined as a decrease of ≥ 25% in AFP blood level) was 20.7%. Radiological and biological responses were significantly associated both with each other (P &lt; 0.001) and with overall survival (P &lt; 0.005 for radiological response and P &lt; 0.001 for biological response). Overall survival was 14.5 mo (+/- 2.1), and progression-free survival was 10.9 mo (+/- 2.3). After 4 mo of treatment, 78.3% of patients remained clinically stable or even showed improvement in WHO PS. Grade 3 adverse events occurred in 17.2% of patients, none had grade 4 adverse events, and no patients ceased nivolumab due to adverse events. Conclusion: Nivolumab monotherapy is a good treatment choice in frail patients with HCC who are ineligible for the standard of care or other validated systemic treatments. 
abstract_id: PUBMED:26493629 Early and delayed complications of polypectomy in a community setting: The SPoC prospective multicentre trial. Background: Colonoscopic polypectomy is effective in reducing the incidence of and mortality from colorectal cancer, but is not complication-free. Aims: To evaluate the incidence of early and delayed polypectomy complications and factors associated with their occurrence in a community setting. Methods: Web-database collection of patients' and polyps' features in consecutive colonic polypectomies during a 3-month period in 18 endoscopy centres. Results: Data on 5178 polypectomies in 2692 patients (54.3% males, mean age 59 years) were collected. The majority of the polyps were <10 mm (83.5%). Antithrombotic agents were taken by 22.7% of patients, 57.3% of whom withheld them before the procedure. Overall, 5 patients experienced perforations (0.2%) and 114 had bleeding (4.2%); the overall complication rate was 4.4%. Early complications were observed in 87 (3.2%); delayed complications (all major bleedings) occurred in 32 (1.2%). At multivariate analysis, polyp size (size >10 mm: OR 4.35, 95% CI 5.53-7.48) and, inversely, right-sided location (OR 0.58, 95% CI 0.36-0.94) were correlated with bleeding events. The use of antithrombotics was associated with a 5-fold increased risk of delayed bleeding. Conclusions: In the community setting, polypectomy was associated with a 1.4% risk of major complications. Polyp size and, inversely, right-sided location were associated with early bleeding; the use of antithrombotics increased the risk of delayed events. abstract_id: PUBMED:28839795 National survey to determine current practices, training and attitudes towards advanced polypectomy in the UK. Objective: Developments in advanced polypectomy techniques provide an alternative to surgery in the management of large and complex colorectal polyps. These endoscopic techniques require expertise and can potentially incur high complication rates. This survey evaluates current UK practice, attitudes and training in advanced polypectomy. Design: Anonymous online questionnaire. Setting: Colonoscopists within the UK were asked about their approach to large polyps (>2 cm). Results: Among the 268 respondents (64% of whom were BCSP accredited), 86% were confident in removing lesions >2 cm by endoscopic mucosal resection (EMR). Of these, 27% were classed as low volume operators (<10 lesions resected/annum) and 14% as high volume operators (>50/annum). By comparison, only 3% currently performed endoscopic submucosal dissection (ESD). Referring one or more benign polyps per year for surgery was common among responders of all levels (11-68%). Training deficiencies were common: only 21% of responders had received a period of training dedicated to advanced polypectomy; 58% of responders would welcome a national training scheme and a majority supported the implementation of advanced polypectomy accreditation with national guidelines. However, while 41% wanted nominated regional EMR experts, only 18% would welcome an integrated national referral network for large/complex polyps. Conclusions: EMR is practised widely while ESD service provision is very limited. Most experienced colonoscopists are confident in performing piecemeal EMR, even if their training is suboptimal and annual numbers low. Practices and attitudes were variable, even among self-defined level 4 operators.
Improving training and implementation of accreditation were welcomed, but there was little appetite for mandated referral to subspecialist 'experts' and national networks. abstract_id: PUBMED:27942276 Colonoscopy with Polypectomy in Patients Taking Clopidogrel. Background And Aims: To investigate the bleeding risk of colonoscopy with polypectomy in patients taking clopidogrel. Methods: Retrospective review of patients undergoing colonoscopy with polypectomy without interruption of clopidogrel. Patients with lesions larger than 1cm were generally rescheduled for polypectomy off clopidogrel. Most of the polyps were removed using cold snare technique. Endoscopic clips were routinely applied prophylactically. Results: A total of 125 polypectomies were performed in 60 patients. The average polyp size was 5.4 ± 2.1 mm. One patient (1.7%, CI 0.3-8.9%) developed post-polypectomy bleeding that resolved without treatment. Three patients (5%, CI 1.7-14%) had immediate bleeding during the procedure and all resolved with prompt clip application. Conclusions: Polypectomy of lesions up to 1cm in size can be performed without interruption of clopidogrel. abstract_id: PUBMED:33704855 Feeding the family at the end-of-life: An ethnographic study on the role of food and eating practices for families facing death in Portugal. Little has been said about the disruptive impact that the inability to eat and to participate in mealtimes has for patients with a life-threatening illness and their families. The aim of the current study is to overcome this gap and shed light on how food and eating practices are experienced by families at the end-of-life. An ethnographic research was developed in two Portuguese palliative care units: participant observation was conducted during 10 months and in-depth interviews were carried out with 10 patients with a life-threatening illness, 20 family members and 20 palliative care professionals. Food is not only a matter of nutrition in a biological sense, but also an act of giving care to patients with a life-threatening illness. The findings suggest that food and eating practices affect the processes of relationality and of doing family at the end-of-life in Portugal. Attention is particularly paid to gender differences. The study supports recent sociological research which understands the dying process as a relational experience and intends to develop sociological knowledge on the materialities of care. Answer: Polypectomy practices in real-life settings reveal that while there are efforts to adhere to quality indicators and guidelines, there is still room for improvement in ensuring optimal care for patients. A review of 1061 screening colonoscopies performed in a tertiary care teaching hospital in Bucharest showed that 941 polyps were detected in 395 patients, with invasive colorectal cancer found in 42 patients. Of the detected polyps, 58.23% were removed endoscopically, resulting in a polypectomy rate of 26.67% in the entire study population. Cold forceps resection was the most commonly used method for polyps less than 5 mm, and hot snare was preferred for larger polyps. However, the presence of concomitant invasive carcinoma and a larger number of polyps were predictive of incomplete removal of all detected polyps. This suggests that while most quality indicators were met, there was suboptimal performance regarding the histological documentation of detected polyps and establishing a polyp-free colon (PUBMED:22568270). 
In the broader context of polypectomy practices, the SPoC prospective multicentre trial in a community setting reported a 1.4% risk of major complications associated with polypectomy. Factors such as polyp size and the use of antithrombotics were associated with an increased risk of complications, particularly bleeding events (PUBMED:26493629). Additionally, a national survey in the UK indicated that while advanced polypectomy techniques like EMR are widely practiced, there is a lack of dedicated training and a need for national guidelines and accreditation to improve practices (PUBMED:28839795). Furthermore, a retrospective review found that polypectomy of lesions up to 1cm in size could be performed without interruption of clopidogrel, suggesting that certain practices can be safely maintained to avoid treatment delays (PUBMED:27942276). In conclusion, while polypectomy practices in real-life settings aim to meet quality standards, there is evidence that not all patients receive the optimal level of care, particularly in terms of complete polyp removal and documentation. There is a need for improved training, adherence to guidelines, and consideration of patient-specific factors to enhance the safety and effectiveness of polypectomy procedures.
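As a rough arithmetic check on the SPoC figures (a reconstruction from the numbers quoted in the abstract, not a calculation the authors state explicitly): 87 early plus 32 delayed complications among 2692 patients gives the reported overall complication rate of (87 + 32)/2692, i.e. about 4.4%, while the 1.4% figure for major complications is consistent with the 5 perforations plus the 32 delayed major bleedings, i.e. (5 + 32)/2692, or about 1.4%.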
Instruction: Can middle grade and consultant emergency physicians accurately interpret computed tomography scans performed for head trauma? Abstracts: abstract_id: PUBMED:19625556 Can middle grade and consultant emergency physicians accurately interpret computed tomography scans performed for head trauma? Cross-sectional study. Background: There has been an increasing demand for computed tomography (CT) scans of the head following trauma over the past 10 years. This has placed radiology services under pressure. Aims: This study aimed to see whether middle grade and consultant doctors with no particular training in radiology working in the emergency department could interpret CT scans performed for trauma with adequate sensitivity and specificity to allow safe discharge. The secondary aim was to describe which abnormalities, if any, were missed. Methods: Study participants were middle grade and consultant emergency physicians. A comparison was made of the ability to identify any abnormality on CT and a clinically important brain injury on CT, compared with a gold standard of a neuroradiology report. Results: 243 consecutive CT scans performed for trauma were reviewed by 17 doctors, including five consultants. The overall sensitivity and specificity for detecting clinically important brain injuries was 87.8 (95% CI 73.8 to 95.9) and 80.2 (95% CI 75.2 to 85.3), respectively. All the missed abnormalities were around the base of the brain. The most commonly overdiagnosed abnormality was diffuse cerebral oedema. Conclusions: Emergency physicians should not interpret CT scans for trauma without extra training. abstract_id: PUBMED:19306170 Emergency head CT scans: can neurosurgical registrars be relied upon to interpret them? Neurosurgical registrars are frequently called upon by A&E staff and physicians to interpret emergency head CT (computed tomography) scans out of hours. This appears to reflect the reduced threshold for scanning patients and the nonavailability of a radiologist to report these scans. This study was undertaken to assess the safety of such practices. Five neurosurgical registrars, blinded to each other and to the radiology reports, interpreted 50 consecutive emergency head CT scans (both trauma and nontrauma) from the hospital's imaging system as a pilot study. These were initially graded as normal or abnormal. Abnormal scans were assessed for the presence of an intracranial bleed, pneumocephalus, skull fractures, cerebral contusions, mass effect, midline shift, ischaemia or hydrocephalus. The agreement of the observers' recordings with the report issued or approved by a consultant radiologist was evaluated using SPSS Version 13.0. Four of the five registrars assessed a further 150 scans in a similar manner to complete the study. There was a good general agreement between the formal reports and the neurosurgical registrars' identification of normal scans (average Kappa 0.79). The radiology reports and the registrars also agreed well on the presence or absence of intracranial blood, contusions and pneumocephalus (Kappa value > 0.70). The agreement was poorer for ischaemia, mass lesions (other than intracranial haematomas), grey white differentiation, evidence of raised intracranial pressure and midline shift (Kappa < 0.5). Neurosurgical registrars compared well with radiologists when it came to assessing emergency head CT scans as normal or detecting a surgical lesion. The agreement was poorer on subtle abnormalities.
The practice of neurosurgical registrars informally 'reporting' on emergency head CT scans cannot be recommended. abstract_id: PUBMED:9785153 Computed tomography of the head by the accident and emergency department--why 24 hour access is vital. Objective: To examine the use made of 24 hour access to computed tomography from an accident and emergency (A&E) department and to assess whether clear benefits for patients could be identified by having such a service. Methods: Retrospective review of 176 cases where computed tomography was ordered by A&E staff of a large teaching hospital over a one year period. Results: 53% of scans were done "out of hours"; 97% of scans performed (171/176) were studies of the brain. Three examinations were of the cervical spine and two were of the chest. 54% of head scans (93/171) were performed for either confirmed or suspected trauma with 46% (78/171) done for medical indications. Only 16% (11/71) of patients who had a head scan for acute trauma required transfer to the regional neurosurgical unit after consultation. Computed tomography was 100% sensitive in the diagnosis of sub-arachnoid haemorrhage. In cases where computed tomography was performed for coma of undetermined origin the pathology causing coma was identified on computed tomography in 50% of cases. Conclusions: Computed tomography facilities allow comprehensive initial evaluation of the head injured patient and minimise hazardous and expensive transfer of these seriously ill patients. Experience shows that it is a vital tool in the initial differential diagnosis of the comatose patient and therefore must be available for use by senior and middle grade A&E staff on a 24 hour basis. abstract_id: PUBMED:21346679 A survey of emergency physicians' fear of malpractice and its association with the decision to order computed tomography scans for children with minor head trauma. Objective: The objective of the study was to determine whether fear of malpractice is associated with emergency physicians' decision to order head computed tomography (CT) in 3 age-specific scenarios of pediatric minor head trauma. We hypothesized that physicians with higher fear of malpractice scores will be more likely to order head CT scans. Methods: Board-eligible/board-certified members of the Michigan College of Emergency Physicians were sent a 2-part survey consisting of case scenarios and demographic questions. Effect of fear of malpractice on the decision to order a CT scan was evaluated using a cumulative logit model. Results: Two hundred forty-six members (36.5%) completed the surveys. In scenario 1 (infant), being a male and working in a university setting were associated with reduced odds of ordering a CT scan (odds ratio [OR], 0.40; 95% confidence interval [CI], 0.18-0.88; and OR, 0.35; 95% CI, 0.13-0.96, respectively). In scenario 2 (toddler), working for 15 years or more, at multiple hospitals, and for a private group were associated with reduced odds of ordering a CT scan (OR, 0.46; 95% CI, 0.26-0.79; OR, 0.36; 95% CI, 0.16-0.80; and OR, 0.51; 95% CI, 0.27-0.94, respectively). No demographic variables were significantly associated with ordering a CT scan in scenario 3 (teen). Overall, the fear of malpractice was not significantly associated with ordering a CT scan (OR, 1.28; 95% CI, 0.73-2.26; and OR, 1.70; 95% CI, 0.97-3.0). Only in scenario 2 was high fear significantly associated with increased odds of ordering a CT scan (OR, 2.09; 95% CI, 1.08-4.05).
Conclusions: Members of Michigan College of Emergency Physicians with a higher fear of malpractice score tended to order more head CT scans in pediatric minor head trauma. However, this trend was shown to be statistically significant only in 1 case and not overall. abstract_id: PUBMED:7832342 Accuracy of interpretation of cranial computed tomography scans in an emergency medicine residency program. Study Objectives: To determine the concordance of emergency physicians and radiologists in interpreting cranial computed tomography (CT) scans. The study also sought to determine the clinical significance of misinterpretations of cranial CT scans by emergency physicians. Design: Prospective cohort study. Setting: A county hospital emergency medicine residency program. Participants: Five hundred fifty-five patients undergoing CT scanning during emergency department evaluation. Results: Forty-nine percent (272) of the indications for CT scanning were for trauma, 14.2% (79) were for cerebrovascular accident, 25.1% (139) were for headache, 15.1% (84) were for seizure, and 13.7% (76) were for miscellaneous reasons. The radiologists interpreted 46.1% (256) of the CT scans as abnormal. The most frequent abnormalities were scalp hematoma, 15.2% (39); infarction, 14.1% (36); calcification, 6.3% (16); contusion, 6.3% (16); parenchymal hemorrhage, 5.1% (13); and mass, 5.1% (13). Nonconcordance between radiologists and emergency physicians was found in 38.7% (206) of the cases. Potentially clinically significant misinterpretations were found in 24.1% (131) of the total sample. These misinterpretations included 62 missed major findings (11.4% of total sample): 25 new infarcts, 10 mass lesions, 8 cases of cerebral edema, 8 parenchymal hemorrhages, 5 contusions, 4 subarachnoid hemorrhages, 1 epidural hematoma, and 1 subdural hematoma. However, on chart review, only three patients (0.6%) were found to have been managed inappropriately, and none had an adverse outcome. Conclusion: The misinterpretation rate of cranial CT scans by emergency physicians is of potential clinical concern. However, clinical mismanagement is rare. We recommend that more formal education in CT interpretation be included in residency training and continuing medical education programs for emergency physicians. abstract_id: PUBMED:27752602 Change in guardians' preference for computed tomography after explanation by emergency physicians in pediatric head injury. Objective: Head injury in children is a common problem presenting to emergency departments, and cranial computed tomography scanning is the diagnostic standard for these patients. Several decision rules are used to determine whether computed tomography scans should be used; however, the use of computed tomography scans is often influenced by guardians' preference toward the scans. The objective of this study was to identify changes in guardian preference for minor head injuries after receiving an explanation based on the institutional clinical practice guideline. Methods: A survey was conducted between July 2010 and June 2012. Patients younger than 16 years with a Glasgow Coma Scale score of 15 after a head injury and their guardians were included. Pre- and post-explanation questionnaires were given to guardians to assess their preference for computed tomography scans and factors related to the degree of preference. Treating physicians explained the risks and benefits of cranial computed tomography scanning using the institutional clinical practice guideline. 
Guardian preference for a computed tomography scan was examined using a 100-mm visual analog scale. Results: In total, 208 patients and their guardians were included in this survey. Guardian preference for computed tomography scans was significantly reduced after explanation (46.7 vs. 17.4, P<0.01). Pre-explanation preference and the strength of the physician recommendation to obtain a computed tomography scan were the most important factors affecting pre- and post-explanation changes in preferences. Conclusion: Explanation of the risks and benefits of cranial computed tomography scans using the institutional clinical practice guideline may significantly reduce guardian preference for computed tomography scans. abstract_id: PUBMED:29847541 Physicians' Risk Tolerance and Head Computed Tomography Use for Pediatric Patients With Minor Head Injury. Objectives: Traumatic brain injury is the leading cause of death and disability in children worldwide. The objective of this study was to determine the association between physician risk tolerance and head computed tomography (CT) use in patients with minor head injury (MHI) in the emergency department (ED). Methods: We retrospectively analyzed pediatric patients (<17 years old) with MHI in the ED and then administered 2 questionnaires (a risk-taking subscale [RTS] of the Jackson Personality Inventory and a malpractice fear scale [MFS]) to attending physicians who had evaluated these patients and made decisions regarding head CT use. The primary outcome was head CT use during ED evaluation; the secondary outcome was ED length of stay and final diagnosis of intracranial injury (ICI). Results: Of 523 patients with MHI, 233 (44.6%) underwent brain CT, and 16 (3.1%) received a final diagnosis of ICI. Among the 16 emergency physicians (EPs), the median scores of the MFS and RTS were 22 (interquartile range, 17-26) and 23 (interquartile range, 19-25), respectively. Emergency physicians who were most risk averse tended to order more head CT scans compared with the more risk-tolerant EPs (56.96% vs 37.37%; odds ratio, 8.463; confidence interval, 2.783-25.736). The ED length of stay (P = 0.442 and P = 0.889) and final diagnosis (P = 0.155 and P = 0.835) of ICI were not significantly associated with the RTS and MFS scores. Conclusions: Individual EP risk tolerance, as measured by RTS, was predictive of CT use in pediatric patients with MHI. abstract_id: PUBMED:32471780 Effect of parental pressure on emergency physicians for computerized tomography imaging request in children with head trauma. Background: Both minor and major head traumas constitute an important proportion of childhood emergency admissions. In this study, the findings of cranial computed tomography (CCT) scans performed as a result of parental pressure were evaluated. Methods: The frequency and findings of CCT scans performed as a result of parental pressure were examined in a separate subgroup. Results: A total of 227 patients were included in the study; 158 (69.9%) patients had undergone CCT scans; a pathological finding was detected in 24 (10.6%) of these patients, who underwent consultation by the neurosurgeon (the most common finding was an isolated linear fracture; n = 12; 50%). The patients undergoing CCT scans were divided into two subgroups: the PECARN group [n = 123 (77.8%)] and the Parental pressure group [n = 33 (22.2%)].
Conclusion: One third of the parents of children who presented to the emergency department with head trauma and had no indication for CCT according to PECARN rules insisted on CCT imaging, and none of these cases showed ciTBI, surgical operation, or mortality. None of the patients in the parental pressure group had a history of surgical intervention or mortality within one month after discharge. abstract_id: PUBMED:26066771 Are emergency physicians competent to interpret the cranial CT of patients younger than the age of 2 years with mild head trauma? Objective: The aim of this study is to detect the accuracy rate of emergency physicians' (EPs') interpretations of cranial computed tomography (CT) of a special patient group, the patients younger than the age of 2 years with mild head trauma. Methods: The study took place in a research and training hospital within a period of 3 months and included a total of 156 patients. The scans were interpreted by the EPs, and the predicted results were recorded in the patients' files; simultaneous interpretations of the on-call radiologist were also recorded. The interpretations were scanned retrospectively at the end of each month, compared, and recorded. Results: With reference to the radiologists' reports, the sensitivity and the specificity of EPs' interpretations of cranial CT were 76.9% and 95.1%, respectively. Concordance of both groups' interpretations was 93.6%. The area under the curve (AUC) value in receiver operating characteristic (ROC) analysis, which reflects these results, was calculated as 0.860 (95% confidence interval, 0.740-0.981). This value was found to be statistically significant (P &lt; .001). False-negative and false-positive values were 23% and 4.9%, respectively. No negative clinical outcomes were observed due to nonconcordance interpreted cases (6.4%). Conclusions: Emergency physicians can interpret safely the cranial CTs of patients younger than the age of 2 years with mild head trauma, until the radiologists' reports are obtained. However, the anatomic diversities of these patients' age group should be taken into consideration. abstract_id: PUBMED:29485796 Evaluation of the Head Multislice Computed Tomography Scan in Emergency Department. The aim of the study was to examine the prevalence of head injuries, acute stroke and brain tumors obtained from computed tomography (CT) scans in the emergency department (ED) during a one-year period. We also assessed the potential effect of seasons on the occurrence of stroke, head trauma and tumors found on CT scans, expressed in monthly intervals. This retrospective review included all patients that underwent emergency head CT from the hospital database. A total of 3888 head CT examinations were performed in adult patients presenting to ED and 1424 CT scans had at least one pathologic finding meeting diagnostic criteria for the study. Of the total number of CT scans analyzed, acute stroke was identified in 552 (14.19%), head trauma in 660 (16.97%), and brain tumor in 212 (5.45%) patients. Head trauma was more commonly found in males (n=465, 70.45%) than in females (n=195, 29.54%). Acute stroke was slightly more common in males than in females. Brain tumors were more frequently found in female patients. There were monthly variations in the number of head injuries and acute stroke diagnosed during the study period. Men and elderly patients were found to account for the greatest number of traumatic head injuries and therefore are at the highest risk of possible brain injury. 
Answer: Middle grade and consultant emergency physicians may not accurately interpret computed tomography (CT) scans performed for head trauma without additional training. A study found that the overall sensitivity and specificity for detecting clinically important brain injuries by these physicians was 87.8% and 80.2%, respectively. However, all missed abnormalities were around the base of the brain, and the most commonly overdiagnosed abnormality was diffuse cerebral oedema, leading to the conclusion that emergency physicians should not interpret CT scans for trauma without extra training (PUBMED:19625556). Another study assessing the safety of neurosurgical registrars interpreting emergency head CT scans out of hours found that while there was good general agreement with radiologists on the identification of normal scans and the presence or absence of intracranial blood, contusions, and pneumocephalus, the agreement was poorer for ischaemia, mass lesions, and other subtle abnormalities. This suggests that neurosurgical registrars may not be reliable in interpreting emergency head CT scans without the presence of a radiologist (PUBMED:19306170). Furthermore, a study evaluating the accuracy of emergency physicians' interpretations of cranial CT scans in an emergency medicine residency program found a nonconcordance rate of 38.7% between radiologists and emergency physicians, with potentially clinically significant misinterpretations in 24.1% of cases. This indicates a potential clinical concern regarding the misinterpretation of cranial CT scans by emergency physicians (PUBMED:7832342). In the context of pediatric patients with mild head trauma under the age of 2 years, emergency physicians showed a sensitivity of 76.9% and specificity of 95.1% in interpreting cranial CT scans, with a concordance rate of 93.6% compared to radiologists' reports. This suggests that emergency physicians can interpret these scans safely until radiologists' reports are obtained, but anatomic diversities of this age group should be considered (PUBMED:26066771). Overall, these studies suggest that while emergency physicians can identify certain abnormalities on head CT scans following trauma, their accuracy is not consistent across all types of injuries, and additional training or the presence of a radiologist may be necessary to ensure accurate interpretation.
Instruction: Can the male-to-female ratio of gonorrhoea in one sexually transmissible infection treatment facility be solely used to judge its efficiency? Abstracts: abstract_id: PUBMED:18771641 Can the male-to-female ratio of gonorrhoea in one sexually transmissible infection treatment facility be solely used to judge its efficiency? Background: This facility has for a long time audited its efficacy in contact tracing (case finding) and found results comparable with national guidelines. In addition, we consistently measure control of the disease using three indices. A departure from the norm in one of these (the male-to-female ratio) prompted us to explore whether local case finding, and therefore control, was lacking, resulting in the identification of a statistical anomaly. We have learnt a lesson, which may be of use to others who critically evaluate their work. Methods: Review of statutory clinic quarterly returns and manually-held contact tracer data, comparison of representative quarters (Wilcoxon signed-rank test) and detailed inspection of sampled case-to-case contact tracing efficiency. Results: Evidence was found challenging our belief that male-to-female ratios are at face value an inevitably accurate surrogate for case finding or infection control. In our clinic, we identified recording anomalies giving rise to false concerns that case finding was less efficient than usual. Conclusions: Although the heterosexual male:female ratio for gonorrhoea is one readily available proxy measure of disease control and tracing efficiency, its sole use should be interpreted with caution. A time lag across quarters between patient and partner attendance and other recording anomalies may mislead. Ratios should therefore be interpreted in the context of partner notification outcomes, which give a more reliable measure of efficiency. The use of the ratio in critical evaluation of a unit's efficiency should be but one part of a package of measures. abstract_id: PUBMED:17112446 Sexually transmissible infections among illegal female sex workers in Israel. Due to the mobile and clandestine nature of those who enter a country illegally, female sex workers (FSWs) who are working without papers or work permits often have no access to sexual health care. This study reports on the sexually transmissible infection (STI) prevalence among a sample of 43 sex workers working illegally. Brothel workers from republics of the Former Soviet Union (FSU), working in two locales in Israel, were tested for the presence of eight pathogens and the presence of pathology by Pap smear. Of these brothel workers, 48.8% had at least one positive STI result, 14% had two STIs and one woman had three STIs. There were no cases of HIV, gonorrhoea or malignancy detected; high rates of ureaplasma (26.8%) and chlamydia (16.7%) were found. Four cases of hepatitis C (9%) and three cases of hepatitis B (7%) and mycoplasma (7%) were detected. There was no relationship between reported symptoms and the detection of STIs. The level of STIs is high among this population of FSWs and it is imperative to develop more accessible health services for these women. abstract_id: PUBMED:27050292 Is the risk for sexually transmissible infections (STI) lower among women with exclusively female sexual partners compared with women with male partners? A retrospective study based on attendees at a Norwegian STI clinic from 2004 to 2014.
Background: The prevalence of and the risk for sexually transmissible infections (STIs) for women engaging in same-sex sexual behaviour was investigated among women attending an STI clinic. Methods: Data from electronic medical records were reviewed and logistic regression used to estimate the odds ratio (OR) of STIs. Women reporting life-time exclusively female partners (WSW) and women reporting female and male partners (WSWM) were compared with women reporting exclusively male partners (WSM). Outcomes included: Chlamydia trachomatis, Mycoplasma genitalium, Neisseria gonorrhoea, HIV and syphilis. Results: The study population comprised 103564 women (WSW 641, WSWM 12010 and WSM 90913). Overall prevalence of STIs was 8%. Crude OR of STIs for WSW: 0.56 (95% CI 0.39-0.81), for WSWM: 0.99 (95% CI 0.92-1.06) compared with WSM. Multivariate analysis revealed an interaction effect between same-sex sexual behaviour and smoking. Among non-smokers; WSW adjusted OR was 0.41 (95% CI 0.21-0.80), WSWM adjusted OR was 0.91 (95% CI 0.81-1.02) compared with WSM. Among smokers; WSW adjusted OR was 1.03 (95% CI 0.63-1.67) for WSWM adjusted OR was 1.00 (CI 95% 0.93-1.13), compared with WSM. Conclusion: This study, including the largest cohort of women reporting life-time exclusively female partners in an STI study, shows that WSW generally are at lower risk for acquiring STIs than WSM. Smoking WSW, however, had the same risk for acquiring bacterial STIs as WSM and WSWM. Our study suggests that all WSW should receive the same encouragement to test for STIs as WSM. abstract_id: PUBMED:17931536 Prevalence of sexually transmissible infections (STI) among male patients with STI in Denpasar and Makassar, Indonesia: are symptoms of urethritis sufficient to guide syndromic treatment? Background: There are few data on sexually transmissible infections in men in Indonesia. We conducted a prevalence study and compared symptoms alone with clinical signs to manage urethritis. Methods: We recruited symptomatic male patients at public clinics, and used standardised questionnaires, examinations and laboratory tests. Results: We recruited 273 men and the prevalences were Neisseria gonorrhoeae 18.2%, Chlamydia trachomatis 10.1%, and positive syphilis serology 5.2%. Four cases of HIV were detected. Urethral symptoms detected 91.7% of N. gonorrhoeae and/or C. trachomatis, and positive predictive value (PPV) was 41.8%, compared with clinical confirmation (sensitivity 86.1%, PPV 37.6%). Conclusions: Most cases of syphilis were asymptomatic, supporting routine screening for syphilis. Urethral symptoms predicted infection with N. gonorrhoeae/C. trachomatis better than clinical signs. abstract_id: PUBMED:1743710 A survey of female prostitutes at risk of HIV infection and other sexually transmissible diseases. Objective: To determine risk factors for the transmission of human immunodeficiency virus (HIV), including injecting drug use (IDU), sexual behaviour and other sexually transmissible diseases (STDs), in female prostitutes who attended the Sydney Sexual Health (previously STD) Centre. Design: We surveyed by questionnaire 231 (47%) of 491 female prostitutes who visited the Centre over a 19 month period from 1986 to 1988. All were tested for HIV antibody. Main Outcome: All the women were seronegative for HIV but a number of major risk factors for infection were identified. Results: Seventeen of 26 (65%) current injecting drug users had shared needles in the previous 6 months. 
Nineteen per cent of those surveyed had bisexual non-paying partners and 21% had partners who injected drugs. Sixty nine per cent always used condoms for vaginal intercourse with paying clients, but they were rarely used with non-paying partners. Condom use was also rare for anal intercourse with clients and/or partners by those (18%) who practised it. Seventeen per cent used condoms alone for contraception and 48% relied on oral contraceptives. We found a reduction in gonorrhoea, herpes and trichomoniasis when compared with a 1985 study conducted at the same Centre. However, there was an increase in reported abnormal cervical cytology. Conclusion: In spite of behaviour change by some, there are still many women working as prostitutes in Sydney who remain seriously at risk of HIV infection. We recommend more widespread use of barrier methods of contraception, intensified efforts to prevent the sharing of intravenous needles, closer monitoring of the health of prostitutes, and scientific study of their paying and non-paying sexual partners. abstract_id: PUBMED:20152097 Prevalence of other sexually transmissible infections in patients with newly diagnosed anogenital warts in a sexual health clinic. Background: Anogenital warts are a common initial presentation to the Canberra Sexual Health Centre. It is anticipated that the introduction of human papillomavirus vaccination will reduce the incidence of anogenital warts. The present study determines the prevalence of other sexually transmissible infections in patients newly diagnosed with warts who may not have presented for screening without the impetus of a genital lump. Methods: The prevalence of other sexually transmissible infections in new patients presenting to the Canberra Sexual Health Centre diagnosed with anogenital warts was determined from a retrospective clinical audit from 2002 to 2007. Results: A total of 1015 new patients were diagnosed with anogenital warts. Of this total cohort, 53 (5.2%) were found to be co-infected with either chlamydia and/or gonorrhoea. Only 13.2% of co-infected patients reported symptoms other than genital lumps. Of co-infected patients 11.3% reported contact with a partner with chlamydia and/or gonorrhoea. Not all patients were screened for other sexually transmissible infections: 762 (75.1%) were screened for chlamydia and 576 (56.7%) were screened for gonorrhoea. Of those tested, 6.8% of men and 6.9% of women were positive for chlamydia highlighting the importance of offering full sexually transmissible infection screening in those newly diagnosed with anogenital warts. Chlamydia was more common in younger patients who reported a higher number of sexual partners. Conclusions: It is anticipated that human papillomavirus vaccination will lead to a decline in anogenital wart incidence as well as other human papillomavirus associated disease. Although one opportunity for testing for other sexually transmissible infections may be lost in this population, the decrease in anogenital warts will leave clinicians with more time to pursue other screening programs. Education and screening campaigns should continue to focus on the asymptomatic nature of the majority of sexually transmissible infections. 
abstract_id: PUBMED:33624738 Sexually transmissible infections in African-descendant women in maroon communities in Brazil: prevalence and associated factors The study aimed to estimate the prevalence of sexually transmissible infections (STIs) and associated factors in women in quilombola (maroon) communities in Brazil. This was a population-based cross-sectional study of quilombola women from March 2017 to January 2019. A questionnaire was used with sociodemographic, behavioral, and clinical information. A gynecological examination was performed for the collection of uterine cervical cells for oncotic cytology and the detection of Chlamydia trachomatis, Neisseria gonorrhoeae, Trichomonas vaginalis, and human papillomavirus (HPV) via polymerase chain reaction. Rapid tests for HIV and syphilis were performed. The main outcome was defined as infection with one or more sexually transmissible pathogens. The analysis used chi-square test and hierarchical logistic regression. From a total of 380 women, 352 (92.6%) were included in the study. Prevalence of at least one STI was 18.5% (95%CI: 14.76-22.85). The highest prevalence was for HPV, with 11.1%, followed by 6.3% for T. vaginalis and 4.3% for C. trachomatis. There were no cases of N. gonorrhoeae. Prevalence was 0.3% for HIV and 4.3% for syphilis. Cervical-vaginal cytology was altered in 7.7% of the women. Detection of one or more STIs was significantly associated with age 25 to 44 years (OR = 2.33; 95%CI: 1.05-5.18), alcohol consumption (OR = 1.96; 95%CI: 1.06-3.64), altered cervical cytology (OR = 3.96; 95%CI: 1.65-9.48), and bacterial vaginosis (OR = 3.61; 95%CI: 2.01-6.47). Quilombola women showed high prevalence of one or more STIs, emphasizing the importance of organizing prevention strategies targeted to these women. abstract_id: PUBMED:9709043 A controlled trial of nonoxynol 9 film to reduce male-to-female transmission of sexually transmitted diseases. Background: Nonoxynol 9 is a proved spermicide, but whether it is also a microbicide is uncertain. A truly effective vaginal microbicide would reduce the susceptibility of women to sexually transmitted diseases, including infection with the human immunodeficiency virus (HIV). Methods: We enrolled 1292 HIV-negative female sex workers in Cameroon and enrolled them in a double-blind, placebo-controlled study in which the participants were randomly assigned to use either a film containing 70 mg of nonoxynol 9 or a placebo film, inserted into the vagina before intercourse. All of the women were provided with latex condoms and were instructed to have their male sexual partners use them. At monthly follow-up visits, we examined the women with a colposcope for genital lesions, tested endocervical specimens for gonorrhea and chlamydia infection with DNA probes, tested for HIV infection, and treated the women for curable sexually transmitted diseases. Results: The rates of HIV infection (cases per 100 woman-years) were 6.7 in the nonoxynol 9 group and 6.6 in the placebo group (rate ratio, 1.0; 95 percent confidence interval, 0.7 to 1.5). The rates of genital lesions were 42.2 cases per 100 woman-years in the nonoxynol 9 group and 33.5 in the placebo group (rate ratio, 1.3; 95 percent confidence interval, 1.0 to 1.6). The rates of gonorrhea were 33.3 and 31.1 cases per 100 woman-years in the nonoxynol 9 and placebo groups, respectively (rate ratio, 1.1; 95 percent confidence interval, 0.8 to 1.4). 
The corresponding rates of chlamydia infection in the nonoxynol 9 group and the placebo group were 20.6 and 22.2 per 100 woman-years (rate ratio, 0.9; 95 percent confidence interval, 0.7 to 1.3). The women reported that condoms were used during 90 percent of sexual acts. Conclusions: The use of a nonoxynol 9 vaginal film did not reduce the rate of new HIV, gonorrhea, or chlamydia infection in this group of sex workers who used condoms and received treatment for sexually transmitted diseases. abstract_id: PUBMED:23369293 The prevalence of sexually transmissible infections among female sex workers from countries with low and high prevalences in Melbourne. Objectives: To assess the proportion of female sex workers (FSWs) from low-prevalence ('other') and high-prevalence countries (HPCs) for sexually transmissible infections (STIs) in Melbourne. Methods: Retrospective data analysis from the Melbourne Sexual Health Centre database for February 2002-May 2011. Results: Some 1702 HPC FSWs, 2594 other FSWs and 13,644 nonFSW women were included, with 12,891 and 21,611 consultations, respectively, for HPC FSWs and other FSWs. Proportions of HPC FSWs with chlamydia (Chlamydia trachomatis) (4.2% v. 3.3%, P=0.14), gonorrhoea (Neisseria gonorrhoeae) (0.24% v. 0.31%, P=0.66) or trichomonas (0.65 v. 0.46, P=0.42) at first visit were similar to those of other FSWs, but nonFSWs had a higher prevalence of chlamydia and gonorrhoea (P<0.01). On return visits, chlamydia (1.2% v. 0.92%, P<0.05) and gonorrhoea (0.29% v. 0.10%, P<0.05) were more common in HPC FSWs than other FSWs. The presence of any of these STIs among FSWs was predicted by having private nonpaying partners in the last year (adjusted odds ratio (AOR): 1.99 (95% confidence interval (CI): 1.52-2.61)), having private partners from overseas (AOR: 1.69 (95% CI: 1.33-2.16)) and being a HPC FSW (AOR: 1.38 (95% CI: 1.12-1.72)). Injecting drug use was less common in HPC FSWs than in other FSWs (0.82% v. 16.54%, P<0.0001). Conclusions: STI prevalence was low among FSWs from countries with high and low background STI. Among FSWs, having private sex partners and private partners from overseas were the primary predictors for STIs. abstract_id: PUBMED:32564780 Outreach services to brothels facilitate sexually transmissible infection testing for Asian sex workers in regional Australia. Outreach services providing sexually transmissible infection (STI) testing on-site in brothels were introduced in 2012 by Pacific Clinic Newcastle, Australia. Asian female sex workers (FSW) were more likely than non-culturally and linguistically diverse (CALD) FSW to access testing via outreach services than in clinics (OR 2.38, 95% CI 1.82-3.11, P < 0.0001) and to test positive for chlamydia or gonorrhoea (OR 2.58, 95% CI 1.58-4.20, P = 0.0001). Outreach services offering testing on-site in brothels are recommended to facilitate access for Asian FSW in similar settings. Answer: No, the male-to-female ratio of gonorrhoea in one sexually transmissible infection treatment facility cannot be solely used to judge its efficiency. The study conducted at a particular clinic found that relying solely on the heterosexual male-to-female ratio for gonorrhoea as a proxy measure of disease control and tracing efficiency can be misleading. The clinic identified recording anomalies that gave rise to false concerns about the efficiency of case finding.
It was concluded that while the male-to-female ratio is a readily available measure, it should be interpreted with caution and not used in isolation. Ratios should be interpreted in the context of partner notification outcomes, which provide a more reliable measure of efficiency. The use of the ratio should be part of a package of measures rather than the sole indicator of a unit's efficiency (PUBMED:18771641).
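As a purely hypothetical arithmetic sketch of the time-lag problem described in PUBMED:18771641 (the counts below are invented for illustration and do not come from the clinic's quarterly returns), the following Python snippet shows how male index cases diagnosed late in one quarter, with their female partners attending in the next quarter, can skew a single quarter's male-to-female ratio even when partner notification is working well.

# Illustrative only: hypothetical quarterly counts showing how a lag in
# partner attendance across quarters can distort the male-to-female ratio.

q1 = {"male": 20, "female": 8}   # many male index cases diagnosed late in Q1
q2 = {"male": 12, "female": 18}  # their female partners traced and seen in Q2

for label, q in (("Q1", q1), ("Q2", q2)):
    print(f"{label}: M:F ratio = {q['male'] / q['female']:.2f}")

# Pooling adjacent quarters smooths the artefact introduced by the lag
males = q1["male"] + q2["male"]
females = q1["female"] + q2["female"]
print(f"Q1+Q2 combined: M:F ratio = {males / females:.2f}")

Looked at quarter by quarter, Q1 appears to show poor case finding among women, yet the pooled ratio is unremarkable; this is the kind of recording and timing artefact the abstract warns against.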
Instruction: Do depression treatments reduce suicidal ideation? Abstracts: abstract_id: PUBMED:24953481 Do depression treatments reduce suicidal ideation? The effects of CBT, IPT, pharmacotherapy, and placebo on suicidality. Background: Many well-researched treatments for depression exist. However, there is not yet enough evidence on whether these therapies, designed for the treatment of depression, are also effective for reducing suicidal ideation. This research provides valuable information for researchers, clinicians, and suicide prevention policy makers. Methods: Analysis was conducted on the Treatment for Depression Research Collaborative (TDCRP) sample, which included CBT, IPT, medication, and placebo treatment groups. Participants were included in the analysis if they reported suicidal ideation on the HRSD or BDI (score of ≥1). Results: Multivariate linear regression indicated that both IPT (b=.41, p<.05) and medication (b=.47, p<.05) yielded a significant reduction in suicide symptoms compared to placebo on the HRSD. Multivariate linear regression indicated that after adjustment for change in depression these treatment effects were no longer significant. Moderate Cohen's d effect sizes from baseline to post-test differences in suicide score by treatment group are reported. Limitations: These analyses were completed on a single suicide item from each of the measures. Moreover, the TDCRP excluded participants with moderate to severe suicidal ideation. Conclusions: This study demonstrates the specific effectiveness of IPT and medications in reducing suicidal ideation (relative to placebo), albeit largely as a consequence of their more general effects on depression. This adds to the growing body of evidence that depression treatments, specifically IPT and medication, can also reduce suicidal ideation and serves to further our understanding of the complex relationship between depression and suicide. abstract_id: PUBMED:34633054 Mental depression: Relation to different disease status, newer treatments and its association with COVID-19 pandemic (Review). The present study aimed to review major depression, including its types, epidemiology, association with different diseases status and treatments, as well as its correlation with the current COVID-19 pandemic. Mental depression is a common disorder that affects most individuals at one time or another. During depression, there are changes in mood and behavior, accompanied by feelings of defeat, hopelessness, or even suicidal thoughts. Depression has a direct or indirect relation with a number of other diseases including Alzheimer's disease, stroke, epilepsy, diabetes, cardiovascular disease and cancer. In addition, antidepressant drugs have several side effects including sedation, increased weight, indigestion, sexual dysfunction, or a decrease in blood pressure. Stopping medication may cause a relapse of the symptoms of depression and pose a risk of attempted suicide. The pandemic of COVID-19 has affected the mental health of individuals, including patients, individuals contacting patients and medical staff, with a number of mental disorders that may adversely affect the immune ability of their bodies. Some of the drugs currently included in the protocols for treating COVID-19 may negatively affect the mental health of patients. Evidence accumulated over the years indicates that deficiencies of serotonin (5HT) and norepinephrine (NE) in the brain can lead to mental depression.
Drugs that increase levels of NE and 5HT are commonly used in the treatment of depression. The common causes of mood disorders, including mania and bipolar disorder, are not clearly understood. It is assumed that hyperactivity in specific parts of the brain and excessive activity of neurotransmitters may be involved. Early diagnosis and developing new treatment strategies are essential for the prevention of the severe consequences of depression. In addition, extensive research should be directed towards the investigation of the mental health disturbances occurring during and/or after COVID-19 infection. This may lead to the incorporation of a suitable antidepressant into the current treatment protocols. abstract_id: PUBMED:26402009 Depression and Suicidal Ideation During Two Psychosocial Treatments in Older Adults with Major Depression and Dementia. Background: Depression is prevalent in dementia and contributes to poor outcomes for patients and their families. Antidepressants have limited efficacy in older adults with major depression and dementia, and psychosocial interventions are under-investigated. Objective: To examine the course, predictors and moderators of depression and suicidal ideation during 12 weeks of home-delivered Problem Adaptation Therapy (PATH) versus Supportive Therapy for Cognitively Impaired Older Adults (ST-CI) in 39 older adults with major depression and dementia. Methods: Thirty-nine older adults with major depression, mild or moderate dementia, and disability participated in a randomized controlled trial that compared the efficacy of PATH versus ST-CI. Depression and suicidal ideation were assessed with Cornell Scale for Depression in Dementia Total Score and Suicide Item. Results: PATH participants had significantly greater reduction in depression than ST-CI participants over 12 weeks of treatment. PATH participants with high social support had the greatest reduction in depression. Both treatments had comparable reduction in suicidal ideation. Conclusion: PATH is more effective in reducing depression in older adults with major depression and dementia compared to ST-CI. These results are clinically significant as antidepressants have limited efficacy in this population. Home-delivered psychosocial treatments may reduce suicidal ideation in this population. abstract_id: PUBMED:32217564 Unwanted effects of treatments for depression in children and adolescents: a mapping of systematic reviews. Objectives: To describe the results of a mapping review exploring the coverage of unwanted treatment effects in systematic reviews of the effects of various treatments for moderate to severe depression in children and adolescents. Setting: Any context or service providing treatment for depression, including interventions delivered in local communities and school settings, as well as services provided in primary or specialist care. Participants: Children and young people with moderate to severe depression (<18 years). Interventions: Systematic reviews published in 2011 or later comparing the effects of any treatment for children and adolescents with moderate to severe depression meeting the Database of Abstracts of Reviews of Effects criteria. The systematic search was performed in April 2018 and updated in December 2018. Primary Outcomes: Any unwanted effects of treatments as defined in the systematic review. Results: We included 10 systematic reviews covering 19 treatment comparisons. Unwanted effects were assessed for seven of 19.
Three comparisons were evaluations of pharmaceutical interventions or combination therapy, reporting effects on 'suicidal ideation' and 'suicide risk'. Two included therapy, reporting 'self-harm' and 'suicidal ideation', and two comparisons included transcranial magnetic stimulation and electroconvulsive treatment. Unwanted effects evaluated for these treatments were mostly symptoms of physical discomfort such as headache or cramps. For the remaining treatment comparisons evaluating psychological and psychosocial therapies, unwanted effects were not evaluated or found. A limitation of overviews of systematic reviews such as this mapping study is that data extraction is done based on the reporting of results by the review authors and not on the primary studies. Conclusion: The unwanted effects of widely used treatments for children and young people with depression are unknown. This is a major barrier for evidence-informed decision making about treatment choices for children and young people. We suggest that unwanted effects should be a reporting standard in all protocols describing evaluations of treatments, including primary studies as well as systematic reviews. abstract_id: PUBMED:33755280 Cognitive therapy and interpersonal psychotherapy reduce suicidal ideation independent from their effect on depression. Background: Clinical guidelines suggest that psychological interventions specifically aimed at reducing suicidality may be beneficial. We examined the impact of two depression treatments, cognitive therapy (CT) and interpersonal psychotherapy (IPT), on suicidal ideation (SI) and explored the temporal associations between depression and SI over the course of therapy. Methods: Ninety-one adult (18-65) depressed outpatients from a large randomized controlled trial who were treated with CT (n = 37) and IPT (n = 54) and scored ≥1 on the Beck Depression Inventory II (BDI-II) suicide item were included. Linear (two-level) mixed effects models were used to evaluate the impact of depression treatments on SI. Mixed-effects time-lagged models were applied to examine temporal relations between the change in depressive symptoms and the change in SI. Results: SI decreased significantly during treatment and there were no differential effects between the two intervention groups (B = -0.007, p = .35). Depressive symptoms at the previous session did not predict higher levels of SI at the current session (B = 0.016, p = .16). However, SI measured at the previous session significantly predicted depressive symptoms at the current session (B = 2.06, p < .001). Conclusions: Both depression treatments seemed to have a direct association with SI. The temporal association between SI and depression was unidirectional with SI predicting future depressive symptoms during treatment. Our findings suggest that it may be most beneficial to treat SI first.
126 participants (115 males and 11 females) were randomly selected from among patients in three therapeutic groups (42 from each of the methadone, buprenorphine, and opium tincture groups). The main data collection tools were the Beck Scale for Suicide Ideation (BSSI), Cognitive Flexibility Inventory (CFI), and Beck Depression Scale (BDI-II). Data analysis was done through logistic regression models. Results: Correlation analysis between depression scores, suicidal ideation, and cognitive flexibility showed a significant correlation between each of them. Adjusting for the type of treatment, the increase in cognitive flexibility was associated with a decreasing chance of depression (odds ratio [OR] = 0.87; 95% CI [0.82, 0.92]), and the use of buprenorphine (OR = 15.1) and opium tincture (OR = 9.3), compared to methadone, were associated with a depression increase. Yet, multivariate analysis did not show an independent and significant association between cognitive flexibility and the risk of suicide. Conclusion: Based on the results, patients receiving maintenance treatments are in different conditions in terms of depression and suicide, and psychological flexibility is correlated with depression and suicidal thinking and behavior in these patients. This suggests that these patients seem to benefit from cognitive training, at least in reducing their depression. abstract_id: PUBMED:25856968 Psoriasis and depression. Psoriasis is a common inflammatory disease affecting 2%-5% of the population of industrialized countries. Although the association between psychiatric pathologies and dermatological conditions is well known, the dermatologist often fails to identify the psychological distress and, when it is identified, often does not address this suffering. The biological links between psoriasis and depression are now well-established. The impact on quality of life (QoL) has been studied for several years. In this vast domain of QoL, the first problem revealed by the patients is the feeling of stigmatization. In psoriasis, psychiatric comorbidity is estimated at approximately 30%. The psychiatric disorders most commonly associated with dermatological conditions in general are depression, anxiety and suicidal ideation. The prevalence of depression in patients with psoriasis is estimated between 10% and 62% according to different studies. The patient's psychological distress is not correlated to the intensity of the clinical symptoms and the treatment does not modify either this distress, the coping mechanism, or the patient's opinion about his skin condition. Furthermore, it is necessary to know that dermatological diseases affecting patients during childhood or adolescence can have a significant effect on the formation of their personality. Personality, in turn, influences psychological morbidity. Therefore, it is important to take care of young people. To help physicians recognize these difficulties, we propose a review of the literature and a method for better management of the psychological suffering often experienced by patients with psoriasis. abstract_id: PUBMED:38302067 Treatment-resistant depression patients with baseline suicidal ideation required more treatments to achieve therapeutic response with ketamine/esketamine. Background: There is an urgent need to identify interventions to reduce suicidality.
We investigated the antisuicidal effects of intravenous (IV) ketamine and intranasal (IN) esketamine among patients with treatment-resistant depression (TRD) in a historical cohort study. Methods: The Quick Inventory of Depressive Symptomatology self-report (QIDS-SR) question 12 was used to measure suicidal ideation (SI). Cox proportional hazards models were used to evaluate associations between the number of treatments to response and baseline SI (yes, Q12 > 0 versus no, Q12 = 0), adjusting for covariates and modified baseline QIDS-SR score. We also compared the number of treatments to a 50 % reduction in SI score between IV and IN treatment. Results: Fifty-two adults (62.5 % female, median age 49.1 years) received IV ketamine (71 %, n = 37) or IN esketamine (29 %, n = 15). Eighty-one percent of patients reported SI at baseline. Among those with baseline SI, 60 % had improved SI scores while 38 % did not change, and among those with no SI, 80 % did not change. After adjusting for covariates, the hazard ratios (HR) of response were significantly lower among those with baseline SI (HR = 0.36, 95 % CI, 0.14-0.92, p = 0.03). The number of treatments to achieve a 50 % reduction in SI score did not depend on group (IN esketamine vs. IV ketamine HR = 0.74 [95 % CI, 0.27-2.05]; p = 0.57). Limitations: Small sample size and lack of a placebo group. Conclusions: This study suggests that patients with baseline suicidal ideation require more treatments to achieve a response with ketamine or esketamine. The antisuicidal response seemed similar between IV ketamine and IN esketamine. abstract_id: PUBMED:31253233 An Evaluation of the Effectiveness of Evidence-Based Psychotherapies for Depression to Reduce Suicidal Ideation among Male and Female Veterans. Background: Although most suicide-related deaths occur among male veterans, women veterans are dying by suicide in increasing numbers. Identifying and increasing access to effective treatments is imperative for Department of Veterans Affairs suicide prevention efforts. We examined the impact of evidence-based psychotherapies for depression on suicidal ideation and the role of gender and treatment type in patients' responses to treatment. Methods: Clinicians receiving case consultation in interpersonal psychotherapy, cognitive-behavioral therapy for depression, and acceptance and commitment therapy for depression submitted data on depressive symptoms and suicidal ideation while treating veterans with depression. Results: Suicidal ideation was reduced across time in all three treatments. A main effect for wave was associated with statistically significant decreases in severity of suicidal ideation, χ2 (2) = 224.01, p = .0001, and a subsequent test of the Gender × Wave interaction was associated with differentially larger decreases in ideation among women veterans, χ2 (2) = 9.26, p = .001. Within gender-stratified subsamples, a statistically significant Treatment × Time interaction was found for male veterans, χ2 (4) = 16.82, p = .002, with levels of ideation significantly decreased at waves 2 and 3 in interpersonal psychotherapy and cognitive-behavioral therapy for depression relative to acceptance and commitment therapy for depression; the Treatment × Wave interaction within the female subsample was not statistically significant, χ2 (4) = 3.41, p = .492.
Conclusions: This analysis demonstrates the efficacy of each of the three tested evidence-based psychotherapies for depression as a means of decreasing suicidal ideation, especially in women veterans. For male veterans, decreases in suicidal ideation were significantly greater in interpersonal psychotherapy and cognitive-behavioral therapy for depression relative to acceptance and commitment therapy for depression. abstract_id: PUBMED:31748851 Preschool Depression: a Diagnostic Reality. Purpose Of Review: We review findings related to predictors, correlates, outcomes, and treatment of preschool depression that have been published in the last 3 years. Recent Findings: Preschool depression displays a chronic course through late adolescence and is associated with temperamental and personality traits, poorer physical health, and negative parenting practices. Preschool depression predicts deficits into adolescence, including social difficulties and blunted neural response to rewards. Depressed preschoolers can experience suicidal ideation and behaviors and display an accurate understanding of the finality of death. A treatment for preschool depression has now been validated that uses the parent-child relationship to enhance emotion development and reduce depressive symptoms. Preschool depression is homotypic with depression that occurs later in life. Future work elucidating mechanisms through which preschool depression develops and informs the sub-groups for which particular treatments may be most effective will have considerable implications for prevention and early intervention. Answer: Yes, depression treatments have been found to reduce suicidal ideation. Interpersonal psychotherapy (IPT) and medication treatments have shown a significant reduction in suicide symptoms compared to placebo, as indicated by multivariate linear regression in a study analyzing the Treatment for Depression Research Collaborative (TDCRP) sample (PUBMED:24953481). Additionally, cognitive therapy (CT) and interpersonal psychotherapy (IPT) have been found to reduce suicidal ideation independent of their effect on depression in adult outpatients (PUBMED:33755280). Furthermore, home-delivered Problem Adaptation Therapy (PATH) has been more effective in reducing depression and suicidal ideation in older adults with major depression and dementia compared to Supportive Therapy for Cognitively Impaired Older Adults (ST-CI) (PUBMED:26402009). Evidence-based psychotherapies for depression, such as interpersonal psychotherapy, cognitive-behavioral therapy for depression, and acceptance and commitment therapy for depression, have also been effective in reducing suicidal ideation among veterans with depression, with a particularly notable decrease in ideation among women veterans (PUBMED:31253233). However, it is important to note that the reduction in suicidal ideation is often a consequence of the more general effects on depression, and the specific effectiveness of these treatments on suicidal ideation may not be significant after adjusting for change in depression (PUBMED:24953481). Additionally, while treatments like ketamine/esketamine have been used for patients with treatment-resistant depression, those with baseline suicidal ideation required more treatments to achieve a therapeutic response (PUBMED:38302067). Overall, the evidence suggests that various forms of depression treatment, including psychotherapies and pharmacotherapies, can contribute to a reduction in suicidal ideation.
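As a minimal sketch of how the "moderate Cohen's d effect sizes" mentioned in PUBMED:24953481 are typically computed: the scores below are hypothetical, and the pooled-standard-deviation formulation shown is only one common variant; the original analysis may have used a different one.

# Illustrative only: hypothetical single-item suicide scores, not study data.
from statistics import mean, stdev

def cohens_d(baseline, post):
    # Pooled-SD version of Cohen's d comparing two sets of scores
    n1, n2 = len(baseline), len(post)
    s1, s2 = stdev(baseline), stdev(post)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(baseline) - mean(post)) / pooled_sd

baseline_suicide_item = [2, 1, 2, 3, 1, 2, 2, 1]   # hypothetical baseline item scores
post_suicide_item     = [1, 1, 2, 2, 1, 1, 2, 1]   # hypothetical post-test item scores
print(f"Cohen's d = {cohens_d(baseline_suicide_item, post_suicide_item):.2f}")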
Instruction: Risk of cardiovascular events and mortality among a population-based cohort of immigrants and long-term residents with diabetes: Are all immigrants healthier and if so, for how long? Abstracts: abstract_id: PUBMED:26093873 Risk of cardiovascular events and mortality among a population-based cohort of immigrants and long-term residents with diabetes: Are all immigrants healthier and if so, for how long? Background: Cardiovascular events are responsible for half of all deaths among individuals with diabetes. Immigrants to Western countries may experience an acceleration of cardiovascular risk in the first 10 years of arrival because of a sedentary lifestyle, poor diet, or barriers to accessing care, leading to higher levels of obesity and diabetes. Objectives: To compare the risk of cardiovascular events and mortality between immigrants to Canada and long-term residents with diabetes and to assess whether immigrants experience acceleration in risk after arrival. Methods: We conducted a population-based retrospective cohort study using linked health and immigration data from Ontario, Canada, of 87,707 immigrants who immigrated to Canada between 1985 and 2005 matched to 87,707 long-term residents with diabetes (age ≥20 years). Individuals were followed up from April 1, 2005, until February 29, 2012, for the primary composite outcome of a cardiovascular event (acute myocardial infarction, unstable angina, congestive heart failure, transient ischemic attack, stroke) or all-cause mortality. Results: There was a lower adjusted risk of cardiovascular events or mortality among immigrants (adjusted hazard ratio [HR] 0.76, 95% CI 0.74-0.78) after accounting for differences in baseline age, gender, socioeconomic status, neighborhood, and health care utilization, which persisted beyond 10 years from immigration. However, this healthy immigrant advantage was not found among more recent refugees (HR 0.93, 95% CI 0.81-1.08), immigrants with no previous education (HR 1.08, 95% CI 0.84-1.40), and those who were unmarried (HR 0.80, 95% CI 0.62-1.03). Conclusion: Immigrants with diabetes are at lower risk for cardiovascular events and mortality compared with long-term residents, an effect that persists more than 10 years after arrival. Not all immigrants demonstrate this health advantage. abstract_id: PUBMED:25028526 Impact of language barriers on complications and mortality among immigrants with diabetes: a population-based cohort study. Objective: Our objective was to examine the effect of language barriers on the risk of acute and chronic complications of diabetes and on mortality among immigrants. Research Design And Methods: Linked health and immigration databases were used to identify 87,707 adults with diabetes who immigrated to Ontario, Canada, between 1985 and 2005. These individuals were included in our cohort and stratified by language ability at the time of their immigration application. Primary end points included: one or more emergency department visit or hospitalization for 1) hypo- or hyperglycemia, skin and soft tissue infection, or foot ulcer and 2) a cardiovascular event or death between April 1, 2005, and February 29, 2012. Results: Our cohort was followed up for a median of 6.9 person-years. Immigrants with language barriers were older (mean age, 49 ± 15 vs. 42 ± 13 years; P < 0.001), more likely to have immigrated for family reunification (66% vs. 38%, P < 0.001), had less education (secondary school or less and no education, 82% vs.
53%; P < 0.001), and a higher use of health care (mean visits, 8.6 ± 12.1 vs. 7.8 ± 11.2; P < 0.001). Immigrants with language barriers were not found to have higher adjusted rates of diabetes complications (acute complications: hazard ratio [HR] 0.99, 95% CI 0.93-1.05; cardiovascular events or death: HR 0.95, 95% CI 0.91-0.99). Significant predictors included older age, being unmarried, living in a rural neighborhood, and having less education. Immigrants who were older (≥65 years) and who had arrived through family reunification had a lower risk of cardiovascular events or death (HR 0.88, 95% CI 0.81-0.96). Conclusions: In a heterogeneous immigrant population with universal insurance, language barriers were not found to increase the risk of diabetes complications. However, their effect may vary based on age at time of landing, education level, marital status, and neighborhood of settlement. abstract_id: PUBMED:26979831 Long-Term Outcomes of Short-Term Statin Use in Healthy Adults: A Retrospective Cohort Study. Introduction: Data suggest that the beneficial cardiovascular effects of statins are maximized after the first year of statin use; yet, the timeline of statin-associated adverse events is not well delineated. Objective: To examine the associations of short-term statin use (≤1 year) with short- and long-term adverse events and beneficial cardiovascular outcomes in a 'healthy' cohort. Participants And Methods: A cohort study of a healthy Tricare population (fiscal year [FY] 2002 through FY 2011) who have no cardiovascular disease, major comorbidities requiring medications, or functional limitations. Statin users used statins for 90-365 days during FY 2005 as their only prescription medication. Nonusers had medical encounters but did not receive prescription medications during FY 2005, and did not receive any statins throughout the study period from FY 2002 to FY 2011. Outcomes were the occurrence of major acute cardiovascular events, diabetes mellitus and its complications, kidney diseases, musculoskeletal diseases, obesity, cataracts, malignancy, and death. Results: We matched 1525 statin users to 1525 nonusers. During the follow-up period (FY 2006 to FY 2011), statin users had significantly higher odds of developing diabetes and diabetic complications that persisted throughout follow-up (odds ratio [OR] 1.93, 95 % confidence interval [CI] 1.55-2.41 and OR 2.15, 95 % CI 1.20-3.86, respectively). Short-term statin use was not associated with decreased odds of major acute cardiovascular events (OR 1.17, 95 % CI 0.72-1.92). There were no differences in risks of kidney diseases, musculoskeletal diseases, or malignancy. Conclusions: Short-term statin use for primary prevention in this healthy cohort was associated with an increased risk of long-term diabetes and diabetic complications without cardiovascular benefits. Further study using pragmatic studies and prospective observational studies appropriately equipped to eliminate unidentified confounders is urgently needed. abstract_id: PUBMED:30533559 Red blood cell distribution width as a predictor of long-term mortality in patients with carbon monoxide poisoning. Objectives: Elevated red blood cell distribution width (RDW) is an independent prognostic factor for cardiovascular events that are major causes of mortality in patients with carbon monoxide (CO) poisoning. Due to the limited number of studies, we aimed to investigate the relationship between RDW levels and long-term mortality for these patients.
Method: This retrospective study included patients with CO poisoning, who presented to the emergency department. Baseline characteristics, laboratory results and survival status were retrieved from patients' hospital records. The severity of poisoning was determined according to COHb level and/or clinical signs and symptoms. Results: The study included 571 patients (median age was 37.0 years) and less than half of these patients were male (n = 206, 36.1%). There was mild-moderate CO poisoning in 389 (68.1%) patients and severe poisoning in 182 (31.9%). At a median follow-up of 6.2 years, there were 33 deaths (5.8%). Univariate Cox regression analysis demonstrated that age, gender, presence of hypertension or diabetes mellitus, levels of hemoglobin, RDW, creatinine and alanine-aminotransferase, and white-blood-cell count were potential covariates of long-term all-cause mortality. In the multivariate analysis, the median age and RDW level remained independent predictors of mortality (age, odds ratio [OR]: 1.070, 95% confidence interval [CI]: 1.030-1.110, p = 0.001; RDW, OR: 1.221, 95% CI: 1.042-1.431, p = 0.013). Patients with higher RDW levels had a significantly worse prognosis in terms of mortality than those with lower RDW levels (log-rank test, p = 0.003). Conclusion: This study demonstrated that RDW level is an independent predictor of long-term mortality in patients with CO poisoning. abstract_id: PUBMED:35717432 Effect of bariatric surgery on long-term cardiovascular outcomes: a systematic review and meta-analysis of population-based cohort studies. This meta-analysis aimed to compare the effects of bariatric surgery and nonsurgery on cardiovascular outcomes in patients with obesity. A systematic literature search of the Medline (via PubMed), Embase, and Cochrane Central Register of Controlled Trials databases was performed until August 18th, 2021. Population-based cohort studies comparing long-term cardiovascular outcomes for patients with obesity undergoing bariatric surgery or not were included. A meta-analysis of relative risks (RRs) was performed for all outcomes. We conducted subgroup analyses and meta-regression to explore sources of heterogeneity and the stability of the results. Twenty-one population-based cohort studies involving 2,857,016 participants were identified. The major adverse cardiovascular event (MACE) RR in the bariatric surgery group was .53 (95% confidence interval [CI] = .45-.62, P < .001) relative to the nonsurgical group. Relative to the nonsurgical group, the risk of myocardial infarction (MI) (RR = .40, 95% CI = .30-.52, P < .001), stroke (RR = .60, 95% CI = .46-.79, P < .001), cardiovascular death (RR = .43, 95% CI = .35-.54, P < .001), and all-cause death (RR = .44, 95% CI = .32-.59, P < .001) was significantly reduced for patients who underwent bariatric surgery. In subgroup analyses, as the proportion of patients with diabetes mellitus increased, lower RRs for MACE, MI, and stroke were observed in the surgery group relative to the nonsurgical group. The decreased risk of MACE was also observed in the subgroup with median follow-up duration ≥5 years. Bariatric surgery improves cardiovascular outcomes in patients with obesity, especially providing long-term benefits, and this effect is more pronounced in patients with comorbid diabetes. abstract_id: PUBMED:32440788 Long-term prognosis after kidney donation: a propensity score matched comparison of living donors and non-donors from two population cohorts.
Background: Live donor nephrectomy is a safe procedure. However, long-term donor prognosis is debated, necessitating high-quality studies. Methods: A follow-up study of 761 living kidney donors was conducted, who visited the outpatient clinic and were propensity score matched and compared to 1522 non-donors from population-based cohort studies. Primary outcome was kidney function. Secondary outcomes were BMI (kg/m2), incidences of hypertension, diabetes, cardiovascular events, cardiovascular and overall mortality, and quality of life. Results: Median follow-up after donation was 8.0 years. Donors had an increase in serum creatinine of 26 μmol/l (95% CI 24-28), a decrease in eGFR of 27 ml/min/1.73 m2 (95% CI - 29 to - 26), and an eGFR decline of 32% (95% CI 30-33) as compared to non-donors. There was no difference in outcomes between the groups for ESRD, microalbuminuria, BMI, incidence of diabetes or cardiovascular events, and mortality. A lower risk of new-onset hypertension (OR 0.45, 95% CI 0.33-0.62) was found among donors. The EQ-5D health-related scores were higher among donors, whereas the SF-12 physical and mental component scores were lower. Conclusion: Loss of kidney mass after live donation does not translate into negative long-term outcomes in terms of morbidity and mortality compared to non-donors. Trial Registration: Dutch Trial Register NTR3795. abstract_id: PUBMED:24905415 Recurrent pregnancy loss: a risk factor for long-term maternal atherosclerotic morbidity? Objective: We sought to investigate whether patients with a history of recurrent pregnancy loss (RPL) have an increased risk for future maternal atherosclerotic morbidity. Study Design: A population-based study compared the incidence of long-term atherosclerotic morbidity (renal and cardiovascular) in a cohort of women with and without a diagnosis of RPL. Patients had a mean follow-up duration of more than a decade. Women with known atherosclerotic disease were excluded from the study. Cardiovascular morbidity was divided into 4 categories according to severity and type including simple and complex cardiovascular events and invasive and noninvasive cardiac procedures. Kaplan-Meier survival curves were used to estimate cumulative incidence of cardiovascular and renal hospitalizations. Cox proportional hazards models were used to estimate the adjusted hazard ratios for cardiovascular and renal morbidity. Results: During the study period 99,285 patients were included; of these 6.7% (n = 6690) had a history of RPL. Patients with RPL had higher rates of renal and cardiovascular morbidity including cardiac invasive and noninvasive diagnostic procedures, simple as well as complex cardiovascular events, and hospitalizations due to cardiovascular causes. Using Kaplan-Meier survival curves, patients with a previous diagnosis of RPL had a significantly higher cumulative incidence of cardiovascular but not renal hospitalizations. Using a Cox proportional hazards model, adjusted for confounders such as preeclampsia, diabetes mellitus, obesity, and smoking, a history of RPL remained independently associated with cardiovascular hospitalizations (adjusted hazard ratio, 1.6; 95% confidence interval, 1.4-1.8; P = .001). Conclusion: RPL is an independent risk factor for long-term maternal cardiovascular complications. abstract_id: PUBMED:23800639 An association between preterm delivery and long-term maternal cardiovascular morbidity. 
Objective: The purpose of this study was to investigate whether a history of preterm delivery (PTD) poses a risk for subsequent maternal long-term cardiovascular morbidity. Study Design: A population-based study compared the incidence of cardiovascular morbidity in a cohort of women who delivered preterm (<37 weeks' gestation) and those who gave birth at term at the same period. Deliveries occurred during the years 1988-1999 with follow up until 2010. Kaplan-Meier survival curves were used to estimate cumulative incidence of cardiovascular hospitalizations. Cox proportional hazards models were used to estimate the adjusted hazard ratios for cardiovascular hospitalizations. Results: During the study period 47,908 women met the inclusion criteria; 12.5% of the patients (n = 5992) delivered preterm. During a follow-up period of >10 years, patients with PTD had higher rates of simple and complex cardiovascular events and higher rates of total cardiovascular-related hospitalizations. A linear association was found between the number of previous PTD and future risk for cardiovascular hospitalizations (5.5% for ≥2 PTDs; 5.0% for 1 PTD vs 3.5% in the comparison group; P < .001). The association remained significant for spontaneous vs induced PTD and for early (<34 weeks) and late (34 weeks to 36 weeks 6 days' gestation) PTD. In a Cox proportional hazards model that adjusted for pregnancy confounders such as labor induction, diabetes mellitus, preeclampsia, and obesity, PTD was associated independently with cardiovascular hospitalizations (adjusted hazard ratio, 1.4; 95% confidence interval, 1.2-1.6). Conclusion: PTD is an independent risk factor for long-term cardiovascular morbidity in a follow-up period of more than a decade. abstract_id: PUBMED:28935461 Short and long-term outcome in very old patients with ST-elevation myocardial infarction after primary percutaneous coronary intervention. Background: Although octogenarians constitute a fast-growing portion of cardiovascular patients, few data are available on the outcome of patients aged ≥85 years with ST-Elevation Myocardial Infarction (STEMI). Methods And Results: We analyzed 126 consecutive patients aged ≥85 years (age 88±2 years) with STEMI, undergoing primary percutaneous coronary intervention (pPCI) within 12 hours from symptoms onset. Long-term follow-up (median 898 days) was obtained for the 102 patients surviving the index-hospitalization. In-hospital mortality rate was 19%. Nonagenarians, diabetes mellitus, severe left ventricular systolic dysfunction and intra-aortic balloon pumping were significantly and independently correlated to in-hospital mortality at the multivariate analysis. A low rate of complications was detected. Among patients surviving the index hospitalization, 32 (31%) patients died during follow-up. 55 patients (54%) had re-hospitalization due to cardiovascular causes. The univariate analysis identified chronic renal failure, Killip class ≥ 3, TIMI Risk Score >8 and very high risk of bleeding as predictors of long-term overall mortality. At the multivariate analysis only chronic renal failure and very high risk of bleeding were significantly and independently correlated to long-term all-cause mortality. Renal function and anterior myocardial infarction were significantly and independently associated with the combined end-point of cardiac mortality and re-hospitalization due to cardiovascular disease at the multivariate analysis. Conclusions: PPCI in patients ≥85 years old is relatively safe.
In this population, pPCI is associated with a good long-term survival, although still worse than in younger patients, despite a considerable incidence of re-hospitalization due to cardiovascular events. abstract_id: PUBMED:30069815 Cardiovascular Risk Factors and Events in Iranian Immigrants Versus Other Immigrants from the Middle East. We report the incidence of cardiovascular events, mortality and risk factors (diabetes and hypertension) in immigrants from Iran and in non-Iranian immigrants from the Middle East. Using population-based healthcare administrative data, all immigrants from the Middle East and the non-immigrant population of Ontario, Canada as of July 2012 were identified. Baseline differences in diabetes and hypertension prevalence were compared. Outcomes were ascertained through March 2016 and included the incidence of acute coronary events, ischemic heart disease mortality, all-cause mortality, and incident diabetes and hypertension. The study population included 55,539 Iranian immigrants, 106,926 non-Iranian Middle Eastern immigrants, and 6,967,132 non-immigrants. Non-immigrants had the highest crude baseline prevalence of diabetes and hypertension. Compared to non-immigrants, Iranian and non-Iranian Middle Eastern immigrants had significantly lower incidence rate of acute coronary events, ischemic heart disease mortality and all-cause mortality. Both Iranian and non-Iranian Middle Eastern immigrants had better cardiovascular health status than non-immigrants. Answer: The study conducted on a population-based cohort of immigrants and long-term residents with diabetes in Ontario, Canada, found that immigrants with diabetes had a lower adjusted risk of cardiovascular events or mortality compared to long-term residents (adjusted hazard ratio [HR] 0.76, 95% CI 0.74-0.78), and this advantage persisted beyond 10 years after immigration (PUBMED:26093873). However, this "healthy immigrant advantage" was not consistent across all immigrant groups. More recent refugees, immigrants with no previous education, and those who were unmarried did not demonstrate this health advantage (PUBMED:26093873). The study also highlighted that language barriers did not increase the risk of diabetes complications among immigrants, suggesting that universal insurance may mitigate the impact of language barriers on health outcomes. Factors such as age at the time of landing, education level, marital status, and neighborhood of settlement were significant predictors of health outcomes (PUBMED:25028526). In summary, while immigrants with diabetes generally have a lower risk of cardiovascular events and mortality compared to long-term residents, this advantage is not uniform across all immigrant subgroups. The persistence of the health advantage beyond 10 years suggests that factors other than acculturation may play a role in maintaining this advantage. However, the lack of advantage among certain groups of immigrants indicates that socioeconomic and marital status, as well as education level, may influence the long-term health outcomes of immigrants with diabetes.
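As a minimal sketch (not the authors' code) of how an adjusted hazard ratio such as the 0.76 reported in PUBMED:26093873 is typically estimated, the snippet below fits a Cox proportional hazards model with the third-party lifelines package on a small fabricated dataset; the variable names and values are assumptions for illustration only, and a penalizer is added solely to stabilize the fit on such tiny data.

# Illustrative only: fabricated follow-up data, not from the Ontario cohort.
import pandas as pd
from lifelines import CoxPHFitter

# time = years to cardiovascular event/death or censoring; event = 1 if it occurred
df = pd.DataFrame({
    "time":      [6.9, 3.2, 5.5, 7.0, 1.8, 6.1, 4.4, 7.0, 2.5, 6.8],
    "event":     [0,   1,   0,   0,   1,   0,   1,   0,   1,   0  ],
    "immigrant": [1,   0,   1,   1,   0,   1,   0,   0,   1,   1  ],
    "age":       [52,  64,  47,  58,  70,  49,  61,  55,  66,  50 ],
    "male":      [1,   1,   0,   1,   0,   0,   1,   1,   1,   0  ],
})

cph = CoxPHFitter(penalizer=0.1)          # small ridge penalty for a toy-sized dataset
cph.fit(df, duration_col="time", event_col="event")
print(cph.hazard_ratios_)                 # exp(coef): values below 1 indicate lower adjusted risk

In a real analysis the same model would include the matching structure and the full covariate set (socioeconomic status, neighborhood, health care utilization, and so on) described in the abstract.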
Instruction: Is sleep disturbed by vigorous late-night exercise? Abstracts: abstract_id: PUBMED:10378914 Is sleep disturbed by vigorous late-night exercise? Purpose: This experiment examined the influence of prolonged, vigorous late-night exercise on sleep. Methods: Sixteen highly fit male cyclists completed each of two 60-h laboratory treatments involving a baseline night, an experimental treatment night, and a recovery night. In counterbalanced order, subjects 1) cycled for 3 h at 65-75% of heart rate reserve combined with bright light exposure (3000 lux), and 2) were exposed to a 3 h pulse of bright light (3000 lux) alone. Results: On the baseline and recovery nights, subjects maintained their usual sleep-wake schedules. On the treatment night, exercise + bright light or bright light alone were centered at 6 h before their usual wake times, followed by bedtimes 30 min after the treatments. Illumination was 3000 lux during the experimental treatments, 0 lux during the sleep periods, and 50 lux at other times. Sleep was assessed with an Actillume (Ambulatory Monitoring, Inc., Ardsley, NY) wrist monitor to define sleep onset latency (SOL), wakefulness after sleep onset (WASO), and total sleep time. Subjective assessments of SOL, WASO, and insomnia were also gathered each morning. No significant differences in objective or subjective sleep variables were found between treatments. Conclusions: These data are inconsistent with the general opinion that vigorous exercise shortly before bedtime disturbs sleep. abstract_id: PUBMED:20673290 Effects of vigorous late-night exercise on sleep quality and cardiac autonomic activity. Sleep is the most important period for recovery from daily load. Regular physical activity enhances overall sleep quality, but the effects of acute exercise on sleep are not well defined. In sleep hygiene recommendations, intensive exercising is not suggested within the last 3 h before bedtime, but this recommendation has not been adequately tested experimentally. Therefore, the effects of vigorous late-night exercise on sleep were examined by measuring polysomnographic, actigraphic and subjective sleep quality, as well as cardiac autonomic activity. Eleven (seven men, four women) physically fit young adults (VO2max 54±8 mL·kg⁻¹·min⁻¹, age 26±3 years) were monitored in a sleep laboratory twice in a counterbalanced order: (1) after vigorous late-night exercise; and (2) after a control day without exercise. The incremental cycle ergometer exercise until voluntary exhaustion started at 21:00±00:28 hours, lasted for 35±3 min, and ended 2:13±00:19 hours before bedtime. The proportion of non-rapid eye movement sleep was greater after the exercise day than the control day (P<0.01), while no differences were seen in actigraphic or subjective sleep quality. During the whole sleep period, no differences were found in heart rate (HR) variability, whereas HR was higher after the exercise day than the control day (54±7 versus 51±7, P<0.01), especially during the first three sleeping hours. The results indicate that vigorous late-night exercise does not disturb sleep quality. However, it may have effects on cardiac autonomic control of the heart during the first sleeping hours. abstract_id: PUBMED:33423141 Testing the sleep hygiene recommendation against nighttime exercise. Purpose: Experts have recommended exercise for improved sleep, but often with the caveat that one should avoid nighttime exercise.
The aim of this study was to challenge this recommendation in a sample that might be most prone to sleep impairment after nighttime exercise. The secondary aims were to assess whether or not post-treatment sleep was correlated with anxiolytic effects or downregulation of body temperature. Methods: Twelve sedentary adults with insomnia completed two treatments (separated by 2-5 days) in counterbalanced order: (1) 30 min of moderate treadmill exercise (60-70% maximum heart rate) + 15 min of moderate resistance exercise and (2) a control treatment (reading). Each treatment was completed 2 h before bedtime and followed by a 10-min shower, a light snack, 8 h of polysomnographic recording, and a sleep questionnaire. State anxiety was assessed before and 30 min after each treatment and 10 min before bedtime. Body temperature was assessed from pre-exercise to wake time. Results: No significant treatment differences in sleep were found, and Hedges' g paired comparisons indicated small effect sizes; however, two participants had severely disturbed objective sleep following exercise. Significant correlations were found between change in state anxiety from pre-exercise to bedtime and TST (r = 0.69, p = 0.03), Stage 1 (r = 0.67, p = 0.03), WASO (r = 0.69, p = 0.03), and sleep efficiency (r = 0.66, p = 0.02). No significant correlations were found between sleep and temperature decline. Conclusions: Profound sleep disturbance after exercise in some participants, and no marked sleep improvement in the others, provides some support for caution regarding late-night exercise for sedentary individuals with insomnia. abstract_id: PUBMED:36553255 Night Sleep and Parental Bedtime Practices in Low-Risk Preterm and Full-Term Late Talkers. Night sleep and parental bedtime practices have rarely been investigated in late talkers. This study aimed to explore night sleep, parental bedtime practices, and their associations in late talkers, as well as individual, socio-demographic, and socio-relational factors affecting them. Parents of 47 30-month-old late talkers, born low-risk preterm (n = 24) or full-term (n = 23), with an expressive vocabulary size ≤10th percentile measured by the MacArthur-Bates Communicative Development Inventory Words and Sentences, and normal cognitive abilities measured by the Bayley Scales, completed the Infant Sleep Questionnaire, the Parental Interactive Bedtime Behaviour Scale, and the Parenting Stress Index Short Form. Results showed slight settling difficulties, night wakings, and frequent co-sleeping in late talkers. Encouraging autonomy practices were frequently used by parents, rather than active physical comforting ones. Recurrent settling difficulties were reported by parents who often applied encouraging autonomy practices, whereas greater night waking problems and frequent co-sleeping were reported by parents who often left their child crying. Low-risk preterm birth and mother's parenting stress predicted total sleep difficulties and night wakings; first-born status, high maternal education level and mother's parenting stress predicted settling difficulties; mother's parenting stress was the only predictor for co-sleeping and leaving to cry. These findings have relevant implications for improving late talkers' night sleep and their parents' bedtime practices. abstract_id: PUBMED:35461788 Earlier bedtimes and more sleep displace sedentary behavior but not moderate-to-vigorous physical activity in adolescents.
Objectives: Correlational models suggest increased cardiometabolic risk when sleep replaces moderate-to-vigorous (but not sedentary or light) physical activity. This study tested which activity ranges are impacted by experimentally altering adolescents' bedtime. Method: Adolescents completed a 3-week within-subjects crossover experiment with 5 nights of late bedtimes and 5 nights of early bedtimes (6.5- and 9.5-hours sleep opportunity, respectively). Experimental condition order was randomized. Waketimes were held constant throughout to mimic school start times. Sleep and physical activity occurred in the natural environments, with lab appointments following each 5-day condition. Waist-worn accelerometers measured physical activity and sedentary behavior. Wrist-worn actigraphs confirmed sleep condition adherence. Wilcoxon tests and linear mixed effects models compared waking activity levels between conditions and across time. Results: Ninety healthy adolescents (14-17 years) completed the study. When in the early (vs. late) bedtime condition, adolescents fell asleep 1.96 hours earlier (SD = 1.08, d = 1.82, p < .0001) and slept 1.49 hours more (SD = 1.01, d = 1.74, p < .0001). They spent 1.68 and 0.32 fewer hours in sedentary behavior (SD = 1.67, d = 1.0, p < .0001) and light physical activity (SD = 0.87, d = 0.37, p = .0005), respectively. This pattern was reflected in an increased proportion of waking hours spent in sedentary and light activity. Absolute amounts and proportions of moderate-to-vigorous physical activity did not differ between conditions (d = 0.02, p = .89; d = 0.14, p = .05, respectively). Conclusions: Inducing earlier bedtimes (allowing for healthy sleep opportunity) did not affect moderate-to-vigorous physical activity. Alternatively, later bedtimes (allowing for ≤ 6.5 hours of sleep opportunity, mimicking common adolescent school night sleep) increased sedentary behavior. Results are reassuring for the benefits of earlier bedtimes. abstract_id: PUBMED:27849113 Late-night exercise affects the autonomic nervous system activity but not the hypothalamo-pituitary-adrenal axis in the next morning. Background: Exercise activates the hypothalamic-pituitary-adrenal (HPA) axis and the autonomic nervous system (ANS) and generally causes beneficial changes in homeostatic balance. However, the health benefits of late-night exercise programs on the activity of the HPA and ANS are not known. The aim of this study was to assess the effects of late-night exercise on sleep quality and on the activities of the HPA axis (as the cortisol awakening response, CAR) and the ANS (as heart rate variability, HRV) in the following morning. Methods: Medical students (N = 20 males, 20-24 years old) filled in the Karolinska Sleep Diary on the day before the exercise program. In the following morning, they provided salivary samples for the assessment of CAR (samples at 0, 15, 30 and 60 min post-awakening) and had a 5-min electrocardiogram recording for the determination of HRV. On the next night, an exercise program consisting of a 90-min football match was implemented at 09:30 p.m. and all procedures were repeated. Cortisol concentrations were measured in the salivary samples and time- and frequency-domain parameters of HRV were calculated. Results: Late-night exercise did not affect (P>0.05) sleep parameters (sleep duration, disturbed sleep, awakening problems) or CAR parameters (0, 15, 30, 60 min cortisol concentrations, mean concentration, area under the curve) but influenced HRV parameters (P<0.05).
Conclusions: The results suggest that late-night exercise is associated with changed HRV activity rather than changes in CAR; therefore, late-night exercise appears to affect ANS activity rather than HPA activity the next morning. abstract_id: PUBMED:33426778 Effects of late-night eating of easily- or slowly-digestible meals on sleep, hypothalamo-pituitary-adrenal axis, and autonomic nervous system in healthy young males. The aim of the current study was to assess the effects of the digestibility of a late-night high-calorie meal on sleep and the activities of the hypothalamo-pituitary-adrenal (HPA) and autonomous nervous system (ANS) in healthy young males. For that purpose, the effects of an easily digestible meal (starch + sugar-rich meal, SSR, i.e., dessert) or a slowly digestible meal (protein + fat-rich, PFR, i.e. kebab) were investigated in a crossover design in 16 participants (20-26 years old). They did not eat anything after 07:00 PM on Day 0; they had an SSR meal on Day 1 and a PFR meal on Day 2 at 10:00 PM. HPA and ANS activities were measured by the cortisol awakening response (CAR) and heart rate variability (HRV), respectively. The participants provided salivary samples for CAR, had a 5-min continuous electrocardiogram recording for HRV, and filled in sleep questionnaires. Late-night eating of the SSR and PFR diets increased the area under the curve of the CAR (p < 0.05) but did not affect HRV parameters (p > 0.05). The PFR meal significantly disturbed sleep (p < 0.05). The data suggest that increased activity of the HPA axis, but not the ANS, might be involved in the pathophysiology of late-night eating and that this might be due to disturbed sleep if a slowly digestible meal is consumed. abstract_id: PUBMED:3245463 Epidemiology of exercise effects on sleep. This survey was designed to study the characteristics of sleep and perceived factors promoting and disturbing sleep. Several living habits, such as exercise, and characteristics of sleep were investigated independently of each other by a pretested questionnaire and a sleep diary which were mailed to a stratified random sample of 200 men and women in each of the age groups 36, 41, 46 and 50 years (N = 1600). The response rate was 75%. Every third respondent felt that exercise (e.g. jogging and evening walks) had a positive impact on sleep quality. The most frequently perceived positive effects of exercise were ease of falling asleep, deepness of sleep, a sense of wellbeing and more alertness in the morning. Negative perceived effects of exercise were uncommon. The balance of positive and negative perceived effects of exercise was more favourable when the activity was performed early in the evening compared with late at night. When both the intensity and the timing of exercise were considered, light and moderate exercise, especially early in the evening, had mainly positive effects on sleep. The frequency of negative impacts increased when the exercise was performed vigorously late at night. However, a considerable portion of vigorous late exercisers reported mainly positive effects. The available information suggests that especially light and moderate exercise early in the evening should be emphasised in health education as a means to enhance sleep and to improve its quality. However, the epidemiological information about exercise and sleep should be investigated and verified in detail using objective methods and controlled interventions.
abstract_id: PUBMED:32495463 The effect of a short burst of exercise during the night on subsequent sleep. When on-call workers wake during the night to perform work duties, they may experience reduced alertness and impaired performance as a result of sleep inertia. After performing their duties, on-call workers may have the opportunity to return to sleep. Thus, it is important that sleep inertia countermeasures do not affect subsequent sleep. Exercise may be a suitable countermeasure; however, the impact on subsequent sleep is untested. Healthy participants (n = 15) completed three conditions in a counterbalanced order: sedentary, low-intensity exercise or high-intensity exercise, performed for 2 min upon awakening. Sleep was recorded 2 hr later using polysomnography, the Karolinska Sleepiness Scale was administered to measure subjective sleepiness, and core body temperature was measured continuously. Results indicate there was no effect of condition on most sleep variables; however, three variables had small differences, with longer total sleep time (p = .006), higher sleep efficiency (p = .006) and shorter N3 latency (p < .001) in the low-intensity exercise condition. There was no difference in subjective sleepiness (p = .124) or core body temperature (p = .216) 90 min after the exercise intervention. These results indicate that using a short burst of exercise to counteract sleep inertia when woken during the night may be a suitable countermeasure for on-call workers who not only need to be alert upon waking but also need quality sleep when returning to bed. Future research could include participants of other ages and health statuses to investigate whether the results are generalizable. abstract_id: PUBMED:20629937 Exercise effects on night-to-night fluctuations in self-rated sleep among older adults with sleep complaints. Sleep interventions have rarely explored reductions in night-to-night fluctuations [i.e. intra-individual variability (IIV)] in sleep, despite the negative impacts of such fluctuations on affective states and cognitive and physical symptoms. In a community-based randomized controlled trial we evaluated whether physical exercise reduced IIV in self-rated sleep outcomes among middle-aged and older adults with sleep complaints. Under-active adults 55 years and older (n = 66, 67% women) with mild to moderate sleep complaints were randomized to 12 months of a moderate-intensity endurance exercise (n = 36) or a health education control group (n = 30). Daily sleep logs, Pittsburgh Sleep Quality Index (PSQI) and in-home polysomnographic sleep recordings (PSG) were collected at baseline, 6 months and 12 months. Sleep log-derived means and IIV were computed for sleep-onset latency (SOL), time in bed, feeling rested in the morning, number of nighttime awakenings, and wake after final awakening (WAFA). Using intent-to-treat methods, at 6 months no differences in IIV were observed by group. At 12 months, SOL-based IIV was reduced in the exercise group compared with the control (difference = 23.11, 95% CI: 3.04-47.18, P = 0.025, Cohen's d = 0.57). This change occurred without mean-level or IIV changes in sleep-wake schedules. For all sleep variables, except SOL and WAFA, IIV changes and mean-level changes in each variable were negatively correlated (r = -0.312 to -0.691, P < 0.05). Sleep log-derived IIV changes were modestly correlated with mean-level PSQI and PSG-based changes at 12 months.
Twelve months of moderate-intensity exercise reduced night-to-night fluctuations in self-rated time to fall asleep, and this relationship was independent of mean-level time to fall asleep. Answer: The question of whether sleep is disturbed by vigorous late-night exercise has been explored in several studies, with mixed results. According to the abstracts provided: - One study found that prolonged, vigorous late-night exercise did not significantly affect objective or subjective sleep variables, suggesting that vigorous exercise shortly before bedtime may not disturb sleep (PUBMED:10378914). - Another study reported that vigorous late-night exercise did not disturb sleep quality, although it may affect cardiac autonomic control during the first few hours of sleep (PUBMED:20673290). - A study focusing on sedentary adults with insomnia found no significant differences in sleep after moderate exercise compared to a control treatment, but two participants experienced severely disturbed objective sleep following exercise. This suggests that there might be individual variability in response to late-night exercise (PUBMED:33423141). - Research on the effects of late-night exercise on the autonomic nervous system (ANS) and hypothalamo-pituitary-adrenal (HPA) axis found that late-night exercise influenced ANS activity but not HPA activity the following morning, without affecting sleep parameters (PUBMED:27849113). - A study on the effects of late-night eating on sleep and HPA/ANS activities found that a high-calorie meal late at night increased HPA activity but did not affect ANS parameters, with a slowly-digestible meal significantly disturbing sleep (PUBMED:33426778). - An epidemiological survey suggested that light and moderate exercise early in the evening had mainly positive effects on sleep, while vigorous late-night exercise had more mixed effects, with some individuals reporting positive outcomes (PUBMED:3245463). - A study testing the effect of a short burst of exercise during the night on subsequent sleep found no significant impact on most sleep variables, suggesting that a short burst of exercise might be a suitable countermeasure for sleep inertia without affecting subsequent sleep (PUBMED:32495463). - Finally, a study on exercise effects on night-to-night fluctuations in sleep among older adults with sleep complaints found that 12 months of moderate-intensity exercise reduced fluctuations in self-rated time to fall asleep (PUBMED:20629937). In summary, the evidence suggests that vigorous late-night exercise does not consistently disturb sleep quality across studies, though individual responses may vary, and some effects on cardiac autonomic control have been observed. Moderate exercise may even have beneficial effects on sleep stability over time.
Instruction: Risk of contralateral manifestation in children with unilateral inguinal hernia: should hernia in children be treated contralaterally? Abstracts: abstract_id: PUBMED:11101728 Risk of contralateral manifestation in children with unilateral inguinal hernia: should hernia in children be treated contralaterally? Purpose: This study was done to identify risk factors for metachronous manifestation of contralateral inguinal hernia in patients with unilateral inguinal hernia. Methods: Characteristics of 156 patients with metachronous contralateral hernia were compared with those of 156 patients with unilateral hernia who were ascertained not to have presented with contralateral hernia. Results: There was a tendency for the hernia to be more often on the left side in 88 of 156 patients (56.4%) with contralateral manifestation compared with 70 of 156 patients (44.9%) in the control group (P = .054). The age at hernia repair of the patients with contralateral manifestation, 1 to 120 months (median, 14 months), was significantly younger than the 1 to 149 months (median, 20 months) of the control patients (P = .016). More patients with contralateral manifestation had a family history of inguinal hernia, and the percentage, 24.4%, was significantly higher than the 14.7% in the control group (P = .046). A univariate analysis with the Cox regression models found that hernia on the left side and a positive family history were significantly associated with the metachronous manifestation of contralateral hernia (hazard ratio [HR], 1.40; P = .037 and HR, 1.59; P = .013, respectively). Conclusion: The risk of metachronous manifestation of contralateral hernia is high in patients with left-side hernia and in those with a family history, and the incidence of contralateral hernia is at most 10% in these patients. The authors think that the incidence is still too low to justify routine exploration and surgery for a patent processus vaginalis. Contralateral exploration should therefore be reserved for high-risk patients in whom second anesthesia and surgery have to be avoided. abstract_id: PUBMED:26455468 Incidence of metachronous contralateral inguinal hernias in children following unilateral repair - A meta-analysis of prospective studies. Purpose: The objective of this review was to systematically evaluate the incidence of a metachronous contralateral inguinal hernia (MCIH) in children with unilateral inguinal hernia and therefore to propose or to reject routine contralateral groin exploration. Methods: Electronic searches restricted to prospective studies with a minimal follow-up of 1 year included MEDLINE, EMBASE and the Cochrane Central Register of Controlled Trials. Results: Six studies involving 1669 children were included. Overall MCIH incidence was 6% (95% CI from 4% to 8%). The odds for MCIH development were significantly larger in children with an initial left-sided hernia (OR 2.66 with 95% CI from 1.56 to 4.53) and in children with an open contralateral processus vaginalis (CPV) (OR 4.17 with 95% CI from 1.25 to 13.9). Conclusions: The overall incidence of MCIH following unilateral inguinal hernia repair in children is 6%. Initial left-sided hernia (8.5%) and open CPV (13.8%) are risk factors for MCIH development. Female gender (8.2%) and younger age (<1 year) (6.9%) non-significantly increase the risk of MCIH. abstract_id: PUBMED:25574209 Risk factors for contralateral patent processus vaginalis determined by transinguinal laparoscopic examination.
Concurrent contralateral inguinal exploration in children with unilateral hernia or hydrocele is a subject of debate. The aim of the present study was to investigate the incidence of contralateral patent processus vaginalis (CPPV) using transinguinal laparoscopy (inguinoscopy). In addition, the risk factors of CPPV were evaluated in order to facilitate the selection of appropriate candidates for contralateral examination. A total of 119 patients who presented with unilateral hydrocele, inguinal hernia or cryptorchidism between 2001 and 2008 underwent inguinoscopy during the ipsilateral surgery. All data were collected prospectively. The incidence of CPPV was investigated and the risk factors affecting the presence of CPPV were analyzed. Among these patients, 29 individuals (24.4%) had CPPV confirmed by inguinoscopy. No surgical complications were observed during the inguinoscopy. Cases with suspicious ultrasound findings were at a higher risk of CPPV than cases with normal findings (odds ratio, 13.800; P=0.004). A history of contralateral disease was also found to be a significant risk factor (odds ratio, 4.008; P=0.019). The present study identified that the significant risk factors for CPPV were suspicious findings on ultrasound examination and a history of contralateral disease. Therefore, it is concluded that performing inguinoscopy in children with these risk factors is beneficial. abstract_id: PUBMED:35821302 The relationship between contralateral patent processus vaginalis and metachronous contralateral inguinal hernia in children with unilateral inguinal hernia: a prospective observational study. Purpose: Laparoscopic contralateral patent processus vaginalis (CPPV) repair in children is debatable due to the high CPPV rate, but low metachronous contralateral inguinal hernia (MCIH) rate. We conducted this study to find risk factors for MCIH. Method: We conducted a prospective, observational trial with patients recruited from Shanghai Children's Hospital. Eligible participants were patients under 16 years old with unilateral inguinal hernia whose parents did not opt for simultaneous CPPV repair. The subjects were followed for 24 to 34.1 months. Patients who developed MCIH were analyzed to identify the relationship between CPPV and MCIH. Results: Between October 17, 2018, and July 31, 2019, we included 184 patients and 182 completed follow-up. MCIH occurred in ten patients, of whom 7 (7.53%) had CPPV and three (3.37%) had no CPPV. Univariate analysis showed that age (p = 0.025, OR = 0.938) and CPPV diameter (p = 0.003, OR = 1.783) were related to the development of MCIH. In multivariate analysis, only the diameter of CPPV (p = 0.008, OR = 1.411) was associated with MCIH. The receiver operating characteristic (ROC) curve was used to test this, and it was found that when the diameter of CPPV was greater than 4 mm, the Youden index was the highest, with a specificity of 62.8% and a sensitivity of 100%. Conclusion: The incidence of MCIH was not statistically higher in patients with CPPV compared with those without CPPV, so there is no indication for routine CPPV repair. The risk of MCIH development increases with CPPV diameter, and 4 mm is the optimal cutoff point. Large CPPVs (> 4 mm) could be treated to prevent future hernias. Trial Registration: The Chinese Clinical Trial Registry (www.chictr.org.cn), number ChiCTR2000041307. abstract_id: PUBMED:17571299 Systematic review of the risk of developing a metachronous contralateral inguinal hernia in children.
Background: This study aims to establish the risk of developing a metachronous contralateral inguinal hernia (MCIH) following open repair of a unilateral inguinal hernia in children. Methods: A systematic review was performed using a defined search strategy. Studies in which children underwent open repair of a unilateral inguinal hernia without contralateral exploration and were followed up for MCIH development were included. Results: Of 5937 titles and abstracts screened, 154 full-text articles were identified for review; 49 papers were analysed with data on 22,846 children. The incidence of MCIH was 7.2 per cent overall, 6.9 per cent in boys and 7.3 per cent in girls (P = 0.381). Children with a left-sided inguinal hernia had a significantly higher risk of developing a MCIH than those with a right-sided hernia (10.2 versus 6.3 per cent respectively; P < 0.001). Conclusion: Overall, in both boys and girls, 14 contralateral explorations are required to prevent one metachronous hernia. The risk of developing a MCIH appears unchanged in early childhood, with a slight reduction after 12 years of age. Children with a left-sided hernia have the greatest risk of developing a contralateral hernia, but ten explorations are still required to prevent one metachronous hernia. Most MCIHs occur in the first 5 years after unilateral inguinal hernia repair. abstract_id: PUBMED:37892293 Implementing Contralateral Surgical Exploration during Hernia Repair in Children with Unilateral Inguinal Hernia: A Dutch Qualitative Study. A total of 10-15% of children undergoing unilateral inguinal hernia repair develop a metachronous contralateral inguinal hernia (MCIH) that necessitates a second anesthesia and surgery. Contralateral exploration can be performed to prevent MCIH development. This study investigates (1) factors that promote or hinder the adoption and (de-)implementation of contralateral groin exploration in children ≤ 6 months undergoing unilateral hernia repair and (2) strategies to overcome these barriers. A qualitative interview study was conducted using 14 semi-structured interviews and two focus groups involving healthcare professionals, stakeholders involved from a patients' perspective and stakeholders at the organizational/policy level. The results show that the effectiveness of surgical treatment and stakeholders' motivation and attitudes towards the intervention were reported as barriers to implementation, whereas patient and family outcomes and experience, and strategies to overcome these barriers, were identified as facilitating factors for future implementation. This study is unique in its contributions towards insights into facilitators and barriers for (de-)implementation of contralateral groin exploration in children with a unilateral inguinal hernia. In case the HERNIIA trial shows that contralateral exploration is beneficial for specific patient and family outcomes or a subgroup of children, the results of this study can help in the decision-making process as to whether contralateral exploration should be performed or not. abstract_id: PUBMED:10458458 Risk of contralateral hydrocele or hernia after unilateral hydrocele repair in children. Purpose: Recent laparoscopic studies indicate a high incidence of a contralateral open internal ring in children undergoing unilateral hydrocele or hernia repair, raising the question of whether routine contralateral exploration should be done.
Data on the long-term risk of clinical contralateral hernia or hydrocele after unilateral hydrocele repair are limited. To address this question, we performed long-term follow-up in patients who underwent unilateral hydrocele repair. Materials And Methods: We followed patients who previously underwent unilateral hydrocele repair performed by one of us before 1997. Patients were interviewed by telephone and encouraged to return to one of us or their pediatrician for evaluation. Results: Of the 101 patients who fulfilled study inclusion criteria, 85 who were 5 to 107 months old (median age 37) at the original surgery were successfully contacted, including 45 examined by one of us or a pediatrician and 40 followed by telephone interview only. Contralateral hydrocele or hernia developed in 6 of the 89 patients (7%) 6 to 15 months (median 12) postoperatively. The remaining 79 patients have been recurrence-free for 6 to 153 months (mean 44, median 37). Of the patients, 5 of 32 (15%) and 1 of 53 (2%) who underwent left and right hydrocele repair, respectively, had contralateral recurrence. Conclusions: The risk of a clinically evident contralateral hydrocele or hernia after unilateral hydrocele repair is approximately 7%. We do not recommend routine contralateral exploration in children undergoing unilateral hydrocele repair. abstract_id: PUBMED:31383517 Transinguinal laparoscopic evaluation of contralateral side during unilateral inguinal hernia repair for children. Background: Inguinal hernia repair is a common procedure in daily pediatric surgical practice. Objectives: The present study was planned to find out whether transinguinal laparoscopic exploration (TILE) of the contralateral groin is effective in reducing the need for operation for contralateral metachronous inguinal hernia (CMIH) in children. Study Design: Charts of 1103 children who underwent inguinal hernia repair between 2006 and 2016 were retrospectively analyzed. Eighty-eight children with bilateral hernia at presentation were excluded, and 705 patients whose parents could be contacted by phone to get the latest information about their children's condition were included in the study. Results: Of the 705 children with unilateral inguinal hernia repair, 362 (51.4%) and 343 (48.6%) had right-sided and left-sided inguinal hernia, respectively. Transinguinal laparoscopic exploration was performed in 479 of the 705 children with unilateral hernia, and a hernia or contralateral patent processus vaginalis (PPV) was found and ligated in 28.3% (n = 136) of them. Mean follow-up time was 60 ± 36 months. Fifteen (4.3%) of the 479 patients who had TILE and 31 (13.6%) of the 226 patients who did not have TILE developed CMIH. When the videos of the 15 patients who developed CMIH were reviewed, an overlooked PPV was found in 10 (3.3%) patients who had TILE during the early phases of the institutional learning curve. Discussion And Conclusions: TILE of the contralateral side during pediatric inguinal hernia repair is a simple and effective method to evaluate contralateral PPV. This approach clearly and significantly reduces the need for operation for a metachronous hernia at a later date. abstract_id: PUBMED:30120549 Is contralateral inguinal exploration necessary in preterm girls undergoing inguinal hernia repair during the first months of life? Background: To assess the need for contralateral surgical exploration in preterm girls with symptomatic unilateral inguinal hernia.
Methods: The medical data of girls operated on for inguinal hernia between 2004 and 2016 in a single pediatric surgery center were retrospectively collected. Preterm girls operated on for unilateral hernia before 6 months of life were selected (55/517 cases) to assess the incidence and risk factors for contralateral metachronous inguinal hernia (CMIH). Results: CMIH was observed in 7% of cases (4 girls, with a right inguinal hernia in 3 cases) at a mean age of 4.2 years. Only one case occurred early (3 months). Birth weight and term were comparable (1674 ± 620 g and 32 ± 5 WA without CMIH vs. 1694 ± 582 g and 33 ± 3 WA with CMIH). Conclusion: Contralateral inguinal hernia is very rare and generally occurs several years after inguinal repair surgery in preterm girls, which should encourage practitioners to follow these children throughout childhood for the subsequent development of inguinal hernia. This study did not find any arguments in favor of systematic contralateral exploration in preterm girls. abstract_id: PUBMED:30729303 Independent risk factors for contralateral patent processus vaginalis undetected by pre-operative ultrasonography in boys with unilateral inguinal hernia. Purpose: Many trials have been done to establish the probability of metachronous contralateral-side hernia (MCH) and contralateral patent processus vaginalis (CPPV), but the necessity of contralateral-side exploration is still under debate. The aim was to investigate the risk factors that should prompt consideration of contralateral examination during the operation. Materials And Methods: The study was designed retrospectively. Patients with unilateral inguinal hernia from January 2010 to May 2015 were enrolled. Pre-operative ultrasonography was done in all patients. Patients with an obvious contralateral-side hernia on pre-operative US were excluded. The presence of CPPV was evaluated by transinguinal laparoscopy during the operation. Results: In univariate analysis, only hernial sac size showed a difference (P value: 0.001). The other factors, namely location of the hernia, age at surgery, gestational age (preterm), low birth weight and parents' age, did not show statistically significant differences. Multivariate analysis also demonstrated that CPPV is more common in patients with a large hernial sac (Odds ratio: 2.727, 95% confidence interval 1.495-4.974, P value: 0.001). Conclusion: We propose that surgeons should consider contralateral evaluation during the operation in cases with a large ipsilateral hernial sac, even when CPPV is not detected by pre-operative US. Answer: The risk of contralateral manifestation in children with unilateral inguinal hernia varies, and the decision to treat contralaterally is influenced by several factors. Studies have identified that children with an initial left-sided hernia have a higher risk of developing a metachronous contralateral inguinal hernia (MCIH) compared to those with a right-sided hernia (PUBMED:11101728, PUBMED:26455468, PUBMED:17571299). The overall incidence of MCIH following unilateral inguinal hernia repair in children is around 6% to 7% (PUBMED:26455468, PUBMED:17571299). Other risk factors for MCIH include the presence of an open contralateral processus vaginalis (CPV), a family history of inguinal hernia, and younger age at the time of hernia repair (PUBMED:11101728, PUBMED:26455468, PUBMED:25574209).
Transinguinal laparoscopic examination has been used to identify contralateral patent processus vaginalis (CPPV), and studies suggest that suspicious ultrasound findings and a history of contralateral disease are significant risk factors for CPPV (PUBMED:25574209). However, the presence of CPPV does not always lead to MCIH, and the decision to repair CPPV should be based on its size, with a diameter greater than 4 mm being a potential cutoff for increased risk of MCIH (PUBMED:35821302). Routine contralateral exploration to prevent MCIH is debated, as the incidence of MCIH is relatively low, and most cases occur within the first few years after unilateral inguinal hernia repair (PUBMED:17571299). Additionally, the necessity of contralateral exploration in specific subgroups, such as preterm girls, is questioned due to the rarity of contralateral hernia occurrence (PUBMED:30120549). In conclusion, while certain risk factors increase the likelihood of MCIH, the overall risk remains low, and routine contralateral exploration is not universally recommended. Instead, it may be reserved for high-risk patients or those with significant CPPV to avoid second anesthesia and surgery (PUBMED:11101728, PUBMED:37892293).
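A brief illustrative calculation may help tie the incidence figures above to the "explorations needed" numbers quoted from the systematic review (PUBMED:17571299). Under the review's implicit assumption that exploring and closing the contralateral side at the first operation would prevent a later metachronous hernia, the number of contralateral explorations needed to prevent one MCIH (labelled NNE here purely as a shorthand) is simply the reciprocal of the MCIH incidence:

\[
\mathrm{NNE} = \frac{1}{\text{MCIH incidence}}, \qquad
\frac{1}{0.072} \approx 14 \ \text{(overall)}, \qquad
\frac{1}{0.102} \approx 10 \ \text{(left-sided hernias)}.
\]

These reciprocals reproduce the review's statements that 14 explorations overall, and 10 for left-sided hernias, would be required to prevent one metachronous hernia.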
Instruction: Cross-sectional study comparing public and private hospitals in Catalonia: is the practice of routine episiotomy changing? Abstracts: abstract_id: PUBMED:25889079 Cross-sectional study comparing public and private hospitals in Catalonia: is the practice of routine episiotomy changing? Background: In Spain, the Strategy for Assistance in Normal Childbirth (SANC) promoted a model of care which respects the physiological birth process and discards unnecessary routine interventions, such as episiotomies. We evaluated the rate of episiotomy use and perineal trauma as indicators of how selective introduction of the SANC initiative has impacted childbirth outcomes in hospitals of Catalonia. Methods: Cross-sectional study of all singleton vaginal term deliveries without instrument registered in the Minimum Basic Data Set (MBDS) of Catalonia in 2007, 2010 and 2012. Hospitals were divided into types according to funding (public or private), and four strata were differentiated according to volume of births attended. Episiotomies and perineal injury were considered dependent variables. The relationship between qualitative variables was analysed using the chi-squared test, and Student's t-test was used for quantitative variables. Comparison of proportions was performed on the two hospital groups between 2007 and 2012 using a Z-test. Logistic regression models were used to analyse the relationship between episiotomy or severe perineal damage and maternal age, volume of births and hospital type, obtaining odds ratios (OR) and 95% confidence intervals (CI). Results: The majority of normal singleton term deliveries were attended in public hospitals, where maternal age was lower than for women attended in private hospitals. Analysis revealed a statistically significant (P < 0.001) decreasing trend in episiotomy use in Catalonia for both hospital types. Private hospitals appeared to be associated with an increased episiotomy rate in 2007 (OR = 1.099, CI: 1.057-1.142), 2010 (OR = 1.528, CI: 1.472-1.587) and 2012 (OR = 1.459, CI: 1.383-1.540), and a lower rate of severe perineal trauma in 2007 (OR = 0.164, CI: 0.095-0.283), 2010 (OR = 0.16, CI: 0.110-0.232) and 2012 (OR = 0.19, CI: 0.107-0.336). Regarding severe perineal injury, when independent variables were adjusted, maternal age ceased to have a significant correlation in 2012 (OR = 0.994, CI: 0.970-1.018). Conclusions: Episiotomy procedures during normal singleton vaginal term deliveries in Catalonia have decreased steadily since 2007. Study results show a stable incidence trend below 1% for severe perineal trauma over the study period. abstract_id: PUBMED:10387901 Episiotomy in NSW hospitals 1993-1996: towards understanding variations between public and private hospitals. Episiotomy rates for women experiencing childbirth in New South Wales (NSW) hospitals are another indicator that private insurance may be a risk factor for obstetric intervention. A recent comparison of episiotomy rates in NSW public and private hospitals between 1993 and 1996 revealed that episiotomy rates were 12 to 15 percentage points higher in NSW private hospitals than in the public hospitals studied. Rates also appear to be declining in NSW public hospitals, yet this trend is not evident in the NSW private hospitals studied. Although private hospital patients were almost twice as likely to experience forceps or vacuum delivery (often associated with episiotomy), this leaves a 6 to 8 percentage point difference unexplained.
Given the potential health-related quality of life issues associated with perineal trauma during childbirth, further analysis of the clinical make-up of privately insured women may help determine the extent to which clinical explanations exist to support the differences in this childbirth intervention. abstract_id: PUBMED:34312192 Assessment of surgical capacity in Chiapas, Mexico: a cross-sectional study of the public and private sector. Introduction: Surgical, anaesthesia and obstetric (SAO) care are essential, life-saving components of universal healthcare. In Chiapas, Mexico's southernmost state, the capacity of SAO care is unknown. This study aims to assess the surgical capacity in Chiapas, Mexico, as it relates to access, infrastructure, service delivery, surgical volume, quality, workforce and financial risk protection. Methods: A cross-sectional study of Ministry of Health public hospitals and private hospitals in Chiapas was performed. The translated Surgical Assessment Tool (SAT) was implemented in sampled hospitals. Surgical volume was collected retrospectively from hospital logbooks. Fisher's exact test and Mann-Whitney U test were used to compare public and private hospitals. Catastrophic expenditure from surgical care was calculated. Results: Data were collected from 17 public hospitals and 20 private hospitals in Chiapas. Private hospitals were smaller than public hospitals, and public hospitals performed more surgeries per operating room. Not all hospitals reported consistent electricity, running water or oxygen, but private hospitals were more likely to have these basic infrastructure components compared with public hospitals (84% vs 95%; 60% vs 100%; 94.1% vs 100%, respectively). Bellwether surgical procedures performed in private hospitals cost significantly more, and posed a higher risk of catastrophic expenditure, than those performed in public hospitals. Conclusion: Capacity limitations are greater in public hospitals compared with private hospitals. However, the cost of care in the private sector is significantly higher than in the public sector and may result in catastrophic expenditures. Targeted interventions to improve infrastructure, workforce availability and data collection are needed. abstract_id: PUBMED:31537117 Do job stress, health, and presenteeism differ between Chinese healthcare workers in public and private hospitals: a cross-sectional study. To determine if job stress, health, and presenteeism differ between healthcare workers at Chinese public and private hospitals. This cross-sectional study analyzed the records of 1080 healthcare workers in eastern, central, and western China for the period from January 2015 through November 2015. Data on challenge stress, hindrance stress, health, and presenteeism were collected. Using univariate and multivariate regression in SPSS, we investigated differences between public and private hospitals in China. Challenge stress, hindrance stress, and presenteeism, but not health status, significantly differed between healthcare workers at public and private hospitals in China. Challenge stress and hindrance stress were significantly higher in public hospitals, while presenteeism was significantly lower in private hospitals. The significant differences between public and private hospitals are attributable to differences in the business practices and management of public and private hospitals.
To achieve successful long-term medical reform in China, the adverse effects of psychosocial factors should be considered in future research plans and policies. Chinese hospitals urgently require improvements in management and leadership. Reform efforts should encompass fields such as management science, psychology, and the behavioral sciences. abstract_id: PUBMED:34710717 Differences in health literacy level of patients from public and private hospitals: a cross-sectional study in Turkey. Objectives: Policy-making based on a health literacy approach makes it a priority to develop people-centered public health strategies and programs, particularly in the time of COVID-19 across the world. This is the first study to assess the health literacy levels of patients visiting public and private hospitals in Turkey and to compare these levels with sociodemographic and health-related variables by hospital type, in order to suggest health policies aimed at improving health literacy skills for patients with different socio-economic backgrounds. Study Design: This is a cross-sectional study. Methods: The study was conducted on 948 outpatients from both hospital types in 2018. Health literacy was assessed using the validated Turkish version of the European Health Literacy Survey Questionnaire with 47 items. The level of health literacy and the sociodemographic factors influencing it were analyzed using correlation and binary logistic regression tests. Patients from the private hospital had a better health literacy index score compared with those from the public hospital. Results: Health-related variables, such as self-reported health and the presence of long-term illness, and sociodemographic characteristics, including education, age, and gender, were associated with health literacy for both public and private hospitals. Age and education were important predictors, whereas gender, long-term disease condition, self-reported health, and perceived income status were statistically significant variables for adequate health literacy in both hospital types. Conclusions: Participants from the private hospital had better health literacy than those from the public hospital. These findings could be used to help health policy makers improve the current health literacy policy for patients and to develop strategies by stakeholders for reducing barriers to obtaining health-related information. abstract_id: PUBMED:30729929 A comparison of leadership competencies among doctors practicing in public and private hospitals in Jos metropolis of Plateau State, Nigeria. Aim: This study aimed at assessing and comparing the perceived leadership competencies of doctors occupying managerial positions in a public and various private hospitals in Jos metropolis of Plateau State, Nigeria. Materials And Methods: This was a cross-sectional, comparative multicenter study. It involved self-assessment by 27 doctors occupying managerial positions in public and private hospitals within Jos metropolis and peer-assessments by 89 health and non-health professionals who worked with the doctor-managers at the time of the study. The National Center for Healthcare Leadership competency model was used as the assessment tool. Results: We found that perceived leadership competencies were low for all doctor-managers irrespective of their hospital affiliation. Distributions of these competencies varied, with the private sector showing superiority for certain competencies. Conclusion: Perceived leadership competencies were low for the doctor-managers assessed.
This calls for an innovative approach to the training and practice of health management in Jos Metropolis. abstract_id: PUBMED:32257264 A comparative study on patient safety culture among emergency nurses in the public and private hospitals of Tabriz, Iran. Aim: To appraise patient safety culture in emergency departments of Iranian public and private hospitals. Design: A comparative cross-sectional survey design was used. Methods: This cross-sectional study was conducted from January-March 2019. For data collection, the Hospital Patient Safety Culture (HSOPSC) questionnaire was employed. The study sample comprised 256 participants. Independent t-tests and analysis of variance were performed to report the mean differences in total patient safety culture perception according to the participants' characteristics and according to hospital ownership. Results: There was no significant difference in overall perceptions of patient safety culture between the public and private sectors (2.94 vs. 2.84, p > .05). No solid areas regarding patient safety were found in either the public or the private sector. Out of 12 dimensions of patient safety culture, the perceptions of nurses working in public and private hospitals differed for "Organizational learning" (t = 2.13, p = .034) and "Handoffs and transitions" (t = 2.91, p = .004). Regarding the participants' characteristics, no significant difference was observed in the perspectives of nurses on patient safety culture. However, there was a meaningful difference in the attitudes of nurses working in public hospitals according to their employment status (t = 2.07, p = .039). abstract_id: PUBMED:23764930 Can major systematic reviews influence practice patterns? A case study of episiotomy trends. Purpose: Episiotomy is one of the most commonly performed procedures among women of childbearing age in the United States. In 2005, a major systematic review conducted by Hartmann and colleagues recommended against routine use of episiotomy and was widely covered in the media. We assessed the impact of the Hartmann et al. study on episiotomy trends. Methods: Based on 100% hospital discharge data from eight states in 2003-2008, we used interrupted time series regression models to estimate the impact of the Hartmann et al. review on episiotomy rates. We used mixed-effects regression models to assess whether interhospital variation was reduced over time. Results: After controlling for the underlying trend, episiotomy rates dropped by 1.4 percentage points after the Hartmann et al. publication (p < 0.01 for spontaneous delivery; p < 0.1 for operative delivery). The publication had a smaller effect on government hospitals than on private hospitals. Mixed-effects models estimated a negative correlation between cross-time and cross-hospital variations in episiotomy rates, indicating reduced cross-hospital variation over time. Conclusions: Our results suggested that there has been a gradual decline in episiotomy rates over the period 2003-2008, and that synthesis of evidence showing harms from routine episiotomy had limited impact on practice patterns in the case of episiotomy. The experience of episiotomy illustrates the challenge of using comparative effectiveness and evidence-based medicine to reduce the use of unnecessary procedures. abstract_id: PUBMED:22827884 How receptive are patients to medical students in Australian hospitals? A cross-sectional survey of a public and a private hospital.
Background: Medical student numbers in Australian universities have more than doubled since 2000. There are concerns about the ability of existing clinical training sites to accommodate this increase in student numbers, and there have been calls to increase training in private hospitals. The receptiveness of patients in private hospitals will influence the success of such placements. Aims: We aimed to evaluate whether patients in a private hospital are as receptive to medical students as patients in a public hospital. Methods: Cross-sectional survey of patients conducted at a private and a public teaching hospital in Melbourne, Australia. Main outcome measures were willingness to allow a medical student to participate in an interview, physical examination and procedures (electrocardiogram, venepuncture and digital rectal examination), and patient attitudes towards medical students as assessed by a series of 20 attitude statements and a summative attitude score. Results: Patients at the private hospital were more willing than patients at the public hospital to allow a medical student to take their history unsupervised (112/146, 76.7% vs 90/141, 63.8%; P = 0.02). The distribution of patient willingness did not otherwise differ between hospitals for physical examination or procedures. There was no difference in the mean attitude score between hospitals (15.3 ± 0.8 private vs 15.4 ± 1.2 public, P = 0.38), and responses differed between hospitals for only four of the 20 attitude statements. Conclusions: Our findings suggest that patients in a private hospital are at least as receptive to medical students as patients in a public hospital. abstract_id: PUBMED:35690738 Selective versus routine use of episiotomy for vaginal births in Shanghai hospitals, China: a comparison of policies. Background: To assess the effects of a policy of selective versus routine episiotomy on mother and baby for women delivering vaginally in Shanghai, and whether hospital type has any effect on the outcomes. Method: This was a multi-center retrospective cohort study in Shanghai between March 2015 and May 2017. The study population comprised vaginal births with selective or routine episiotomy (n = 5478) in 20 secondary or tertiary hospitals. The main outcome measure was the incidence of severe perineal lacerations. The adjusted odds ratios (aOR) and 95% confidence intervals (CI) were estimated by logistic regression and presented as the effect sizes. All models were stratified by hospital level (secondary and tertiary) and type (general and obstetric). Results: The primary outcome was not significantly different between vaginal births with routine and selective episiotomy. Patients with selective episiotomy had a lower risk of postpartum hemorrhage, and newborns in the selective episiotomy group had a lower risk of shoulder dystocia and of admission to the neonatal ward compared with those with routine episiotomy. Newborns in the selective episiotomy group had a lower risk of birth injury in tertiary hospitals. However, newborns in the selective episiotomy group had a higher risk of birth injury in general hospitals. Conclusion: Selective episiotomy is safe and can be recommended over routine episiotomy in obstetric and tertiary hospital settings in China.
Answer: The cross-sectional study comparing public and private hospitals in Catalonia aimed to evaluate the rate of episiotomy use and perineal trauma as indicators of the impact of the Strategy for Assistance in Normal Childbirth (SANC) initiative on childbirth outcomes in hospitals of Catalonia. The study analyzed data from all singleton vaginal term deliveries without instrument registered in the Minimum Basic Data Set (MBDS) of Catalonia in 2007, 2010, and 2012. Hospitals were divided into types according to funding (public or private), and four strata were differentiated according to the volume of births attended. The study found a statistically significant decreasing trend in episiotomy use in Catalonia for both hospital types (P < 0.001). Private hospitals were associated with an increased episiotomy rate in 2007, 2010, and 2012, and a lower rate of severe perineal trauma in those same years. The study concluded that episiotomy procedures during normal singleton vaginal term deliveries in Catalonia have decreased steadily since 2007, with a stable incidence trend below 1% for severe perineal trauma over the study period (PUBMED:25889079).
Instruction: Is it necessary to treat all patients with idiopathic pulmonary fibrosis? Abstracts: abstract_id: PUBMED:11587101 Is it necessary to treat all patients with idiopathic pulmonary fibrosis? Background And Aim Of The Work: To investigate the clinical course of untreated patients with idiopathic pulmonary fibrosis (usual interstitial pneumonia) (IPF/UIP). Methods: Forty-three patients with IPF/UIP were divided into two groups. Group I consisted of 29 patients treated at diagnosis, while Group II comprised 14 patients who did not receive treatment. The indication for treatment was established whenever patients reported significant progression of dyspnea during the year prior to diagnosis. Results: At diagnosis, patients from Group I had lower FVC (mean +/- SEM, 56+/-3% vs 73+/-3%) (p = 0.0004) and a greater extent of ground-glass pattern on high-resolution CT scan (18+/-4% vs 4+/-1%) (p = 0.004) than those from Group II. In Group I, a follow-up study was carried out on 26 patients for 24+/-4 months. Thirteen of these 26 patients (50%) died 11+/-4 months after the initial assessment. Serial pulmonary function tests were performed on 19 patients. Thirteen patients from Group II were followed up for 23+/-3 months. Seven of these 13 patients were treated 12+/-3 months after the diagnosis because of progression of the disease. The remaining 6 patients remained untreated and with stable disease at the end of the follow-up, representing 15% (6 out of 39) of the whole study group. No patients from this group died during the follow-up. At the end of the follow-up, there were no differences in lung function changes between the treated patients (19 from Group I and 7 from Group II) and the 6 untreated patients. Conclusions: Some patients with IPF/UIP remain stable for extended periods of time without treatment. abstract_id: PUBMED:30188172 Procedure Preference and Intention-to-Treat Outcomes after Listing for Lung Transplantation among U.S. Adults. A Cohort Study. Rationale: Bilateral lung transplantation is widely used to treat chronic obstructive pulmonary disease (COPD) and interstitial lung disease (ILD), on the basis of an expectation of improved survival after transplantation. Yet, waiting list mortality is higher while awaiting bilateral transplantation. The net effect of procedure preference on overall survival is unknown. Objectives: To determine whether an unrestricted procedure preference is associated with improved overall outcomes after listing for lung transplantation. Methods: We performed a retrospective cohort study of 12,155 adults with COPD or ILD listed for lung transplantation in the United States between May 4, 2005, and December 31, 2014. We defined a "restricted" procedure preference as listing for "bilateral transplantation only" and an "unrestricted" procedure preference as listing for any combination of bilateral or single lung transplantation. We used a composite "intention-to-treat" primary outcome that included events both before and after transplantation, defined as the number of days between listing and death, removal from the list for clinical deterioration, or retransplantation. Results: In adjusted analyses, an unrestricted procedure preference was associated with a 3% lower rate of the primary intention-to-treat outcome in COPD (adjusted hazard ratio [aHR], 0.97; 95% confidence interval [CI], 0.89-1.07) and a 1% higher rate in ILD (aHR, 1.01; 95% CI, 0.94-1.08).
There was no convincing evidence that these associations varied by age, disease severity, or the use of mechanical support. Among those with ILD and concomitant severe pulmonary hypertension, an unrestricted preference was associated with a 17% increased rate of the primary outcome (aHR, 1.17; 95% CI, 0.99-1.39). An unrestricted preference was consistently associated with lower rates of death or removal from the list for clinical deterioration and with higher rates of transplantation. Graft failure rates were similar among those listed with restricted and unrestricted preferences. Conclusion: When considering outcomes both before and after transplantation, we found no evidence that patients with COPD or ILD benefit from listing for bilateral lung transplantation compared with listing for a more liberal procedure preference. An unrestricted listing strategy for suitable candidates may increase the number of transplants performed without impacting overall survival. abstract_id: PUBMED:30100404 Laparoscopic anti-reflux surgery for the treatment of idiopathic pulmonary fibrosis (WRAP-IPF): a multicentre, randomised, controlled phase 2 trial. Background: Abnormal acid gastro-oesophageal reflux (GER) is hypothesised to play a role in progression of idiopathic pulmonary fibrosis (IPF). We aimed to determine whether treatment of abnormal acid GER with laparoscopic anti-reflux surgery reduces the rate of disease progression. Methods: The WRAP-IPF trial was a randomised controlled trial of laparoscopic anti-reflux surgery in patients with IPF and abnormal acid GER recruited from six academic centres in the USA. We enrolled patients with IPF, abnormal acid GER (DeMeester score of ≥14·7; measured by 24-h pH monitoring) and preserved forced vital capacity (FVC). We excluded patients with an FVC below 50% predicted, a FEV1/FVC ratio of less than 0·65, a history of acute respiratory illness in the past 12 weeks, a body-mass index greater than 35, and known severe pulmonary hypertension. Concomitant therapy with nintedanib and pirfenidone was allowed. The primary endpoint was change in FVC from randomisation to week 48, analysed in the intention-to-treat population with mixed-effects models for repeated measures. This trial is registered with ClinicalTrials.gov, number NCT01982968. Findings: Between June 1, 2014, and Sept 30, 2016, we screened 72 patients and randomly assigned 58 patients to receive surgery (n=29) or no surgery (n=29). 27 patients in the surgery group and 20 patients in the no surgery group had an FVC measurement at 48 weeks (p=0·041). Intention-to-treat analysis adjusted for baseline anti-fibrotic use demonstrated that the adjusted rate of change in FVC over 48 weeks was -0·05 L (95% CI -0·15 to 0·05) in the surgery group and -0·13 L (-0·23 to -0·02) in the non-surgery group (p=0·28). Acute exacerbations, respiratory-related hospitalisations, and deaths were less common in the surgery group, although the differences were not statistically significant. Dysphagia (eight [29%] of 28) and abdominal distention (four [14%] of 28) were the most common adverse events after surgery. There was one death in the surgery group and four deaths in the non-surgery group. Interpretation: Laparoscopic anti-reflux surgery in patients with IPF and abnormal acid GER is safe and well tolerated. A larger, well powered, randomised controlled study of anti-reflux surgery is needed in this population. Funding: US National Institutes of Health National Heart, Lung and Blood Institute.
abstract_id: PUBMED:33798455 Pirfenidone in patients with progressive fibrotic interstitial lung diseases other than idiopathic pulmonary fibrosis (RELIEF): a double-blind, randomised, placebo-controlled, phase 2b trial. Background: Pirfenidone has been shown to slow disease progression in patients with idiopathic pulmonary fibrosis (IPF). However, there are few treatment options for progressive fibrotic interstitial lung diseases (ILDs) other than IPF. In view of the pathomechanistic and clinical similarities between IPF and other progressive fibrotic ILDs, we aimed to assess the efficacy and safety of pirfenidone in patients with four non-IPF progressive fibrotic ILDs. Methods: We did a multicentre, double-blind, randomised, placebo-controlled, parallel phase 2b trial (RELIEF) in 17 centres with expertise in ILD in Germany. Eligible participants were patients aged 18-80 years with progressive fibrotic ILD due to four diagnoses: collagen or vascular diseases (ie, connective tissue disease-associated ILDs), fibrotic non-specific interstitial pneumonia, chronic hypersensitivity pneumonitis, or asbestos-induced lung fibrosis. Other eligibility criteria included a forced vital capacity (FVC) of 40-90% predicted, a diffusing capacity of the lung for carbon monoxide of 10-90% predicted, and an annual decline of FVC of at least 5% predicted despite conventional therapy, based on at least three measurements within 6-24 months before enrolment. Patients who had received any previous antifibrotic therapy were excluded. We randomly assigned patients (1:1) to either oral pirfenidone (267 mg three times per day in week 1, 534 mg three times per day in week 2, and 801 mg three times per day thereafter) or matched placebo, added to their ongoing medication. Randomisation was done centrally using permuted block randomisation with varying block sizes stratified by the four diagnostic groups. Patients, investigators, statisticians, monitors, and the study coordinator were masked to treatment assignment until database closure. The placebo-controlled study period was 48 weeks (including up-titration). The primary endpoint was absolute change in percentage of predicted FVC (FVC % predicted) from baseline to week 48 in the intention-to-treat population, with imputation of missing data by the smallest sum of squared differences and attribution of deceased patients to the lowest rank in a rank ANCOVA model. Additionally, we did linear mixed-model repeated measures slope analyses of FVC % predicted longitudinal data over the course of the study as a prespecified sensitivity analysis and post-hoc sensitivity analyses of the primary endpoint in the intention-to-treat population using imputation methods of last observation carried forward (LOCF) and a regression-based multiple imputation procedure. Safety was assessed in all patients who received at least one dose of study medication. This trial is registered with EudraCT 2014-000861-32; DRKS00009822 and is no longer recruiting. Findings: Between April 5, 2016, and Oct 4, 2018, we randomly assigned 127 patients to treatment: 64 to pirfenidone, 63 to placebo. After 127 patients had been randomised, the study was prematurely terminated on the basis of an interim analysis for futility triggered by slow recruitment.
After 48 weeks and in the overall population of 127 patients, rank ANCOVA with diagnostic group included as a factor showed a significantly lower decline in FVC % predicted in the pirfenidone group compared with placebo (p=0·043); the result was similar when the model was stratified by diagnostic group (p=0·042). A significant treatment effect was also observed when applying the LOCF and multiple imputation methods to analyses of the primary endpoint. The median difference (Hodges-Lehmann estimate) between pirfenidone and placebo groups for the primary endpoint was 1·69 FVC % predicted (95% CI -0·65 to 4·03). In the linear mixed-model repeated measures slope analysis of FVC % predicted, the estimated difference between treatment and placebo groups from baseline to week 48 was 3·53 FVC % predicted (95% CI 0·21 to 6·86) with imputation of deaths as prespecified, or 2·79 FVC % predicted (95% CI 0·03 to 5·54) without imputation. One death (non-respiratory) occurred in the pirfenidone group (2%) and five deaths (three of which were respiratory) occurred in the placebo group (8%). The most frequent serious adverse events in both groups were infections and infestations (five [8%] in the pirfenidone group, ten [16%] in the placebo group); general disorders including disease worsening (two [3%] in the pirfenidone group, seven [11%] in the placebo group); and cardiac disorders (one [2%] in the pirfenidone group, five [8%] in the placebo group). Adverse events (grade 3-4) of nausea (two patients on pirfenidone, two on placebo), dyspnoea (one patient on pirfenidone, one on placebo), and diarrhoea (one patient on pirfenidone) were also observed. Interpretation: In view of the premature study termination, results should be interpreted with care. Nevertheless, our data suggest that in patients with fibrotic ILDs other than IPF who deteriorate despite conventional therapy, adding pirfenidone to existing treatment might attenuate disease progression as measured by decline in FVC. Funding: German Center for Lung Research, Roche Pharma. abstract_id: PUBMED:33997131 Lived experiences of the disease journey among patients with idiopathic pulmonary fibrosis. Objective: This study aimed to explore the lived experiences of the disease journey and the care needs of patients with idiopathic pulmonary fibrosis (IPF). Methods: Face-to-face semi-structured interviews were conducted with a purposive sample of IPF patients admitted to the department of respiratory medicine in a tertiary hospital in Beijing. Interview data were analyzed using the thematic analysis method. In the end, 16 patients were interviewed. Results: Four themes emerged from the qualitative data: the long and confusing journey to reach a diagnosis, living with the disease, understanding the disease and treatment, and desire for continuity of care. A series of subthemes were also identified, including uncertainty of diagnosis, delays in the process, living with physical symptoms, living with emotional distress, loss of independence, uncertainty about the prognosis, questioning the cause of the disease, concern about the side effects of treatments, lack of continuity of care, and wanting a better quality of healthcare in community hospitals. Conclusions: Based on the findings, there is an urgent need to improve care delivery to this vulnerable population in China.
To meet their health needs, it is of paramount importance to develop effective education programs for health professionals and IPF patients and to improve care models within healthcare systems, especially in remote areas, to enhance care continuity in the community. abstract_id: PUBMED:32168781 Improvement in Subjective Symptoms and Tolerability in Response to Nintedanib Treatment in Elderly Patients with Idiopathic Pulmonary Fibrosis. The efficacy of nintedanib treatment in patients with idiopathic pulmonary fibrosis (IPF) was demonstrated in phase III trials. However, there are limited data on the significance of nintedanib in elderly patients aged ≥75 years. We retrospectively evaluated 54 patients newly treated with nintedanib, including 32 elderly individuals. Changes in modified Medical Research Council (mMRC) grade and COPD (chronic obstructive pulmonary disease) assessment test (CAT) score, as well as in forced vital capacity (FVC), were obtained 6 months before, at the time of, and 6 and 12 months after initiation of nintedanib treatment. Significant differences were observed in CAT scores between 6 months before treatment and baseline (p < 0.001), and between baseline and 6 months (p < 0.001) and 12 months (p < 0.001) after treatment. When subjective improvement was defined as an improvement in mMRC grade or CAT score by 1 or 3 points, respectively, 25 patients (46.3%) had significantly improved after 6 months of treatment; all of these had improved in CAT score. The tolerability of nintedanib was similar in elderly and younger patients. These findings suggest that CAT scores could be useful in the subjective assessment during nintedanib treatment, and that nintedanib is safe and effective for the treatment of the elderly population. abstract_id: PUBMED:23143842 Treating idiopathic pulmonary fibrosis with the addition of co-trimoxazole: a randomised controlled trial. Background: Idiopathic pulmonary fibrosis (IPF) is a fatal condition with limited treatment options. However, in a previous small study, co-trimoxazole was found to be beneficial. Methods: In a double-blind multicentre study, 181 patients with fibrotic idiopathic interstitial pneumonia (89% diagnosed as definite/probable IPF) were randomised to receive co-trimoxazole 960 mg twice daily or placebo for 12 months in addition to usual care. Measurements were made of forced vital capacity (FVC) (primary endpoint), diffusing capacity of carbon monoxide (Dlco) and EuroQol (EQ5D)-based utility, 6-minute walk test (6MWT) and Medical Research Council (MRC) dyspnoea score (secondary endpoints). All-cause mortality and adverse events were recorded (tertiary endpoints). Results: Co-trimoxazole had no effect on FVC (mean difference 15.5 ml (95% CI -93.6 to 124.6)), Dlco (mean difference -0.12 mmol/min/kPa (95% CI -0.41 to 0.17)), 6MWT or MRC dyspnoea score (intention-to-treat analysis). The findings of the per-protocol analysis were the same except that co-trimoxazole treatment resulted in a significant improvement in EQ5D-based utility (mean difference 0.12 (95% CI 0.01 to 0.22)), a reduction in the percentage of patients requiring an increase in oxygen therapy (OR 0.05 (95% CI 0.00 to 0.61)) and a significant reduction in all-cause mortality (co-trimoxazole 3/53, placebo 14/65, HR 0.21 (95% CI 0.06 to 0.78), p=0.02) compared with placebo. The use of co-trimoxazole reduced respiratory tract infections but increased the incidence of nausea and rash.
Conclusions: The addition of co-trimoxazole therapy to standard treatment for fibrotic idiopathic interstitial pneumonia had no effect on lung function but resulted in improved quality of life and a reduction in mortality in those adhering to treatment. ISRCTN22201583. abstract_id: PUBMED:37680626 A prospective observational cohort study of the efficacy of tofacitinib plus iguratimod on rheumatoid arthritis with usual interstitial pneumonia. Objectives: This study aims to assess the efficacy of tofacitinib (TOF) plus iguratimod (IGU) in rheumatoid arthritis (RA) with usual interstitial pneumonia (UIP) (RA-UIP). Methods: This was a prospective observational, single-center cohort study. Data from 78 RA-UIP patients treated with TOF plus IGU, IGU plus conventional synthetic disease-modifying anti-rheumatic drugs (csDMARDs), or csDMARDs alone were analyzed. Clinically relevant responses in RA activity assessment, pulmonary function tests (PFTs), and high-resolution computed tomography (HRCT) assessment at baseline and follow-up were compared between groups to evaluate the efficacy of TOF plus IGU. Results: A total of 78 patients were followed up for at least 6 months after treatment. There were significant changes in erythrocyte sedimentation rate (ESR), C-reactive protein (CRP), and Disease Activity Score 28-CRP (DAS28-CRP) during the follow-up within each treatment group, but there was no statistically significant difference between the two groups. After 6 months of TOF plus IGU treatment, forced vital capacity (FVC)% (84.7 ± 14.7 vs. 90.7 ± 15.4) and HRCT fibrosis score (7.3 ± 3.4 vs. 7.0 ± 5.6) showed a significant improvement compared to the csDMARDs group (P = 0.031, P = 0.015). The TOF plus IGU-treated patients had significantly higher regression and lower deterioration than the csDMARDs-treated patients (P = 0.026, P = 0.026) and had a significantly higher response (regression + stability), with overall response rates of 66.7% (16/24) vs. 35.7% (10/28) (P = 0.027), respectively. Conclusion: Our results indicate that TOF plus IGU can simultaneously relieve RA and RA-UIP and outperform csDMARDs, with a higher response rate in RA-UIP, which may make it a potential choice for "dual treat-to-target" therapy. abstract_id: PUBMED:19213772 Gelsolin expression is necessary for the development of modelled pulmonary inflammation and fibrosis. Background: Despite intense research efforts, the aetiology and pathogenesis of idiopathic pulmonary fibrosis remain poorly understood. Gelsolin, an actin-binding protein that modulates cytoskeletal dynamics, was recently highlighted as a likely disease modifier through comparative expression profiling and target prioritisation. Methods: To decipher the possible role of gelsolin in pulmonary inflammation and fibrosis, immunocytochemistry on tissue microarrays of human patient samples was performed, followed by computerised image analysis. The results were validated in the bleomycin-induced animal model of pulmonary inflammation and fibrosis using genetically modified mice lacking gelsolin expression. Moreover, to gain mechanistic insights into the mode of gelsolin activity, a series of biochemical analyses was performed ex vivo in mouse embryonic fibroblasts. Results: Increased gelsolin expression was detected in lung samples of patients with idiopathic interstitial pneumonia as well as in modelled pulmonary inflammation and fibrosis.
Genetic ablation of gelsolin protected mice from the development of modelled pulmonary inflammation and fibrosis, an effect attributed to attenuated epithelial apoptosis. Conclusions: Gelsolin expression is necessary for the development of modelled pulmonary inflammation and fibrosis, while caspase-3-mediated gelsolin fragmentation was shown to be an apoptotic effector mechanism in disease pathogenesis and a marker of lung injury. abstract_id: PUBMED:37599655 Expectations, symptoms, and quality of life before and after 1 year of Pirfenidone treatment in patients with idiopathic pulmonary fibrosis: A single-arm, open-label nonrandomized study. Background And Aims: Antifibrotic therapies reduce lung function decline in patients with idiopathic pulmonary fibrosis (IPF). This single-arm, open-label, nonrandomized study aimed to determine the influence of antifibrotic treatment on patients' reported symptoms and expectations of the therapy. Methods: Fifty-two patients with confirmed IPF at a mean age of 65 ± 8.63 years (73% male) completed the following surveys at baseline and after 12 months of Pirfenidone treatment: Short Form Health Survey (SF-36), St. George's Respiratory Questionnaire (SGRQ), Baseline Dyspnea Index (BDI), Fatigue Assessment Scale (FAS), Leicester Cough Questionnaire (LCQ), and Patient's Needs and Expectations Authors' Survey. Results: The patients' most important needs were access to novel therapy, fast and easy access to health centers specializing in IPF treatment, and improvement or maintenance of their general condition. These needs did not change with time, except that the right to decide on disease management became significantly more important after 12 months of treatment (p = 0.014). Quality of life per the SF-36, after 1 year of Pirfenidone treatment, significantly improved in the physical cumulative score (p = 0.004) and mental cumulative score (p = 0.003). Significant deteriorations were observed in bodily pain and vitality. For the remaining questionnaires (SGRQ, BDI, FAS, and LCQ), no significant changes over the course of the study were noticed. Around one in 10 patients receiving Pirfenidone therapy achieved general symptom improvement in all areas, that is, quality of life improvement as well as cough and dyspnea reduction. Conclusions: One year of antifibrotic treatment resulted in a general improvement in quality of life per the SF-36 questionnaire. Patients' expectations of disease management did not change; access to novel therapies and easy access to health centers specializing in IPF management remained their top needs. Answer: Based on the provided abstracts, it is not necessary to treat all patients with idiopathic pulmonary fibrosis (IPF) immediately or with the same approach, because the disease course and response to treatment vary significantly among individuals. One study found that some patients with IPF/UIP (usual interstitial pneumonia) remained stable for extended periods without treatment. In this study, 15% (6 out of 39) of the whole study group did not receive treatment and had stable disease at the end of the follow-up, with no patients from this group dying during the follow-up. This suggests that not all patients with IPF/UIP require immediate treatment (PUBMED:11587101). Other studies have explored different treatment options for IPF, including antifibrotic therapies such as pirfenidone and nintedanib, which have been shown to slow disease progression in some patients (PUBMED:33798455, PUBMED:32168781).
Additionally, interventions such as laparoscopic anti-reflux surgery for patients with abnormal acid gastro-oesophageal reflux and IPF have been investigated, with some evidence suggesting it is safe and well-tolerated, although a larger study is needed (PUBMED:30100404). Furthermore, the addition of co-trimoxazole to standard treatment for fibrotic idiopathic interstitial pneumonia resulted in improved quality of life and a reduction in mortality in those adhering to treatment, despite no effect on lung function (PUBMED:23143842). Another study found that tofacitinib plus iguratimod could simultaneously relieve rheumatoid arthritis and RA-UIP, suggesting a potential treatment strategy for patients with both conditions (PUBMED:37680626). The lived experiences of patients with IPF also highlight the importance of individualized care, as patients have expressed a need for improved care delivery, effective education programs, and better quality of healthcare in community hospitals (PUBMED:33997131). In conclusion, while some patients with IPF may remain stable without treatment, others may benefit from various therapeutic interventions. The decision to treat should be individualized based on the patient's disease course, comorbidities, and preferences, as well as the potential benefits and risks of available treatments.