Instruction: Freestanding midwifery units versus obstetric units: does the effect of place of birth differ with level of social disadvantage? Abstracts: abstract_id: PUBMED:22726575 Freestanding midwifery units versus obstetric units: does the effect of place of birth differ with level of social disadvantage? Background: Social inequity in perinatal and maternal health is a well-documented health problem even in countries with a high level of social equality. We aimed to study whether the effect of birthplace on perinatal and maternal morbidity, birth interventions and use of pain relief among low risk women intending to give birth in two freestanding midwifery units (FMU) versus two obstetric units in Denmark differed by level of social disadvantage. Methods: The study was designed as a cohort study with a matched control group. It included 839 low-risk women intending to give birth in an FMU, who were prospectively and individually matched on nine selected obstetric/socio-economic factors to 839 low-risk women intending OU birth. Educational level was chosen as a proxy for social position. Analysis was by intention-to-treat. Results: Women intending to give birth in an FMU had a significantly higher likelihood of uncomplicated, spontaneous birth with good outcomes for mother and infant compared to women intending to give birth in an OU. The likelihood of intact perineum, use of upright position for birth and water birth was also higher. No difference was found in perinatal morbidity or third/fourth degree tears, while birth interventions including caesarean section and epidural analgesia were significantly less frequent among women intending to give birth in an FMU. In our sample of healthy low-risk women with spontaneous onset of labour at term after an uncomplicated pregnancy, the positive results of intending to give birth in an FMU as compared to an OU were found to hold for both women with post-secondary education and the potentially vulnerable group of FMU women without post-secondary education. In all cases, women without post-secondary education intending to give birth in an FMU had comparable and, in some respects, more favourable outcomes when compared to women with the same level of education intending to give birth in an OU. In this sample of low-risk women, we found that the effect of intended place on birth outcomes did not differ with women's level of education. Conclusion: FMU care appears to offer important benefits for birthing women with no additional risk to the infant. Both for women with and without post-secondary education, intending to give birth in an FMU significantly increased the likelihood of a spontaneous, uncomplicated birth with good outcomes for mother and infant compared to women intending to give birth in an OU. All women should be provided with adequate information about different care models and supported in making an informed decision about the place of birth. abstract_id: PUBMED:28068929 Are freestanding midwifery units a safe alternative to obstetric units for low-risk, primiparous childbirth? An analysis of effect differences by parity in a matched cohort study. Background: Intrapartum complications and the use of obstetric interventions are more common in primiparous childbirth than in multiparous childbirth, leading to concern about out of hospital birth for primiparous women. 
The purpose of this study was to determine whether the effect of birthplace on perinatal and maternal morbidity and the use of obstetric interventions differed by parity among low-risk women intending to give birth in a freestanding midwifery unit or in an obstetric unit in the North Denmark Region. Methods: The study is a secondary analysis of data from a matched cohort study including 839 low-risk women intending birth in a freestanding midwifery unit (primary participants) and 839 low-risk women intending birth in an obstetric unit (individually matched control group). Analysis was by intention-to-treat. Conditional logistic regression analysis was applied to compute odds ratios and effect ratios with 95% confidence intervals for matched pairs stratified by parity (a toy sketch of this matched-pairs estimator appears after this item's answer). Results: On no outcome did the effect of birthplace differ significantly between primiparous and multiparous women. Compared with their counterparts intending birth in an obstetric unit, both primiparous and multiparous women intending birth in a freestanding midwifery unit were significantly more likely to have an uncomplicated, spontaneous birth with good outcomes for mother and infant and less likely to require caesarean section, instrumental delivery, augmented labour or epidural analgesia (although for caesarean section this trend did not attain statistical significance for multiparous women). Perinatal outcomes were comparable between the two birth settings irrespective of parity. Compared to multiparas, transfer rates were substantially higher for primiparas, but fell over time while rates for multiparas remained stable. Conclusions: Freestanding midwifery units appear to confer significant advantages over obstetric units to both primiparous and multiparous mothers, while their infants are equally safe in both settings. Our findings thus support the provision of care in freestanding midwifery units as an alternative to care in obstetric units for all low-risk women regardless of parity. In view of the global rise in caesarean section rates, we consider it an important finding that freestanding midwifery units show potential for reducing first-birth caesarean. abstract_id: PUBMED:30878254 An analysis of media reporting on the closure of freestanding midwifery units in England. Problem: Despite clinical guidelines and policy promoting choice of place of birth, 14 Freestanding Midwifery Units were closed between 2008 and 2015, with the closures reported in the media as justified by low use and financial constraints. Background: The Birthplace in England Programme found that freestanding midwifery units provided the most cost-effective birthplace for women at low risk of complications. Women planning birth in a freestanding unit were less likely to experience interventions and serious morbidity than those planning obstetric unit birth, with no difference in outcomes for babies. Methods: This paper uses an interpretative technique developed for policy analysis to explore the representation of these closures in 191 news articles, examining the public climate in which they occurred. Findings And Discussion: The articles focussed on underuse by women and financial constraints on services. Despite the inclusion of service user voices, the power of framing was held by service managers and commissioners. The analysis exposed how neoliberalist and austerity policies have privileged representation of individual consumer choice and market-driven provision as drivers of changes in health services.
This normative framing presents the reasons given for closure as hard to refute, and cultural norms persist that birth is safest in an obstetric setting, despite evidence to the contrary. Conclusion: The rise of neoliberalism and austerity in contemporary Britain has influenced the reform of maternity services, in particular the closure of midwifery units. Justifications given for closure silence other narratives, predominantly from service users, that attempt to present women's choice in terms of rights and a social model of care. abstract_id: PUBMED:28126592 Freestanding midwifery units: Maternal and neonatal outcomes following transfer. Background: the viability of freestanding midwifery units in Australia is restricted due to concerns over their safety, particularly for women and babies who require transfer. Aim: to compare the maternal and neonatal birth outcomes of women who planned to give birth at freestanding midwifery units and subsequently transferred to a tertiary maternity unit with the maternal and neonatal outcomes of a low-risk cohort of women who planned to give birth in a tertiary maternity unit. Methods: a descriptive study compared two groups of women with low-risk singleton pregnancies who were less than 28 weeks pregnant at booking: women who planned to give birth at a freestanding midwifery unit (n=494) who transferred to a tertiary maternity unit during the antenatal, intrapartum or postnatal periods (n=260) and women who planned to give birth at a tertiary maternity unit (n=3157). Primary outcomes were mode of birth, Apgar score of less than 7 at 5 minutes and admission to special care nursery or neonatal intensive care. Key Findings: the proportion of women who experienced a caesarean section was lower among the freestanding midwifery unit women who transferred during the intrapartum/postnatal period compared to women in the tertiary maternity unit group (16.1% versus 24.8% respectively). Other outcomes were comparable between the cohorts. Rates of primary outcomes in relation to stage of transfer varied when stratified by parity. Discussion: these descriptive results support the provision of care in freestanding midwifery units as an alternative to tertiary maternity units for women with low risk pregnancies at the time of booking. A larger study, powered to determine statistical significance of any differences in outcomes, is required. abstract_id: PUBMED:37247500 Births in freestanding midwifery-led units in Norway: What women view as important aspects of care. Objective: To describe what women view as important aspects of care when giving birth in freestanding midwifery-led units in Norway. Methods: Data from four open-ended questions in the Babies Born Better survey, Versions 1, 2 and 3, were used. We performed inductive content analysis to explore and describe women's experiences with the care they received. Results: In all, 190 women who had given birth in midwifery-led units in Norway between 2010 and 2020 responded to the B3 survey. The final sample comprised 182 respondents. The analysis yielded three main categories: 1) The immediate birth surroundings, 2) Personal and safe support, and 3) Organisational conditions. Conclusion: This study adds valuable knowledge regarding what women describe as important aspects of care in free-standing midwifery-led units. Women experience maternity services in these units as peaceful, flexible and family-friendly.
However, some women perceive the freestanding midwifery-led unit as a vulnerable service, mainly due to a lack of midwives on call and uncertainty around temporary closure of the freestanding midwifery-led units. This finding points to the importance of staffing of birth facilities to ensure that all women giving birth have available midwifery care at all times, which is recommended in the National guidelines for care during labour and birth. Predictability around place of birth for the upcoming birth is crucial for every woman and her family. These goals might be achieved by a stable, continuous maternity service in all geographical areas of the country. abstract_id: PUBMED:27984773 The economic costs of intrapartum care in Tower Hamlets: A comparison between the cost of birth in a freestanding midwifery unit and hospital for women at low risk of obstetric complications. Objective: to compare the economic costs of intrapartum maternity care in an inner city area for 'low risk' women opting to give birth in a freestanding midwifery unit compared with those who chose birth in hospital. Design: micro-costing of health service resources used in the intrapartum care of mothers and their babies during the period between admission and discharge, with data extracted from clinical notes. Setting: the Barkantine Birth Centre, a freestanding midwifery unit, and the Royal London Hospital's consultant-led obstetric unit, both run by the former Barts and the London NHS Trust in Tower Hamlets, a deprived inner city borough in east London, England, 2007-2010. Participants: maternity records of 333 women who were resident in Tower Hamlets and who satisfied the Trust's eligibility criteria for using the Birth Centre. Of these, 167 women started their intrapartum care at the Birth Centre and 166 started care at the Royal London Hospital. Measurements And Findings: women who planned their birth at the Birth Centre experienced continuous intrapartum midwifery care, higher rates of spontaneous vaginal delivery, greater use of a birth pool, lower rates of epidural use, higher rates of established breastfeeding and a longer post-natal stay, compared with those who planned for care in the hospital. The total average cost per mother-baby dyad for care where mothers started their intrapartum care at the Birth Centre was £1296.23, approximately £850 per patient less than the average cost per mother and baby who received all their care at the Royal London Hospital. These costs reflect intrapartum throughput using bottom-up costing per patient, from admission to discharge, including transfer, but excluding occupancy rates and the related running costs of the units. Key Conclusions And Implications For Practice: the study showed that intrapartum throughput in the Birth Centre could be considered cost-minimising when compared to hospital. Modelling the financial viability of midwifery units at a local level is important because it can inform the appropriate provision of these services. The findings from this study contribute a local perspective and thus add further weight to the evidence from the Birthplace Programme in support of freestanding midwifery unit care for women without obstetric complications. abstract_id: PUBMED:29024869 Mapping midwifery and obstetric units in England. Objective: to describe the configuration of midwifery units, both alongside and free-standing, and obstetric units in England.
Design: national survey amongst Heads of Midwifery in English maternity services. Setting: National Health Service (NHS) in England. Participants: English maternity services. Measurements: descriptive statistics of alongside midwifery units, free-standing midwifery units and obstetric units and their annual births/year in English maternity services. Findings: alongside midwifery units have nearly doubled since 2010 (n = 53 to 97); free-standing midwifery units have increased slightly (n = 58 to 61). There has been a significant reduction in maternity services without either an alongside or free-standing midwifery unit (from 75 to 32). The percentage of all births in midwifery units has trebled, now representing 14% of all births in England. This masks significant differences in the percentage of all births in midwifery units between different maternity services, with a spread of 4% to 31%. Key Conclusions: In some areas of England, women have no access to a local midwifery unit, despite the National Institute for Health & Clinical Excellence (NICE) recommending them as an important place of birth option for low risk women. The numbers of midwifery units have increased significantly in England since 2010, but this growth is almost exclusively in alongside midwifery units. The percentage of women giving birth in midwifery units varies significantly between maternity services, suggesting that many midwifery units are underutilised. Implications For Practice: Both the availability and utilisation of midwifery units in England could be improved. abstract_id: PUBMED:28320352 A comparison of intrapartum interventions and adverse outcomes by parity in planned freestanding midwifery unit and alongside midwifery unit births: secondary analysis of 'low risk' births in the Birthplace in England cohort. Background: For low risk women, there is good evidence that planned birth in a midwifery unit is associated with a reduced risk of maternal interventions compared with planned birth in an obstetric unit. Findings from the Birthplace cohort study have been interpreted by some as suggesting a reduced risk of interventions in planned births in freestanding midwifery units (FMUs) compared with planned births in alongside midwifery units (AMUs). However, possible differences have not been robustly investigated using individual-level Birthplace data. Methods: This was a secondary analysis of data on 'low risk' women with singleton, term, 'booked' pregnancies collected in the Birthplace national prospective cohort study. We used logistic regression to compare interventions and outcomes by parity in 11,265 planned FMU births and 16,673 planned AMU births, adjusted for potential confounders, using planned AMU birth as the reference group. Outcomes considered included adverse perinatal outcomes (the Birthplace primary outcome measure), instrumental delivery, intrapartum caesarean section, 'straightforward vaginal birth', third or fourth degree perineal trauma, blood transfusion and maternal admission for higher-level care. We used a significance level of 1% for all secondary outcomes. Results: There was no significant difference in adverse perinatal outcomes between planned AMU and FMU births.
The odds of instrumental delivery were reduced in planned FMU births (nulliparous: aOR 0.63, 99% CI 0.46-0.86; multiparous: aOR 0.41, 99% CI 0.25-0.68) and the odds of having a 'straightforward vaginal birth' were increased in planned FMU births compared with planned AMU births (nulliparous: aOR 1.47, 99% CI 1.17-1.85; multiparous: aOR 1.86, 99% CI 1.35-2.57). The odds of intrapartum caesarean section did not differ significantly between the two settings (nulliparous: p = 0.147; multiparous: p = 0.224). The overall pattern of findings suggested a trend towards lower intervention rates and fewer adverse maternal outcomes in planned FMU births compared with planned AMU births. Conclusions: The findings support the recommendation that 'low risk' women can be informed that planned birth in an FMU is associated with a lower rate of instrumental delivery and a higher rate of 'straightforward vaginal birth' compared with planned birth in an AMU; and that outcomes for babies do not appear to differ between FMUs and AMUs. abstract_id: PUBMED:30196150 Confidence: Fundamental to midwives providing labour care in freestanding midwifery-led units. The aim of this study was to reveal what enables, safeguards and sustains midwives to provide labour care in freestanding midwifery-led units. Design: A hermeneutic phenomenological study was undertaken in the Auckland region of New Zealand. In-depth interviews were conducted with 14 participants: 11 midwives who provide care in freestanding midwifery-led units and three obstetricians who provide antenatal consultations on site in midwifery-led units. Main Findings: Confidence is necessary to provide intrapartum care in freestanding midwifery units. This confidence is cultivated by working in the community or freestanding unit and believing this unit is an appropriate space for healthy women to labour and birth. Normal labour and birth are commonplace in this space, which in turn reinforces midwives' confidence. Maintaining confidence for midwives to work in these units requires trusting relationships in the midwifery team. Further, there needs to be mutually respectful relationships with obstetric colleagues. Midwives who have lesser experience, or experience in an obstetric unit only, may need support to step into the role of providing labour care in freestanding midwifery units. When midwives feel supported and witness women and families experiencing normal birth, their resolve to practise in this manner is strengthened. The midwife holds confidence. Key Conclusions And Implications For Practice: The confidence required to provide labour care in a midwifery-led unit is cultivated through immersion in these units. abstract_id: PUBMED:25361840 Evaluating Midwifery Units (EMU): a prospective cohort study of freestanding midwifery units in New South Wales, Australia. Objective: To compare maternal and neonatal birth outcomes and morbidities associated with the intention to give birth in two freestanding midwifery units and two tertiary-level maternity units in New South Wales, Australia. Design: Prospective cohort study. Participants: 494 women who intended to give birth at freestanding midwifery units and 3157 women who intended to give birth at tertiary-level maternity units. Participants had low risk, singleton pregnancies and were at less than 28+0 weeks' gestation at the time of booking.
Primary And Secondary Outcome Measures: Primary outcomes were mode of birth, Apgar score of less than 7 at 5 min and admission to the neonatal intensive care unit or special care nursery. Secondary outcomes were onset of labour, analgesia, blood loss, management of third stage of labour, perineal trauma, transfer, neonatal resuscitation, breastfeeding, gestational age at birth, birth weight, severe morbidity and mortality. Results: Women who planned to give birth at a freestanding midwifery unit were significantly more likely to have a spontaneous vaginal birth (AOR 1.57; 95% CI 1.20 to 2.06) and significantly less likely to have a caesarean section (AOR 0.65; 95% CI 0.48 to 0.88). There was no significant difference in the AOR of 5 min Apgar scores, however, babies from the freestanding midwifery unit group were significantly less likely to be admitted to neonatal intensive care or special care nursery (AOR 0.60; 95% CI 0.39 to 0.91). Analysis of secondary outcomes indicated that planning to give birth in a freestanding midwifery unit was associated with similar or reduced odds of intrapartum interventions and similar or improved odds of indicators of neonatal well-being. Conclusions: The results of this study support the provision of care in freestanding midwifery units as an alternative to tertiary-level maternity units for women with low risk pregnancies at the time of booking. Answer: The effect of place of birth does not appear to differ with the level of social disadvantage when comparing freestanding midwifery units (FMUs) to obstetric units (OUs). A study conducted in Denmark found that women intending to give birth in an FMU had a significantly higher likelihood of having an uncomplicated, spontaneous birth with good outcomes for both mother and infant compared to those intending to give birth in an OU. This was true for both women with post-secondary education and those without, indicating that the positive results of intending to give birth in an FMU were consistent regardless of the women's level of education, which was used as a proxy for social position (PUBMED:22726575). Furthermore, the study found no difference in perinatal morbidity or severe perineal tears between the two groups, while birth interventions such as caesarean sections and epidural analgesia were less frequent among women intending to give birth in an FMU. The findings suggest that FMU care offers important benefits without additional risk to the infant and that all women should be provided with adequate information about different care models to make an informed decision about the place of birth (PUBMED:22726575). In summary, the available evidence indicates that the effect of intended place of birth on outcomes does not differ with women's level of education or social disadvantage, supporting the provision of care in FMUs as a beneficial alternative to OUs for low-risk women.
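The Danish cohort analyses above (PUBMED:22726575, PUBMED:28068929) fit conditional logistic regression to 1:1 matched pairs. For a binary outcome in 1:1 matched data, the conditional maximum-likelihood odds ratio reduces to the ratio of the two kinds of discordant pairs, which makes the estimator easy to illustrate. The sketch below is a minimal illustration of that reduction, not the authors' code; the function name and the toy data are invented for the example.

```python
from math import exp, log, sqrt

def matched_pair_or(pairs):
    """Conditional ML odds ratio for a binary outcome in 1:1 matched data.

    Each pair is (fmu_event, ou_event): 1 if the outcome (e.g. caesarean
    section) occurred for that member of the matched pair, else 0.
    Concordant pairs drop out of the conditional likelihood; the estimate
    is the ratio of the two kinds of discordant pairs.
    """
    b = sum(1 for fmu, ou in pairs if fmu == 1 and ou == 0)  # event in FMU member only
    c = sum(1 for fmu, ou in pairs if fmu == 0 and ou == 1)  # event in OU member only
    or_hat = b / c
    se = sqrt(1 / b + 1 / c)  # standard error of log(OR) for matched pairs
    ci = (exp(log(or_hat) - 1.96 * se), exp(log(or_hat) + 1.96 * se))
    return or_hat, ci

# Toy data: 4 pairs with an event only in the FMU member, 8 with an event
# only in the OU member, plus concordant pairs that carry no information.
pairs = [(1, 0)] * 4 + [(0, 1)] * 8 + [(1, 1)] * 5 + [(0, 0)] * 3
print(matched_pair_or(pairs))  # OR = 0.5, i.e. fewer events in the FMU arm
```

Stratifying by parity, as in PUBMED:28068929, amounts to running this estimator separately on the primiparous and multiparous pairs.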
Instruction: Is Science Built on the Shoulders of Women? Abstracts: abstract_id: PUBMED:27276004 Is Science Built on the Shoulders of Women? A Study of Gender Differences in Contributorship. Purpose: Women remain underrepresented in the production of scientific literature, and relatively little is known regarding the labor roles played by women in the production of knowledge. This study examined labor roles by gender using contributorship data from science and medical journals published by the Public Library of Science (PLOS), which require each author to indicate their contribution to one or more of the following tasks: (1) analyzed the data, (2) conceived and designed the experiments, (3) contributed reagents/materials/analysis tools, (4) performed the experiments, and (5) wrote the paper. Method: The authors analyzed contribution data from more than 85,000 articles published between 2008 and 2013 in PLOS journals with respect to gender using both descriptive and regression analyses. Results: Gender was a significant variable in determining the likelihood of performing a certain task associated with authorship. Women were significantly more likely to be associated with performing experiments, and men were more likely to be associated with all other authorship roles. This holds true controlling for academic age: Although experimentation was associated with academically younger scholars, the gap between male and female contribution to this task remained constant across academic age. Inequalities were observed in the distribution of scientific labor roles. Conclusions: These disparities have implications for the production of scholarly knowledge, the evaluation of scholars, and the ethical conduct of science. Adopting the practice of identifying contributorship rather than authorship in scientific journals will allow for greater transparency, accountability, and equitable allocation of resources. abstract_id: PUBMED:34749284 The built environment and perceived HIV risk among young women living in the peri-urban slum of Kibra, Nairobi, Kenya. This study reports on the lived experiences of young women living in a peri-urban slum in Kenya and its impact on perceived HIV risk and prevention needs. Guided by the theory of gender and power and postcolonial theory, 73 women aged 15-24 years participated in individual and focus group interviews. Results revealed that the built environment inside and outside the home, such as inadequate physical space and lack of security, impacted perceived HIV risk. To have meaningful and sustainable change, HIV prevention efforts must address social structures that impact daily lived experiences of young women. abstract_id: PUBMED:34481472 Identifying urban built environment factors in pregnancy care and maternal mental health outcomes. Background: Risk factors related to the built environment have been associated with women's mental health and preventive care. This study sought to identify built environment factors that are associated with variations in prenatal care and subsequent pregnancy-related outcomes in an urban setting. Methods: In a retrospective observational study, we characterized the types and frequency of prenatal care events that are associated with the various built environment factors of the patients' residing neighborhoods.
In comparison to women living in higher-quality built environments, we hypothesize that women who reside in lower-quality built environments experience different patterns of clinical events that may increase the risk for adverse outcomes. Using machine learning, we performed pattern detection to characterize the variability in prenatal care concerning encounter types, clinical problems, and medication prescriptions. Structural equation modeling was used to test the associations among built environment, prenatal care variation, and pregnancy outcome. The main outcome was postpartum depression (PPD) diagnosis within 1 year following childbirth. The exposures were the quality of the built environment in the patients' residing neighborhoods. Electronic health records (EHR) data of pregnant women (n = 8,949) who had a live delivery at an urban academic medical center from 2015 to 2017 were included in the study. Results: We discovered prenatal care patterns that were summarized into three common types. Women who experienced the prenatal care pattern with the highest rates of PPD were more likely to reside in neighborhoods with homogeneous land use, lower walkability, lower air pollutant concentration, and lower retail floor ratios after adjusting for age, neighborhood average education level, marital status, and income inequality. Conclusions: In an urban setting, multi-purpose and walkable communities were found to be associated with a lower risk of PPD. Findings may inform urban design policies and provide awareness for care providers on the association of patients' residing neighborhoods and healthy pregnancy. abstract_id: PUBMED:19256415 Women's participation in science. The participation of women in higher education in Mexico took place in the late 19th and early 20th century. The rise of women's enrollment in universities, known as the "feminization of enrollment", occurred in the last thirty years. In this review we analyze how the new conditions that facilitated better access to higher education are reflected in the inclusion of women in science. We include an overview of the issues associated with a change in the demographics of enrollment, segregation of academic areas between men and women and participation in postgraduate degrees. We also review the proportion of women in science. While in higher education the ratio between men and women is almost 50-50, and in some areas the presence of women is even higher, in the field of scientific research women account for barely 30% of professionals. This is largely due to structural conditions that limit the access of women to higher positions of power that have been predominantly taken by men. abstract_id: PUBMED:36361047 The Association between Changes in Built Environment and Changes in Walking among Older Women in Portland, Oregon. Some cross-sectional evidence suggests that the objectively measured built environment can encourage walking among older adults. We examined the associations of the objectively measured built environment with change in self-reported walking among older women by using data from the Study of Osteoporotic Fractures (SOF). We evaluated the longitudinal associations between built environment characteristics and walking among 1253 older women (median age = 71 years) in Portland, Oregon using generalized estimating equation models. Built environment characteristics included baseline values and longitudinal changes in distance to the closest bus stop, light rail station, commercial area, and park.
A difference of 1 km in the baseline distance to the closest bus stop was associated with a 12% decrease in the total number of blocks walked per week during follow-up (eβ = 0.88, 95% CI: 0.78, 0.99). Our study provided limited support for an association between neighborhood transportation and changes in walking among older women. Future studies should consider examining both objective measures and perceptions of the built environment. abstract_id: PUBMED:35742401 Impacts of Built Environment on Risk of Women's Lung Cancer: A Case Study of China. Built environment factors such as air pollution are associated with the risk of respiratory disease, but few studies have investigated this in depth. We aimed to evaluate the association between the built environment and Chinese women's lung cancer incidence data from the China Cancer Registry Annual Report 2017, which covered 345,711,600 people and 449 qualified cancer registries in mainland China. The air quality indicator (PM2.5) and other built environment data were obtained from the China Statistical Yearbook and other officially approved materials. An exploratory regression tool was applied by using Chinese women's lung cancer incidence data (Segi population) as the dependent variable, and the PM2.5 index and other built environment factors as the independent variables. An apparent clustering region with a high incidence of women's lung cancer was discovered, including regions surrounding Bohai Bay and the three Chinese northeastern provinces, Heilongjiang, Liaoning and Inner Mongolia. Besides air quality, built environment factors were found to have a weak but clear impact on lung cancer incidence. Land-use intensity and the greening coverage ratio were positively correlated, and the urbanization rate and population density were negatively correlated, with lung cancer incidence. The role of green spaces in Chinese women's lung cancer incidence has not been proven. abstract_id: PUBMED:37040736 Challenges and Opportunities for Data Science in Women's Health. The intersection of women's health and data science is a field of research that has historically trailed other fields, but more recently it has gained momentum. This growth is being driven not only by new investigators who are moving into this area but also by the significant opportunities that have emerged in new methodologies, resources, and technologies in data science. Here, we describe some of the resources and methods being used by women's health researchers today to meet challenges in biomedical data science. We also describe the opportunities and limitations of applying these approaches to advance women's health outcomes and the future of the field, with emphasis on repurposing existing methodologies for women's health. abstract_id: PUBMED:28940674 The built environment and older adults: A literature review and an applied approach to engaging older adults in built environment improvements for health. Aims And Objectives: This paper presents a review of the literature about the built environment as it impacts the health of older people. It then introduces the gerontological nurse and researcher to the Our Voice framework for engaging older people as citizen scientists in order to empower them as agents of change in improving their local built environment and ultimately advancing community health. Background: Community-level strategies to promote successful ageing in place are critical both to optimising health outcomes and containing healthcare costs.
Such strategies must take into account the influence of the built environment both on individual health behaviours and on overall community health. At the same time, the perspectives and experiences of older people themselves ought to inform policies and practices in a systematic way. Design: Integrative literature review. Method: A wide scan of English language articles published in the EMBASE, PubMed and CINAHL bibliographic databases was conducted. Additional articles were sourced by mining relevant reference lists (i.e., snowball sampling). Papers included were published between 2005 and 2016. Results: Three distinct components emerged from the review: the impact of the built environment on health, in particular the health of older persons; citizen science and its applicability for older people research; and the promise of the Our Voice citizen science framework to activate changes in the built environment that improve older people's health. Conclusion: The ageing of the world's population brings with it an increased population-level risk of chronic disease and disability. We present the Our Voice framework, developed by researchers at Stanford University, as a promising strategy for engaging and empowering older people as citizen scientists, as a framework to apply to gerontological nursing and improving community health. Implications For Practice: Gerontology nurses are encouraged to: (i) Recognise the impact of the built environment and other community-level factors on the health of their patients. (ii) Encourage older adults to take an active role in documenting features of their environments that promote or hinder healthy living. (iii) Support policies and programmes that promote healthy environments. abstract_id: PUBMED:35122410 Outlook of women in science: an interview with our authors. Diversity in research teams ties alternative perspectives into research projects, and this can fast-forward scientific progress. Concerted efforts have been aimed at encouraging and supporting women to pursue a career in science, yet a gender disparity can still be observed at senior positions, with fewer women in leadership roles. To get insight into how the current landscape for women in science is perceived by different career stages, we interviewed female authors of Molecular Oncology from diverse career stages and disciplines about their inspiration, challenges they have faced as scientists as well as their thoughts on how gender diversity can be further enhanced. abstract_id: PUBMED:32929006 Open science, communal culture, and women's participation in the movement to improve science. Science is undergoing rapid change with the movement to improve science focused largely on reproducibility/replicability and open science practices. This moment of change, in which science turns inward to examine its methods and practices, provides an opportunity to address its historic lack of diversity and noninclusive culture. Through network modeling and semantic analysis, we provide an initial exploration of the structure, cultural frames, and women's participation in the open science and reproducibility literatures (n = 2,926 articles and conference proceedings). Network analyses suggest that the open science and reproducibility literatures are emerging relatively independently of each other, sharing few common papers or authors.
We next examine whether the literatures differentially incorporate collaborative, prosocial ideals that are known to engage members of underrepresented groups more than independent, winner-takes-all approaches. We find that open science has a more connected, collaborative structure than does reproducibility. Semantic analyses of paper abstracts reveal that these literatures have adopted different cultural frames: open science includes more explicitly communal and prosocial language than does reproducibility. Finally, consistent with literature suggesting the diversity benefits of communal and prosocial purposes, we find that women publish more frequently in high-status author positions (first or last) within open science (vs. reproducibility). Furthermore, this finding is further patterned by team size and time. Women are more represented in larger teams within reproducibility, and women's participation is increasing in open science over time and decreasing in reproducibility. We conclude with actionable suggestions for cultivating a more prosocial and diverse culture of science. Answer: The question "Is Science Built on the Shoulders of Women?" can be addressed by examining the roles and contributions of women in scientific research and literature. According to a study examining gender differences in contributorship in science and medical journals published by the Public Library of Science (PLOS), women are underrepresented in the production of scientific literature. The study found that gender was a significant variable in determining the likelihood of performing certain tasks associated with authorship. Women were more likely to be associated with performing experiments, while men were more likely to be associated with other authorship roles such as designing experiments, analyzing data, contributing reagents/materials/analysis tools, and writing the paper. This disparity remained constant across academic age, suggesting that inequalities exist in the distribution of scientific labor roles (PUBMED:27276004). Furthermore, the participation of women in higher education and science in Mexico shows that while there has been a "feminization of enrollment" with nearly equal ratios of men and women in higher education, women account for only about 30% of professionals in scientific research. This is attributed to structural conditions that limit women's access to higher positions of power, which have been predominantly occupied by men (PUBMED:19256415). In the context of the open science movement, which focuses on reproducibility/replicability and open science practices, women's participation is more pronounced in the open science literature compared to the reproducibility literature. Open science literature is characterized by more communal and prosocial language, which is known to engage members of underrepresented groups, including women, more than independent, winner-takes-all approaches. Women publish more frequently in high-status author positions within open science than in reproducibility, although this varies by team size and over time (PUBMED:32929006). In summary, while women contribute significantly to scientific research, particularly in roles involving experimentation, they remain underrepresented in other key authorship roles and in higher positions of power within the scientific community. The open science movement appears to offer a more inclusive and collaborative environment that may encourage greater participation by women in science.
Instruction: Is transvaginal elastography useful in pre-operative diagnosis of cervical cancer? Abstracts: abstract_id: PUBMED:22717123 Is transvaginal elastography useful in pre-operative diagnosis of cervical cancer? Objective: To evaluate the clinical value of transvaginal elastography (TVES) in diagnosing cervical malignancies by detecting changes of tissue stiffness. Methods: One hundred and ten consecutive patients with cervical lesions were enrolled. Pathological results were used as the gold standard. TVES was employed to detect the stiffness changes of the cervix. Strain ratio was calculated and compared between the benign and malignant lesions. The depth of invasion into the stroma in 56 cases of cervical cancer, measured by TVES, was recorded and compared with the pathological results. The intraclass correlation coefficient (ICC) was used to analyze the reproducibility. Results: The strain ratio of malignant lesions was much higher than that of the benign lesions (8.19±5.66 vs. 2.81±2.24, P<0.01). Area under the curve (AUC) was 0.905 with a 95% CI (0.835-0.976). The best cut-off point of strain ratio value was 4.53. Specificity and sensitivity for the best cut-off point were 0.788 and 0.897, respectively. Mean depth of the 56 malignant lesions was 17.8±7.4 mm measured by TVES (range 5.4-43.1 mm) and 11.5±8.8 mm measured by pathological samples (range 3.7-38.4 mm). ICC of the 2 methods were 0.87 (95% CI 0.863-0.947) and 0.931 (95% CI 0.902-0.952) for the 2 observers. Conclusions: TVES was a useful technique in confirming the diagnoses of cervical cancer and in estimating the infiltrating region. When the strain ratio of a cervical lesion is higher than 4.53, it can be diagnosed as malignant with confidence. abstract_id: PUBMED:29044684 Evaluation of the Cervix After Cervical Conization by Transvaginal Elastography. Objectives: The purpose of this study was to investigate the stiffness of the cervix after the loop electrosurgical excision procedure (LEEP) by transvaginal elastography and its potential ability to predict future pregnancy. Methods: A retrospective study included patients of reproductive age who underwent LEEP for cervical high-grade squamous intraepithelial lesions on the basis of colposcopic findings and who desired fertility. The characteristics on conventional transvaginal ultrasonography and elastography before and 6 months after LEEP were reviewed and analyzed. Each case had a 12-month follow-up, and the information on pregnancy and a cervical cytologic examination was recorded. Results: Fifty-three patients who completed the 12-month follow-up were included in the analysis. Thirty-two cases (group 1) were pregnant during the follow-up, and the other 21 (group 2) had pregnancy failure. The cervical lengths of groups 1 and 2 as measured by traditional transvaginal ultrasonography were similar before and 6 months after LEEP. The resistive index of group 1 was significantly higher 6 months after LEEP than before LEEP (mean ± SD, 0.88 ± 0.36 versus 0.42 ± 0.22; P = .007). The mean elasticity score for the cervix was statistically significantly lower after LEEP (2.21 ± 0.53) than before (3.46 ± 0.78; P = .001) in group 1. However, in group 2, the mean elasticity scores were similar after LEEP (3.19 ± 0.58) and before (3.25 ± 0.66; P = .75). Conclusions: Our results suggest that elastography was a useful technique for evaluating the stiffness of the cervix after LEEP. Elastography could provide a potential means to predict future pregnancy after LEEP.
abstract_id: PUBMED:31066099 Transvaginal Real-time Shear Wave Elastography in the Diagnosis of Cervical Disease. Objectives: To explore the value of shear wave elastography in the diagnosis of cervical disease. Methods: This work was a retrospective analysis of 246 cases of cervical lesions confirmed by transvaginal conventional ultrasound, shear wave elastography, and a cytologic test. The lesions were divided into 2 groups according to the final pathologic results: a malignant cervical group and a benign cervical group. In addition, the normal cervix was set as the control group. Results: The maximum and mean shear wave velocity values ± SD were 5.24 ± 1.11 and 4.91 ± 1.12 m/s for the malignant cervical group, 3.93 ± 0.39 and 3.53 ± 0.52 m/s for the benign cervical group, and 3.27 ± 0.31 and 2.86 ± 0.23 m/s for the normal cervix, respectively. The areas under the receiver operating characteristic curves for the maximum and mean shear wave velocity in the differential diagnosis of a normal cervix and benign cervical tumors were 0.909 and 0.878 (both P < .001), whereas in the differential diagnosis of benign and malignant cervical tumors, they were 0.909 and 0.895 (both P < .001). Conclusions: Shear wave elastography can quantitatively analyze the elastic characteristics of cervical diseases, help differentially diagnose cervical diseases, accurately determine the extent of tumor invasion, and improve effective clinical staging and treatment. abstract_id: PUBMED:28211804 Comparison of elastography based on transvaginal ultrasound and MRI in assessing parametrial invasion of cervical cancer. Objective: The purpose of this study was to evaluate the clinical value of transvaginal elastography (TVES) combined with high-resolution transvaginal ultrasound (TVS) in the detection of parametrial invasion in cervical cancer and to compare the diagnostic performance with magnetic resonance imaging (MRI). Materials And Methods: 52 women with histologically confirmed cervical cancer over a 2-year period were staged using International Federation of Gynecology and Obstetrics (FIGO) criteria and underwent MRI and TVES combined with TVS according to a standardized protocol before treatment. When assessing parametrial involvement with TVS, MRI, and the combination of TVES and TVS, the findings were recorded and compared with histopathological results after surgery in early-stage disease (stage ⩽ IIa). Sensitivity, specificity, accuracy, positive predictive value (PPV) and negative predictive value (NPV) were calculated for each method independently; subsequently, a matched-sample analysis was performed by using McNemar's test or chi-square test. Results: Of 52 patients, 39 had early-stage disease (stage ⩽ IIa), and 13 had advanced-stage disease (stage ⩾ IIb) according to conventional FIGO staging. For the detection of parametrial infiltration, the diagnostic sensitivity of both MRI and the combination of TVS and TVES was statistically higher than that of TVS alone in early-stage cervical cancer (P = 0.03 < 0.05). Both MRI and the combination of TVS and TVES had a sensitivity of 72.73%; specificity rates of 82.14% for MRI and 78.57% for the combination of TVES and TVS; and the diagnostic accuracy rates of 79.49% for MRI and 76.92% for the combination of TVES and TVS. A matched sample analysis revealed no statistically significant difference between the diagnostic performance of MRI and the combination of TVES and TVS in the assessment of parametrial invasion (all P values > 0.05).
Conclusion: TVES combined with TVS performed by a dedicated gynecologic radiologist should be considered a promising and economical method for pre-operative work-up for cervical cancer. abstract_id: PUBMED:34376298 Evaluation of Pre-malignant Lesions of the Uterine Cervix by Shear Wave Elastography: A New Diagnostic Tool. The objective of the study was to evaluate the difference in the stiffness between a healthy cervix (no pre-invasive lesions [NPILs]) and a cervix with a pre-invasive lesion (PIL). In the PIL group, we determined whether there was a difference in stiffness between the cervix with persistent low-grade lesions (>2 y, LSIL-persistent) and that with high-grade lesions (HSILs). Evaluation was performed using 2-D shear-wave elastography (SWE) in the midsagittal plane of the uterine cervix (UC) at 0.5 cm (cervical canal, anterior and posterior cervical lips). In this prospective observational study (consecutive series), we evaluated 96 non-pregnant women: a group with PIL (LSIL-persistent, 22 cases; HSIL, 26 cases) with indications for cervical conization (48 cases) and a group without UC pathology (NPIL, 48 cases). Although we did not observe statistically significant differences (SSDs) in epidemiological characteristics, we did find an SSD in the speed and stiffness between the PIL versus NPIL groups at all evaluated depths (speed: 4.1 m/s vs 3.0 m/s, stiffness: 58.6 and 34.5 kPa in the PIL and NPIL groups, respectively, p < 0.001). An SSD in speed and stiffness (speed: 4.9 m/s vs. 3.2 m/s, and stiffness: 76.1 and 38.0 kPa) between the HSIL (26 cases) and LSIL-persistent (22 cases) groups, respectively, was also detected (p < 0.001). The area under the curve of speed differentiation between a cervix with HSILs and without lesions was 73.4% (95% confidence interval [CI]: 63.1-83.7), and the best cutoff of speed was 3.25 m/s (sensitivity = 62.5%, 95% CI: 47.3-76.0), with a specificity of 75.5% (95% CI: 60.4-87.1). abstract_id: PUBMED:31186698 Study on value of ultrasonic elastography in diagnosis of clinical staging of cervical cancer and efficacy evaluation of radiotherapy. The aim of this study was to investigate the application of ultrasonic elastography in the diagnosis of the clinical staging of cervical cancer (CC) and its value in evaluating the treatment effect of CC. A total of 160 suspected CC patients treated in our hospital from September 2016 to March 2018 were collected. Transvaginal conventional ultrasound and ultrasonic elastography were performed on patients to compare the results of the two in the diagnosis of the pathology and clinical staging of CC. Radiotherapy was used for the 75 patients confirmed as CC among the 160 suspected CC patients. The difference in strain ratio (SR) values between conventional ultrasound and elastic ultrasound in evaluating treatment efficacy in CC patients was compared. The sensitivity (94.67%), specificity (92.94%) and diagnostic accordance rate (93.75%) of ultrasonic elastography for CC were significantly higher than those of conventional ultrasound, with a statistically significant difference (P<0.001). The sensitivity and diagnostic accordance rate of ultrasonic elastography for the pathological diagnosis of CC in stages III and IV were significantly higher than those of conventional ultrasound, with a statistically significant difference (P<0.05). The sensitivity, specificity and diagnostic accordance rate of the elastic ultrasound SR value in the efficacy evaluation of radiotherapy in CC patients were higher than those of conventional ultrasound.
Except for the specificity, the other two P-values were <0.05, with a statistically significant difference. The elastography images of different stages of CC have some characteristic features. Ultrasonic elastography has a certain clinical value for the diagnosis and efficacy evaluation of CC. abstract_id: PUBMED:38282023 The role of cervical elastography in the differential diagnosis of preinvasive and invasive lesions of the cervix. Study Objective: This study aims to evaluate the role of cervical elastography in the differential diagnosis of preinvasive and invasive lesions of the cervix. Materials And Methods: A total of 95 women participated in this prospective study and were divided into the following groups: 19 healthy subjects (group 1) with normal cervicovaginal smear (CVS) and negative human papillomavirus test (HPV DNA), 19 women with normal cervical biopsy and normal final pathological result of cervical biopsy (group 2), 19 women with low-grade squamous intraepithelial lesion (LSIL) (group 3), 19 women with high-grade squamous intraepithelial lesion (HSIL) (group 4), and 19 women with cervical cancer (group 5). Clinical, demographic, histopathological, and elastographic results were compared between these groups. Results: Comparing the groups, statistically significant differences were found in age (40.42 ± 8.31 vs. 39.53 ± 8.96 vs. 38.79 ± 9.53 vs. 40.74 ± 7.42 vs. 54.63 ± 12.93; p < 0.001), gravida (1.74 ± 1.33 vs. 2.16 ± 1.68 vs. 2.21 ± 1.96 vs. 2.53 ± 1.93 vs. 4.63 ± 2.17; p < 0.001), parity (1.37 ± 0.68 vs. 1.68 ± 1.20 vs. 1.58 ± 1.30 vs. 2.00 ± 1.67 vs. 3.37 ± 1.61; p < 0.001), and the proportion of patients at menopause (10.5% vs. 15.8% vs. 10.5% vs. 5.3% vs. 57.9%; p < 0.01) (Table 1). However, no statistically significant difference was found in the number of abortions, BMI, mode of delivery, smoking, additional disease status, history of surgery, and family history (p > 0.05) (Table 2). As a result of the applied ROC analysis, mean cervical elastographic stiffness degree (ESD) was found to be an influential factor in predicting cervical cancer (p < 0.05). The mean cut-off value was 44.65%, with a sensitivity of 94.7% and a specificity of 96.1% (Table 7). Conclusion: Measurement of ESD by elastography is a low-cost, easily applicable, and non-invasive indicator that can distinguish cervical cancer from normal cervical and preinvasive lesions. However, it is unsuitable for distinguishing preinvasive cervical lesions from a normal cervix. abstract_id: PUBMED:36057675 Advances in the clinical application of ultrasound elastography in uterine imaging. Changes in tissue stiffness caused by physiological or pathological alterations in tissue structure can be identified earlier than clinical features. Pathological processes such as uterine fibrosis, adenomyosis, endometrial lesions, infertility, and premature birth can manifest as tissue elasticity changes. In clinical settings, elastography techniques based on ultrasonography, optical coherence tomography, and magnetic resonance imaging are widely used for noninvasive measurement of mechanical properties in patients, providing valuable tools and information for diagnosis and treatment. Ultrasound elastography (USE) plays a critical role in obstetrics and gynecology clinical work because of its simplicity, non-invasiveness, and repeatability.
This article reviews the recent progress of USE in uterine tumor diagnosis (especially early diagnosis and treatment effect evaluation), prediction of preterm birth, and intrauterine insemination. We believe that USE, especially shear wave elastography, may serve as a potential means to assess tissue stiffness, thereby improving the diagnosis and treatment of adenomyosis, fibroids, endometrial lesions, cervical cancer, and precise management of preterm birth and intrauterine insemination monitoring. abstract_id: PUBMED:34049726 Advancements in the Application of Ultrasound Elastography in the Cervix. Ultrasound elastography is a modern imaging technique that has developed rapidly in recent years. It enables objective measurement of tissue stiffness, a physical property intuitive to the human sense of touch. This novel technology has become a hotspot and plays a major role in scientific research and academic practice. Presently, ultrasound elastography has been used in the identification of benign and malignant tumors in superficial organs, such as breast and thyroid, providing clinically accurate diagnosis and treatment. The method has also been widely used for the liver, kidney, prostate, lymph nodes, blood vessels, skin and muscle system. In application to cervical lesions, ultrasound elastography can distinguish a normal cervix from an abnormal cervix and differentiate benign from malignant lesions. It can significantly improve the diagnostic specificity for cervical cancer and is also useful for assessing infiltration depth and stage of cervical cancer, as well as predicting chemoradiotherapy treatment response. For cervical evaluation during pregnancy, ultrasound elastography is useful for assessing cervical softening and predicting premature delivery and outcome of induced labor. This article reviews the principles of ultrasound elastography as well as the current status and limitations in its application for cervical lesions and the cervix during pregnancy. abstract_id: PUBMED:32782540 Value of shear wave elastography in the diagnosis and evaluation of cervical cancer. The aim of the present study was to explore the value of shear wave elastography (SWE) in the differential diagnosis of cervical disease and to evaluate the infiltration of cervical cancer. A total of 40 inpatients with cervical cancer, 40 inpatients with benign cervical lesions and 40 healthy volunteers encountered between October 2014 and January 2017 were enrolled. All patients and volunteers underwent conventional ultrasound (US) and SWE examinations. The malignancy and the size (including long, transverse and anteroposterior diameter) of the lesion were assessed on US. The elastic score, strain ratio, shear wave speed (SWS) and the size of lesions were determined on SWE. Infiltration of the uterus and vaginal vault were also evaluated on US and SWE. The SWS values of the cervical cancer, benign cervical lesion and normal cervix groups were compared. The results suggested that the optimal cut-off elasticity score for predicting cervical cancers was 3 points. The strain ratio between the cervical cancers and the benign cervical lesions exhibited a significant difference (P<0.01). The mean value of SWS for cervical cancers was significantly higher than that of benign cervical lesions and the normal cervix (P<0.05). Regarding the lesion size and volume, SWE and pathological measurements were larger than those determined by US (P<0.05 for each).
The lesion volume on SWE and pathological measurements exhibited no significant difference (P>0.05). Using the pathological diagnosis of focal infiltration of the uterus and vaginal vault as the reference, the diagnostic accuracy of SWE was higher than that of US. In conclusion, SWE may be used to differentiate between cervical benign lesions and cervical cancers. The elastic score, strain ratio and SWS of cervical cancers were higher than those of cervical benign lesions. Furthermore, SWE is able to evaluate the infiltration of cervical cancer. Answer: Yes, transvaginal elastography (TVES) is useful in the pre-operative diagnosis of cervical cancer. Studies have shown that TVES can detect changes in tissue stiffness, which is a characteristic of malignant lesions in the cervix. For instance, one study found that the strain ratio of malignant lesions was significantly higher than that of benign lesions, with a strain ratio higher than 4.53 being indicative of malignancy (PUBMED:22717123). Additionally, TVES was shown to be helpful in estimating the infiltrating region of cervical cancers. Another study demonstrated that TVES combined with high-resolution transvaginal ultrasound (TVS) could be used to detect parametrial invasion in cervical cancer, with diagnostic performance comparable to magnetic resonance imaging (MRI) (PUBMED:28211804). Furthermore, shear wave elastography, a specific type of TVES, has been found to quantitatively analyze the elastic characteristics of cervical diseases, aiding in the differential diagnosis and accurate determination of tumor invasion extent (PUBMED:31066099). TVES has also been used to evaluate the stiffness of the cervix after loop electrosurgical excision procedure (LEEP), which could potentially predict future pregnancy outcomes (PUBMED:29044684). Additionally, shear wave elastography has been shown to differentiate between pre-malignant lesions and healthy cervix tissue, with significant differences in speed and stiffness between high-grade and low-grade lesions (PUBMED:34376298). Moreover, ultrasonic elastography has been found to have higher sensitivity, specificity, and diagnostic accordance rate than conventional ultrasound in the diagnosis of cervical cancer and its clinical staging, as well as in evaluating the effectiveness of radiotherapy (PUBMED:31186698). Elastography has also been suggested as a low-cost, easily applicable, and non-invasive indicator that can distinguish cervical cancer from normal cervical and preinvasive lesions (PUBMED:38282023). In summary, transvaginal elastography is a valuable tool in the pre-operative diagnosis of cervical cancer, providing information on tissue stiffness that can help differentiate between benign and malignant lesions, assess the extent of tumor invasion, and potentially predict treatment outcomes.
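The cut-off-based results quoted in this answer (a strain ratio above 4.53 from PUBMED:22717123, or the 44.65% ESD threshold with 94.7% sensitivity and 96.1% specificity from PUBMED:38282023) all come down to the same arithmetic: dichotomize a continuous elastography measurement at a threshold and tabulate the resulting true and false positives. A minimal sketch of that calculation follows; the threshold is the published 4.53 strain-ratio cut-off, but the measurement values and labels are invented for illustration only.

```python
# Sensitivity, specificity and Youden's J for one elastography cut-off.
# The 4.53 strain-ratio threshold is the one reported in PUBMED:22717123;
# the measurement lists below are hypothetical and purely illustrative.

def evaluate_cutoff(values, labels, threshold):
    """Call a lesion malignant if its value exceeds `threshold`, then
    compare against the reference labels (1 = malignant, 0 = benign)."""
    tp = sum(1 for v, y in zip(values, labels) if v > threshold and y == 1)
    fn = sum(1 for v, y in zip(values, labels) if v <= threshold and y == 1)
    tn = sum(1 for v, y in zip(values, labels) if v <= threshold and y == 0)
    fp = sum(1 for v, y in zip(values, labels) if v > threshold and y == 0)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    youden_j = sensitivity + specificity - 1  # criterion often used to pick a cut-off
    return sensitivity, specificity, youden_j

# Hypothetical strain ratios for six benign (0) and six malignant (1) lesions.
strain_ratios = [2.1, 3.0, 3.8, 4.2, 4.6, 4.5, 4.3, 5.6, 6.3, 7.0, 8.2, 9.1]
labels        = [0,   0,   0,   0,   0,   0,   1,   1,   1,   1,   1,   1]

sens, spec, j = evaluate_cutoff(strain_ratios, labels, threshold=4.53)
print(f"sensitivity={sens:.2f}  specificity={spec:.2f}  Youden J={j:.2f}")
# -> sensitivity=0.83  specificity=0.83  Youden J=0.67
```

In a full ROC analysis the same computation is repeated over all candidate thresholds and the one maximizing Youden's J (or another criterion) is reported, which is presumably how the 44.65% ESD cut-off above was selected.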
Instruction: Diagnosing unexplained fever: can quick diagnosis units replace inpatient hospitalization? Abstracts: abstract_id: PUBMED:24920307 Diagnosing unexplained fever: can quick diagnosis units replace inpatient hospitalization? Background: Outpatient quick diagnosis units (QDUs) have become an increasingly recognized alternative to hospitalization for the diagnosis of a number of potentially serious diseases. No study has prospectively evaluated the usefulness of QDU for the diagnosis of unexplained fever. Materials And Methods: We prospectively assessed patients referred to QDU due to fever of uncertain nature (FUN), defined as a temperature > 38 °C for at least 1 week and no diagnosis after a previous evaluation. We also evaluated consecutive patients with FUN who were hospitalized during the same period. QDU and hospital costs were analysed by micro-costing techniques. Results: We evaluated 176 QDU patients and 168 controls. QDU patients were younger and required fewer investigations than controls. QDU patients had a higher prevalence of viral infections (36% vs. 8%, P < 0.001) and a lower prevalence of bacterial infections (6% vs. 46%, P < 0.001) and malignancies (2% vs. 14%, P < 0.001). While the time-to-diagnosis of QDU patients was longer than the length-of-stay of controls (25.82 vs. 12.89 days, P < 0.001), 56% of QDU patients required only up to two visits. The cost per QDU patient was €644.59, while it was €4404.64 per hospitalized patient. Conclusions: QDU patients with FUN were younger and had less serious diseases than controls, including more viral and fewer bacterial infections and fewer malignancies. Mainly owing to untimely diagnostic reports, time-to-diagnosis was longer in QDU patients. Cost-savings in QDU were substantial. Using objective tools to evaluate the condition severity and general health status of FUN patients could help decide the most appropriate setting for their diagnostic study. abstract_id: PUBMED:30029861 National Trends in Hospitalization for Fever and Neutropenia in Children with Cancer, 2007-2014. Objective: To assess the trends of inpatient resource use and mortality in pediatric hospitalizations for fever with neutropenia in the US from 2007 to 2014. Study Design: Using the National (Nationwide) Inpatient Sample (NIS) and International Classification of Diseases, Ninth Revision, Clinical Modification codes, we studied pediatric cancer hospitalizations with fever with neutropenia between 2007 and 2014. Using appropriate weights for each NIS discharge, we created national estimates of median cost, length of stay, and in-hospital mortality rates. Results: Between 2007 and 2014, there were 104 315 hospitalizations for pediatric fever with neutropenia. The number of weighted fever with neutropenia hospitalizations increased from 12.9 (2007) to 18.1 (2014) per 100 000 US population. A significant increasing trend in fever with neutropenia hospitalizations was seen in the 5- to 14-year age group, male sex, all races, and in the Midwest and Western US hospital regions. The overall mortality rate remained low at 0.75%, and the 15- to 19-year age group was at significantly greater risk of mortality (OR 2.23, 95% CI 1.36-3.68, P = .002). Sepsis, pneumonia, meningitis, and mycosis were the comorbidities with greater risk of mortality during fever with neutropenia hospitalizations. Median length of stay (2007: 4 days, 2014: 5 days, P < .001) and cost of hospitalization (2007: $8771, 2014: $11 202, P < .001) also significantly increased during the study period.
Conclusions: Our study provides information regarding inpatient use associated with fever with neutropenia in pediatric hospitalizations. Continued research is needed to develop standardized risk stratification and cost-effective treatment strategies for fever with neutropenia hospitalizations, considering the increasing costs reported in our study. Future studies also are needed to address the greater observed mortality in adolescents with cancer. abstract_id: PUBMED:36286920 Assessment of factors affecting the probability of hospitalization of COVID-19 patients with concomitant pathology and development of a prognostic model based on them. Introduction: Currently, a significant number of patients with COVID-19 require inpatient treatment. At the same time, predictors of hospitalization remain poorly established, including in persons with concomitant pathology. Aim: Assessment of factors affecting the probability of hospitalization of COVID-19 patients with concomitant pathology and the development of a prognostic model based on them. Materials And Methods: An observational retrospective cohort study of 74 314 patients with COVID-19 with various comorbidities was carried out in the period from March to November 2020 in the Russian Federation. Results: A prognostic model was developed based on 16 factors, including age, gender, place of diagnosis, fever, rhinitis, loss of taste, shortness of breath, and concomitant cardiovascular, bronchopulmonary, oncological, and endocrine diseases in the patients included in the study; the model determined the need for inpatient treatment of patients with COVID-19 and comorbidities. Conclusion: The constructed predictive model has demonstrated sufficient efficiency for medical specialists to assess the likelihood of hospitalization of patients with COVID-19. abstract_id: PUBMED:34123669 Sterile Leukocytosis Predicts Hemorrhagic Transformation in Arterial Ischemic Stroke: A National Inpatient Sample Study. Objective: Hemorrhagic transformation (HT) is a known complication of arterial ischemic stroke (AIS). In addition, it is known that an increase of proinflammatory immune cells in the brain tissue after AIS predicts worse outcomes. However, it is not clear whether inflammation due to preceding or post-stroke infections affects outcomes and, moreover, whether systemic inflammatory markers could be useful as a clinical prediction tool for HT post-stroke. Therefore, our objective was to assess the association of a systemic pro-inflammatory profile with HT and in-hospital mortality in AIS patients who did not have acute infections during hospitalization. Methods: This study was conducted using the 2016 and 2017 National Inpatient Sample (NIS) with International Classification of Diseases (ICD-10) codes. Multivariate logistic regression was used to examine the association of HT and in-hospital mortality with pro-inflammatory anomalies of white blood cells (WBCs) in AIS patients. Exclusion criteria comprised patients under 18 years old and those with a diagnosis of gastrointestinal, urogenital, or respiratory infection, bacteremia, viral infection, sepsis, or fever. Results: A total of 212,356 patients with AIS were included in the analysis. 422 (0.2%) patients had a HT and 10,230 (4.8%) patients died during hospitalization. The most common WBC pro-inflammatory marker was leukocytosis, present in 6.9% (n=29/422) of HT cases and 5.5% (n=560/10,230) of patients who died during hospitalization.
After adjusting for socio-demographic, comorbidity, and treatment factors, leukocytosis was found to be an independent risk factor for both outcomes: HT [OR = 1.5, 95% CI: 1-2.3, p=0.024] and in-hospital mortality [OR = 1.5, 95% CI: 1.3-1.6, p < 0.001]. Conclusion: Sterile leukocytosis is a potential clinical prediction tool to determine which patients are at higher risk of developing HT and of dying during hospitalization. abstract_id: PUBMED:25470076 Febrile non-haemolytic transfusion reaction occurrence and potential risk factors among the U.S. elderly transfused in the inpatient setting, as recorded in Medicare databases during 2011-2012. Background And Objectives: Febrile non-haemolytic transfusion reaction (FNHTR) is an acute transfusion complication resulting in fever, chills and/or rigours. The study's objective was to assess FNHTR occurrence and potential risk factors among inpatient U.S. elderly Medicare beneficiaries, ages 65 and older, during 2011-2012. Materials And Methods: Our retrospective claims-based study utilized large Medicare administrative databases. FNHTR was ascertained via ICD-9-CM diagnosis code, and transfusions were identified by recorded procedure and revenue centre codes. The study ascertained FNHTR rates among the inpatient elderly overall and by age, gender, race, blood components and units transfused. Multivariate logistic regression analyses were used to assess potential risk factors. Results: Among 4 336 338 inpatient transfusion stays for the elderly during 2011-2012, 2517 had an FNHTR diagnosis recorded, an overall rate of 58.0 per 100,000 stays. FNHTR rates (per 100,000 stays) varied by age, gender, number of units and blood components transfused. FNHTR rates were substantially higher for RBC- and platelet-containing transfusions as compared to plasma only. Significantly higher odds of FNHTR were identified with a greater number of units transfused (P < 0.01), for females vs. males (OR = 1.15, 95% CI 1.04-1.27), and with 1-year histories of transfusion (OR = 1.25, 95% CI 1.10-1.42), lymphoma (OR = 1.22, 95% CI 1.02-1.46), leukaemia (OR = 1.90, 95% CI 1.56-2.31) and other diseases. Conclusions: Our study shows increased FNHTR occurrence among the elderly with a greater number of units transfused and with RBC- and platelet-containing transfusions, suggesting a need to evaluate the effectiveness of prestorage leucoreduction in the elderly. The study also suggests the importance of prior recipient alloimmunization and underlying health conditions in the development of FNHTR. abstract_id: PUBMED:19159905 Parainfluenza virus infection of young children: estimates of the population-based burden of hospitalization. Objective: To determine the population-based inpatient disease burden of parainfluenza virus in children <5 years of age. Study Design: The New Vaccine Surveillance Network (NVSN) enrolled children <5 years of age who were hospitalized with febrile or acute respiratory illnesses. Surveillance hospitals admitted >95% of all hospitalized children from each county. Combined nasal turbinate/throat swabs were tested for parainfluenza virus (PIV), respiratory syncytial virus, and influenza virus with culture and reverse-transcription-polymerase chain reaction. Both parental interviews and medical chart reviews were conducted. Age-specific population-based hospitalization rates were calculated. Results: From October 2000 through September 2004, 2798 children were enrolled.
A total of 191 PIVs were identified from 189 children (6.8% of enrolled: 73 PIV type 1, 23 PIV type 2, and 95 PIV type 3), compared with 521 respiratory syncytial viruses and 159 influenza viruses. Mean PIV hospitalization rates were 3.01, 1.73, 1.53, 0.39, and 1.02 per 1000 children per year for ages 0 to 5 months, 6 to 11 months, 12 to 23 months, 24 to 59 months, and 0 to 59 months, respectively. Conclusions: PIV accounted for 6.8% of all hospitalizations for fever, acute respiratory illnesses, or both in children <5 years of age. The pediatric PIV inpatient burden is substantial and highlights the need to find an effective vaccine candidate. abstract_id: PUBMED:30161228 Hospitalization of HIV positive patients in a referral tertiary care hospital in Antananarivo Madagascar, 2010-2016: Trends, causes and outcome. Background: During the last few years, significant efforts have been made to improve access to antiretroviral therapy, which has led to a dramatic reduction in AIDS-related events and mortality in HIV positive patients at the global level. However, current data in Africa suggest a modest impact of widespread antiretroviral therapy scale-up, especially regarding HIV-related hospitalization. In this study, we aimed to describe causes of hospitalization and factors associated with AIDS-defining events and inpatient mortality. Materials And Methods: A retrospective study was performed on medical records of HIV positive patients admitted for at least 24 hours to the Infectious Diseases Unit of the University Hospital Joseph Raseta Befelatanana Antananarivo. The cause of hospitalization was considered as the main diagnosis related to the symptoms at admission. Diagnostic criteria were based on criteria described in WHO guidelines. AIDS-defining events were defined as diseases corresponding to WHO stage 4 or category C of the CDC classification. Results: From 2010 to 2016, 236 hospital admissions were included. AIDS-defining events were the most frequent cause of hospitalization (61.9%), with an increasing trend during the study period. Tuberculosis (28.4%), pneumocystis pneumonia (11.4%), cerebral toxoplasmosis (7.2%) and cryptococcosis (5.5%) were the most frequent AIDS-defining events. Tuberculosis was also the most frequent cause of overall hospitalization. In multivariate analysis, recent HIV diagnosis (aOR = 2.0, 95% CI: 1.0-3.9), CD4 <200 cells/μl (aOR = 4.0, 95% CI: 1.9-8.1), persistent fever (aOR = 4.4, 95% CI: 2.1-9.0), and duration of symptoms ≥6 weeks (aOR = 2.6, 95% CI: 1.2-5.4) were associated with AIDS-defining events. Overall inpatient mortality was 19.5%. Age ≥55 years (aOR = 4.9, 95% CI: 1.5-16.6), neurological signs (aOR = 3.2, 95% CI: 1.5-6.9) and AIDS-defining events (aOR = 2.9, 95% CI: 1.2-7.2) were associated with inpatient mortality. Conclusions: AIDS-defining events were the most frequent cause of hospitalization during the study period. Factors associated with AIDS-defining events mostly reflected delay in HIV diagnosis. Factors associated with mortality were advanced age, neurological signs and AIDS-defining events. abstract_id: PUBMED:33204757 Predicting Inpatient Mortality Among Encephalitis Patients: A Novel Admission Risk Score. Background: Identifying underlying commonalities among all-cause encephalitis cases can be extraordinarily useful in predicting meaningful risk factors associated with inpatient mortality.
Methods: A retrospective cohort of patients with encephalitis was derived from a clinical chart review of adult patients (age ≥18 years) across 16 different hospitals in Houston, Texas, between January 2005 and July 2015. Clinical features at admission were assessed for their correlation with inpatient mortality and used to derive a final risk score prediction tool. Results: The study included a total of 273 adult patients with all-cause encephalitis, 27 (9.9%) of whom died during hospitalization. A limited number of clinical features were substantially different between patients who survived and those who died (Charlson score, Glasgow coma scale [GCS], immunosuppression, fever on admission, multiple serologic studies, and abnormal imaging). A final multivariable logistic model was derived with the following risk factors, which were transformed into a scoring system: 1 point each was assigned for the presence of a Charlson score >2, thrombocytopenia, or cerebral edema, and 2 points for a GCS value <8. Patients were then classified into different risk groups for inpatient mortality: 0 points (0%), 1 point (7%), 2 points (10.9%), 3 points (36.8%), and ≥4 points (81.8%). Conclusions: The risk score developed from this study shows a high predictive value. This can be highly beneficial in alerting care providers to key clinical risk factors associated with in-hospital mortality in adults with encephalitis. abstract_id: PUBMED:20716830 A guideline for the inpatient care of children with pyelonephritis. Background And Objectives: Febrile urinary tract infections and pyelonephritis are common in children and frequently lead to hospitalization for management, especially in the child who appears toxic. The American Academy of Pediatrics (AAP) practice parameter on the diagnosis, treatment and evaluation of the initial urinary tract infection in febrile infants and young children provides experience- and evidence-based guidelines for the practitioner caring for children between the ages of 2 months and 2 years. No established guideline exists for older children, and the AAP guideline does not specifically focus on inpatient care. Methods: We conducted a comprehensive review of recently published literature and practice guidelines to develop a consensus on the inpatient diagnosis and management of children with pyelonephritis. Results: Eight recommendations are proposed for diagnosis and management, including revised guidelines for imaging studies after pyelonephritis on the basis of current best evidence. Conclusion: Proper diagnosis of pyelonephritis, timely initiation of appropriate therapy and identification of children at risk for renal injury will help to reduce immediate as well as long-term complications due to chronic kidney disease. abstract_id: PUBMED:35384507 Diagnosis and management of urinary tract infections in children aged 2 months to 3 years in the Italian emergency units: the ItaUTI study. Urinary tract infections (UTIs) are among the most frequent bacterial diseases in infants and children. Physician adherence to recommendations is notoriously often poor, but no data are available on UTIs management in the emergency setting. In this multicenter national study, we investigated the policies regarding UTIs management in children aged 2 months to 3 years in Italian emergency units.
Between April and June 2021, directors of the emergency units were invited to answer an online survey on the following items: the diagnostic approach to children with fever without an apparent source, the therapeutic approach to UTIs, the use of kidney and urinary tract ultrasound, and the criteria for hospitalization. A total of 121 (89%) of the 139 invited units participated in the study. Overall, units manage children with a suspected or confirmed UTI according to available recommendations for most of the items. However, in almost 80% (n = 94) of units, a sterile perineal bag is used to collect urine for culture. When urine is collected by catheter, heterogeneity exists in the threshold of bacterial load considered for UTI diagnosis. Conclusions: Available recommendations on UTIs in children are followed by Italian emergency units for most of the items. However, the methods used to collect urine specimens for culture, one of the crucial steps of the diagnostic work-up, often do not align with current recommendations, and the CFU thresholds considered for diagnosis vary largely among centers. Efforts should be made to validate and implement new child- and family-friendly urine collection techniques. What is Known: • Several guidelines are published on the management of children with suspected or confirmed urinary tract infection. • No data are available on the management of pediatric urinary tract infections in the emergency setting. What is New: • Almost 80% of the Italian emergency units employ a sterile perineal bag to collect urine for culture. • Diagnostic CFU thresholds largely vary among centers. Answer: The study reported in abstract PUBMED:24920307 investigated the potential of outpatient quick diagnosis units (QDUs) as an alternative to inpatient hospitalization for diagnosing unexplained fever. The study prospectively assessed patients referred to QDU due to fever of uncertain nature (FUN), defined as a temperature > 38 °C for at least one week with no diagnosis after a previous evaluation. The study compared these patients to a control group of consecutively hospitalized patients with FUN during the same period. The results showed that QDU patients were younger and had less serious diseases than hospitalized controls, with a higher prevalence of viral infections and lower prevalence of bacterial infections and malignancies. Although the time-to-diagnosis was longer for QDU patients compared to the length-of-stay of hospitalized patients, the cost savings were substantial, with QDU patient costs being significantly lower than those for hospitalized patients. The study concluded that using objective tools to evaluate the condition severity and general health status of FUN patients could help decide the most appropriate setting for their diagnostic study, suggesting that QDUs could potentially replace inpatient hospitalization for certain patients with unexplained fever.
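Of the abstracts above, the encephalitis admission score from PUBMED:33204757 is concrete enough to express directly in code: 1 point each for a Charlson score >2, thrombocytopenia, and cerebral edema, plus 2 points for a GCS below 8, with the abstract's observed mortality attached to each total. The sketch below is my reading of that scoring system; the function and argument names are illustrative, not the study's.

```python
# Admission risk score for inpatient mortality in adult encephalitis,
# following the point assignments reported in PUBMED:33204757.
# Names are illustrative; the mortality figures are those quoted in
# the abstract (>=4 points pooled into a single 81.8% stratum).

OBSERVED_MORTALITY = {0: "0%", 1: "7%", 2: "10.9%", 3: "36.8%"}

def encephalitis_risk_score(charlson_score: int,
                            thrombocytopenia: bool,
                            cerebral_edema: bool,
                            gcs: int) -> int:
    """One point each for Charlson >2, thrombocytopenia and cerebral
    edema; two points for a Glasgow Coma Scale value below 8."""
    score = 0
    if charlson_score > 2:
        score += 1
    if thrombocytopenia:
        score += 1
    if cerebral_edema:
        score += 1
    if gcs < 8:
        score += 2
    return score

def observed_mortality(score: int) -> str:
    return OBSERVED_MORTALITY.get(score, "81.8%")

score = encephalitis_risk_score(charlson_score=3, thrombocytopenia=True,
                                cerebral_edema=False, gcs=6)
print(score, observed_mortality(score))  # -> 4 81.8%
```

The attached percentages are the study's observed in-hospital mortality per stratum, not calibrated probabilities, so a tool like this would only flag risk groups, as the abstract itself proposes.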
Instruction: Is nocturnal panic a distinct disease category? Abstracts: abstract_id: PUBMED:23674937 Is nocturnal panic a distinct disease category? Comparison of clinical characteristics among patients with primary nocturnal panic, daytime panic, and coexistence of nocturnal and daytime panic. Objective: Many patients with panic disorder (PD) experience nocturnal panic attacks. We investigated the differences in demographic variables and symptom characteristics as well as response to treatment among patients with primary day panic (DP), primary nocturnal panic (NP), and the coexistence of DP and NP (DP/NP), and discuss whether NP is a distinct disease category. Method: One hundred one consecutive untreated patients with PD were enrolled and subsequently divided into the NP, DP, and DP/NP groups. The presence of 13 panic attack symptom items as well as scores on the Panic Disorder Severity Scale (PDSS) and the Pittsburgh Sleep Quality Index (PSQI) were compared among the groups. After 3 months of regular treatment, PDSS scores were assessed again to evaluate treatment response. Results: Nocturnal panic attacks of the participants were mostly reported to occur in the first tertile of nocturnal sleep. The number of males, onset age, and presence of choking sensation were significantly higher, and the PDSS score was significantly lower, in the NP group compared with the other groups. The DP/NP group showed the highest PDSS score, and participants in this group were prescribed the highest doses of medication among all groups. Only diagnostic sub-category was significantly associated with treatment response. The total scores for PDSS and PSQI correlated significantly only in the NP group. Conclusions: DP/NP could be a severe form of PD, while primary NP could be a relatively mild subcategory that may partially share common pathophysiology with adult-type night terror. abstract_id: PUBMED:32664103 Wilson disease patient with rare heterozygous mutations in ATP7B accompanied by distinctive nocturnal enuresis: A case report. Introduction: Wilson disease (WD) is an autosomal-recessive disorder of copper metabolism, which exhibits various symptoms due to the combination of environmental and genetic factors. Here, we report a WD patient who displayed the distinctive symptom of nocturnal enuresis. Patient Concerns: The patient was a 31-year-old woman who recently developed nocturnal enuresis, combined with hand tremors, trouble speaking, and panic disorder at night. Diagnosis: The patient had been diagnosed with WD by Kayser-Fleischer rings, abnormal copper metabolism, neuropsychiatric symptoms, and magnetic resonance imaging when she was 17. The diagnosis was further confirmed by genetic analysis, which revealed compound heterozygous mutations in the ATP7B gene (c.2195T>C and c.3044T>C). The patient exhibited nocturnal enuresis, but the ambulatory electroencephalogram, routine urinalysis, residual urine detection, and color Doppler ultrasound of the kidney, ureter, and bladder all displayed no abnormality. Interventions: The patient was treated with sodium dimercaptosulphonate, supplemented with glutathione and encephalin-inosine. Outcomes: The urinary copper excretion level decreased gradually, and the nocturnal enuresis was alleviated along with the neuropsychiatric symptoms by copper chelation therapy. Conclusion: In this study, we showed that the variants c.2195T>C and c.3044T>C are involved in the pathogenesis of WD, and revealed that nocturnal enuresis may be a symptom of WD.
abstract_id: PUBMED:34615562 A Scoping Review on Category A Agents as Bioweapons. Category A agents are biological pathogens that could pose a threat to health and human safety if used as bioweapons. The exploration and possibility of these threats must be comprehensively reviewed to create a preparedness plan to recognize outbreaks, to educate the public, and to offer vaccinations and/or treatment options, if available. A scoping review using PRISMA guidelines was performed to categorize current information on Category A biological agents as well as understand their potential for future threats. The review included 34 articles and found that while botulinum neurotoxins were the most lethal, anthrax posed the most likely threat for use as a bioweapon. Most research was conducted on plague, though it is not the most likely threat. Smallpox is the most likely agent to vaccinate against, as there is already a working vaccine that has proven effective, and the issue at hand is the need for a larger stockpile. Ultimately, preparedness efforts should include vaccinations and continued research and development of them. Category A agents are a serious public health concern; updated and reformed bioterrorism preparedness plans could greatly minimize panic and mortality. abstract_id: PUBMED:27822271 The Relationship Between Child Anxiety Related Disorders and Primary Nocturnal Enuresis. Background: Nocturnal enuresis, often called bedwetting or sleep wetting, is a common problem in children after the age of five and may lead to symptoms such as infection, incontinence and frequent urination. This problem refers to a state in which children after the age of five have no control of their urine for six continuous months and it cannot be attributed to any organic factors or drug use. Objectives: In this study we aimed to study generalized anxiety disorder as one of the possible causes of primary nocturnal enuresis. Materials And Methods: In this case-control study, 180 children with primary nocturnal enuresis and the same number of healthy children aged 7 to 17 years with the same demographic characteristics were selected. The study took place at Amir Kabir hospital of Arak, Iran during 2014. After collecting the information, diagnosis was verified based on the Diagnostic and Statistical Manual of Mental Disorders (DSM) IV-TR criteria. Results were analyzed using the SPSS software (IBM Corp. Released 2011. IBM SPSS Statistics for Windows, version 20.0. Armonk, NY: IBM Corp.). Results: The frequency of generalized anxiety disorder, panic disorder, school phobia, social anxiety, separation anxiety, history of anxiety in the mother, history of primary nocturnal enuresis in the parents' family, and body mass index differed significantly between the two groups (P = 0.005). Conclusions: The results of this study show a clear significant difference between the patient and control groups for all subgroups of anxiety disorders, such as generalized anxiety disorder, in their relationship with primary nocturnal enuresis. Given the higher prevalence of generalized anxiety disorder, panic disorder, school phobia, social anxiety, and separation anxiety compared with healthy children, it is recommended that all children with primary nocturnal enuresis be investigated and treated for generalized anxiety disorder. abstract_id: PUBMED:34929433 Comparing symptom networks of daytime and nocturnal panic attacks in a community-based sample.
Nocturnal panic refers to waking in a state of panic without obvious triggers, experiencing the same symptoms as panic attacks that occur while awake. Interrelationships between daytime and nocturnal panic symptoms have not been examined despite theories suggesting panic symptoms perpetuate one another in a forward feedback loop. The current study compared associations between symptoms in daytime and nocturnal panic using network analysis. Network theory conceptualizes symptoms as causing one another, rather than originating from a latent variable (i.e., a disorder). Given that nocturnal panic originates from sleep stages without cognitive activity, cognitive symptoms were expected to be more central in daytime panic networks than nocturnal panic networks. Prior literature indicates similar nocturnal and daytime panic severity; thus, we expected that panic groups would report equivalent panic symptom severity. An online community sample (N = 215) provided panic symptom history. Panic network structures did not differ, although the daytime panic network produced stronger and more numerous connections between physical and cognitive symptoms. The nocturnal panic group, however, reported more severe cognitive symptoms than the daytime panic group. These results challenge biologically focused nocturnal panic theories and suggest a more significant role of cognitive symptoms in perpetuating nocturnal panic attacks once the individual awakens. abstract_id: PUBMED:15505142 Treatment of nocturnal disturbances and excessive daytime sleepiness in Parkinson's disease. Nocturnal disturbances are common in Parkinson's disease (PD) patients, with almost 70% reporting them. The etiology of sleep disturbances in patients with PD is still controversial. They might be dependent on dopaminergic drugs, on disease progression, or on a combination of these two factors. Nocturnal disturbances can be categorized into four groups: 1) PD-related motor symptoms, including nocturnal akinesia, early-morning dystonia, painful cramps, tremor, and difficulty turning in bed; 2) treatment-related nocturnal disturbances; 3) psychiatric symptoms, including hallucinations, vivid dreams, depression, dementia, insomnia, psychosis, and panic attacks; 4) other sleep disorders, including insomnia, REM behavioral disorder (RBD), restless legs syndrome (RLS), periodic leg movements (PLMS), and excessive daytime sleepiness (EDS). Specific treatment options are supplied for every group. A global evaluation of nocturnal disturbances would provide clinicians with a valuable tool to establish an optimal regimen that could positively influence all nocturnal disturbance categories and thus improve PD management. However, it is important to consider that management of some nocturnal disturbances in one group may worsen nocturnal symptoms of another group or may increase EDS. PD-related symptoms can be treated with long-acting DA agonists to obtain continuous DA receptor stimulation during the night. Both treatment-related nocturnal disturbances and psychiatric symptoms may be related to drug treatment, and therefore, in both cases, drug reduction or discontinuance should be considered. Some sleep disorders, such as RLS and PLMS, may be controlled by DA agents, and others, such as insomnia and EDS, may be improved by reducing dopaminergic stimulation. abstract_id: PUBMED:33494550 Awakening from Sleep with Numbness and Indescribable Odd Feeling: Nocturnal Panic Attack or Sleep-Related Epilepsy?
Paroxysmal events during sleep can be classified into parasomnias, sleep-related movements, psychiatric events, neurologic events, or medically related events. Diagnosis can be difficult because of the frequent overlap of clinical descriptors and the lack of diurnal findings. We report a case of a 68-year-old man who presented to the hospital complaining of awakening from sleep with numbness, which was followed by an indescribable odd feeling. We discuss the overlapping clinical features of nocturnal panic and sleep-related epilepsy. abstract_id: PUBMED:12364936 Nocturnal panic attacks. The panic-respiration connection has been presented with increasing evidence in the literature. We report three panic disorder patients with nocturnal panic attacks with prominent respiratory symptoms, overlap of their symptoms with the sleep apnea syndrome, and a change in their diurnal panic attacks from a spontaneous to a situational pattern. Awareness of these findings and of the distinct core symptoms of nocturnal panic attacks may help differentiate them from sleep disorders and guide the search for specific treatment. abstract_id: PUBMED:17937981 Panic disorder with nocturnal panic attacks: symptoms and comorbidities. The aim of this study was to determine the relationship between nocturnal panic attacks and comorbidities, clinical variables and panic attack symptoms. One hundred and six consecutive patients with DSM-IV panic disorder were enrolled in the study. The patients were divided into two groups depending on the presence of nocturnal panic attacks. Comorbidities were diagnosed with the help of SCID-I and SCID-II. The groups were compared using the Beck Depression Inventory, State-Trait Anxiety Inventory and Symptom Checklist. Nocturnal panic attacks were not related to comorbidities or age at the onset of the disease. The scores from the Beck Depression Inventory, general scores from the Symptom Checklist, and somatization, obsession-compulsion, interpersonal sensitivity and anger-hostility sub-scale scores were higher in the nocturnal panic attack group. Patients with nocturnal panic attacks experience more frequent respiratory symptoms, suggesting that nocturnal panic attacks may be related to respiratory symptoms. Our findings demonstrate that patients with nocturnal panic attacks have more respiratory symptoms of panic, and more depressive and other psychiatric symptoms, than the no-nocturnal-panic group. abstract_id: PUBMED:2918300 Nocturnal panic. The phenomenon of nocturnal panic was examined by comparing individuals with panic disorder who reported the experience of nocturnal panic (N = 37) with those who did not (N = 56). Variables that were assessed included frequency and intensity of daytime panic attacks, generalized anxiety, and responses to a standardized physiological assessment. In addition, the most recent nocturnal panic was rated on measures of intensity, duration, and circumstances. Nocturnal panic occurred in individuals who were as avoidant and distressed as individuals who did not experience nocturnal panic. However, nocturnal panickers experienced daytime panics and general somatic sensations more frequently than other panickers. The similarity of nocturnal panic to sleep apnea, dream-induced anxiety attacks, and night terrors is discussed. The nature of nocturnal panic is described within the context of an explanatory model that combines behavioral and physical factors.
Answer: Based on the provided abstracts, nocturnal panic is not necessarily considered a distinct disease category but rather a manifestation of panic disorder that can occur during the night. The study in abstract PUBMED:23674937 suggests that primary nocturnal panic (NP) could be a relatively mild subcategory of panic disorder (PD) that shares common pathophysiology with adult-type night terror. This study also indicates that patients with coexistence of daytime panic (DP) and nocturnal panic (DP/NP) may experience a more severe form of PD. The presence of nocturnal panic attacks does not seem to constitute a separate disease but rather a variation within the spectrum of panic disorder. The abstract PUBMED:34929433 supports this view by comparing symptom networks of daytime and nocturnal panic attacks and finding that while the network structures did not differ significantly, the daytime panic network had stronger connections between physical and cognitive symptoms. This suggests that nocturnal panic attacks are not fundamentally different from daytime panic attacks in terms of their symptom interrelationships. Furthermore, abstract PUBMED:17937981 indicates that patients with nocturnal panic attacks experience more respiratory symptoms and have higher scores on depression and psychiatric symptom scales, but it does not suggest that nocturnal panic is a separate disease category. Instead, it highlights that nocturnal panic attacks may be associated with specific symptoms and comorbidities. In summary, the evidence from the abstracts suggests that nocturnal panic is a manifestation of panic disorder that occurs at night and may have some distinct clinical features, but it is not classified as a separate disease category. Instead, it is seen as part of the broader diagnosis of panic disorder with specific considerations for those who experience panic attacks during sleep (PUBMED:23674937, PUBMED:34929433, PUBMED:17937981).
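The network comparison in PUBMED:34929433 treats each panic symptom as a node and estimates pairwise associations between symptoms as edges, then compares node centrality across the daytime and nocturnal networks. The abstract does not state its estimation procedure, which in this literature is typically a regularized partial-correlation model, so the following is only a generic illustration of the idea with simulated ratings and plain correlations; the symptom names and threshold are arbitrary.

```python
# Generic illustration of a symptom network: nodes are panic symptoms,
# edges are pairwise correlations above a threshold, and node "strength"
# (sum of absolute edge weights) is a simple centrality measure.
# This is NOT the estimation method of PUBMED:34929433; data are simulated.
import numpy as np

rng = np.random.default_rng(0)
symptoms = ["palpitations", "sweating", "fear_of_dying", "derealization"]

# Simulated 0-4 severity ratings for 50 respondents.
ratings = rng.integers(0, 5, size=(50, len(symptoms))).astype(float)

corr = np.corrcoef(ratings, rowvar=False)  # symptom-by-symptom correlations

threshold = 0.15
edges = [(symptoms[i], symptoms[j], corr[i, j])
         for i in range(len(symptoms))
         for j in range(i + 1, len(symptoms))
         if abs(corr[i, j]) >= threshold]

strength = {s: sum(abs(corr[i, j]) for j in range(len(symptoms)) if j != i)
            for i, s in enumerate(symptoms)}

for a, b, w in edges:
    print(f"{a} -- {b}: {w:+.2f}")
print("most central symptom:", max(strength, key=strength.get))
```

Comparing such strength values between the daytime and nocturnal networks is the kind of contrast the study draws when it asks whether cognitive symptoms are more central in one network than the other.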
Instruction: Is heroin-assisted treatment effective for patients with no previous maintenance treatment? Abstracts: abstract_id: PUBMED:25228817 The current status of opioid maintenance treatment in France: a survey of physicians, patients, and out-of-treatment opioid users. Aim: Project Access France was a national survey designed to provide real-world observations on the status of opioid dependence treatment in France. Methods: The views of physicians (n=100), patients (n=130), and out-of-treatment opioid users (n=33) were collected via interviews and questionnaires. Results: Physicians reported being moderately satisfied with treatment programs in their area (rating 6.9 out of 10). Most physicians (82%) reported being concerned about misuse and diversion of medication-assisted treatment (MAT) medications, and 50% identified psychosocial/behavioral counseling as the key change that would most improve patient care. Among patients, the mean number of previous MAT episodes was low (1.5); 78% reported that it was easy to access a doctor to undergo MAT; 14% reported regularly or sometimes using heroin; misuse and diversion were reported in 15% and 39% of patients, respectively; and 57% of patients were not receiving psychosocial help. Out-of-treatment opioid users reported using drugs on a regular basis (42% regularly used heroin) and cited 'not wanting to give up drugs completely' as the most frequent reason for staying out of MAT. Conclusion: This survey highlights a number of positive features of the open-access, GP-based treatment model for opioid dependence in France. Challenges remain with regard to continued misuse/diversion of MAT medications and limited patient access to psychosocial support. abstract_id: PUBMED:38091901 Who receives heroin-assisted treatment? A comparison of patients receiving opioid maintenance treatment in Denmark. Background: Since 2010, heroin-assisted treatment (HAT) has been one of the treatment options available to people with opioid use disorder (OUD) in Denmark. This study aimed to characterize HAT patients at treatment start and compare their individual characteristics to those of patients entering traditional opioid maintenance treatment (OMT) with methadone or buprenorphine during the same period. Methods: Patients who initiated HAT or OMT with methadone or buprenorphine in Denmark from 2010 to 2018 were included (n=6798). Multiple national registers were linked to compare treatment groups in terms of socio-demographic variables, previous OUD treatment episodes, hospital-based care, and criminal conviction history. Results: Nearly all HAT patients had a history of methadone treatment (91%) and half had residential treatment experience (48%). In the year prior to admission, HAT patients recorded the highest percentages of non-fatal overdoses (12%) and chronic hepatitis C diagnoses (16%), and the lowest percentage of psychiatric disorders (11%), compared to traditional OMT patients. Criminal convictions were also common: 39% of the HAT group had committed a property crime and 18% a drug-related crime in the year before HAT entry. During the study period, an overall year-on-year reduction in OMT enrollments was recorded. The HAT proportion of the total remained fairly stable (4%-10%), while the buprenorphine proportion increased. Conclusions: In Denmark, OMT patients exhibited numerous vulnerabilities at treatment start, and among the patient groups, HAT patients were the most burdened.
HAT seems to reach the target group and adhere to the formulated eligibility criteria. abstract_id: PUBMED:20424456 Is heroin-assisted treatment effective for patients with no previous maintenance treatment? Results from a German randomised controlled trial. Background/aims: Until now, the medical prescription of diamorphine (heroin) has been suggested as suitable for patients who have failed previous maintenance treatments. The aim of this paper is to assess the effects of diamorphine on opioid-dependent patients with no previous maintenance treatment experience (NPME). Methods: The German heroin trial compared diamorphine versus methadone maintenance treatment and included 107 patients with NPME. This paper is a sub-analysis of these patients. Results: When comparing this subsample with the rest of the participants in the study, large baseline differences were found, showing a more severe drug use profile in patients with NPME. However, no differences were found in terms of treatment outcome and treatment retention. In the subsample with NPME, outcome measures on the reduction of illicit drug use were significantly better under diamorphine compared to methadone treatment, while there was no difference in health outcomes. Conclusion: Controlled studies are now necessary to examine whether diamorphine treatment could be considered as one of several options in treating severely opioid-dependent patients, regardless of previous maintenance treatment experience. abstract_id: PUBMED:32099510 Opioid and Polydrug Use Among Patients in Opioid Maintenance Treatment. Purpose: Opioid maintenance treatment reduces a person's use of heroin. However, frequent substance use in treatment is a problem. Aim: To examine the association between opioid maintenance treatment and opioid/polydrug use, and whether social factors, adverse experiences, social resources, and quality of life are associated with opioid/polydrug use during the first 12 months in treatment. Patients And Methods: Forty-seven participants from treatment units in Bergen, Norway participated in five waves of data collection. Every third month, a structured face-to-face interview collected self-reported data on sociodemographic characteristics, opioid/polydrug use, participants' social resources or adverse experiences, and quality of life. Data were collected as part of KVARUS, the National Quality Register for Substance Abuse Treatment. A multilevel binary logistic regression analysis was conducted to examine the association of opioid/polydrug use and time in current treatment. The analysis included regressions of opioid/polydrug use on time-invariant baseline adverse experiences and social resources, and time-varying reports of quality of life. Results: There was a significant negative association between time in treatment and use of opioids, b = -0.89, SE = 0.19, p < 0.01. Furthermore, a negative association of age at substance use with polydrug use was found, b = -0.40, SE = 0.19, p = 0.03. A higher overall quality of life was significantly associated with lower odds of opioid use during opioid maintenance treatment, b = -0.62, SE = 0.23, p < 0.01. Social dimensions, participants' adverse experiences, and social resources were not associated with polydrug or opioid use. Conclusion: Opioid maintenance treatment is associated with lowered opioid use, but to a lesser degree with polydrug use.
Our findings add quality of life as an important factor that should be given particular attention, because it can offer insight into aspects that can affect patients' opioid use. abstract_id: PUBMED:22647623 When and how to detoxify clients from methadone maintenance treatment? Background: Methadone has been prescribed in France as a maintenance treatment for heroin dependence since 1969. Nevertheless, the optimal duration of methadone maintenance treatment and how detoxification from methadone at the end of treatment should be performed are still debated. Objective: To conduct a literature review on when and how to detoxify clients from methadone maintenance treatment and to collect the opinion of experts in the field. DOCUMENTARY SOURCES: We searched the PubMed, Embase, Cochrane Library and PsycINFO databases for the 1966-2011 period using the keywords "methadone", "maintenance", "detoxification", "tapering", "cessation", "withdrawal". We also searched for data in other addiction journals in French that are not available in those databases. We also collected the opinion of the physician in charge of the oldest methadone program in France (1969). STUDIES SELECTION: We excluded studies that used methadone as a short-term treatment of heroin withdrawal and thus selected 23 articles. Results: There is a consensus on when methadone maintenance treatment should be stopped, defined by the client's will to stop, the physician's judgement that the client has been stable for a sufficiently long period of time, and also the client's motivation to live his life without maintenance treatment. Among articles on how methadone treatment should be stopped, the majority recommend ambulatory, practical approaches using slow tapering of the dose, with the ability to go back to the previous dose if needed, namely in case of relapse to heroin use, heavy withdrawal or psychiatric symptoms. Limits: There are few articles addressing the subject, especially ones prospectively comparing different cessation strategies. Conclusion: Methadone maintenance treatment does not necessarily need to be maintained lifelong and can be stopped within its prescription setting, including medical, psychological and social evaluation. abstract_id: PUBMED:29719345 Status Investigation of Outpatients Receiving Methadone Maintenance Treatment (MMT) in Shanghai from 2005 to 2016. Background: Methadone maintenance treatment (MMT) is an effective measure to control drug abuse, prevent AIDS, and improve family and social functions among those with heroin addiction. Relevant surveys in recent years show that the number of outpatients receiving MMT has a downward trend. Aims: To understand variation in maintenance treatment rates and causes of withdrawal for outpatients receiving MMT in Shanghai since the initiation of this program. Method: This study was a retrospective investigation, with data from the AIDS Comprehensive Prevention and Control Data Information Management System of the China AIDS Prevention and Control Center. Descriptive statistics were used to describe demographic data, the treatment maintenance rate, and the number of new outpatients receiving MMT in Shanghai from May 2005 to June 2016. The causes of withdrawal were summarized and analyzed. Results: From May 2005 to June 2016, there were a total of 7181 outpatients receiving MMT in Shanghai.
These patients were primarily male (male-to-female ratio around 3:1), young adults (more than 90% were 25 to 54 years old), with junior high school education or below (65.4%), single (total of unmarried, divorced and widowed: 63.1%), and unemployed or underemployed (81.5%). The daily dose of methadone in MMT patients showed an upward trend from 2008 and gradually declined after reaching its peak in 2013. The mean (sd) dose of methadone taken in the years studied was 56 (2.75) ml/d. The number of new outpatients increased sharply in 2007 and 2008 (more than 1500) and then decreased year by year. The number of outpatients increased continuously from 2005 to 2011, with the peak in 2011 (3840 patients), and then decreased gradually. The maintenance rate remained stable at over 80% from 2010 onwards. The main causes of withdrawal were: 1) arrest due to unrelated criminal causes (19.89%); 2) being sent to a compulsory isolated rehabilitation center due to occasional drug use; and 3) physical reasons (disease/pregnancy/death, 11.80%). Conclusion: The maintenance rate has been kept at a relatively good level since the initiation of the MMT outpatient clinic service in Shanghai. The number of patients receiving treatment showed an increase-then-decrease trend. The main causes of patients' withdrawal were related to "crime" and "relapse". In order to improve the MMT outpatient service, subsequent studies need to carry out related investigations to understand the causes of these changes and patients' reasons for withdrawal. abstract_id: PUBMED:27355105 Outcomes of Patients in Long-Term Opioid Maintenance Treatment. Background: Despite the importance of the duration of opioid maintenance treatment (OMT), only a few studies have reported outcomes of long-term OMT. Objectives: To describe outcomes of long-term (> 5 years) OMT patients with respect to substance use, physical and mental health, and socioeconomic characteristics. Methods: Patients (n = 160) were recruited from 15 OMT offices in different regions of Germany. Data were collected using a structured interview at baseline, and clinical recordings, including urine drug screenings, during 12 months of follow-up. Results: Patients had a mean age of 44 years. During follow-up, 23% of patients showed indications of an alcohol problem. Cannabis was used by 56%, often frequently. Heroin was used by 28%, mostly infrequently. Three quarters of patients either had a non-substance-related mental disorder (48.1%, most frequently affective and anxiety disorders) or a somatic diagnosis (61.3%, frequently hepatitis C, HIV, or cardiovascular diseases), or both. The unemployment rate was 43.1% at baseline (27% for patients without comorbidity) and remained generally stable during follow-up. No arrests or incarcerations were recorded. During follow-up, 2.5% of patients prematurely terminated OMT and 2.5% regularly completed OMT. Conclusions: The sample as a whole was characterized by stable living conditions, high unemployment, low illicit opiate use, and a high retention rate. Continuation of OMT could enable further treatment of comorbidity and prevent resumption of a drug-dominated lifestyle. It may well be asked, however, how further improvements can be achieved within the context of OMT, especially with regard to a further decrease in alcohol use and the treatment of depression. abstract_id: PUBMED:32299305 Sleeping Patterns in Patients with Opioid Use Disorder: Effects of Opioid Maintenance Treatment and Detoxification.
The aim of the study was to explore whether abstinent patients on recent opioid detoxification or on opioid maintenance treatment suffer from sleeping problems. A total of 199 patients on opioid maintenance treatment (methadone, diacetylmorphine and buprenorphine) or recent opioid detoxification were included in this exploratory cross-sectional study. We used the Pittsburgh Sleep Quality Index (PSQI) and the Regensburger Insomnia Scale (RIS) to assess potential sleeping problems. There was a significant effect of the condition "opioid maintenance" or "recent opioid detoxification" on the total scores of the PSQI and RIS. All opioid maintenance drugs used by the study population were associated with more sleeping problems compared to the detoxification group when calculated with RIS values. Recently abstinent patients (opioid detoxification) displayed significantly fewer sleep disturbances than opioid-maintained patients. Since sleeping problems can seriously impair treatment success and quality of life, screening for sleep disturbances and their subsequent treatment is of pronounced relevance. abstract_id: PUBMED:24548802 Methadone maintenance for HIV positive and HIV negative patients in Kyiv: acceptability and treatment response. Background: With up to 40% of opioid injectors infected with HIV, Ukraine has one of the most concentrated HIV epidemics in the world, mainly due to unsterile injection practices and a historical absence of effective prevention services. Harm reduction programs, including syringe exchange and a small buprenorphine treatment program, were introduced in 2004, and methadone maintenance was allowed in 2007. Despite an initial expansion, by 2009, only 3221 injectors were receiving methadone treatment. A growing body of research on methadone maintenance has found high retention rates with reduction in opioid use and HIV risk behaviors. We report on the acceptability and initial outcome of methadone treatment as a function of HIV status, an issue that has not yet been reported for injectors in Ukraine. Methods: Longitudinal observational study of a 12-week course of methadone treatment in 25 HIV+ and 25 HIV- opioid-addicted individuals recruited from a harm reduction program and the city AIDS Center. Drug use and HIV risk were assessed at baseline and weeks 4, 8, 12 and 20; all patients were offered continued methadone maintenance in the Kyiv city program at the end of 12 weeks. Results: Fifty-four individuals were asked if they were interested in the study and 50, demographically similar to other samples of opioid-addicted Ukrainians, agreed to participate. Two died of non-study-related causes; the other 48 completed assessments at weeks 4, 8 and 12, and 47 completed follow-ups at week 20. Significant reductions were seen in use of heroin (p<0.0001), other opiates/analgesics (p<0.0001), and HIV risk behaviors (drug, sex, total; all p<0.0001). All 48 patients chose to continue methadone after the 12 weeks of study medication ended. Unlike most opioid treatment studies, sexual risk was somewhat higher than injecting risk at study intake. Conclusions: Methadone maintenance was well accepted by HIV+ and HIV- opioid-dependent individuals and has the potential for significant public health impact if made more widely available with sustained access and support. abstract_id: PUBMED:37954240 Methadone maintenance treatment is more effective than compulsory detoxification in addressing gut microbiota dysbiosis caused by heroin abuse.
Introduction: Heroin use disorder (HUD) is commonly accompanied by gut dysbiosis, but the roles of gut microbiota in HUD treatment, such as compulsory detoxification and methadone maintenance treatment (MMT), remain poorly understood. Methods: In this study, we performed 16S rDNA and whole metagenome sequencing to analyze the gut microbial profiles of HUD patients undergoing heroin addiction, heroin withdrawal (compulsory detoxification), and MMT. Results: Our findings revealed that, compared to healthy controls, microbial diversity was significantly decreased in HUD patients who were in a state of heroin addiction and withdrawal, but not in those receiving MMT. We observed significant alterations in 10 bacterial phyla and 20 bacterial families in HUD patients, while MMT partially restored these changes. Whole metagenome sequencing indicated gut microbiota functions were significantly disrupted in HUD patients experiencing heroin addiction and withdrawal, but MMT was found to almost reverse these dysfunctions. In addition, we identified 24 featured bacteria at the genus level that could be used to effectively distinguish between healthy individuals and those with heroin addiction, heroin withdrawal, or receiving MMT. Furthermore, we found that the relative abundances of Actinomyces, Turicibacter and Weissella were positively associated with the Hamilton Depression Scale score in different states of HUD patients. Discussion: This study provides evidence from the gut microbiota perspective that MMT is a more effective approach than compulsory detoxification for HUD treatment. Answer: Yes, heroin-assisted treatment (HAT) has been found to be effective for patients with no previous maintenance treatment (NPME). A German randomized controlled trial specifically assessed the effects of diamorphine (heroin) on opioid-dependent patients who had not undergone previous maintenance treatment. The results indicated that, despite these patients having a more severe drug use profile at baseline, there were no differences in treatment outcome and treatment retention when compared to the rest of the participants. Moreover, for the NPME subgroup, outcomes on the reduction of illicit drug use were significantly better under diamorphine compared to methadone treatment, although no difference was observed in health outcomes (PUBMED:20424456). This suggests that diamorphine treatment could be considered as an option for treating severely opioid-dependent patients, regardless of their previous maintenance treatment experience.
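One small computational note on PUBMED:32099510: the coefficients it reports (e.g. b = -0.89, SE = 0.19 for time in treatment) are logistic-regression log-odds, and converting them to odds ratios with Wald confidence intervals is a one-line exponentiation, OR = exp(b) with CI = exp(b ± 1.96·SE). A quick sketch using the two coefficients quoted in the abstract follows; the variable labels are my shorthand, not the study's.

```python
# Convert logistic-regression log-odds coefficients to odds ratios with
# Wald 95% confidence intervals: OR = exp(b), CI = exp(b +/- 1.96*SE).
# Coefficients and SEs are those quoted in PUBMED:32099510.
import math

def odds_ratio(b: float, se: float, z: float = 1.96):
    return math.exp(b), math.exp(b - z * se), math.exp(b + z * se)

for label, b, se in [
    ("time in treatment -> opioid use",       -0.89, 0.19),
    ("overall quality of life -> opioid use", -0.62, 0.23),
]:
    or_, lo, hi = odds_ratio(b, se)
    print(f"{label}: OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
# time in treatment -> opioid use: OR = 0.41 (95% CI 0.28-0.60)
# overall quality of life -> opioid use: OR = 0.54 (95% CI 0.34-0.84)
```

An odds ratio below 1 here corresponds to the abstract's finding of lower odds of opioid use with longer time in treatment and with higher quality of life.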
Instruction: Does the choice of local anesthetic affect the catecholamine response to stress during epidural anesthesia? Abstracts: abstract_id: PUBMED:8267197 Does the choice of local anesthetic affect the catecholamine response to stress during epidural anesthesia? Background: Previous work has established that 2-chloroprocaine epidural anesthesia has no effect on circulating plasma epinephrine concentrations in young, healthy, resting volunteers, and results in a decrease in norepinephrine concentration only when a level of analgesia to pinprick of C-8 is reached. The current study was performed to evaluate the possibility that this finding is unique to 2-chloroprocaine. Methods: Nine healthy volunteers were studied on three occasions at least 48 h apart; each received three local anesthetics (0.75% bupivacaine, 2% lidocaine, and 3% 2-chloroprocaine, all without epinephrine). After placement of lumbar epidural and central venous catheters, blood samples were drawn from the central venous catheter at the following stages: (1) 20 min after catheter placement (baseline), (2) during the first cold pressor test (CPT; hand held in an ice water bath for 90 s), (3) 20 min after reaching a T-1 level of epidural analgesia, and (4) during a second CPT (epidural analgesia to T-1). Monitoring consisted of noninvasive cardiac output (impedance), noninvasive blood pressure, and EKG. Results: Extensive epidural block (stage 3) altered measured variables only minimally with respect to the resting baseline state. During stage 2 (first CPT), mean arterial pressure (MAP), heart rate (HR), cardiac index (CI), epinephrine, and norepinephrine increased. During stage 4 (second CPT), increases in HR and CI were not attenuated by any of the three local anesthetics. Increases in MAP were attenuated by epidural anesthesia with all three local anesthetics. Bupivacaine and 2-chloroprocaine epidural anesthesia significantly attenuated increases in plasma catecholamines, but lidocaine epidural anesthesia did not. Conclusions: Epidural anesthesia with all three local anesthetic agents tested resulted in an incomplete sympathectomy in the resting state in healthy young men, judged by plasma catecholamine concentrations and cardiovascular variables minimally changed from resting baseline. Lidocaine epidural anesthesia did not attenuate the catecholamine response to CPT, indicating decreased blockade of sympathetic efferent neural traffic compared with bupivacaine and chloroprocaine epidural anesthesia. abstract_id: PUBMED:12462786 Choice of local anesthetic for epidural anesthesia in physiological labor In a comparative analysis of epidural anesthesia with bupivacaine versus ropivacaine, the latter was shown to produce a weaker motor block and can therefore be applied at the end of both the first and the second stage of labor. Adequate analgesia provides an uneventful course of the second stage of labor, decreases its duration, and eliminates disturbances of central hemodynamics and hormonal homeostasis. Currently, ropivacaine (Naropin) appears to be the anesthetic of choice for epidural analgesia in vaginal labor. abstract_id: PUBMED:7848933 Differential epidural block. Does the choice of local anesthetic matter? Background And Objectives: It is well established that spinal anesthesia results in a differential block to the sensations of pinprick and cold temperature discrimination. However, the existence of differential block during epidural anesthesia has not always been accepted. Recently, it has been shown that lumbar epidural anesthesia with chloroprocaine and lidocaine produces a differential block to pinprick and cold sensation. The purpose of this study was to determine if the choice of local anesthetic used for epidural anesthesia has any influence on the relative levels of anesthesia, analgesia, and cold sensation. Methods: The authors studied nine healthy subjects; each was studied three times and received one of three local anesthetics (0.75% bupivacaine, 2% lidocaine, and 3% chloroprocaine) via an epidural catheter placed into the second or third lumbar epidural space. The authors tested the following modalities compared to an unblocked dermatome: anesthesia, loss of sensation to pinprick; analgesia, loss of an equally sharp sensation to pinprick; and cold sensation, loss of cold sensation to alcohol. Results: Twenty minutes after injection of the local anesthetic, zones of differential sensory block existed for all three agents tested. Anesthesia and analgesia were the most caudad and cephalad, respectively, while loss of cold sensation was found to be between these two levels. There was no significant difference in the dermatomal level achieved among the three local anesthetics tested. Sensory testing data observed 10 minutes later showed that no significant change had occurred. Conclusions: This study reaffirms the existence of differential sensory block during epidural anesthesia and establishes that the observed differential block appears to be independent of the local anesthetic used. abstract_id: PUBMED:34150121 Effect of general anesthesia combined with epidural anesthesia on circulation and stress response of patients undergoing hysterectomy. Objective: To investigate the effect of general anesthesia combined with epidural anesthesia on circulation and stress response of patients undergoing hysterectomy. Methods: A total of 97 patients undergoing hysterectomy in our hospital from December 2017 to December 2019 were recruited as the research participants, of whom 44 patients (general anesthesia group) received general anesthesia and 53 patients (joint group) received general anesthesia combined with epidural anesthesia during operation. The hemodynamic indexes, anesthetic effect, anesthetic recovery effect, cognitive function, and stress substance levels of the two groups were compared. Results: Compared with the general anesthesia group, the SBP and HR of the patients in the joint group were more stable, and the anesthesia effect and recovery effect in the joint group were better. The MMSE score of the joint group at 6 h and 12 h after anesthesia was significantly higher than that of the general anesthesia group (P < 0.001). There was no significant difference in the levels of adrenaline and norepinephrine between the two groups before operation (P > 0.05). The levels of stress substances in the two groups increased at 30 min after operation (P < 0.001), and those in the joint group were significantly lower than those in the general anesthesia group (P < 0.001). Conclusion: Compared with general anesthesia, general anesthesia combined with epidural anesthesia produces a better anesthetic effect in hysterectomy, has less influence on patients' circulatory response and can reduce the stress response. abstract_id: PUBMED:31871339 Local Anesthetic Systemic Toxicity Following General and Epidural Anesthesia in a Patient with a History of Muscle Relaxant-induced Anaphylaxis. We report the case of a 71-year-old Japanese woman with a history of anaphylaxis induced by muscle relaxants who developed local anesthetic systemic toxicity (LAST) following abdominal surgery under general anesthesia with combined spinal-epidural anesthesia without muscle relaxants. The total dosages of local anesthetics reached 0.67 mg/kg of ropivacaine and 11.5 mg/kg of lidocaine over 12.5 h to obtain sufficient muscle relaxation for surgery. Regional anesthesia is useful in cases in which muscle relaxants are to be avoided during surgery. However, especially for a patient with risk factors and prolonged surgery, precautions should be taken to prevent LAST. abstract_id: PUBMED:22141218 Nonanalgesic effects of thoracic epidural anesthesia Thoracic epidural anesthesia, which has been performed since the 1950s, has progressed from being one analgesic technique among others to its present status as the technique of choice for managing pain after major abdominal and thoracic surgery. In addition to providing effective analgesia, the epidural infusion of local anesthetic agents produces a sympathetic block that offers advantages over other types of pain control, particularly with respect to the cardiovascular, respiratory, and gastrointestinal systems. Thoracic epidural anesthesia provides dynamic pain relief, allowing the patient to resume activity early. It also permits early extubation and is associated with fewer postoperative pulmonary complications, shorter duration of paralytic ileus, and a better response to the stress of anesthesia and surgery. However, meta-analyses have not yet demonstrated that postoperative outcomes are improved. This review describes the nonanalgesic effects of thoracic epidural anesthesia. abstract_id: PUBMED:16244020 Administration of local anesthetic through the epidural needle before catheter insertion improves the quality of anesthesia and reduces catheter-related complications. Epidural catheter placement offers flexibility in block management. However, during epidural catheter insertion, complications such as paresthesia and venous and subarachnoid cannulation may occur, and suboptimal catheter placement can affect the quality of anesthesia. We performed this prospective, randomized, double-blind study to assess the effect of a single-injection dose of local anesthetic (20 mL of 2% lidocaine) through the epidural needle as a priming solution into the epidural space before catheter insertion. We randomized 240 patients into 2 equal groups and measured the quality of anesthesia and the incidence of complications. In the needle group (n = 100), catheters were inserted after injection of a full dose of local anesthetic through the needle. In the catheter group (n = 98), the catheters were inserted immediately after identification of the epidural space. Local anesthetic was then injected via the catheter. We noted the occurrence of paresthesia, inability to advance the catheter, or IV or subarachnoid catheter placement. Sensory and motor block were assessed 20 min after the injection of local anesthetic. Surgery was initiated when adequate sensory loss was confirmed. In the catheter group, the incidence of paresthesia during catheter placement was 31.6% compared with 11% in the needle group (P = 0.00038). IV catheterization occurred in 8.2% versus 2% of patients in the catheter and needle groups, respectively (P = 0.048). More patients in the needle group had excellent surgical conditions than the catheter group (89.6% versus 72.9%; P < 0.003). We conclude that giving a single-injection dose via the epidural needle before catheter placement improves the quality of epidural anesthesia and reduces catheter-related complications. abstract_id: PUBMED:10204902 The association between injected volume of local anesthetic and spread of epidural anesthesia: a hypothesis. Background And Objectives: Clinically, an increase of the injected volume of local anesthetic does not result in a linear increase in the spread of epidural anesthesia. This study was designed to evaluate this observation. Methods: One hundred twenty adult women undergoing abdominal gynecologic surgery were randomly assigned into 6 groups of 20 patients each. An epidural catheter was inserted in each patient via the L1-L2 interspace, and 2, 4, 6, 8, 10, and 12 mL of 2% mepivacaine was injected, respectively, in each group. Fifteen minutes later, the cephalad extent of hypesthesia above the second sacral dermatome was tested with an alcohol swab. Results: The regression of the number of anesthetized dermatomes (y) on injected volume (x, in mL) approximates a cubic polynomial rather than a simple linear equation, as follows: y = 3.021x - 0.274x² + 0.009673x³ (R² = 0.938, P = .0435); this equation is evaluated numerically in the sketch following this item. Conclusions: The present study demonstrates the lack of a linear relationship between injected volume of local anesthetic and spread of epidural anesthesia. Whether the cubic equation developed is clinically useful remains unclear because of the variability of the data. abstract_id: PUBMED:28321983 Chloroprocaine for epidural anesthesia in infants and children. Continuous epidural infusions are an effective and safe method of providing anesthesia and postoperative analgesia in infants and children with multiple advantages over systemic medications, including earlier tracheal extubation, decreased perioperative stress response, earlier return of bowel function, and decreased exposure to volatile anesthetic agents with uncertain long-term neurocognitive effects. Despite these benefits, local anesthetic toxicity remains a concern in neonates and infants because of their decreased metabolic capacity for amide local anesthetics. Chloroprocaine, an ester local anesthetic agent, which is rapidly metabolized in plasma at all ages, is an attractive alternative for this special population, particularly in the presence of superimposed liver impairment or when higher infusion rates are needed for surgical incisions stretching many dermatomes. The current manuscript reviews the literature pertaining to the use of 2-chloroprocaine for regional anesthesia in infants and children. Dosing regimens are presented and the applications of 2-chloroprocaine in this population are discussed. abstract_id: PUBMED:10439646 Epidural vs. intradural anesthesia in ambulatory surgery Spinal anesthesia is the technique of choice for many outpatient procedures. With appropriate screening and preparation, it can provide excellent surgical conditions and highly satisfactory anesthesia for the patient, while remaining cost-effective. Intradural anesthesia has advantages over epidural anesthesia: technical simplicity, rapid onset, efficacy and depth of blockade. Its use has been controversial, however, for many years due to the potential risk of headache after puncture of the dura mater. Epidural anesthesia causes fewer hemodynamic changes and provides greater dose flexibility and local anesthetic concentration, with less risk of headache after accidental puncture of the dura mater.
The drawbacks are that it takes longer to perform and onset of blockade comes later. Both techniques are valid alternatives to general anesthesia in outpatient surgery. The choice of one over the other will depend on patient characteristics, availability of a presurgical area, and the anesthesiologist's skill. Answer: Yes, the choice of local anesthetic can affect the catecholamine response to stress during epidural anesthesia. A study involving healthy volunteers who received three different local anesthetics (0.75% bupivacaine, 2% lidocaine, and 3% 2-chloroprocaine) showed that during a cold pressor test, which is a stress-inducing procedure, increases in mean arterial pressure (MAP) were attenuated by epidural anesthesia with all three local anesthetics. However, only bupivacaine and 2-chloroprocaine significantly attenuated increases in plasma catecholamines, while lidocaine did not. This indicates that lidocaine epidural anesthesia may have a decreased blockade of sympathetic efferent neural traffic compared with bupivacaine and chloroprocaine epidural anesthesia (PUBMED:8267197).
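The cubic volume-spread equation quoted above (PUBMED:10204902) is easy to sanity-check numerically. The short sketch below is purely illustrative, not code from the paper; it tabulates the predicted number of anesthetized dermatomes at the six injection volumes used in the study.

```python
# Predicted epidural spread from the cubic fit reported in PUBMED:10204902:
# y = 3.021x - 0.274x^2 + 0.009673x^3, with x = injected volume in mL.
def dermatomes(volume_ml: float) -> float:
    """Predicted number of anesthetized dermatomes for a given volume (mL)."""
    x = volume_ml
    return 3.021 * x - 0.274 * x**2 + 0.009673 * x**3

for x in (2, 4, 6, 8, 10, 12):  # the six dose groups in the study
    print(f"{x:2d} mL -> {dermatomes(x):5.2f} dermatomes")
```

Running this shows the marginal gain falling from roughly 3.3 dermatomes between 2 and 4 mL to about 1 dermatome between 10 and 12 mL, which is exactly the departure from linearity the abstract describes.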
Instruction: Spontaneous hemopneumothorax: is conservative treatment enough? Abstracts: abstract_id: PUBMED:16037871 Spontaneous hemopneumothorax: is conservative treatment enough? Purpose: Spontaneous hemopneumothorax is a rare disorder, occurring in 1% to 12% of patients with spontaneous pneumothorax. The present review was undertaken to emphasize the potentially life-threatening nature of spontaneous hemopneumothorax and reassess the benefit of conservative treatment with chest tube drainage. Material And Methods: From 1997 to 2002, 291 cases of spontaneous pneumothorax were treated in our department. Of these, 9 (3.09%) developed hemopneumothorax (> 400 ml). The clinical features of these patients and the results of conservative and surgical management were retrospectively reviewed. Results: Seven patients were treated conservatively and two required VATS and thoracotomy because of worsening clinical condition. The amount of aspirated blood ranged from 400 to 3700 ml (mean, 1533 ml). Six patients received a homologous blood transfusion. Conclusions: In conclusion, hemopneumothorax is a serious condition complicating spontaneous pneumothorax. Conservative treatment is adequate in most cases and should be performed if bleeding persists for less than 24 hours after chest tube placement. abstract_id: PUBMED:10786105 Spontaneous hemopneumothorax--results of conservative treatment. Spontaneous hemopneumothorax is a serious condition complicating spontaneous pneumothorax. Early stage thoracotomy has been advocated to stop the bleeding and evacuate the coagulated blood from the pleural cavity. The present review was undertaken to emphasize the potentially life-threatening nature of spontaneous hemopneumothorax and reassess the benefit of conservative treatment with chest tube drainage. Out of six patients, five patients were treated conservatively and one required an emergency thoracotomy because of cardiovascular collapse. Although three patients still had minimal residual hemopneumothorax on discharge, none of them developed empyema or trapped lung. In conclusion, conservative treatment is efficient and should be performed if bleeding persists for less than 24 hours after chest tube placement. abstract_id: PUBMED:35812674 Successful treatment of aortic dissection with pulmonary embolism: A case report. Background: Aortic dissection (AD) and pulmonary embolism (PE) are both life-threatening disorders. Because their treatments conflict, management becomes difficult when they occur together, and there is no standard treatment protocol. Case Summary: A 67-year-old man fell down the stairs due to syncope and was brought to our hospital confused and irritable, and he was uncooperative during the physical examination. Further examination of the head, chest and abdomen by computed tomography revealed a subdural hemorrhage, multiple rib fractures, a hemopneumothorax and a renal hematoma. He was admitted to the Emergency Intensive Care Unit and given a combination of oxygen therapy, external rib fixation, analgesia and enteral nutrition. The patient regained consciousness after 2 wk but complained of abdominal pain and dyspnea with an arterial partial pressure of oxygen of 8.66 kPa. Computed tomography angiograms confirmed that he had both AD and PE. We subsequently performed only nonsurgical treatment, including nasal high-flow oxygen therapy, nonsteroidal analgesia, amlodipine for blood pressure control, and beta-blockers for heart rate control.
Eight weeks after admission, the patient improved and was discharged from the hospital. Conclusion: Patients with AD should be alerted to the possibility of a combined PE, the development of which may be associated with aortic compression. In patients with type B AD combined with low-risk PE, a nonsurgical, nonanticoagulant treatment regimen may be feasible. abstract_id: PUBMED:19919921 Early surgery for treatment of spontaneous hemopneumothorax. Purpose: Spontaneous hemopneumothorax (SHP) is a rare life-threatening disorder. We retrospectively investigated patients with SHP who were treated with video-assisted thoracic surgery (VATS), and report our results. Methods: From January 1993 to July 2006, 239 patients with spontaneous pneumothorax were treated, among whom 11 (4.6%) were diagnosed with SHP. Results: All 11 patients had lung collapse of more than moderate degree and a chest tube inserted, of whom 10 underwent an emergency operation. The points of hemorrhaging, each of which was in the apical portion of the lung, were easily revealed during VATS, and we were able to distinguish between brisk flow and seepage. Hemostasis was achieved using VATS in all surgical cases, while the remaining patient was treated with tube drainage. The single patient who did not undergo surgical treatment had recurrent spontaneous pneumothorax 3 months later. Conclusion: It is important to perform surgery for SHP at the appropriate time. VATS was found to be an easily performed and safe procedure for initial treatment in patients with active hemorrhaging and massive blood clotting in the thorax. The long-term outcome of our patients with early surgical indication was excellent and we recommend early surgical treatment for SHP. abstract_id: PUBMED:32089840 New perspectives for a huge traumatic pneumatocele treatment in a young adult. Traumatic pneumatocele (TP) is a very rare condition that occurs mostly in children and young adults. Conservative treatment is well established for uncomplicated cases. We present here an 18-year-old young adult with chest trauma caused by a fall. Chest computed tomography disclosed a cavitary lesion measuring 10.5 × 8.5 × 6.4 cm over the right upper lobe (RUL) with bilateral lower lung contusion and a mild haemopneumothorax but without rib fractures. Under conservative treatment, the condition lasted for ~8 months before complete resolution. Detailed temporal changes in chest X-ray were recorded for the analysis of natural healing processes. abstract_id: PUBMED:12811722 Pigtail tube drainage in the treatment of spontaneous pneumothorax. We report our experience in using pigtail tube drainage in the treatment of spontaneous pneumothorax compared with traditional chest tube thoracostomy. One hundred two patients were included and analyzed. Fifty patients were treated with the pigtail tube and 52 patients were treated with the traditional chest tube. The extubation time, mean hospital stay, evacuation rate, and total cost were similar without a significant statistical difference. Among the chest tube group, 15 underwent surgical intervention as a result of delayed resolution and hemopneumothorax; and among the pigtail group, 15 had other procedures for pneumothorax, including chest tube insertion and surgical intervention. This verifies our initial suspicion that the effectiveness of the pigtail drainage system is no less than that of the chest tube. Therefore, given its compatibility with ambulation and good patient compliance, pigtail tube drainage can be considered the treatment of choice for spontaneous pneumothorax. abstract_id: PUBMED:11988791 Thoracoscopic treatment for spontaneous hemopneumothorax. Spontaneous hemopneumothorax is a rare clinical entity sometimes requiring an operation in the early stage. Two patients who underwent successful video-assisted thoracic surgery (VATS) for spontaneous hemopneumothorax are presented. In both cases, the bleeding point was clearly identified, and hemostasis was easily obtained by clipping the point and placing three access ports in the usual fashion. Furthermore, the evacuation of clotted blood and resection of the bulla were performed without difficulty. The postoperative courses were smooth, and no complications occurred, although the preoperative general condition in the second case was hemodynamically unstable. The advantages of VATS over conventional thoracotomy include less time required to access the pleural cavity, a better view, and easier manipulation during surgery. Because spontaneous hemopneumothorax is a benign disease, VATS should be considered an initial treatment option in all patients with this condition, even those with active bleeding. abstract_id: PUBMED:10714120 Surgical treatment for spontaneous hemopneumothorax complicated by delayed re-bleeding: a case report A 34-year-old man was admitted to the hospital due to spontaneous hemopneumothorax. Chest tube drainage was performed, and a hemorrhagic pleural effusion of 1,600 ml was drained. Because of this, the patient was transferred to the emergency center of our hospital. Following a blood transfusion, we continued to treat conservatively for nine days, because no more bleeding was recognized. On day ten, the patient suddenly started bleeding again, and an emergency operation was performed. At thoracoscopic operation, the bleeding point was ligated with a surgical clip; however, it proved so difficult to remove the blood clots adherent to the lung surface that the procedure could not be completed thoracoscopically. Since re-bleeding after the acute phase creates problems for conservative treatment, early surgical treatment should be considered. abstract_id: PUBMED:18278465 Diagnosis and treatment of 42 cases of multiple injuries with pancreatic injury. In order to summarize the clinical diagnosis and treatment methods for 42 cases of multiple injuries with pancreatic injury, a retrospective analysis on 42 cases of multiple injuries with pancreatic injury from January 1990 to January 2006 was carried out in our hospital. Most cases were associated with hemopneumothorax and rib fractures (52.3%), shock (50%), multiple fractures (47.6%), and severe brain injury (26.1%). In 42 cases, one case died of severe hemorrhagic shock, and the remaining 41 cases (97.6%) were cured (including 40 cases receiving surgical operation and one case receiving conservative treatment). Postoperative complications occurred in 16 cases (21 cases/times): pancreatic fistula (5 cases/times), incisional wound infection (5 cases/times), intra-abdominal infection (3 cases/times), stress ulcer (3 cases/times), pleural effusion (3 cases/times), pulmonary infection (one case) and wound dehiscence (1 case). The principal approach to multiple injuries with pancreatic injury is first to save life, then to actively treat injuries that compromise respiratory and circulatory function, to manage cerebral herniation and other immediately life-threatening injuries, and finally to address the pancreatic injury itself, in order to improve the survival rate and quality of life. abstract_id: PUBMED:34035334 Recommendation for management of patients with their first episode of primary spontaneous pneumothorax, using video-assisted thoracoscopic surgery or conservative treatment. International guidelines do not recommend surgery for the first episode of primary spontaneous pneumothorax (PSP), except in cases of persistent air leak, hemopneumothorax, bilateral pneumothorax, or occupations at risk. However, these recommendations have been challenged because of a significant reduction in the recurrence rate in emerging studies. We evaluated the rationale of the recommendations by systematically reviewing RCTs and observational studies using the Grading of Recommendations, Assessment, Development, and Evaluations (GRADE) system. We searched articles in PubMed, EMBASE, and Cochrane databases up to August 15, 2020. The primary outcomes were the recurrence rate and complication rate. The secondary outcomes were hospital stay and drainage duration. Nine eligible studies with 1121 patients were retrieved and analyzed. The recurrence rate was lower with VATS than with conservative treatment, with moderate evidence (OR 0.13, 95% CI 0.09 to 0.19, P < 0.001, I² = 0%). We did not find significant differences in complication rate (Peto OR 1.17, 95% CI 0.33 to 4.12, P = 0.80), hospital stay duration (MD -0.48 days, 95% CI -2.84 to 1.87, P = 0.69, very low evidence), or drainage duration (MD -3.99 days, 95% CI -9.06 to 1.08, P = 0.12, very low evidence) between the two groups. Our results would suggest VATS treatment as a weak recommendation for patients with the first episode of PSP, based on our systematic review of the current evidence using the GRADE system, indicating that different treatments will be appropriate for different patients and that patients' values and preferences should be incorporated through shared decision making. Trial registry: PROSPERO No. CRD42020162267. Answer: Spontaneous hemopneumothorax is a serious condition that complicates spontaneous pneumothorax, and the adequacy of conservative treatment has been a subject of discussion. The literature presents a range of perspectives on the management of this condition. Several studies suggest that conservative treatment with chest tube drainage is adequate in most cases of spontaneous hemopneumothorax, especially if bleeding persists for less than 24 hours after chest tube placement (PUBMED:16037871; PUBMED:10786105). These studies report that conservative treatment can be efficient, and even in cases where minimal residual hemopneumothorax was present upon discharge, complications such as empyema or trapped lung did not develop. However, other studies advocate for early surgical intervention, particularly in cases with active hemorrhaging and massive blood clotting in the thorax. Video-assisted thoracic surgery (VATS) has been reported as an easily performed and safe procedure for initial treatment in such patients, with excellent long-term outcomes (PUBMED:19919921; PUBMED:10714120).
VATS allows for clear identification of the bleeding point and facilitates hemostasis, and it is recommended as an initial treatment option even in patients with active bleeding (PUBMED:11988791). In cases where conservative treatment is chosen, pigtail tube drainage has been reported to be as effective as traditional chest tube thoracostomy, with similar outcomes in terms of extubation time, hospital stay, evacuation rate, and total cost (PUBMED:12811722). The decision between conservative and surgical treatment may also be influenced by the presence of other injuries or complications. For instance, in a case of aortic dissection with pulmonary embolism, a nonsurgical, nonanticoagulant treatment regimen was successfully employed (PUBMED:35812674). In summary, while conservative treatment with chest tube drainage is often adequate for spontaneous hemopneumothorax, early surgical intervention with VATS may be preferable in cases with active hemorrhage or significant blood clotting. The choice of treatment should be individualized based on the patient's condition, the extent of bleeding, and the presence of other injuries or complications.
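The pooled estimates cited in this item (for example, the recurrence odds ratio of 0.13 in PUBMED:34035334) are produced by inverse-variance weighting of study-level log odds ratios. The sketch below works through the mechanics on entirely hypothetical 2×2 counts; because the review reported I² = 0% for recurrence, a fixed-effect model is assumed here in place of the more general random-effects machinery.

```python
# Minimal fixed-effect meta-analysis of odds ratios (hypothetical data).
import math

# (events_vats, n_vats, events_conservative, n_conservative) per made-up study.
studies = [(4, 120, 30, 118), (2, 80, 18, 75), (5, 150, 35, 140)]

weighted_sum = total_weight = 0.0
for a, n1, c, n2 in studies:
    b, d = n1 - a, n2 - c
    log_or = math.log((a * d) / (b * c))
    variance = 1/a + 1/b + 1/c + 1/d      # Woolf variance of the log OR
    weight = 1 / variance                  # inverse-variance weight
    weighted_sum += weight * log_or
    total_weight += weight

pooled_log_or = weighted_sum / total_weight
se = math.sqrt(1 / total_weight)
low = math.exp(pooled_log_or - 1.96 * se)
high = math.exp(pooled_log_or + 1.96 * se)
print(f"Pooled OR = {math.exp(pooled_log_or):.2f} (95% CI {low:.2f}-{high:.2f})")
```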
Instruction: Are Females at Greater Risk for Revision Surgery After Hip Resurfacing Arthroplasty With the Articular Surface Replacement Prosthesis? Abstracts: abstract_id: PUBMED:27121872 Are Females at Greater Risk for Revision Surgery After Hip Resurfacing Arthroplasty With the Articular Surface Replacement Prosthesis? Background: Female patients undergoing hip resurfacing arthroplasties may be at greater risk of revision surgery than males, but it is unclear whether this is related to sex or other factors. We focused our analysis on data from a prospective multicenter cohort study monitoring the ASR(TM) hip resurfacing arthroplasty prosthesis, examining the potential association of sex with patient-reported outcome measures (PROMs), metal ion levels, revision surgery, and presence of adverse local tissue reaction. As thousands of patients with the ASR(TM) prosthesis are still undergoing followup, it is critical to optimize the protocol for monitoring these patients. Questions/purposes: We wished (1) to assess the associations between sex and implant survival, and adverse local tissue reaction; and (2) to report the differences between sexes in metal ion levels and patient-reported outcome measures. Methods: One thousand two hundred fifty-two patients (1390 hips) who underwent hip resurfacing arthroplasty with implantation of the ASR(TM) prosthesis from April 2003 to July 2010 were eligible for enrollment in a multicenter followup study of the ASR(TM) Hip Resurfacing System after the voluntary recall of this device was initiated by DePuy in 2010. Nine hundred seventy patients (1098 hips) were enrolled at a mean of 7 years after surgery, with a mean followup of 2 years (range, 1-3.5 years). Nine hundred fifty-eight patients (1084 hips) met the inclusion criteria: ability to provide informed consent, complete PROMs, and continued routine followup. A subset of patients (150 patients, 171 hips), all from one center, who had annual metal artifact reduction sequence MRI, was analyzed. Ninety-three percent of patients from this center had routine MRI performed. The EuroQoL (EQ-5D), Harris hip score (HHS), University of California Los Angeles (UCLA) activity score, VAS pain, radiographs, patient and surgery details, and blood cobalt and chromium levels were obtained. Cox regression analysis was conducted to identify factors associated with implant survival, using any revision as the end point, and presence of adverse local tissue reaction. Results: In patients who had unilateral surgery, the only variable found to be associated with revision surgery was HHS (hazard ratio [HR], 0.96; 95% CI, 0.94-0.97; p < 0.001). In patients who had bilateral surgery, only HHS (HR, 0.93; 95% CI, 0.90-0.97; p < 0.001) and cobalt level (HR, 1.02; 95% CI, 1.01-1.03; p < 0.001) were associated with risk for revision. In patients with metal artifact reduction sequence MRI, the only variable found to be associated with presence of adverse local tissue reaction was cobalt level (HR, 1.06; 95% CI, 1.02-1.10; p = 0.001). Cobalt and chromium concentrations were greater in female patients than in male patients (cobalt, median 1.89 versus median 1.12 parts per billion [ppb], p < 0.001; chromium, median 2.03 versus median 1.17 ppb, p < 0.001). Slight differences were observed between males and females in HHS (males median 96 versus females median 94, p < 0.001) and UCLA scores (median 8 versus median 6, p < 0.001); however, there was no difference between sexes for VAS pain (median 0.5 versus median 0.5, p = 0.405).
Differences were identified between males and females in the distribution of EQ-5D scores, yet the medians were the same (median 1.0 versus median 1.0, p < 0.001). Conclusions: Male and female patients who had hip resurfacing arthroplasty with implantation of the ASR(TM) prosthesis should be followed with equal vigilance as both are at similar risk of revision surgery and adverse local tissue reaction. Metal ion levels and HHS should be obtained at followup to monitor for risk of revision and as a screening tool for MRI. Further research is necessary to evaluate if these relationships persist in patients with other metal-on-metal prostheses. Level Of Evidence: Level II, therapeutic study. abstract_id: PUBMED:30873748 Femoral side-only revision options for the Birmingham resurfacing arthroplasty. Background: The Birmingham Hip Resurfacing (BHR) system (Smith and Nephew) was developed as an alternative to conventional total joint replacement for younger, more active patients. Among other complications is the risk of femoral component failure. The only marketed revision option for such a complication involves exchange of all components for a total replacement arthroplasty. This presents a considerable and potentially unnecessary operative burden where revision of only the femoral prosthesis would suffice. We have analysed revision options for BHR in the context of periprosthetic femoral fractures with a stable acetabular component. Methods: Technical details of dual mobility hip systems available in Australia were collated and analysed to assess for potential 'off label' use with an existing BHR acetabular component. These data were then compared with the custom-made Smith and Nephew dual mobility implant with respect to clearance and sizing. Results: Two dual mobility articulation modalities from two companies were identified as appropriate for potential usage, with four products analysed in detail. These two demonstrated acceptable sizing and clearance measurements. Conclusion: Comparison of readily available dual mobility prostheses with custom-made implants showed off label dual mobility prosthetic use to be a viable alternative for femoral-only revisions with an in situ BHR. Single component revision has several advantages, including a less complex surgical procedure, shorter operative time, decreased blood loss and the expectation of resultant lower morbidity. Furthermore, this less complex revision surgery should give comparable results to those of primary total hip arthroplasty. abstract_id: PUBMED:33407687 Living with a recalled implant: a qualitative study of patients' experiences with ASR hip resurfacing arthroplasty. Background: Total hip arthroplasty is the traditional treatment for osteoarthritis in the hip joint. Hip resurfacing arthroplasty, with a metal-on-metal bearing, is a modern concept initially developed mainly for young active people. The metal-on-metal hip arthroplasty implant, Articular Surface Replacement (ASR), was implanted in approximately 93,000 patients before it was recalled in 2010 due to a high complication rate. This study aimed to evaluate patients' own experiences living with an implant that they knew had a high complication rate and had been recalled from the market. Methods: Of an original cohort of 34 patients, 14 who were still living with the implant were available for follow-up.
Qualitative semi-structured interviews were conducted with the 14 patients, a majority of whom had actively sought metal-on-metal hip resurfacing arthroplasty (HRA) and subsequently underwent HRA with an ASR prosthesis between 11/21/2006 and 09/28/2009. The responses were analyzed using the content analysis described by Graneheim and Lundman to compress text and identify categories and subcategories. Results: The results showed that most patients had already decided that they wanted a metal-on-metal HRA implant before meeting the surgeon. They expressed that the implant made it possible to live an active life. A majority did not think about the fact that they had a hip implant, because they lacked subjective pain. Most of the patients were positive about the annual exams at the hospital and wanted them to continue. None of them felt that their trust towards the healthcare system had changed after the implant recall. They expressed a belief that they would need new surgery sooner than they first thought. Conclusions: Despite all the attention when the ASR prosthesis was recalled, patients with ASR-HRA did not report themselves negatively affected by the recall in this group, in which a majority had actively sought an HRA procedure. The healthcare system has an obligation to continue the annual exams, even if the implant provider does not continue reimbursement. abstract_id: PUBMED:29987949 Establishing thresholds for metal ion levels in patients with bilateral Articular Surface Replacement hip arthroplasty. Introduction: The primary aim of the current study was to determine whether metal ion levels could distinguish well-functioning from poorly-functioning implants in patients who had undergone bilateral Articular Surface Replacement (ASR) Hip System implantation. The secondary aim was to establish threshold values for Co and Cr that could be used to predict clinical problems in these patients. Materials And Methods: The study included 125 patients treated with bilateral ASR hip resurfacing (HRA) and 93 patients treated with bilateral ASR XL total hip arthroplasty (THA) at a mean follow-up of 7.1 years post-surgery. Patients were labelled as well- or poorly-functioning based on strict criteria. We calculated the Co and Cr safe upper ion limits (SUL) for ASR HRA and ASR XL THA separately and assessed these cutoffs' ability to predict clinical problems. Results: In patients with bilateral ASR HRA, the well-functioning group had lower ion levels than the poorly-functioning group. In ASR THA patients, there was no difference in blood Co or Cr levels between well- and poorly-functioning implant groups (p = 0.118 and p = 0.379, respectively). Both Co and Cr could effectively discriminate between well-functioning and poorly-functioning ASR HRA implants (p < 0.001 and p < 0.001, respectively), but not ASR THA implants (p = 0.118 and p = 0.564, respectively). SULs of 4.0 ppb for Co and 4.0 ppb for Cr could differentiate between well- and poorly-functioning implants with high specificity. Conclusions: Metal ion levels are a more reliable tool in MoM HRA than THA when discriminating well- from poorly-functioning implants in bilateral patients. For patients treated with bilateral ASR HRA, ion levels above 4.0 ppb are highly specific for the detection of clinical problems. abstract_id: PUBMED:29520467 Outcomes after revision of metal on metal hip resurfacing to total arthroplasty using the direct anterior approach.
Background: Function after revision total hip arthroplasty (THA) in failed metal-on-metal (MoM) hip resurfacing arthroplasty (HRA) is variable, but post-operative complication rates are reportedly high. We hypothesized that HRA conversion to THA using the direct anterior approach (DAA) would be associated with optimal outcome. Methods: Seventeen MoM-HRAs in 15 patients (seven males, eight females) were revised through the DAA. The mean age was 45 years (28-59 yrs). The most common indications for revision were aseptic loosening of the acetabular component or of the femoral component and femoral neck fracture. In 16 hips, a conversion to a ceramic-on-ceramic (CoC) (13 hips), a metal-on-polyethylene (MOP) (2), or a large-head MoM (1) THA was done. An isolated femoral revision was done in one hip. Results: After 6.7 ± 3 years, no hip had required a re-revision. The Postel-Merle d'Aubigne (PMA) functional score improved from 9 (4-14) to 16 (12-18) (p < 0.001). An intra-operative fracture of the greater trochanter (one hip) and dysesthesia of the lateral femoral cutaneous nerve (four hips) were reported. Mean serum chromium concentration decreased from 33.2 μg/L (11.8-62 μg/L) pre-operatively to 5.8 μg/L (0.4-35.5 μg/L) post-operatively (p < 0.001), and mean serum cobalt concentration decreased from 35.8 μg/L (6.3-85.5 μg/L) to 4.7 μg/L (0.26-25.7 μg/L) (p = 0.003). Conclusion: Revision of failed MoM-HRA using the DAA resulted in an acceptable clinical outcome, no specific complication and no further surgery. A consistent decline in serum ion levels may be expected following HRA conversion to THA. abstract_id: PUBMED:32905713 Hip resurfacing arthroplasty in young patients: international high-volume centres' report on the outcome of 11,382 metal-on-metal hip resurfacing arthroplasties in patients ⩽50 years at surgery. Background: Total hip arthroplasty (THA) in patients younger than 50 years poses significant challenges including postoperative limitations of activity and higher failure rates. Sub-par outcomes of hip resurfacing arthroplasty (HRA) in registries remain controversial due to multiple confounders. Favourable HRA results in some studies are often regarded as irreproducible. The aim of this study is to analyse HRA outcomes in a large international cohort. Patients And Methods: We compiled a database of 11,382 HRA patients ⩽50 years from an international group of 27 experienced HRA centres from 13 countries. 18 different metal-on-metal (MoM) HRA designs were included with a mean follow-up of 7.6 years. Outcomes were implant survivorship, revision rates, causes for revision, clinical scores and metal ion levels. Outcomes were compared between genders, sizes, implant types and pre-operative diagnoses. Results: Overall cumulative Kaplan-Meier survivorship was 88.9% at 22 years (95% CI: 88.3-89.5%). 2 HRA designs (DePuy Articular Surface Replacement (ASR) and Corin Cormet Hip Resurfacing System (CORMET)) led to inferior results while all others yielded similar survivorships. Excluding ASR and CORMET, implant survivorship in 11,063 cases was 95% at 10 years and 90% at 22 years. In men, implant survivorship was excellent: 99% at 10 years and 92.5% at 21 years. In females, implant survivorship was 90% at 10 years and 81.3% at 22 years. The overall revision rate was 3.6% with the most common reasons for revision being implant loosening and adverse local tissue reactions.
The best survivorship was found in patients with osteoarthritis (95% CI, 92.1-93.3% at 22 years), while the poorest was among dysplastic hips (78.3%; 95% CI, 76.5-80.1% at 20 years, p < 0.001). Conclusions: The comparable revision rates demonstrated here may mitigate some concerns about the safety and longevity of MoM HRA implants. Higher demands for activity and functionality in younger patients make HRA a potential alternative to THA. abstract_id: PUBMED:28574125 Risk factors for mid-term revision surgery in patients with articular surface replacement total hip arthroplasty. Introduction: This study assessed the associations between gender and implant survival, as well as adverse local tissue reaction (ALTR), in patients with articular surface replacement (ASR) XL total hip arthroplasty (THA). Secondly, we sought to report the differences between genders in metal ion levels and patient reported outcome measures (PROMs) in these patients. Methods: 563 unilateral ASR XL THA patients were enrolled in a multicentre follow-up study at a mean of 6.4 years after index surgery. All patients had blood metal ion levels and PROMs obtained annually, and a valid anteroposterior pelvis radiograph. A sub-set of patients from a single centre had annual MRI performed and were analysed for the presence of moderate-to-severe ALTR. Results: 60 hips (11%) were revised during the study period. The only variables found to be associated with revision surgery in patients with unilateral THA were VAS pain (hazard ratio [HR], 1.35; p<0.001) and elevated cobalt metal ion levels (HR, 1.05; p<0.001). No variables assessed were found to be associated with the prevalence of ALTR. Chromium concentrations were greater in female patients than males, while cobalt levels were similar between genders. Males reported higher HHS, EQ-5D and UCLA scores than females. Conclusions: Both males and females with metal-on-metal THA implants should be followed with equal vigilance as gender does not appear to be associated with poor outcomes, such as revision surgery and presence of ALTR. abstract_id: PUBMED:26695509 Increased risk for extended acetabular reconstruction in failed hip resurfacing as compared to failed total hip arthroplasty. Introduction: Hip resurfacing (HR) is intended to preserve the femoral bone stock during primary arthroplasty. However, little has been reported regarding the intraoperative need for bone reconstruction of extended acetabular defects during hip resurfacing revision. Thus, the aim of the present study was to identify whether there is an increased need for acetabular bone reconstruction in HR revision surgery. Materials And Methods: We analyzed the data of 38 patients who underwent 39 conversions from an HR to a total hip arthroplasty (THA). Acetabular bone defects and the respective revision technique were compared against a contemporary cohort of patients undergoing revision surgery of a conventional THA. Results: In 29 HR patients, revision required either autogenous or allogeneic impaction bone grafting to adequately manage acetabular host bone degradation. In 10 cases additional implantation of a reinforcement device was necessary. Compared to the THA cohort, revision of failed HR was associated with a significantly increased risk of higher grade bone defects (Paprosky classification) and extended acetabular reconstruction (p < 0.05). Conclusions: This study provides evidence that revision of failed HR devices is associated with an increased risk for extensive acetabular defects.
Furthermore, the preoperative radiographic assessment of HR devices often underestimates the intraoperative acetabular defect. Surgeons should be aware of this fact so as not to underestimate the technical demands of HR revision procedures. abstract_id: PUBMED:32071874 Hip resurfacing arthroplasty for osteonecrosis of the femoral head: Implant-specific outcomes and risk factors for failure. Background: Hip resurfacing arthroplasty (HRA) may be a suitable option for treating osteonecrosis of the femoral head (ONFH). However, concerns regarding the extent of osteonecrosis, the amount of defect under the prosthesis, and implant-related complications remain. This study aimed to report implant-specific outcomes and risk factors for failure of HRA in ONFH. Methods: A total of 202 HRAs (166 patients) performed by a single surgeon were investigated. The stage, size, and location of ONFH were evaluated using preoperative radiographs and magnetic resonance images. Clinical and radiographic results and serum metal concentrations of articular surface replacement (ASR) and non-ASR devices were compared. Logistic regression analysis was performed to identify contributors to failure. The mean follow-up duration was 10.6 years. Results: Twenty-six hips (12.9%) were operated on with Birmingham Hip Resurfacing (BHR), 99 (49.0%) with ASR, and 77 (38.1%) with Conserve Plus. The mean Harris Hip Score improved from 52.1 to 93.2 at the final follow-up (P < 0.001). Revision-free survivorships of non-ASR and ASR implants were 99.0% and 82.4%, respectively (P < 0.001). In multivariate analysis, the use of the ASR prosthesis, a greater combined necrotic angle, and a smaller head size were associated with revision surgery. A large combined necrotic angle was the only independent risk factor for mechanical failure on the femoral side (P = 0.029). Conclusion: HRA for ONFH using BHR and Conserve Plus implants demonstrated favourable clinical outcomes with high revision-free survival rates at 10 years. However, care should be taken with large necrotic lesions, which can lead to femoral neck fracture or aseptic femoral loosening. The Translational Potential Of This Article: This study suggests that HRA performed for appropriately selected patients with ONFH can show excellent long-term clinical results. Therefore, HRA should remain one of the treatment options for ONFH, and further development of HRA implants should be continued. abstract_id: PUBMED:29530907 Comparison of hip function and quality of life of total hip arthroplasty and resurfacing arthroplasty in the treatment of young patients with arthritis of the hip joint at 5 years. Objective: To compare the medium-term clinical effectiveness of total hip arthroplasty and resurfacing arthroplasty. Design: Single centre, two-arm, parallel group, assessor blinded, randomised controlled trial with 1:1 treatment allocation. Setting: A large teaching hospital in England. Participants: 122 patients older than 18 years with severe arthritis of the hip joint, suitable for resurfacing arthroplasty of the hip. Patients were excluded if they were considered to be unable to adhere to trial procedures or complete questionnaires. Interventions: Total hip arthroplasty (replacement of entire femoral head and neck); hip resurfacing arthroplasty (replacement of the articular surface of the femoral head only, with the femoral neck remaining intact). Both procedures replaced the articular surface of the acetabulum.
Outcomes: The outcome measures were hip function assessed using the Oxford Hip Score (OHS) and health-related quality of life assessed using the EuroQol (EQ-5D). Patients were followed up annually for a minimum of 5 years. Outcome data were modelled using the generalised estimating equation methodology to explore temporal variations during follow-up. Results: 60 patients were randomly assigned to hip resurfacing arthroplasty and 62 to total hip arthroplasty. 95 (78%) of the 122 original study participants provided data at 5 years. There was a small decrease in both hip function and quality of life in both groups of patients each year during the 5-year follow-up period. However, there was no evidence of a significant difference between treatment groups in the OHS (P=0.333) or the EQ-5D (P=0.501). Conclusions: We previously reported no difference in outcome in the first year after surgery. The current medium-term results also show no evidence of a difference in hip function or health-related quality of life in the 5 years following a total hip arthroplasty versus resurfacing arthroplasty. Trial Registration Number: ISRCTN33354155. UKCRN 4093. Answer: According to the multicenter follow-up study of the ASR(TM) hip resurfacing prosthesis conducted after DePuy's voluntary recall of the device, both male and female patients should be followed with equal vigilance as both are at similar risk of revision surgery and adverse local tissue reaction (PUBMED:27121872). The study found that the only variables associated with revision surgery were the Harris hip score (HHS) and cobalt level, rather than the patient's sex. Additionally, cobalt and chromium concentrations were greater in female patients than in male patients, but these differences did not translate into a higher risk of revision surgery based on sex alone. The study concluded that metal ion levels and HHS should be obtained at followup to monitor for risk of revision and as a screening tool for MRI, and further research is necessary to evaluate if these relationships persist in patients with other metal-on-metal prostheses (PUBMED:27121872).
Instruction: Do the psychosocial risks associated with television viewing increase mortality? Abstracts: abstract_id: PUBMED:23683712 Do the psychosocial risks associated with television viewing increase mortality? Evidence from the 2008 General Social Survey-National Death Index dataset. Background: Television viewing is associated with an increased risk of mortality, which could be caused by a sedentary lifestyle, the content of television programming (e.g., cigarette product placement or stress-inducing content), or both. Methods: We examined the relationship between self-reported hours of television viewing and mortality risk over 30 years in a representative sample of the American adult population using the 2008 General Social Survey-National Death Index dataset. We also explored the intervening variable effect of various emotional states (e.g., happiness) and beliefs (e.g., trust in government) of the relationship between television viewing and mortality. Results: We find that, for each additional hour of viewing, mortality risks increased 4%. Given the mean duration of television viewing in our sample, this amounted to about 1.2 years of life expectancy in the United States. This association was tempered by a number of potential psychosocial mediators, including self-reported measures of happiness, social capital, or confidence in institutions. Although none of these were clinically significant, the combined mediation power was statistically significant (P < .001). Conclusions: Television viewing among healthy adults is correlated with premature mortality in a nationally representative sample of U.S. adults, and this association may be partially mediated by programming content related to beliefs or affective states. However, this mediation effect is the result of many small changes in psychosocial states rather than large effects from a few factors. abstract_id: PUBMED:26568572 Association Between Television Viewing Time and All-Cause Mortality: A Meta-Analysis of Cohort Studies. Findings on the association between television (TV) viewing and all-cause mortality in epidemiologic studies have been inconsistent. Therefore, we conducted a meta-analysis of data from prospective cohort studies to quantify this association. Relevant articles were identified by searching MEDLINE (PubMed; National Library of Medicine, Bethesda, Maryland) and EMBASE (Elsevier B.V., Amsterdam, the Netherlands) from inception to March 1, 2015, and reviewing the reference lists of retrieved articles. Study-specific results were pooled using a random-effects model. Of 2,578 citations identified by the search strategy, 10 cohort studies (61,494 deaths among 647,475 individuals) met the inclusion criteria. The summary relative risk of all-cause mortality for the highest category of TV viewing time versus the lowest was 1.33 (95% confidence interval: 1.20, 1.47), with heterogeneity among studies (I(2) = 66.7%, P(heterogeneity) = 0.001). In dose-response meta-analysis, TV viewing time was statistically significantly associated with all-cause mortality risk in a J-shaped fashion (P(nonlinearity) = 0.001). These results indicate that prolonged TV viewing time might increase the risk of all-cause mortality. Given the high prevalence of excessive TV viewing, public health recommendations or interventions aimed at decreasing the amount of TV viewing time in modern societies are warranted. abstract_id: PUBMED:10351850 Psychosocial and demographic correlates of television viewing. 
Purpose: The present study was grounded in the notion that previously identified physical activity determinants were inversely associated with television viewing. Identifying demographic and psychosocial correlates and other potential determinants of television viewing may allow for the development and conceptualization of models and interventions to reduce sedentary behavior. Design: Cross-sectional data from self-report measures were analyzed using multiple hierarchical regression. Setting: Data were collected from 1995 to 1996 on seniors attending a large public university in southern California. Sample: Subjects were 321 female and 255 male university seniors attending a public university in southern California. The sample's ethnic distribution was 53.8% European-American, 16.3% Latino, 16.6% Asian/Pacific Islander, 16.1% African-American, and 4.3% other. Measures: Typical television viewing practices on weekdays and weekends were assessed using self-report items. Physical activity determinants were assessed using self-report measures derived from social cognitive theory and the transtheoretical model of behavior change. Results: The variables explaining a significant amount of variance after controlling for ethnicity and employment status in females were barriers to physical activity (p < .05), body image dissatisfaction (p < .01), and processes of change (p < .02). The entire model for females captured 16% of the variance, with the psychosocial variables accounting for 8% of the variance. The significant variables in the male model were quality of life (p < .01) and grade point average (p < .03). The full model accounted for 15% of the variance, with nondemographic variables accounting for 5% of the variance in television viewing. Conclusions: Relationships between television viewing and potential determinants were found. However, additional variables that may influence the amount of television viewing need to be conceptualized and investigated. abstract_id: PUBMED:31193543 Childhood and adolescent television viewing and internalising disorders in adulthood. Time spent watching television during childhood and adolescence has been linked to socio-emotional and physical health problems in adulthood. It is unclear whether excessive television viewing is a risk factor for internalising mental health disorders such as anxiety and depression. Longitudinal associations between television viewing in childhood and adult diagnoses of anxiety and depression were investigated in a population-based birth cohort from Dunedin, New Zealand. Mean weekday television viewing time was reported by parents and adolescents between ages 5 and 15 years (1977-1987). Diagnoses of any anxiety disorder and major depression were made using standard criteria from symptoms reported for the previous year at ages 18, 21, 26, 32, and 38 years (between 1990 and 2012). Analyses adjusted for sex, parent and teacher reports of worry/fearfulness at age 5, and socioeconomic status during childhood. Diagnoses were counted if present at any of these assessments. Approximately half of all participants met criteria for anxiety disorder or depression during at least one adult assessment. Participants who had watched more television during childhood and adolescence were more likely to have a diagnosis of anxiety in sex-adjusted analyses (OR [95% CI] 1.22 [1.05, 1.41], p = 0.01), although this association weakened after adjustment for early childhood worry/fearfulness and socioeconomic status.
There was no association between television viewing and depression in sex- or fully-adjusted analyses. Excessive television viewing during childhood and adolescence may be a risk factor for developing an anxiety disorder in adulthood, but does not appear to influence the long-term risk for major depression. abstract_id: PUBMED:15855072 Life without TV? cultivation theory and psychosocial health characteristics of television-free individuals and their television-viewing counterparts. Much attention has been paid to the amount of time Americans spend watching television. Cultivation theory has been important in exploring behavioral effects of television viewing for many years. However, psychosocial health has received much less scrutiny in relation to television viewing time. This investigation examined the hypotheses that television-free individuals and viewers adhering to the American Academy of Pediatrics (AAP) recommendations (up to 2 hr of viewing per day) would display a more positive psychosocial health profile when compared with more frequent television viewers. Results confirmed the hypothesis for women, but not for men. Our analysis showed that moderate television viewing, as defined by the AAP, shows a relationship with psychosocial health similar to that of being television-free. Results are discussed in a cultivation theory framework. abstract_id: PUBMED:34379115 Television Viewing, Physical Activity, and Loneliness in Late Life. Background And Objectives: Television viewing is the most common leisure activity in late life and may ease loneliness but encourage sedentary behavior. These associations may be particularly evident among older adults who live alone and who may lack other forms of companionship throughout the day. Research Design And Methods: Adults aged 65+ (N = 257) participated, of whom 34% lived alone. Participants completed an initial interview followed by a 5- to 6-day data collection involving multimethods: (a) Electronically Activated Recorders (30 s every 7 min) provided audio recordings of television viewing, (b) Actical accelerometers objectively measured physical activity, and (c) ecological momentary assessments every 3 hr assessed social interactions. Results: On average, older adults spent approximately 37% of their waking time (6.4 hr a day) watching television. Multilevel models revealed that television viewing occurred when participants were alone or with a spouse and was associated with a greater proportion of time sedentary, lower activity, and higher ratings of loneliness compared to when not watching television. Older adults who lived alone reported greater loneliness during 3-hr intervals when viewing television, but older adults who lived with others spent a greater proportion of time sedentary when viewing television. Discussion And Implications: Findings are discussed with regard to different rationales and ways of watching television: as compensation for social isolation or as a passive leisure activity with a social partner. We discuss ideas for research on additional aspects of television viewing and screen time in late life. abstract_id: PUBMED:23335598 Predictors of parents' intention to limit children's television viewing. Background: Scientific evidence demonstrates a link between viewing time and several poor health outcomes. We use a reasoned action approach to identify the determinants and beliefs associated with parents' intention to limit their children's television viewing.
Methods: We surveyed a random sample of 516 caregivers to children ages 3-16 in a large Northeastern city. Multiple regression analyses were used to test a reasoned action model and examine the differences across demographic groups. Results: The intention to limit viewing (-3 to 3) was low among parents of adolescents (M: 0.05) compared with parents of 3-6 year olds (M: 1.49) and 7-12 year olds (M: 1.16). Attitudes were the strongest determinant of intention (β = 0.43) across all demographic groups, and normative pressure was also significantly related to intention (β = 0.20), except among parents of adolescents. Relevant beliefs associated with intention to limit viewing included: limiting television would be associated with the child exercising more, doing better in school, talking to family more and having less exposure to inappropriate content. Conclusions: Attitudes and normative pressure play an important role in determining parents' intention to limit their child's television viewing. The beliefs that were associated with parents' intention to limit should be emphasized by health professionals and in health communication campaigns. abstract_id: PUBMED:26217527 Television viewing time and risk of incident obesity and central obesity: the English longitudinal study of ageing. Background: Research suggests television viewing time may be associated with incident obesity and central obesity in young adults. No study has investigated these associations in older English adults. The aim of this study was to investigate longitudinal associations between television viewing time and incident obesity and central obesity in a sample of older English adults. Methods: Analyses of data from the English Longitudinal Study of Ageing. At baseline (2008), participants reported their television viewing time. Research nurses recorded obesity and central obesity by body mass index and waist circumference, respectively, at four year follow-up. Associations between television viewing time and incident obesity (BMI > 30 kg/m²) and central obesity (waist >102 cm men; >88 cm women) at four year follow-up were examined using adjusted logistic regression. Participants gave full written informed consent to participate in the study and ethical approval was obtained from the London Multicentre Research Ethics Committee. Results: A total of 3777 initially non-obese participants (aged 64.8 ± 8.6 yrs, 46.4% male) were included in the analyses using BMI as an outcome and 2947 for the analyses using waist circumference. No significant associations were found between television viewing time and incident obesity. A significant association was found between watching ≥6 hrs/d of television (compared to <2 hrs/d) and central obesity (Odds Ratio 1.48; 95% confidence interval 1.07 to 2.03) after adjustment for covariables including physical activity. Conclusions: In this sample of older community dwelling English adults, greater television viewing time was associated with incident central obesity, but not total obesity when measured by BMI. Interventions to reduce the incidence of central obesity in this age group that focus on reducing TV time, as well as targeting other health behaviours (e.g., increasing physical activity levels, improving dietary intake), might prove useful. abstract_id: PUBMED:24965030 Television viewing, computer use, time driving and all-cause mortality: the SUN cohort. Background: Sedentary behaviors have been directly associated with all-cause mortality.
However, little is known about different types of sedentary behaviors in relation to overall mortality. Our objective was to assess the association between different sedentary behaviors and all-cause mortality. Methods And Results: In this prospective, dynamic cohort study (the SUN Project), 13,284 Spanish university graduates with a mean age of 37 years were followed up for a median of 8.2 years. Television, computer, and driving time were assessed at baseline. Poisson regression models were fitted to examine the association between each sedentary behavior and total mortality. All-cause mortality incidence rate ratios (IRRs) per 2 hours per day were 1.40 (95% confidence interval (CI): 1.06 to 1.84) for television viewing, 0.96 (95% CI: 0.79 to 1.18) for computer use, and 1.14 (95% CI: 0.90 to 1.44) for driving, after adjustment for age, sex, smoking status, total energy intake, Mediterranean diet adherence, body mass index, and physical activity. The risk of mortality was twofold higher for participants reporting ≥3 h/day of television viewing than for those reporting <1 h/d (IRR: 2.04 [95% CI 1.16 to 3.57]). Conclusions: Television viewing was directly associated with all-cause mortality. However, computer use and time spent driving were not significantly associated with higher mortality. Further cohort studies and trials designed to assess whether reductions in television viewing are able to reduce mortality are warranted. The lack of association between computer use or time spent driving and mortality needs further confirmation (the per-2-hour rate-ratio scaling is sketched after this record's answer). abstract_id: PUBMED:28645071 Television viewing and risk of mortality: Exploring the biological plausibility. Background And Aims: Television (TV) viewing is a major component of leisure sedentary time, and has been consistently associated with cardiovascular disease. We examined the extent to which metabolic biomarkers explain the association between TV viewing and mortality. Methods: Participants (N = 8,451, aged 64.8 ± 9.9 yrs) were drawn from The English Longitudinal Study of Ageing (ELSA), a national prospective cohort study of community-dwelling men and women living in England. The individual participant data were linked with death records from the National Health Service registries from 2008 to 2012. Cox proportional hazards models were used to estimate the risk of death according to time spent watching TV, with biomarkers added in a stepwise fashion to estimate potential mediation. Results: Over an average follow up of 4 years (33,832 person years), there were 370 deaths. In models adjusted for comorbidities, psychosocial factors, and health behaviours including physical activity, there was an association between TV viewing and mortality (≥6 h per day vs. <2 h per day [Ref]; hazard ratio = 1.98, 95% CI 1.25 to 3.15). Adjustment for inflammatory markers (C-reactive protein and fibrinogen) accounted for ∼15.7% of the association between TV viewing and mortality, but metabolic risk factors (HDL-cholesterol, triglycerides, glycated haemoglobin) did not contribute. Conclusions: The association between TV viewing and mortality was partly mediated by inflammatory markers, although the relationship remains largely unexplained. Answer: Yes, the psychosocial risks associated with television viewing have been found to increase mortality.
Evidence from the 2008 General Social Survey-National Death Index dataset indicates that for each additional hour of television viewing, mortality risks increased by 4%, which could amount to about 1.2 years of reduced life expectancy in the United States. This association may be partially mediated by programming content related to beliefs or affective states, although the mediation effect is the result of many small changes in psychosocial states rather than large effects from a few factors (PUBMED:23683712). A meta-analysis of cohort studies also found that prolonged television viewing time might increase the risk of all-cause mortality, with a summary relative risk of 1.33 for the highest category of TV viewing time versus the lowest. The association followed a J-shaped fashion, indicating that excessive TV viewing is particularly detrimental (PUBMED:26568572). Furthermore, television viewing during childhood and adolescence has been linked to a higher likelihood of having a diagnosis of anxiety in adulthood, suggesting that excessive television viewing may be a risk factor for developing an anxiety disorder later in life (PUBMED:31193543). In older adults, television viewing was associated with higher ratings of loneliness compared to when not watching television, and those who lived alone reported greater loneliness during intervals when viewing television (PUBMED:34379115). Overall, these studies suggest that there is a significant relationship between television viewing and increased mortality risk, which may be influenced by various psychosocial factors and emotional states associated with television content and viewing habits.
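Editor's aside on the random-effects pooling named in PUBMED:26568572: the sketch below shows the DerSimonian-Laird arithmetic that such meta-analyses typically use, in Python. The three (RR, CI) inputs are invented placeholders, not the ten cohort studies actually pooled, so the printed numbers will not reproduce the reported 1.33 (1.20, 1.47).

    import math

    # Invented study-level relative risks with 95% CIs: (rr, lower, upper).
    studies = [(1.15, 1.05, 1.26), (1.45, 1.20, 1.75), (1.33, 1.10, 1.61)]

    logs = [math.log(rr) for rr, lo, hi in studies]
    # A 95% CI spans 2 * 1.96 standard errors on the log scale.
    ses = [(math.log(hi) - math.log(lo)) / (2 * 1.96) for rr, lo, hi in studies]
    w = [1 / se ** 2 for se in ses]                      # fixed-effect weights

    y_fe = sum(wi * yi for wi, yi in zip(w, logs)) / sum(w)
    q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, logs))
    k = len(studies) - 1                                 # degrees of freedom
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - k) / c)                         # between-study variance
    i2 = (max(0.0, (q - k) / q) * 100) if q > 0 else 0.0 # heterogeneity (I-squared)

    w_re = [1 / (se ** 2 + tau2) for se in ses]          # random-effects weights
    y_re = sum(wi * yi for wi, yi in zip(w_re, logs)) / sum(w_re)
    se_re = math.sqrt(1 / sum(w_re))
    print(f"pooled RR = {math.exp(y_re):.2f} "
          f"(95% CI {math.exp(y_re - 1.96 * se_re):.2f} to "
          f"{math.exp(y_re + 1.96 * se_re):.2f}), I2 = {i2:.1f}%")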
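A second, equally hedged aside on the Poisson-regression result in PUBMED:24965030: an incidence rate ratio reported per 2 hours per day scales multiplicatively with viewing time, as the toy arithmetic below illustrates.

    # IRR reported per 2 h/day of television viewing (PUBMED:24965030).
    irr_per_2h = 1.40
    for hours in (1, 2, 3, 4, 6):
        print(f"{hours} h/day vs 0 h/day: rate ratio = {irr_per_2h ** (hours / 2):.2f}")
    # 3 h/day implies 1.40 ** 1.5, roughly 1.66. The abstract's separately
    # estimated categorical contrast (>=3 h vs <1 h, IRR 2.04) need not match
    # this extrapolation, since it comes from a different parameterization.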
Instruction: Are extra-labral MR findings useful in the diagnosis of a labral tear? Abstracts: abstract_id: PUBMED:22697321 Are extra-labral MR findings useful in the diagnosis of a labral tear? Objective: To determine diagnostic performance statistics of extra-labral magnetic resonance (MR) findings for detection of labral tears in a population of patients with clinical suspicion of this diagnosis. Materials And Methods: Seventy-nine patients clinically suspected of having a labral tear (who underwent arthroscopy) had their MR studies retrospectively reviewed to determine the presence of lateral acetabular oedema-like marrow signal, ganglia, dysplastic femoral bumps, synovial herniation pits and geodes. These findings were then correlated with the arthroscopic presence (or absence) of a labral tear. Results: All findings (lateral acetabular oedema-like marrow signal, ganglia, dysplastic femoral bumps, synovial herniation pits and geodes) had a specificity and positive predictive value (PPV) of 100%. Lateral acetabular oedema-like marrow signal had a sensitivity of 35% and a negative predictive value (NPV) of 20%. This was the only statistically significant finding (P < 0.05). The sensitivity and NPV were 12% and 16% for ganglia, 12% and 16% for dysplastic femoral bumps, 4% and 14% for synovial herniation pits, and 6% and 15% for geodes, respectively (P > 0.05). Conclusion: Lateral acetabular oedema-like marrow signal is a useful sign (100% PPV) in the MR diagnosis of a labral tear, if one is clinically suspected. The other findings (ganglia, dysplastic femoral bumps, synovial herniation pits and geodes) were not statistically significant. Further studies are required to evaluate these (the 2x2 arithmetic behind these statistics is sketched after this record's answer). abstract_id: PUBMED:30237734 Clinical characteristics of patients with posterosuperior labral tear: a comparison with patients with other shoulder disorders. Background: Among the various causes of shoulder pain, the glenoid labral tear is likely to be overlooked due to its ambiguous symptoms, especially if clinicians do not have a prior suspicion, making it difficult to diagnose, unlike adhesive capsulitis or a rotator cuff tear. Therefore, the aim of this study was to evaluate the clinical features of posterosuperior (PS) labral tear. Methods: Of the patients who visited the clinic, the medical records of patients with shoulder pain who underwent shoulder magnetic resonance imaging or ultrasound after intra-articular injection (arthrosonography) were investigated retrospectively. Based on these criteria, a total of 120 patients with adhesive capsulitis, rotator cuff problem, or PS labral tear were included for analysis in this study. Results: In the physical examination, all patients with a PS labral tear were positive on the O'Brien test (13/13, 100%) and the external rotation (ER) test with the upper arm in the abduction position (abduction and ER [ABER] test) (13/13, 100%), but they were negative on other tests, such as the Hawkins-Kennedy (1/13, 7.69%) and Neer (0/13, 0.00%) tests. Except for one patient, all patients with a PS labral tear were negative on the ER test with the upper arm in the neutral position (ER test) (1/13, 7.69%). In the clinical history, all patients with a PS labral tear had a characteristic episode of shoulder pain occurring in the excessive ABER posture.
Conclusion: In patients with characteristic physical findings (no definite loss of motion of the shoulder joint, negative on an ER test, but positive on an ABER test and O'Brien test), and with a clinical history typical of PS labral tear (a history of pain occurring with the excessive ABER posture), clinicians need to suspect a PS labral tear and to identify it using MRI or arthrosonography. abstract_id: PUBMED:34020005 Variability of glenoid labral tear patterns: a study of 280 sequential surgical cases. Background: Glenoid labrum tears are a common cause of shoulder pain and instability, and tear patterns have historically been described using categorical labels such as anterior, posterior, and superior labral tears (SLAP [superior labrum anterior and posterior]) with multiple subtypes. Although often quoted as representing no more than 10% of instabilities, posterior shoulder instability may be more common than previously recognized. The purpose of this study was to review observed labral tear patterns and compare incidence and morphologies to historical descriptions. Methods: All patients undergoing arthroscopic or open labral repair (Current Procedural Terminology codes 29806, 29807, 23455, 23460, 23462, 23465) by 2 fellowship-trained shoulder surgeons from July 2012 to May 2019 were retrospectively reviewed. Labral tears were categorized into 3 groups: exclusively anterior to the midline of the glenoid, exclusively posterior, and those crossing the midline of the glenoid. Chief complaint, mechanism of injury, hand dominance, and preoperative MRI interpretation by the surgeon and an independent radiologist were analyzed for each tear type. Results: During the 7-year period, 280 patients underwent arthroscopic or open labral repair. Sixty percent of tears were traumatic, with dislocation being the most common traumatic mechanism at 31.4%. Ten distinct tear patterns were identified: 3 types of 90° tears (anteroinferior, posteroinferior, and posterosuperior), 4 types of 180° tears (anterior, posterior, inferior, and SLAP), 2 types of 270° tears (anteroinferior and anterosuperior), and 360° labral tears. A total of 134 tears (47.9%) were classified as posterior, and 72 tears (25.7%) were anterior. Seventy-four tears (26.4%) were combined anterior-posterior tears. Labral tears involving some portion of the posterior labrum constituted 74% of tears. A significant association between tear location and primary complaint (P < .001) was noted. Patients with anterior tears complained of only instability in 62.5% of cases, and only pain in 22%. Patients with posterior labral tears complained primarily of pain in 68% of cases, and instability in 21%. An accurate preoperative diagnosis was given by both radiologists and surgeons for 30% (n = 63) of the tears. Conclusion: There is a wide variety of labral tear patterns identified at the time of surgery, and the incidence of posterior labral tears is higher than previously described. Isolated Bankart lesions are relatively rare and are often associated with more extensive labral lesions. Patients with posterior labral pathology more often complain of pain rather than instability, whereas patients with anterior labral tears more often complain of instability. abstract_id: PUBMED:26496553 Usefulness of Unenhanced MRI and MR Arthrography of the Shoulder in Detection of Unstable Labral Tears. Objective: Shoulder surgeons commonly intervene to repair unstable labral tears (tears that displace with patient movement).
Surgeons can detect unstable tears at surgery. It is difficult to be certain whether a tear is unstable from a static MR image. This study reports the comparative benefits of using unenhanced MRI and MR arthrography together to detect unstable labral tears. Materials And Methods: One hundred fifty consecutive unenhanced shoulder MRI and MR arthrography examinations performed on the same patients were reviewed retrospectively by consensus reading of two musculoskeletal radiologists. Both unenhanced MRI and MR arthrography were performed on each patient on the same day. Labral tears were assessed. It was also determined if there was any difference in position of the labral tear between unenhanced MR images and MR arthrograms. A change in position of 4 mm or more between unenhanced MR images and MR arthrograms was considered indicative of an unstable tear. All patients proceeded to arthroscopy. Results: Of these 150 patients, 94 had superior labral anterior-to-posterior (SLAP) tears, 53 had posterior labral tears, and 42 had anterior labral tears on MRI. All lesions described on MRI were described on arthroscopy. Twenty-three SLAP tears, 16 posterior labral tears, and 17 anterior labral tears showed a change in the position of the labral tear of 4 mm or more when comparing unenhanced MR images versus MR arthrograms. All of these labral tears were considered unstable by the surgeon, and all of these patients had surgical tacking performed. The other labral tears showed motion on unenhanced MR images and MR arthrograms of less than 4 mm. All of these tears except for three were considered stable on arthroscopic examination. There were five SLAP tears, three anterior labral tears, and four posterior labral tears seen on arthroscopy that were not seen on unenhanced MRI or MR arthrography. The tears not seen on unenhanced MRI or MR arthrography were considered stable at arthroscopy. Conclusion: In this study, unenhanced MRI and MR arthrography of the shoulder were useful in diagnosing unstable labral tears in 23 patients with SLAP tears, 16 patients with posterior labral tears, and 17 patients with anterior labral tears. This information was useful in surgical planning. Of 133 tears that moved less than 4 mm on unenhanced MR images and MR arthrograms, 130 were considered stable on arthroscopy. abstract_id: PUBMED:34544472 The correlation between the labrum size and the labral tear in asymptomatic volunteers and symptomatic patients. Background: Some studies have shown that labrum size is associated with symptoms in patients with hip labral tear. The correlation between the labrum size and the labral tear in asymptomatic volunteers and symptomatic patients is still uncertain. Methods: Volunteers with no history of pain, injury, or surgery were recruited from the community. Patients who were diagnosed with labral tear and underwent hip arthroscopic surgery in this period in our hospital were also included. The length and height of the acetabular hip labrum were measured at three separate anatomic sites through magnetic resonance imaging (MRI) along the acetabular rim: lateral, anterior, and anteroinferior. Results: A total of 70 volunteers (125 hips) and 70 patients (70 hips) were included in this study. Sixty-six (52.8%) hips had labral tears in all 125 hips of volunteers. The lateral labral length of volunteers with labral tears was significantly larger than that of those without labral tears (P < .05).
In the 14 volunteers with unilateral labral tears, the lengths of the lateral, anterior, and anteroinferior labrum on the side with the tear were significantly greater than on the unaffected side. The anterior labral height of volunteers was significantly larger than that of patients (P < .05). Conclusions: In conclusion, asymptomatic volunteers with a greater length of the lateral, anterior, and anteroinferior labrum are more prone to present with labral tears. Symptomatic patients with labral tears exhibited a thinner anterior labrum. Further studies are warranted to explore the mechanisms of labral tears in asymptomatic people and validate the use of labral size as a guide to differential diagnosis and treatment. abstract_id: PUBMED:33681399 Allocation of Anchors During Labral Repair: A Multicenter Cohort Analysis of Labral Treatment in Hip Arthroscopy. Background: While previous studies have established several techniques for suture anchor repair of the acetabular labrum to bone during arthroscopic surgery, the current literature lacks evidence defining the appropriate number of suture anchors required to effectively restore the function of the labral tissue. Purpose/hypothesis: To define the location and size of labral tears identified during hip arthroscopy for acetabular labral treatment in a large multicenter cohort. The secondary purpose was to differentiate the number of anchors used during arthroscopic labral repair. The hypothesis was that the location and size of the labral tear as well as the number of anchors identified would provide a range of fixation density per acetabular region and fixation method to be used as a guide in performing arthroscopic repair. Study Design: Cross-sectional study; Level of evidence, 3. Methods: We used a multicenter registry of prospectively collected hip arthroscopy cases to find patients who underwent arthroscopic labral repair by 1 of 7 orthopaedic surgeons between January 2015 and January 2017. The tear location and number of anchors used during repair were described using the clockface method, where 3 o'clock denoted the anterior extent of the tear and 9 o'clock the posterior extent, regardless of sidedness (left or right). Tear size was denoted as the number of "hours" spanned per clockface arc. Chi-square and univariate analyses of variance were performed to evaluate the data for both the entire group and among surgical centers. Results: A total of 1978 hips underwent arthroscopic treatment of the acetabular labrum; the most common tear size had a 3-hour span (n = 820; 41.5%). Of these hips, 1645 received labral repair, with the most common repair location at the 12- to 3-o'clock position (n = 537; 32.6%). The surgeons varied in the number of anchors per repair according to labral size (P < .001 for all), using 1 to 1.6 anchors for 1-hour tears, 1.7 to 2.4 anchors for 2-hour tears, 2.1 to 3.2 anchors for 3-hour tears, and 2.2 to 4.1 for 4-hour tears. Conclusion: Variation existed in the number of anchor implants per tear size. When labral repair involved a mean clockface arc >2 hours, at least 2 anchor points were fixated. abstract_id: PUBMED:31653186 MR arthrographic evaluation of posterior paraglenoid labral cysts: a retrospective study. Background: Paraglenoid labral cysts (PLCs) around the shoulder are uncommon. Magnetic resonance imaging (MRI) is the primary imaging modality for the description of PLCs.
Purpose: The purpose of this study was to evaluate PLCs in the posterior part of the glenoid bone via MR arthrography as well as to describe associated labral abnormalities. Material And Methods: This retrospective study included 14 patients, diagnosed with 15 posterior PLCs at MR arthrography between 2007 and 2012. Conventional MRI and MR arthrography were used for all patients. Results: A total of 15 PLCs were detected in 14 patients, with eight located on the right shoulder and six on the left shoulder. One case had two PLCs. While two cysts were multiloculated, the remaining 13 were seen as unilocular simple cysts. Moreover, 14 of 15 posterior PLCs (60%) were associated with labral tears at MR arthrography. The cysts in proximity to the glenoid labrum were posterosuperior in 33.3% (n = 5), mid-posterior in 46.7% (n = 7), and postero-inferior in 20% (n = 3). The majority of patients with posterosuperior and mid-posterior cysts had associated superior labral tear from anterior to posterior (SLAP) lesions. Four of six patients with mid-posterior cysts had minimal denervation atrophy in the infraspinatus muscle. Conclusion: Posterior PLCs are mostly associated with posterior labral defects. The majority of cysts localized in the posterosuperior and mid-posterior regions were also associated with SLAP lesions. Denervation atrophy in the infraspinatus muscle may frequently accompany mid-posterior PLCs. abstract_id: PUBMED:37493964 Acetabular Labral Reconstruction: Review of Techniques and Outcomes. Purpose Of Review: To analyze advances in labral reconstruction, and to discuss the literature relating to efficacy, outcomes, and technical considerations of labral reconstruction. Recent Findings: The available evidence suggests that labral reconstruction is a safe and effective procedure that can improve pain and function in patients with severe and/or complex labral tears. The superiority of labral reconstruction over labral repair is uncertain, and it remains unclear what graft type and technique should be used in labral reconstructions. Labral reconstruction assists in restoring the function of the labrum of the hip. It consistently restores stability of the hip joint and demonstrates good clinical outcomes at follow-up. The optimal graft type, technique, and extent are still debated, and more research is needed. Ultimately, despite this uncertainty, labral reconstruction is proving to be a valuable tool for hip preservation specialists. abstract_id: PUBMED:36006463 Accuracy of MR arthrography in the detection of posterior glenoid labral injuries of the shoulder. Objective: The purpose of this study is to evaluate the accuracy of MR arthrography in detecting isolated posterior glenoid labral injuries using arthroscopy as the reference standard. Methods: MR arthrograms of 97 patients with isolated posterior glenoid labral tears by arthroscopy and those of 96 age and gender-matched controls with intact posterior labra were reviewed by two blinded radiologists for the presence and location of posterior labral abnormalities. The sensitivity and specificity of detection of posterior labral tears were calculated as well as the prevalence of associated pathologies. Medical records were reviewed for demographics, history and direction of shoulder instability, and prior surgery. Results: Posterior labral pathology was detected by MR arthrography with sensitivities of 76% and 84% for readers 1 and 2, and a specificity of 88% for both readers. The kappa value for interreader agreement was 0.91 (a short kappa computation is sketched after this record's answer).
Twenty-two of twenty-three (96%) tears isolated to the posteroinferior quadrant on arthroscopy were correctly identified on MRI. Commonly associated pathologies included paralabral cyst (38%), humeral fracture (7%), and glenoid fracture (2%). Fifteen of ninety-seven (16%) patients with posterior tears on both arthroscopy and MRI had glenoid rim deficiency on imaging versus no patients with intact posterior labra (p < 0.001). Forty of ninety-seven (41%) patients with posterior tears on arthroscopy had a history of posterior instability versus none without posterior tears. There was no significant difference in tear length on MRI between those with a history of instability and those without (p = 0.56). Conclusion: MR arthrography is accurate in detecting posterior glenoid labroligamentous injuries. abstract_id: PUBMED:36876844 Reliability of Conventional Hip MRI in Detecting Labral Tear and Labrocartilagenous Lesions in cases of Femoroacetabular Impingement, A Comparative Study with Hip Arthroscopy. Background: Imaging studies play a crucial role in diagnosing femoroacetabular impingement (FAI), including plain radiography and Magnetic Resonance Imaging (MRI). FAI is a combined pathology of bony abnormality with labral and labrocartilagenous erosions. Surgical treatment for such cases has become more established, and preoperative imaging provides the roadmap, including assessment of the labrum and articular cartilage. Methods: During a period of 2 years, thirty-seven patients with a clinical diagnosis of FAI were retrospectively enrolled in this study, including 17 men and 20 women, aged 27-62 years. There were 22 right hips and 15 left hips. MRI was done for all patients to identify bony details and labral and chondral abnormalities and to exclude coexisting disorders. The imaging findings were compared with the arthroscopic data. Results: 15 patients had Pincer FAI, 11 patients had CAM, and 11 patients had combined Cam/Pincer FAI. A labral tear was detected in 100% of patients, and 97% had an anterosuperior labral tear. 82% of patients had partial-thickness cartilage lesions and 8% had full-thickness cartilage lesions. MRI had a sensitivity of 100% compared to hip arthroscopy in detecting labral tears and 60% in detecting cartilage erosions. Conclusion: Conventional hip MRI detects bony changes in FAI, the type of impingement, and associated labral tears and cartilage erosions in comparison to hip arthroscopy. Answer: Yes, extra-labral MR findings can be useful in the diagnosis of a labral tear. The study identified in abstract PUBMED:22697321 found that lateral acetabular oedema-like marrow signal had a specificity and positive predictive value (PPV) of 100% for the MR diagnosis of a labral tear when clinically suspected. However, the other findings such as ganglia, dysplastic femoral bumps, synovial herniation pits, and geodes were not statistically significant in this regard. The study concluded that lateral acetabular oedema-like marrow signal is a useful sign in the MR diagnosis of a labral tear, suggesting that extra-labral MR findings can indeed be helpful in certain cases.
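Editor's aside on the diagnostic performance statistics quoted above: the sketch below computes sensitivity, specificity, PPV and NPV from a 2x2 table against the arthroscopic reference standard, in Python. The cell counts are reconstructed to be consistent with the percentages reported in PUBMED:22697321 (79 patients; sensitivity 35%, NPV 20%, specificity and PPV 100% for the oedema-like marrow signal); the abstract itself reports only the derived statistics, so the exact counts are an assumption.

    def diagnostic_stats(tp, fp, fn, tn):
        # Standard 2x2 diagnostic statistics against a reference standard.
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
        }

    # Reconstructed counts for lateral acetabular oedema-like marrow signal:
    # 24 true positives, 0 false positives, 44 false negatives and 11 true
    # negatives (24 + 0 + 44 + 11 = 79 patients).
    print(diagnostic_stats(tp=24, fp=0, fn=44, tn=11))

With zero false positives, PPV is 100% no matter how insensitive the sign is, which is exactly the pattern the abstract reports: a highly specific but insensitive finding.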
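A similar hedged aside on the interreader agreement figure in PUBMED:36006463: Cohen's kappa compares observed agreement between two readers with the agreement expected by chance. The 2x2 agreement table below is invented (the abstract gives only the summary value), sized to the study's 193 shoulders and tuned to land near the reported 0.91.

    def cohens_kappa(both_pos, r1_only, r2_only, both_neg):
        n = both_pos + r1_only + r2_only + both_neg
        po = (both_pos + both_neg) / n          # observed agreement
        p1 = (both_pos + r1_only) / n           # reader 1 positive rate
        p2 = (both_pos + r2_only) / n           # reader 2 positive rate
        pe = p1 * p2 + (1 - p1) * (1 - p2)      # chance-expected agreement
        return (po - pe) / (1 - pe)

    # Invented counts: 78 cases called positive by both readers, 4 by
    # reader 1 only, 5 by reader 2 only, and 106 called negative by both.
    print(round(cohens_kappa(78, 4, 5, 106), 2))  # about 0.90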
Instruction: Serologic and histologic findings in patients with capsular contracture after breast augmentation with smooth silicone gel implants: is serum hyaluronan a potential predictor? Abstracts: abstract_id: PUBMED:16328636 Serologic and histologic findings in patients with capsular contracture after breast augmentation with smooth silicone gel implants: is serum hyaluronan a potential predictor? Background: In this study, breast implant capsular tissues and blood samples from 25 cases were studied to characterize the relationship between capsular findings and serum analysis. The serum fibrosis indexes hyaluronan and the aminoterminal propeptide of procollagen type III (PIIINP) have, in several other studies, correlated fairly well with inflammation grade and fibrosis in patients with progressive fibrotic disorders such as liver cirrhosis. Methods: The study enrolled 25 female patients (average age, 40 ± 12 years) with capsular contracture after bilateral cosmetic breast augmentation using smooth silicone gel implants (Mentor). The implants were placed in a submuscular position through an incision in the inframammary fold. The implant removals were prompted by development of capsular fibrosis (Baker grades 1-4). Samples of capsular tissue were obtained from all the patients for standard histologic and immunohistochemical analyses. Blood samples were drawn from all the patients immediately before surgery. Sera from 20 healthy female patients (average age, 34 ± 9 years) who had undergone plastic surgery for reduction mammaplasty were used as controls. Results: Histology. Capsular tissue was significantly thicker in patients with grades 3 and 4 contracture than in women with grade 2 contracture according to the classification by Baker. There was a moderate (n = 15) or severe (n = 10) chronic inflammatory reaction in the capsules around the implants. Fibroblasts and macrophages represented the major cell populations found in the fibrous capsules. In addition, activated CD4+ cells were detected. An inner layer with synovia-like metaplasia and multinucleated giant cells was found. Fibroblast-like cells formed the most common cell type in the capsules, along with macrophages, scattered polymorphonuclear leukocytes, lymphocytes, plasma cells, and mast cells. Serum analysis. There was a significantly higher level (p < 0.05) of hyaluronan serum concentration in patients with capsular contracture (26 ± 14 µg/l) than in control subjects (12 ± 6 µg/l). There was a positive correlation between the grade of capsular contracture (Baker 1-4) and the hyaluronan serum concentration (Baker 1-2: 15 ± 3 µg/l; Baker 3-4: 35 ± 12 µg/l) (r² = 0.73; p < 0.05). Conclusion: In this study, serum hyaluronan levels were significantly elevated in patients with constrictive fibrosis after breast augmentation, and there was a positive correlation with the stage of capsular contracture. Serum hyaluronan concentration may help in defining patients at risk for capsular fibrosis. If treatment with new drugs can be started as a preventive measure, it may be possible to reduce the proportion of patients who require surgical intervention. abstract_id: PUBMED:16816700 Determination of serum fibrosis indexes in patients with capsular contracture after augmentation with smooth silicone gel implants. Background: Several studies have confirmed a correlation between the concentration of serum hyaluronan and progressive fibrotic disorders such as liver cirrhosis.
The aim of this study was to explore the relationship between serum hyaluronan levels and capsular contracture after aesthetic breast augmentation. Methods: The study included 25 female patients (average age, 40 ± 12 years) with capsular contracture after cosmetic breast augmentation with smooth silicone gel implants (Mentor, Santa Barbara, Calif.). The implants were placed in a submuscular position through an inframammary fold incision. The implant removals were prompted by development of capsular fibrosis (Baker grades I through IV). Samples of capsular tissue were obtained from all patients for standard histologic and immunohistochemical analysis. Blood samples were drawn from all patients immediately before operation. Sera from 20 healthy female patients (average age, 34 ± 9 years) were used as control. Results: Capsular tissue was significantly thicker in patients with grade III/IV contracture than in women with grade II contracture, according to Baker's classification. There was a moderate (n = 15) or severe (n = 10) chronic inflammatory reaction in the capsules around the implants. Fibroblasts, fibroblast-like cells, and macrophages represented the major cell populations found within the fibrous capsules, along with scattered polymorphonuclear leukocytes, lymphocytes, plasma cells, and mast cells. In addition, activated CD4+ cells were detected. An inner layer with synovia-like metaplasia and multinucleated giant cells was found. There was a significantly higher level (p < 0.05) of hyaluronan serum concentration in patients with capsular contracture (26 ± 14 µg/liter) compared with control subjects (12 ± 6 µg/liter). There was a positive correlation between the grade of capsular contracture (Baker grades I through IV) and the hyaluronan serum concentration (Baker grade II, 15 ± 3 µg/liter; Baker grade III, 34 ± 13 µg/liter; Baker grade IV, 42 ± 11 µg/liter) (r = 0.73; p < 0.05) (a small rank-correlation sketch follows this record's answer). Conclusions: Serum hyaluronan levels were significantly elevated in patients with capsular contracture after breast augmentation, and there was a positive correlation with stage of capsular contracture. Further study is necessary to determine whether hyaluronan might be useful as a predictor for the development and progress of capsular fibrosis. abstract_id: PUBMED:29908334 Dual surface modification of PDMS-based silicone implants to suppress capsular contracture. In this study, we report a new physicochemical surface on poly(dimethylsiloxane) (PDMS)-based silicone implants in an effort to minimize capsular contracture. Two different surface modification strategies, namely, microtexturing as a physical cue and multilayer coating as a chemical cue, were combined to achieve synergistic effects. The deposition of uniformly sized microparticles onto uncured PDMS surfaces and the subsequent removal after curing generated microtextured surfaces with concave hemisphere micropatterns. The size of the individual micropattern was controlled by the microparticle size. Micropatterns of three different sizes (37.16, 70.22, and 97.64 μm) smaller than 100 μm were produced for potential application to smooth and round-shaped breast implants. The PDMS surface was further chemically modified by layer-by-layer (LbL) deposition of poly-l-lysine and hyaluronic acid. Short-term in vitro experiments demonstrated that all the PDMS samples were cytocompatible.
However, lower expression of TGF-β and α-SMA, the major profibrotic cytokine and myofibroblast marker, respectively, was observed only in multilayer-coated PDMS samples with the larger micropatterns (70.22 and 97.64 μm), thereby confirming the synergistic effects of physical and chemical cues. An in vivo study conducted for 8 weeks after implantation in rats also indicated that PDMS samples with larger micropatterns and multilayer coating most effectively inhibited capsular contracture based on analyses of tissue inflammation, numbers of macrophages, fibroblasts and myofibroblasts, TGF-β expression, collagen density, and capsule thickness. Statement Of Significance: Although poly(dimethylsiloxane) (PDMS)-based silicone implants have been widely used for various applications including breast implants, they usually cause a typical side effect known as capsular contracture. Prior studies have shown that microtexturing and surface coating could reduce capsular contracture. However, previous methods are limited in their scope for application, and it is difficult to obtain FDA approval because of the large and nonuniform size of the microtexture as well as the use of toxic chemical components. Herein, those issues could be addressed by creating a microtexture of size less than 100 μm, with a narrow size distribution, and using layer-by-layer deposition of a biocompatible polymer without using any toxic compounds. Furthermore, this is the first attempt to combine microtexture with multilayer coating to obtain synergistic effects in minimizing capsular contracture. abstract_id: PUBMED:17205246 Does the surface structure of implants have an impact on the formation of a capsular contracture? Background: The formation of a fibrous capsule around a silicone breast implant is part of a physiologic foreign body reaction after breast augmentation. In contrast, the formation of a capsular contracture is a local complication of unknown cause. This study aimed to discover whether the surface structure of the implant (textured vs smooth) has any impact on the formation of a capsular contracture. Methods: This prospective study included 48 female patients with unilateral capsular fibrosis grades 1 to 4 in Baker's clinical scaling system. Of these patients, 14 received implants with a textured surface (Mentor), and 34 received implants with a smooth surface (Mentor). The implants all were placed in a submuscular position by the same experienced plastic surgeon. The clinical data were assessed using standardized patient questionnaires. For histologic diagnosis, operatively excised capsular tissue was examined. Preoperatively, venous blood samples for determining serum hyaluronan concentrations were taken from the patients. The control group consisted of 20 patients without capsular fibrosis. Results: The separate analysis of clinical data for the patients with textured and those with smooth-surfaced breast implants showed a slightly reduced degree of symptoms for the patients with textured silicone breast implants, as compared with those who had smooth-surfaced implants. The histologic assessment of the fibrosis showed a symmetric distribution for Wilflingseder scores 1 to 3 (29% each), whereas 13% of the capsular tissues could be assigned to Wilflingseder score 4. In contrast, the histologic assessment of the patients with smooth-surfaced implants predominantly showed a Wilflingseder score of 3 (65%).
The serologic investigations via enzyme-linked immunoassay (ELISA) showed serum hyaluronan concentrations of 10 to 57 ng/ml (25.0 ± 11.7 ng/ml). Therefore, no statistically significant differences in terms of serum hyaluronan levels could be determined between the two groups of patients. In comparison with the control group, the patients with implants showed elevated serum hyaluronan levels (p < 0.05). Conclusions: The histologic examination and serum hyaluronan concentration analysis showed no statistically significant difference between smooth-surfaced and textured implants (Mentor) with respect to the development of capsular contracture. On the other hand, the severity of capsular contracture showed a positive linear correlation with the degree of local inflammatory reactions, which were independent of the implant surface. abstract_id: PUBMED:25516275 Effect of Antiadhesion Barrier Solution and Fibrin on Capsular Formation After Silicone Implant Insertion in a White Rat Model. Introduction: One of the most serious complications of breast reconstruction and augmentation using silicone implants is capsular contracture. Several preventive treatments, including vitamin E, steroids, antibiotics, and cysteinyl leukotriene inhibitors, have been studied, and their clinical effects have been reported. However, the problem of capsular contracture has not yet been completely resolved. This study was performed to compare anti-adhesion barrier solution (AABS) and fibrin in their ability to prevent fibrotic capsule formation and to simultaneously evaluate their effect when used in combination, by means of capsular thickness analysis and quantitative analysis of matrix metalloproteinases (MMPs), tissue inhibitors of metalloproteinases (TIMPs), and type I collagen within the fibrous capsule. Materials And Methods: This study used female six-week-old Sprague-Dawley rats. Eighty rats were equally subdivided into the four following groups: AABS-treated, fibrin-treated, AABS and fibrin combined-treated, and untreated control groups. Each rat received two silicone chips under the panniculus carnosus muscle layer. The test materials were applied around the silicone chips. Four weeks later, the implantation sites including the skin and muscle were excised to avoid the risk of losing the fibrous capsule around the implants. The capsular thickness was analyzed by Masson's trichrome stain. Quantitative analysis of type I collagen, MMPs, and TIMPs was performed by real-time PCR, Western blot, and zymography. Results: The mean capsular thickness was 668.10 ± 275.12 μm in the control group, 356.97 ± 112.11 μm in the AABS-treated group, 525.96 ± 130.97 μm in the fibrin-treated group, and 389.24 ± 130.51 μm in the AABS and fibrin combined-treated group. Capsular thickness was significantly decreased in all experimental groups (p < 0.05). Capsular thickness was greater in the fibrin-treated group than in the AABS-treated group (p < 0.05). There was no statistically significant difference in capsular thickness between the AABS and fibrin combined-treated group and the AABS- or fibrin-treated group (p > 0.05). Compared to the control group, the experimental groups had significantly lower expressions of type I collagen and MMP-1 (p < 0.05), but there was no statistically significant difference in expressions of type I collagen and MMP-1 between the AABS-, fibrin-, and AABS and fibrin combined-treated groups (p > 0.05).
The expressions of MMP-2 and TIMP-2 were not significantly different between the control and the experimental groups (p > 0.05). Conclusion: AABS is more effective in reducing capsular thickness compared with fibrin treatment in a white rat model. abstract_id: PUBMED:33109001 Cosmetic Breast Augmentation without Silicone Implants. Cosmetic breast augmentation procedures without silicone implants are becoming increasingly popular. Mastopexy can be adapted to the needs of the patient and usually results in at least an optical augmentation. Lipofilling is a safe procedure in reconstructive and aesthetic breast surgery and can help to restore natural appearance, compensate volume deficiencies, and correct asymmetries that may be congenital or a consequence of prior surgeries. Cosmetic augmentation of very small breasts, or cases aiming at higher breast volumes using lipofilling only, is usually combined with procedures for tissue expansion in order to increase the take rate of the injected fat. Hyaluronic acid injection is not a valid alternative and is not recommended for cosmetic breast augmentation. abstract_id: PUBMED:15220598 Matrix metalloproteinases, tissue inhibitors of metalloproteinases, aminoterminal propeptide of procollagen type III, and hyaluronan in sera and tissue of patients with capsular contracture after augmentation with Trilucent breast implants. In various fibrotic diseases, matrix metalloproteinases (MMPs) and their natural inhibitors, the tissue inhibitors of metalloproteinases (TIMPs), play an important role. In our study, serum concentrations of MMP-1, MMP-2, MMP-9, TIMP-1, and TIMP-2 were determined by enzyme-linked immunosorbent assay in 17 female patients with Baker grade II (n = 9), III (n = 7), and IV (n = 1) capsular contracture after bilateral cosmetic breast augmentation with Trilucent implants (AEI, Inc., Caversham, United Kingdom). Samples of capsular tissue for standard histology and immunohistochemistry were obtained from all patients. Sera from 20 female patients who had plastic surgery for reduction mammaplasty were used as the control group. The aminoterminal propeptide of procollagen type III (PIIINP) and hyaluronan were also analyzed as markers of fibrogenesis in both groups. Statistical analysis was performed using the Mann-Whitney test and Spearman rank correlation. Patients with capsular contracture presented significantly higher concentrations of TIMP-1 and TIMP-2 in their sera than did the control group (p < 0.05), which correlated with Baker grade (r = 0.7 versus r = 0.65; p < 0.05). The concentration of MMP-2 was significantly higher in the sera of patients with capsule fibrosis, whereas there were no significant differences in MMP-1, MMP-9, and PIIINP serum concentrations. Patients with capsule fibrosis had a significantly lower MMP-to-TIMP ratio (1.1 ± 0.4, p < 0.05) than the control group (1.5 ± 0.4), which correlated with the Baker classification (r = 0.7; p < 0.05). The hyaluronan serum concentration of patients with capsular contracture was significantly higher (p < 0.05) and correlated with the Baker grade (r = 0.73; p < 0.05), whereas PIIINP showed no difference. In the histologic evaluation, there was a chronic inflammatory reaction in the capsules around the breast implants and refracting material within the substance. Immunohistochemically, TIMP-1 and TIMP-2 showed an intensive accumulation, and MMP-2 showed a local reaction.
PIIINP could be detected, too, whereas there was no staining for MMP-1 and MMP-9. The elevated systemic MMP-2 concentration and the local positive staining in the tissue might be due to the chronic inflammatory reaction. Nevertheless, the balance between MMPs and their natural inhibitors is disturbed in patients with capsule contracture. The elevated systemic concentration of TIMPs might be a pathway in the pathogenesis of severe fibrosis after breast augmentation with alloplastic material. Hyaluronan might be a useful marker for early prediction of capsule fibrosis, whereas PIIINP is not useful as a predictor. abstract_id: PUBMED:38424927 Injected Breast Materials: Review of Imaging Features and Characteristics. In the United States, silicone and saline breast implants with their familiar radiologic appearance are the mainstays of breast augmentation. However, less well-known sequelae of unconventional injected materials introduced for cosmetic and noncosmetic purposes may also be encountered on breast imaging: for example, free silicone, paraffin and/or oil, polyacrylamide gel, autologous fat, and hyaluronic acid, which are encountered in the setting of breast augmentation. Breast injection of go-yak is not cosmetic but likewise results in characteristic imaging findings. Breast changes due to extravasation of chemotherapy or interstitial brachytherapy can mimic the appearance of injected noncosmetic materials. Because many of these materials can mimic or obscure imaging findings of breast cancer, it is important to recognize their varied appearances and the limitations of imaging alone in delineating breast injection material from cancer. Given the relatively uncommon incidence of injected materials into the breast, this article aims to review the imaging appearance in order to aid radiologists in maximizing cancer detection and ensuring optimal patient management. abstract_id: PUBMED:23187716 The effect of antiadhesion agent on peri-implant capsular formation in rabbits. Purpose: Capsular contracture is the most troublesome complication after aesthetic breast surgery. Capsule formation can be seen as a normal foreign body reaction caused by implant insertion into the body. Pathological capsular contracture can lead to severe symptoms including pain, tenderness, and breast distortion. The hypertrophic scar hypothesis, one of the prevailing theories, implicates hematoma, granuloma, or other factors in capsular contractures. There are also animal studies that measure adhesion-induced capsule formation using fibrin glue. The authors performed the experiment to evaluate reductions in capsule formation using an antiadhesion agent (AAA). Methods: Twelve smooth-surfaced cohesive-gel implants were implanted in 12 New Zealand white rabbits weighing 1.8 to 2.6 kg. These 5 × 5 × 1 cm sized miniature implants were designed in accordance with products currently used for breast augmentation. After skin incision, the exposed latissimus dorsi muscle was elevated, and a submuscular pocket was made. The rabbits were divided into 2 groups. In the experimental group (n = 6), the implant and 2 mL of AAA (Guardix) were inserted into the pocket under the muscle. In the control group (n = 6), implants and 2 mL of saline were inserted into the pocket. During the 2-month follow-up period, the rabbits were imaged monthly by 3-dimensional computed tomography to study capsule formation changes. After 2 months, the animals were euthanized, and implants with peri-implant capsule were excised.
We evaluated capsule thickness, collagen pattern, and myofibroblast ratio on the ventral, lateral, and dorsal aspects in a blinded fashion. Results: No significant differences in capsule thickness or capsular contractures were observed on gross examination or 3-dimensional computed tomography. On histological evaluation, the capsule was thinner on all aspects (ventral, P = 0.027; lateral, P = 0.027; dorsal, P = 0.028; all P < 0.05), the pattern of collagen had more parallel alignment at low density, and the myofibroblast ratio was lower (ventral, P = 0.009; lateral, P = 0.002; dorsal, P = 0.004; all P < 0.05) in the experimental group than in the control group. Conclusions: We suggest that AAA can be helpful in reducing capsule formation. Later, clinical trials are needed to evaluate this finding. abstract_id: PUBMED:25238227 Physico-chemical characteristics of coated silicone textured versus smooth breast implants differentially influence breast-derived fibroblast morphology and behaviour. Capsule formation is an inevitable consequence of silicone breast implantation. Clinically challenging dense fibrocollagenous capsular contractures occur at different rates on smooth compared with textured surfaces. Host response is influenced by several factors including implant surface texture, chemistry and interactions between cells and the extracellular matrix (ECM). Specific coatings can modify the physico-chemical properties of implant surfaces, eliciting specific cellular reactions. Therefore, we evaluated the effects of the physico-chemical characteristics of coated smooth versus textured silicone breast implants on breast-derived fibroblast morphology and behaviour using (a) confocal laser microscopy, (b) Raman spectroscopy and (c) the effect of four unique protein and glycosaminoglycan (GAG) coatings (aggrecan, collagen I, fibronectin and hyaluronic acid) on breast-derived fibroblast attachment, proliferation, morphology, spreading, cytotoxicity and gene expression. Collagen I, fibronectin and hyaluronic acid coatings exhibited satisfactory fibroblast adhesion (p < 0.001) in comparison to uncoated surfaces. Cell adhesion was lower on smooth surfaces than on textured surfaces (p < 0.001). Fibroblasts cultured on collagen I, fibronectin and hyaluronic acid coated implants demonstrated greater proliferation than those on uncoated surfaces (p < 0.001). The LDH assay showed that coating surfaces with collagen I, fibronectin and hyaluronic acid did not induce cytotoxicity. Alpha-actinin expression and fibroblast adhesion to the substrate were upregulated (p < 0.001) in textured versus smooth surfaces. FAK, vinculin and paxillin expression were upregulated (p < 0.001) in all surfaces coated with fibronectin and collagen I. In conclusion, we present original data for expression of adhesion-related genes, cell morphology and proliferation in breast fibroblasts following the application of specific coatings on breast implant surfaces. Answer: The studies investigating serologic and histologic findings in patients with capsular contracture after breast augmentation with smooth silicone gel implants have indicated that serum hyaluronan levels may be a potential predictor of capsular contracture. In a study involving 25 female patients with capsular contracture after cosmetic breast augmentation using smooth silicone gel implants, it was found that serum hyaluronan concentrations were significantly higher in patients with capsular contracture compared to control subjects.
Moreover, there was a positive correlation between the grade of capsular contracture (Baker grades I through IV) and the hyaluronan serum concentration (PUBMED:16328636, PUBMED:16816700). Histologically, capsular tissue was significantly thicker in patients with higher grades of contracture, and there was a moderate to severe chronic inflammatory reaction in the capsules around the implants. The major cell populations found within the fibrous capsules were fibroblasts, fibroblast-like cells, and macrophages, along with other inflammatory cells. Activated CD4+ cells were also detected (PUBMED:16328636, PUBMED:16816700). The findings suggest that serum hyaluronan levels are elevated in patients with capsular contracture and correlate with the stage of capsular contracture. This correlation may help in defining patients at risk for capsular fibrosis and could potentially be used as a marker for early prediction of capsule fibrosis. However, further study is necessary to determine whether hyaluronan might be useful as a predictor for the development and progress of capsular fibrosis (PUBMED:16816700). Additionally, other studies have explored various methods to minimize capsular contracture, such as dual surface modification of PDMS-based silicone implants (PUBMED:29908334), the impact of implant surface structure (PUBMED:17205246), the use of antiadhesion barrier solutions and fibrin (PUBMED:25516275), and the role of matrix metalloproteinases and tissue inhibitors of metalloproteinases in capsular contracture (PUBMED:15220598). These studies contribute to the understanding of the pathophysiology of capsular contracture and potential interventions to prevent or reduce its occurrence.
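As a side note on the statistics involved: the grade-concentration correlation described above pairs an ordinal variable (Baker grade I through IV) with a continuous one, for which a rank-based coefficient is the natural choice. A minimal sketch, assuming purely hypothetical values (the cited studies' raw data are not available):

from scipy.stats import spearmanr

# Hypothetical illustration only; these are not data from
# PUBMED:16328636 or PUBMED:16816700.
baker_grade = [1, 1, 2, 2, 2, 3, 3, 3, 4, 4]                  # ordinal Baker grade
hyaluronan_ng_ml = [18, 22, 25, 31, 28, 40, 37, 45, 52, 60]   # serum hyaluronan

# Spearman's rho handles the ordinal grading without assuming linearity.
rho, p_value = spearmanr(baker_grade, hyaluronan_ng_ml)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")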
Instruction: Do older adults expect to age successfully? Abstracts: abstract_id: PUBMED:37553781 Older and fearing new technologies? The relationship between older adults' technophobia and subjective age. Objective: This article aimed to identify different technophobia subgroups of older adults and examine the associations between these distinct subgroups and subjective age. Methods: A sample of 728 retired older adults over the age of 55 was recruited in China. Latent profile analysis was conducted to identify technophobia subgroups using three indicators: techno-anxiety, techno-paranoia and privacy concerns. Analysis of Variance was applied to determine whether a relationship exists between the identified technophobic subgroups and subjective ages (Feel-age, Look-age, Do-age and Interests-age). Results: Four technophobia types were identified: 'low-technophobia' (24.59%), 'high-privacy concerns' (26.48%), 'medium-technophobia' (28.38%), and 'high-technophobia' (20.55%). Privacy concerns played a major role for the older adults belonging to the 'high-privacy concerns' and 'high-technophobia' profiles (together 47.03%). A series of ANOVAs showed that older adults in the 'low-technophobia' subgroup were more likely to report younger subjective ages on the feel-age and interests-age dimensions. Conclusion: The majority of Chinese older adults do not suffer from high levels of technophobia, but they do have concerns about privacy issues. The findings also suggest that a younger subjective age might have a protective effect for older adults with technophobia. Future technophobia interventions should focus on breaking age stereotypes about older adults and technology. abstract_id: PUBMED:31017475 What Can Be Done To Better Support Older Adults To Age Successfully In Their Homes And Communities? The rapid growth of the US population ages seventy-five and older in the coming years will increase the need for housing that accommodates mobility limitations and helps connect residents with supportive services and opportunities for socialization. While expanding the supply of housing with services such as those provided by independent and assisted living facilities is needed, so too are greater supports to allow older adults with disabilities to age successfully in their homes and communities. These include financial support for modifications to the home, the delivery of supportive services in the home by both family and paid caregivers, and the expansion of housing options in communities where older adults live. abstract_id: PUBMED:36824300 Childhood family risks and mental health of older adults in China: The moderating effect of age. Introduction: Childhood family risks (CFRs) are believed to have long-arm effects on people's mental health. However, it is unclear whether age can alleviate these long-arm effects. Aim: This study aimed to explore the relationship between CFRs and the mental health of older adults (mean [M] = 62.10, standard deviation [SD] = 8.02) in China and investigate whether age could moderate this relationship. Methods: This cross-sectional, survey-based study used data from the Chinese General Social Survey conducted in 2015, and the data of 4,237 respondents were included in the final analysis. Mental health was measured by two items, namely negative and positive emotions. The step-by-step regression procedure and moderation analysis technique were used.
Results: For older adults in China, CFRs were significantly negatively associated with their mental health (β = -0.046, t = -2.690, p < 0.01), age was positively correlated with their mental health (β = 0.115, t = 7.157, p < 0.01), and age significantly moderated the relationship between CFRs and mental health (β = 0.277, t = 2.172, p < 0.05). As age increases, the correlation between CFRs and mental health decreases, and when age is one SD above the M, CFRs are no longer significantly associated with the mental health level (b_simple = -0.01, t = -0.077, p > 0.05). Conclusion: This study showed that CFRs were negatively associated with the mental health level of older Chinese adults, and age could significantly moderate the association. Therefore, it is essential to take preventive measures in advance to protect people's mental health and teach older adults to use emotion-regulation techniques to weaken the association between CFRs and mental health. abstract_id: PUBMED:30554730 Predictors of subjective age in community-dwelling older adults in Korea. Subjective age, or felt age, is highly related to one's actual health and survival. This study was conducted to explore subjective age in community-dwelling older adults in Korea, and to identify predictors of subjective age. Participants were 616 Korean older adults who were living in the community. A multiple regression analysis with a stepwise method was conducted to analyse subjective age. On average, the participants felt 7.8 per cent younger than their actual age. A multiple regression analysis revealed that less severe depression, having better perceived health, having no visual impairment, having higher handgrip strength, and living in a metropolitan area were associated with younger subjective age. Further studies are needed to determine the factors associated with subjective age in older adults. Based on this study, conceptual analyses of subjective age or longitudinal studies to determine the factors affecting subjective age are advisable. abstract_id: PUBMED:29359580 Subjective Age and Its Correlates Among Middle-Aged and Older Adults. The present study evaluates discrepancies in subjective age as reported by middle-aged persons (aged 44-64 years) in comparison to older adults (aged 65 years and older), using a multidimensional definition of the concept. A convenience sample of 126 middle-aged and 126 older adults completed subjective age measures (felt age, desired age, and perceived old age), attitudes toward older adults, knowledge about aging, and sociodemographic questionnaires. Overall, participants reported feeling younger than they actually were and wanting to be younger than their chronological age. Perceived mean for old age was about 69 years. Discrepancies in felt age and desired age were significantly larger for the older group compared to the middle-aged group. Regarding perceived old age, compared to the younger group, older adults reported that old age begins at an older age. Findings suggest that middle-aged and older adults' perceptions regarding themselves and regarding old age in general are independent and need, therefore, separate research and practical attention. abstract_id: PUBMED:12410903 Do older adults expect to age successfully? The association between expectations regarding aging and beliefs regarding healthcare seeking among older adults.
Objectives: To measure expectations regarding aging among community-residing older adults, identify characteristics associated with having low expectations regarding aging, and examine whether expectations regarding aging are associated with healthcare-seeking beliefs for age-associated conditions. Design: Self-administered mail survey. Setting: Greater Los Angeles. Participants: Four hundred twenty-nine of 588 (73%) randomly selected community-residing adults aged 65 to 100 (mean age 76) cared for by 20 primary care physicians; 54% were women, and 76% were white. Measurements: The Expectations Regarding Aging Survey, a validated survey measuring expectations regarding aging; 13 items measuring care-seeking beliefs; and validated measures of health status. Results: More than 50% of participants felt it was an expected part of aging to become depressed, to become more dependent, to have more aches and pains, to have less ability to have sex, and to have less energy. After adjusting for sociodemographic and health characteristics using multivariate regression, older age was independently associated with lower expectations regarding aging (P < .001), as was having lower physical and mental health-related quality of life. Having lower expectations regarding aging was independently associated with placing less importance on seeking health care (P = .049). Conclusions: Most older adults in this sample did not expect to achieve the model of successful aging in which high cognitive and physical functioning is maintained. Older age was independently associated with lower expectations regarding aging. Furthermore, having low expectations regarding aging was independently associated with not believing it important to seek health care. abstract_id: PUBMED:35682424 Is Subjective Age Associated with Physical Fitness in Community-Dwelling Older Adults? Although subjective age has been associated with a range of health-related outcomes, there has been little systematic study on the relationship between subjective age and physical fitness in a given population. The purpose of this study was to determine the prospective association between subjective age and physical fitness in community-dwelling older adults. A sample of 276 older people who lived in the community was studied. Subjective age was measured by a face-to-face interview. Grip strength, balancing on one leg with eyes open, the 30 s chair stand test, 4 m habitual walk, and 6 min walk test were measured to reflect physical fitness. Results indicated that the felt-younger older adults had a higher level of physical fitness compared to their felt-older and felt-the-same counterparts. Multiple linear regression analysis indicated that all the measured physical fitness items were significantly associated with subjective age in older men. All of the measured physical fitness items except for the 4 m habitual walk were markedly related to subjective age in older women. The findings suggest that subjective age is closely associated with physical fitness in community-dwelling older adults. Much attention should be paid to the promotion of physical fitness to improve the subjective age of older adults. abstract_id: PUBMED:31975837 Cognitive Functioning in Older Adults With Schizophrenia. Cognitive deficits are thought to be a core feature in schizophrenia and have been found to be strongly associated with impairments in functioning. It is estimated that more than 70% of patients with schizophrenia have cognitive impairment.
The aim of this article is to critically review the emerging literature on cognition in older adults with schizophrenia. Specifically, we address the following questions: Are there differences in cognitive functioning between older adults with schizophrenia and their healthy age peers, as well as between older and younger people with schizophrenia? What are the factors associated with cognitive deficits and their interaction over time? What are the life course trajectories of cognitive deficits, especially in later life? Are older adults with schizophrenia more likely to develop dementia, and, if so, does it differ from other dementias? Are there pharmacological and psychosocial interventions that can successfully treat cognitive deficits in older adults with schizophrenia? abstract_id: PUBMED:38163202 Development and psychometric properties of the age-friendly hospitals scale in older adults. Age-friendly Primary Health Care, introduced by the World Health Organization (WHO), provided a framework to guide countries in developing concrete and appropriate care in the health system, including encouraging the development of an Age-Friendly Hospital (AFH) care network. The study aimed to develop and evaluate the psychometric properties of the AFH scale (AFHS) in older adults. A cross-sectional study collected and analyzed data from 330 older adults between June 2018 and June 2019. The instrument was developed and validated according to the proposed guidelines. The study involved item generation and scale development, including content and face validity, pilot testing, exploratory factor analysis (EFA), internal consistency, and test-retest reliability. EFA was performed using principal axis factoring with a promax rotation. The original model comprised four factors and 22 items. Three factors with eigenvalues greater than one were extracted, and the scree plot examination confirmed the retention of three factors with 22 items after performing EFA. Using the EFA, we identified three main factors: care processes, communication and service, and physical environment. The developed scale can contribute to establishing AFH and quality healthcare institutions. It may be a valuable reference for healthcare facilities to evaluate and enhance their services, considering factors like limited resources and workforce. Furthermore, this scale can facilitate continuous improvement and long-term development of age-friendly healthcare institutions. abstract_id: PUBMED:33489646 Age-Related Change in Muscle Characteristics and Resistance Training for Older Adults. In older adults, muscle weakness contributes greatly to functional restrictions on daily living activities, increased risk of falls, and adverse physiological changes. It has been suggested that not only muscle mass but also muscular infiltration of noncontractile elements may influence muscular performance such as strength and rapid force production. It has been shown that resistance training may provoke substantial increases in muscle size even if it is performed at low intensities in older individuals. Also, recent studies have demonstrated the effectiveness of resistance training on muscle quality, such as muscular infiltration of noncontractile elements, in older people. This review shows the age-related changes in muscle mass and muscle quality, which were measured by muscle echo intensity on ultrasound images, and low-intensity resistance training effects on muscle volume and muscle quality.
Answer: Older adults' expectations regarding aging vary, and some may not anticipate achieving a model of successful aging where high cognitive and physical functioning is maintained. A study found that more than 50% of older adults surveyed felt it was an expected part of aging to experience increased dependency, more aches and pains, reduced sexual ability, and less energy. Older age was independently associated with lower expectations regarding aging, and having lower expectations was linked to placing less importance on seeking healthcare for age-associated conditions (PUBMED:12410903). This suggests that many older adults may have a somewhat pessimistic view of aging and may not fully expect to age successfully. However, it is important to note that expectations can be influenced by various factors, including health status, sociodemographic characteristics, and personal beliefs.
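The moderation analysis reported in the childhood-family-risks abstract (PUBMED:36824300) rests on a standard interaction-term regression with simple slopes evaluated at the mean of the moderator plus and minus one SD. A minimal sketch of that procedure, using simulated data that only mimics the reported pattern and is not the actual survey:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
age = rng.normal(62, 8, n)
cfr = rng.normal(0, 1, n)
# Simulate a CFR effect on mental health that weakens at higher ages.
mental_health = 0.1 * age - (0.3 - 0.004 * (age - 62)) * cfr + rng.normal(0, 1, n)
df = pd.DataFrame({"mh": mental_health, "cfr": cfr, "age": age})

# Center predictors so the main effects are interpretable at the mean.
df["cfr_c"] = df["cfr"] - df["cfr"].mean()
df["age_c"] = df["age"] - df["age"].mean()
model = smf.ols("mh ~ cfr_c * age_c", data=df).fit()

# Simple slopes of CFR at mean age minus/plus one SD of age.
sd_age = df["age_c"].std()
b_cfr = model.params["cfr_c"]
b_int = model.params["cfr_c:age_c"]
print(f"simple slope at mean age - 1 SD: {b_cfr + b_int * (-sd_age):.3f}")
print(f"simple slope at mean age + 1 SD: {b_cfr + b_int * (+sd_age):.3f}")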
Instruction: Should a Lumbar MRI for back pain routinely include the sacro-iliac joint? Abstracts: abstract_id: PUBMED:26280717 Should a Lumbar MRI for back pain routinely include the sacro-iliac joint? Background: Back pain is a common problem and has been shown to affect approximately 85% of the adult population at any one time. The source of this pain can be difficult to identify and the sacroiliac joint (SIJ) has been proposed as a possible pain source. Its percentage contribution to lower back pain is controversial. Clinical provocative tests for SIJ pathology have been developed but these have high intra- and inter-observer variability and the significance of positive findings is unclear. This study proposes that the SIJ should not be imaged as part of a routine MRI lumbar spine series. Methods: We retrospectively reviewed the images of 353 patients who had MRI lumbar spine scans. Of these, 130 had the SIJ imaged. We recorded the clinical findings and diagnosis at referral. We reviewed the images and documented the radiological findings. Results: SI joint pathology was most frequently identified when clinically suspected. Overall, SIJ pathology was found on MRI in only 0.02% of patients. Conclusions: We conclude that routinely imaging the SIJ in MRI lumbar spine series is neither cost-effective nor a useful use of resources. The SIJ should be imaged only if significant clinical findings are demonstrated. abstract_id: PUBMED:24482619 Early results of sacro-iliac joint fixation following long fusion to the sacrum in adult spine deformity. Background: The sacro-iliac joint (SIJ) is the largest joint in the human body. When the lumbar spine is fused to the sacrum, motion across the SIJ is increased, leading to increased degeneration of the SIJ. Degeneration can become symptomatic in up to 75% of the cases when a long lumbar fusion ends with a sacral fixation. If medical treatments fail, patients can undergo surgical fixation of the SIJ. Questions/purposes: This study reports the results of short-term complications, length of stay, and clinical as well as radiographic outcomes of patients undergoing percutaneous SIJ fixation for SIJ pain following long fusions to the sacrum for adult scoliosis. Methods: A retrospective review of all the patients who underwent a percutaneous fixation of the SIJ after corrective scoliosis surgery was performed in a single specialized scoliosis center between the years 2011-2013. Ten SIJ fusions were performed in six patients who failed conservative care for SIJ arthritis. Average age was 50 (range 25-60 years). The patients were, on average, 15.3 years after the original surgical procedure (range 4-25 years). Average post-operative follow-up was 10.25 months (range 4-15 months). The medical charts of the patients were reviewed for hospital stay, complications, pre- and post-operative pain, quality of life, and satisfaction with surgery using the visual analogue scale (VAS), Scoliosis Research Society (SRS)22 and Oswestry Disability Index (ODI) questionnaires. Images were reviewed for fixation of the SIJ, fusion, and deviation of the implants from the SIJ. Results: There were no complications in surgery or post-operatively. Discharge was on post-operative day 2 (range 1-4 days). Leg VAS score improved from 6.5 to 2.0 (P < 0.005; minimal clinically important difference (MCID) 1.6). Back VAS score decreased from 7.83 to 2.67 mm (P < 0.005; MCID 1.2). ODI scores dropped from 22.2 to 10.5 (P = 0.0005; MCID 12.4).
SRS22 scores increased from 2.93 to 3.65 (P = 0.035; MCID 0.2) with the largest increases in the pain, function, and satisfaction domains of the questionnaires. Conclusion: Fixation of the SIJ in patients who fail conservative care for SIJ arthritis after long fusions ending in the sacrum provides a reduction in back pain and improved quality of life in the short- and medium-term follow-up period. abstract_id: PUBMED:158822 Sacro-iliac joint scanning with technetium-99 diphosphonate. Quantitative sacro-iliac (SI) joint scanning with methylene diphosphonate labelled with technetium-99 (99TcMDP) was performed in 25 control patients, in 16 patients with definite ankylosing spondylitis, in 23 patients with mechanical low back pain, and in 12 patients with seronegative arthritis. The mean radio-isotope index in the control group was 1.2 +/- 0.15. The highest value was 1.5. Values in excess of 1.5 were seen in patients with clinically active ankylosing spondylitis but not in those with inactive disease. Three of the 12 seronegative arthritis patients (without clinical or radiological evidence of sacro-iliitis) had elevated values: all of these were positive for HL-A B27. An important finding was that six of the 23 patients with mechanical or non-specific low back pain had values above 1.5, unassociated with B27. These data emphasize the need for caution in the interpretation of abnormal sacro-iliac scans. Radio-isotope bone scanning can provide a qualitative and quantitative assessment of inflammatory activity in joints with minimal radiation exposure. Various authors have shown its value in providing early evidence of sacro-iliitis (Russell et al., 1975; Namey et al., 1977). In this study, methylene diphosphonate labelled with technetium-99 (99TcMDP) has been used to produce quantitative sacro-iliac scans in order to evaluate sacro-iliac disease in four groups of patients presenting with or without low back pain. abstract_id: PUBMED:2605031 External fixation of the pelvic girdle as a test for assessing instability of the sacro-iliac joint. External fixation of the pelvic girdle was used in a consecutive series of ten female patients suffering from suspected chronic instability of the sacro-iliac joint. The condition was attributed to pelvic relaxation after pregnancy in seven patients and to trauma in three. External fixation of the pelvis with a trapezoid frame reduced the average radiographic displacement of the symphysis from 5.0 to 2.4 mm in eight patients, relieved the dorsal pain in seven, and markedly improved walking ability in five. Seven of the ten patients were subsequently subjected to an anterior sacro-iliac joint arthrodesis in which square compression plates and autogenous bone grafts were used. At follow-up examination five of these patients were improved, and two unchanged. The results suggest that external fixation of the pelvis is useful in assessing painful sacro-iliac joint instability and should precede surgical intervention. abstract_id: PUBMED:6454524 An improved method for sacro-iliac joint imaging: a study of normal subjects, patients with sacro-iliitis and patients with low back pain. A new method is described for quantitative measurement of the uptake of 99mTc-methylene diphosphonate (MDP) by the sacro-iliac joints. The method uses 'regions of interest', providing advantages over the previously described 'slice' method; the two methods are compared in normal subjects, patients with known sacro-iliitis and patients with low back pain.
Sacro-iliac activity, as calculated by the sacro-iliac index (SII) in normal patients, was shown to decrease with age in females but not in males. The SII was compared with radiographs of the sacro-iliac joints in the patients with known sacro-iliac joint disease and in those with low back pain. The method is useful for the exclusion of sacro-iliitis as a specific cause of back pain. abstract_id: PUBMED:29162138 Predictability of the effects of facet joint infiltration in the degenerate lumbar spine when assessing MRI scans. Background: Imaging results are frequently considered hallmarks of disease by spine surgeons when planning their future treatment strategy. Numerous classification systems have been proposed to quantify or grade lumbar magnetic resonance imaging (MRI) scans and thus objectify imaging findings. The clinical impact of the measured parameters remains, however, unclear. To evaluate the pathological significance of imaging findings in patients with multisegmental degenerative findings, clinicians can perform image-guided local infiltrations to target defined areas such as the facet joints. The aim of the present retrospective study was to evaluate the correlation of MRI facet joint degeneration and spinal stenosis measurements with improvement obtained by image-guided intraarticular facet joint infiltration. Methods: Fifty MRI scans of patients with chronic lumbar back pain were graded radiologically using a wide range of classification and measurement systems. The reported effect of facet joint injections at the site was recorded, and a comparative analysis performed. Results: When we allocated patients according to their reported pain relief, 27 showed no improvement (0-30%), 16 reported good improvement (31-75%) and 7 reported excellent improvement (> 75%). MRI features assessed in this study did not, however, show any relevant correlation with reported pain after facet joint infiltration: Values for Kendall's tau ranged from τ = -0.190 for neuroforaminal stenosis grading as suggested by Lee, to τ = 0.133 for posterior disc height as proposed by Hasegawa. Conclusion: Despite the trend in evidence-based medicine to provide medical algorithms, our findings underline the continuing need for individualised spine care that, along with imaging techniques or targeted infiltrations, includes diagnostic dimensions such as good patient history and clinical examination to formulate a diagnosis. Trial Registration: ClinicalTrials.gov, NCT03308149, retrospectively registered October 2017. abstract_id: PUBMED:6210949 Infra-red thermography in the assessment of sacro-iliac inflammation. In a cross-sectional study designed to investigate the sensitivity of infra-red thermography in the detection of sacro-iliitis, the sacro-iliac regions were examined by thermography in a group of patients with ankylosing spondylitis and compared with normal volunteers and patients with other causes of low back pain. Thermograms were recorded both quantitatively via profile measurements across the sacro-iliac regions and sacrum and qualitatively via the pattern recorded by photography. Sacro-iliac disease activity was recorded clinically on the same day and was low overall in the patients examined. Thirteen of the 30 ankylosing spondylitis patients were abnormal thermographically either by profile or pattern measurements. None of the 13 patients with other causes of low back pain had increased sacro-iliac activity on thermography.
None of the four patients without X-ray evidence of sacro-iliitis was abnormal thermographically but there was a trend for increasing thermographic activity to be associated with increasing clinical activity. It was concluded that the thermographic technique examined was of little help in the diagnosis of early sacro-iliitis but might be more helpful in the objective serial assessment of sacro-iliitis in individual patients with active disease. abstract_id: PUBMED:7112148 Gout in the spine and sacro-iliac joints: radiological manifestations. It is well known that deposits of urates in soft tissues occur commonly in gout, particularly in para-articular areas and in articular cartilages of the limbs. Involvement of the spine and sacro-iliac joints by such deposits, however, has been regarded as being relatively unusual and has attracted little attention in the literature. As we were impressed by the frequency of episodes of acute back pain in our patients with gouty arthritis, established definitely on clinical and biochemical grounds, we undertook a radiological investigation of the spine and sacro-iliac joints in a series of 54 subjects. It was suspected that their episodes of pain were clinical manifestations of gout, and 12 of the group had suffered one or more attacks. Of these 12 subjects, eight were found to have radiological abnormalities. In six subjects, evidence of sacro-iliitis was demonstrated, which is comparable to the report of Resnick and Reinke [8], and in two patients vertebral lesions corresponded to those described by Jaffe [5]. Hyperostotic spondylosis was present in no fewer than 29 of the series. Although lacking histological confirmation of the lesions demonstrated radiologically, we believe that our suspicions have been confirmed. abstract_id: PUBMED:34306901 Perioperative Outcomes of Minimally Invasive Sacroiliac Joint Fusion Using Hollow Screws Through a Lateral Approach: A Single Surgeon Retrospective Cohort Study. Sacroiliac joint (SIJ) pain is a common cause of lower back pain and a significant source of disability in the United States. There is no consensus on the best surgical treatment for SIJ pain that is not responsive to conservative therapy. Minimally invasive fusion of the SIJ using hollow fenestrated screws from a lateral trajectory is a newer technique for SIJ fusion. This study presents perioperative and patient-reported outcomes amongst 62 patients who underwent SIJ fixation with hollow fenestrated screws. We find that mean disability on the Oswestry disability index improved from 52.2% to 34.9% at one year post-op. Mean operative time was 34±9 minutes and blood loss was 22±35 ml. Only six patients required overnight hospitalization. There were two cases of complications requiring operative intervention. We conclude that SIJ fixation using hollow fenestrated screws is a safe and effective procedure for the fixation of the SIJ. Further investigation is warranted to determine the best surgical treatment for SIJ pain. abstract_id: PUBMED:27054038 Evaluation of MRI protocols for the assessment of lumbar facet joints after MR-guided focused ultrasound treatment. Background: MR-guided focused ultrasound (MRgFUS) might be a very safe and effective minimally invasive technique to treat facet joint pain caused by arthritis and other degenerative changes.
However, there are still safety concerns for this treatment and challenges regarding MR imaging and temperature mapping due to susceptibility effects between the bone and soft tissue near the joint, which have resulted in poor MR image quality. The goal of this research was to evaluate multiple magnetic resonance imaging (MRI) pulse sequences for characterizing ablated lumbar facet joint lesions created by high-intensity focused ultrasound (FUS) and compare the findings to histological tissue assessment. In particular, we investigated the use of T2-weighted MRI to assess treatment effects without contrast administration. Methods: An IACUC-approved study (n = 6 pigs) was performed using a 3T widebore MRI system equipped with an MRgFUS system. Facet joints of the lumbar vertebra were ablated using 1-MHz frequency and multiple sonication energies (300-800 J). In addition to T2-weighted MRI for treatment planning, T1-, T2-, and T2*-weighted and perfusion MRI sequences were applied. Signal intensity ratios of the lesions were determined. Histopathology was used to characterize cellular changes. Results: Ablation of the facet joint, using MRgFUS, was successful in all animals. T2-weighted images showed high signal intensity in the edematous facet joint and adjacent muscle, while delayed contrast-enhanced T1-weighted images showed an enhanced ring surrounding the target volume. T2*-weighted GRE images revealed inconsistent lesion visualization. Histopathology confirmed the presence of cellular coagulation (shrinkage), extracellular expansion (edema), and hemorrhage in the bone marrow. Conclusions: MRgFUS provided sufficient precision and image quality for visualization and characterization of ablated facet joints directly after ablation. MRI may help in monitoring the efficacy of FUS ablation without contrast after treating patients with back pain. Answer: Based on the provided abstracts, it appears that routinely including the sacroiliac joint (SIJ) in a lumbar MRI for back pain may not be necessary or cost-effective for all patients. One study suggests that SIJ pathology was most frequently identified when clinically suspected and that overall, SIJ pathology was found on MRI in only 0.02% of patients. The authors concluded that routinely imaging the SIJ in MRI lumbar spine series is neither cost-effective nor a useful use of resources, and that the SIJ should be imaged only if significant clinical findings are demonstrated (PUBMED:26280717). However, it is important to note that in certain circumstances, such as in patients who have undergone long lumbar fusions ending in the sacrum, the SIJ can become symptomatic and may require surgical fixation if medical treatments fail (PUBMED:24482619). This suggests that in specific clinical scenarios, imaging of the SIJ may be warranted. In summary, while the SIJ can be a source of pain and may require intervention in certain cases, the routine inclusion of the SIJ in lumbar MRI scans for all patients with back pain is not supported by the evidence provided. Imaging of the SIJ should be considered based on clinical suspicion and individual patient circumstances rather than as a standard practice for all cases of back pain.
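For the yield figure driving the first abstract's conclusion, the relevant arithmetic is a simple proportion with a binomial confidence interval. A minimal sketch, where the number of scans including the SIJ is taken from the abstract but the positive count is hypothetical:

from statsmodels.stats.proportion import proportion_confint

n_scans_with_sij = 130   # scans in which the SIJ was imaged (from PUBMED:26280717)
n_positive = 1           # hypothetical number of scans showing SIJ pathology

# Wilson interval behaves better than the normal approximation at low counts.
yield_rate = n_positive / n_scans_with_sij
low, high = proportion_confint(n_positive, n_scans_with_sij, alpha=0.05, method="wilson")
print(f"yield = {yield_rate:.3%}, 95% CI [{low:.3%}, {high:.3%}]")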
Instruction: Does immunohistochemical staining have a clinical impact in early gastric cancer conducted endoscopic submucosal dissection? Abstracts: abstract_id: PUBMED:22969232 Does immunohistochemical staining have a clinical impact in early gastric cancer conducted endoscopic submucosal dissection? Aim: To evaluate clinicopathologic parameters and the clinical significance related to lymphovascular invasion (LVI) by immunohistochemical staining (IHCS) in endoscopic submucosal dissection (ESD). Methods: Between May 2005 and May 2010, a total of 348 lesions from 321 patients (mean age 63 ± 10 years, men 74.6%) with early gastric cancer (EGC) who met indication criteria after ESD were analyzed retrospectively. The 348 lesions were divided into the absolute (n = 100, differentiated mucosal cancer without ulcer ≤ 20 mm) and expanded (n = 248) indication groups after ESD. The 248 lesions were divided into four subgroups according to the expanded ESD indication. The presence of LVI was determined by factor VIII-related antigen and D2-40 assessment. We compared the LVI IHCS-negative group with the LVI IHCS-positive group in each indication group. Results: LVI by hematoxylin-eosin staining (HES) and IHCS was all negative in the absolute group, while LVI was observed only in the expanded groups. The positive rate of LVI by IHCS was higher than that of LVI by HES (n = 1, 0.4% vs n = 11, 4.4%, P = 0.044). LVI IHCS-positivity was observed when the cancer invaded to the mucosa 3 (M3) or submucosa 1 (SM1) levels, with a predominance of 63.6% in the subgroup that included only SM1 cancer (P < 0.01). In a univariate analysis, M3 or SM1 invasion by the tumor was significantly associated with a higher rate of LVI by IHCS, but no factor was significant in a multivariate analysis. There were no cases of tumor recurrence or metastasis during the median 26 mo follow-up. Conclusion: EGCs of the absolute group are immunohistochemically stable. The presence of LVI may be carefully examined by IHCS in an ESD expanded indication group with an invasion depth of M3 or greater. abstract_id: PUBMED:22792522 The clinical accuracy of endoscopic ultrasonography and white light imaging in gastric endoscopic submucosal dissection. Purpose: Gastric mucosal neoplastic lesions should have characteristic endoscopic features for successful endoscopic submucosal dissection. Materials And Methods: Out of 1,010 endoscopic submucosal dissections, we enrolled 62 patients who had the procedure cancelled. Retrospectively, we assessed whether the reasons for cancelling the endoscopic submucosal dissection were consistent with the indications for an endoscopic submucosal dissection by analyzing the clinical outcomes of the patients who had the surgery. Results: The cases were divided into two groups: the under-diagnosed group (30 cases; unable to perform an endoscopic submucosal dissection) and the over-diagnosed group (32 cases; unnecessary to perform an endoscopic submucosal dissection), according to the second endoscopic findings, compared with the index conventional white light image. There were six cases in the under-diagnosed group with advanced gastric cancer on the second conventional white light image endoscopy, 17 cases with submucosal invasion on endoscopic ultrasonography findings, 5 cases with a size greater than 3 cm and ulcer, 1 case with diffuse infiltrative endoscopic features, and 1 case with lymph node involvement on computed tomography. A total of 25 patients underwent a gastrectomy to remove a gastric adenocarcinoma.
The overall accuracy of the decision to cancel the endoscopic submucosal dissection was 40% (10/25) in the subgroup that had the surgery. Conclusions: The accuracy of the decision to cancel the endoscopic submucosal dissection, after conventional white light image and endoscopic ultrasonography, was low in this study. Other diagnostic options are needed to arrive at an accurate decision on whether to perform a gastric endoscopic submucosal dissection. abstract_id: PUBMED:33909540 Endoscopic instruments and techniques in endoscopic submucosal dissection for early gastric cancer. Introduction: Endoscopic submucosal dissection (ESD) is a less invasive local treatment for early gastric cancer (EGC).Areas covered: Japanese guidelines have recently expanded the endoscopic treatment indications for EGC. ESD is sometimes performed for lesions with a relative indication that has not previously been considered for ESD. ESD procedures are technically difficult, especially for larger and more invasive lesions. Complete resection and accurate histological assessment of EGC lesions are essential to evaluate endoscopic curability. Various endoscopic instruments, including three types of endoscopic knives and specific hemostatic forceps, have been developed, and endoscopic techniques, including the traction method, have been modified to simplify the procedure.Expert opinion: An insulated-tip knife with large tissue contact allows us to perform faster incisions, while a needle-type knife with minimal tissue contact allows us to perform precise incisions. Furthermore, a scissor-type knife with grasping tissues allows us to perform more secure incision. The traction method enables the operator to proceed with submucosal dissection procedures. In cases with small lesions, snaring can be used, as well as knife dissection. Understanding the features of each endoscopic instrument and technique is essential because appropriate selection and usage contribute to successful and safe ESD procedures. abstract_id: PUBMED:22195238 Endoscopic submucosal dissection of early gastric cancer. Gastric cancer is the most common cancer worldwide. The proportion of early gastric cancer (EGC) cases at diagnosis has increased because of the use of mass screening endoscopy in older adults. Endoscopic mucosal resection has become the standard treatment for EGC in cases with standard indications because of its low risk of lymph node metastasis. A new endoscopic method, endoscopic submucosal dissection, has recently become available. This method allows en bloc resection without limitation of the size of the lesion. The goal of this article is to review the history and methods of endoscopic treatment with EGC, the conventional and extended indications, the therapeutic outcomes, and the complication rates. abstract_id: PUBMED:35127322 Late recurrence of early gastric cancer 11 years after curative endoscopic submucosal dissection. A 73-year-old man was initially diagnosed with a 50-mm, depressed-type early gastric cancer on the anterior wall of the angulus, and the lesion was curatively resected en bloc by endoscopic submucosal dissection. Pathology revealed a 54 mm × 43 mm differentiated-type predominant adenocarcinoma with focal undifferentiated-type component that was confined to the mucosa without ulceration. 
Eleven years after endoscopic submucosal dissection, lymph node metastasis along the lesser curve was incidentally detected on magnetic resonance imaging and diagnosed by endoscopic ultrasound-guided fine-needle aspiration biopsy. The patient received distal gastrectomy with D2 lymph node dissection. A resected lymph node revealed a well-differentiated adenocarcinoma with a poorly differentiated component; thus, the final diagnosis was late recurrence of early gastric cancer originally treated by endoscopic submucosal dissection. This report demonstrates that metastatic recurrence may occur in curative endoscopic submucosal dissection for early gastric cancer and that gastric cancer can recur even after more than 5 years. abstract_id: PUBMED:22741122 Clinical outcomes of endoscopic submucosal dissection for undifferentiated or submucosal invasive early gastric cancer. Background/aims: Early gastric cancer (EGC) that is undifferentiated or shows submucosal invasion has not been generally accepted as an indication for endoscopic treatment. But recently, experiences with endoscopic submucosal dissection (ESD) for undifferentiated EGC or submucosal invasive (SM) EGC have increased. The aim of this study was to evaluate clinical outcomes of ESD for EGC with undifferentiation or submucosal invasion. Methods: Between August 2005 and August 2009, among 210 EGCs treated using ESD at our hospital, 18 lesions were diagnosed as undifferentiated gastric cancer and 41 as SM gastric cancer. A retrospective analysis was done on the medical records of these patients. Results: Mean follow-up periods were 19.39±11.2 months. During the follow-up period, local recurrence was noted in 4 lesions. Local recurrence rates of the EGC groups (group 1, mucosal cancer with undifferentiation; group 2, SM cancer with differentiation; group 3, SM cancer with undifferentiation) were 10%, 4.5%, and 50%, respectively. Groups 1 and 2 were not significantly different in local recurrence rates compared to the mucosal cancer with differentiation group (p=0.061, p=0.125, respectively). The undifferentiated EGC group was significantly lower in curability using ESD than the differentiated EGC group (55.6% vs. 89.6%, p=0.000). The curability of the SM EGC group was lower than the mucosal EGC group (36.6% vs. 98.9%). Conclusions: Complete resection using ESD is difficult in undifferentiated and SM gastric cancers. SM cancer with undifferentiation should be treated immediately by salvage operation. For mucosal cancer with undifferentiation or SM cancer with differentiation, one should consider careful short-term follow-up. abstract_id: PUBMED:33666751 Clinical feasibility and oncologic safety of primary endoscopic submucosal dissection for clinical submucosal invasive early gastric cancer. Purpose: This study aimed to evaluate the clinical feasibility and oncologic safety of primary endoscopic submucosal dissection for cT1bN0M0 gastric cancer by identifying patients who had undergone curative resection after endoscopic submucosal dissection and patients who had undergone primary surgery but could be treated with endoscopic submucosal dissection. Methods: A single-center retrospective cohort study was conducted on patients with cT1bN0M0 gastric cancer (size: ≤ 30 mm) and differentiated histology from January 2007 to May 2017. Submucosal invasion was evaluated using conventional endoscopy and endoscopic ultrasonography. 
Patients were divided into the primary endoscopic submucosal dissection and primary surgery groups according to initial treatment. Results: The curative resection rate was 65.0% (91/140) in the primary endoscopic submucosal dissection group. Of patients in the primary surgery group, 49.1% (82/167) were considered eligible for endoscopic submucosal dissection. No differences in en bloc resection rate were observed between pT1a and pT1b gastric cancers after endoscopic submucosal dissection (97.8 vs. 97.9%). However, the negative horizontal margin rate was higher in pT1a gastric cancer than pT1b gastric cancer (98.9 vs. 91.3%). There was no difference in overall survival among the curative treatment groups (log rank P = 0.310). Conclusion: In this study, 173 (56.4%) out of 307 patients with cT1bN0M0 gastric cancer could avoid unnecessary surgery if primary endoscopic submucosal dissection was used. Based on these findings, it is necessary to add an appropriate indication for endoscopic submucosal dissection in patients with cT1b gastric cancer. Further management could be determined based on pathologic findings after primary endoscopic submucosal dissection. abstract_id: PUBMED:37574957 Clinical Outcome of Endoscopic Submucosal Dissection for Papillary Type Early Gastric Cancer: A Multicenter Study. Background/aims: Papillary adenocarcinoma is classified as differentiated-type gastric cancer and is indicated for endoscopic submucosal dissection. However, due to its rare nature, there are limited studies on it. The purpose of this study was to determine the outcome of endoscopic submucosal dissection in patients with papillary-type early gastric cancer and to find the risk factors of lymph node metastasis. Methods: Patients diagnosed with papillary-type early gastric cancer at eight medical centers, who underwent endoscopic submucosal dissection or surgical treatment, were retrospectively reviewed. The clinical results and long-term outcomes of post-endoscopic submucosal dissection were evaluated, and the risk factors of lymph node metastasis in the surgery group were analyzed. Results: One-hundred and seventy-six patients with papillary-type early gastric cancer were enrolled: 44.9% (n=79) in the surgery group and 55.1% (n=97) in the endoscopic submucosal dissection group. As a result of endoscopic submucosal dissection, the en bloc resection and curative resection rates were 91.8% and 86.6%, respectively. The procedure-related complication rate was 4.1%, and local recurrence occurred in 3.1% of patients. Submucosal invasion (odds ratio, 3.735; 95% confidence interval, 1.026 to 12.177; p=0.047) and lymphovascular invasion (odds ratio, 7.636; 95% confidence interval, 1.730 to 22.857; p=0.004) were the risk factors of lymph node metastasis in papillary-type early gastric cancer patients. Conclusions: The clinical results of endoscopic submucosal dissection in papillary-type early gastric cancer were relatively favorable, and endoscopic submucosal dissection is considered safe if appropriate indications are confirmed by considering the risk of lymph node metastasis. abstract_id: PUBMED:25083097 Endoscopic submucosal tunnel dissection salvage technique for ulcerative early gastric cancer. Endoscopic submucosal dissection is an effective treatment modality for early gastric cancer (EGC), though the submucosal fibrosis found in ulcerative EGC is an obstacle for successful treatment. This report presents two cases of ulcerative EGC in two males, aged 73 and 80 years, with severe fibrosis.
As endoscopic ultrasonography suggested that the EGCs had invaded the submucosal layer, the endoscopic submucosal tunnel dissection salvage technique was utilized for complete resection of the lesions. Although surgical gastrectomy was originally scheduled, the two patients had severe coronary heart disease, and surgeries were refused because of the risks associated with their heart conditions. The endoscopic submucosal tunnel dissection salvage technique procedures described in these cases were performed under conscious sedation, and were completed within 30 min. The complete en bloc resection of EGC using the endoscopic submucosal tunnel dissection salvage technique was possible with a free resection margin, and no other complications were noted during the procedure. This is the first known report concerning the use of the endoscopic submucosal tunnel dissection salvage technique for treatment of ulcerative EGC. We demonstrate that the endoscopic submucosal tunnel dissection salvage technique is a feasible method showing several advantages over endoscopic submucosal dissection for cases of EGC with fibrosis. abstract_id: PUBMED:31322655 Current indications of endoscopic submucosal dissection for early gastric cancer in Japan. Endoscopic submucosal dissection for early gastric cancer seems to be technically mature. For expanding its indications, the outcomes of endoscopic submucosal dissection must not be inferior to those of surgery, as early gastric cancer is mostly treated by surgery. Therefore, the principles for the indications of endoscopic submucosal dissection are as follows: (1) extremely low possibility of lymph node metastasis as the theoretical condition and (2) suitability for an en-bloc resection as the technical condition. Based on previous positive results, differentiated gastric cancer (>2 cm without ulcer [UL]; ≤3 cm with UL) has become an absolute indication for endoscopic submucosal dissection. Undifferentiated gastric cancer is also expected to become an absolute indication in the future. Thereafter, the complete indications of endoscopic submucosal dissection for early gastric cancer will be available. Further, more expanded indication criteria should be considered especially for elderly patients because of the invasiveness of gastrectomy. Answer: Yes, immunohistochemical staining (IHCS) does have a clinical impact in early gastric cancer (EGC) treated with endoscopic submucosal dissection (ESD). A study that aimed to evaluate clinicopathologic parameters and the clinical significance of lymphovascular invasion (LVI) detected by IHCS in ESD found that LVI by IHCS was observed only in the expanded indication groups for ESD, not in the absolute group. The positive rate of LVI by IHCS was higher than that by hematoxylin-eosin staining (HES), indicating that IHCS can detect LVI more effectively. LVI IHCS-positivity was associated with deeper invasion levels of cancer, such as mucosa 3 (M3) or submucosa 1 (SM1). The study concluded that while EGCs of the absolute group are immunohistochemically stable, the presence of LVI should be carefully examined by IHCS in an ESD expanded indication group with an invasion depth of M3 or greater (PUBMED:22969232).
Instruction: Does prehospital thrombolysis increase the proportion of patients who have an aborted myocardial infarction? Abstracts: abstract_id: PUBMED:19234016 Does prehospital thrombolysis increase the proportion of patients who have an aborted myocardial infarction? Background: An "aborted" myocardial infarction is defined as an acute coronary syndrome where there is rapid resolution of existing ST segment elevation associated with a rise in creatine kinase (CK) less than twice the upper limit of normal or a small troponin release compatible with minimal myocyte necrosis. Previous research has shown that earlier thrombolysis is associated with a higher rate of aborted infarction. It is also known that prehospital thrombolysis reduces the pain-to-needle time. Aim: To test the hypothesis that prehospital thrombolysis is associated with a higher incidence of aborted infarction in a UK setting. Methods: A retrospective analysis was performed for all patients given prehospital thrombolysis in the Avon sector catchment area of the Great Western Ambulance Service and Frimley Park Hospital between April 2004 and October 2006. The control group were patients given in-hospital thrombolysis at Frenchay Hospital or Frimley Park Hospital over the same period. Data reporting 12 h troponin levels, call-to-needle time, pain-to-needle time, door-to-needle time and incidence of aborted infarction were collected. Results: Of the patients receiving prehospital thrombolysis, 69% had a pain-to-needle time of 2 h or less compared with 40.4% of patients receiving in-hospital thrombolysis (p<0.001). The overall incidence of aborted infarction was 16.5%. Of those with aborted infarction for whom pain-to-needle times were available, 54% had a pain-to-needle time of <2 h. Despite the difference in pain-to-needle times in favour of prehospital thrombolysis, there was no difference in the incidence of aborted myocardial infarction between the prehospital thrombolysis cohort and the in-hospital cohort, with 18.2% of in-hospital patients having a troponin I level <0.5 ng/ml compared with 11.8% of the prehospital cohort (p = 0.124). Conclusion: Although prehospital thrombolysis improved pain-to-needle time and a shorter pain-to-needle time increased the incidence of aborted infarction, prehospital thrombolysis was not associated with an increase in the proportion of aborted myocardial infarctions. Further work is required to understand this unexpected finding. abstract_id: PUBMED:10735137 Benefit of prehospital thrombolysis in the treatment of acute myocardial infarction Randomized studies comparing early and late thrombolysis in the treatment of acute myocardial infarction show that mortality is lower if therapy is administered at an early stage, and especially if it is administered during the first hour of symptoms. Since only few patients can actually be treated within one or two hours, a search was made for time gaining strategies, including prehospital thrombolysis. In the region of Nijmegen, the Netherlands, it proved possible with prehospital treatment to treat 25% of the patients in the first hour after onset of ischaemic symptoms. The group given prehospital treatment included almost three times as many aborted infarctions as the group treated in hospital. In clinical practice, the proportion of aborted infarctions and more pronounced ECG abnormalities, namely, increase the probability of early prehospital treatment as well as the risk of death. 
Prerequisites for organizing prehospital thrombolysis in a region are equipment for ECG transmission by telephone, good co-operation between cardiologists and cardiologic care units, and special training of paramedics and general practitioners. abstract_id: PUBMED:25696316 Prehospital triage for angiography-guided therapy for acute myocardial infarction. Background: Time between symptom onset and effective reperfusion is of paramount importance in patients with acute myocardial infarction (MI) treated with reperfusion therapy. In the PHIAT (Pre-Hospital Infarction Angioplasty Triage) project, safety and feasibility of in-ambulance electrocardiography facilities for prehospital triage for direct transfer to an interventional centre to undergo immediate coronary angiography and angiography-guided therapy were evaluated. Methods And Results: The ambulances were equipped with a defibrillator and electrocardiography unit with computerised electrocardiographic analysis. Patients with acute MI symptoms and fulfilling certain criteria compatible with a large MI were included and pretreated with heparin and aspirin during transportation. During the study period, 284 patients were included. Eleven percent did not have an acute MI. PCI, performed in 94% (n=239) of the patients, was successful in 94%. Prehospital triage reduced time to treatment. In 32% of the patients triage resulted in direct transportation to the interventional centre instead of to the nearest community hospital. All-cause mortality was 9% after a mean follow-up of nine months. No serious bleeding complications were seen. Conclusion: Prehospital triage in the ambulance is safe and feasible. A striking percentage (11%) of the identified patients did not have an acute MI, more than has been reported from prehospital thrombolysis trials. abstract_id: PUBMED:31249961 Safety of Ticagrelor Compared to Clopidogrel after Prehospital Initiation of Treatment. Objectives: The objective of this registry was to study the safety of prehospital initiation of ticagrelor compared with clopidogrel. Background: Ticagrelor has replaced clopidogrel in many hospitals as the routinely used antiplatelet drug in patients with ST-segment elevation myocardial infarction (STEMI). Nevertheless, in the PLATelet inhibition and patient Outcomes (PLATO) trial, ticagrelor was associated with an increase in non-CABG (non-coronary artery bypass grafting)-related major bleeding. Data comparing the safety of ticagrelor and clopidogrel after prehospital initiation of treatment are not available. Methods: A retrospective, multicenter registry was performed. Selection criteria were the administration of a prehospital loading dose of ticagrelor or clopidogrel according to the ambulance STEMI treatment protocol and the presentation to a percutaneous coronary intervention-capable hospital in our region between January 2011 and December 2012. Follow-up was performed using the electronic patient files for the time period between the antiplatelet loading dose and hospital discharge. The data were analyzed using a primary bleeding end point (any bleeding) and a secondary thrombotic end point (all-cause mortality, spontaneous myocardial infarction, definite stent thrombosis, stroke, or transient ischemic attack). Results: Data of 304 clopidogrel-treated and 309 ticagrelor-treated patients were available for analysis. No significant difference in bleeding rate was observed between the two groups, using univariate (17.8 vs.
20.1%; p = 0.47; odds ratio, 1.16 [95% confidence interval, 0.78-1.74]) and multivariate (p = 0.42) analysis. Also for the secondary thrombotic end point (6.3 vs. 4.9%, p = 0.45), no significant differences were observed. Conclusion In this real-world registry, no significant differences in bleeding or thrombotic event rate were found between ticagrelor and clopidogrel after prehospital initiation of treatment. abstract_id: PUBMED:25696044 Time from symptom onset to treatment and outcome in prehospital thrombolysis for acute ST-elevation myocardial infarction. Background: Prehospital thrombolysis for acute ST-elevation myocardial infarction shortened treatment by 60 minutes and created a large patient group who were treated within two hours. Objectives: We analysed our database of patients undergoing prehospital treatment for acute ST-elevation myocardial infarction in search of characteristics associated with a better outcome in the early treatment group. Methods: From 1994 to 2000 a total of 475 patients were treated using prehospital administration of anistreplase (in 407 patients) or reteplase (in 68 patients) after diagnosis was confirmed with transtelephonic transmission of the ECG. There was no age limit. The patient data were divided into two groups: one treated within two hours after onset of pain (291 patients, 62%), and one treated later (171 patients, 37%). Thirty-day mortality, symptoms and clinical signs of heart failure were used as parameters of outcome. Both univariate and stepwise logistic regression analyses were used to test 30-day mortality against age, actual time to treatment, prior myocardial infarction, hypertension, diabetes, anterior myocardial infarction, Killip class, systolic blood pressure and heart rate at presentation. Results: Overall 30-day mortality was 9.1%. Heart failure occurred in 16.6% of patients overall. Both mortality (5.5% vs. 15.5%, p<0.02) and heart failure (12.7% vs. 23.2%, p<0.02) were significantly lower in the early treatment group compared with the group treated late. Independent parameters showing a relation with 30-day mortality were age, time to treatment, hypertension and prior myocardial infarction. Age, time to treatment, hypertension and hyperlipidaemia were identified as predicting heart failure within the first 30 days. Conclusion: With prehospital thrombolysis, both 30-day mortality and heart failure were lower in an early treatment group with acute ST-elevation myocardial infarction. Independent variables for 30-day mortality were age, hypertension, prior myocardial infarction and time to treatment, and age, hypertension, hyperlipidaemia and time to treatment were independent predictors for heart failure. abstract_id: PUBMED:18447834 Prehospital thrombolytic therapy with tenecteplase in patients with ST-elevation myocardial infarction We carried out prehospital thrombolytic therapy (TLT) with tenecteplase in 24 patients with ST-elevation myocardial infarction (MI) against a background of standard concomitant therapy with unfractionated heparin, aspirin, and clopidogrel, and analyzed the clinical course of MI, ECG dynamics, and systolic left ventricular function according to echocardiography. In 20 patients coronary angiography was performed after TLT. Mean symptom-to-needle time was 113 min. In 14 patients (58%) TLT was carried out within 2 hours after the appearance of symptoms. In 19 patients the course of the disease was uncomplicated.
Electrocardiographic criteria of effective reperfusion were registered in 14 patients (58%), and angiographic criteria in 13 patients (65% of those who underwent angiography). Percutaneous intervention was performed in 18 patients (75%). Systolic left ventricular dysfunction was revealed in 5 patients (21%), and aborted MI in 4 patients. A significant correlation was found between aborted MI (r=0.38, p=0.034) and uncomplicated MI (r=0.40, p=0.027) and performance of TLT during the first 2 hours. There were no hemorrhagic complications. The results of the study provide evidence of the high efficacy of prehospital thrombolysis with tenecteplase and the real possibility of its use within the existing system of urgent cardiological care. abstract_id: PUBMED:10532513 Effect of prehospital thrombolysis on aborting acute myocardial infarction. On administering thrombolysis in a prehospital setting, we found a threefold increase in the incidence of abortion of myocardial infarction, compared with the in-hospital program of a nearby hospital. Assessment of aborted myocardial infarction may be a better criterion for the efficacy of early thrombolysis than mortality data. abstract_id: PUBMED:38006762 Context-independent identification of myocardial ischemia in the prehospital ECG of chest pain patients. Non-traumatic chest pain is a frequent reason for an urgent ambulance visit to a patient by the emergency medical services (EMS). Chest pain (or chest pain-equivalent symptoms) can be innocent, but it can also signal an acute form of severe pathology that may require prompt intervention. One of these pathologies is cardiac ischemia, resulting from an imbalance between blood supply and demand. One cause of a diminished blood supply to the heart is acute coronary syndrome (ACS, i.e., cardiac ischemia caused by a reduced blood supply to myocardial tissue due to plaque instability and thrombus formation in a coronary artery). ACS is dangerous due to the unpredictable process that drives the supply problem and the high chance of fast hemodynamic deterioration (i.e., cardiogenic shock, ventricular fibrillation). This is why an ECG is made at first medical contact in most chest pain patients to include or exclude ischemia as the cause of their complaints. For speedy and adequate triaging and treatment, immediate assessment of this prehospital ECG is necessary, still during the ambulance ride. Human diagnostic efforts supported by automated interpretation algorithms seek to answer questions regarding the urgency level, the decision whether and to which healthcare facility the patient should be transported, and the indicated acute treatment and further diagnostics after arrival in the healthcare facility. In the case of an ACS, a catheter intervention room may be activated during the ambulance ride to facilitate the earliest possible in-hospital treatment. Prehospital ECG assessment and the subsequent triaging decisions are complex because chest pain is not uniquely associated with ACS. The differential diagnosis includes other cardiac, pulmonary, vascular, gastrointestinal, orthopedic, and psychological conditions. Some of these conditions may also involve ECG abnormalities. In practice, only a limited fraction (order of magnitude 10%) of the patients who are urgently transported to the hospital because of chest pain are ACS patients.
Given the relatively low prevalence of ACS in this patient mix, the specificity of the diagnostic ECG algorithms should be relatively high to prevent overtreatment and overflow of intervention facilities. On the other hand, only a sufficiently high sensitivity warrants adequate therapy when needed. Here, we review how the prehospital ECG can contribute to identifying the presence of myocardial ischemia in chest pain patients. We discuss the various mechanisms of myocardial ischemia and infarction, the typical patient mix of chest pain patients, the shortcomings of the ST-elevation myocardial infarction (STEMI) and non-ST-elevation myocardial infarction (NSTEMI) ECG criteria to detect a completely occluded culprit artery, the OMI ECG criteria (including the STEMI-equivalent ECG patterns) in detecting completely occluded culprit arteries, and the promise of neural networks in recognizing ECG patterns that represent complete occlusions. We also discuss the relevance of detecting any ACS/ischemia, not necessarily caused by a total occlusion, in the prehospital ECG. In addition, we discuss how serial prehospital ECGs can contribute to ischemia diagnosis. Finally, we discuss the diagnostic contribution of a serial comparison of the prehospital ECG with a previously made nonischemic ECG of the patient. abstract_id: PUBMED:15940314 General practitioner-based prehospital thrombolysis in acute myocardial infarction Background: If properly organised, prehospital thrombolysis in acute myocardial infarction saves time to treatment and improves outcome. We have evaluated a quality assurance program for general practitioner-based (GP-based) prehospital thrombolysis. Material And Methods: GPs and the local ambulance service in eight local communities went through a one-day training programme. The GPs interpreted the ECGs themselves and initiated prehospital thrombolysis according to a set of criteria. Patients with prehospital thrombolysis were prospectively compared with patients receiving hospital thrombolysis during the same time period. Results: From 1999 to 2001, 69 patients received prehospital thrombolysis: 50% of all patients receiving thrombolysis. No complications were attributed to prehospital administration; 66 out of 69 patients received thrombolysis on a correct indication. The median call to needle time was reduced from 145 to 63 minutes in patients treated before arrival at a hospital, which gives a median of 82 minutes of time saved. Interpretation: GP-initiated prehospital thrombolysis is safe after proper training and saves clinically important time to treatment compared with hospital administration. abstract_id: PUBMED:12695450 Abortion of acute ST segment elevation myocardial infarction after reperfusion: incidence, patients' characteristics, and prognosis. Objectives: To study the incidence and patient characteristics of aborted myocardial infarction in both prehospital and in-hospital thrombolysis. Design: Retrospective, controlled, observational study. Setting: Two cities in the Netherlands, one with prehospital thrombolysis, one with in-hospital treatment. Patients: 475 patients with suspected acute ST elevation myocardial infarction treated before admission to hospital, 269 patients treated in hospital. Main Outcome Measures: Aborted myocardial infarction, defined as the combination of subsiding of cumulative ST segment elevation and depression to < 50% of the level at presentation, together with a rise of creatine kinase of less than twice the upper normal concentration. 
A stepwise regression analysis was used to test independent predictors for aborted myocardial infarction. Results: After correction for "unjustified" thrombolysis, 17.1% of the 468 prehospital treated patients and 4.5% of the 264 in-hospital treated patients fulfilled the criteria for aborted myocardial infarction. There was no difference in age, sex, risk factors, haemodynamic status, and infarct location of aborted myocardial infarction compared with established myocardial infarction. Time to treatment was shorter in the patients with aborted myocardial infarction (86 versus 123 minutes, p = 0.05). A shorter time to treatment, lower ST elevation at presentation, and higher incidence of preinfarction angina were independent predictors for aborted myocardial infarction. Aborted myocardial infarction had a 12 month mortality of 2.2%, significantly less than the 11.6% of established myocardial infarction. Conclusion: Prehospital thrombolysis is associated with a fourfold increase of aborted myocardial infarction compared with in-hospital treatment. A shorter time to treatment, a lower ST elevation, and a higher incidence of preinfarction angina were predictors of aborted myocardial infarction. Answer: Yes, prehospital thrombolysis does increase the proportion of patients who have an aborted myocardial infarction. Studies have shown that earlier thrombolysis, which is more likely to occur in a prehospital setting, is associated with a higher rate of aborted infarction (PUBMED:10735137; PUBMED:10532513; PUBMED:12695450). Specifically, one study found that prehospital thrombolysis led to almost three times as many aborted infarctions as in-hospital treatment (PUBMED:10735137), and another study reported a fourfold increase in aborted myocardial infarction with prehospital thrombolysis compared to in-hospital treatment (PUBMED:12695450). Additionally, a shorter time to treatment, which is a key advantage of prehospital thrombolysis, was identified as an independent predictor for aborted myocardial infarction (PUBMED:12695450). However, it is important to note that one study found no difference in the incidence of aborted myocardial infarction between the prehospital thrombolysis cohort and the in-hospital cohort, despite improved pain-to-needle time with prehospital thrombolysis (PUBMED:19234016). This suggests that while prehospital thrombolysis can increase the proportion of aborted myocardial infarctions, the relationship may not be consistent across all settings and requires further investigation to understand the variability in outcomes.
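To make the size of these between-cohort differences concrete, the comparison in PUBMED:12695450 (aborted infarction in 17.1% of 468 prehospital-treated versus 4.5% of 264 in-hospital-treated patients) can be checked with a standard two-proportion chi-square test. The following is a minimal Python sketch; the integer counts (80 and 12) are reconstructed from the published rates and denominators and are an assumption, not published raw data.

from scipy.stats import chi2_contingency

# Counts reconstructed from reported rates (assumption, not raw data):
# 17.1% of 468 prehospital patients ~ 80; 4.5% of 264 in-hospital patients ~ 12.
aborted = [80, 12]
totals = [468, 264]
table = [[aborted[0], totals[0] - aborted[0]],
         [aborted[1], totals[1] - aborted[1]]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"aborted MI: {aborted[0] / totals[0]:.1%} vs {aborted[1] / totals[1]:.1%}; "
      f"chi2 = {chi2:.1f}, p = {p:.1e}")

On these assumed counts the difference is highly significant, which is consistent with the fourfold increase reported in that abstract.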
Instruction: Assessment of female patients with rectal intussusception and prolapse: is this a progressive spectrum of disease? Abstracts: abstract_id: PUBMED:23652754 Assessment of female patients with rectal intussusception and prolapse: is this a progressive spectrum of disease? Background: Rectal intussusception may be the initial abnormality of a progressive pelvic floor disorder culminating in external prolapse. The evidence, however, is unclear, and the pathophysiological mechanisms underlying this condition are unknown. Objective: The aim of this study is to identify the relationship between age, symptom duration, anorectal physiology parameters, and type of intussusception/prolapse in order to appreciate the natural history of the condition. Design: All female patients diagnosed proctographically with rectorectal/rectoanal intussusception or external prolapse between 1994 and 2007 were studied. Demographics, symptom duration, and anorectal physiology results were compared between these proctographic groups. Patients with repeat proctographic evaluation were also analyzed separately. Settings: This investigation was conducted at a tertiary academic colorectal center. Patients: A total of 1014 women (median age, 51; range, 16-96), including 32 who underwent repeat proctography, were analyzed. Main Outcome Measures: The primary outcomes measured were the differences in median age, symptom duration, and anorectal physiology parameters between the proctographic groups. Results: The cohort exhibited a statistically significant difference (p = 0.0001) in the median age of the proctographic groups, with older patients diagnosed with rectoanal rather than rectorectal intussusception, which was supported by uni- and multivariate modeling. Symptom duration was statistically different (p = 0.0002) between the rectorectal intussusception (60 months; range, 1-936) and external rectal prolapse patient groups (36 months; range, 2-732). Patients with external rectal prolapse had statistically lower anal resting (median, 41 versus 77 cmH2O) and squeeze pressures (median, 40 versus 56 cmH2O) than patients with rectorectal intussusception. Within 2 years, 19.2% and 3.8% of patients with rectorectal intussusception on the initial proctogram demonstrated progression to rectoanal intussusception and external prolapse, respectively. Limitations: This study was limited by its retrospective nature. Conclusion: Rectal intussusception may be an initial abnormality leading to external prolapse, but this appears to happen infrequently. Long-term observational studies are required to fully understand its natural history. abstract_id: PUBMED:27452753 Efficacy and safety of the Jinling procedure in the treatment of adult Hirschsprung disease Objective: To investigate the safety, efficacy and long-term outcomes of the Jinling procedure in the treatment of adult Hirschsprung disease. Methods: Clinical and follow-up data of 125 patients with adult Hirschsprung disease undergoing the Jinling procedure at the Department of General Surgery between January 2000 and January 2013 were summarized. All the patients were diagnosed by CT, barium enema, anorectal pressure detection and pathology examination.
Abdominal symptoms, gastrointestinal quality of life index (GIQLI; a lower score indicates worse quality of life), Wexner constipation score (a higher score indicates worse symptoms), defecography (evaluation included rectocele, mucosal prolapse, intramucosal intussusception and perineal prolapse) and operative complications were compared before and after operation. Results: Among 125 patients, 69 were male and 56 were female, with a median age of 41.2±15.5 (range, 18 to 75) years. The follow-up rates were 94.4% (118/125), 92.0% (115/125), 89.6% (112/125) and 88.0% (110/125) at postoperative months 1, 3, 6, and 12. Incidences of abdominal distension and abdominal pain were 100% and 82.4% (103/125) before operation, and were 7.3% (8/110) and 20.9% (23/110) at 12 months after surgery. The Wexner score was significantly lower at postoperative months 1 (8.7±2.9), 3 (7.2±2.8), 6 (6.7±2.2) and 12 (6.3±1.7) than before operation (21.4±7.2) (P<0.01). The GIQLI score was 51.6±11.9 before operation; although it decreased at 1 month after surgery (47.3±5.5) (P<0.05), it increased significantly at postoperative months 3, 6 and 12 (68.9±8.0, 96.5±8.2, 103.2±8.6) (P<0.01). The rate of abnormal defecography findings was 70.4% (81/115), 48.2% (54/112) and 27.3% (30/110) at postoperative months 3, 6 and 12, significantly lower than the 91.2% (114/125) observed before operation (P<0.01). The overall postoperative complication rate was 29.6% (37/125), including 5 cases of surgical site infection (4.0%), 2 of anastomotic bleeding (1.6%), 8 of anastomotic leakage (6.4%; one died of severe abdominal infection), 4 of urinary retention (3.2%), 3 of recurrent constipation (2.4%, without megacolon relapse), 11 of bowel obstruction (8.8%), 2 of anastomotic stricture (1.6%) and 2 of refractory Staphylococcus aureus enteritis (1.6%; diagnosed by stool smear and culture, and both eventually died). Conclusion: The Jinling procedure is a safe and effective surgical procedure for adult Hirschsprung's disease. abstract_id: PUBMED:24411299 Symptom distribution and anorectal physiology results in male patients with rectal intussusception and prolapse. Background: Rectal intussusception and external rectal prolapse are uncommon proctographic findings in men, reflecting the lack of studies investigating such patients. The aim of this study was to identify the demographic, clinical, and physiological characteristics of this population with a view to appreciating the mechanism of development of this condition. Methods: All men, presenting with symptoms of constipation or fecal incontinence, who were diagnosed proctographically with recto-rectal intussusception (RRI)/recto-anal intussusception (RAI) or external rectal prolapse (ERP) between 1994 and 2007 at a tertiary academic colorectal unit were studied. Demographics, relevant comorbidities, distribution and symptom duration, and anorectal physiology results were analyzed retrospectively for each proctographic group and intergroup comparisons performed. Results: Two hundred five men (median age 50 y; range, 13-86), including 155 (75.6%) without any relevant comorbidities, were studied. A significant proportion of patients in all proctographic groups reported rectal evacuatory difficulty ([RRI, 46.4%], [RAI, 39.4%], [ERP, 44.8%]; P = 0.38, analysis of variance). Patients also reported a combination of fecal incontinence symptoms (e.g., urge, passive, postdefecatory leakage) that did not differ across the proctographic groups.
Anorectal physiological parameters were within the normal range and were not found to be statistically different between the proctographic groups, with the exception of anal resting pressure, which was lowest in ERP patients (62 cm H2O; range, 14-155) compared with patients with RRI (89 cm H2O; range, 16-250; P = 0.003) and RAI (92 cm H2O; range, 38-175; P = 0.006). Conclusions: Men with rectal intussusception and prolapse present with a combination of symptoms, predominantly defective rectal evacuation. Anorectal physiological assessment has failed to shed light on the mechanism of development of this condition, and thus there is a need for large observational studies incorporating integrated defecographic and manometric assessments of the evacuation process. abstract_id: PUBMED:34357449 Solitary rectal ulcer syndrome (SRUS): observational case series findings on MR defecography. Objective: Radiological findings in solitary rectal ulcer syndrome (SRUS) are well described for evacuation proctography (EP) but sparse for magnetic resonance defecography (MRD). In order to rectify this, we describe the spectrum of MRD findings in patients with histologically proven SRUS. Materials And Methods: MRD from twenty-eight patients (18 females; 10 males) with histologically confirmed SRUS were identified. MRD employed a 1.5-T magnet and a standardized technique with the rectal lumen filled with gel and imaged sagittally in the supine position, before, during, and after attempted rectal evacuation. A single radiologist observer with 5 years' experience in pelvic floor imaging made the anatomical and functional measurements. Results: Sixteen patients (10 female) demonstrated internal rectal intussusception and 3 patients (11%) demonstrated complete external rectal prolapse. Anterior rectoceles were noted in 12 female patients (43%). Associated anterior and middle compartment weakness (evidenced by excessive descent) was observed in 18 patients (64%). Cystocele was found in 14 patients (50%) and uterine prolapse was noted in 7 patients (25%). Enterocoeles were detected in 5 patients (18%) and peritoneocoele in 5 patients (18%). None had sigmoidocoele. Sixteen patients (57%) demonstrated delayed voiding and 13 patients (46%) incomplete voiding, suggesting defecatory dyssynergia. Conclusion: MRD can identify and grade both rectal intussusception and dyssynergia in SRUS, and also depict associated anterior and/or middle compartment descent. Distinction between structural and functional findings has important therapeutic implications. Key Points: MRD can identify and grade both rectal intussusception and dyssynergia in patients with SRUS. MRD is an acceptable substitute for evacuation proctography in assessing anorectal dysfunctions when attempting to avoid ionizing radiation. SRUS influences the pelvic floor globally. MRD depicts associated anterior and/or middle compartment prolapse. abstract_id: PUBMED:11257618 Endometriosis of the intestinal tract: a study of 44 cases of a disease that may cause diverse challenges in clinical and pathologic evaluation. Endometriosis of the intestinal tract may mimic a number of diseases both clinically and pathologically. The authors evaluated 44 cases of intestinal endometriosis in which endometriosis was the primary pathologic diagnosis, and evaluated them for a variety of gross and histologic changes. Cases with preneoplastic or neoplastic changes were excluded specifically because they were the subject of a previous study.
The patients ranged in age from 28 to 56 years (mean age, 44 years), and presenting complaints included abdominal pain (n = 15), an abdominal mass (n = 12), obstruction (n = 8), rectal bleeding (n = 2), infertility (n = 3), diarrhea (n = 2), and increasing urinary frequency (n = 1). The clinical differential diagnoses included diverticulitis, appendicitis, Crohn's disease, tubo-ovarian abscess, irritable bowel syndrome, carcinoma, and lymphoma. Forty-two patients underwent resection of the diseased intestine and two patients underwent endoscopic biopsies. In 13 patients there were predominantly mural masses, which were multiple in two patients (mean size, 2.6 cm). In addition, 11 cases had luminal stenosis or strictures, six had mucosal polyps, four had submucosal masses that ulcerated the mucosa (sometimes simulating carcinoma), three had serosal adhesions, one had deep fissures in the mucosa, and one was associated with appendiceal intussusception. Involvement of the lamina propria or submucosa was identified in 29 cases (66%) and, of these, 19 had features of chronic injury including architectural distortion (n = 19), dense lymphoplasmacytic infiltrates (n = 7), pyloric metaplasia of the ileum (n = 1), and fissures (n = 1). Three cases had features of mucosal prolapse (7%), ischemic changes were seen in four (9%), and segmental acute colitis and ulceration were seen in four and six cases (9% and 13%), respectively. In 14 patients, endometriosis formed irregular congeries of glands involving the intestinal surface epithelium, mimicking adenomatous changes. Mural changes included marked concentric smooth muscle hyperplasia and hypertrophy, neuronal hypertrophy and hyperplasia, and fibrosis of the muscularis propria with serositis. Follow-up of 20 patients (range, 1-30 years; mean, 7.8 years) revealed that only two patients had recurrent symptoms. None of the patients developed inflammatory bowel disease. Endometriosis can involve the intestinal tract extensively, causing a variety of clinical symptoms, and can result in a spectrum of mucosal alterations. Because the endometriotic foci may be inaccessible to endoscopic biopsy or may not be sampled because of their focality, clinicians and pathologists should be aware of the potential of this condition to mimic other intestinal diseases. abstract_id: PUBMED:38001430 Laparoscopic ventral mesh rectopexy vs. transperineal mesh repair for obstructed defecation syndrome associated with rectocele: comparison of selectively distributed patients. Purpose: Obstructed defecation syndrome accounts for 50-60% of patients with symptoms of constipation. We aimed to compare the two frequently performed surgical methods, laparoscopic ventral mesh rectopexy and transperineal mesh repair, for this condition in terms of functional and surgical outcomes. Methods: This study is a retrospective review of 131 female patients who were diagnosed with obstructed defecation syndrome, attributed to rectocele with or without rectal intussusception, enterocele, hysterocele or cystocele, and who underwent either laparoscopic ventral mesh rectopexy or transperineal mesh repair. Patients were evaluated for surgical outcomes based on the operative time, the length of hospital stay, and operative complications, using prospectively designed charts. Functional outcome was assessed by using the Initial Measurement of Patient-Reported Pelvic Floor Complaints Tool.
Results: Fifty-one patients diagnosed with complex rectocele underwent laparoscopic ventral mesh rectopexy, and 80 patients diagnosed with simple rectocele underwent transperineal mesh repair. Mean age was found to be 50.35 ± 13.51 years, and mean parity 2.14 ± 1.47. Obstructed defecation symptoms significantly improved in both study groups, as measured by the Colorectal Anal Distress Inventory, Constipation Severity Instrument and Patient Assessment of Constipation-Symptoms scores. Minor postoperative complications including wound dehiscence (n = 3) and wound infection (n = 2) occurred in the transperineal mesh repair group. Conclusion: Laparoscopic ventral mesh rectopexy and transperineal mesh repair are efficient and comparable techniques in terms of improvement in constipation symptoms related to obstructed defecation syndrome. A selective distribution of patients with or without multicompartmental prolapse to one of the treatment arms might be the preferred strategy. abstract_id: PUBMED:3830377 Rectal prolapse. Rectal prolapse occurs mostly in the geriatric female patient and can be a very disabling condition. The etiology is intussusception of the rectosigmoid secondary to excessive and prolonged straining. Medical therapy for this disease process is not helpful and patients will require a surgical procedure. The two best surgical procedures for the correction of rectal prolapse are low anterior resection of the rectosigmoid and proctopexy. A few patients who are unfit for laparotomy may require the Thiersch Wire procedure. Two unresolved problems after surgical therapy are continuing constipation and incontinence. Constipation is treated by dietary measures, stool softeners, and periodic enemas. Laxatives are to be discouraged. Incontinence in patients with rectal prolapse improves in most patients after a procedure to correct the prolapse. In those patients in which incontinence persists, no form of therapy has been found to be uniformly successful. abstract_id: PUBMED:23412016 Pelvic floor ultrasonography: an update. Female pelvic floor dysfunction encompasses a number of highly prevalent clinical conditions such as female pelvic organ prolapse, urinary and fecal incontinence, and sexual dysfunction. The etiology and pathophysiology of those conditions are, however, not well understood. Recent technological advances have seen a surge in the use of imaging, both in research and clinical practice. Among the techniques available such as sonography, X-ray, computed tomography and magnetic resonance imaging, ultrasound is superior for pelvic floor imaging, especially in the form of perineal or translabial imaging. The technique is safe with no radiation, simple, cheap, easily accessible and provides high spatial and temporal resolutions. Translabial or perineal ultrasound is useful in determining residual urinary volume, detrusor wall thickness, bladder neck mobility and in assessing pelvic organ prolapse as well as levator function and anatomy. It is at least equivalent to other imaging techniques in diagnosing, such diverse conditions as urethral diverticula, rectal intussusception and avulsion of the puborectalis muscle. Ultrasound is the only imaging method capable of visualizing modern slings and mesh implants and may help selecting patients for implant surgery. Delivery-related levator injury seems to be the most important etiological factor for pelvic organ prolapse and recurrence after prolapse surgery, and it is most conveniently diagnosed by pelvic floor ultrasound. 
This review gives an overview of the methodology. Its main current uses in clinical assessment and research will also be discussed. abstract_id: PUBMED:1052468 Esophago-gastric invagination in patients with sliding hiatus hernia. Intussusception of the distal esophagus into a reducible hiatus hernia is described in nine female and three male patients. The main radiographic feature is demonstration of a lobulated fundal mass of changeable size and configuration surrounding the narrowed distal esophageal segment. This pseudotumor is produced by inversion of the hiatus hernia into the stomach, and may be mistaken for a neoplasm. Disinvagination invariably occurs when maneuvers directed toward demonstration of a sliding hernia are utilized during upper gastrointestinal fluoroscopy. It is emphasized that esophago-gastric invagination frequently accounts for masses shown in the cardia of older women with intermittent dysphagia and crampy epigastric pain. abstract_id: PUBMED:22486952 Analysis of experience in the use of dynamic pelvic magnetic resonance imaging in the assessment of obstructive defaecation syndrome Introduction: The aetiological diagnosis of obstructive defaecation syndrome (ODS) requires, among other investigations, imaging tests. The purpose of this study is to descriptively analyse and compare the findings of dynamic pelvic magnetic resonance imaging (DPMRI) with the clinical examinations in patients with ODS. Material And Methods: A prospective comparative study was made between the physical examination and the DPMRI, with a descriptive analysis of the results. A total of 30 patients were included (2 males and 28 females), with a median age of 60 (range 23-76) years, with symptoms of ODS. An anamnesis, a detailed physical examination and a DPMRI were performed on all of them. Functional (anismus) and morphological changes (rectocele, enterocele, intussusception, etc.) were analysed. Results: The physical examination did not detect anomalies in 6 (20%) patients. A rectocele was diagnosed in 21 (70%) of the cases, and a rectal mucosal prolapse in 2 (6.7%). The DPMRI showed evidence of pelvic floor laxity in 22 (73.3%) cases, an enterocele in 4 (13.3%), a sigmoidocele in 2 (6.7%), intussusception in 8 (26.7%), rectal mucosal prolapse in 4 (13.3%), anismus in 3 (10%), and a cystocele in 4 (13.3%). The rectocele was the most frequent diagnosis, being given in 26 (86.6%) patients. Conclusions: Magnetic resonance imaging provides an overall pelvic assessment with good definition of the tissues, does not use ionising radiation, is well tolerated, and provides complementary information to arrive at the diagnosis and establish the best treatment for ODS. Larger studies comparing videodefaecography (VD), currently considered the gold standard technique, are needed to demonstrate whether it is superior to DPMRI. Answer: The assessment of female patients with rectal intussusception and prolapse suggests that these conditions may be part of a progressive spectrum of pelvic floor disorders. A study examining female patients diagnosed proctographically with rectorectal/rectoanal intussusception or external prolapse found statistically significant differences in median age, with older patients more likely to be diagnosed with rectoanal rather than rectorectal intussusception. Symptom duration also differed between the rectorectal intussusception and external rectal prolapse patient groups, with the latter having a shorter duration of symptoms.
Additionally, patients with external rectal prolapse had statistically lower anal resting and squeeze pressures than those with rectorectal intussusception. Within 2 years, a small percentage of patients with rectorectal intussusception on the initial proctogram demonstrated progression to rectoanal intussusception and external prolapse, suggesting that rectal intussusception may be an initial abnormality leading to external prolapse, although this appears to happen infrequently (PUBMED:23652754). In a separate study, pelvic floor ultrasonography was highlighted as a useful tool for assessing pelvic floor dysfunction, including pelvic organ prolapse, urinary and fecal incontinence, and sexual dysfunction. Ultrasound is particularly beneficial for visualizing modern slings and mesh implants and may help in selecting patients for implant surgery. It is also capable of diagnosing delivery-related levator injury, which is an important etiological factor for pelvic organ prolapse and recurrence after prolapse surgery (PUBMED:23412016). Another study on obstructed defecation syndrome (ODS) compared laparoscopic ventral mesh rectopexy and transperineal mesh repair for patients diagnosed with rectocele with or without additional pelvic floor disorders. Both surgical methods were found to be efficient and comparable in terms of improvement in constipation symptoms related to ODS. The study suggested that selectively distributing patients with or without multicompartmental prolapse to one of the treatment arms might be the preferred strategy (PUBMED:38001430). Overall, these findings indicate that rectal intussusception and prolapse may indeed be part of a progressive spectrum of disease, with various factors such as age, symptom duration, and anorectal physiology parameters playing a role in the progression of the condition.
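The age gradient across proctographic groups in PUBMED:23652754 is the kind of comparison usually tested with a nonparametric two-sample test on skewed clinical data. A minimal Python sketch of the approach follows; the ages are simulated from hypothetical distributions whose medians loosely echo the reported gradient, and none of the values are patient data.

import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(42)
# Simulated ages (years) for two hypothetical proctographic groups; the
# location shift mimics the reported older age of rectoanal cases.
rectorectal_ages = rng.normal(loc=48, scale=13, size=150)
rectoanal_ages = rng.normal(loc=56, scale=13, size=150)

u_stat, p_value = mannwhitneyu(rectorectal_ages, rectoanal_ages,
                               alternative="two-sided")
print(f"U = {u_stat:.0f}, p = {p_value:.2g}")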
Instruction: Is it possible to differentiate early-onset accommodative esotropia from early-onset essential esotropia? Abstracts: abstract_id: PUBMED:12928681 Is it possible to differentiate early-onset accommodative esotropia from early-onset essential esotropia? Purpose: To determine the frequency of ≥2.50 diopter (D) hyperopia in infantile esotropia with onset up to the age of 6 months and, by evaluating the treatment results of these cases retrospectively, to find the factors that may help to differentiate early-onset accommodative esotropia from early-onset essential esotropia. Methods: The charts of 256 patients with infantile esotropia were reviewed. Thirty-seven cases, with hyperopia of ≥2.50 D, no other systemic and neurologic disease, and at least 1 year of follow-up, were included in this study. The age at the start of therapy, refractive error, deviation angle, type of therapy (antiaccommodative therapy, surgery) and the presence of amblyopia, latent nystagmus, inferior oblique overaction, dissociated vertical deviation and cross-fixation were recorded for each case. Results: The prevalence of high hyperopia was found to be 14.4% (37/256) in infantile esotropia. In 18 of the cases (48.6%), antiaccommodative therapy alone was found to be adequate (Group I). In the remaining 19, although antiaccommodative therapy was found to decrease the deviation angle significantly (P<0.001), surgery was also required (Group II). Groups were compared with respect to age at the initial examination, refractive error, deviation angle, presence of amblyopia, anisometropia, and inferior oblique overaction, but no factor could be determined to predict the pure refractive ones (P>0.05). Essential infantile esotropia-associated findings did not help because they are rarely evident at the time of initial diagnosis. Conclusions: Half of the high hyperopic infantile esotropes could be corrected fully by antiaccommodative therapy alone, while the remaining ones could also benefit significantly. It is strongly recommended to try spectacles first in the treatment of infantile esotropia with hyperopia ≥2.5 D. abstract_id: PUBMED:10646748 Early-onset refractive accommodative esotropia. Introduction: We studied the natural history of pure refractive accommodative esotropia after spectacle correction in patients with onset before 1 year old to determine whether their outcomes and characteristics were different from those of patients with more typical age at onset of refractive accommodative esotropia. Methods: We retrospectively reviewed the charts of 17 children with onset of refractive accommodative esotropia before 1 year old. Records of 26 children with onset after 2 years old were reviewed as controls. Results: The mean ages at diagnosis were 9 months and 48 months for the study and control groups, respectively. All 17 study patients and all 26 control patients were initially well aligned with spectacles at distance and near. Follow-up averaged 34 months for study patients and 41 months for control patients. Three study patients (17.6%) and 1 control patient (3.8%) had eventual deterioration and required strabismus surgery (P = .28). None of the study patients developed amblyopia, whereas 42% of control patients did (P = .001). Seven of 15 (47%) of the study patients with known birth history were born prematurely, whereas only 3 of 24 (12%) control patients were born prematurely (P = .03). Conclusions: Refractive accommodative esotropia was diagnosed as early as age 4 months.
Prematurity appeared to be a risk factor. Amblyopia was not detected in any patient with early-onset refractive accommodative esotropia. Treatment with full hyperopic spectacle correction led to long-term stable alignment, with relatively few patients requiring surgery. abstract_id: PUBMED:22496003 Early onset of acquired comitant non-accommodative esotropia in childhood. Background: The aim of this study was to describe the clinical characteristics and surgical outcome of acquired comitant non-accommodative esotropia without obvious cause in children with an early onset ≤ three years of age. Patients And Methods: 24 consecutive patients (13 females) with acquired comitant non-accommodative esotropia underwent strabismus surgery. Outcome measures included amount of deviation and level of binocularity at last follow-up. Results: An acute onset of strabismus was observed in 83 % of patients. Accompanying features such as diplopia, covering one eye or stumbling were observed in 40 % of the children. Cycloplegic retinoscopy revealed a mild to moderate hypermetropia in all children (mean hypermetropia + 2.2 diopters). 90.5 % of children were aligned within 8 PD of orthotropia. 81 % regained normal stereovision (Lang I/II). Conclusions: All children exhibited the typical features of acute acquired comitant esotropia (AACE) type II (Burian-Franceschetti). Collective characteristics of this form of strabismus are a comitant and relatively large deviation, mild hypermetropia, a potential of normal binocular cooperation and absence of an accommodative component and neurological pathology. Time and suddenness of strabismus onset may be less helpful in defining this entity. abstract_id: PUBMED:10204132 Early-onset accommodative esotropia. Purpose: To determine the frequency of accommodative esotropia with onset by 6 months of age; to determine if the presence or absence of characteristics usually associated with infantile esotropia can help in the diagnosis; and to determine if antiaccommodative therapy is adequate treatment for the esotropia. Methods: The charts of 100 patients with infantile esotropia, seen over a 2-year period (September 1995 to September 1997), were reviewed. We identified those with at least 2.25 diopters (D) of hyperopia and determined the presence of large angle esotropia (> 30 to 40 prism diopters [delta]), amblyopia, inferior oblique overaction, dissociated vertical deviation, latent nystagmus, and cross-fixation. The success of antiaccommodative therapy, if attempted, was also evaluated. Results: Of 100 patients with infantile esotropia, 15 (15%) were found to have at least +2.25 D. This represented 8% of all patients with accommodative esotropia seen over the same time period. The average age at initial examination was 21 months, although the reported age of onset in all cases was 6 months or less. Two had surgery before presenting to our institute. Eleven of 13 (84%) had 40 delta or less. Six (40%) of the 15 had amblyopia, 5 (33%) had inferior oblique overaction, 3 (20%) had dissociated vertical deviation, 1 (7%) had latent nystagmus, and 4 (27%) had cross-fixation. Of the 13, 7 were given glasses initially, with 3 being fully corrected. Six were not given glasses, all had surgery, and all were given glasses postoperatively for a residual esotropia. Conclusion: Fifteen percent of infantile esotropia patients and 8% of accommodative esotropia patients have infantile accommodative esotropia. 
Other characteristics of infantile esotropia may be present, but are less frequent, and at least 40% are fully corrected with spectacles, indicating that when the hyperopia is 2.25 D or greater, antiaccommodative therapy should be instituted before surgery. abstract_id: PUBMED:28904218 Early Onset Sixth-Nerve Palsy with Eccentric Fixation. Purpose: To report four cases of early onset sixth-nerve palsy, all of whom had eccentric fixation. Methods: A retrospective case note review was undertaken of all cases presenting to the senior author's private and NHS practice with early onset sixth-nerve palsy between 2006 and 2012. As well as demographic information, details of ophthalmic, orthoptic, and electrophysiological examinations and radiological investigations were extracted from the records. Results: Four children with unilateral or asymmetric early onset sixth-nerve palsy were identified, of which three were congenital. All four had MRI and only one had a normal MRI. Age at presentation ranged from 14 to 42 months, but all four had marked esotropia and poor visual acuities in the worst affected eye with eccentric fixation, which became more easily or only noticeable after surgical correction. Three patients with congenital sixth-nerve palsy underwent vertical muscle transposition with Botulinum Toxin A (BTXA) to the ipsilateral medial rectus, and two of these patients also had Foster sutures to the transposed vertical muscles. The fourth patient had unilateral medial rectus recession and lateral rectus resection. The mean preoperative measurement was 55Δ ET (range 50-60Δ), and the mean postoperative measurement was 11Δ ET (range 16XT-25ET) at near, and 2Δ XT (range 15XT-14ET) at distance. Conclusions: We speculate that early onset paralytic strabismus due to congenital sixth-nerve palsy results in an inability to cross-fixate, which leads to the development of eccentric fixation. Attempts to use reverse occlusion to negate the eccentric fixation failed. We therefore recommend early surgery for this condition to avoid this sequela. abstract_id: PUBMED:7395949 Early-onset accommodative esotropia. Twenty-one patients had esotropia diagnosed before age 12 months. In all of these patients esotropia was eliminated, at least initially, by antiaccommodative therapy consisting of either full hypermetropic spectacles or miotics, or both. Clinical findings evaluated in these patients were comparable to similar clinical factors studied in a larger series of patients with accommodative esotropia, most with onset after 12 months of age. In approximately 50% of these patients early-onset accommodative esotropia deteriorated to a nonaccommodative esotropia that required surgery despite aggressive antiaccommodative therapy. All but one of the surgically treated patients continued to need hypermetropic spectacles postoperatively to maintain alignment. abstract_id: PUBMED:27116548 Longitudinal Development of Refractive Error in Children With Accommodative Esotropia: Onset, Amblyopia, and Anisometropia. Purpose: We investigated longitudinal changes of refractive error in children with accommodative esotropia (ET) throughout the first 12 years of life, its dependence on age at onset of ET, and whether amblyopia or anisometropia are associated with defective emmetropization. Methods: Longitudinal refractive errors in children with accommodative ET were analyzed retrospectively.
Eligibility criteria included: initial hyperopia ≥ +4.00 diopters (D), initial cycloplegic refraction before 4 years, at least 3 visits, and at least one visit between 7 and 12 years. Children were classified as having infantile (N = 30; onset ≤ 12 months) or late-onset (N = 78; onset at 18-48 months) accommodative ET. Cycloplegic refractions culled from medical records were converted into spherical equivalent (SEQ). Results: Although the initial visit right eye SEQ was similar for the infantile and late-onset groups (+5.86 ± 1.28 and +5.67 ± 1.26 D, respectively), there were different developmental changes in refractive error. Neither group had a significant decrease in hyperopia before age 7 years, but after 7 years, the infantile group experienced a myopic shift of -0.43 D/y. The late-onset group did not experience a myopic shift at 7 to 12 years. Among amblyopic children, a slower myopic shift was observed for the amblyopic eye. Among anisometropic children, the more hyperopic eye experienced more myopic shift than the less hyperopic eye. Conclusions: Children with infantile accommodative ET experienced prolonged hyperopia followed by a myopic shift after 7 years of age, consistent with dissociation between infantile emmetropization and the school-age myopic shift. In contrast, children with late-onset accommodative ET had little myopic shift before or after 7 years. abstract_id: PUBMED:10505839 Early onset accommodative esotropia. N/A abstract_id: PUBMED:12075294 Risk factors for the development of accommodative esotropia following treatment for infantile esotropia. Purpose: One aim of the study was to determine whether accommodative esotropia after surgical alignment in infantile esotropia occurs because a pre-existing accommodative component is unmasked at the time of surgery or whether it occurs as a sequela of infantile esotropia. A second aim of the present study was to examine risk factors for accommodative esotropia after surgery for infantile esotropia. Methods: A total of 80 consecutive patients who had been enrolled in a prospective study of infantile esotropia, had been followed for more than 4 years, and had achieved orthoposition were included. Twelve potential risk factors were examined: age at onset, initial esodeviation, initial refractive error, age at alignment, delay in alignment, presurgical glasses, amblyopia, additional surgical procedures, unstable alignment, increase in hypermetropia, peripheral fusion, and stereopsis. Mantel-Haenszel odds ratios were computed for each factor and were corrected to relative risks. Results: Overall, 48 of 80 children (60%) developed accommodative esotropia at a mean age of 33 months. Increasing hypermetropia, delay in alignment, and poor stereopsis posed significant risks for accommodative esotropia. The remaining 9 factors were not associated with increased risk for accommodative esotropia. Conclusions: Accommodative esotropia is unlikely to be a pre-existing condition in most cases because the mean age of onset was 23 months postoperatively and the prevalence of preoperative hypermetropia greater than +3.00 D was low. Both the delay-in-alignment and stereopsis risk factors may reflect compromised binocular sensory status that allows accommodative esotropia to occur at low to moderate levels of hypermetropia. Identification of children treated for infantile esotropia who are at risk for accommodative esotropia may allow for prevention or early treatment.
abstract_id: PUBMED:26669419 Preoperative Factors Affecting Stereopsis after Surgical Alignment of Acquired Partially Accommodative Esotropia. Background/aim: Despite successful ocular realignment, many strabismus patients never develop high levels of stereopsis. The purpose of this study was to determine preoperative factors that affect postoperative stereopsis in patients with acquired partially accommodative esotropia (APAET). Methods: This was a retrospective chart review of patients who underwent successful surgery for APAET. We compared preoperative factors between patients achieving postoperative stereopsis of 100 seconds of arc or better versus those with worse than 100 seconds of arc. Results: Fifty-seven patients met our inclusion criteria. Twenty-four (42%) had a final stereopsis of 100 seconds of arc or better. The mean age of onset of esodeviation for patients attaining stereopsis of 100 seconds of arc or better was 31.8 ± 12.9 months, versus 23.9 ± 10.0 months (p = 0.012) for patients with worse than 100 seconds of arc. Duration of constant misalignment was not significantly different between the two groups (30.1 ± 18.5 months for patients attaining 100 seconds of arc versus 27.3 ± 18.6 months; p = 0.57). A multivariate regression analysis found older age of onset to be the only predictive factor for achieving better postoperative stereopsis (odds ratio 1.065, 95% CI: 1.014-1.118). Conclusion: Age of onset appears to be the most important factor affecting postoperative stereopsis in patients with APAET. Patients with an age of onset after 36 months tended to have better outcomes regardless of the duration of misalignment. Duration of misalignment and age at surgery did not have a significant impact on postoperative stereopsis in our patient population. Answer: Differentiating early-onset accommodative esotropia from early-onset essential esotropia can be challenging. A study that aimed to determine the frequency of ≥2.50 diopter (D) hyperopia in infantile esotropia with onset up to the age of 6 months found that half of the high hyperopic infantile esotropes could be corrected fully by antiaccommodative therapy alone, while the remaining ones could also benefit significantly from such therapy. However, no factor could be determined to predict the pure refractive ones, and essential infantile esotropia-associated findings did not help because they are rarely evident at the time of initial diagnosis (PUBMED:12928681). Another study on early-onset refractive accommodative esotropia diagnosed before 1 year old found that treatment with full hyperopic spectacle correction led to long-term stable alignment, with relatively few patients requiring surgery. Prematurity appeared to be a risk factor, and amblyopia was not detected in any patient with early-onset refractive accommodative esotropia (PUBMED:10646748). In contrast, a study on early onset of acquired comitant non-accommodative esotropia in childhood with an onset ≤ three years of age described the clinical characteristics and surgical outcome of this condition, noting that all children exhibited mild to moderate hypermetropia and a potential of normal binocular cooperation, with absence of an accommodative component and neurological pathology (PUBMED:22496003). Another study on early-onset accommodative esotropia found that 15% of infantile esotropia patients and 8% of accommodative esotropia patients have infantile accommodative esotropia.
The study suggested that when the hyperopia is 2.25 D or greater, antiaccommodative therapy should be instituted before surgery (PUBMED:10204132). In summary, while it is possible to differentiate early-onset accommodative esotropia from early-onset essential esotropia, the differentiation may not be straightforward and often requires careful evaluation of the refractive error, response to antiaccommodative therapy, and the presence of associated findings.
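The myopic shift of roughly -0.43 D/y reported for infantile accommodative ET in PUBMED:27116548 is simply the slope of spherical-equivalent refraction regressed against age. A minimal Python sketch of that calculation follows; the (age, SEQ) pairs are hypothetical values for a single child, chosen only to illustrate the arithmetic.

import numpy as np

# Hypothetical longitudinal refractions after age 7 (not patient data).
ages = np.array([7.0, 8.1, 9.0, 10.2, 11.1, 12.0])  # years
seq = np.array([5.9, 5.4, 5.0, 4.6, 4.1, 3.8])      # spherical equivalent, D

slope, intercept = np.polyfit(ages, seq, deg=1)  # least-squares line
print(f"estimated refractive drift: {slope:.2f} D/y")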
Instruction: Do Atmospheric Changes and the Synodic Lunar Cycle Affect the Development of Spontaneous Pneumothorax? Abstracts: abstract_id: PUBMED:26324030 Do Atmospheric Changes and the Synodic Lunar Cycle Affect the Development of Spontaneous Pneumothorax? Background: Pneumothorax is defined as air in the pleural space. The etiology of spontaneous pneumothorax (SP) is still under investigation and, despite many studies, remains uncertain. The aim of this study was to investigate the effects of the lunar cycle and daily weather changes on SP development. Methods: The data of patients admitted to our clinic with SP were analysed retrospectively. The atmospheric pressure, relative humidity and temperature in degrees Celsius for each day were obtained. The mean values for each day, from the first to the 29th day, of the synodic lunar cycle (SLC) were calculated for the five-year study period. The attacks were allocated to the appropriate day of an ideal 29-day SLC, irrespective of the calendar date. Results: A total of 131 patients who were admitted to our hospital with SP (130 males and 1 female; average age, 32.4±12.2 years) were included in this study. The number of patients with SP showed a statistically significant correlation with mean atmospheric pressure (p=0.005), relative humidity (p=0.007) and outdoor temperature (p=0.02) but not with the SLC. Conclusions: SP is significantly influenced by weather-related factors. Changes in atmospheric pressure, humidity and outdoor temperature had obvious effects on the development of SP. However, the SLC had no effect on SP. abstract_id: PUBMED:11735326 Onset of spontaneous pneumothorax and the synodic lunar cycle. The relation between spontaneous pneumothorax events and the synodic lunar cycle was studied in a retrospective analysis of patients with a first or recurrent spontaneous pneumothorax. The study included a total of 244 patients, 203 males and 41 females. The data were arranged along the first (new moon) to 28th day (one day before another new moon) of the synodic lunar cycle. Periodogram analysis revealed a 14-day rhythm, significant for the male and pooled samples. Cosinor analysis found the whole synodic lunar cycle and its 2nd, 4th and 5th harmonics to be significant. Maximal accumulation of cases happened 1 week before and 1 week after the new moon. Mechanisms of a putative moon influence are not clear. abstract_id: PUBMED:6489489 Spontaneous pneumothorax related to falls in atmospheric pressure. The relation of changes in atmospheric pressure to admissions due to radiologically verified spontaneous pneumothorax was studied in 282 consecutive patients. A fall in atmospheric pressure of at least 10 mbar within 24 h was followed by a statistically significant increase in the number of admissions 2 days later. abstract_id: PUBMED:32082775 Correlation between meteorological changes and primary spontaneous pneumothorax: Myth or fact? Background: This study aims to investigate the relationship between meteorological changes and the development of primary spontaneous pneumothorax. Methods: Medical records of 1,097 patients (975 males, 122 females; mean age 23.5±4.2 years; range, 17 to 32 years) admitted to our hospital with a diagnosis of primary spontaneous pneumothorax between January 2010 and January 2014 were evaluated retrospectively. Daily mean values for air temperature, wind speed, humidity rate and atmospheric pressure obtained from the local meteorological observatory were recorded.
The four-year study period was divided into two groups: days with at least one primary spontaneous pneumothorax development (group 1) and days without any primary spontaneous pneumothorax development (group 2). Results: Within the study period of a total of 1,461 days, 1,097 cases were recorded on the 759 days on which primary spontaneous pneumothorax was observed. Eighty-nine percent of the patients were male. There was no significant difference between the groups in terms of mean air temperature, humidity rate, and wind speed. Atmospheric pressure was significantly lower in group 1 (p<0.001). A decrease in atmospheric pressure with respect to the previous day increased the risk of primary spontaneous pneumothorax development significantly (p<0.001). Conclusion: In our study, low atmospheric pressure and significant pressure decreases showed a strong correlation with primary spontaneous pneumothorax. Temperature, wind speed, and humidity values did not influence primary spontaneous pneumothorax development. abstract_id: PUBMED:10492270 Atmospheric pressure changes and outdoor temperature changes in relation to spontaneous pneumothorax. Study Aims: To examine the influence of atmospheric pressure (AP) and temperature changes on the incidence of idiopathic spontaneous pneumothorax (SP). Methods: From December 1991 through November 1993, 115 consecutive SP cases were selected. Patients were included after being in Amsterdam at least 1 full day before contracting the SP. Differences in air temperature and AP (provided hourly by the national weather bureau) between the days of SP occurrence and the days preceding them were recorded. The correlation between days with lightning and SP, as well as clustering of SP, was evaluated. Results: SP occurred on 14.7% of the days in the 2-year period. There was no relationship between SP and a rise or fall in AP (Poisson regression). There was an average temperature rise of 0.57 degrees C from the day prior to the day of the SP, compared with a 0.08 degrees C fall on the days without SP. This difference was statistically significant and was consistent over the four seasons and both years. Seventy-three percent of the SP cases were clustered. A relationship between SP and thunderstorms was found. Conclusions: AP differences do not seem to influence the chance of developing SP. SP occurs in clusters, and more often 1 to 2 days after thunderstorms. It is unlikely that the identified temperature rise prior to the SP is a causative factor; coexisting weather phenomena might explain this unexpected finding and should be studied in the future. abstract_id: PUBMED:26034222 Do atmospheric conditions influence the first episode of primary spontaneous pneumothorax? Objectives: Several studies suggest that changes in airway pressure may influence the onset of primary spontaneous pneumothorax (PSP). The aim of this study was to investigate the influence of atmospheric changes on the onset of the first episode of PSP. Methods: We retrospectively analysed cases of pneumothorax admitted to our department between 1 January 2009 and 31 October 2013. Patients with recurrent pneumothorax, traumatic pneumothorax, older than 35 years or presenting a history of underlying pulmonary disease were excluded. Meteorological data were collected from the Météo-France archives.
Variations (Δ) of mean atmospheric pressure and relative humidity were calculated for each day: the day on which symptoms began (D-day), the day before the first symptoms (D-1), 2 days before the first symptoms (D-2) and 3 days before the first symptoms (D-3). Results: Six hundred and thirty-eight cases of pneumothorax were observed during the period of this study; 106 of them (16.6%) were a first episode of PSP. We did not observe any significant differences between days with or without PSP admission for any of the weather parameters that we tested. We could not find any thresholds in the variation of atmospheric pressure that could be used to determine the probability of PSP occurrence. Conclusions: Variations of atmospheric pressure, relative humidity, rainfall, wind speed and temperature were not significantly related to the onset of the first episode of PSP in healthy patients. These results suggest that the scientific community should focus on possible aetiological factors other than atmospheric pressure modifications. abstract_id: PUBMED:30638790 The relationship between the occurrence of spontaneous pneumothorax and atmospheric pressure. Introduction: This study analyzes the relationship between the occurrence of spontaneous pneumothorax and atmospheric pressure. Patients And Methods: We conducted a retrospective study on 74 patients hospitalized for spontaneous pneumothorax in the department of respiratory diseases of the Ibn Roshd University Hospital of Casablanca between January 2013 and June 2014. The value of atmospheric pressure at the time of the occurrence of pneumothorax was recorded and analyzed. Results: The mean age was 46 years; there were 70 men and 4 women. Eighty-one percent of patients were smokers and 89% lived in an urban zone. Overall, 79.7% of pneumothoraces occurred in anticyclonic conditions, with a median atmospheric pressure of 1018±4.18 hPa. In all, 12.2% of pneumothoraces occurred during atmospheric depressions, with a median atmospheric pressure of 1010±1.92 hPa. The remaining 8.1% occurred in isobaric conditions. Pneumothoraces occurred significantly more often during periods of high pressure than during other periods (P<0.001; odds ratio: 28.41; relative risk 6.56 [3.52<RR<12.22]). There was no difference in age or in the number of patients with chronic obstructive pulmonary disease or idiopathic pneumothorax between the atmospheric condition groups. Conclusion: Increased atmospheric pressure is a risk factor for spontaneous pneumothorax independent of age or the presence of underlying lung disease. abstract_id: PUBMED:11996760 Spontaneous pneumothorax and atmospheric pressure. In order to assess factors associated with spontaneous pneumothorax (SP), mainly climatic changes, a prospective study was undertaken of 62 SP episodes among patients admitted to our hospital during a two-year period, from January 1994 to January 1996. Atmospheric pressure (AP) changes were analyzed, with daily recording of the number of unusual changes in AP (increases above the 95th percentile and decreases below the 5th percentile) and of how many of these changes were followed by some episode of spontaneous pneumothorax during the following five days. To measure the degree of this association between the emergence of pneumothorax and exposure to unusual changes in AP, the relative risk (RR) was calculated. A total of 77 unusual AP changes were observed, 17 of which concurred with the emergence of 8 episodes of SP during the following five days, with an RR of 2.7 (1.6-4.4).
Although the etiology of SP is unknown and probably of multifactorial origin, these data suggest that unusual changes in AP may play a relevant role in triggering this condition. abstract_id: PUBMED:27666936 Atmospheric temperature and pressure influence the onset of spontaneous pneumothorax. Background: The aim of the study was to examine the influence of changes in atmospheric temperature (ATemp) and atmospheric pressure (APres) on the occurrence of spontaneous pneumothorax (SP). Patients And Methods: From January 2000 to March 2014, 192 consecutive SP events were examined. The ATemp and APres data at the onset of SP, as well as those data at 12, 24, 36, 48, 60, and 72 h prior to the onset time, were analyzed. Results: The frequencies of SP occurrence were not statistically different according to the months or seasons, but were statistically different according to the time of day (P < .01), and SP events occurred most frequently from 12:00 to 18:00. SP events frequently occurred at an ATemp of 25 degrees Celsius or higher. There was a significantly negative correlation between the APres and the ATemp at the SP onset time. The values of change in the APres from 36 to 24 h prior to SP onset were significantly lower than the preceding values. Conclusions: In this study, we observed that an SP event was likely to occur in the time period from 12:00 to 18:00, at an ATemp of 25 degrees Celsius or higher, and at 24-36 h after a drop in APres. abstract_id: PUBMED:29404132 Do Meteorological Changes Have an Effect on the Occurrence of Spontaneous Pneumothorax? Objectives: Spontaneous pneumothorax refers to the leakage of air into the space between the parietal and the visceral layers of the pleura. It occurs with or without a known lung disease. We aimed to investigate the effects of atmospheric pressure, humidity, and temperature changes on the incidence of spontaneous pneumothorax (SP). Material And Methods: This study included 551 patients with spontaneous pneumothorax retrospectively screened between January 2009 and December 2013. The medical data of the patients were accessed via their medical records on the hospital automation system. The atmospheric pressure, temperature, humidity rate, amount of precipitation, and wind velocity on the day of spontaneous pneumothorax were obtained from the data provided by the general directorate of meteorology. Three consecutive days on which at least 2 cases of SP presented were collectively considered a cluster. The study data were analyzed with the SPSS version 15 software package, using the chi-square and Student's t tests. A p value less than 0.05 was considered statistically significant. Results: Of the 552 patients included in the study, 89.3% had primary spontaneous pneumothorax and 10.7% had secondary spontaneous pneumothorax. Ninety-two percent of the patients were male and 8% were female. The mean age was 24 years. Clustering was observed in 71.7% of the study population. No significant differences were observed between yearly and monthly SP incidences. There were, however, differences between the days with SP and the days without SP with respect to atmospheric pressure, ambient temperature, wind velocity, and humidity rate. The differences in atmospheric pressure were not statistically significant, although the differences in ambient temperature and humidity rate were statistically significant (p ≤ 0.05).
Conclusion: We determined that changes in ambient temperature and humidity rate affected the incidence of spontaneous pneumothorax. Answer: The development of spontaneous pneumothorax (SP) appears to be influenced by atmospheric changes, but the effect of the synodic lunar cycle on SP development is less clear and somewhat controversial based on the provided abstracts. Several studies have found a significant correlation between atmospheric changes and the incidence of SP. For instance, a decrease in atmospheric pressure was followed by an increase in the number of admissions for SP two days later (PUBMED:6489489), and low atmospheric pressure and significant pressure decreases showed a strong correlation with primary spontaneous pneumothorax (PUBMED:32082775). Another study found that increased atmospheric pressure is a risk factor for SP independent of age or the presence of underlying lung disease (PUBMED:30638790). Additionally, unusual changes in atmospheric pressure may play a relevant role in triggering SP (PUBMED:11996760), and SP events were observed to be likely 24-36 hours after a drop in atmospheric pressure (PUBMED:27666936). However, the relationship between SP and the synodic lunar cycle is less consistent. One study found no effect of the synodic lunar cycle on SP (PUBMED:26324030), while another study revealed a 14-day rhythm significant for the male and pooled sample, with maximal accumulation of cases happening 1 week before and 1 week after the new moon (PUBMED:11735326). This suggests that there may be some influence of the lunar cycle on SP, but it is not as well established as the influence of atmospheric changes. In summary, atmospheric changes, particularly those related to atmospheric pressure, seem to have a clear effect on the development of SP, while the influence of the synodic lunar cycle on SP is less definitive and may require further investigation to fully understand any potential relationship.
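To make the lunar-cycle method concrete: the study in PUBMED:26324030 allocated attacks to the appropriate day of an ideal 29-day synodic lunar cycle irrespective of calendar date. Below is a minimal Python sketch of one way such binning can be done, assuming a mean synodic month of 29.530588 days and the new moon of 6 January 2000 as the reference epoch; these values and the scaling to 29 bins are illustrative assumptions, not the study's actual procedure.

```python
# Hypothetical sketch: map an event date to day 1..29 of an idealized
# synodic lunar cycle (day 1 = new moon). Reference epoch and binning
# are assumptions for illustration only.
from datetime import datetime

MEAN_SYNODIC_MONTH = 29.530588                      # mean lunation length, in days
REFERENCE_NEW_MOON = datetime(2000, 1, 6, 18, 14)   # a known new moon (UTC)

def synodic_cycle_day(event: datetime) -> int:
    """Return the day (1..29) of an ideal 29-day synodic lunar cycle."""
    elapsed_days = (event - REFERENCE_NEW_MOON).total_seconds() / 86400.0
    phase = elapsed_days % MEAN_SYNODIC_MONTH       # days since last new moon
    # Rescale the ~29.53-day lunation onto the ideal 29-day cycle.
    return int(phase * 29 / MEAN_SYNODIC_MONTH) + 1

# Usage: tally admissions by cycle day, irrespective of calendar date.
admissions = [datetime(2014, 3, 1), datetime(2014, 3, 16), datetime(2014, 3, 30)]
counts: dict[int, int] = {}
for event in admissions:
    day = synodic_cycle_day(event)
    counts[day] = counts.get(day, 0) + 1
print(counts)
```

Once events are binned this way, a roughly uniform distribution across the 29 days argues against a lunar effect, which is essentially what PUBMED:26324030 reported.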
Instruction: Does weight for height have prognostic significance in children with acute lymphoblastic leukemia? Abstracts: abstract_id: PUBMED:8037340 Does weight for height have prognostic significance in children with acute lymphoblastic leukemia? Purpose: We tested the hypothesis that weight for height, a simple index of nutritional status, is related to prognosis in childhood acute lymphoblastic leukemia (ALL). Patients And Methods: The study population was composed of 78 children with ALL treated at one U.K. center on the same protocol (UKALL-X). Outcome measures were relapse/no relapse and time to first relapse. Influence of weight for height, expressed as standard deviation scores, was tested using survival analysis in a retrospective design. Results: The weight-for-height standard deviation score had a significant influence on time until first relapse (log-rank test, p = 0.012), with the highest risk of early relapse in children at the lower end of the weight-for-height distribution. Conclusions: The results suggest that weight for height does have an influence on outcome in ALL, but the mechanism is unclear and the finding requires confirmation by larger-scale prospective studies. abstract_id: PUBMED:28644306 The Associations of Height-for-Age, Weight-for-Age, and Weight-for-Height With Pediatric Acute Lymphoblastic Leukemia. Height at diagnosis has been analyzed in connection with acute lymphoblastic leukemia (ALL). Most prior studies have compared cases to national reference data derived from previous birth cohorts. Our objective was to determine the association of height-for-age Z score (HAZ) at time of diagnosis with the odds ratio (OR) of ALL in a case-control study (N=498) with a contemporaneous population of age-matched, sex-matched, and ethnicity-matched controls. We hypothesized that cases would have greater mean HAZ at time of diagnosis/interview, after adjustment for weight-for-age (WAZ) and weight-for-height (WHZ). HAZ was not associated with ALL. For each 1 SD increase in WAZ the OR of ALL was 0.83 (95% confidence interval [CI], 0.68-0.99). The OR of ALL was increased among children who had either a WAZ of ≤-2 (OR, 5.10; 95% CI, 1.85-16.75) or a WHZ of ≤-2 (OR, 5.27; 95% CI, 1.65-23.61). Previous findings of taller height among ALL cases may arise from the choice of control populations. Children with low WAZ or WHZ were at increased odds of ALL. abstract_id: PUBMED:12885815 Height and weight in children treated for acute lymphoblastic leukemia: relationship to CNS treatment. Purpose: We evaluated the long-term effects of treatment on height and weight in children with acute lymphoblastic leukemia (ALL) treated with one of the following three different CNS therapies: intrathecal therapy alone, intrathecal therapy with conventional cranial radiation, or intrathecal therapy with twice-daily radiation. Patients And Methods: Between 1987 and 1995, 618 children treated on two consecutive Dana-Farber Cancer Institute Consortium protocols for ALL were measured for height and weight at diagnosis, and approximately every 6 months thereafter. Patient height, weight, and body mass index (BMI) were converted to z scores for age and sex using the 2000 Centers for Disease Control and Prevention growth charts for the United States. Results: Children younger than 13 years at diagnosis had a statistically significant decrease in their height z scores and an increase in their BMI z scores, regardless of whether they had received cranial radiation.
Young age at diagnosis and increased chemotherapy intensity were major risk factors. Unexpectedly, there was no significant difference in long-term height between children who received radiation and those who did not. Conclusion: Final height is compromised in survivors of ALL. The detrimental effects on height occur during therapy without the ability for long-term catch-up growth. Although patients became overweight for height, this seemed to be a result of relative height loss with normal weight gain rather than accelerated weight gain. The type of CNS treatment received did not affect changes in height, weight, or BMI. abstract_id: PUBMED:30358906 Changes in body mass index, height, and weight in children during and after therapy for acute lymphoblastic leukemia. Background: Children with acute lymphoblastic leukemia (ALL) have an increased risk of obesity and short stature. To the authors' knowledge, data regarding patients treated on contemporary protocols without cranial irradiation are limited. Methods: Changes in z scores for body mass index (BMI), height, and weight from the time of diagnosis to 5 years off therapy were evaluated using multivariable analysis in 372 children with ALL who were aged 2 to 18 years at the time of diagnosis and were enrolled on the St. Jude Children's Research Hospital Total XV protocol from 2000 through 2007. Results: The percentage of overweight/obese patients increased from 25.5% at the time of diagnosis to approximately 50% during the off-therapy period. Median BMI z scores increased significantly during glucocorticoid therapy (induction: ∆0.56; 95% confidence interval [95% CI], 0.29-0.64 [P<.001]; and reinduction II: ∆0.22; 95% CI, 0.13-0.49 [P=.001]) and during the first year after therapy (∆0.18; 95% CI, 0.08-0.46 [P=.006]). Among patients who were of healthy weight/underweight at the time of diagnosis, those aged 2 to <10 years at diagnosis had a significantly higher risk of becoming overweight/obese during or after therapy compared with those aged ≥10 years (P=.001). Height z scores declined during treatment and improved after therapy. Being aged 2 to <10 years at the time of diagnosis, being of low-risk status, having a white blood cell count < 50×10⁹/L at the time of diagnosis, and having negative central nervous system disease were associated with significantly better improvements in z scores for height during the off-therapy period compared with being aged ≥10 years, being of standard-risk/high-risk status, having a white blood cell count ≥ 50×10⁹/L, and having positive central nervous system disease, respectively. Conclusions: The results of the current study demonstrate that obesity is prevalent, and height growth, especially in patients with identified risk factors, appears compromised. Multidisciplinary intervention should begin during induction therapy and continue during the off-therapy period. abstract_id: PUBMED:3578168 Comparable effects of 1800- and 2400-rad (18- and 24-Gy) cranial irradiation on height and weight in children treated for acute lymphocytic leukemia. To examine the effects of "low-dose" cranial irradiation on growth and to determine if one can predict patients in whom growth will be most affected, we studied 47 children with acute lymphocytic leukemia who had been treated with 2400 rad (24 Gy), 1800 rad (18 Gy), or no whole-brain irradiation. Serial measurements of height, weight, and weight for height were obtained by retrospective chart review.
The effects of 1800 rad (18 Gy) and 2400 rad (24 Gy) treatment were indistinguishable. Height percentiles among irradiated patients decreased by a mean of 12% six months after diagnosis, and growth generally did not catch up. Moreover, although 33 irradiated patients maintained heights within the normal range, in 11 patients (33%) a dramatic falloff occurred such that by three years following diagnosis their height for age was more than 30 percentiles below the original value. These patients were all identifiable at six months since their height percentiles had already decreased by more than 15%. Although weight percentiles did not change following irradiation, the weight-for-height ratio increased and patients were relatively stockier three years after therapy than they had been at diagnosis. In patients who had received chemotherapy alone, the weight-for-height ratio also increased, but this appeared to be due to a disproportionate increase in weight. Longer follow-up and evaluation of larger cohorts of patients treated with 1800 rad (18 Gy) will be needed to confirm these results. abstract_id: PUBMED:3863545 Weight gain and height velocity during prolonged first remission from acute lymphoblastic leukaemia. A retrospective analysis of the medical records of 86 children in prolonged remission from acute lymphoblastic leukaemia was performed to calculate changes in the rate of increase in height and weight gain. The rate of increase in height decreased during initial treatment, and the potential for final adult height was not regained. Weight gain was excessive; this started during treatment and persisted into the remission years. Values of weight adjusted for height did not return to values found before treatment until eight years after diagnosis. Several factors can account for this weight gain, but there is a practical need to provide dietary advice, particularly when chemotherapy is stopped. abstract_id: PUBMED:9628245 Final height and weight of long-term survivors of childhood malignancies. The aim of this study was to investigate growth and final height in young adults after therapy for malignant diseases. Final height and weight were studied in 50 long-term survivors (LTS) of childhood cancer (aged 17-31 years; 30 men, 20 women) 3-18 years after treatment for malignant diseases (7 acute lymphoblastic leukemia, 20 lymphoma, 8 sarcoma, 15 malignant central nervous system [CNS] tumours). None of the LTS had been treated with growth hormone (GH). A decrease in final height SDS (standard deviation score) occurred in both LTS of malignant CNS tumours (median height SDS at diagnosis, 0.3; range, -0.9 to 2.2; median final height SDS, -1.3; range, -3.9 to 1.9; p < 0.01) and LTS of lymphoma (p < 0.05) or leukemia (p < 0.05). However, only LTS who received cranial (p < 0.05) or craniospinal (p < 0.001) irradiation (XRT) exhibited reduced final heights. LTS who had received XRT not involving the CNS or had received no XRT at all presented no reduction in final height. LTS of CNS tumours treated with high craniospinal XRT doses (24 to 56 Gy) reached lower (p < 0.01) final heights when compared with LTS of leukemia who received lower cranial XRT doses (18 to 24 Gy). Final height SDS correlated with chronological age at initiation of therapy (p < 0.05). No correlation was found between the cumulative doses of applied chemotherapeutic agents and the final height of LTS. During follow-up, LTS developed an increase in weight for height index (WFH) which occurred independently of XRT.
In conclusion, cranial and craniospinal XRT, especially in young children with malignancies, resulted in a decrease in final height SDS. As 6 of 15 LTS of malignant CNS tumours exhibited a final height SDS below -2 SD, analysis of pituitary function and substitution of GH after diagnosis of GH deficiency should be considered for these patients at a young age. Other factors not directly related to XRT are responsible for the increased risk for obesity in LTS of childhood cancer. abstract_id: PUBMED:26855505 Growth in Children Treated for Acute Lymphoblastic Leukemia: Single Institution Report from North India. Survivors of childhood leukemia are at risk of impaired growth and short stature as adults due to intensive combination chemotherapy and radiation injury. This study was undertaken to evaluate anthropometry in children treated for acute lymphoblastic leukemia (ALL). Children treated for ALL and off treatment for a minimum period of 2 years were evaluated for height, weight and BMI. Z scores were calculated for height, weight and BMI: at induction, 6 months after starting treatment, at end of treatment and at 2 years after completion of therapy. Changes in z scores were calculated and compared with CDC criteria and Agarwal standards for Indian children. Fifty-two boys and 21 girls were analyzed. Height and weight z scores showed a steep decrease during the initial intensive phase of therapy. The gain in height and weight continued to be slow during therapy and catch-up occurred after cessation of therapy. On completion of therapy, patients were shorter, but not significantly so. Girls <9 years were significantly shorter. Weight remained on the lower side of normal. Change of z scores was statistically significant for weight at end of treatment (p = 0.032) and 2 years after completion of treatment (p = 0.00). BMI z score increased throughout the study period. Peak growth velocities were also late in the study subjects. Anthropometric variables of height, weight and BMI are affected by ALL during therapy. Growth deceleration is maximum during the intensive phase of therapy. Catch-up growth occurs but children remain smaller than their peers. abstract_id: PUBMED:37675394 Can CD34+CD38- lymphoblasts, as likely leukemia stem cells, be a prognostic factor in B-cell precursor acute lymphoblastic leukemia in children? Background: CD34+CD38- lymphoblasts as likely leukemia stem cells (LSCs) may be responsible for a worse response to treatment and may be a risk factor for recurrence in B-cell precursor acute lymphoblastic leukemia (BCP-ALL). Objective: The study objective was to assess the prognostic role of CD34+CD38- lymphoblasts in bone marrow on the day of BCP-ALL diagnosis. Methods: 115 patients with BCP-ALL were enrolled, with a median age of 4.5 years (range 1.5-17.9 years) and 63 females (54.8%): Group I (n = 90)-patients with CD34+CD38+ antigens and Group II (n = 20)-patients with CD34+CD38- antigens on the lymphoblast surface. Results: A worse response on Days 8, 15, and 33 of therapy and at the end of treatment was more often observed in Group II (CD34+CD38-), but these differences were not statistically significant. A significantly higher incidence of BCP-ALL recurrence was observed in Group II.
Conclusions: 1. In BCP-ALL in children, the presence of CD34+CD38- lymphoblasts at diagnosis does not affect the first remission. 2. In BCP-ALL in children, the presence of CD34+CD38- lymphoblasts at diagnosis may be considered an unfavorable prognostic factor for disease recurrence. 3. It is necessary to further search for prognostic factors in BCP-ALL in children. abstract_id: PUBMED:3606184 Height at diagnosis of malignancies. Studies of the presenting height of children with malignancies have produced conflicting results, from an excess of taller patients to an excess of shorter patients. The problems of measurement bias, inadequate comparison populations, small numbers of patients, subgroup analyses, and overreliance on simple significance tests are all possible reasons for the variation in results. To clarify this issue, we studied heights at diagnosis of 3657 children and adolescents aged under 18 years. Their malignancies included acute lymphoblastic leukaemia, non-Hodgkin's lymphoma, Hodgkin's disease, acute non-lymphoblastic leukaemia, osteosarcoma, retinoblastoma, neuroblastoma, Wilms' tumour, rhabdomyosarcoma, and Ewing's sarcoma. Compared with published standards for the heights of children in control populations, no significant deviation from population norms was found for patients in any of the 10 disease categories after proper adjustment for multiple significance testing. Answer: Yes, weight for height does appear to have prognostic significance in children with acute lymphoblastic leukemia (ALL). A study found that weight for height, expressed as standard deviation scores, had a significant influence on time until first relapse in children with ALL, with the highest risk of early relapse in children at the lower end of the weight-for-height distribution (PUBMED:8037340). Additionally, children with low weight-for-age (WAZ) or weight-for-height (WHZ) were at increased odds of ALL (PUBMED:28644306). These findings suggest that nutritional status, as indicated by weight for height, may be related to prognosis in childhood ALL. However, the mechanism is unclear, and further large-scale prospective studies are required to confirm these findings.
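Several abstracts above express weight for height as standard deviation (z) scores against growth references such as the 2000 CDC charts (PUBMED:8037340, PUBMED:12885815). A minimal sketch of the underlying arithmetic, using Cole's LMS transformation, is given below; the L, M, S values are hypothetical placeholders for one height band, not actual CDC figures.

```python
# Hypothetical sketch of the LMS method used by growth references to turn
# a raw measurement into a standard deviation (z) score.
import math

def lms_zscore(x: float, L: float, M: float, S: float) -> float:
    """Cole's LMS transformation: z score of measurement x given the
    skewness (L), median (M) and coefficient of variation (S) of the
    reference distribution."""
    if L == 0:
        return math.log(x / M) / S
    return ((x / M) ** L - 1.0) / (L * S)

# Placeholder LMS parameters for a single height band (not CDC values).
L, M, S = -0.3, 16.0, 0.11   # skewness, median weight in kg, coeff. of variation
for weight in (12.5, 16.0, 19.5):
    print(f"weight {weight:4.1f} kg -> weight-for-height z = {lms_zscore(weight, L, M, S):+.2f}")
```

A child at the reference median (here 16.0 kg) gets z = 0; the studies above flag children whose weight-for-height z score falls at or below -2.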
Instruction: Is the use of videotape recording superior to verbal feedback alone in the teaching of clinical skills? Abstracts: abstract_id: PUBMED:20021688 Is the use of videotape recording superior to verbal feedback alone in the teaching of clinical skills? Background: In recent times, medical schools have committed to developing good communication and history taking skills in students. However, there remains an unresolved question as to which constitutes the best educational method. Our study aims to investigate whether the use of videotape recording is superior to verbal feedback alone in the teaching of clinical skills and the role of student self-assessment on history taking and communication skills. Methods: A randomized controlled trial was designed. The study was conducted with 52 second-year students of the Dokuz Eylul University Faculty of Medicine. All students' performances of communication and history taking skills were assessed twice. Between these assessments, the study group received both verbal and visual feedback by watching their video recordings of patient interviews; the control group received only verbal feedback from the teacher. Results: Although the self-assessment of the students did not change significantly, assessors' ratings increased significantly for videotaped interviews at the second assessment. Conclusions: Feedback based on videotaped interviews is superior to feedback based solely on the observation of assessors. abstract_id: PUBMED:12481442 Teaching mental health skills to general practitioners and medical officers. David Goldberg opened by describing the research that had led up to the present WPA teaching package. Early research had demonstrated that many psychological illnesses were not detected in primary care settings (Goldberg & Huxley 1980; ibid 1992), and these findings have been replicated in 14 centres round the world, with broadly similar results (Ustun & Sartorius 1995). We have found that in the UK the problem is not defects in factual knowledge, but a lack of the clinical skills needed to assist in the management of mental disorders in general medical settings. The clinical skills needed in primary care are seldom taught in medical schools, and cannot be learned by listening to a lecture: it is necessary to practice them after they have been demonstrated. To do this it is convenient to break complex clinical skills down into their components: these are called "micro-skills", and we will deal later with the way in which these are taught. The most powerful method for improving mental health skills in this setting is to provide doctors with feedback--either video or audio--of their interview with real patients. The emphasis of such teaching must be on the interview techniques used by the doctor, rather than the clinical problems displayed by the particular patient being interviewed (Gask et al 1991). The problem with this is that video-feedback teaching of the necessary type is not always available, so we have developed videotapes that we can send out to distant locations, and which focus the attention of both local tutor and postgraduates on what should be learned. Because it is essential that most of the teaching is done by the live teacher rather than the videotape, there are always several "discussion points" so that postgraduates can ask questions, or describe their own way of dealing with particular situations.
The videotapes are supplied together with teaching notes for the tutor, PowerPoint slides which can be adapted to suit local conditions, "role plays" to allow postgraduates to practice each skill they wish to learn, and other support materials. There is also a paper written by ourselves in association with Norman Sartorius, who has encouraged us to prepare the teaching package under the auspices of the WPA. Linda Gask described the process of teaching specific 'microskills', by working through how the skills necessary for the management of people who present in primary and general medical settings have been described and taught in the UK (see box 1). A model of the strategies and skills to be taught was first developed utilizing the experience of professionals and teachers from both primary care and mental health. A videotape was produced in which the skills to be acquired were demonstrated by real primary care doctors in role-played interviews with the addition of subtitles to label particular skills. The videotape is then utilised in a group teaching session to model the specific component skills, or 'microskills', to the participants and to demonstrate exactly how the strategies of the model are applied in a real consultation. Watching the videotape will not, however, change behaviour. To do this, it is necessary to role-play brief scenarios so that the professional is able to practice the actual words he or she would use. This role-play work may be carried out in pairs, with one doctor playing the professional and the other playing the patient, or in threes, with the addition of an observer who ensures that the participants keep to the task. At the end of the role-play all participants provide feedback. These methods are described in much more detail in Gask (1999). Finally, there is also the possibility of videotaping one of these role-played interviews and teaching on this tape with the group as a whole. The specific skills and methods required to do this are described in much more detail along with the research evidence for these methods in Gask (1998). Our approach to facilitating the group in the exercise of videofeedback teaching is summarised in box 2. abstract_id: PUBMED:6428655 Comparison between videotape and personal teaching as methods of communicating clinical skills to medical students. The efficacy of video recording in transmitting clinical knowledge and skills to medical students was tested by recording on videotape demonstrations of physical examinations given by five clinicians to a randomly selected group of 12 students (personal group) from the first clinical year and then showing these recordings, under identical conditions, to 13 students from the same year (video group). The efficacy of both the personal and video media in terms of whether content was retained was tested by a questionnaire completed by all students at the end of the sessions and by a structured clinical assessment in which students were asked to demonstrate some of the same clinical tasks three weeks after the demonstration. In answering the questionnaire the video group obtained a mean (SD) score of 20.8 (7.0) (maximum possible score 40), which was not significantly different from the score achieved by the personal group (17.4 (7.7)). The video group was able to reproduce 44 (10)% of the total clinical steps demonstrated and the personal group 45 (14)%.
Videotaped demonstrations can be as effective as personal teaching of clinical methods, and video should be developed as a medium for first-line clinical teaching. abstract_id: PUBMED:6620341 The use of videotape programs to teach interpersonal skills. Interpersonal skills training evolved rapidly in recent years as teachers began to exploit the distinct advantages of "live" and simulated presentations in videotapes and films as teaching aids. At the same time, psychologists and psychiatrists started to define the basic aspects of interpersonal relations. Many used prepared scripts to produce videotape programs that modeled human interactions. While clinical experience and formal studies clarified some characteristics of effective visual training programs, the impact of dramatic and technical elements in these productions was not fully explored. In the present paper, the authors review the nature of interpersonal skills and the current status of visual aids for teaching human relations and show how videotape productions with characters, a story line, and action sequences can enhance learning. The experience of producing a series of videotapes about prehospital emergency care is used as a paradigm for discussing how theatrical techniques, applied through discrete production steps, can reinforce training goals and appeal to a wide range of students with diverse learning styles. abstract_id: PUBMED:8988825 Videotape feedback in teaching laryngoscopy. Purpose: To evaluate if videotape feedback provides educational insights for students learning laryngoscopy that they would not otherwise perceive. Methods: Twenty-six medical students were videotaped while performing laryngoscopy for oral intubation. Before and after reviewing their performance on the videotape, they answered a standardized questionnaire assessing the adequacy of positioning, head movement during laryngoscopy, degrees of neck flexion and head extension, time elapsed, and whether the laryngoscope contacted the upper lip or teeth. After the review, they were asked if being videotaped was distracting, whether it provided new instructional insights and, if so, which was most important. Results: Only 4% of students felt that initial head and neck positioning was suboptimal and this increased to 38% after videotape review (P = 0.029). The perceived inadequacy of positioning seemed related to initial overestimation of head extension (34.0 +/- 15 degrees) compared with that seen on videotape (21.5 +/- 13.5 degrees, P = 0.003). The duration of laryngoscopy was underestimated before videotape review (55 +/- 32 sec vs. 75 +/- 29 sec, P = .024). Although 26.9% (7/26) of students admitted feeling distracted by the video camera, all felt the experience had educational value. Conclusion: Videotape feedback changed students' perception of how they performed laryngoscopy. In particular, head extension was overestimated and duration of laryngoscopy underestimated. abstract_id: PUBMED:27805746 Digital recording as a teaching and learning method in the skills laboratory. Aim And Objectives: To obtain information on how nursing students react to, think about and learn from digital recording as a learning and teaching method over time. Background: Based on the teaching and learning philosophy of the university college, we used digital recording as a tool in our daily sessions in the skills laboratory. However, most of the studies referred to in the background review had a duration of only a few hours to a number of days.
We found it valuable to design a study with a duration of two academic semesters. Design: A descriptive and interpretative design was used. Method: First-year bachelor-level students at the department of nursing participated in the study. Data collection was carried out by employing an 'online questionnaire'. The students answered five written, open-ended questions after each of three practical skill sessions. Kvale and Brinkmann's three levels of understanding were employed in the analysis. Results: The students reported that digital recording affected factors such as feeling safe, secure and confident and that video recording was essential in learning and training practical skills. The use of cameras proved to be useful as an expressive tool for peer learning, because video recording enhances self-assessment, reflection, sensing, psychomotor performance and discovery learning. Conclusion: Digital recording enhances the student's awareness when acquiring new knowledge because it activates cognitive and emotional learning. The connection between tutoring, feedback and technology was clear. Relevance To Clinical Practice: The digital recorder gives students direct and immediate feedback on their performance of the various practical procedures, and may aid in the transition from theory to practice. Students experienced more self-confidence and a feeling of safety in their performances. abstract_id: PUBMED:263262 Use of videotape feedback with severely disturbed adolescents. A year of group therapy using videotape feedback with seriously disturbed adolescents offers evidence that such feedback facilitates the correction of these youngsters' distorted body images, low self-esteem, lack of capacity for self-observation, and poor peer relationships. The conscious use of videotape also played an integral part in the group process. It seemed to serve as a third therapist, with considerable amounts of resistance and transference focused on the equipment as it went through the process of being monster, toy and useful tool. An additional benefit was provided to the cotherapists in terms of supervision, and in teaching other staff the techniques of group therapy. abstract_id: PUBMED:10354225 Recording skills practice on videotape can enhance learning - a comparative study between nurse lecturers and nursing students. Video recording techniques have been used in educational settings for a number of years. They have included viewing videotaped lessons, using whole videos or clips of tapes as a trigger for discussion, viewing video recordings to observe role models for practice, and being video recorded in order to receive feedback on performance from peers and tutors. Although this last application has been in use since the 1960s, it has only been evaluated as a teaching method with health care professionals in the past 10 years and mostly in the areas of medical and counsellor education. In nurse education, however, use of video recording techniques has been advocated without any empirical evidence on its efficacy. This study has used nursing degree students and nurse educationalists to categorize statements from four cohorts of students who took part in a 12-day clinical supervision course during which their interpersonal skills were recorded on videotape. There were two categories: positive and negative/neutral. Analysis of the data showed that between 61% and 72% of the subjects gave an overall positive categorization to the statements in the questionnaire.
Chi-square tests were significant for all groups in both categories. This suggests that both nursing students and nurse lecturers thought that course participants' statements expressed a positive belief that videotape recording is useful in enhancing students' ability to learn effective interpersonal skills in clinical supervision. abstract_id: PUBMED:6841621 Videotape and verbal feedback in behavioral couple therapy: a review. This review examined the experimental evidence for the efficacy of videotape and verbal feedback in behavioral couple therapy; it concluded that research to date does not permit definitive conclusions concerning the efficacy of either procedure. Studies on behavioral differences between happy and distressed couples were also summarized. The evidence indicates that it is, as yet, difficult to ascertain just how the communication behaviors of happy and distressed couples differ. abstract_id: PUBMED:4077851 Videotape review: a valuable tool for self-assessment of teaching skills. This paper describes the components of an instructor improvement program known as VISIT (Videotaping Instruction for Self-Assessment of Instructional Technique). Through the use of videotape recordings of classroom, laboratory or clinical teaching and observation by trained observers, faculty can pinpoint instructional strengths and problem areas, plan refinements and evaluate the impact of these refinements. Ten years of experience with VISIT indicates that post-lecture consultation sessions between the instructor and observer are vital to the success of this activity. Answer: Based on the provided abstracts, the use of videotape recording appears to be superior to verbal feedback alone in the teaching of clinical skills. A randomized controlled trial conducted with medical students showed that assessors' ratings increased significantly for videotaped interviews after the students received both verbal and visual feedback by watching their video recordings, compared to the control group that received only verbal feedback from the teacher (PUBMED:20021688). This suggests that feedback based on videotaped interviews is more effective than feedback given solely based on the observation of assessors. Additionally, the use of videotape feedback in teaching laryngoscopy changed students' perception of how they performed the procedure, indicating that videotape feedback can provide educational insights that students might not otherwise perceive (PUBMED:8988825). Similarly, digital recording as a teaching and learning method in a skills laboratory was found to enhance students' awareness when acquiring new knowledge, activate cognitive and emotional learning, and provide direct and immediate feedback on performance, leading to increased self-confidence and a feeling of safety in their performances (PUBMED:27805746). Moreover, a study comparing videotape and personal teaching methods found that videotaped demonstrations can be as effective as personal teaching of clinical methods (PUBMED:6428655). Another study highlighted that videotape feedback facilitates the correction of distorted body images, low self-esteem, lack of capacity for self-observation, and poor peer relationships among severely disturbed adolescents (PUBMED:263262). In the context of mental health skills teaching to general practitioners and medical officers, it was emphasized that the most powerful method for improving skills is to provide doctors with feedback, either video or audio, of their interview with real patients (PUBMED:12481442).
Furthermore, a comparative study between nurse lecturers and nursing students suggested that video recording is useful in enhancing students' ability to learn effective interpersonal skills in clinical supervision (PUBMED:10354225). Overall, the evidence from these studies supports the conclusion that videotape recording, as a supplement to verbal feedback, is a superior method for teaching and enhancing clinical skills.
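One methodological detail in the nurse-education study above (PUBMED:10354225) is the chi-square test on positive versus negative/neutral categorizations of course participants' statements. A minimal sketch of that kind of test against an even split is shown below; the counts are hypothetical, chosen only to fall within the reported 61-72% positive range.

```python
# Hypothetical sketch: goodness-of-fit chi-square test asking whether the
# split of categorizations departs from a 50/50 expectation.
from scipy.stats import chisquare

observed = [68, 32]              # hypothetical: 68 positive, 32 negative/neutral
chi2, p = chisquare(observed)    # default expected frequencies are uniform
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```

With counts in this range the test is comfortably significant, which matches the paper's report that chi-square tests were significant for all groups in both categories.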
Instruction: Hyperendemic and mesoendemic zones of Fasciola infection surrounding urban Lima: an emerging disease? Abstracts: abstract_id: PUBMED:17431434 Hyperendemic and mesoendemic zones of Fasciola infection surrounding urban Lima: an emerging disease? Introduction: The infection caused by F. hepatica is endemic in Perú with an increasing number of endemic areas; reporting them is essential to know the impact of the infection on the population. Objective: To report the prevalence rate of fascioliasis in areas where human cases had not been detected previously. Material And Methods: Three districts in Lima department were studied: La Chaqui, province of Canta; San Lorenzo of Quinti and San Miguel of Sangallaya, province of Huarochirí. It included children and young adults (less than 25 years old). One stool sample per person was collected and processed by means of the Rapid Sedimentation Technique described by Lumbreras. Results: Overall, the prevalence rate in the three districts was 8.6% (n=291); 3.9% (n=102) in La Chaqui, 26.9% (n=27) in San Miguel of Sangallaya, and 8.6% (n=163) in San Lorenzo of Quinti. Conclusion: F. hepatica infection is a public health problem in populations surrounding Lima city. Therefore, control and prevention measures, besides being addressed to autochthonous people, should also take into account high-risk groups such as frequent travelers to endemic areas and people with diets based on raw vegetables. abstract_id: PUBMED:33195605 Sheep and Cattle Reservoirs in the Highest Human Fascioliasis Hyperendemic Area: Experimental Transmission Capacity, Field Epidemiology, and Control Within a One Health Initiative in Bolivia. The Northern Bolivian Altiplano is the human fascioliasis hyperendemic area where the highest prevalences and intensities of infection by Fasciola hepatica in humans have been reported. Four animal species are the reservoir species for F. hepatica in this area, namely, sheep, cattle, pigs, and donkeys. Livestock for the Aymara inhabitants is crucial because vegetable crops are not viable due to the inhospitality of the very high altitude of 3,820-4,100 m. A One Health initiative has been implemented in this area in recent years, as the first such control action ever implemented in a human endemic area. Among the different control axes included, special focus is devoted to the two main reservoirs, sheep and cattle. Egg embryonation, miracidial infectivity, intramolluscan development, cercarial production, infected snail survival, and metacercarial infectivity were experimentally studied in altiplanic sheep and cattle isolates. These laboratory studies were performed using altiplanic isolates of the lymnaeid species Galba truncatula, the only vector present in the hyperendemic area. Experiments were made at a constant 12 h day/12 h night photoperiod with 20/20°C and 22/5°C day/night temperatures. Infections were implemented using mono-, bi-, and trimiracidial doses. Results demonstrate that sheep and cattle have the capacity to assure F. hepatica transmission in this very high-altitude area. Field surveys included prevalence studies by coprology on fecal samples from 1,202 sheep and 2,690 cattle collected from different zones of the Northern Bolivian Altiplano. Prevalences were pronouncedly higher and more homogeneous in sheep (63.1%; range: 38.9-68.5%) than in cattle (20.6%; range: 8.2-43.3%) in each one of the different zones.
Although similarities between the prevalences in sheep and cattle appeared in the zones of the highest and lowest infection rates, these similarities disappeared in the other zones due to cattle treatments. Comparison with past surveys demonstrates that this hyperendemic area is stable from the disease transmission point of view. Therefore, the control design should prioritize sheep and cattle within the One Health action. Studies performed in the Bolivian Altiplano furnish a baseline for future initiatives to assess the transmission and epidemiological characteristics of fascioliasis on the way to its control in other high-altitude Andean endemic areas. abstract_id: PUBMED:14640504 Hyperendemic fascioliasis associated with schistosomiasis in villages in the Nile Delta of Egypt. Coprologic surveys were carried out in villages of the Behera Governorate in the Nile Delta region of Egypt to characterize the epidemiologic features of human fascioliasis caused by Fasciola hepatica and F. gigantica in this lowland endemic area by comparison with fascioliasis caused by only F. hepatica in areas hyperendemic for human disease in the Andean highlands of South America. The fascioliasis prevalences detected (range = 5.2-19.0%, mean = 12.8%) are the highest obtained in Egypt. The comparison with previous results suggests that in the Nile Delta, fascioliasis is spreading from an original situation of sporadic human cases in well-known endemic foci for animal disease to an endemic distribution in humans, which may be characterized as a mesoendemic region that includes several hyperendemic areas for human disease. As in Andean countries, a relationship with sex was found, although in Egypt prevalences, but not intensities, appeared to be significantly higher in females. All ages appear to be susceptible to liver fluke infection, with prevalences and intensities being lower before and after school age, a situation that is consistent with that detected in Andean countries, although the peak in the 9-11-year-old age group appears less pronounced in Egypt. The intensities were very high when compared with those found in subjects sporadically infected in areas endemic for animal disease, but relatively low for a hyperendemic situation, although the intensities may not be conclusive because of the transmission seasonality of fascioliasis in the Nile Delta. The marked similarities in the qualitative and quantitative spectra of protozoans and helminths, multiparasitisms, and associations between liver flukes and other parasitic species suggest physiographic-hydrographic and behavioral-social characteristics similar in all areas hyperendemic for human fascioliasis, which are independent of other factors such as climate, altitude, and cultural or religious features. The significant positive association between liver fluke infection and schistosomiasis mansoni detected in one locality has never been described elsewhere, and must be considered relevant from clinical, pathologic, diagnostic, and therapeutic points of view. Interestingly, the relationships of schistosomiasis prevalences and intensities with sex and age follow patterns similar to those found in fascioliasis. abstract_id: PUBMED:33251272 Donkey Fascioliasis Within a One Health Control Action: Transmission Capacity, Field Epidemiology, and Reservoir Role in a Human Hyperendemic Area. A One Health initiative has been implemented for fascioliasis control in a human hyperendemic area for the first time.
The area selected for this multidisciplinary approach is the Northern Bolivian Altiplano, where the highest prevalences and intensities in humans have been reported. Within the strategic intervention axis of control activities concerning animal reservoirs, complete experimental studies and field surveys have been performed to assess the fascioliasis transmission capacity and epidemiological role of the donkey for the first time. Laboratory studies with altiplanic donkey-infecting Fasciola hepatica and altiplanic Galba truncatula snail vector isolates demonstrate that the donkey assures the viability of the whole fasciolid life cycle. Several aspects indicate, however, that F. hepatica does not reach, in the donkey, the level of adaptation it shows in sheep and cattle in this high-altitude hyperendemic area. This is illustrated by a few-day delay in egg embryonation, longer prepatent period despite similar miracidial infectivity and shorter patent period in the intramolluscan development, lower cercarial production per snail, different cercarial chronobiology, shorter snail survival after the end of shedding, shorter longevity of shedding snails, and lower metacercarial infectivity in Wistar rats. Thus, the role of the donkey in disease transmission should be considered secondary. Field survey results proved that liver fluke prevalence and intensity in donkeys are similar to those of the main reservoirs, sheep and cattle, in this area. Fasciolid egg shedding by a donkey individual contributes to environmental contamination at a rate similar to that of sheep and cattle. In this endemic area, the markedly lower number of donkeys when compared to sheep and cattle indicates that the epidemiological reservoir role of the donkey is also secondary. However, the donkey plays an important epidemiological role in the spread of the disease because of its use by Aymara inhabitants for goods transport, movements, and travel from one locality/zone to another, a repercussion to be considered in the present geographical spread of fascioliasis in the Altiplano due to climate change. Donkey transport of parasite and vector, including movements inside the zone under control and potential introduction from outside that zone, poses a problem for the One Health initiative. abstract_id: PUBMED:16153314 Epidemiology of fascioliasis in human endemic areas.
Considered a secondary zoonotic disease until the mid-1990s, human fascioliasis is at present emerging or re-emerging in many countries, with increases in prevalence and intensity and geographical expansion. Research in recent years has justified the inclusion of fascioliasis in the list of important human parasitic diseases. At present, fascioliasis is a vector-borne disease presenting the widest known latitudinal, longitudinal and altitudinal distribution. Fasciola hepatica has succeeded in expanding from its original European geographical area to colonize five continents, despite theoretical restrictions related to its biology and in turn dependent upon environmental and human activities. Among the different epidemiological situations, human hypo- to hyperendemic areas, including epidemics, are noteworthy. A global analysis of the distribution of human cases shows that the expected correlation between animal and human fascioliasis only appears at a basic level. Areas presenting very high human prevalences and intensities, especially in children and females, have been recently described. In hypo- to hyperendemic areas of Central and South America, Europe, Africa and Asia, human fascioliasis presents a range of epidemiological characteristics related to a wide diversity of environments. The epidemiological patterns of fascioliasis known thus far may not always explain the transmission characteristics of any given area, and control measures should consider the results of ecoepidemiological studies undertaken in the zones concerned. abstract_id: PUBMED:21308203 Human and animal fascioliasis in Peru: impact on the economy of endemic zones. Fasciola hepatica is the causative agent of fasciolosis in Peru; the disease is an important public health problem because of the high prevalence of the human infection, affecting mainly children, and a major veterinary problem because of the high rates of infected livestock. The human disease is endemic in the Sierra and the Coast but sporadic in the Amazonia, and is reported in 18 Departments, while the animal infection is reported in 21 of the 24 Departments of Peru. Transmission occurs in Andean rural populations engaged in agriculture, but recently an increasing number of people have become infected in the cities. The epidemiological situation in Peru includes i) Departments with non-autochthonous cases, where infection occurs by consumption of contaminated vegetables brought from endemic areas or is acquired during visits to endemic areas; ii) Departments with hypoendemic and mesoendemic villages, where transmission occurs by ingestion of contaminated vegetables and prevalence is ≤10%; and iii) Departments with hyper-endemic villages with human prevalence >10%, with intense transmission by consumption of contaminated vegetables. The disease affects cattle, sheep, goats, swine, equines, South American camelids, rabbits and guinea pigs. The negative impact of fasciolosis on the livestock economy is no less than US$ 50 million per year, an estimate based on the prevalence and the number of condemned livers in abattoirs. It is difficult to estimate the economic impact of this infection on human health due to its status as a neglected disease, but fasciolosis is hyper-endemic in the poorest Andean areas of Peru, where the situation has to be recognized as a public health emergency. abstract_id: PUBMED:21767521 Hyperendemic human fascioliasis in Andean valleys: an altitudinal transect analysis in children of Cajamarca province, Peru.
A coprological survey including 476 school children aged 2-18 years from six rural localities at between 2627 and 3061 m altitude was performed in Cajamarca province, Peru. Prevalences of fascioliasis ranging from 6.7 to 47.7% (mean 24.4%) proved to be the highest so far recorded in that human hyperendemic area. Higher prevalences in females and in the 2-5-year-old group were not significant. Intensities ranged from 24 to 864 eggs per gram (arithmetic mean: 113; geometric mean: 68), the majority shedding less than 100, and without significant differences according to gender or age group. Fasciola hepatica was the most common helminth within a spectrum of 11-12 protozoan and 9-11 helminth species, 97.3% of the children showing infection with at least one parasite. The highest levels corresponded to coinfection with seven different species in females and subjects older than 5 years. The correlation of fascioliasis prevalence with altitude appeared significant. An epidemiological characterisation of the valley transmission pattern of fascioliasis in Cajamarca is made by comparison with other better-known hyperendemic areas. Results suggest that human fascioliasis may be widespread throughout different parts of Cajamarca province, even far away from the city, and that long-term fascioliasis chronicity and superimposed repetitive infections are probably frequent. abstract_id: PUBMED:33997234 Domestic pig prioritized in one health action against fascioliasis in human endemic areas: Experimental assessment of transmission capacity and epidemiological evaluation of reservoir role. The Northern Bolivian Altiplano is the human fascioliasis hyperendemic area where the highest prevalences and intensities in humans have been reported. Preventive chemotherapy has been implemented over the last ten years. Surveillance showed high human infection and re-infection rates between the annual triclabendazole monodose treatments. A complementary One Health control action was launched to decrease the infection risk. Among the multidisciplinary axes, there is a need to establish animal reservoir species priorities for more efficient control. Laboratory and field studies were performed for the first time to assess the Fasciola hepatica transmission capacity of the pig and its potential reservoir role. The experimental follow-up of altiplanic pig isolates through altiplanic Galba truncatula snail vector isolates was performed at different miracidial doses and different day/night temperatures. Experiments included egg embryonation, miracidial infectivity, lymnaeid snail infection, intramolluscan larval development, cercarial production, chronobiology of the cercarial shedding, vector survival to infection, metacercarial infectivity in the mammal host, and adult stage development. Surveys included the assessment of prevalence, intensity, egg measurements and egg shedding rates in nature. Pig contribution was evaluated by comparison with the main altiplanic reservoirs, sheep and cattle. Results demonstrated that the pig assures the whole F. hepatica life cycle and participates in its transmission in this area. The fast egg embryonation, high cercarial production, long multi-wave shedding chronobiological pattern in monomiracidial infections at permanent 20 °C temperature, and the high daily egg outputs per pig are worth mentioning.
The high infection risk suggests early infection of freely running piglets and evolutionary long-term adaptation of the liver fluke to this omnivorous mammal, despite its previously evoked resistance or non-suitability. Genetic, physiological and immune similarities with humans may also underlie the parasite's adaptation to humans in this area. The pig should accordingly be included in appropriate control measures within a One Health action against human fascioliasis. The pig should henceforth be considered in epidemiological studies and control initiatives not only in fascioliasis endemic areas with human infection risk in other Andean countries, but also in rural areas of Latin America, Africa and Asia where domestic pigs are allowed to run freely. abstract_id: PUBMED:34041348 Spread of the fascioliasis endemic area assessed by seasonal follow-up of rDNA ITS-2 sequenced lymnaeid populations in Cajamarca, Peru. Fascioliasis is a worldwide emerging snail-borne zoonotic trematodiasis with a great spreading capacity linked to animal and human movements, climate change, and anthropogenic modifications of freshwater environments. South America is the continent with the most human endemic areas caused by Fasciola hepatica, mainly in high altitude areas of Andean regions. The Peruvian Cajamarca area presents some of the highest human prevalences reported, lower only than those in the Bolivian Altiplano. Sequencing of the complete rDNA ITS-2 allowed for the specific and haplotype classification of lymnaeid snails collected in seasonal field surveys along a transect including 2007-3473 m altitudes. The species Galba truncatula (one haplotype preferentially in higher altitudes) and Pseudosuccinea columella (one haplotype in an isolated population), and the non-transmitting species Lymnaea schirazensis (two haplotypes mainly in lower altitudes) were found. Climatic seasonality proved to influence G. truncatula populations in temporarily dried habitats, whereas L. schirazensis appeared to be more climatologically independent due to its extreme amphibious ecology. Along the southeastern transect from Cajamarca city, G. truncatula and L. schirazensis shared the same site in 7 localities (46.7% of the water collections studied). The detection of G. truncatula in 11 new foci (73.3%), predominantly in northern localities closer to the city, demonstrates that the Cajamarca transmission risk area is markedly wider than previously considered. Lymnaea schirazensis progressively increases its presence when moving away from the city. Results highlight the usefulness of lymnaeid surveys to assess the borders of the endemic area and the inner distribution of transmission foci. Similar lymnaeid surveys still need to be performed in the wide northern and western zones of Cajamarca city. The coexistence of more than one lymnaeid transmitting species, together with a morphologically indistinguishable non-transmitting species and livestock movements inside the area, creates a complex scenario that poses difficulties for the needed One Health control intervention. Answer: The infection caused by Fasciola hepatica, known as fascioliasis, is indeed endemic in Peru, with an increasing number of endemic areas, including those surrounding the urban area of Lima. The prevalence rate of fascioliasis in areas where human cases had not been previously detected was reported to be 8.6% in three districts in the Lima department, indicating that F.
hepatica infection is a public health problem in populations around Lima city (PUBMED:17431434). This suggests that the disease may be emerging in these zones. Moreover, the disease has been characterized as hyperendemic in certain parts of Peru, affecting mainly children and causing significant public health concerns. In some regions, the prevalence of human fascioliasis is greater than 10%, which indicates intense transmission through the consumption of contaminated vegetables (PUBMED:21308203). The epidemiological situation in Peru includes areas with non-autochthonous cases, hypoendemic and mesoendemic villages, and hyper-endemic villages, with transmission occurring in Andean rural populations engaged in agriculture and increasingly in urban populations (PUBMED:21308203). The spread of fascioliasis in the Nile Delta of Egypt, which is a lowland endemic area, has also been compared to the Andean highlands of South America, suggesting that fascioliasis is spreading from sporadic human cases in known endemic foci for animal disease to an endemic distribution in humans, which may be characterized as a mesoendemic region that includes several hyperendemic areas for human disease (PUBMED:14640504). In summary, the evidence points to fascioliasis as an emerging disease in the hyperendemic and mesoendemic zones surrounding urban Lima, with the need for control and prevention measures to address both autochthonous populations and high-risk groups such as frequent travelers to endemic areas and people with diets based on raw vegetables (PUBMED:17431434).
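The endemicity bands quoted above (human prevalence ≤10% for hypo- and mesoendemic villages, >10% for hyperendemic villages) reduce to simple arithmetic on survey counts. A minimal Python sketch follows; the function names are illustrative, and the example counts are chosen so the result roughly matches the 24.4% mean prevalence reported for the Cajamarca schoolchildren survey.

def prevalence_pct(positives: int, examined: int) -> float:
    """Point prevalence as a percentage of those examined."""
    if examined <= 0:
        raise ValueError("examined must be a positive count")
    return 100.0 * positives / examined

def classify_endemicity(prev_pct: float) -> str:
    """Map a human prevalence (%) onto the bands used in the Peruvian reports."""
    if prev_pct == 0.0:
        return "no autochthonous transmission detected"
    if prev_pct <= 10.0:
        return "hypo- to mesoendemic"
    return "hyperendemic"

p = prevalence_pct(116, 476)  # ~24.4%, close to the reported Cajamarca mean
print(f"{p:.1f}% -> {classify_endemicity(p)}")  # prints: 24.4% -> hyperendemic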
Instruction: Do metoprolol and carvedilol have equivalent effects on diurnal heart rate in patients with chronic heart failure? Abstracts: abstract_id: PUBMED:16043407 Do metoprolol and carvedilol have equivalent effects on diurnal heart rate in patients with chronic heart failure? Background: Carvedilol exerted a greater reduction in mortality than metoprolol tartrate in the Carvedilol or Metoprolol European Trial (COMET). However, it is unclear if the degree and time course of beta1-blockade during a 24-h period were similar with each agent at the doses used. Therefore, we analyzed 24-h ECG Holter recordings from a study which compared the long-term clinical efficacy of metoprolol tartrate to carvedilol in chronic heart failure patients using the same dosing regimen as in COMET. Methods And Results: Fifty-one patients with chronic heart failure with a mean LVEF of 26+/-1.8% were randomized in a double-blind fashion to receive metoprolol tartrate 50 mg bid or carvedilol 25 mg bid. 24-h ECG monitoring (Holter) was performed at baseline, 12 weeks and 1 year. Recordings of adequate quality for analysis were obtained from 43 subjects at baseline, 42 at 12 weeks and 29 subjects at 1 year. Both drugs produced a fall in average 24-h heart rate from baseline at 12 weeks and at 1 year: metoprolol 88+/-3 to 71+/-2 and 69+/-3 bpm; carvedilol 83+/-3 to 70+/-2 and 70+/-3 bpm, respectively (all p<0.001). The pattern of suppression of heart rate during the 24-h period was similar for both drugs. Conclusion: Metoprolol tartrate 50 mg bid and carvedilol 25 mg bid had similar effects on 24-h heart rate. This result suggests that the degree of beta1-blockade produced by these two drugs at these doses is comparable and that the superior survival effect of carvedilol compared to metoprolol seen in COMET is likely to be due to actions of carvedilol other than beta1-blockade. abstract_id: PUBMED:34821080 The prognostic value of heart rate at discharge in acute decompensation of heart failure with reduced ejection fraction. Aims: The effect of elevated heart rate (HR) on morbidity and mortality is evident in chronic stable heart failure; data in this regard in the acute decompensated heart failure (ADHF) setting are scarce. In this single-centre study, we sought to address the prognostic value of HR and beta-blocker dosage at discharge on all-cause mortality among patients with heart failure and reduced ejection fraction and ADHF. Methods And Results: In this retrospective observational study, 2945 patients were admitted for the first time with the primary diagnosis of ADHF between January 2008 and February 2018. Patients were divided by resting HR at discharge into three groups (HR < 70 b.p.m., HR 70-90 b.p.m., and HR > 90 b.p.m.). Evidence-based beta-blockers were defined as metoprolol, bisoprolol, and carvedilol. The doses of prescribed beta-blockers were calculated as a percentage of the target dose of each beta-blocker and divided into four quartiles: 0 < Dose ≤ 25%, 25% < Dose ≤ 50%, 50% < Dose ≤ 75%, and >75% of the target dose. Cox regression was used to calculate the hazard ratio for the various HR categories while adjusting for clinical and laboratory variables. At discharge, 1226 patients had an HR < 70 b.p.m., 1347 patients had an HR in the range 70-90 b.p.m., and 372 patients had an HR > 90 b.p.m. The 30-day mortality rate was 2.2%, 3.7%, and 12.1% (P < 0.001), respectively.
Concordantly, the 1-year mortality rate was 14.6%, 16.7%, and 30.4% (P < 0.001) among patients with HR < 70 b.p.m., HR 70-90 b.p.m., and HR > 90 b.p.m., respectively. The adjusted hazard ratio was significantly increased only in the HR above 90 b.p.m. category (hazard ratio, 2.318; 95% confidence interval, 1.794-2.996). Conclusions: Patients with ADHF and an HR of <90 b.p.m. at discharge had a significantly lower 1-year mortality, independent of the dosage of beta-blocker at discharge. It seems reasonable to discharge these patients with a lower HR. abstract_id: PUBMED:25488166 Insufficient control of heart rate in stable coronary artery disease patients in Latvia. Background And Objective: Heart rate (HR) ≥70 beats per minute (bpm) increases cardiovascular risk in coronary artery disease (CAD) patients. The objective of the analysis is to characterize HR as well as other clinical parameters in outpatients with stable CAD in Latvia. Materials And Methods: CLARIFY is an ongoing international registry of outpatients with established CAD. Latvian data regarding 120 patients enrolled in CLARIFY and collected at the baseline visit during 2009-2010 were analyzed. Results: The mean HR was 67.7±9.5 and 66.9±10.7 bpm when measured by pulse palpation and electrocardiography, respectively. HR ≤60 bpm and ≥70 bpm were observed in 25% and 35.8% of patients, respectively. When analyzing patients with angina symptoms, 22.8% had HR ≤60 bpm, while HR ≥70 bpm was observed in 33.3% of the cases. HR ≥70 bpm was observed in 36.2% of patients with symptoms of chronic heart failure. Beta-blockers were used in 81.7% of the patients. Metoprolol (long-acting succinate), bisoprolol, nebivolol and carvedilol at average daily doses of 63.8, 5.3, 4.5, and 10.4 mg/d were used in 47, 37, 11 and 3 cases, respectively. Among patients with HR ≥70 bpm, 79.1% were using beta-blockers. Medications did not differ significantly between the three groups according to HR level (≤60, 61-69 and ≥70 bpm). Conclusions: Despite the wide use of beta-blockers, HR is insufficiently controlled in the analyzed sample of stable CAD patients in Latvia. The target HR of ≤60 bpm is achieved in only 25% of the patients, while more than one third have an increased HR ≥70 bpm. abstract_id: PUBMED:32757485 Resting heart rate in ambulatory heart failure with reduced ejection fraction treated with beta-blockers. Aims: Current guidelines recommend beta-blocker therapy in chronic heart failure with reduced ejection fraction (HFrEF) titrated according to tolerated target dose. The efficiency of this strategy in obtaining adequate heart rate (HR) control remains unclear in clinical practice. The aim of this study was to determine, in a real-world setting, the proportion of HFrEF patients who fail to achieve beta-blocker target doses, and whether target doses of beta-blockers are related to adequate reduction of resting HR over time. Methods And Results: Beta-blocker dose and resting HR of consecutive ambulatory patients with a diagnosis of HFrEF (ejection fraction ≤ 35%) in sinus rhythm were reviewed at the first outpatient contact in the Cleveland Clinic Health System from the year 2000 to 2015. Patients who did not receive beta-blocker therapy, had congenital heart disease or hypertrophic cardiomyopathy, were not in sinus rhythm, or had a history of heart transplant were excluded. Patients were followed up until their last known visit at the Cleveland Clinic. Median resting HR was 71 b.p.m. [inter-quartile range (IQR) 60-84 b.p.m.]
in 8041 patients (median age 65; 68% male) with 67% on carvedilol, 32% on metoprolol succinate, and 1% on bisoprolol. In 3674 subjects (56%), resting HR was ≥70 b.p.m. At final follow-up after a median of 21 months (IQR 0.1-7.2 years), resting HR was 72 b.p.m. (IQR 60-84 b.p.m.) in the subset of patients with persistently low ejection fraction ≤ 35%. HR ≥ 70 b.p.m. was observed in 55% of this group. Beta-blocker target dose was achieved in 19%, 5%, and 15% of those receiving carvedilol, metoprolol succinate, and bisoprolol, respectively. In the subset of patients who experienced beta-blocker up-titration, reduced mortality or hospitalization due to heart failure was observed in patients who achieved the lowest HR after titration. Conclusions: In our single-centre experience, the majority of patients with chronic HFrEF treated with beta-blocker therapy did not achieve target doses over time, and a substantial proportion had inadequate control of resting HR. There was no relationship between achieved beta-blocker target dose and resting HR control. abstract_id: PUBMED:29774407 Beta blockers and chronic heart failure patients: prognostic impact of a dose targeted beta blocker therapy vs. heart rate targeted strategy. Background: Beta blockers improve survival in patients with chronic systolic heart failure (CHF). Whether physicians should aim for target dose, target heart rate (HR), or both is still under debate. Methods And Results: We identified 1,669 patients with systolic CHF due to ischemic heart disease or idiopathic dilated cardiomyopathy from the University Hospital Heidelberg and the Clinic of Ludwigshafen, Germany. All patients were treated with an angiotensin converting enzyme inhibitor or angiotensin receptor blocker and had a history of CHF known for at least 6 months. Target dose was defined as treatment with ≥ 95% of the respective published guideline-recommended dose. Target HR was defined as 51-69 bpm. All-cause mortality during the median follow-up of 42.8 months was analysed with respect to beta blocker dosing and resting HR. 201 (12%) patients met the dose target (group A), 285 (17.1%) met the HR target (group B), 627 (37.6%) met no target (group C), and 556 (33.3%) did not receive beta blockers (group D). 5-year mortality was 23.7, 22.7, 37.6, and 55.6% for groups A, B, C, and D, respectively (p < 0.001). Mortality for group A patients with an HR ≥ 70 bpm was 28.8% but 14.8% if HR was 50-70 bpm (p = 0.054). Conclusions: Achieving the guideline-recommended beta blocker dose or HR control has a similar positive impact on survival. When on target dose, supplemental HR control additionally improves survival. abstract_id: PUBMED:14708208 Beta-blocking drugs indicated in patients with heart failure Until recently, beta-blocking drugs were considered to be contraindicated in patients with chronic heart failure. However, several well-conducted randomised clinical trials have now proven otherwise. Yet, it was still not clear whether nonselective alpha-, beta 1- and beta 2-receptor blockade with carvedilol would be superior to selective beta 1-receptor blockade with metoprolol. One of the studies ('Carvedilol or metoprolol European trial' (COMET)) demonstrated a statistically significant 17% reduction of all-cause mortality with carvedilol. Although striking, the results may have been influenced by differences in blood pressure and heart rate, as well as by the short-acting formulation of metoprolol that was used.
Furthermore, the 'Carvedilol hibernation reversible ischaemia trial; marker of success' (CHRISTMAS) study demonstrated myocardial hibernation in the majority of ischaemic heart-failure patients, and showed beneficial effects on left-ventricular function with carvedilol in both hibernated and non-hibernated patients. Despite this and the rest of the overwhelming evidence, at present only a minority of eligible chronic heart-failure patients are treated with beta-blockers. abstract_id: PUBMED:18287590 Differential effects of carvedilol and metoprolol succinate on plasma norepinephrine release and peak exercise heart rate in subjects with chronic heart failure. The dosing equivalency of carvedilol and metoprolol remains a matter of debate. The degree of beta 1-blockade is best assessed by blunting of the exercise-induced heart rate. Accordingly, the authors have investigated dosing equivalency by examining baseline and peak exercise heart rates and norepinephrine levels in subjects with chronic heart failure treated with carvedilol or metoprolol. Thirty-seven subjects treated with carvedilol (32.9 +/- 3.5 mg; n = 23) or metoprolol succinate (XL) (96.4 +/- 15.9 mg; n = 14) referred for cardiopulmonary exercise testing were studied prospectively. Carvedilol versus metoprolol XL subjects did not differ with respect to baseline heart rate (73 +/- 2 vs 70 +/- 3 bpm) or baseline plasma norepinephrine levels (597.5 +/- 78.3 vs 602.1 +/- 69.6 pg/mL), P = NS. However, despite similar peak exercise norepinephrine levels (2735.8 +/- 320.1 vs 2403.1 +/- 371.6 pg/mL), heart rate at peak exercise was higher in subjects receiving carvedilol (135 +/- 4 bpm) than in those receiving metoprolol XL (117 +/- 6 bpm), P = 0.02. Similar norepinephrine release and more complete beta 1-blockade are observed in well-matched subjects with chronic heart failure treated with a mean daily dose of metoprolol XL 96.4 mg compared with carvedilol 32.9 mg. abstract_id: PUBMED:15921806 Exchange of beta-blockers in heart failure patients. Experiences from the poststudy phase of COMET (the Carvedilol or Metoprolol European Trial). Background: The Carvedilol or Metoprolol European Trial (COMET) reported a significant survival benefit for carvedilol, a beta1-, beta2- and alpha1-blocker, vs. metoprolol tartrate, a beta1-selective blocker, in patients with mild-to-severe chronic heart failure (CHF). Patients on treatment with metoprolol might benefit from switching to carvedilol. Aim: To investigate the safety and tolerability of switching beta-blockers in CHF. Methods: At the end of COMET, the Steering Committee recommended that study medication be stopped without unblinding and that patients be commenced on open-label beta-blockade at a dose equivalent to half the dose of blinded therapy, with subsequent titration to the target or maximum tolerated dose. Patients were followed for 30 days. Results: 1321 out of 1440 patients were transitioned to open-label treatment (76.8% to carvedilol). Serious adverse and CHF-related events occurred in 9.4% and 4.7%, respectively, of those switching from carvedilol to metoprolol, and in 3.1% and 1.5% of patients switching from metoprolol to carvedilol. Patients who switched from carvedilol to metoprolol showed the highest mortality or hospitalisation rate (12.3%) in comparison with those who switched from metoprolol to carvedilol (3.1%, p<0.001) or who stayed on the same drug (carvedilol: 2.5%, p<0.001; metoprolol: 4.2%, p=0.04). Reducing the initial dose of the second beta-blocker maximised the safety of this strategy.
Event rate was higher in patients with more severe heart failure and in those withdrawing from beta-blockade. Conclusion: Our data show that switching beta-blockers is a practical, safe and well-tolerated strategy to optimise treatment of CHF. Patients who switched to carvedilol showed the lowest rate of adverse events. A closer clinical monitoring is recommended during transition in high-risk patients. abstract_id: PUBMED:17719466 Resting heart rate in cardiovascular disease. The importance of resting heart rate (HR) as a prognostic factor and potential therapeutic target is not yet generally accepted. Recent large epidemiologic studies have confirmed earlier studies that showed resting HR to be an independent predictor of cardiovascular and all-cause mortality in men and women with and without diagnosed cardiovascular disease. Clinical trial data suggest that HR reduction itself is an important mechanism of benefit of beta-blockers and other heart-rate lowering drugs used after acute myocardial infarction, in chronic heart failure, and in stable angina pectoris. Pathophysiological studies indicate that a relatively high HR has direct detrimental effects on the progression of coronary atherosclerosis, on the occurrence of myocardial ischemia and ventricular arrhythmias, and on left ventricular function. Studies have found a continuous increase in risk with HR above 60 beats/min. Although it may be difficult to define an optimal HR for a given individual, it seems desirable to maintain resting HR substantially below the traditionally defined tachycardia threshold of 90 or 100 beats/min. These findings suggest that the potential role of HR and its modulation should be considered in future cardiovascular guidance documents. abstract_id: PUBMED:16040619 Influence of heart rate, blood pressure, and beta-blocker dose on outcome and the differences in outcome between carvedilol and metoprolol tartrate in patients with chronic heart failure: results from the COMET trial. Aims: We studied the influence of heart rate (HR), systolic blood pressure (SBP), and beta-blocker dose on outcome in the 2599 out of 3029 patients in Carvedilol Or Metoprolol European Trial (COMET) who were alive and on study drug at 4 months after randomization (time of first visit on maintenance therapy). Methods And Results: By multivariable analysis, baseline HR, baseline SBP, and their change after 4 months were not independently related to subsequent outcome. In a multivariable analysis including clinical variables, HR above and SBP below the median value achieved at 4 months predicted subsequent increased mortality [relative risk (RR) for HR>68 b.p.m. 1.333; 95% confidence intervals (CI) 1.152-1.542; P<0.0001 and RR for SBP>120 mmHg 0.78; 95% CI 0.671-0.907; P<0.0013]. Achieving target beta-blocker dose was associated with a better outcome (RR 0.779; 95% CI 0.662-0.916; P<0.0025). The superiority of carvedilol as compared to metoprolol tartrate was maintained in a multivariable model (RR 0.767; 95% CI 0.663-0.887; P=0.0004) and there was no interaction with HR, SBP, or beta-blocker dose. Conclusion: Beta-blocker dose, HR, and SBP achieved during beta-blocker therapy have independent prognostic value in heart failure. None of these factors influenced the beneficial effects of carvedilol when compared with metoprolol tartrate at the pre-defined target doses used in COMET. 
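The COMET follow-up analysis above rests on multivariable survival modelling. A sketch of the shape of such an analysis, fitted with the lifelines package, is shown below; the data are simulated (not COMET data), and the covariate names and effect sizes are assumptions chosen only to echo the direction of the reported relative risks.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
hr_over_68 = rng.integers(0, 2, n)    # HR > 68 bpm at 4 months
sbp_over_120 = rng.integers(0, 2, n)  # SBP > 120 mmHg at 4 months
target_dose = rng.integers(0, 2, n)   # reached target beta-blocker dose

# Simulated log-hazard loosely echoing the reported directions:
# higher HR harmful, higher SBP and target dosing protective.
log_hazard = 0.3 * hr_over_68 - 0.25 * sbp_over_120 - 0.25 * target_dose
time = rng.exponential(48.0 * np.exp(-log_hazard))  # months of follow-up
died = time < 60.0                                  # administrative censoring at 5 years
time = np.minimum(time, 60.0)

df = pd.DataFrame({"months": time, "died": died.astype(int),
                   "hr_over_68": hr_over_68,
                   "sbp_over_120": sbp_over_120,
                   "target_dose": target_dose})
cph = CoxPHFitter().fit(df, duration_col="months", event_col="died")
cph.print_summary()  # hazard ratios with 95% CIs, analogous to the RRs quoted above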
Answer: Metoprolol tartrate at a dose of 50 mg bid and carvedilol at a dose of 25 mg bid have been found to have similar effects on 24-hour heart rate in patients with chronic heart failure. A study that analyzed 24-hour ECG Holter recordings from patients randomized to receive either of the two drugs showed that both medications produced a significant reduction in average 24-hour heart rate from baseline at 12 weeks and at 1 year. The pattern of suppression of heart rate during the 24-hour period was similar for both drugs. This suggests that the degree of beta1-blockade produced by these two drugs at the doses used is comparable (PUBMED:16043407).
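A minimal sketch of the within-patient contrast behind such Holter findings is given below, assuming per-patient 24-h mean heart rates at baseline and at 12 weeks. The values are invented, and the original study reported analysis of variance rather than this simple paired t-test.

from scipy import stats

# Invented 24-h mean HR (bpm) for eight patients, before and after titration
baseline = [88, 91, 85, 90, 87, 92, 84, 89]
week12 = [72, 74, 70, 71, 69, 75, 68, 73]

t, p = stats.ttest_rel(baseline, week12)
drop = sum(b - w for b, w in zip(baseline, week12)) / len(baseline)
print(f"mean fall = {drop:.1f} bpm, paired t = {t:.2f}, p = {p:.3g}")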
Instruction: Operations on the thoracic aorta and hypothermic circulatory arrest: is aprotinin safe? Abstracts: abstract_id: PUBMED:9451066 Operations on the thoracic aorta and hypothermic circulatory arrest: is aprotinin safe? Introduction: The safety of aprotinin, especially when used with profound hypothermic circulatory arrest, is still a matter of intense debate despite its presumed salutary effects on blood loss. Many investigators have reported toxic renal effects of high-dose aprotinin in such patients, but no prospective, randomized study has been conducted. To assess the potential detrimental effect of aprotinin on renal function and its putative reduction of blood loss, 50 patients undergoing thoracic aortic operations with the use of profound hypothermic circulatory arrest were randomly assigned to receive either low-dose aprotinin (1 x 10(6) kallikrein activation units) or placebo. Methods: The specific renal tubular markers beta-2-microglobulin and beta-N-acetyl-D-glucosaminidase, as well as serum creatinine and blood urea nitrogen, creatinine clearance, sodium excretion, and potassium excretion, were measured to evaluate renal function preoperatively, immediately after the procedure, and 24 hours and 48 hours later. Results: No statistically significant difference was found in any measured renal parameter between the two groups (analysis of variance). Renal dysfunction, defined as an elevation of serum creatinine early postoperatively (> or = 1.5 times the preoperative value), occurred in two patients who received aprotinin and in one patient in the control group. Temporary dialysis (hemodialysis or continuous venovenous hemofiltration) was needed in two patients in the aprotinin group versus one in the control group. Furthermore, patients treated with aprotinin had significantly less total postoperative blood loss (718 +/- 340 ml vs 920 +/- 387 ml, p = 0.04). The aprotinin recipients also had a significantly lower transfusion requirement (p < 0.05). Conclusion: This controlled trial of low-dose aprotinin in patients undergoing thoracic aortic operations using profound hypothermic circulatory arrest demonstrated no detectable deleterious effects on renal function; moreover, the use of aprotinin was associated with significantly lower need for transfusion. abstract_id: PUBMED:7685587 Renal dysfunction and intravascular coagulation with aprotinin and hypothermic circulatory arrest. High-dose aprotinin was used in 20 patients undergoing primary or repeat operations on the thoracic or thoracoabdominal aorta using cardiopulmonary bypass and hypothermic circulatory arrest. The activated clotting times immediately before the establishment of hypothermic circulatory arrest exceeded 700 seconds in all but 1 patient. Three patients (15%) required reoperation for bleeding. Seven patients died during hospitalization, and 5 had postmortem examination. Platelet-fibrin thrombi were present in multiple organs including the coronary arteries of 4 patients with myocardial infarction or failure, the pulmonary arteries of 2 patients, 1 of whom died of acute right ventricular failure, the brains of 2 patients who sustained a stroke, and the kidneys of 4 patients, 3 of whom had development of renal dysfunction. Renal dysfunction occurred in 13 patients (65%), and all were 65 years of age or older. Five of these patients required hemodialysis. 
Among 20 age-matched patients who had similar operations without aprotinin, there was one hospital death (5%) from myocardial infarction, and renal dysfunction developed in 1 patient (5%), who did not require dialysis. None of these 20 patients required reoperation for bleeding. Although aprotinin has been shown to reduce blood loss in patients having cardiac operations employing cardiopulmonary bypass, this benefit was not attained in this group of patients with thoracic aortic disease in whom hypothermic circulatory arrest was used. Use of aprotinin in elderly patients undergoing these procedures was associated with an increased risk of renal dysfunction and failure, and of myocardial infarction and death. abstract_id: PUBMED:8611985 Aprotinin use with hypothermic circulatory arrest for aortic valve and thoracic aortic surgery: renal function and early survival. Background And Aim Of Study: Aprotinin is widely used during high risk cardiac surgery to reduce blood loss. Concern has been expressed about the safety of aprotinin in association with hypothermic circulatory arrest for surgery of the thoracic aorta and aortic valve. Methods: A consecutive series of 19 patients undergoing surgery of the ascending aorta and/or the aortic arch using hypothermic circulatory arrest (15 - 20 degrees C) in conjunction with the use of aprotinin were studied prospectively from January 1993 to October 1994. The indications for operation were aortic dissection (n = 15) (11 acute) or annuloaortic ectasia (n = 4); 11 were emergency procedures. Ten patients underwent aortic valve replacement as part of a composite aortic root replacement and in seven patients aortic valve resuspension was possible. Results: Mean total chest tube drainage was 878 +/- 548 ml (range 300 - 2,000 ml) with a mean usage of homologous blood of 2,328 +/- 1,600 ml. All but one patient survived (mortality 5.3%). None of the survivors experienced any adverse cardiac or neurological events. Serum creatinine rose significantly from a mean of 102 +/- 17 micromol/L preoperatively, to a mean of 172 +/- 100 micromol/L postoperatively (p<0.05), however, none of the patients became anuric or required dialysis and all values returned to preoperative levels by six weeks after surgery. Median intensive care stay was two days (range 1 - 20 days) and the median postoperative hospital stay was 11 days (range 6 - 50 days). Conclusion: These data suggest that aprotinin in conjunction with hypothermic circulatory arrest for surgery of the thoracic aorta and aortic valve has no adverse effect on early survival. However, significant though transient postoperative renal dysfunction was commonly observed in our experience. abstract_id: PUBMED:20392457 Deep hypothermic circulatory arrest is not a risk factor for acute kidney injury in thoracic aortic surgery. Objective: Previous studies describe a high incidence of acute kidney injury after open thoracic aortic surgery. Findings may be confounded by patient selection, including surgery with deep hypothermic circulatory arrest only or emergency procedures. We studied incidence and risk factors of acute kidney injury in patients undergoing thoracic aortic surgery. Methods: We reviewed 851 patients undergoing elective thoracic aortic surgery with and without deep hypothermic circulatory arrest between 2000 and 2007, focusing on clinical outcome and acute kidney injury defined by consensus RIFLE (Risk, Injury, Failure, Loss of function, End-stage renal disease) criteria. 
Results: Mean age was 59±16 years; 29% were women. Surgical procedures included the aortic root or ascending aorta in 817 patients (96%), the aortic arch in 172 (20%), and the descending thoracic aorta in 54 (6%), with 20% reoperative procedures. Deep hypothermic circulatory arrest was used in 238 (28%). The incidence of postoperative acute kidney injury (all RIFLE classes) was 17.7%; 2.1% required renal replacement therapy. Mortality increased with RIFLE class severity of acute kidney injury (P<.001). Independent risk factors for acute kidney injury were increased age, elevated body mass index, hypertension, impaired left ventricular ejection fraction, preoperative anemia, and cardiopulmonary bypass duration. Deep hypothermic circulatory arrest, aprotinin use, and preoperative creatinine level were not independently associated with acute kidney injury. Conclusions: Thoracic aortic surgery can be performed with low rates of acute kidney injury, comparable to other cardiac surgical procedures. Deep hypothermic circulatory arrest and preoperative serum creatinine are not independent risk factors. RIFLE criteria allow comparison with previous studies and correlate well with patient outcome. Risk estimates for acute kidney injury require multivariable prediction. abstract_id: PUBMED:7715123 Pseudoaneurysm of saphenous vein graft after CABG. A 69-year-old man was admitted because of angina pectoris and a descending thoracic aortic aneurysm. Staged operations were planned. First, he underwent CABG (coronary artery bypass grafting) with SVGs (saphenous vein grafts) to #4 PD, #7 and #12. Aprotinin was administered to reduce blood loss. Routine postoperative graft angiography and enhanced CT showed a pseudoaneurysm in the SVG to #4 PD. We planned an elective operation of pseudoaneurysm repair and graft replacement of the descending thoracic aorta. In this second operation as well, continuous infusion of aprotinin was started after the induction of anesthesia. About 30 minutes later, he suddenly went into shock and cardiac arrest. Partial cardiopulmonary bypass was established and a median sternotomy was performed. In the mediastinum, no bleeding was found. We found a bleeding point of the SVG to #4 PD and a hemostatic clip on the right ventricle, and closed the bleeding point with a suture. The cause of the pseudoaneurysm seemed to be dislodgement of the hemostatic clip from a side branch of the SVG. The cause of the preoperative shock may have been anaphylaxis to readministered aprotinin. abstract_id: PUBMED:9263343 Coagulation disturbance in profound hypothermia: the influence of anti-fibrinolytic therapy. Extensive thoracic aortic resections often require a period of profoundly hypothermic circulatory arrest. The extent of surgical dissection, the damaging effects of cardiopulmonary bypass, and the coagulation disturbances of hypothermia predispose to bleeding. Although impervious vascular grafts and biological glues have made an important contribution to eliminating the vicious cycle of transfusion of stored blood and worsening coagulopathy, hemorrhage remains an important cause of morbidity in these patients. Thrombin generation by activation of the coagulation cascades also leads to excessive fibrinolytic activity with the potential to disrupt the hemostatic process. Pharmacological antifibrinolytic therapy with aprotinin or other agents has been shown to preserve hemostasis, but the efficacy of antifibrinolytic therapy remains unproven in thoracic aortic operations with hypothermic circulatory arrest.
This report discusses the interactions of hypothermia with the coagulation system, together with the efficacy of antifibrinolytic therapy in the light of existing surgical experience. abstract_id: PUBMED:15223414 Repair of aortic false aneurysm using deep hypothermia and circulatory arrest. Background: Aortic false aneurysms are a rare complication of surgery of the aorta that can occur several months to years after the initial operation. We reviewed our results with false aneurysm repair using deep hypothermia and circulatory arrest. Methods: Eleven patients were reoperated on for false aneurysm of either the ascending or descending thoracic aorta. Femorofemoral cardiopulmonary bypass with full-dose aprotinin and a heparinized system was used in all patients. Hypothermic circulatory arrest at an average of 20 degrees C was instituted in all patients for repair. Six patients had a patch repair with either polyethylene terephthalate fiber (Dacron) or bovine pericardium, 4 had tube replacement of the aorta, and 1 had primary repair of the defect. Results: Three patients had false aneurysm formation at a site of coarctation repair in the descending aorta, and the 8 others had false aneurysms in the ascending aorta at the site of a previous aortotomy. Six patients had proven infection as the cause; the causative agent was a Staphylococcus species in all cases. Mean cardiopulmonary bypass time was 178 +/- 51 minutes, and circulatory arrest time averaged 39 +/- 18 minutes. Operative mortality was 18% (2 of 11); the cause of death was cardiogenic shock in both patients. The mean time to extubation in survivors was 5 days, and the average time to discharge was 16 days. Conclusions: Although mediastinal infection is a common cause, aortic false aneurysms can be safely approached using femorofemoral cardiopulmonary bypass, hypothermic circulatory arrest, and patch repair with acceptable operative mortality and long-term survival. abstract_id: PUBMED:11139108 The effects of aprotinin on blood product transfusion associated with thoracic aortic surgery requiring deep hypothermic circulatory arrest. Objective: To compare the effects of aprotinin on blood product use and postoperative complications in patients undergoing thoracic aortic surgery requiring deep hypothermic circulatory arrest. Design: A retrospective study. Setting: A university hospital. Participants: Nineteen patients who underwent elective or urgent thoracic aortic surgery. Interventions: None. Measurements And Main Results: The total number of units of packed red blood cells, fresh frozen plasma, and platelets was significantly lower in the group that received aprotinin (p = 0.01, 0.04, and 0.01). The intraoperative transfusion of packed red blood cells and platelets, the collection and retransfusion of cell-saver blood, and the postoperative transfusion of fresh frozen plasma were also significantly lower in the aprotinin group (p = 0.01, 0.02, 0.01, and 0.05). No patient in either group sustained renal dysfunction or a myocardial infarction. Two patients who had not received aprotinin suffered from chronic postoperative seizures, and one patient who had received aprotinin sustained a perioperative stroke. Conclusions: Low-dose aprotinin administration significantly decreases blood product transfusion requirements in the setting of thoracic aortic surgery requiring deep hypothermic circulatory arrest, and it does not appear to be associated with renal or myocardial dysfunction.
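The transfusion comparisons in the retrospective study above are two-group tests on small samples. A sketch of one such comparison with SciPy follows; the per-patient unit counts are invented, since the abstract reports only p-values.

from scipy.stats import mannwhitneyu

# Invented per-patient packed red blood cell units for the two groups
aprotinin = [2, 3, 1, 4, 2, 3, 2, 1, 3]
no_aprotinin = [5, 7, 4, 6, 8, 5, 6, 9, 7, 5]

u, p = mannwhitneyu(aprotinin, no_aprotinin, alternative="two-sided")
print(f"Mann-Whitney U = {u}, p = {p:.4f}")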
abstract_id: PUBMED:9390585 Risk factors for acute postoperative renal failure in thoracic or thoracoabdominal aortic surgery: a prospective study. Unlabelled: Acute postoperative renal failure is a common complication of thoracic aorta, thoracoabdominal aorta, or aortic arch surgery. To identify variables associated with acute postoperative renal failure, we prospectively studied 475 consecutive patients undergoing thoracoabdominal aortic surgery over a 12-yr period, including those requiring emergent surgery. One hundred twenty-one (25%) patients developed acute postoperative renal failure, and 39 (8%) required hemodialysis. Using multivariate analysis, acute postoperative renal failure was significantly associated with the following variables: age >50 yr (odds ratio [OR] 2.90 [95% confidence interval 1.52-5.53]), preoperative serum creatinine >120 micromol/L (OR 2.76 [1.70-4.48]), duration of left kidney ischemia >30 min (OR 2.01 [1.27-3.17]), packed red cell administration >5 units (OR 2.04 [1.24-3.37]), and Cell-Saver administration >5 units (OR 2.31 [1.34-1.96]). Reimplantation of the visceral and renal arteries and the artery of Adamkiewicz; duration of visceral, spinal, and right kidney ischemia; requirement for fresh frozen plasma; administration of aprotinin; extracorporeal circulation; and procedures with circulatory arrest and profound hypothermia were not predictive of postoperative renal failure. In addition, age >50 yr (OR 5.59 [1.31-23.91]), requirement for packed red blood cells >5 units (OR 3.91 [1.58-9.67]), and preoperative serum creatinine concentration >120 micromol/L (OR 2.26 [1.13-4.53]) were independent factors for acute renal failure requiring hemodialysis. In conclusion, acute renal failure is often observed after thoracic aortic surgery. Numerous predictive factors must be considered when evaluating the etiology of this complication. Implications: Acute postoperative renal insufficiency is a common complication of thoracic aortic surgery. This study found that age >50 yr, preoperative renal dysfunction, duration of renal ischemia, and amount of blood transfusion are significant predictors of this complication. abstract_id: PUBMED:10391353 Anti-fibrinolytic therapy in thoracic aortic surgery. Background: Bleeding is an important cause of morbidity in thoracic aortic surgery. Methods: We reviewed the mechanisms of fibrinolysis in aortic surgery and the propensity for intervention. Several studies have addressed the safety and efficacy of aprotinin. Results: The endothelium regulates the balance between thrombosis and fibrinolysis. During hypothermic circulatory arrest, thrombin generation stimulates protein C production and tissue plasminogen activator release to promote fibrinolysis. Hypothermia also adversely affects platelet function and coagulation. Controversy exists regarding the effectiveness and dangers of antifibrinolytic agents after circulatory arrest. Conclusions: Fibrinolysis remains problematic during thoracic aortic aneurysm surgery. Heparin management is complicated by aprotinin, and insufficient heparin may result in thrombotic events. Aprotinin is safe during rewarming or postoperatively. Answer: The safety of aprotinin in the context of operations on the thoracic aorta involving hypothermic circulatory arrest has been explored in various studies, with mixed results.
One controlled trial of low-dose aprotinin in patients undergoing thoracic aortic operations with profound hypothermic circulatory arrest found no detectable deleterious effects on renal function. Furthermore, the use of aprotinin was associated with significantly lower postoperative blood loss and transfusion requirements (PUBMED:9451066). However, another study reported that high-dose aprotinin use in patients undergoing similar procedures was associated with an increased risk of renal dysfunction and failure, myocardial infarction, and death, especially in elderly patients (PUBMED:7685587). A study focusing on aprotinin use with hypothermic circulatory arrest for aortic valve and thoracic aortic surgery observed significant though transient postoperative renal dysfunction, but no adverse effect on early survival was noted (PUBMED:8611985). Another study concluded that deep hypothermic circulatory arrest is not a risk factor for acute kidney injury in thoracic aortic surgery and that aprotinin use was not independently associated with acute kidney injury (PUBMED:20392457). A retrospective study found that low-dose aprotinin administration significantly decreases blood product transfusion requirements in thoracic aortic surgery requiring deep hypothermic circulatory arrest without being associated with renal or myocardial dysfunction (PUBMED:11139108). In summary, the safety of aprotinin in thoracic aortic surgery with hypothermic circulatory arrest appears to be dose-dependent and may vary based on patient characteristics such as age. While some studies suggest that low-dose aprotinin can be used safely and effectively to reduce blood loss and transfusion requirements, high-dose aprotinin has been associated with increased risks, particularly in older patients. It is important to consider individual patient factors and the specifics of the surgical procedure when evaluating the safety of aprotinin in this context.
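The contrast drawn above from the high-dose series, renal dysfunction in 13 of 20 aprotinin patients versus 1 of 20 age-matched controls, can be checked directly with a Fisher exact test. A minimal SciPy sketch using the counts quoted in the abstract:

from scipy.stats import fisher_exact

# Rows: aprotinin vs no aprotinin; columns: renal dysfunction yes / no
table = [[13, 7],   # 13 of 20 aprotinin patients (65%)
         [1, 19]]   # 1 of 20 matched controls (5%)

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.2g}")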
Instruction: Do we agree about when patients are psychotic? Abstracts: abstract_id: PUBMED:18759806 Do we agree about when patients are psychotic? Objective: To investigate the use of the term 'psychotic', as defined by ICD-10 or by the concept of impaired reality testing, among psychiatric staff members. Method: Questionnaire investigation using 11 short case vignettes. Results: Responses were received from 266 psychiatric staff members: psychiatrists, nursing staff and psychologists. When using ICD-10, patients were identified as psychotic with a sensitivity ranging from 90% to 55%. Specificity ranged from 60% to 75%. According to the concept of impaired reality testing, all three groups showed a sensitivity of about 60%, whereas specificity ranged from 65% to 50%. The combined use of the terms correlated significantly with responses regarding the indication for legal detention for psychiatrists and nursing staff. Conclusion: In identifying a patient as 'psychotic', a broad concept of impaired reality testing was widely used, particularly in cases with legal issues. Psychotic symptoms, however, were identified with high sensitivity and specificity. abstract_id: PUBMED:34338130 Dosing antipsychotics in special populations of patients with schizophrenia: severe psychotic agitation, first psychotic episode and elderly patients. Antipsychotic (AP) dosing is well established in nonelderly patients with acute exacerbations of schizophrenia, but not in special populations. This review describes the AP dosing procedures that have been used in clinical studies for acute psychotic agitation, a first episode of psychosis (FEP), and elderly patients. AP dosing data were extracted from the databases of drug regulatory authorities and from clinical studies available in the medical literature. In acute psychotic agitation, intramuscular and oral APs are frequently prescribed in higher doses than those that saturate D2 receptors. Supersaturating doses of APs should be avoided due to an increased risk of adverse effects. In FEP, many studies showed efficacy of low doses of APs. Studies with risperidone and haloperidol suggested a dose reduction of approximately one third. Titration with a lower starting dose is recommended in elderly patients, due to possible decreases in pharmacokinetic clearance and due to the risk of concomitant diseases and drug interactions. Exposure to some APs has been associated with QTc prolongation and arrhythmias, and a small but significant increase in the risk of stroke and mortality with APs has been seen, particularly in older people with dementia-related psychosis. abstract_id: PUBMED:25815261 Associative stigma in family members of psychotic patients in Flanders: An exploratory study. Aim: To assess the presence and severity of associative stigma in family members of psychotic patients and to identify factors associated with higher associative stigma. Methods: Standardized semi-structured interviews of 150 family members of psychotic patients receiving full-time treatment. This study on associative stigma in family members of psychotic patients was part of a larger research program on the burden of the family, using the "Interview for the Burden of the Family" and the chapters stigma, treatment and attribution from the "Family Interview Schedule". The respondents were relatives, one per patient, either partner or parent. The patients had been diagnosed with schizophrenia or schizo-affective disorder. All contacts with patients and relatives were in Dutch.
Relatives were deemed suitable to participate in this research if they saw the patient at least once a week. Recruitment took place in a standardized way: after obtaining the patient's consent, the relatives were approached to participate. The results were analyzed using SPSS Version 18.0. Results: The prevalence of associative stigma in this sample is 86%. Feelings of depression are prominent in the majority of family members. Twenty-one point three percent experienced guilt more or less frequently, while shame was less pronounced. Also, 18.6% of all respondents indicated that they tried to hide the illness of their family member from others regularly or more often. Thirty-six point seven percent kept it truly secret in certain circumstances, and 29.3% made efforts to explain what the situation or psychiatric condition of their family member is really like. Factors with marked significance for higher associative stigma were a worsened relationship between the patient and the family member, conduct problems towards family members, the patient's residence in a residential care setting, and hereditary attributional factors like genetic heritability and character. The level of associative stigma was significantly predicted by the burden of aggressive disruptions to family housemates of the psychotic patient. Conclusion: Family members of psychotic patients in Flanders experience higher associative stigma compared with previous international research. Disruptive behavior by the patient towards co-residing family members is the most accurate predictor of higher associative stigma. abstract_id: PUBMED:35705753 AGREE-S: AGREE II extension for surgical interventions: appraisal instrument. Background: The Appraisal of Guidelines for Research and Evaluation (AGREE) II instrument was developed to evaluate the quality of clinical practice guidelines. Evidence suggests that the development, reporting, and appraisal of guidelines on surgical interventions may be better informed by modification of the instrument. Objective: We aimed to develop an AGREE II extension specifically designed for the appraisal of guidelines on surgical interventions. Methods: In a three-part project funded by United European Gastroenterology and the European Association for Endoscopic Surgery, (i) we identified factors that were associated with higher quality of surgical guidelines, (ii) we statistically calibrated the AGREE II instrument in the context of surgical guidelines using correlation, reliability, and factor analysis, and (iii) we undertook a Delphi consensus process of stakeholders to inform the development of an AGREE II extension instrument for surgical interventions. Results: Several features were prioritized by stakeholders as of particular importance for guidelines on surgical interventions, including the development of a guideline protocol, consideration of practice variability and surgical expertise in different settings, and specification of the infrastructures required to implement the recommendations. The AGREE-S instrument (the AGREE II extension for surgical interventions) has 25 items, compared with the 23 items of the original AGREE II instrument, organized into the following 6 domains: Scope and purpose, Stakeholders, Evidence synthesis, Development of recommendations, Editorial independence, and Implementation and update. Like the original instrument, it concludes with an overall appraisal of the quality of the guideline and a judgement on whether the guideline is recommended for use.
Several items were amended and rearranged among domains, and an item was deleted. The Rigor of Development domain of the original AGREE II was divided into Evidence Synthesis and Development of Recommendations. Items of the AGREE II domain Clarity of Presentation were incorporated in the new domain Development of Recommendations. Three new items were introduced, addressing the development of a guideline protocol, support by a guideline methodologist, and consideration of surgical experience/expertise. Conclusion: The AGREE-S appraisal instrument has been developed to be used for assessment of the methodological and reporting quality of guidelines on surgical interventions. abstract_id: PUBMED:38147388 Initial mental disorders in adolescent patients with psychotic forms of schizophrenia. Objective: To clarify the phenomenology and dynamics of initial mental disorders in patients with psychotic pubertal forms of schizophrenia, and to identify the features of these disorders depending on sex. Material And Methods: The study material was adolescent patients with psychotic forms of schizophrenia. Forty-three people were examined (44.2% female; median age 14.3 [13.0; 16.8] years). Clinical-anamnestic, clinical-psychopathological and clinical-statistical methods were used. Results: The duration of the initial stage of the disease ranged from several hours to 5 years (median 0.8 [0.5; 1.7] years). The structure of initial disorders was dominated by nonspecific mental disorders characterized by a combination of symptoms of non-psychotic and subpsychotic levels (20 patients, 43.5%; including 12 males, 26%, and 8 females, 24.1%). Whereas behavioral disorders and episodes of psychoactive substance use were detected significantly more often in boys, dysmorphic phenomena were more characteristic of girls (p<0.05). According to their dynamics, acute (n=14; 32%), subacute (n=21; 49%) and fluctuating (n=8; 19%) variants of initial disorders were distinguished. Young men turned to a psychiatrist in a more timely manner (already at the prodromal stage), while girls sought psychiatric help only at the stage of development of psychotic disorders. Conclusion: A psychotic episode in adolescents suffering from schizophrenia developed acutely in a third of cases. In the vast majority of cases, the initial mental disorders that appeared on the eve of acute psychosis were subacute or followed a fluctuating course, characterized by clinical polymorphism and multidirectionality. Differences between boys and girls were revealed in the occurrence of a number of initial disorders, as well as in the timeliness of seeking psychiatric help. abstract_id: PUBMED:29426253 Symptom Profile of Psychiatric Patients With Psychosis or Psychotic Mood Disorder in Prison. There is evidence that psychiatric patients with psychotic or manic disorders who are incarcerated suffer from the same symptoms as psychiatric patients who are treated in the community. There are also indications that their symptoms might be more severe. The aim of this study was to examine the severity of psychotic and manic symptoms, as well as to collect information about the emotional functioning of patients admitted to a prison psychiatric ward. Incarcerated patients with a diagnosis of a psychotic or a manic disorder were examined with the Brief Psychiatric Rating Scale-Expanded (BPRS-E). With the scores of 140 assessments, a symptom profile was created using the domains of the BPRS-E.
This profile was compared with the clinical profile of three nonincarcerated patient groups described in the literature with a diagnosis in the same spectrum. We found high scores on positive and manic psychotic symptoms and hostility, and low scores on guilt, depression, and negative symptoms. High scores on manic and psychotic symptoms are often accompanied by violent behavior. Low scores on guilt, depression, and negative symptoms could be indicative of externalizing coping skills. These characteristics could complicate treatment in the community and warrant further research along with clinical consideration. abstract_id: PUBMED:31516455 Multimodal treatment for smoking cessation with varenicline in alcoholic, methadone-maintained, and psychotic patients: A one-year follow-up. Introduction: Numerous studies have evaluated the efficacy and safety of varenicline for smoking cessation in smokers in the general population and, to a lesser extent, among the psychiatric population. However, few studies have evaluated varenicline in patients with other addictions. The present study was conducted to assess the outcomes of a multimodal smoking cessation intervention with varenicline in a sample of patients with alcohol and substance use disorders and patients with psychotic disorders. Methods: This was a prospective, multicenter study. The patient sample comprised alcoholics in remission, methadone-maintained patients, and patients with psychotic disorders, all of whom wanted to stop smoking. All participants received multimodal smoking cessation treatment (psychological therapy plus varenicline). Smoking abstinence and changes in the psychopathological state of patients were assessed at predefined time points during a 12-month follow-up. The probability of tobacco abstinence after one year of treatment was computed using Kaplan-Meier life tables. Results: The probability of abstinence at one year was 0.225 (95% CI: 0.143-0.319). By group, the probabilities were as follows: patients with psychotic disorders 0.254 (95% CI: 0.118-0.415); alcoholics 0.237 (95% CI: 0.098-0.409); and methadone-maintained patients 0.177 (95% CI: 0.065-0.335). Patients with previous quit attempts had a higher probability of achieving abstinence at one year (p<0.01). Conclusions: The results of this study support the use of multimodal treatment with varenicline in patients with alcohol addiction in remission, patients on methadone maintenance, and patients with stable psychotic disorders. Previous smoking cessation attempts were predictive of smoking cessation success in these patients. abstract_id: PUBMED:36928629 Psychotic symptoms in patients with mild intellectual disability: follow an integrative approach. In the Netherlands, approximately 1 in 8 citizens (2.2 million) have an IQ between 70 and 85. The combination of a low IQ (<70) and low adaptive functioning is referred to as mild intellectual disability (ID). People with ID are vulnerable to stress and psychological problems. The diagnosis and treatment of psychotic symptoms in patients with mild ID can prove to be a challenge. In this clinical lesson we present two different cases. We encourage physicians to adhere to a holistic approach. Furthermore, the AAIDD model may help to structure the analysis of psychotic symptoms. This enables finding the right treatment, which may include medical treatment, an emphasis on context improvement, or a combination of both.
Upon treatment initiation with antipsychotics, it is important to start low and go slow with the dosage. If the context is the main problem, it is important to taper off the use of antipsychotics. abstract_id: PUBMED:36598576 What information can we gain from the quality appraisal of guidelines with physical activity recommendations for cancer patients? A systematic review using the AGREE II and AGREE-REX tools. Purpose: There has been a growing amount of evidence supporting the benefits of physical activity (PA) for oncological patients' cancer-related health outcomes. Although guidelines on cancer rehabilitation are widely available, their varying quality and practical applicability have limited the clinical application of PA recommendations. To assist the future development of guidelines, in this systematic review we evaluated the quality and applicability of current cancer rehabilitation guidelines with PA recommendations and synthesized PA recommendations for the oncological population. Methods: A systematic search was conducted in PubMed, CINAHL, PEDro, EMBASE, and guideline repositories to identify guidelines with PA recommendations for cancer patients from 1 May 2016 to 1 June 2022. The quality of the included guidelines was appraised using the tools "Appraisal of Guidelines for Research and Evaluation II" (AGREE II) and AGREE-REX (Recommendation Excellence). PA recommendations were synthesized from the guidelines. Results: Sixteen guidelines were extracted. The AGREE II domain "clarity of presentation" obtained the highest score, while "applicability" received the lowest, ranging from 33.33% to 98.58%. The AGREE-REX domains "values and preferences" and "implementability" generally scored lower, ranging from 45.83% to 74.17% and 55% to 88.33%, respectively. Eight high-quality guidelines were identified, and the included PA recommendations were extracted. Conclusion: There were some disparities in the quality of the included guidelines. Methodological weaknesses were commonly observed in the domains "applicability," "values and preferences," and "implementability"; particular attention should be given to these domains when developing future guidelines. Furthermore, this analysis indicated that more rigorous, high-quality studies are needed to generate evidence supporting PA recommendations and to provide guidance on research gaps in the field of cancer rehabilitation. abstract_id: PUBMED:27335517 Third-person Diagnostic Interview on the Cognitive Insight Level of Psychotic Patients with an Insight at the Denial Level. Objectives: According to previous findings, the third-person technique improved the clinical insight of psychotic patients; therefore, the present study aimed to examine the effect of a third-person interview, compared with a first-person interview, on the level of cognitive insight of psychotic patients with an insight at the denial level. Materials And Methods: In this study, using interviews and questionnaires, a total of 44 patients of the Razi Psychiatric Educational and Treatment Center with an insight at the denial level, as assessed using diagnostic interviews, were randomly divided into two groups. The two groups' cognitive insights were then evaluated using the Beck Cognitive Insight Scale.
Results: The findings indicated that in psychotic patients with an insight at the denial level, the third-person interview technique, compared to the first-person technique, had only a slight effect on the improvement of overall cognitive insight and its components, including self-reflection and self-assurance; this effect was not strong enough to make a significant difference between the two groups of patients. Conclusion: According to the study findings, we can conclude that the third-person interview, compared to the first-person interview, has no effect on the improvement of the cognitive insight of psychotic patients with an insight at the denial level. This finding is consistent with previous studies indicating that although theory of mind has some correlation with the clinical insight of patients, it has no effect on their cognitive insight. Answer: The question "Do we agree about when patients are psychotic?" refers to the consensus among psychiatric staff members on the identification of psychosis in patients. According to a study investigating the use of the term 'psychotic' as defined by ICD-10 or by the concept of impaired reality testing among psychiatric staff members, there is variability in how the term is applied. When using ICD-10 criteria, patients were identified as psychotic with a sensitivity ranging from 90% to 55% and specificity ranging from 60% to 75%. According to the concept of impaired reality testing, sensitivity was about 60%, while specificity ranged from 65% to 50%. The combined use of the terms correlated significantly with responses regarding the indication for legal detention for psychiatrists and nursing staff. This suggests that while psychotic symptoms can be identified with relatively high sensitivity and specificity, there is a broad use of the concept of impaired reality testing, particularly in cases with legal issues (PUBMED:18759806). Therefore, the answer to the question is that there is not complete agreement among psychiatric staff members about when patients are psychotic, as evidenced by the variability in sensitivity and specificity in identifying psychosis using different criteria. The study highlights the need for a more standardized approach to the identification of psychosis to ensure consistency across different practitioners and settings.
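For readers weighing the figures in this answer, it may help to recall the standard definitions of the two measures. The worked numbers in the comment below are illustrative only and are not taken from the study.

```latex
\[
\text{sensitivity} = \frac{TP}{TP + FN}, \qquad
\text{specificity} = \frac{TN}{TN + FP}
\]
% Illustration (hypothetical counts): a rater who labels 18 of 20 truly
% psychotic patients as psychotic (TP = 18, FN = 2) has sensitivity
% 18/20 = 90%; if the same rater labels 12 of 20 non-psychotic patients
% as not psychotic (TN = 12, FP = 8), specificity is 12/20 = 60%.
```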
Instruction: Increase in alcohol related deaths: is hepatitis C a factor? Abstracts: abstract_id: PUBMED:12195003 Increase in alcohol related deaths: is hepatitis C a factor? Aim: To evaluate recent trends in alcohol-related deaths in the UK and to consider possible causative factors. Design: Observational retrospective study of the database of the Office for National Statistics, alcohol consumption data reported by the General Household Survey, and other published data. Setting: England, 1993-9. Results: Deaths per million population from alcohol-related illness increased by 59% in men and 40% in women over the years 1993 to 1999. One subgroup of alcohol-related deaths, ICD 571.3 (alcoholic liver damage unspecified), showed a 243% increase in men aged 40 to 49 years over the same period. Figures for younger men, and women in all age groups, showed less pronounced increases. There has been no associated rise in alcohol intake. There has been an increase in the incidence of hepatitis C virus (HCV) infection in recent years, and alcohol consumption in HCV positive individuals accelerates the progression to cirrhosis. Circumstantial evidence links the rise in HCV infection to the use of illicit drugs in the 1970s and 1980s, among those currently aged 40 to 59 years. Conclusions: The recent increase in alcohol-related deaths cannot be solely explained by a change in drinking habits. It is suggested that this probably results from the rapid progression of alcoholic cirrhosis in people who have acquired HCV infection through intravenous drug use. Alcohol consumption in HCV positive individuals is firmly linked with a poor outcome. abstract_id: PUBMED:36647412 Liver Transplantation in Alcohol-related Liver Disease and Alcohol-related Hepatitis. Alcohol-related liver disease (ARLD) remains one of the leading causes of chronic liver disease, and the prevalence of alcohol-related cirrhosis is still increasing worldwide. Thus, ARLD is one of the leading indications for liver transplantation (LT) worldwide, especially after the arrival of direct-acting antivirals for chronic hepatitis C infection. Despite the risk of alcohol relapse, the outcomes of LT for ARLD are as good as for other indications such as hepatocellular carcinoma (HCC), with 1-, 5-, and 10-year survival rates of 85%, 74%, and 59%, respectively. Despite these good results, certain questions concerning LT for ARLD remain unanswered, in particular because of persistent organ shortages. As a result, too many transplantation centers continue to require 6 months of abstinence from alcohol for patients with ARLD before LT to reduce the risk of alcohol relapse, even though compelling data show the poor prognostic value of this criterion. A recent pilot study even observed a lower alcohol relapse rate in patients receiving LT after less than 6 months of abstinence as long as addictological follow-up is reinforced. Thus, the question should not be whether LT should be offered to patients with ARLD but how to select patients who will benefit from this treatment. abstract_id: PUBMED:36843895 Alcohol-related medicosocial problems and liver disorders: Burden of alcoholic cirrhosis and hepatocellular carcinoma in Turkiye. The World Health Organization 2018 report stated that 2.3 billion persons over 15 years old consume alcohol, and a total of 3.0-3.3 million people died because of uncontrolled or harmful alcohol intake in 2016.
Injuries, accidents, liver cirrhosis, and other medical disorders are mainly responsible for alcohol-related disability and deaths. After emphasizing the importance of alcohol-related disorders and necessary universal precautions, we focus on alcohol consumption features and alcohol-related cirrhosis and hepatocellular carcinoma in Turkiye. It is estimated that alcohol per se is responsible for 12% of cirrhosis and 10% of hepatocellular carcinoma cases. Additional factors such as hepatitis B virus and hepatitis C virus infections have markedly increased the risk of the development of hepatocellular carcinoma in alcoholic cirrhosis. abstract_id: PUBMED:30755830 Identification of special key genes for alcohol-related hepatocellular carcinoma through bioinformatic analysis. Background: Alcohol-related hepatocellular carcinoma (HCC) was reported to be diagnosed at a later stage, but the mechanism was unknown. This study aimed to identify special key genes (SKGs) during alcohol-related HCC development and progression. Methods: The mRNA data of 369 HCC patients and the clinical information were downloaded from the Cancer Genome Atlas project (TCGA). The 310 patients with certain HCC-related risk factors were included for analysis and divided into seven groups according to the risk factors. Survival analyses were applied for the HCC patients of the different groups. The patients with hepatitis B virus or hepatitis C virus infection only were combined into the HCC-V group for further analysis. The differentially expressed genes (DEGs) between the HCCs with alcohol consumption only (HCC-A) and HCC-V tumors were identified using the limma package in R with the cutoff criteria |log2 fold change (logFC)| > 1.0 and p < 0.05. The DEGs between eight alcohol-related HCCs and their paired normal livers of GSE59259 from the Gene Expression Omnibus (GEO) were identified through GEO2R (a built-in tool in the GEO database) with the cutoff criteria |logFC| > 2.0 and adj.p < 0.05. The intersection of the two sets of DEGs was taken as the SKGs, which were then investigated for their specificity through comparisons between HCC-A and the other four HCC groups. The SKGs were analyzed for their correlations with HCC-A stage and grade and their prognostic power for HCC-A patients. The expression differences of the SKGs in the HCC cohort as a whole were also investigated through Gene Expression Profiling Interactive Analysis (GEPIA). The SKGs in HCC were validated through Oncomine database analysis. Results: Pathological stage is an independent prognostic factor for HCC patients. HCC-A patients were diagnosed later than HCC patients with other risk factors. Ten SKGs were identified, and nine of them were confirmed for their differences in paired samples of HCC-A patients. Three (SLC22A10, CD5L, and UROC1) and four (SLC22A10, UROC1, CSAG3, and CSMD1) confirmed genes were correlated with HCC-A stage and grade, respectively. SPP2 had a lower trend in HCC-A tumors and was negatively correlated with HCC-A stage and grade. Each of the SKGs was differentially expressed between HCC-A and at least one of the other HCC groups. CD5L was identified as a favorable prognostic factor for overall survival, and CSMD1 as an unfavorable prognostic factor for disease-free survival, for HCC-A patients and for HCC patients as a whole. Through Oncomine database analysis, the dysregulations of the SKGs in HCC and their clinical significance were confirmed. Conclusion: The poor prognosis of HCC-A patients might be due to their later diagnosis.
The SKGs, especially the four stage-correlated genes (CD5L, SLC22A10, UROC1, and SPP2), might play important roles in HCC development, especially alcohol-related HCC development and progression. CD5L might be useful for overall survival and CSMD1 for disease-free survival prediction in HCC, especially alcohol-related HCC. abstract_id: PUBMED:34386231 Impact of alcohol abstinence on survival after hepatic resection for hepatocellular carcinoma in patients with alcohol-related liver disease. Background: This study aimed to evaluate the prognostic impact of alcohol abstinence on survival after hepatic resection for hepatocellular carcinoma (HCC) in patients with alcohol-related liver disease (ALD). Patients And Methods: In total, 92 patients with ALD-HCC who underwent initial and curative hepatic resection were identified, including 56 and 36 patients with and without alcohol abstinence, respectively. Results: The 3-, 5-, and 7-year recurrence-free survival (RFS) rates were 46%, 43%, and 37% in the abstinence group, and 61%, 36%, and 36% in the non-abstinence group, respectively (p = 0.71). The 3-, 5-, and 7-year overall survival (OS) rates were 91%, 76%, and 66% in the abstinence group, and 87%, 57%, and 44% in the non-abstinence group, respectively (p = 0.023). Multivariate analysis revealed that non-abstinence was an independent prognostic factor for OS (P = 0.026). The incidence rates of liver-related death, including HCC-specific death, liver failure, and renal failure in cirrhosis (hepatorenal syndrome), in the non-abstinence and abstinence groups were 41.7% vs. 19.6% (p = 0.032). Worsening of the Child-Pugh grade at intrahepatic recurrence was more frequently observed in the non-abstinence group (33.3%) than in the abstinence group (6.5%) (p = 0.039). Conclusions: Alcohol abstinence might improve the long-term survival of patients with ALD-HCC undergoing hepatic resection. abstract_id: PUBMED:20472416 Hospitalisation for an alcohol-related cause among injecting drug users in Scotland: increased risk following diagnosis with hepatitis C infection. Background: The rate of hepatitis C (HCV) related liver disease progression is known to be strongly associated with alcohol consumption, yet there are very few data on alcohol use in injecting drug users (IDUs), who represent 90% of Scotland's HCV-diagnosed population. To investigate the extent of alcohol use in IDUs, we used hospitalisation with an alcohol-related diagnosis as an indicator for problematic consumption levels, and compared admission rates pre- and post-HCV diagnosis. Methods: Data for 41,062 current/former IDUs attending drug treatment/support services in Scotland from April 1995 to March 2006 were linked to the national hospital discharge database to retrieve alcohol-related episodes, and to the national HCV Diagnosis database to determine HCV-diagnosed status. Relative risks were estimated using Cox proportional hazards regression for recurrent events. Results: The proportion of IDUs with ≥1 alcohol-related admission following first attendance at drug services was greater among those diagnosed with HCV by the end of follow-up (16%) compared with those who were not (6%). For the 9145 IDUs who had been diagnosed with HCV by 31 March 2006, there was a 1.5-fold increased relative risk of an alcohol-related admission >30 days post-HCV diagnosis (95% CI: 1.2-1.7) compared with >30 days pre-HCV diagnosis, adjusted for sex, age, and deprivation.
Conclusions: IDUs diagnosed with HCV infection have an increased risk of subsequent hospital admission for an alcohol-related cause. Because of the synergistic effect of HCV infection and excessive alcohol intake on the development of cirrhosis, it is imperative that alcohol intake be addressed in the management of chronic HCV infection in this population. abstract_id: PUBMED:23237980 The contributions of viral hepatitis and alcohol to liver-related deaths in opioid-dependent people. Background: Mortality rates are elevated among heroin-dependent populations compared to the general population. Liver disease is emerging as an important contributor to mortality as the heroin-dependent population ages. Two major risk factors for liver disease are hepatitis C virus infection and chronic heavy alcohol use. Both of these are highly prevalent among heroin-dependent people, but their relative contribution to liver-related mortality is poorly understood. Methods: Data recording all prescriptions of opioid substitution treatment in New South Wales, Australia, 1997-2005, were linked to the National Death Index. Crude and standardised mortality rates and standardised mortality ratios were calculated for liver-related and other major causes of death. Frequency counts were obtained for viral hepatitis and alcohol mentions in underlying liver deaths. Results: There were 208 underlying liver deaths, for a crude mortality rate (CMR) of 72.4 per 100,000 person-years (95% CI 62.9, 82.9), and liver deaths occurred at 9.8 times the general population rate (95% CI 8.5, 11.2). There were increases in liver-related mortality over time. Viral hepatitis was mentioned in three-quarters (n=156, 76%), and alcohol in 43% (n=90), of underlying liver deaths. Conclusions: Liver-related deaths were shown to be increasing in this heroin-dependent population, and the majority of these deaths involved chronic viral hepatitis infection. Increased uptake of treatment for hepatitis C virus infection is crucial to reducing the burden of liver-related mortality in this population. Hepatitis B vaccination, screening of OST patients for alcohol use disorders, and delivery of brief interventions as clinically indicated may also be of benefit. abstract_id: PUBMED:29754030 Receipt of alcohol-related care among patients with HCV and unhealthy alcohol use. Background: Alcohol use, particularly unhealthy alcohol use, exacerbates risks associated with hepatitis C virus (HCV). However, whether unhealthy alcohol use is appropriately addressed among HCV+ patients is understudied. We examined receipt of alcohol-related care among HCV+ patients with unhealthy alcohol use. Methods: All positive alcohol screens (AUDIT-C score ≥5) documented 10/01/09-5/30/13 were identified from national electronic health records data from the Veterans Health Administration (VA). Regression models estimated unadjusted and adjusted proportions of HCV+ and HCV- patients receiving 1) brief intervention within 14 days of positive screening, 2) specialty addictions treatment, and 3) pharmacotherapy for alcohol use disorder (AUD) in the year following positive screening. Adjusted models included demographics, alcohol use severity, and mental health and substance use disorder comorbidities. Results: Among 830,825 VA outpatients with positive alcohol screening, 31,841 were HCV+.
Among HCV+ patients, unadjusted and adjusted prevalences were 69.2% (CI, 68.7-69.6) and 71.9% (CI, 71.4-72.4) for brief intervention, 29.9% (CI, 29.4-30.4) and 12.7% (CI, 12.5-12.9) for specialty addictions treatment, and 5.9% (CI, 5.7-6.1) and 3.3% (CI, 3.1-3.4) for pharmacotherapy, respectively. Among the 20,320 (64%) patients with HCV and documented AUD, unadjusted and adjusted prevalences were 40.0% (CI, 39.3-40.6) and 26.7% (CI, 26.3-27.1) for specialty addictions treatment and 8.1% (CI, 7.7-8.4) and 6.4% (CI, 6.1-6.6) for pharmacotherapy, respectively. Receipt of alcohol-related care was generally similar across HCV status. Conclusions: Findings highlight under-receipt of recommended alcohol-related care, particularly pharmacotherapy, among patients with HCV and unhealthy alcohol use, who are especially vulnerable to the adverse influences of alcohol use. abstract_id: PUBMED:24112091 Chronic hepatitis C virus infection is associated with all-cause and liver-related mortality in a cohort of HIV-infected patients with alcohol problems. Aims: To assess the association between hepatitis C virus (HCV) infection and overall and liver-related death in human immunodeficiency virus (HIV)-infected patients with alcohol problems. Design: We analyzed data from a cohort of HIV-infected adults with current or past alcohol problems enrolled between 2001 and 2003, searching for causes of death until 2010 using the National Death Index. Setting And Participants: Participants were HIV-infected adults with current or past alcohol problems, recruited in Boston, MA from HIV clinics at two hospitals, homeless shelters, drug treatment programs, subject referrals, flyers, and another cohort study with comparable recruitment sites. Measurements: The primary and secondary outcomes were all-cause and liver-related mortality, respectively. The main independent variable was hepatitis C virus (HCV) RNA status (positive versus negative). Mortality rates and Kaplan-Meier survival curves were calculated by HCV status for both overall and liver-related mortality. Cox proportional hazards models were used to assess the association between HCV infection and overall and liver-related death, adjusting for alcohol and drug use over time. Findings: A total of 397 adults (50% HCV-infected) were included. As of 31 December 2009, 83 cohort participants had died (60 HCV-infected, 23 HCV-uninfected; log-rank test P < 0.001), and 26 of those deaths were liver-related (21 HCV-infected, five HCV-uninfected; log-rank test P < 0.001). All-cause and liver-related mortality rates were 4.68 and 1.64 deaths per 100 person-years for HCV-infected patients and 1.65 and 0.36 per 100 person-years for those without HCV, respectively. In the fully adjusted Cox model, HCV infection was associated with both overall [hazard ratio (HR) = 2.55, 95% confidence interval (CI) = 1.50-4.33, P < 0.01] and liver-related mortality [HR = 3.24, 95% CI = 1.18-8.94, P = 0.02]. Conclusion: Hepatitis C virus infection is associated independently with all-cause and liver-related mortality in human immunodeficiency virus-infected patients with alcohol problems, even when accounting for alcohol and other drug use. abstract_id: PUBMED:26500036 Clinical impact of alcohol-related cirrhosis in the next decade: estimates based on current epidemiological trends in the United States. Background: Identifying changes in the epidemiology of liver disease is critical for establishing healthcare priorities and allocating resources to develop therapies.
The projected contribution of different etiologies toward the development of cirrhosis in the United States was estimated based on current publications on epidemiological data and advances in therapy. Given the heterogeneity of published reports and differing interpretations that are not always reconcilable, a critical overview of the existing data and projections for the next decade was performed rather than a formal meta-analysis. Methods: Data from the World Health Organization Global Status Report on Alcohol and Health of 2014, the Scientific Registry of Transplant Recipients from 1999 to 2012, the National Institute on Alcohol Abuse and Alcoholism, and the Centers for Disease Control and Prevention were examined to determine future changes in the epidemiology of liver disease. Results: Alcohol consumption has increased over the past 60 years. In 2010, transplant-related costs for liver recipients were the highest for hepatitis C (~$124 million), followed by alcohol-related cirrhosis (~$86 million). We anticipate a significant reduction in the incidence of cirrhosis due to causes other than alcohol because of the availability of highly effective antiviral agents for hepatitis C, universal and effective vaccination for hepatitis B, relative stabilization of the obesity trends in the United States, and novel, potentially effective therapies for nonalcoholic steatohepatitis. The proportion of alcohol-related liver disease is therefore likely to increase in both the population as a whole and the liver transplant wait list. Conclusions: Alcohol-related cirrhosis and alcohol-related liver disorders will be the major cause of liver disease in the coming decades. There is an urgent need to allocate resources aimed toward understanding the pathogenesis of the disease and its complications so that effective therapies can be developed. Answer: Yes, hepatitis C is a factor in the increase of alcohol-related deaths. The observational retrospective study of the database of the Office for National Statistics in England from 1993 to 1999 showed that deaths from alcohol-related illness increased significantly, with a particularly sharp rise in alcoholic liver damage in men aged 40 to 49. This increase could not be solely explained by a change in drinking habits. The study suggested that the rapid progression of alcoholic cirrhosis in individuals with hepatitis C virus (HCV) infection, likely acquired through intravenous drug use in the 1970s and 1980s, was a contributing factor. Alcohol consumption in HCV positive individuals is linked with a poor outcome, as it accelerates the progression to cirrhosis (PUBMED:12195003). Additionally, hospitalization data from Scotland indicated that injecting drug users (IDUs), who represent a significant portion of the HCV-diagnosed population, had an increased risk of hospital admission for an alcohol-related cause after being diagnosed with HCV. This suggests that the combination of HCV infection and excessive alcohol intake synergistically increases the risk of developing cirrhosis (PUBMED:20472416). Moreover, a study on HIV-infected patients with alcohol problems found that HCV infection was independently associated with both all-cause and liver-related mortality, even when accounting for alcohol and other drug use. This highlights the significant impact of HCV infection on mortality in populations with alcohol problems (PUBMED:24112091).
In summary, the evidence from these studies indicates that hepatitis C infection is indeed a factor in the increase of alcohol-related deaths, particularly due to its role in accelerating the progression of liver disease in individuals who consume alcohol.
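Several of the studies cited in this answer report adjusted hazard ratios from Cox proportional hazards models. The sketch below shows what such a model looks like with the Python lifelines library; the data frame, covariates, and values are invented stand-ins, and the recurrent-events analysis used in the Scottish study would additionally require a counting-process data layout rather than this single-event-per-subject setup.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical subject-level data: follow-up time (days), whether an
# alcohol-related admission occurred, HCV diagnosis status, and confounders.
df = pd.DataFrame({
    "followup_days": [120, 430, 365, 90, 800, 55, 610, 240],
    "admitted": [1, 0, 1, 1, 0, 1, 0, 1],
    "hcv_diagnosed": [1, 0, 1, 0, 0, 1, 0, 1],
    "age": [34, 29, 41, 37, 45, 31, 38, 27],
    "deprivation_quintile": [4, 2, 5, 3, 1, 4, 2, 5],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_days", event_col="admitted")
cph.print_summary()  # exp(coef) for hcv_diagnosed is the adjusted hazard ratio
```

With real data, the exponentiated coefficient on the exposure of interest plays the role of the 1.5-fold adjusted relative risk and the HR = 2.55 reported above.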
Instruction: Microsimulation model of CT versus MRI surveillance of Bosniak IIF renal cystic lesions: should effects of radiation exposure affect selection of imaging strategy? Abstracts: abstract_id: PUBMED:25415728 Microsimulation model of CT versus MRI surveillance of Bosniak IIF renal cystic lesions: should effects of radiation exposure affect selection of imaging strategy? Objective: The objective of this study was to quantify the effects of radiation-induced cancer risks in patients with Bosniak category IIF lesions undergoing CT versus MRI surveillance. Materials And Methods: We developed a Markov-Monte Carlo model to determine life expectancy losses attributable to radiation-induced cancers in hypothetical patients undergoing CT versus MRI surveillance of Bosniak IIF lesions. Our model tracked hypothetical patients as they underwent imaging surveillance for up to 5 years, accounting for potential lesion progression and treatment. Estimates of radiation-induced cancer mortality were generated using a published organ-specific radiation-risk model based on Biological Effects of Ionizing Radiation (BEIR) VII methods. The model also incorporated surgical mortality and renal cancer-specific mortality. Our primary outcome was life expectancy loss attributable to radiation-induced cancers. A sensitivity analysis was performed to assess the stability of the results with variability in key parameters. Results: The mean number of examinations per patient was 6.3. In the base case, assuming 13 mSv per multiphase CT examination, 64-year-old men experienced an average life expectancy decrease of 5.5 days attributable to radiation-induced cancers from CT; 64-year-old women experienced a corresponding life expectancy loss of 6.9 days. The results were most sensitive to patient age: life expectancy loss attributable to radiation-induced cancers increased to 21.6 days in 20-year-old women and 20.0 days in 20-year-old men. Varied assumptions of each modality's (CT vs MRI) depiction of lesion complexity also impacted life expectancy losses. Conclusion: Microsimulation modeling shows that radiation-induced cancer risks from CT surveillance for Bosniak IIF lesions minimally affect life expectancy. However, as progressively younger patients are considered, increasing radiation risks merit stronger consideration of MRI surveillance. abstract_id: PUBMED:35037159 Assessment of the workload and financial burden of Bosniak IIF renal cyst surveillance in a tertiary referral hospital. Background: The Bosniak classification is a CT classification which stratifies renal cysts based on imaging appearances and therefore their associated risk of malignancy. Bosniak IIF cysts are renal cysts with complex features that therefore require surveillance. Aims: The aim of this study is to assess the economic and workload burden of diagnosing and following up Bosniak IIF cysts on the urology service in a tertiary hospital in the West of Ireland. Methods: All patients with a diagnosis of Bosniak IIF renal cysts attending our urology service between 1st of January 2012 and 31st December 2020 were analysed. The following data were collected: number and modality of follow-up scans, number of MDT discussions, number and type of outpatient appointments, surgical intervention, and length of follow-up. Financial data were provided by the hospital finance department. Results: One hundred and sixty-two patients were included. Total cost of follow-up was €164,056, costing €1,012.7 per patient. Cost of outpatient visits was €77,850.
Follow-up length ranged from 1 to 109 months, with a median follow-up of 17.5 months. Overall cost of imaging was €74,518. There were a total of 80 MDT discussions at an overall cost of €11,688. Conclusions: This study demonstrates that surveillance of patients with Bosniak IIF renal cysts represents a significant burden upon both radiology and urology services. Surveillance for these patients could be streamlined in the future through a number of initiatives such as virtual OPDs and dedicated MDTs. abstract_id: PUBMED:32755181 Bosniak Classification of Cystic Renal Masses Version 2019: Comparison of Categorization Using CT and MRI. BACKGROUND. Bosniak classification version 2019 proposed refinements for cystic renal mass characterization and now formally incorporates MRI, which may improve concordance with CT. OBJECTIVE. The purpose of this study is to compare concordance of CT and MRI in evaluation of cystic renal masses using Bosniak classification version 2019. METHODS. Three abdominal radiologists retrospectively reviewed 68 consecutive cystic renal masses from 45 patients assessed with both CT and MRI renal mass protocols within a year between 2005 and 2019. CT and MRI were reviewed independently and in separate sessions, using both the original and 2019 versions of the Bosniak classification system. RESULTS. Using Bosniak classification version 2019, cystic renal masses were classified into 12 category I, 19 category II, 13 category IIF, 4 category III, and 20 category IV by CT and 8 category I, 15 category II, 23 category IIF, 9 category III, and 13 category IV by MRI. Among individual features, MRI showed more septa (p < 0.001, p = 0.046, p = 0.005; McNemar test) for all three radiologists, although both CT and MRI showed a similar number of protrusions (p = 0.823, p = 1.0, p = 0.302) and similar maximal septa and wall thickness (p = 1.0, p = 1.0, p = 0.145). Of the discordant cases with version 2019, MRI led to a higher categorization in 12 masses. Upgrades were most commonly due to protrusions identified only on MRI (n = 4), an increased number of septa (n = 3), and a new criterion, heterogeneous T1-weighted hyperintensity (n = 3). Neither modality was more likely to lead to a categorization change for either version 2019 (p = 0.502; McNemar test) or the original (p = 0.823) Bosniak classification system. Overall interrater agreement was substantial for both CT (κ = 0.745) and MRI (κ = 0.655) using version 2019 and was slightly higher than that of the original system for CT (κ = 0.707) and MRI (κ = 0.623). CONCLUSION. CT and MRI were concordant in the majority of cases using Bosniak classification version 2019, and category changes by modality were not statistically significant. Interrater agreements were substantial for both CT and MRI. CLINICAL IMPACT. Bosniak classification version 2019 as applied to cystic renal masses has substantial interrater agreement and does not lead to systematic category upgrades with either CT or MRI. abstract_id: PUBMED:36416398 Outcomes of Bosniak Classification Version 2019 Class IIF Cystic Renal Masses at Imaging Surveillance. BACKGROUND. Bosniak classification system version 2019 (v2019) recommends that class IIF masses undergo follow-up imaging at 6 months, 12 months, and then annually for 5 years. The frequency and timing of upgrade on follow-up imaging are incompletely understood. OBJECTIVE.
The purpose of this article is to describe the temporal evolution of Bosniak v2019 class IIF cystic renal masses, with attention to outcomes at 6-month follow-up, the time to class upgrade, and malignant histologic diagnoses. METHODS. This retrospective study included 219 patients (91 women, 128 men; median age, 72 years) with 246 localized class IIF masses from January 2005 to June 2022. Patients underwent both a baseline and at least one follow-up renal-mass protocol contrast-enhanced CT or MRI examination. Two radiologists evaluated masses at all follow-up time points to categorize them as downgraded (class I or II), stable (localized class IIF), or upgraded (class III or IV, solid, or category T3a, N1, or M1 or higher disease); a third radiologist resolved discrepancies. Incidence rate of upgrade was determined. Histopathologic outcomes were assessed for resected masses. RESULTS. Median follow-up was 28.4 months (IQR, 13.7-59.4 months). At 6-month follow-up, five (2%) masses were downgraded, 241 (98%) were stable, and none were upgraded. On the basis of final follow-up, 14 (6%) masses were downgraded, 223 (91%) were stable, and nine (4%) were upgraded. All upgrade events entailed a class increase to III (n = 7) or IV (n = 2); no mass became solid or developed T3, N1, or M1 disease. Among the nine upgraded masses, median time to upgrade was 53.5 months (IQR, 23.2-63.7 months). Incidence rate of upgrade was 3.006 per 100,000 person-days (95% CI, 1.466-5.516). Ten masses were resected; histopathology was benign in six and malignant in four. Of the four malignant masses, one was upgraded to class III after 15 months of preoperative follow-up imaging, and three remained class IIF on preoperative follow-up imaging. No resected malignant mass developed postoperative recurrence. CONCLUSION. Bosniak v2019 class IIF masses are unlikely to represent aggressive malignancy; only 4% were upgraded over time and never on initial 6-month follow-up. CLINICAL IMPACT. The currently recommended initial 6-month follow-up imaging examination for class IIF masses is of questionable clinical utility. abstract_id: PUBMED:36777990 Cost-effectiveness of Contrast-Enhanced Ultrasound for Diagnosis and Active Surveillance of Complex Cystic Renal Lesions. Purpose: To determine the cost-effectiveness of contrast-enhanced ultrasound (ceUS) for the active surveillance of complex renal masses compared to the more established imaging modalities of CT and MRI. Methods: A decision-analytic Markov state microsimulation model was constructed in TreeAge Pro. We simulated independent cohorts of 100,000 60-year-old individuals with either a Bosniak IIF or Bosniak III complex renal mass who were followed for 10 years or until death. The model compared three imaging strategies: (1) ceUS, (2) contrast-enhanced magnetic resonance imaging (ceMRI), and (3) contrast-enhanced computed tomography (ceCT) for active surveillance of a complex renal mass. Results: For 60-year-old patients with either Bosniak IIF or III renal masses, ceUS was the most cost-effective strategy even after varying rates of active surveillance from 10% to 100%. Conclusion: ceUS is a viable and cost-effective option in the active surveillance of Bosniak class IIF and III renal cysts. Even after varying the rates of active surveillance usage, ceUS was robust and remained the most dominant strategy.
For patients with impaired kidney function, ceUS can be a safer alternative to contrast-enhanced CT or MRI in the management of Bosniak III renal cysts. abstract_id: PUBMED:33009722 Update on MRI of Cystic Renal Masses Including Bosniak Version 2019. Incidental cystic renal masses are common, usually benign, and almost always indolent. Since 1986, the Bosniak classification has been used to express the risk of malignancy in a cystic renal mass detected at imaging. Historically, magnetic resonance imaging (MRI) was not included in that classification. The proposed Bosniak v.2019 update has formally incorporated MRI, included definitions of imaging terms designed to improve interobserver agreement and specificity for malignancy, and incorporated a variety of masses that were incompletely defined or not included in the original classification. For example, at unenhanced MRI, homogeneous masses markedly hyperintense at T2-weighted imaging (similar to cerebrospinal fluid) and homogeneous masses markedly hyperintense at fat-suppressed T1-weighted imaging (approximately ≥2.5 times more intense than adjacent renal parenchyma) are classified as Bosniak II and may be safely ignored, even when they have not been imaged with a complete renal mass MRI protocol. MRI has specific advantages and is recommended to evaluate masses that at computed tomography (CT) 1) have abundant thick or nodular calcifications; 2) are homogeneous, hyperattenuating, ≥3 cm, and nonenhancing; or 3) are heterogeneous and nonenhancing. Although MRI is generally excellent for characterizing cystic renal masses, there are unique weaknesses of MRI that bear consideration. These details and others related to MRI of cystic renal masses are described in this review, with an emphasis on Bosniak v.2019. A website (https://bosniak-calculator.herokuapp.com/) and mobile phone apps named "Bosniak Calculator" have been developed for ease of assignment of Bosniak classes. abstract_id: PUBMED:38199901 Bosniak Classification of Cystic Renal Masses: Looking Back, Looking Forward. Rationale And Objectives: According to the 2019 update of the Bosniak classification, the main imaging features that need to be evaluated to achieve a correct characterization of renal cystic masses include the thickness of walls and septa, the number of septa, the appearance of walls and septa, the attenuation/intensity on non-contrast CT/MRI, and the presence of unequivocally perceived or measurable enhancement of walls and septa. Despite the improvement deriving from a quantitative evaluation of imaging features, certain limitations seem to persist and some possible scenarios that can be encountered in clinical practice are still missing. Materials And Methods: A deep analysis of the 2019 update of the Bosniak classification was performed. Results: The most notable potential flaws concern: (1) the quantitative measurement of the walls and septa; (2) the fact that walls and septa > 2 mm are always referred to as "enhancing", without considering the alternative scenario; (3) the descriptions of some class II masses, which partially overlap with each other and with the definition of class I masses; and (4) the morphological variation of cystic masses over time, which is not considered. Conclusion: The present paper analyzes in detail the limitations of the 2019 Bosniak classification to improve this important tool and facilitate its use in daily radiological practice.
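Two statistics recur in the reader studies above: Cohen's kappa for interreader (or intermodality) agreement and McNemar's test for paired categorical differences. A minimal sketch of both computations, on made-up reader assignments, is shown below using scikit-learn and statsmodels; note that the MRI study that follows reports weighted kappa, which cohen_kappa_score supports via its weights argument.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical Bosniak categories assigned to the same eight masses by two readers
reader_a = ["II", "IIF", "III", "IV", "II", "IIF", "IV", "I"]
reader_b = ["II", "III", "III", "IV", "IIF", "IIF", "IV", "I"]
print(f"Cohen's kappa: {cohen_kappa_score(reader_a, reader_b):.3f}")

# Paired CT-vs-MRI 2x2 table for one binary feature (e.g., septa seen: yes/no);
# rows = CT yes/no, columns = MRI yes/no. The off-diagonal (discordant) cells
# are what drive McNemar's test.
table = np.array([[30, 3],
                  [12, 23]])
print(f"McNemar p-value: {mcnemar(table, exact=True).pvalue:.4f}")
```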
abstract_id: PUBMED:32515608 Bosniak Classification Version 2019 of Cystic Renal Masses Assessed With MRI. OBJECTIVE. The purpose of this study was to determine how use of Bosniak classification version 2019 affects categorization and overall accuracy of MRI evaluation of cystic renal masses with tissue pathologic analysis as the reference standard. MATERIALS AND METHODS. MR images of 50 consecutively registered patients with tissue pathologic results from 2005 to 2019 were retrospectively reviewed by two abdominal radiologists. Each radiologist independently assigned a category based on the original and Bosniak classification version 2019 systems. Interreader agreements (kappa statistic) for both were calculated, and consensus reading was performed. Tissue pathologic analysis was used as the reference standard to determine whether a lesion was benign or renal cell carcinoma. RESULTS. Fifty-nine cystic renal masses were characterized as 38 renal cell carcinomas and 21 benign lesions on the basis of the results of tissue pathologic analysis. By consensus, according to the original Bosniak criteria, the renal masses were classified into 3 category I, 5 category II, 4 category IIF, 25 category III, and 22 category IV lesions. By consensus, according to the version 2019 criteria, the renal masses were classified into 3 category I, 2 category II, 12 category IIF, 18 category III, and 24 category IV lesions. Overall sensitivity and specificity for identifying renal cell carcinoma were 95% and 81%, respectively, with the original classification system and 100% and 86%, respectively, with version 2019. Weighted interreader agreement was moderate for both the original system (κ = 0.57) and version 2019 (κ = 0.55). CONCLUSION. Use of the Bosniak classification version 2019 improves sensitivity and specificity for malignancy among cystic renal masses characterized with MRI. Most lesions that changed categories were reclassified as Bosniak category IIF. abstract_id: PUBMED:29071140 A practical guide and decision-making protocol for the management of complex renal cystic masses. Objectives: To analyse the management, pathology and outcomes of complex renal cystic masses (CRCM) and to develop a decision-making tool for daily clinical care using the Bosniak classification system for CRCM. Patients And Methods: A comprehensive dataset of 185 patients with 188 CRCM and a minimum follow-up of 3 years was analysed for management, pathology and outcomes. Results: We analysed 35 Bosniak II, 34 Bosniak IIF, 58 Bosniak III, and 61 Bosniak IV lesions. The overall incidence of renal cell carcinoma was 8.6%, 29.4%, 62.1%, and 78.7% for the four categories, respectively. Based on our surveillance strategy for Bosniak IIF masses, we recommend computed tomography (CT)/magnetic resonance imaging (MRI) every 2 years after the initial examination. We also recommend performing one MRI (as an adjunct to CT) during the early follow-up period (<4 years). The use of MRI correlation for differential diagnostic purposes has proven useful for marginal Bosniak II, IIF and III cases. Conclusions: From our data, we have created a decision-making protocol to guide urologists in planning a safe and effective diagnostic and treatment strategy for CRCM. The Bosniak classification is a useful tool for clinical decision-making. Uncertainties still remain for Bosniak IIF and III lesions. Our protocol shows that individualised decision-making is necessary in a significant proportion of CRCM.
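The class IIF surveillance study cited earlier expresses its main result as an incidence rate of upgrade with a 95% CI. One common way to compute such an interval is the exact (Garwood) method based on the chi-squared distribution, sketched below; the event count and person-time are round placeholders, and the paper's own interval may rest on a different method.

```python
from scipy.stats import chi2

def poisson_rate_ci(events: int, person_time: float, alpha: float = 0.05):
    """Exact (Garwood) confidence interval for a Poisson incidence rate."""
    lower = chi2.ppf(alpha / 2, 2 * events) / 2 if events > 0 else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2
    return lower / person_time, upper / person_time

events, person_time = 9, 300_000  # placeholder person-days of follow-up
lo, hi = poisson_rate_ci(events, person_time)
rate = events / person_time
print(f"{rate * 1e5:.3f} per 100,000 person-days "
      f"(95% CI {lo * 1e5:.3f}-{hi * 1e5:.3f})")
```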
abstract_id: PUBMED:29383479 Magnetic resonance imaging as an adjunct diagnostic tool in computed tomography defined Bosniak IIF-III renal cysts: a multicenter study. Introduction: CT imaging is the standard examination for renal cystic lesions and defines the Bosniak category, which dictates further management. Given that Bosniak II/IIF/III renal cystic lesions can potentially harbor renal cell carcinoma (RCC), additional diagnostic modalities may be required in management decision making. Aim: To determine the value of additional magnetic resonance imaging in CT-defined Bosniak IIF-III renal cystic lesions. Materials And Methods: This is a multicenter retrospective study of 46 consecutive patients, diagnosed with cystic renal lesions between 2009 and 2016. The inclusion criteria were: (1) cystic renal lesion classified as Bosniak IIF-III on CT, (2) a subsequent MRI examination, and (3) documented outcome via surgery for cystic renal mass or follow-up. Results: 46 patients (35 males, 11 females) were included. The mean size of the cystic lesion was 3.92 cm (0.7-10 cm). According to the CT findings, Bosniak IIF and III were found in 12 (26.1%) and 34 (73.9%) cases, respectively. Reclassification of Bosniak category was done after MRI examination in 31 cases (67.4%). Lesions were upgraded to a higher category in 27 cases (58.7%) and downgraded to a lower category in 4 cases (8.7%). As a result, a significant therapeutic management change was made in 12/31 patients (38.7%), of whom 8 underwent subsequent surgery. Conclusion: MRI may reduce the use of the Bosniak IIF category (in comparison with CT), which has a direct impact on therapeutic management (surgery vs. surveillance) in a significant proportion of patients. Answer: The microsimulation model developed to compare CT versus MRI surveillance of Bosniak IIF renal cystic lesions suggests that radiation-induced cancer risks from CT surveillance minimally affect life expectancy. However, the study found that the life expectancy loss attributable to radiation-induced cancers from CT surveillance was greater for younger patients, with 20-year-old women and men experiencing an average life expectancy decrease of 21.6 days and 20.0 days, respectively. This indicates that as the age of patients decreases, the radiation risks increase, which merits stronger consideration of MRI surveillance for younger patients (PUBMED:25415728). The study's primary outcome was life expectancy loss attributable to radiation-induced cancers, and the results were most sensitive to patient age. The mean number of examinations per patient was 6.3, and the base case assumed 13 mSv per multiphase CT examination. For 64-year-old men and women, the average life expectancy decrease was 5.5 days and 6.9 days, respectively. The study also performed a sensitivity analysis to assess the stability of the results with variability in key parameters, including varied assumptions of each modality's (CT vs MRI) depiction of lesion complexity, which also impacted life expectancy losses (PUBMED:25415728).
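To make the structure of a model like this concrete, the sketch below collapses the published Markov-Monte Carlo model into a toy Monte Carlo estimate of radiation-attributable life expectancy loss. Every numeric input is an assumption chosen for illustration: the dose matches the 13 mSv base case, but the risk coefficient and years-lost figure are placeholders, not BEIR VII estimates, and no attempt is made to reproduce the study's results.

```python
import random

N_PATIENTS = 100_000
N_EXAMS = 6.3                        # mean surveillance examinations per patient
DOSE_PER_CT_MSV = 13.0               # multiphase CT dose assumed in the base case
EXCESS_DEATH_RISK_PER_MSV = 2e-5     # placeholder lifetime risk coefficient
YEARS_LOST_PER_INDUCED_DEATH = 13.0  # placeholder

def mean_life_loss_days(use_ct: bool, seed: int = 0) -> float:
    """Monte Carlo estimate of days of life expectancy lost to induced cancers."""
    rng = random.Random(seed)
    dose = N_EXAMS * (DOSE_PER_CT_MSV if use_ct else 0.0)  # MRI treated as dose-free
    p_induced_death = dose * EXCESS_DEATH_RISK_PER_MSV
    deaths = sum(rng.random() < p_induced_death for _ in range(N_PATIENTS))
    return deaths / N_PATIENTS * YEARS_LOST_PER_INDUCED_DEATH * 365.25

print(f"CT surveillance:  {mean_life_loss_days(use_ct=True):.1f} days lost")
print(f"MRI surveillance: {mean_life_loss_days(use_ct=False):.1f} days lost")
```

A real implementation would add Markov states for lesion progression, surgery, and competing mortality, plus age- and sex-specific risk coefficients; the age dependence of those coefficients is what drives the larger losses in younger patients reported above.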
Instruction: Is fibromyalgia-related oxidative stress implicated in the decline of physical and mental health status? Abstracts: abstract_id: PUBMED:24373370 Is fibromyalgia-related oxidative stress implicated in the decline of physical and mental health status? Objectives: Fibromyalgia (FM) is a form of non-articular rheumatism characterised by chronic widespread musculoskeletal aching. Although some works have investigated the possible role of oxidative stress in the pathophysiology of FM, none has analysed a significant number of oxidative markers in the same patients. Consequently, we have performed an exhaustive study of the oxidative/antioxidative status in FM patients and healthy controls, as well as the relationship with FM clinical parameters. Methods: In 45 female patients and 25 age-matched controls, we investigated the oxidative (lipid and protein peroxidation, and oxidative DNA damage) and antioxidative status (total antioxidant capacity (TAC), and antioxidant enzyme activities and compounds). Functional capacity and musculoskeletal pain were assessed by Fibromyalgia Impact Questionnaire (FIQ) and Visual Analogue Scale (VAS), respectively. The physical (PCS-12) and mental (MCS-12) health status was evaluated by SF-12. Results: A significant increase in oxidative DNA damage and protein carbonyl content was found in FM patients vs. controls, as well as in antioxidant compounds such as copper and ceruloplasmin. Patients had diminished levels of TAC and zinc. Enzyme activities of superoxide dismutase, glutathione peroxidase, and catalase were lower in FM patients. Significant correlations were observed in patients between oxidative DNA damage and MCS-12, and zinc and PCS-12. Conclusions: These findings reveal an imbalance between oxidants and antioxidants in FM patients. The lower antioxidant enzyme activities may lead to oxidative stress through the oxidation of DNA and proteins, which may affect the health status of FM patients. abstract_id: PUBMED:27443526 Extra Virgin Olive Oil Improves Oxidative Stress, Functional Capacity, and Health-Related Psychological Status in Patients With Fibromyalgia: A Preliminary Study. Objectives: Fibromyalgia (FM) is a chronic disease that imposes physical, psychological, and social limitations. We have reported that oxidative stress may play a role in the pathophysiology of FM. Olive oil has been shown to be effective treatment against the oxidative stress associated with several diseases. The aim of this study was to investigate the effect of olive oil on oxidative stress and health-related parameters in FM. Methods: This preliminary study was performed on blood samples of 23 women diagnosed with FM who consumed 50 ml of organic olive oil daily for 3 weeks. Subjects were randomized into two groups: one ingested extra virgin olive oil (EVOO) and the other refined olive oil (ROO), which have different antioxidant content. The patients' oxidative (lipid, protein, and DNA oxidation) and antioxidative (antioxidant enzyme activities and compounds) profiles were examined before and after the treatment period. Functional capacity and physical and mental health status were assessed using the Fibromyalgia Impact Questionnaire (FIQ) and the Physical Component (PCS-12) and Mental Component Summaries (MCS-12) of the Short Form-12 Health Survey, respectively. Results: Significant differences were found in pre-post change between the EVOO and ROO groups for protein carbonyls, lipid peroxidation, and FIQ and MCS-12 scores. 
Differences between groups approached statistical significance for oxidative DNA damage and levels of the antioxidant compound zinc. Conclusions: EVOO may protect women with FM against oxidative stress in addition to improving functional capacity and health-related psychological status. Findings suggest that olive oil may be a valuable therapeutic support in FM. abstract_id: PUBMED:27650725 Smoking, depression, & stress: predictors of fibromyalgia health status. The purpose of the present study was to examine the effects of smoking, stress, and depression on fibromyalgia health status. Fibromyalgia syndrome (FMS) is a chronic musculoskeletal pain condition that negatively affects health status. Health status is not only affected by the constellation of physical symptoms, but also by mood symptoms, stress levels, and patient behaviors (e.g., smoking). Participants were 491 individuals with a physician's diagnosis of FMS. They completed self-report measures of their current levels of depression, stress, the number of cigarettes smoked per day, and health status. A linear regression analysis was conducted to determine whether these measures predicted FMS health status. All three measures predicted worse health status, together accounting for 51.5% of the variance in health status. It is therefore important to evaluate and treat more than just the physical symptoms of FMS. Attention should also be paid to mental health status and to engagement in unhealthy behaviors in order to reduce their effects on FMS health status. Future researchers should design and evaluate interventions that target these modifiable risk factors to determine the extent to which they could improve health outcomes. abstract_id: PUBMED:22248994 Relationship of weight status with mental and physical health in female fibromyalgia patients. Objective: To analyze the association of weight status with anxiety, depression, quality of life and physical fitness in fibromyalgia (FM) patients. Methods: The sample comprised 175 Spanish female FM patients (51.2 ± 7 years). We assessed quality of life by means of the Short-Form-36 Health Survey (SF36) and anxiety and depression by means of the Hospital Anxiety and Depression Scale (HADS). We used standardized field-based fitness tests to assess cardiorespiratory fitness, muscular strength, flexibility, agility, and static and dynamic balance. BMI was calculated and categorized using the international criteria. Results: 33% of the sample was normal-weight, 35% overweight and 33% obese. HADS-anxiety and HADS-depression levels increased across the weight status categories. Obese patients had higher anxiety and depression levels compared to normal-weight patients (p < 0.05), whereas no differences were observed between overweight and obese patients. The physical functioning, bodily pain, general health (all p < 0.01) and mental health (p < 0.05) subscales from the SF36 were worse across the weight status categories. Likewise, levels of cardiorespiratory fitness, dynamic balance/motor agility (both p < 0.05) and upper-body flexibility (p < 0.001) decreased as the weight status increased. Pairwise comparisons showed significant differences mainly between the normal-weight versus obese groups. Conclusion: Obese female FM patients displayed higher levels of anxiety and depression and worse quality of life, cardiorespiratory fitness, dynamic balance/motor agility and upper-body flexibility than their normal-weight peers.
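The smoking/depression/stress study above fits a linear regression of health status on three predictors. A minimal statsmodels sketch on fabricated data is shown below; the variable names and values are invented, and the fitted model's R² plays the role of the reported 51.5% of variance explained.

```python
import pandas as pd
import statsmodels.api as sm

# Fabricated stand-in for the survey measures (the study itself had n = 491)
df = pd.DataFrame({
    "cigs_per_day":  [0, 10, 20, 0, 5, 15, 0, 30],
    "depression":    [4, 12, 18, 6, 9, 15, 3, 20],
    "stress":        [10, 22, 30, 12, 18, 25, 8, 33],
    "health_status": [70, 55, 38, 66, 58, 45, 75, 30],  # higher = better
})

X = sm.add_constant(df[["cigs_per_day", "depression", "stress"]])
fit = sm.OLS(df["health_status"], X).fit()
print(fit.rsquared)   # proportion of variance explained by the three predictors
print(fit.summary())  # coefficients show each predictor's association
```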
abstract_id: PUBMED:26490627 Post-traumatic stress disorder symptoms may explain poor mental health in patients with fibromyalgia. Symptoms of post-traumatic stress disorder are common in fibromyalgia patients. This study compared post-traumatic stress disorder symptoms in fibromyalgia patients and healthy controls and determined whether patient-control differences in post-traumatic stress disorder symptoms mediated differences in mental health. In all, 30 patients and 30 healthy controls completed questionnaires assessing symptoms of post-traumatic stress disorder and mental health. Fibromyalgia patients had more severe post-traumatic stress disorder and mental health symptoms than controls. Patient-control differences in mental health symptoms were fully or partially mediated by differences in post-traumatic stress disorder symptoms. Healthcare providers should understand the role of trauma, as management of trauma symptoms may be one strategy for improving mental health. abstract_id: PUBMED:23139343 Self-reported physical health, mental health, and comorbid diseases among women with irritable bowel syndrome, fibromyalgia, or both compared with healthy control respondents. Context: Physicians often encounter patients with functional pain disorders such as irritable bowel syndrome (IBS), fibromyalgia (FM), and their co-occurrence. Although these diseases are diagnosed exclusively by patients' report of symptoms, there are few comparative studies about patients' perceptions of these diseases. Objective: To compare perceptions of these conditions among 4 groups (3 clinical groups of older women with IBS, FM, or both disorders [IBS plus FM] and 1 similarly aged control group of women with no IBS or FM) using their responses to survey questions about stressful life events, general physical and mental health, and general medical, pain, and psychiatric comorbidities. Method: Using data from the Biopsychosocial Religion and Health Study survey, responses from women were compared regarding a number of variables. To compare stress-related and physical-mental health profiles across the 4 groups, 1-way analyses of variance and χ² tests (with Tukey-Kramer and Tukey post hoc tests, respectively) were used, with α set to .05. Results: The present study comprised 3811 women. Participants in the control group, the IBS group, the FM group, and the IBS plus FM group numbered 3213 (84.3%), 366 (9.6%), 161 (4.2%), and 71 (1.9%), with a mean (standard deviation) age of 62.4 (13.6), 64.9 (13.7), 63.2 (10.8), and 61.1 (10.9) years, respectively. In general, participants in the control group reported fewer lifetime traumatic and major life stressors, better physical and mental health, and fewer comorbidities than respondents in the 3 clinical groups, and these differences were both statistically significant and substantial. Respondents with IBS reported fewer traumatic and major life stressors and better health (ratings and comorbidity data) than respondents with FM or respondents with IBS plus FM. Overall, respondents with both diseases reported the worst stressors and physical-mental health profiles and reported more diagnosed medical, pain, and psychiatric comorbidities. Conclusion: The results revealed statistically significant, relatively large differences in perceptions of quality of life measures and health profiles among the respondents in the control group and the 3 clinical groups. abstract_id: PUBMED:37476142 Meditation and Its Mental and Physical Health Benefits in 2023.
This article discusses the power of meditation and how beneficial it is for the body. Magnetic resonance imaging (MRI) studies have shown many positive brain changes and improvements in several brain functions. Meditation has several benefits for the immune system and inflammatory processes, for example by decreasing cytokine levels; reduced telomere shortening has also been linked to healthy aging. Regarding physical health, meditation has been beneficial in various multi-factorial diseases like diabetes, hypertension, and fibromyalgia. It has also helped bring down blood cholesterol levels and increase high-density lipoprotein (HDL) levels. Improvement was also seen in systolic and diastolic blood pressure. Mental health is another aspect influenced by meditation, as the positive emotion brought about by meditation helps address various mental problems like social anxiety disorder, post-traumatic stress disorder (PTSD), anxiety, and depression. Overall, it seems to have some impact in all health areas. However, the magnitude of its effect is not known. More diverse and detailed studies should yield more beneficial clinical outcomes. abstract_id: PUBMED:22324423 Association of employment and working conditions with physical and mental health symptoms for people with fibromyalgia. Purpose: This study examines physical and mental health symptoms among people with fibromyalgia (FM) by employment status and working conditions. Method: Secondary data analysis of the 2007 National Fibromyalgia Association Questionnaire study resulted in employment and symptom information for 1702 people of working age with FM. In this cross-sectional internet study, six factors of symptom clusters (physical, mental health, sleeping, concentration, musculoskeletal, support) were seen in the data. Linear regression models used employment, age, income, gender, and education to predict symptom clusters. Among those employed, working conditions were also associated with symptom severity. Results: In the predominantly female sample, 51% were working. Of these, 70% worked over 30 hours/week and half had flexible hours. Employment, higher income, and education were strongly associated with fewer symptoms. Working conditions, including the level of physical and mental exertion required on the job as well as coworkers' understanding of FM, were related to symptoms, particularly physical and mental health symptoms. Many participants reported modifying their work environment (66%) or changing occupations (33%) due to FM. Conclusions: Work modifications could allow more people with FM to remain employed and alleviate symptoms. Persons with FM should be counseled to consider what elements of their work may lead to symptom exacerbation. abstract_id: PUBMED:20524385 The relationship between problem gambling and mental and physical health correlates among a nationally representative sample of Canadian women. Objectives: Gambling has become an increasingly common activity among women since the widespread growth of the gambling industry. Currently, our knowledge of the relationship between problem gambling among women and mental and physical correlates is limited. Therefore, important relationships between problem gambling and health and functioning, mental disorders, physical health conditions, and help-seeking behaviours among women were examined using a nationally representative Canadian sample. Methods: Data were from the nationally representative Canadian Community Health Survey Cycle 1.2 (CCHS 1.2; n = 10,056 women aged 15 years and older; data collected in 2002).
The statistical analysis included binary logistic regression, multinomial logistic regression, and linear regression models. Results: Past 12-month problem gambling was associated with a significantly higher probability of current lower general health, suicidal ideation and attempts, decreased psychological well-being, increased distress, depression, mania, panic attacks, social phobia, agoraphobia, alcohol dependence, any mental disorder, comorbidity of mental disorders, chronic bronchitis, fibromyalgia, migraine headaches, help-seeking from a professional, attending a self-help group, and calling a telephone help line (odds ratios ranged from 1.5 to 8.2). Conclusions: Problem gambling was associated with a broad range of negative health correlates among women. Problem gambling is an important public health concern. These findings can be used to inform healthy public policies on gambling. abstract_id: PUBMED:37721351 Oxidative stress may be a contributing factor in fibromyalgia patients' pain mechanisms. Objective: The pathophysiology of fibromyalgia (FM), a continuously painful syndrome with no known origin, has been related to mitochondrial dysfunction, oxidative stress, and inflammation. Recent studies have shown that FM may be associated with an oxidative balance disorder. The objective of this study was to measure the levels of oxidative stress in FM patients and try to understand the association between FM and free radicals. Methods: This study was performed on 100 volunteers admitted to the University of Health Sciences, Sultan 2. Abdulhamid Han Health Application and Research Center Physical Therapy and Rehabilitation Clinic, including 50 healthy controls and 50 patients with FM. To analyze oxidative stress biomarkers, total oxidant status (TOS) and total antioxidant status (TAS) levels were measured. Total thiol (TT) and native thiol (NT) concentrations were measured to determine the relationship between thiol groups. Disulfide (DIS) and oxidative stress index (OSI) were calculated with mathematical formulas. Results: While TOS and OSI levels were statistically higher in FM patients, TAS levels were significantly lower compared to the healthy control group (p<0.001). In comparison to the healthy control group, FM patients had considerably decreased TT and NT levels. DIS levels were significantly higher in FM patients than in controls (p<0.001). Conclusions: Reactive oxygen species have several negative impacts on the human body. Based on the measurements we analyzed, the relationship between FM and oxidative stress should be studied in terms of disease progression and may help improve the treatment process. Answer: Yes, fibromyalgia-related oxidative stress is implicated in the decline of physical and mental health status. A study found a significant increase in oxidative DNA damage and protein carbonyl content in fibromyalgia (FM) patients compared to controls, as well as diminished levels of total antioxidant capacity (TAC) and zinc. Lower antioxidant enzyme activities in FM patients may lead to oxidative stress through the oxidation of DNA and proteins, which could affect the health status of FM patients. Significant correlations were observed between oxidative DNA damage and mental health status, and between zinc and physical health status (PUBMED:24373370).
Additionally, extra virgin olive oil, which is high in antioxidants, was shown to improve oxidative stress markers, functional capacity, and health-related psychological status in FM patients, suggesting that oxidative stress may play a role in the pathophysiology of FM and that antioxidants may have therapeutic benefits (PUBMED:27443526). Furthermore, oxidative stress may be a contributing factor in the pain mechanisms of FM patients, as evidenced by higher levels of total oxidant status (TOS) and oxidative stress index (OSI), and lower levels of total antioxidant status (TAS) in FM patients compared to healthy controls. The relationship between FM and oxidative stress should be studied in terms of disease progression and may help improve the treatment process (PUBMED:37721351).
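A note on the derived indices in the thiol/disulfide abstract above: it states that disulfide (DIS) and the oxidative stress index (OSI) were "calculated with mathematical formulas" without giving them. The sketch below (Python) shows the formulas conventionally used with these assays; the exact conventions (unit scaling, percentage form) and the example values are assumptions based on common practice in this literature, not details reported in the abstract.

    # Minimal sketch, assuming the conventional TOS/TAS/thiol formulas;
    # the abstract does not state its exact formulas or unit conventions.

    def oxidative_stress_index(tos_umol_l, tas_mmol_l):
        """OSI = TOS / TAS, with both on the same (umol/L) scale.

        TOS is typically reported in umol H2O2 equivalent/L and TAS in
        mmol Trolox equivalent/L, hence the factor of 1000; the ratio is
        commonly quoted as a percentage in arbitrary units.
        """
        return tos_umol_l / (tas_mmol_l * 1000) * 100

    def dynamic_disulfide(total_thiol_umol_l, native_thiol_umol_l):
        """Dynamic disulfide = (total thiol - native thiol) / 2, in umol/L."""
        return (total_thiol_umol_l - native_thiol_umol_l) / 2

    # Hypothetical values, not data from the study:
    print(oxidative_stress_index(tos_umol_l=18.4, tas_mmol_l=1.1))  # ~1.67
    print(dynamic_disulfide(total_thiol_umol_l=420.0, native_thiol_umol_l=380.0))  # 20.0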
Instruction: Does pediatric surgical specialty training affect outcome after Ramstedt pyloromyotomy? Abstracts: abstract_id: PUBMED:15121951 Does pediatric surgical specialty training affect outcome after Ramstedt pyloromyotomy? A population-based study. Objective: Ramstedt pyloromyotomy is a common operation in infants and is often done by general surgeons. We wished to determine whether there are any differences in outcome when this procedure is done by subspecialist pediatric general surgeons as compared with general surgeons. Methods: All Ramstedt pyloromyotomies in the province of Ontario between 1993 and 2000 were reviewed. Children with complex medical conditions or prematurity were excluded. Cases done by general surgeons were compared with those done by pediatric surgeons, specifically examining hospital stay and complications. Results: Of 1777 eligible infants, 67.9% were operated on by pediatric surgeons and 32.1% by general surgeons. Total and postoperative lengths of stay were longer in the general surgeon group compared with the pediatric surgeons (4.31 vs 3.50 days for length of stay; 2.95 vs 2.25 days for postoperative length of stay). The general surgeons had a higher overall complication rate (4.18% vs 2.58%). The incidence of duodenal perforation among general surgeons was almost 4 times that of pediatric surgeons (relative risk: 3.65; 95% confidence interval: 1.43-9.32). Of the 4 infants who required repeat surgery because of an incomplete pyloromyotomy, all were originally operated on by a general surgeon. Analysis of the effect of surgeon volume on outcomes suggested that higher volume resulted in better outcome in both groups. Conclusion: Subspecialist pediatric general surgeons achieve superior outcomes for children who undergo Ramstedt pyloromyotomy. abstract_id: PUBMED:18206465 Residency training in pyloromyotomy: a survey of 331 pediatric surgeons. Purpose: Both pediatric and general surgeons perform pyloromyotomy. Laparoscopic pyloromyotomy (LAP) and changes in referral patterns have affected the training of pediatric surgery fellows and general surgery residents. We surveyed pediatric surgeons regarding these issues. Methods: We mailed an Institutional Review Board of New Hanover Regional Medical Center-approved survey to 701 members of the American Pediatric Surgical Association within the United States to determine each surgeon's preferred technique for pyloromyotomy (LAP vs Ramstedt or transumbilical procedures [OPEN]), practice setting, involvement with trainees, and opinions regarding pyloromyotomy. Significance was determined using chi-square analyses. Results: A total of 331 (48%) surgeons responded: 197 (60%) performed most or all OPEN, and 85 (26%), most or all LAP. Laparoscopic pyloromyotomy was more likely in academic practices and children's hospitals (P < .05). Residents under surgeons performing LAP were less likely to participate (58% vs 91%; P < .05) or gain competence (22% vs 42%; P < .05). Only 34% of surgeons performing LAP believed that general surgery residents should learn pyloromyotomy, whereas 67% of surgeons performing OPEN believed that residents should learn the procedure (P < .05). A total of 307 (93%) surgeons believed at least 4 OPEN were necessary to become competent, but 126 (44%) reported that their residents performed fewer than 4. Only 104 (31%) surgeons believed that their residents were competent in pyloromyotomy.
There were 303 (92%) surgeons who believed that pyloromyotomy should be performed only by pediatric surgeons when possible. Conclusions: Most general surgical residents are not learning pyloromyotomy, in part because of the adoption of laparoscopic technique, limited operative experience, and the opinion of most pediatric surgeons that the procedure should be performed only by pediatric surgeons. abstract_id: PUBMED:8769050 Ramstedt's pyloromyotomy--what is the correct incision? Aim: To retrospectively review Starship's complication rates of the vertical midline incision and transverse right upper quadrant incision in Ramstedt's pyloromyotomy. Method: A retrospective chart and operation note review over a seven and a half year period of all infants having a Ramstedt's pyloromyotomy for infantile hypertrophic pyloric stenosis at Auckland Hospital. Results: Of 140 patients there were 117 transverse and 18 midline incisions (with 5 exclusions). No statistically significant differences were found between either group for wound dehiscence, wound infection, or incisional hernia. However, wound dehiscence occurred relatively more often in the midline group (2/18) than in the transverse group (3/117), although this did not reach statistical significance (Fisher exact test p-value = 0.15). Conclusions: There is no convincing evidence that either incision is better than the other. There is a low number of wound complications occurring at Starship with Ramstedt's operation. Overall results compare favourably with both New Zealand and overseas institutions. abstract_id: PUBMED:31237854 Minimally Invasive Supraumbilical Approach for Pyloromyotomy. Background: The three major approaches for Ramstedt pyloromyotomy (right upper quadrant incision, supraumbilical incision, and the laparoscopic method) are often compared, with some preference given to the supraumbilical approach. It has become widely adopted in many centers around the world. Aim: To analyse the early results of the supraumbilical incision in the treatment of hypertrophic pyloric stenosis and to test the hypothesis that this technique may be valuable in our clinical conditions. Materials And Methods: Within a ten-month period, five children with hypertrophic pyloric stenosis were selected (using a single random sample) for pyloromyotomy via supraumbilical incision and another five via Robertson incision. This technique consists of a semilunar incision in the upper half of the umbilicus, extended cranially in the midline. After a Ramstedt pyloromyotomy, the linea alba is sutured and the reshaped skin is sutured in a semilunar manner around the umbilicus. The scar was assessed with the Patient and Observer Scar Assessment Scale. Results: The operations were performed by pediatric surgeons with differing levels of experience and with basic equipment. The operative time was 5-10 min longer for the supraumbilical incision. The pyloromyotomy led to definitive healing, with timely feeding and discharge, and without any complications. The quality of the scar was significantly better after the supraumbilical incision. Conclusion: The supraumbilical incision is reliable and associated with low complication rates. It leaves a better scar than the Robertson incision and is an excellent alternative in the search for less invasive techniques. abstract_id: PUBMED:29506814 A simulated training model for laparoscopic pyloromyotomy: Is 3D printing the way of the future? Background: Hypertrophic pyloric stenosis (HPS) is a common neonatal condition treated with open or laparoscopic pyloromyotomy.
3D-printed organs offer realistic simulations to practice surgical techniques. The purpose of this study was to validate a 3D HPS stomach model and assess model reliability and surgical realism. Methods: Medical students, general surgery residents, and adult and pediatric general surgeons were recruited from a single center. Participants were videotaped three times performing a laparoscopic pyloromyotomy using box trainers and 3D-printed stomachs. Attempts were graded independently by three reviewers using GOALS and Task Specific Assessments (TSA). Participants were surveyed using the Index of Agreement of Assertions on Model Accuracy (IAAMA). Results: Participants reported their experience levels as novice (22%), inexperienced (26%), intermediate (19%), and experienced (33%). Interrater reliability was similar for overall average GOALS and TSA scores. There was a significant improvement in GOALS (p<0.0001) and TSA scores (p=0.03) between attempts and overall. Participants felt the model accurately simulated a laparoscopic pyloromyotomy (82%) and would be a useful tool for beginners (100%). Conclusion: A 3D-printed stomach model for simulated laparoscopic pyloromyotomy is a useful training tool for learners to improve laparoscopic skills. The GOALS and TSA provide reliable technical skills assessments. Level Of Evidence: II. abstract_id: PUBMED:34172284 A history of the surgical correction of pyloric stenosis. Infantile hypertrophic pyloric stenosis (IHPS) is a complex disorder with an incidence of 4 to 5 per 1000 live births and a strong male predilection. The natural history of this initially fatal condition with nearly 100% mortality has evolved into that of a disease with a good prognosis. Pyloromyotomy by Ramstedt was first described 110 years ago and remains the standard of surgical treatment for patients with IHPS. The laparoscopic approach is becoming more common and is now the preferred method of treatment in many centres. abstract_id: PUBMED:23133001 A modified spreader for pyloromyotomy. A modified spreader for pyloromyotomy was devised for congenital hypertrophic pyloric stenosis. It was found to be safe, effective, and useful. abstract_id: PUBMED:32984197 Laparoscopic vs. Open Pyloromyotomy in Treatment of Infantile Hypertrophic Pyloric Stenosis. Background/Purpose: Laparoscopic pyloromyotomy has gained wide popularity in the management of pyloric stenosis, with contradictory results regarding its benefits over the classic open approach. This study aimed at comparing both regarding their safety, efficiency, and outcome. Methods: This is a prospective randomized controlled study performed from April 2017 to April 2019. It included 80 patients, divided randomly into two groups, where laparoscopic pyloromyotomy was performed in group A and open pyloromyotomy in group B. Both groups were compared regarding operative time, post-operative pain score, time required to reach full feeding, hospital stay, complications, and parents' satisfaction. Results: Median operative time was 21 min in group A vs. 30 min in group B (P = 0). Pain Assessment in Neonates scores were generally higher in group B, with more doses of analgesics required (P = 0). Mean time needed to reach full feeding was 15.2 and 18.8 h in groups A and B, respectively (P = 0). Median hospital stay was 19 h in group A and 22 h in group B (P = 0.004). Parents' satisfaction also favored group A (P = 0.045).
Although no significant difference was reported between the two groups regarding early and late complications, some complications such as mucosal perforation and incomplete pyloromyotomy occurred in the laparoscopic group only. Conclusion: Laparoscopic pyloromyotomy was found superior to the open approach in terms of shorter operative time, less need for analgesics, easier establishment of oral feeding, shorter hospital stay, and better parents' satisfaction. Yet, there are still some concerns about the safety and efficiency of this procedure compared with the open technique. abstract_id: PUBMED:2051428 Modified Ramstedt's pyloromyotomy for the treatment of infantile hypertrophic pyloric stenosis. Thirty-seven infants with hypertrophic pyloric stenosis had a modified Ramstedt pyloromyotomy performed. The incidence of postoperative vomiting was low (16%), with no infant suffering sustained vomiting requiring re-exploration for incomplete pyloromyotomy. The modified Ramstedt's pyloromyotomy is a simple procedure and may confer an advantage in the postoperative course of the recovering infant. abstract_id: PUBMED:19221947 Double-Y pyloromyotomy: a new technique for the surgical management of infantile hypertrophic pyloric stenosis. Background: Many innovations and approaches have been tried for the surgical management of infantile hypertrophic pyloric stenosis (IHPS) since Ramstedt's first successful surgery almost one hundred years ago. We describe here a new technique focusing on the pylorus which may offer better results for this common condition. Methods: A prospective study of 40 patients with infantile hypertrophic pyloric stenosis was carried out over a period of 3 years, from January 2005 to January 2008. The patients were divided into 2 equal groups of 20 patients each. The study was designed so that all patients selected for the study would be optimized preoperatively with regard to hydration, acid-base status and electrolyte imbalance. All surgeries were performed after obtaining informed consent. Standard preoperative preparation and postoperative feeding regimes were used. The patients were operated on an alternating basis, i.e., one patient by Ramstedt's pyloromyotomy and the next with a double-Y pyloromyotomy. Data on patient demographics, operative time, anesthesia complications, complications, postoperative vomiting and weight gain were collected. Patients were followed up for a period of 3 months postoperatively. Statistical assessment was done using Student's t-test. Results: No significant statistical differences were found in the patient population regarding age, sex, weight at presentation, symptoms and clinical condition including electrolyte imbalance and acid-base status. The groups were also equal in terms of anesthesia and no anesthesia-related complications occurred. However, a significant difference was noted in vomiting during the first postoperative week between the double-Y pyloromyotomy (DY) and the Ramstedt's pyloromyotomy (RP) groups (2.7 +/- 0.98 days vs. 3.45 +/- 0.94 days; p = 0.018) and in the weight gain during the first 10 postoperative days (245.50 +/- 24.17 g vs. 225.25 +/- 21.61 g; p = 0.008), respectively. No long-term complications were reported and no redo pyloromyotomies were needed. Conclusion: The double-Y pyloromyotomy (Alayet's pyloromyotomy) seems to be a good technique for the surgical management of IHPS.
It offered a better functional outcome in terms of postoperative vomiting during the first postoperative week and weight gain during the first 10 days in our initial series while having a safety profile similar to Ramstedt's pyloromyotomy. Answer: Yes, pediatric surgical specialty training does affect the outcome after Ramstedt pyloromyotomy. A population-based study in Ontario between 1993 and 2000 showed that subspecialist pediatric general surgeons achieved superior outcomes for children who underwent Ramstedt pyloromyotomy compared to general surgeons. The study found that hospital stays, both total and postoperative, were shorter for cases handled by pediatric surgeons. Additionally, the overall complication rate was lower for pediatric surgeons, and the incidence of duodenal perforation was almost four times lower than that of general surgeons. Furthermore, all cases requiring repeat surgery due to an incomplete pyloromyotomy were initially operated on by general surgeons. The study also suggested that higher surgeon volume resulted in better outcomes in both groups (PUBMED:15121951).
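The population-based study cited in this answer reports its key comparison as a relative risk with a 95% confidence interval (duodenal perforation RR 3.65; 95% CI 1.43-9.32). As a minimal sketch of how such an interval is conventionally derived, the Python snippet below computes a relative risk and its Wald confidence interval on the log scale from 2x2 counts; the counts are hypothetical placeholders, since the abstract reports only the derived statistics.

    import math

    def relative_risk_ci(a, n1, b, n2, z=1.96):
        """Relative risk of group 1 vs group 2 with a log-scale Wald 95% CI.

        a/n1 = events/total in group 1, b/n2 = events/total in group 2.
        """
        p1, p2 = a / n1, b / n2
        rr = p1 / p2
        # Standard error of log(RR) for two independent binomial proportions.
        se = math.sqrt((1 - p1) / a + (1 - p2) / b)
        lo = math.exp(math.log(rr) - z * se)
        hi = math.exp(math.log(rr) + z * se)
        return rr, lo, hi

    # Hypothetical counts (not the study's raw data): 10/570 perforations
    # for general surgeons vs 6/1207 for pediatric surgeons.
    rr, lo, hi = relative_risk_ci(10, 570, 6, 1207)
    print(f"RR = {rr:.2f}, 95% CI {lo:.2f}-{hi:.2f}")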
Instruction: Persistent sepsis-induced hypotension without hyperlactatemia: is it really septic shock? Abstracts: abstract_id: PUBMED:21126850 Persistent sepsis-induced hypotension without hyperlactatemia: is it really septic shock? Purpose: The prognostic value of hyperlactatemia in septic shock is unquestionable. However, as current definitions do not include hyperlactatemia as a mandatory criterion, some hypotensive patients may be diagnosed as having septic shock despite exhibiting normolactatemia. The significance of persistent sepsis-induced hypotension without hyperlactatemia is unclear. Is it really septic shock? Our aim was to determine differences in outcome between patients diagnosed as having septic shock but exhibiting normal vs elevated lactate levels during evolution. We also explored the potential implications of including hyperlactatemia as an obligatory diagnostic criterion. Methods: We performed retrospective analyses on a cohort of 302 septic shock patients. Results: When we divided patients according to the presence of hyperlactatemia, 34% evolved without hyperlactatemia and exhibited a very low mortality risk (7.7% compared with 42.9% of those with hyperlactatemia). These patients also presented less severe organ dysfunctions and higher central venous O2 saturation values, and required lower norepinephrine doses. The potential inclusion of hyperlactatemia in the septic shock definition would reduce incidence by 34% but increase absolute mortality risk by 11%. Conclusions: Persistent sepsis-induced hypotension without hyperlactatemia may not constitute real septic shock. Our results support the need to review the current definition of septic shock. Hyperlactatemia could represent an objective parameter worth exploring as a potential diagnostic criterion for septic shock. abstract_id: PUBMED:22570774 Persistent Sepsis-Induced Hypotension without Hyperlactatemia: A Distinct Clinical and Physiological Profile within the Spectrum of Septic Shock. Introduction. A subgroup of septic shock patients will never develop hyperlactatemia despite being subjected to a massive circulatory stress. Maintenance of normal lactate levels during septic shock is of great clinical and physiological interest. Our aim was to describe the clinical, hemodynamic, perfusion, and microcirculatory profiles associated with the absence of hyperlactatemia during septic shock resuscitation. Methods. We conducted an observational study in septic shock patients undergoing resuscitation. Serial clinical, hemodynamic, and perfusion parameters were registered. A single sublingual microcirculatory assessment was performed in a subgroup. Patients evolving with versus without hyperlactatemia were compared. Results. 124 septic shock patients were included. Patients without hyperlactatemia exhibited lower severity scores and mortality. They also presented higher platelet counts and required less intensive treatment. Microcirculation was assessed in 45 patients. Patients without hyperlactatemia presented higher PPV and MFI values. Lactate was correlated to several microcirculatory parameters. No difference in systemic flow parameters was observed. Conclusion. Persistent sepsis-induced hypotension without hyperlactatemia is associated with fewer organ dysfunctions and a very low mortality risk. Patients without hyperlactatemia exhibit fewer coagulation and microcirculatory derangements despite comparable macrohemodynamics.
Our study supports the notion that persistent sepsis-induced hypotension without hyperlactatemia exhibits a distinctive clinical and physiological profile. abstract_id: PUBMED:24553507 Reclassifying the spectrum of septic patients using lactate: severe sepsis, cryptic shock, vasoplegic shock and dysoxic shock. Objective: The current definition of severe sepsis and septic shock includes a heterogeneous profile of patients. Although the prognostic value of hyperlactatemia is well established, hyperlactatemia is observed in patients with and without shock. The present study aimed to compare the prognosis of septic patients by stratifying them according to two factors: hyperlactatemia and persistent hypotension. Methods: The present study is a secondary analysis of an observational study conducted in ten hospitals in Brazil (Rede Amil - SP). Septic patients with initial lactate measurements in the first 6 hours of diagnosis were included and divided into 4 groups according to hyperlactatemia (lactate >4 mmol/L) and persistent hypotension: (1) severe sepsis (neither criterion); (2) cryptic shock (hyperlactatemia without persistent hypotension); (3) vasoplegic shock (persistent hypotension without hyperlactatemia); and (4) dysoxic shock (both criteria). Results: In total, 1,948 patients were analyzed, and the sepsis group represented 52% of the patients, followed by 28% with vasoplegic shock, 12% with dysoxic shock and 8% with cryptic shock. Survival at 28 days differed among the groups (p<0.001). Survival was highest among the severe sepsis group (69%, p<0.001 versus others), similar in the cryptic and vasoplegic shock groups (53%, p=0.39), and lowest in the dysoxic shock group (38%, p<0.001 versus others). In the adjusted analysis, the survival at 28 days remained different among the groups (p<0.001) and the dysoxic shock group exhibited the highest hazard ratio (HR=2.99, 95%CI 2.21-4.05). Conclusion: The definition of sepsis includes four different profiles if we consider the presence of hyperlactatemia. Further studies are needed to better characterize septic patients, to understand the etiology and to design adequate targeted treatments. abstract_id: PUBMED:34036519 Comparison of 5% human albumin and normal saline for fluid resuscitation in sepsis induced hypotension among patients with cirrhosis (FRISC study): a randomized controlled trial. Aims: Sepsis and septic shock are common causes of hospitalization and mortality in patients with cirrhosis. There are no data on the choice of fluid and resuscitation protocols in sepsis-induced hypotension in cirrhosis. Methods: In this open-label trial conducted at a single center, we enrolled 308 cirrhotics with sepsis-induced hypotension and randomized them to receive either 5% albumin or normal saline. The primary endpoint was a reversal of hypotension [mean arterial pressure, MAP, ≥ 65 mmHg] at 3 h. Secondary endpoints included serial effects on heart rate, arterial lactate and urine output. Results: 154 patients each received 5% albumin (males, 79.8%, mean MAP 52.9 ± 7.0 mm Hg) or 0.9% saline (85.1%, 53.4 ± 6.3 mm Hg) with comparable baseline parameters and liver disease severity. Reversal of hypotension was higher in patients receiving 5% albumin than saline at the end of one hour [25.3% and 11.7%, p = 0.03, odds ratio (95% CI) 1.9 (1.08-3.42)] and at the end of three hours [11.7% and 3.2%, p = 0.008, 3.9 (1.42-10.9)]. Sustained reduction in heart rate and hyperlactatemia (p < 0.001) was better in the albumin group.
At one week, the proportion of patients surviving was higher in the albumin group than in those receiving saline (43.5% vs 38.3%, p = 0.03). Female gender and SOFA ≥ 11 were predictors of non-response to fluid. Conclusions: 5% human albumin is safe and beneficial in reversing sepsis-induced hypotension compared to normal saline in patients with cirrhosis, improving clinically assessable parameters of systemic hemodynamics, tissue perfusion, and in-hospital short-term survival of cirrhosis patients with sepsis. abstract_id: PUBMED:36852973 Low Dose of Esmolol Attenuates Sepsis-Induced Immunosuppression via Modulating T-Lymphocyte Apoptosis and Differentiation. Background: Immunosuppression caused by immune cell apoptosis and an imbalance of T helper 2 cells (Th2) and T helper 1 cells (Th1) is associated with poor outcomes in septic patients. Esmolol was reported to improve survival by modulating immune responses in septic shock. Whether esmolol could alleviate sepsis-induced immunosuppression and the optimal dose are unclear. Methods: Four hours after cecal ligation and puncture (CLP), Wistar rats were randomized into CLP, CLP + E-5 (esmolol: 5 mg·kg⁻¹·h⁻¹) and CLP + E-18 (esmolol: 18 mg·kg⁻¹·h⁻¹) groups. Eight rats underwent sham operation. Eighteen hours after CLP, hemodynamics and organ histological injuries were evaluated, peripheral blood mononuclear cell apoptosis and T-lymphocyte subset counts were determined by flow cytometry, and the expression of p-Akt, Bcl-2, cleaved Caspase-3, and p-Erk1/2 in splenic CD4+ T-lymphocytes was determined by western blot and immunohistochemistry. β1-Adrenoreceptor expressions were evaluated using real-time polymerase chain reaction and immunohistochemistry. Results: Cecal ligation and puncture induced tachycardia, hypotension, hyperlactatemia, and multiple organ injury. Heart rate was unchanged in the CLP + E-5 group but decreased in the CLP + E-18 group. Hypotension, lactatemia, and multiple organ injuries were improved only in the CLP + E-5 group. T-lymphocyte apoptosis and the Th2/Th1 ratio were decreased in CLP + E-5 but not in CLP + E-18. p-Akt and Bcl-2 expressions were increased, while cleaved Caspase-3 and p-Erk1/2 expressions were decreased in CLP + E-5. β1-Adrenoreceptor expressions were unchanged in both CLP + E-5 and CLP + E-18 groups. Conclusions: A low dose of esmolol reduced T-lymphocyte apoptosis and restored the Th2/Th1 ratio in septic shock. Esmolol might modulate the Akt/Bcl-2/Caspase-3 pathway to relieve T-lymphocyte apoptosis and inhibit Erk1/2 activity to decrease Th0 differentiation to Th2. Esmolol may be a potential immunoregulator of septic shock. abstract_id: PUBMED:8482098 Role of nitric oxide in the hemodynamic changes of sepsis. Objective: To study the role of nitric oxide in the hemodynamic changes of sepsis. Design: Prospective, randomized, controlled, intervention study. Subjects: Twenty-five sheep randomized to four groups: Group A (n = 8, nonseptic sheep) received NG-nitro L-arginine (20 mg/kg i.v.) followed 15 mins later by L-arginine (200 mg/kg i.v.); group B (n = 4, nonseptic sheep) received L-arginine followed 15 mins later by NG-nitro L-arginine; group C (n = 7, septic sheep) received NG-nitro L-arginine (20 mg/kg i.v.) alone; group D (n = 6, septic sheep) received L-arginine (200 mg/kg i.v.) followed by NG-nitro L-arginine (20 mg/kg i.v.).
Interventions: Sheep were anesthetized with pentobarbital, mechanically ventilated and monitored with a pulmonary artery catheter, a peripheral artery catheter, and a Millar catheter in the left ventricle. Sepsis was induced by the intravenous administration of live Escherichia coli (1.5 × 10⁹ microorganisms/kg over 30 mins), which resulted in systemic hypotension, pulmonary hypertension, high cardiac output, and hyperlactatemia. Acetylcholine was administered before and after each intervention. Measurements And Main Results: In nonseptic sheep (groups A and B) NG-nitro L-arginine induced an increase in mean blood pressure (BP), pulmonary arterial pressure, and systemic and pulmonary vascular resistances, accompanied by a decrease in cardiac index and the first derivative of left ventricular pressure. L-arginine administered to normal sheep induced systemic vasodilation. In the sepsis groups (groups C and D), the increases in BP and systemic vascular resistances induced by NG-nitro L-arginine were significant but less marked than in nonseptic sheep. Pretreatment of septic sheep with L-arginine totally abolished the NG-nitro L-arginine-induced increases in systemic and pulmonary vascular resistances in this group. The administration of L-arginine in these animals induced both systemic and pulmonary vasodilation. Acetylcholine-mediated vasodilation was severely impaired in sepsis. In this condition, pretreatment with L-arginine improved the response to acetylcholine. Conclusions: These data support the view that nitric oxide plays a significant role in modulating systemic and pulmonary vasomotor tone in normal and septic sheep. L-arginine produced systemic vasodilation in normal sheep, whereas both systemic and pulmonary vasodilation were observed in septic animals. The impaired response to an endothelium-dependent vasodilator in sepsis was improved by the previous administration of L-arginine. abstract_id: PUBMED:26909707 If Channel Inhibition With Ivabradine Does Not Improve Cardiac and Vascular Function in Experimental Septic Shock. Objective: Previous studies have suggested that lowering heart rate (HR) by selective β1-blockers improves sepsis-induced cardiac and vascular dysfunction primarily by decreasing proinflammatory pathways. However, the impact of isolated heart rate reduction (HRR) on hemodynamics and inflammatory pathways remains unknown. The present study was designed to assess the effects of HRR by ivabradine, an If channel inhibitor, on cardiovascular function and inflammatory pathways in peritonitis-induced septic shock in rats. Design: Randomized animal study. Setting: University research laboratory. Interventions: Four hours after cecal ligation and puncture (CLP), Wistar rats were randomly allocated to the following groups: CLP (n = 8) and CLP + ivabradine (n = 8, administered per os 4 h after the surgery). Another eight Wistar male rats underwent sham operation. All rats received a continuous infusion of saline (10 mL·kg⁻¹·h⁻¹), analgesic (nalbuphine: 0.2 mg·kg⁻¹·h⁻¹), and antibiotics (imipenem and cilastatin sodium: 10 mg/kg) 4 h after the surgery. Assessment at 18 h included hemodynamics, in vivo cardiac function by echocardiography, and ex vivo vasoreactivity by myography. Circulating cytokine levels (TNF-α, IL-6, and IL-10) were measured by ELISA, whereas cardiac and vascular protein expressions of NF-κB/IκBα/iNOS and Akt/eNOS were assessed by Western blotting.
Results: Compared with sham animals, CLP induced tachycardia, hypotension, decreased cardiac output, hyperlactatemia, and vascular hyporesponsiveness to vasopressors. Compared with the CLP group, adjunction of ivabradine decreased the HR without any impact on blood pressure, lactatemia, or vascular responsiveness to vasopressors. Adjunction of ivabradine to CLP rats had no impact on TNF-α, IL-6, and IL-10 cytokines, or on the protein expression levels of phosphorylated forms of NF-κB, Akt, eNOS, and degradation of IκBα in cardiac and vascular tissues. Conclusion: Isolated HRR by ivabradine in an experimental model of septic shock does not appear to be associated with any effect on the tested parameters of cardiac function or on vascular responsiveness to vasopressors. Moreover, in this setting, ivabradine does not alter the circulating levels of selected pro/anti-inflammatory cytokines or cardiac and vascular NF-κB/IκBα protein expression levels. abstract_id: PUBMED:30474573 Potential therapy strategy: targeting mitochondrial dysfunction in sepsis. Recently, sepsis was redefined as a life-threatening organ dysfunction caused by a dysregulated host response to infection. Severely affected patients present with uncorrectable hypotension or hyperlactacidemia, which defines septic shock. The new definition emphasizes dysregulation of the host response and multiple organ dysfunction, which is partially attributed to metabolic disorders induced by energy crisis and oxidative stress. Mitochondria are cellular organelles well known as the center of energy production, and mitochondrial damage or dysfunction is commonly induced in septic settings and is a predominant factor leading to a worse prognosis. In the present review, we describe the major mitochondrial disorders in sepsis, from morphology to function. Several clinical and pre-clinical assays for monitoring mitochondrial function are then presented according to accumulated evidence, such monitoring being the first step toward therapy that specifically modulates mitochondrial function. Accordingly, various reagents used for regulating mitochondrial enzyme activities and promoting biogenesis have been documented, among which mitochondria-targeted, triphenylphosphonium (TPP) cation-conjugated antioxidants are the most valuable for future trials and clinical treatment, as they may improve mitochondrial function and, with it, the prognosis associated with septic complications. abstract_id: PUBMED:34595228 Classification of Septic Shock Phenotypes Based on the Presence of Hypotension and Hyperlactatemia in Cats. Background: Three different phenotypes of septic shock based on changes in blood pressure and lactate are recognized in people. Dysoxic shock, representing the combination of fluid-refractory hypotension and hyperlactatemia, is characterized by greater disease severity and mortality compared to cryptic shock (hyperlactatemia alone) and vasoplegic shock (hypotension with normal blood lactate). Little is known about septic shock and specifically its phenotypes in cats. Objective: To analyze the characteristics and prognostic implications of three septic shock phenotypes in cats with sepsis. Methods: Cats with septic shock were prospectively included. Septic shock was defined by the presence of hypotension (mean blood pressure <60 mmHg) requiring vasopressor support and/or persistent hyperlactatemia (>4 mmol/L) and classified into three subgroups: dysoxic shock, vasoplegic shock and cryptic shock.
Clinical and clinicopathological variables including APPLEfast and APPLEfull scores, occurrence of multi-organ dysfunction syndrome (MODS; presence of at least two dysfunctional organs simultaneously) and outcome were compared among subgroups. Cats with sepsis showing normal blood pressure and lactate concentrations hospitalized during the study period were included as uncomplicated sepsis, and compared to cats with septic shock for selected variables. Length of hospital stay and mortality were evaluated in the whole study population. Odds ratios for mortality were calculated using logistic regression analysis. Significance was set at P < 0.05. Results: The study enrolled 48 cats with uncomplicated sepsis and 37 cats with septic shock (dysoxic shock n = 17; vasoplegic shock n = 11; cryptic shock n = 7). Cats with dysoxic shock had significantly higher APPLEfast and APPLEfull scores compared to vasoplegic and cryptic shock. Mortality rates were not significantly different among cryptic (57%), dysoxic (65%) and vasoplegic shock (91%), while MODS occurrence was significantly lower in cats with cryptic shock (57%) compared to patients affected by dysoxic (94%) and vasoplegic (100%) shock. Cats with septic shock had higher frequency of MODS and greater mortality rate than cats with uncomplicated sepsis. Conclusion: Despite similar in-hospital mortality, cats with dysoxic and vasoplegic shock are characterized by having higher occurrence of multi-organ dysfunction compared to cats affected by cryptic shock. Results from this study suggest novel means of identifying high-risk subgroups of septic cats. abstract_id: PUBMED:10663292 S-Methylisothiourea sulfate improves renal, but not hepatic dysfunction in canine endotoxic shock model. Objective: Excess production of nitric oxide (NO) by inducible NO synthase (iNOS) has been implicated in the pathophysiology of septic shock. This study was designed to see whether S-methylisothiourea sulfate (SMT), a selective inhibitor for iNOS, prevents cardiovascular changes and multiple organ damage in the canine endotoxic shock model. Design: Prospective, comparable, experimental study. Setting: Laboratory at a university hospital. Subjects: Twenty male mongrel dogs were studied under pentobarbital anesthesia. Interventions: Dogs were divided into three groups: bacterial lipopolysaccharide (LPS) group (n = 7) receiving continuous infusion of LPS (2 mg/kg/h for 1 h); LPS plus SMT group (n = 7) receiving LPS and SMT (1 mg/kg, bolus i.v., followed by continuous infusion of 1 mg/kg/h for 1 h); and vehicle plus SMT group (n = 6). Measurements And Results: Hemodynamics, blood gas parameters, and urine output were measured during 6 h observation periods. Serum levels of lactate, transaminases, and bilirubin were measured at baseline, 1 and 6 h. Creatinine and free water clearance, urine sodium excretion and fractional excretion of sodium were calculated. LPS caused a profound hypotension associated with decreases in cardiac output and oxygen delivery, lactic acidosis, renal and liver dysfunction, and thrombocytopenia. SMT prevented the LPS-induced hypotension and renal dysfunction, whereas it did not affect the LPS-induced decreases in cardiac output or oxygen delivery, hyperlactatemia, liver dysfunction, or thrombocytopenia. SMT alone had no appreciable effects on hemodynamics, blood gases, liver or renal functions.
Conclusions: These findings show that SMT improves renal, but not hepatic dysfunction, in dogs with endotoxic shock, suggesting that iNOS-derived NO plays differential roles in sepsis-associated multiple organ dysfunction. Answer: The question of whether persistent sepsis-induced hypotension without hyperlactatemia constitutes true septic shock is addressed in several studies. According to the retrospective analysis of a cohort of 302 septic shock patients, 34% evolved without hyperlactatemia and exhibited a very low mortality risk (7.7%) compared with those with hyperlactatemia (42.9%). These patients also presented with less severe organ dysfunctions, higher central venous O2 saturation values, and required lower norepinephrine doses. The study suggests that including hyperlactatemia as an obligatory diagnostic criterion for septic shock could reduce its incidence by 34% but increase the absolute mortality risk by 11% (PUBMED:21126850). Another observational study in septic shock patients undergoing resuscitation found that those without hyperlactatemia exhibited lower severity scores and mortality, higher platelet counts, required less intensive treatment, and presented higher PPV and MFI values in microcirculatory assessments. The study concluded that persistent sepsis-induced hypotension without hyperlactatemia is associated with less organ dysfunctions and a very low mortality risk, indicating a distinctive clinical and physiological profile (PUBMED:22570774). A secondary analysis of an observational study stratified septic patients based on hyperlactatemia and persistent hypotension, identifying four different profiles: severe sepsis, cryptic shock, vasoplegic shock, and dysoxic shock. The study found that survival at 28 days differed among the groups, with the highest survival in the severe sepsis group and the lowest in the dysoxic shock group. The study suggests that the definition of sepsis includes different profiles when considering the presence of hyperlactatemia (PUBMED:24553507). In summary, the presence of persistent sepsis-induced hypotension without hyperlactatemia may not necessarily constitute true septic shock, as these patients exhibit a distinct clinical and physiological profile with less severe organ dysfunction and lower mortality risk. These findings support the need to review the current definition of septic shock and consider hyperlactatemia as a potential diagnostic criterion (PUBMED:21126850; PUBMED:22570774; PUBMED:24553507).
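The reclassification study summarized in the preceding paragraph defines four mutually exclusive profiles from two binary criteria. A minimal sketch of that decision rule in Python, using the hyperlactatemia threshold (lactate >4 mmol/L) exactly as stated in the abstract (the function and argument names are illustrative, not from the paper):

    def septic_profile(lactate_mmol_l, persistent_hypotension):
        """Classify per PUBMED:24553507; hyperlactatemia means lactate >4 mmol/L."""
        hyperlactatemia = lactate_mmol_l > 4.0
        if hyperlactatemia and persistent_hypotension:
            return "dysoxic shock"
        if hyperlactatemia:
            return "cryptic shock"
        if persistent_hypotension:
            return "vasoplegic shock"
        return "severe sepsis"

    assert septic_profile(5.2, True) == "dysoxic shock"
    assert septic_profile(5.2, False) == "cryptic shock"
    assert septic_profile(2.1, True) == "vasoplegic shock"
    assert septic_profile(2.1, False) == "severe sepsis"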
Instruction: Is the mattress important in helping babies keep warm? Abstracts: abstract_id: PUBMED:34912909 Evaluating the implementation of helping babies survive program to improve newborn care conditiona. Background: The main reason of newborn mortalities in low- and middle-income countries is due to the lack of skilled caregivers in providing essential care for babies. The aim of the present study was to evaluate the implementation of helping babies survive (HBS) program to improve newborn care condition in Isfahan, Iran. Materials And Methods: This quasi-experimental study was conducted in the labor and midwifery wards of Shahid Beheshti Hospital in Isfahan. Convenience sampling method was used for all healthy newborns who weighed >1500 g. First, the samples were selected for the control group. Then, the Helping Babies Breathe and Essential Care for Every Baby training courses were held over for ward nurses and midwives. Then, the samples of the intervention group were selected. The research tools consisted of demographic characteristic questionnaire, caregiver performance evaluation checklist, and breastfeeding registration checklist. SPSS software version 16 was used for data analysis. Results: A total of 130 newborns were divided into control (n = 65) and intervention groups (n = 65). The average time of umbilical cord clamping increased from 13.85 to 61.48 s, and the average duration of skin-to-skin contact between mother and baby increased from 11.75 to 60.47 min. The mean of early initiation of breastfeeding improved during the 1st h and the 1st day of the birth. The rate of neonatal hypothermia in the intervention group decreased sharply. Conclusion: The implementation of the HBS program can positively impact newborn care condition. abstract_id: PUBMED:17429905 Is the mattress important in helping babies keep warm?--Paradoxical effects of a sleeping surface with negligible thermal resistance. Background: Heavy wrapping and head covering are risk factors for Sudden Infant Death Syndrome. A new mattress construction, PurFlo, has extremely low thermal resistance, and when used with an infant sleeping bag minimizes the risk of head covering. Aim: To investigate the thermal balance and metabolic rate of infants sleeping on a conventional mattress or a Purflo mattress in infant sleeping bags. Methods: A longitudinal study of thermal balance of infants during day-time sleep on both mattress types in thermoneutral and cool conditions at ages 3 weeks (n = 24), 3 months (n = 22) and 5 months (n = 18). Results: In thermoneutral conditions axillary temperatures in quiet sleep were lower on the conventional mattress than on the PurFlo mattress (p < 0.05, Wilcoxon test). On lowering room temperature to 15-16 degrees C axillary temperatures fell, particularly in the older babies, and at each age were lower on the conventional mattress than the Purflo (differences 0.14-0.72 degrees C, p < 0.05, Wilcoxon test). Conclusion: In both thermoneutral and cool conditions, infant temperatures were higher on the PurFlo than the conventional mattress. The more deformable surface of the PurFlo mattress may lead to more effective insulation by the sleeping bag despite a lack of mattress insulation. abstract_id: PUBMED:27733840 Anticipated Guilt for Not Helping and Anticipated Warm Glow for Helping Are Differently Impacted by Personal Responsibility to Help. One important motivation for people behaving prosocially is that they want to avoid negative and obtain positive emotions. 
In the prosocial behavior literature, however, the motivations to avoid negative emotions (e.g., guilt) and to approach positive emotions (e.g., warm glow) are rarely separated, and sometimes even aggregated into a single mood-management construct. The aim of this study was to investigate whether anticipated guilt if not helping and anticipated warm glow if helping are influenced similarly or differently when varying situational factors related to personal responsibility to help. Helping scenarios were created, and pilot tests established that each helping scenario could be formulated both in a high-responsibility version and in a low-responsibility version. In Study 1 participants read high-responsibility and low-responsibility helping scenarios, and rated either their anticipated guilt if not helping or their anticipated warm glow if helping (i.e., separate evaluation). Study 2 was similar but here participants rated both their anticipated guilt if not helping and their anticipated warm glow if helping (i.e., joint evaluation). Anticipated guilt was clearly higher in the high-responsibility versions, but anticipated warm glow was unaffected (in Studies 1a and 1b), or even higher in the low-responsibility versions (Study 2). In Studies 3 (where anticipated guilt and warm glow were evaluated separately) and 4 (where they were evaluated jointly), personal responsibility to help was manipulated within-subjects. Anticipated guilt was again consistently higher in the high-responsibility versions, but for many types of responsibility manipulations, anticipated warm glow was higher in the low-responsibility versions. The results suggest that we anticipate guilt if not fulfilling our responsibility but that we anticipate warm glow primarily when going above and beyond our responsibility. We argue that future studies investigating motivations for helping should measure both anticipated negative consequences for oneself if not helping, and anticipated positive consequences for oneself if helping. abstract_id: PUBMED:26353887 Educational outcomes of Helping Babies Breathe training at a community hospital in Honduras. Objectives: Helping Babies Breathe is an evidence-based curriculum designed to teach basic neonatal resuscitation in low-resource countries. The purpose of this study was to evaluate the acquisition of knowledge and skills following this training and the correlation of learner characteristics to performance in a Spanish-speaking setting. Methods: Thirty-one physicians and 39 nurses completed Helping Babies Breathe training at a Honduran community hospital. Trainee knowledge and skills were evaluated before and after the training using a multiple-choice questionnaire, bag-mask ventilation skills test, and two objective structured clinical exams (OSCEs). Linear mixed-effects models were used to analyze assessment scores pre- and post-training by profession (physician or nurse) while controlling for covariates. Results: Helping Babies Breathe training resulted in significant increases in mean scores for the multiple-choice question test, bag-mask ventilation skills test, and OSCE B. Time to initiation of effective bag-mask ventilation decreased from a mean of 74.8 to 68.4 s. Despite this improvement in bag-mask ventilation, only 42% of participants were able to initiate effective bag-mask ventilation within the Golden Minute. Although physicians scored higher on the pre-test multiple-choice questions and bag-mask ventilation, nurses demonstrated a greater mean difference in scores after training.
OSCE B scores pre- and post-training increased similarly between professions. Nurses' and physicians' performance in simulation was not significantly different after the training. Assessment scores and course feedback indicated a need for more skills practice, particularly with bag-mask ventilation. Conclusions: When evaluated immediately after an initial workshop, Helping Babies Breathe training resulted in significant gains in neonatal resuscitation knowledge and skills. Following training, nurses, who commonly do not perform these skills in real-life situations, were able to perform at a similar level to physicians. Further studies are necessary to determine how to sustain this knowledge and these skills over time, how to tailor the course to learner characteristics, and whether this training translates into improvements in clinical practice. abstract_id: PUBMED:30252052 Knowledge acquisition after Helping Babies Survive training in rural Tanzania. Background: While the effectiveness of Helping Babies Breathe (HBB) training in Tanzania has been reported, no published studies of Essential Care for Every Baby (ECEB) and Essential Care for Small Babies (ECSB) in this setting have been found. This study compared knowledge before and after HBB, ECEB and ECSB training in Tanzania. Methods: Training was provided to future facilitators (n=16) and learners (n=24) in Tanzania. Using standardized multiple-choice questions, knowledge was assessed pre- and post-HBB and ECEB courses for both learners and facilitators, while ECSB assessment was conducted with facilitators only. A >80% score was considered to be a pass. Paired t-tests were used for hypothesis testing. Results: Knowledge significantly improved for both facilitators and learners on HBB and ECEB (p<0.001) and for facilitators on ECSB (p<0.001). After training, learners had difficulty identifying correct responses on one HBB item (21% incorrect) and three ECEB items (25-29% incorrect). After training, facilitators had difficulty identifying correct responses on five ECSB items (22-44% incorrect). Conclusions: Training improved knowledge in Tanzania, but not sufficiently for feeding, especially for low birthweight babies. Targeted training on feeding is warranted both within the Helping Babies Survive program and in preclinical training to improve knowledge and skill to enhance essential newborn care. abstract_id: PUBMED:31073423 Designing and Implementing the Helping Babies Breathe Program in Tanzania. The first day and especially the first hour are critical to newborn survival, with the highest risk of intrapartum-related neonatal deaths, from 60 to 70%, occurring within 24 hours of birth. Birth asphyxia (BA) or failure to initiate or sustain spontaneous breathing at birth contributes to approximately 27 to 30% of neonatal deaths. In 2009, Helping Babies Breathe (HBB), an evidence-based educational program developed to teach neonatal resuscitation techniques in limited-resource settings, was introduced and piloted in Tanzania. HBB resulted in a significant 47% reduction in early neonatal mortality from 13.4 to 7.1 per 1,000 live-born deliveries (p < 0.0001) and a significant reduction (24%) in fresh stillbirths from 19.0 per 1,000 preimplementation to 14.4 per 1,000 births postimplementation (p = 0.001). The use of stimulation and suctioning increased, whereas the need for bag mask ventilation decreased significantly post-HBB.
This success was attributed to several key strategies including elevating BA as a national priority in health care, identification of a primary person (a pediatrician) at the ministerial level who assumed ownership of the program, local site ownership by a midwife, a commitment to train all birth attendants in the current health workforce in HBB, a commitment to provide required resuscitation equipment at all levels, and periodic review of the data (biannually) at a centralized meeting, under the direction of the Ministry of Health, involving all stakeholders to instill a sense of accountability. A national rollout of provider training is nearly complete, with almost 15,000 already trained. abstract_id: PUBMED:37189901 Delayed and Interrupted Ventilation with Excess Suctioning after Helping Babies Breathe with Congolese Birth Attendants. There is a substantial gap in our understanding of resuscitation practices following Helping Babies Breathe (HBB) training. We sought to address this gap through an analysis of observed resuscitations following HBB 2nd edition training in the Democratic Republic of the Congo. This is a secondary analysis of a clinical trial evaluating the effect of resuscitation training and electronic heart rate monitoring on stillbirths. We included in-born, liveborn neonates ≥28 weeks' gestation whose resuscitation care was directly observed and documented. For the 2592 births observed, providers dried/stimulated before suctioning in 97% of cases and suctioned before ventilating in 100%. Only 19.7% of newborns not breathing well by 60 seconds (s) after birth ever received ventilation. Providers initiated ventilation at a median 347 s (more than five minutes) after birth; no cases were initiated within the Golden Minute. During 81 resuscitations involving ventilation, stimulation and suction both delayed and interrupted ventilation, with a median 132 s spent drying/stimulating and 98 s suctioning. This study demonstrates that HBB-trained providers followed the correct order of resuscitation steps. Providers frequently failed to initiate ventilation. When ventilation was initiated, it was delayed and interrupted by stimulation and suctioning. Innovative strategies targeting early and continuous ventilation are needed to maximize the impact of HBB. abstract_id: PUBMED:31993430 Teaching Helping Babies Breathe via Telehealth: A New Application in Rural Guatemala. Background: Helping Babies Breathe (HBB) is a neonatal resuscitation curriculum that teaches life-saving interventions utilized in the first minutes after birth, reducing morbidity and mortality. Traditionally, it requires in-person facilitators for didactic and hands-on training. Objectives: The aim of this study was to offer HBB to nurses and nursing students in Guatemala, with the lead facilitator presenting concepts via telehealth and in-person facilitators providing hands-on demonstration. Methods: Learners completed pre- and post-tests that included the standard HBB knowledge check, as well as an assessment of the course teaching model. Learners also completed the standard Objective Structured Clinical Evaluations (OSCEs). Results: Eighteen learners were included in the analysis. All but one learner (94%) passed the course, and the average percent improvement from the pre- to post-test was 12%. All learners achieved passing scores on the OSCEs. Learners responded positively to questions regarding the technology, connection with the instructor, and ability to ask questions.
Ninety-four percent of the learners agreed with the statement "this lecture was as good via telehealth as in person." A cost analysis demonstrated approximately USD 3,979.00 in savings using telehealth compared to a standard in-person course. Conclusions: The telehealth model was successful in delivering course material to the learners and was well received. This model represents a cost-effective way to improve access to HBB. This study may not be generalizable to other populations, and the ability to use telehealth requires reliable internet connectivity, which may not be available in all settings. Further study and expansion of this pilot are needed to assess success in other settings. abstract_id: PUBMED:31818292 Training and well-equipped facility increases the odds of skills of health professionals on helping babies breathe in public hospitals of Southern Ethiopia: cross-sectional study. Background: Health professionals equipped with adequate skills in helping babies breathe remain the backbone of the health system for improving neonatal outcomes. However, studies disagree considerably on the proximate factors underlying health care providers' skills in helping babies breathe. In Ethiopia, there is a paucity of evidence on the current status of health care providers' skills in helping babies breathe despite the improvement in neonatal health care services. Therefore, this study intended to fill those gaps by assessing the skills of helping babies breathe and its associated factors among health professionals in public hospitals in Southern Ethiopia. Methods: A facility-based cross-sectional study was conducted among 441 health professionals from March 10 to 30, 2019. A simple random sampling method was used to select the study participants. The data were collected through a pre-tested interviewer-administered questionnaire and an observational checklist. A binary logistic regression model was used to identify significant factors for the skills of helping babies breathe, using SPSS version 25. A P-value < 0.05 was used to declare statistical significance. Results: Overall, 71.1% (95%CI: 66.2, 75.4%) of health professionals had good skills in helping babies breathe. Age group from 25 to 34 (AOR = 2.24; 95%CI: 1.04, 4.81), training on helping babies breathe (AOR = 2.69; 95%CI: 1.49, 4.87), well-equipped facility (AOR = 2.15; 95%CI: 1.09, 4.25), and adequate knowledge on helping babies breathe (AOR = 2.21; 95%CI: 1.25, 3.89) were significantly associated with health professionals' good skills in helping babies breathe. Conclusions: Although a significant number of care providers had good skills in helping babies breathe, there is a need to further improve providers' skills in helping babies breathe. Hence, health facilities should be equipped with adequate materials and should facilitate frequent training for providers. abstract_id: PUBMED:36793503 What Helping Babies Breathe knowledge and skills are formidable for healthcare workers? Introduction: Most neonatal deaths occur in the first week of life, due to birth asphyxia. Helping Babies Breathe (HBB) is a simulation-based neonatal resuscitation training program to improve knowledge and skills. There is little information on which knowledge items or skill steps are challenging for the learners. Methods: We used training data from NICHD's Global Network study to understand the items most challenging for Birth Attendants (BA) to guide future curriculum modifications.
HBB training was provided at 15 primary-, secondary- and tertiary-level care facilities in Nagpur, India. Refresher training was provided 6 months later. Each knowledge item and skill step was ranked from difficulty level 1 to 6 based on whether 91%-100%, 81%-90%, 71%-80%, 61%-70%, 51%-60% or <50% of learners answered/performed the step correctly (see the short code sketch after this entry). Results: The initial HBB training was conducted among 272 physicians and 516 midwives, of whom 78 (28%) physicians and 161 (31%) midwives received refresher training. Questions related to timing of cord clamping, management of a meconium-stained baby, and steps to improve ventilation were most difficult for both physicians and midwives. The initial steps of Objective Structured Clinical Examination (OSCE)-A, i.e., equipment checking, removing wet linen and immediate skin-to-skin contact, were most difficult for both groups. Midwives missed stimulating newborns, while physicians missed cord clamping and communicating with the mother. In OSCE-B, starting ventilation in the first minute of life was the most missed step after both the initial and the 6-month refresher training for physicians and midwives. At the retraining, retention was worst for cutting the cord (physicians level 3), optimal rate of ventilation, improving ventilation and counting heart rate (midwives level 3), calling for help (both groups level 3) and the scenario-ending step of monitoring the baby and communicating with the mother (physicians level 4, midwives 3). Conclusion: All BAs found skill testing more difficult than knowledge testing. The difficulty level was higher for midwives than for physicians. So, the HBB training duration and frequency of retraining can be tailored accordingly. This study will also inform subsequent refinement in the curriculum so that both trainers and trainees will be able to achieve the required proficiency. Answer: Yes, the mattress is important in helping babies keep warm. A study investigating the thermal balance and metabolic rate of infants sleeping on different types of mattresses found that infant temperatures were higher on a PurFlo mattress compared to a conventional mattress in both thermoneutral and cool conditions (PUBMED:17429905). The PurFlo mattress, which has extremely low thermal resistance, when used with an infant sleeping bag, minimized the risk of head covering and maintained higher infant temperatures, suggesting that the sleeping surface can significantly impact an infant's warmth.
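A brief aside on the difficulty ranking used in PUBMED:36793503 above: the 1-6 levels are a simple binning of the share of learners who answered or performed a step correctly. A minimal sketch of that rule in Python (the function name and the handling of the unstated 50-51% gap are our own assumptions; the abstract only gives the percentage bands):

def difficulty_level(percent_correct):
    # Bands from the abstract: 91-100% -> 1, 81-90% -> 2, 71-80% -> 3,
    # 61-70% -> 4, 51-60% -> 5, <50% -> 6. The 50-51% gap is not
    # specified in the abstract; we fold it into level 6 here.
    bands = [(91, 1), (81, 2), (71, 3), (61, 4), (51, 5)]
    for lower_bound, level in bands:
        if percent_correct >= lower_bound:
            return level
    return 6

# Example: a step performed correctly by 58% of learners sits at level 5.
assert difficulty_level(58) == 5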
Instruction: Are We Missing Traumatic Bowel and Mesenteric Injuries? Abstracts: abstract_id: PUBMED:27266653 Are We Missing Traumatic Bowel and Mesenteric Injuries? Purpose: Traumatic bowel and mesenteric injury (TBMI), although an uncommon entity, can be lethal if not detected and treated in a timely manner. The purpose of our study was to evaluate the diagnostic accuracy of 64-slice multidetector computed tomography (MDCT) for the detection of TBMI in patients at our level 1 trauma centre. Methods: We used our hospital's trauma registry to identify patients with a diagnosis of TBMI from January 1, 2006, to June 30, 2013. Only patients who had a 64-slice MDCT scan at presentation and subsequently underwent laparotomy or laparoscopy were included in the study cohort. Using the surgical findings as the gold standard, the accuracy of prospective radiology reports was analyzed. Results: Of the 4781 trauma patients who presented to our institution, 44 (0.92%) had surgically proven TBMI. Twenty-two of 44 were excluded as they did not have MDCT before surgery. The study cohort consisted of 14 males and 8 females with a median age of 41.5 years and a median injury severity score of 27. In total, 17 of 22 had blunt trauma and 5 of 22 had penetrating injury. A correct preoperative imaging diagnosis of TBMI was made in 14 of 22 patients. The overall sensitivity of the radiology reports was 63.6% (95% confidence interval [CI]: 41%-82%), specificity was 79.6% (95% CI: 67%-89%), PPV was 53.9% (95% CI: 33%-73%), and the NPV was 85.5% (95% CI: 73%-94%). Accuracy was calculated at 75.3%. However, only 59% (10 of 17) of patients with blunt injury had a correct preoperative diagnosis. Review of the findings demonstrated that the majority of patients with missed blunt TBMI (5 of 7) showed only indirect signs of injury. Conclusion: The detection of TBMI in trauma patients on 64-slice MDCT can be improved, especially in patients presenting with blunt injury. Missed cases in this population occurred because the possibility of TBMI was not considered despite the presence of indirect imaging signs. The prospective diagnosis of TBMI remains challenging despite advances in CT technology and widespread use of 64-slice MDCT. abstract_id: PUBMED:34950276 Post traumatic arterial occlusive mesenteric ischemia: a rare case report. Mesenteric ischemia is an uncommon condition with very high mortality rates, characterized by inadequate blood supply, inflammatory injury, and subsequent necrosis of the bowel wall. Acute arterial mesenteric ischemia is usually caused by cardiac emboli, atherosclerotic vascular disease, aortic aneurysm, or dissection. We report a case of a 60-year-old male who presented to the accident and emergency department complaining of abdominal pain following blunt abdominal trauma. An urgent contrast-enhanced computed tomography scan demonstrated superior mesenteric artery thrombosis with ischemic small bowel. Surgical intervention was carried out with resection of the necrotic bowel followed by anastomosis. Acute occlusive mesenteric ischemia needs to be considered in cases of blunt trauma presenting with abdominal pain. Proper early diagnosis and management are essential, as the condition carries a high risk of morbidity and mortality. abstract_id: PUBMED:34829375 The Current Diagnostic Accuracy on Free Peritoneal Fluid in Computed Tomography to Determinate the Necessity of Surgery in Blunt Bowel and Mesenteric Trauma-Systemic Review and Meta-Analysis.
Traumatic bowel mesenteric injury (TBMI) is a challenge in trauma care. The presence of free peritoneal fluid (FF) in computed tomography (CT) was considered the indication for surgical intervention. However, conservative treatment should be applied for minor injuries. We conducted a systematic review to analyze how reliable FF is for assessing TBMI. Publications were retrieved by structured searching among databases, review articles and major textbooks. For statistical analysis, summary receiver operating characteristic curves (SROCs) were computed using hierarchical models. Fourteen studies enrolling 4336 patients were eligible for final qualitative analysis. The SROC line was created by a hierarchical summary receiver operating characteristic model. The summary sensitivity of FF to predict surgical TBMI was 0.793 (95% CI: 0.635-0.894), and the summary specificity of FF to predict surgical TBMI was 0.733 (95% CI: 0.468-0.896). The diagnostic odds ratio was 10.531 (95% CI: 5.556-19.961). This study represents the most robust evidence (level 3a) to date that FF is not the absolute but an acceptable indicator for surgically important TBMI. However, there is still a need for randomized controlled trials to confirm this. abstract_id: PUBMED:30479682 Transcatheter arterial embolization for initial hemostasis in a hemodynamically unstable patient with mesenteric hemorrhage: A case report. Surgical treatment of mesenteric injuries is necessary to control hemorrhage, manage bowel injuries, and evaluate bowel perfusion. It has recently been suggested that some patients can be managed with transcatheter arterial embolization (TAE) for initial hemostasis. We present a hemodynamically unstable patient who was initially managed by TAE for traumatic mesenteric hemorrhage. A 60-year-old man was injured in a motor vehicle accident and transported to our facility. On arrival, the patient was hemodynamically stable and had abdominal pain. Physical examination revealed a seatbelt sign on the lower abdomen. A contrast-enhanced computed tomography (CT) scan showed intra-abdominal hemorrhage, mesenteric hematoma, and a giant pseudoaneurysm, but no intra-abdominal free air or changes in the appearance of the bowel wall. After the CT scan, his vital signs deteriorated and surgical intervention was considered, but TAE was performed to control the hemorrhage. After TAE, the patient was hemodynamically stable and had no abdominal tenderness. A follow-up CT scan was performed 2 days later, which showed partial necrosis of the transverse colon and some free air. Resection of the injured transverse colon with primary anastomosis was performed. The patient improved and was discharged 35 days after injury. TAE can be effective as the initial hemostatic procedure in patients with traumatic mesenteric hemorrhage. abstract_id: PUBMED:28834451 CT Findings of Traumatic Bucket-Handle Mesenteric Injuries. Objective: The purpose of this study is to describe a specific type of blunt traumatic mesenteric injury called a bucket-handle tear, review its varied CT appearances, and discuss the role of CT in its detection. Conclusion: A bucket-handle mesenteric injury is a rare but life-threatening blunt traumatic injury that can be difficult to detect prospectively on CT and for which delays in diagnosis and definitive surgical management can result in poor outcomes. abstract_id: PUBMED:34646913 Significant blunt bowel and mesenteric injury - Comparison of two CT scoring systems in a trauma registry cohort.
Purpose: This study evaluated and compared two imaging-based scoring systems for the detection of significant blunt bowel and mesenteric injury (sBBMI) by emergency computed tomography (CT). Methods And Materials: We included all consecutive adult polytrauma patients admitted to our emergency department following a road traffic accident from January 2008 to June 2015, provided that an intravenous contrast-enhanced whole-body CT examination was performed immediately after hospital admission. Two radiologists, blinded to patients' outcome, reviewed the CT examinations for distinctive direct intestinal or mesenteric vascular injury and indirect signs of abdominal injury. These findings were correlated with the patients' surgical or interventional radiology findings, autopsy, or clinical follow-up (>24 h). Two previously validated imaging-based bowel-injury scoring systems, the CT-based Faget score and the clinically and radiologically based McNutt score (BIPS), were compared by applying each to our trauma cohort. Student's t-test, chi-squared, and logistic regression were used in analyses. Results: Twenty-one of 752 analysed patients (2.8 %) had confirmed sBBMI. Active mesenteric bleeding, mesenteric and free pneumoperitoneum, small haemoperitoneum, non-focal bowel wall thickening, mesenteric/pericolic fat stranding, and anterior abdominal wall injury were significantly correlated with sBBMI, as were the two evaluated scoring systems (p < 0.001). However, multivariate logistic regression revealed the superiority of the Faget score to the McNutt score. Conclusion: The prevalence of sBBMI among polytrauma patients is low. Early diagnosis is necessary to avoid increased mortality. Certain CT features are pathognomonic of sBBMI and must not be overlooked. Scoring systems are helpful, especially when they are based on radiological signs. abstract_id: PUBMED:7365850 Traumatic arteriovenous fistula of the superior mesenteric vessels. A case of traumatic arteriovenous fistula of the superior mesenteric vessels is added to a collective review of 16 previous cases. Nausea, vomiting, cramping abdominal pain, diarrhea, abdominal thrill and bruit were usually present. Arteriography was diagnostic. Four patients operated upon immediately or within 11 hours after penetrating abdominal trauma had superior mesenteric arteriovenous injuries repaired; the remainder had a mesenteric vascular injury missed at initial laparotomy. The location of the fistula determined the method of repair. Spontaneous obliteration occurred in one patient. In two of the three deaths, portal hypertension was present and a fistula was discovered at autopsy; the other occurred 3 months after massive bowel resection. Thorough exploration of hematomas and lacerations of the mesentery will minimize or eliminate subsequent formation of an arteriovenous fistula. abstract_id: PUBMED:26815780 Blunt bowel and mesenteric injuries detected on CT scan: who is really eligible for surgery? Background: There is no consensually accepted approach to the management of blunt bowel and mesenteric injuries. Surgery is required urgently in the case of bowel perforation or haemodynamic instability, but several patients can be treated non-operatively. This study aimed to identify the risk factors for surgery in an initial assessment. Methods: We retrospectively reviewed the medical charts and computed tomography (CT) scans of adult patients presenting with blunt abdominal trauma to our centre between the years 2004 and 2011.
We included only patients with a CT scan showing suspected injury to the mesentery or bowel. Results: There were 43 patients (33 males and 10 females), with a mean Injury Severity Score (ISS) of 22. The most frequently suspected injuries based on a CT scan were mesenteric infiltrations in 40 (93 %) patients and bowel wall thickening in 22 (51 %) patients. Surgical therapy was required for 23 (54 %) patients. Four factors were independently associated with surgical treatment: a free-fluid peritoneal effusion without solid organ injury [adjusted odds ratio (OR) = 14.4, 95 % confidence interval (CI) [1.9-111]; p = 0.015], a beaded appearance of the mesenteric vessels (OR = 9 [1.3-63]; p = 0.027), female gender (OR = 14.2 [1.3-159]; p = 0.031) and ISS >15 (OR = 6.9 [1.1-44]; p = 0.041). Surgery was prescribed immediately for 11 (26 %) patients and with delay, after the failure of initially conservative treatment, for 12 (28 %) patients. The presence of a free-fluid peritoneal effusion without solid organ injury was also an independent risk factor for delayed surgery (OR = 9.8 [1-95]; p = 0.048). Conclusions: In blunt abdominal trauma, the association of a bowel and/or mesenteric injury with a peritoneal effusion without solid organ injury on an initial CT scan should raise the suspicion of an injury requiring surgical treatment. Additionally, this finding should lead to a clinical discussion of the benefit of explorative laparotomy to prevent delayed surgery. However, these findings need validation by larger studies. abstract_id: PUBMED:29503782 Missing Globe: A Case of Severe Head Trauma, Eyelid Laceration, and Traumatic Enucleation. We report an unusual periocular injury sustained by a 19-year-old motorcycle rider during an accident. The patient had lacerations on the right upper and lower eyelids, and the globe was enucleated en bloc. Despite disorganization of the eyelids and orbit, reconstruction of the eyelids and anophthalmic socket was successful. Primary reconstruction of the anophthalmic socket in traumatic enucleation is a real challenge, especially when the conjunctival and the orbital tissues are missing or disorganized. It is proposed to utilize optic nerve regeneration techniques in cases of traumatic enucleation when the globe is intact. However, in the current case, the globe was unavailable. abstract_id: PUBMED:37663688 Delayed presentation of traumatic diaphragmatic hernia complicated by bowel obstruction and perforation: a case report. Post-traumatic diaphragmatic hernia is a rare but life-threatening condition resulting from a traumatic injury to the diaphragm. We present a delayed presentation of a right-sided diaphragmatic hernia with subsequent bowel obstruction and perforation in a 48-year-old man with a history of fall injury and tuberculosis. The patient underwent exploratory laparotomy with manual reduction of the herniated bowel, primary repair of the diaphragmatic defect, and the creation of a double-barrel ileostomy. This case highlights the importance of considering traumatic diaphragmatic hernia in patients with a history of trauma presenting with abdominal symptoms. Timely diagnosis and surgical intervention are crucial in preventing serious complications associated with this condition. Answer: The question "Are We Missing Traumatic Bowel and Mesenteric Injuries?" reflects a concern in trauma care regarding the detection and timely treatment of traumatic bowel and mesenteric injuries (TBMI).
A study evaluating the diagnostic accuracy of 64-slice multidetector computed tomography (MDCT) for detecting TBMI at a level 1 trauma center found that the overall sensitivity of radiology reports was 63.6%, with a specificity of 79.6%, indicating that there is room for improvement, especially in patients with blunt injury. The study concluded that the prospective diagnosis of TBMI remains challenging despite advances in CT technology and the widespread use of 64-slice MDCT (PUBMED:27266653). Another study reported a case of post-traumatic arterial occlusive mesenteric ischemia, emphasizing the need for early diagnosis and management due to the high risk of morbidity and mortality associated with this condition (PUBMED:34950276). A systematic review and meta-analysis found that the presence of free peritoneal fluid (FF) in CT scans is an acceptable, but not absolute, indicator for surgically important TBMI. The summary sensitivity of FF to predict surgical TBMI was 0.793, and the summary specificity was 0.733 (PUBMED:34829375). Transcatheter arterial embolization (TAE) has been suggested as an effective initial hemostatic procedure in patients with traumatic mesenteric hemorrhage, as demonstrated in a case report of a hemodynamically unstable patient (PUBMED:30479682). The CT findings of a specific type of blunt traumatic mesenteric injury called a bucket-handle tear were described, highlighting the difficulty in detecting this life-threatening injury on CT scans and the importance of timely diagnosis and surgical management (PUBMED:28834451). A study comparing two CT scoring systems for detecting significant blunt bowel and mesenteric injury (sBBMI) found that certain CT features are pathognomonic of sBBMI and that scoring systems based on radiological signs are helpful (PUBMED:34646913). In summary, while there have been advances in imaging technology and the development of scoring systems to aid in the detection of TBMI, the literature suggests that these injuries can still be missed, and there is a need for heightened clinical awareness and improved diagnostic strategies to ensure timely and appropriate management of TBMI.
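As a consistency check on the figures quoted above: the diagnostic odds ratio reported in PUBMED:34829375 follows directly from the summary sensitivity and specificity via the standard identity DOR = [sens/(1 - sens)] / [(1 - spec)/spec] (the identity itself is not spelled out in the abstract). Plugging in the summary values gives (0.793/0.207) / (0.267/0.733) ≈ 3.83/0.364 ≈ 10.5, in line with the reported 10.531 once rounding of the summary estimates is taken into account. Likewise, the 63.6% sensitivity from PUBMED:27266653 is simply the 14 correct preoperative diagnoses out of the 22 surgically proven cases.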
Instruction: Spinal level of fetal myelomeningocele: does it influence ventricular size? Abstracts: abstract_id: PUBMED:9207526 Spinal level of fetal myelomeningocele: does it influence ventricular size? Objective: Our objective was to determine whether the level of the spinal defect influences ventricular size in fetuses with myelomeningoceles. Materials And Methods: Sonograms of 51 fetuses with open spina bifida were reviewed to determine the gestational age, ventricular atrial diameter, severity of posterior fossa deformity, and the level of the spinal defect. Four categories for spinal defect level were used: sacral, low lumbar, high lumbar, and thoracic. Regression models for ventricular atrial diameter were fit, adjusting for gestational age, posterior fossa deformity, and spinal defect level. A Fisher's exact test was used to investigate a relationship between the level of the spinal defect and the severity of the posterior fossa deformity. Results: Spinal defect level was distributed among the four categories as follows: sacral (n = 7), low lumbar (n = 30), high lumbar (n = 10), and thoracic (n = 4). The level of the spinal defect did not significantly affect ventricular size (p > .1), and the level of the spinal defect did not show any relationship to the severity of the posterior fossa deformity (p > .8). Conclusion: The level of the spinal defect does not independently affect the degree of ventriculomegaly or severity of the posterior fossa deformity in fetuses with myelomeningoceles. abstract_id: PUBMED:8739582 Prenatal spinal evaluation and functional outcome of patients born with myelomeningocele: information for improved prenatal counselling and outcome prediction. Objective: Prenatal ultrasonography can localize the level of the spinal cord malformation, allowing prediction of the potential postnatal neurological deficit and functional prognosis. Methods: This study has two evaluations: (a) a retrospective prenatal review of 26 fetuses with spinal dysraphism (1987-1991), and (b) a follow-up descriptive study of patients (1971-1981) who underwent closure of the spinal lesion and ventricular shunting in the neonatal period. Results: Prenatal ultrasound evaluation enabled the accurate definition of the last intact vertebral level which allows separation of fetuses into three functional groups (last intact level L2, L3-4, L5-sacral). Patterns of ambulation, urinary and bowel continence, and school performance vary according to level of spinal lesion and the neurological deficit. The need for ventricular shunts, the incidence of other spinal malformations and surgical interventions did not vary with the level of the spinal lesion. Conclusions: The functional outcome for patients with myelomeningocele is variable; however, distinct patterns emerge based on the level of spinal dysraphism and the resultant neurological deficit. By relating the level of the fetal spinal lesion to outcome data, more precise functional prognoses can be given to families. abstract_id: PUBMED:37499752 Prophylactic Spinal Cord Untethering in Children with Spinal Dysraphism Undergoing Thoracolumbar Fusion for Scoliosis. Background: It remains unclear whether spinal cord untethering is necessary to reduce the chances of neurologic decline in children with myelomeningocele and complex closed spinal dysraphism who undergo thoracolumbar fusion for scoliosis. 
We sought to determine the neurologic and functional outcomes of children with spinal dysraphism undergoing spinal fusion for scoliosis with and without prophylactic spinal cord untethering. Methods: Retrospective, single-center review of patients with spinal dysraphism treated with thoracolumbar fusion over the last 10 years (2009-2019) with or without prophylactic spinal cord untethering. Results: Seventeen patients with myelomeningocele and complex closed spinal dysraphism underwent spinal fusion for scoliosis. Mean age at the time of surgery was 13.9 years. Prophylactic spinal cord untethering was performed in 8 of 17 (47%) patients. The change in Cobb angle after surgery was similar between the 2 groups (19.4° untethered vs. 19.9° no untethering). The ambulatory status was similar between the groups, with 37% of the untethered cohort and 44% of the non-untethered cohort being community or household ambulators. There were no changes in intraoperative motor or sensory evoked potentials in any patient during fusion surgery. No patient had a change in motor level or ambulatory status after scoliosis surgery. Conclusions: Our data suggest that prophylactic spinal cord untethering in children with spinal dysraphism undergoing thoracolumbar fusion for scoliosis may not be necessary in patients with moderate curvatures. Our conclusions are limited by the small sample size. A larger review of registry data may yield more powerful conclusions on the necessity of prophylactic spinal cord untethering in this patient population. abstract_id: PUBMED:21527197 Management of the urinary tract in spina bifida cases varies with lesion level and shunt presence. Purpose: The urinary and gastrointestinal tracts remain an enormous burden to the patient with spinal dysraphism. We examined our cases to determine the relationship between the level and type of spinal dysraphism, presence of a ventricular shunt, and urinary and gastrointestinal tract management. Materials And Methods: After receiving ethics approval, we reviewed the charts of 155 patients younger than 17 years with spinal dysraphism. We compiled all pertinent neurological, genitourinary and gastrointestinal outcomes from our pediatric and adolescent spina bifida clinic. Results: Of our cohort, 43% performed clean intermittent catheterization, including significantly more patients with myelomeningocele vs those with lipomeningocele or tethered cord (73% vs 19%, p <0.01). The intestinocystoplasty rate varied with lesion level and was significantly higher in cases of thoracic and thoracolumbar lesions than in cases of tethered cord, lipomeningocele and sacral myelomeningocele (p <0.005). Of the patients older than 5 years, 52% were continent of urine, but this varied significantly with lesion level and ventricular shunt presence. The urinary continence rate was highest in patients with a tethered spinal cord (57%), lipomyelomeningocele (65%) or sacral myelomeningocele (60%) (vs thoracic and lumbar myelomeningocele p <0.005). However, there was minimal difference between lumbar and thoracic lesions (25% and 26%, respectively). Of our patients, 73% were also continent of feces, which did not vary with lesion level. Conclusions: Our data confirm the association of lesion level, a ventricular shunt and continence. abstract_id: PUBMED:23157392 Spinal level of myelomeningocele lesion as a contributing factor in posterior fossa volume, intracranial cerebellar volume, and cerebellar ectopia.
Object: McLone and Knepper's unified theory of Chiari malformation Type II (CM-II) describes how the loss of CSF via the open posterior neuropore fails to create adequate distending pressure for the developing rhomboencephalic vesicle. The authors of the present article describe the relationship between the posterior fossa volume and intracranial cerebellar volume as being related to the distance from the obex of the fourth ventricle to the myelomeningocele lesion using a common mathematical model, the Hagen-Poiseuille law. Methods: All newborns who required closure of a myelomeningocele at the authors' institution between 2008 and 2011 and who were between 4 weeks premature and 2 months, corrected gestational age, at the time of MRI were included in this study. Volumes and measurements were obtained from axial and sagittal T2-weighted MR images of the brain and spine. Results: A total of 56 newborn infants met the inclusion criteria. There was a direct linear relationship between both posterior fossa volume and cerebellar volume and the spinal level of the myelomeningocele lesion (p = 0.0012 and p = 0.0041, respectively). There was a negative linear relationship between the cerebellar descent, the spinal level of the lesion, and posterior fossa volume and cerebellar volume. These relationships strengthen in patients with no syringomyelia and are not significant in those groups with syringomyelia. The results of a 1-way ANOVA for the 3 groups did not reach significance. Conclusions: Using a linear equation derived from the Hagen-Poiseuille law that describes pressure in the fourth ventricle as being directly related to the length of the central canal from the obex to the myelomeningocele lesion, the authors were able to explain the directly observed linear relation between posterior fossa volume, intracranial cerebellar volume, and cerebellar descent to the level of the spinal lesion. As this model assumes a uniform radius of the central canal they were able to validate this model when they observed a strengthening in relationships in the no syringomyelia group and statistically insignificant correlations in the groups with syringomyelia. They therefore propose that the spinal level of the lesion is one of the major determinants of posterior fossa volume, intracranial cerebellar volume, and cerebellar ectopia. abstract_id: PUBMED:34523765 Correlation of fetal ventricular size and need for postnatal cerebrospinal fluid diversion surgery in open spina bifida. Objectives: Open spina bifida is a common cause of hydrocephalus in the postnatal period. In-utero closure of the fetal spinal defect decreases the need for postnatal cerebrospinal fluid (CSF) diversion surgery. Good prenatal predictors of the need for postnatal CSF diversion surgery are currently lacking. In this study, we aimed to assess the association of fetal ventriculomegaly and its progression over the course of pregnancy with the rate of postnatal hydrocephalus requiring intervention. Methods: In this retrospective study, fetuses with a prenatal diagnosis of open spina bifida were assessed longitudinally. Ventricular diameter, as well as other potential predictors of the need for postnatal CSF diversion surgery, were compared between fetuses undergoing prenatal closure and those undergoing postnatal repair. 
Results: The diameter of the lateral ventricle increased significantly throughout gestation in both groups, but there was no difference in maximum ventricular diameter at first or last assessment between fetuses undergoing prenatal closure and those undergoing postnatal repair. There was no significant difference in the rate of progression of ventriculomegaly between the two groups, with a mean progression rate of 0.83 ± 0.5 mm/week in the prenatal-repair group and 0.6 ± 0.6 mm/week in the postnatal-repair group (P = 0.098). Fetal repair of open spina bifida was associated with a lower rate of postnatal CSF diversion surgery (P < 0.001). In all subjects, regardless of whether they had prenatal or postnatal surgery, the severity of ventriculomegaly at first and last assessments was associated independently with the need for postnatal CSF diversion surgery (P = 0.005 and P = 0.001, respectively), with a greater need for surgery in fetuses with larger ventricular size, even after controlling for gestational age at assessment. Conclusions: In fetuses with open spina bifida, fetal ventricular size increases regardless of whether spina bifida closure is performed prenatally or postnatally, but the need for CSF diversion surgery is significantly lower in those undergoing prenatal repair. Ventriculomegaly is associated independently with the need for postnatal CSF diversion in fetuses with open spina bifida, irrespective of timing of closure. abstract_id: PUBMED:15258836 Spinal fusion in children with spina bifida: influence on ambulation level and functional abilities. The aim of this study was to determine the influence of spinal fusion on ambulation and functional abilities in children with spina bifida for whom early mobilization was stimulated. Ten children (three males and seven females) with myelomeningocele were prospectively followed. Their mean age at operation was 9.3 years (standard deviation (SD): 2.4). Spinal curvature was measured according to Cobb. Pelvic obliquity and trunk decompensation were measured as well. The ambulation level was scored according to Hoffer, and functional abilities, as well as the amount of caregiver assistance, were documented using the Pediatric Evaluation of Disability Inventory. All patients were assessed before surgery and three times after surgery, with a total follow-up duration of 18 months after surgery. After spinal fusion, the magnitude of the primary curvature decreased significantly (p=0.002). Pelvic obliquity and trunk decompensation did not change. In spite of less immobilization as compared with other reported experiences, ambulation became difficult in three out of four patients who had been able to ambulate prior to surgery. Functional abilities and the amount of caregiver assistance concerning self-care (especially regarding dressing the upper and lower body, and self-catheterization) and mobility (especially regarding transfers) showed a nonsignificant trend toward deterioration within the first 6 months after surgery, but recovered afterwards. From pre-surgery to 18 months after surgery, functional skills on self-care showed borderline improvement (p=0.07), whereas mobility did not (p=0.2). Mean scores on caregiver assistance improved significantly on self-care (p=0.03), and borderline on mobility (p=0.06), meaning that less caregiver assistance was needed compared with pre-surgery. The complication rate was high (80%).
In conclusion, within the first 6 months after spinal fusion, more caregiver assistance is needed in self-care and mobility. It takes about 12 months to recover to the pre-surgery level, while a small improvement is seen afterwards. After spinal fusion, ambulation often becomes difficult, especially in exercise walkers. These findings are important for health-care professionals, in order to inform and prepare the patients and their parents properly for a planned spinal fusion. abstract_id: PUBMED:27846400 Spinal cordectomy: A new hope for morbid spinal conditions. A spinal cordectomy is a treatment option for several disorders of the spinal cord, such as post-traumatic syringomyelia, spinal cord tumor and myelomeningocele. We have done a systematic analysis of all reported cases of spinal cordectomy to investigate the possible outcomes and complications. A PubMed search was performed for literature published from 1949 to 2015 with search words "spinal cordectomy", "spinal cord transection" and "cordectomy for malignant spinal cord tumors" to select articles containing information about the indication, outcome and complication of spinal cordectomy performed for diverse etiologies. Spinal cordectomy was performed for post-traumatic syrinx (76 cases), SPAM (2 cases), central pain of spinal cord origin (22 cases), spasticity (8 cases), spinal tumors (16 cases) and myelomeningocele (30 cases). Among the 76 cases, 60 cases fulfilled the inclusion criteria for our outcome analysis in terms of improvement, stabilization or deterioration after spinal cordectomy. The results showed 78.3% excellent improvement, 13.4% stable and 8.3% (5 cases) deterioration. The reported causes of failure of spinal cordectomy for post-traumatic syrinx were scarring of a proximal stump and severe arachnoid adhesion. Sixteen cases of spinal cordectomy related to spinal cord tumors have been reported. Also reported were seven cases of GBM, two cases of AA and one case each of anaplastic tanycytic ependymoma, schwannoma, neurofibroma, atypical meningioma and malignant ganglioglioma. Cordectomy should be strongly considered in patients with malignant spinal cord tumors with complete motor loss and sensory loss below the level of the lesion as a means of preventing the spread of disease from the original tumor focus. Spinal cordectomy is a treatment option with a good outcome for post-traumatic spinal morbidity, spinal cord tumors and myelomeningocele. However, since it is an invasive and irreversible procedure, it is only considered when other options have been exhausted. abstract_id: PUBMED:9202854 Surgical treatment of syringomyelia associated with spinal dysraphism. Clinical and radiological features of syringomyelia in 15 patients with spinal dysraphism are reported. There were 8 patients with occult spinal dysraphism (lumbosacral lipoma) and 7 with spina bifida aperta (meningomyelocele). Syringomyelia with spinal dysraphism can be radiologically divided into two types according to the dysraphic state. The syrinx in the patients with occult spinal dysraphism occurred immediately rostral to the lipoma and was localized to the lower thoracic to lumbar levels, while in the meningomyelocele patients the syrinx extended from the cervical to the thoracic level. Large syrinx formation was recognized in 1 of the 7 occult spinal dysraphism cases and 3 of the 8 meningomyelocele cases.
For syringomyelia with occult spinal dysraphism, 4 patients underwent syringo-subarachnoid shunting (S-S shunt, 2 cases) or syringostomy (2 cases) during an untethering operation. In the case of meningomyelocele, S-S shunts were placed in 2 patients. Collapse of the syrinx was achieved in all 6 patients who underwent S-S shunting or syringostomy. Decreased size of the syrinx was also noted in 3 occult spinal dysraphism patients who underwent untethering operations alone. In conclusion, a large syrinx in the case of spinal dysraphism should be surgically treated. S-S shunting is effective in both types of syringomyelia. Foramen magnum decompression may be an alternative method of surgical treatment for syringomyelia in patients with meningomyelocele. abstract_id: PUBMED:1201524 Congenital displacement of temporal cortex into the central spinal canal. This is a report of the first recorded observation of displacement of temporal cortex into the central spinal canal in an infant with Arnold Chiari malformation, platybasia, aqueductal atresia, hydrocephalus and meningomyelocele. The combination of an absent right cerebellar hemisphere and malformed fourth ventricular roof provided the anatomical background for this unique event. Answer: The spinal level of fetal myelomeningocele does not independently affect the degree of ventriculomegaly or the severity of the posterior fossa deformity in fetuses with myelomeningoceles. A study reviewed sonograms of 51 fetuses with open spina bifida and found that the level of the spinal defect did not significantly affect ventricular size (p > .1), and there was no relationship between the level of the spinal defect and the severity of the posterior fossa deformity (p > .8) (PUBMED:9207526).
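A note on the Hagen-Poiseuille model invoked in PUBMED:23157392 above. The abstract does not write the law out; in its standard form, the pressure drop ΔP across a tube of length L and radius r carrying laminar flow Q of a fluid with viscosity μ is ΔP = 8μLQ/(πr⁴). For fixed μ, Q and a uniform canal radius r, ΔP therefore scales linearly with L, here the distance from the obex to the myelomeningocele lesion. This is why the authors expect linear relations between the spinal level of the lesion and posterior fossa and cerebellar volumes, and why those relations should weaken when syringomyelia violates the uniform-radius assumption, exactly as they observe.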
Instruction: Conformational transitions provoked by organic solvents in beta-lactoglobulin: can a molten globule like intermediate be induced by the decrease in dielectric constant? Abstracts: abstract_id: PUBMED:9218954 Conformational transitions provoked by organic solvents in beta-lactoglobulin: can a molten globule like intermediate be induced by the decrease in dielectric constant? Background: It is known that nonnative states of protein molecules can exist in living cells and can be involved in a number of physiological processes. It has also been established that the membrane surface can be responsible for the partial denaturation of proteins due to negative charges on it. The local decrease in the effective dielectric constant of water near the organic surface has been suggested to be an additional driving force for protein denaturation in the membrane field, but data to confirm this suggestion were lacking. Results: Conformational transitions induced in beta-lactoglobulin by methanol, ethanol, isopropanol, dimethylformamide and dioxane were studied by near and far UV circular dichroism, steady-state tryptophan fluorescence and fluorescence decay of 8-anilinonaphthalene-1-sulfonate (8-ANS). The existence of at least two noncoinciding cooperative transitions has been established in all solvent systems studied. The first of these transitions describes the disruption of rigid tertiary structure in protein molecules, while the second reflects the formation of an expanded helical conformation typical of proteins in concentrated organic solvents. This means that the organic solvents provoke the formation of a denatured intermediate state with pronounced secondary structure and native-like compactness. We show that the positions of maxima in fI versus dielectric constant dependence virtually coincide for all five solvent systems studied. Conclusions: The decrease in the dielectric constant of the solvent induces in beta-lactoglobulin an equilibrium intermediate state. This state, being denatured, is relatively compact and has pronounced secondary structure and high affinity for the hydrophobic fluorescent probe 8-ANS, i.e. possesses all the properties of the molten globule intermediate state. abstract_id: PUBMED:9556528 Decrease of dielectric constant transforms the protein molecule into the molten globule state. The effect of organic solvents of different nature (alcohols, amides, and cyclic ethers) on the structural properties of beta-lactoglobulin was studied by circular dichroism and fluorescence spectroscopy. It is shown that the protein molecule undergoes at least two consecutive conformational transitions upon increase of the organic solvent concentration. Denaturation of the protein molecule occurs in the first step and a transition into the highly helical non-compact state in the second. In other words, under such conditions a compact denatured intermediate state is formed with a pronounced secondary structure, i.e., a molten globule-like state. It is shown that decrease of the dielectric constant of the media is responsible for structural changes of the protein molecule. abstract_id: PUBMED:26657584 Insight into the co-solvent induced conformational changes and aggregation of bovine β-lactoglobulin. Many proteins form ordered irreversible aggregates called amyloid fibrils which are responsible for several neurodegenerative diseases. β-lactoglobulin (β-lg), an important globular milk protein, self-assembles to form amyloid-like fibrils on heating at low pH. 
The present study investigated the effects of two commonly used organic solvents, acetonitrile (MeCN) and the antimicrobial preservative benzyl alcohol (BA), on the conformation and self-assembly of β-lg at ambient conditions. Both MeCN and BA induced a concentration-dependent conformational change showing exposure of hydrophobic patches, loss of tertiary structure and higher α-helical structure at moderate concentrations. In the presence of 50-80% (v/v) MeCN and 1.5-3% (v/v) BA, further structural transitions from α-helical to non-native β-sheet structure were observed, with a molten globule-like intermediate at 70% MeCN. These non-native β-sheet structures have a high tendency to form aggregates. The formation of β-lg self-assembly was confirmed by Thioflavin T studies, Congo red assay, Rayleigh scattering and dynamic light scattering analysis. Transmission electron microscopy studies showed amyloid fibril formation in both MeCN and BA. Our results showed that BA enhances the unfolding and self-assembly of β-lg at a much lower concentration than MeCN. Thus, solvent composition forces the protein into non-native structures that are responsible for protein aggregation. abstract_id: PUBMED:30880641 Characterization of heterogeneous intermediate ensembles on the guanidinium chloride-induced unfolding pathway of β-lactoglobulin. The folding pathway of β-LgA (β-lactoglobulin) evolves through the conformational α→β transition. The α→β transition is a molecular hallmark of various neurodegenerative diseases. Thus, β-LgA may serve as a good model for understanding the molecular mechanism of protein aggregation involved in neurodegenerative diseases. Here, we studied the conformational dynamics of β-LgA in 6 M GdmCl at different temperatures using MD simulations. Structural order parameters such as RMSD, Rg, SASA, native contacts (Q), hydrophobic distal-matrix and free-energy landscape (FEL) were used to investigate the conformational transitions. Our results show that GdmCl destabilizes the secondary and tertiary structure of β-LgA by weakening the hydrophobic interactions and hydrogen bond network. Multidimensional FEL shows the presence of different unfolding intermediates at 400 K. I1 is a long-lived intermediate with mostly intact native secondary structure but loose tertiary structure. I2 is a structurally compact intermediate formed after the partial loss of secondary structure. The transient and infrequent burial of W19 shows that the intermediate conformational ensembles are structurally heterogeneous. We observed that the intermediate conformations are largely stabilized by non-native H-bonds. The outcome of this work provides the molecular details of intermediates trapped due to non-native interactions that may be regarded as pathogenic conformations involved in neurodegenerative diseases. abstract_id: PUBMED:2318308 Evidence for a molten globule state as a general intermediate in protein folding. The folding of globular proteins occurs through intermediate states whose characterisation provides information about the mechanism of folding. A major class of intermediate states is the compact 'molten globule', whose characteristics have been studied intensively in those conditions in which it is stable (at acid pH, high temperatures and intermediate concentrations of strong denaturants).
In studies involving bovine carbonic anhydrase, human alpha-lactalbumin, bovine beta-lactoglobulin, yeast phosphoglycerate kinase, beta-lactamase from Staphylococcus aureus and recombinant human interleukin 1 beta, we have demonstrated that a transient intermediate which accumulates during refolding is compact and has the properties of the 'molten globule' state. We show that it is formed within 0.1-0.2 s. These proteins belong to different structural types (beta, alpha + beta and alpha/beta), with and without disulphide bridges, and they include proteins with quite different times of complete folding (from seconds to tens of minutes). We propose that the formation of the transient molten globule state occurs early on the pathway of folding of all globular proteins. abstract_id: PUBMED:8467066 Reversible effects of medium dielectric constant on structural transformation of beta-lactoglobulin and its retinol binding. The secondary structure transformation of beta-lactoglobulin from a predominantly beta-structure into a predominantly alpha-helical one under the influence of solvent polarity changes is reversible. Independent of the alcohol used (methanol, ethanol, or 2-propanol), the midpoints of the observed structural transformation occur around dielectric constant epsilon approximately 60. The structural change destroying the hydrophobic core formed by the beta-barrel structure leads, at room temperature, to the dissociation of the retinol/beta-lactoglobulin complex in the neighborhood of dielectric constant epsilon approximately 50. However, when the dielectric constant of the medium is raised back to epsilon approximately 70 by the decrease of the temperature, both the refolding of BLG into a beta-structure and the reassociation of the retinol/beta-lactoglobulin complex are observed. The esterification of beta-lactoglobulin carboxyl groups has two effects: on the one hand, it accelerates the beta-strand ⇌ alpha-helix transition induced by alcohols. On the other hand, the esterification of beta-lactoglobulin strengthens its interaction with retinol, as may be deduced from the smaller apparent dissociation constant of the retinol/methylated beta-lactoglobulin complex. The binding of retinol to modified or unmodified beta-lactoglobulin has no influence (stabilizing or destabilizing) on the folding changes induced by alcohol. abstract_id: PUBMED:10194367 Folding-unfolding equilibrium and kinetics of equine beta-lactoglobulin: equivalence between the equilibrium molten globule state and a burst-phase folding intermediate. The denaturant-induced equilibrium unfolding transition of equine beta-lactoglobulin was investigated by ultraviolet absorption, fluorescence, and circular dichroism (CD) spectra. An equilibrium intermediate populates at moderate denaturant concentrations, and its CD spectrum is similar to that of the molten globule state previously observed for this protein at acid pH [Ikeguchi, M., Kato, S., Shimizu, A., and Sugai, S. (1997) Proteins: Struct., Funct., Genet. 27, 567-575]. The unfolding and refolding kinetics were also investigated by stopped-flow CD and fluorescence. A significant change in the CD intensity was observed within the dead time of measurements (25 ms) when the refolding reaction was initiated by diluting the urea-unfolded protein solution, indicating the transient accumulation of the folding intermediate. The CD spectrum of this burst-phase intermediate agrees well with that of the molten globule state at acid pH.
The stability of the burst-phase intermediate was also estimated from the urea-concentration dependence of the burst-phase amplitude, and it shows fair agreement with that of the equilibrium intermediate. These results indicate that the molten globule state of equine beta-lactoglobulin populates at moderate urea concentrations as well as at acid pH and is equivalent to the kinetic folding intermediate. abstract_id: PUBMED:32026791 Impact of industrial cream heat treatments on the protein composition of the milk fat globule membrane. The impact of cream processing on the milk fat globule membrane (MFGM) was assessed in an industrial setting for the first time. Three creams and their derived MFGM fractions from different stages of the pasteurization procedure at a butter dairy were investigated and compared to a native control as well as a commercial MFGM fraction. The extent of cross-linking of serum proteins to MFGM proteins increased progressively with each consecutive pasteurization step. Unresolved high molecular weight aggregates were found to consist of both indigenous MFGM proteins and β-lactoglobulin as well as αs1- and β-casein. With regards to fat globule stability, in terms of resistance towards coalescence and flocculation after cream washing, single-pasteurized cream exhibited reduced sensitivity to cream washing compared to non- and double-pasteurized creams. Inactivation of the agglutination mechanism and the increased presence of non-MFGM proteins may determine this balance between stable and non-stable fat globules. abstract_id: PUBMED:21344910 Method for estimating the internal permittivity of proteins using dielectric spectroscopy. Protein charge organization is dependent on the low-permittivity region in the hydrophobic core of the molecule. We suggest a novel approach to estimate the dielectric constant of this region by comparing measured and simulated first- and second-order charge moments. Here, the dipole moment is measured as a function of pH using dielectric spectroscopy. The results are compared to dipole moments based on Poisson-Boltzmann estimates of pKa shifts calculated from structures in the Protein Data Bank. Structures are additionally refined using CHARMM molecular dynamics simulations. The best estimate for the internal permittivity is found by minimizing the root-mean-square residual between measured and predicted charge moments. Using the protein β-lactoglobulin, a core dielectric constant in the range of 6-7 is estimated. abstract_id: PUBMED:11453757 Beta-lactoglobulin molten globule induced by high pressure. Beta-lactoglobulin (beta-LG) was treated with high hydrostatic pressure (HHP) at 600 MPa and 50 degrees C for selected times as long as 64 min. The intrinsic tryptophan fluorescence of beta-LG indicated that HHP treatment conditions induced a conformational change. HHP treatment conditions also promote a 3-fold increase in the extrinsic fluorescence of 1-anilinonaphthalene-8-sulfonate and a 2.6-fold decrease for cis-parinaric acid, suggesting an increase in accessible aromatic hydrophobicity and a decrease in aliphatic hydrophobicity. Far-ultraviolet circular dichroism (CD) spectra reveal that the secondary structure of beta-LG converts from native beta-sheets to non-native alpha-helices following HHP treatment, whereas near-ultraviolet CD spectra reveal that the native tertiary structure of beta-LG essentially disappears.
Urea titrations reveal that native beta-LG unfolds cooperatively, but the pressure-treated molecule unfolds noncooperatively. The noncooperative state is stable for 3 months at 5 degrees C. The nonaccessible free thiol group of cysteine121 in native beta-LG became reactive to Ellman's reagent after adequate HHP treatment. Gel electrophoresis with and without beta-mercaptoethanol provided evidence that the exposed thiol group was lost concomitant with the formation of S-S-linked beta-LG dimers. Overall, these results suggest that HHP treatments induce beta-LG into hydrophobic molten globule structures that remain stable for at least 3 months. Answer: Yes, conformational transitions provoked by organic solvents in beta-lactoglobulin can induce a molten globule-like intermediate state due to the decrease in dielectric constant. Studies have shown that when beta-lactoglobulin is exposed to organic solvents such as methanol, ethanol, isopropanol, dimethylformamide, and dioxane, at least two noncoinciding cooperative transitions occur. The first transition disrupts the rigid tertiary structure of the protein molecules, while the second leads to the formation of an expanded helical conformation typical of proteins in concentrated organic solvents. This suggests that organic solvents can provoke the formation of a denatured intermediate state with pronounced secondary structure and native-like compactness (PUBMED:9218954). Further research indicates that the decrease in the dielectric constant of the media is responsible for structural changes in the protein molecule, leading to the formation of a compact denatured intermediate state with a pronounced secondary structure, which is characteristic of a molten globule-like state (PUBMED:9556528). Additionally, the effects of organic solvents such as acetonitrile and benzyl alcohol on the conformation and self-assembly of beta-lactoglobulin have been investigated, showing that these solvents induce concentration-dependent conformational changes, including the exposure of hydrophobic patches, loss of tertiary structure, and an increase in alpha-helical structure. At certain concentrations, a transition to non-native beta-sheet structure occurs with a molten globule-like intermediate (PUBMED:26657584). Moreover, the reversible effects of medium dielectric constant on the structural transformation of beta-lactoglobulin and its retinol binding have been observed, where the secondary structure transformation from a predominantly beta-structure to a predominantly alpha-helical one occurs around a dielectric constant of approximately 60 (PUBMED:8467066). In conclusion, the decrease in the dielectric constant induced by organic solvents can indeed lead to the formation of a molten globule-like intermediate state in beta-lactoglobulin.
Instruction: Should FDG-PET scanning be routinely used for patients with an unknown head and neck squamous primary? Abstracts: abstract_id: PUBMED:17059628 Should FDG-PET scanning be routinely used for patients with an unknown head and neck squamous primary? Background: Between 1 and 2 per cent of head and neck squamous cell carcinoma patients will reveal no evidence of a primary malignancy. The management of this group poses many problems, including the morbidity associated with wide field irradiation as well as the difficulty in treatment when a primary does emerge. The aim of this study was to assess the use of fluoro-deoxy-glucose positron emission tomography (FDG-PET) imaging in patients presenting with an unknown head and neck primary and to consider its routine use in such patients. Methods: We enrolled 25 patients into our study over a four-year period. They all presented with a histologically proven metastatic squamous cell carcinoma of the neck for which no primary could be found despite full clinical, endoscopic and radiological evaluation with computed tomography (CT) and/or magnetic resonance imaging (MRI). Additionally, all the patients underwent imaging using FDG-PET. The images were interpreted by two radiologists experienced in PET imaging. Results: A primary was identified in nine of the 25 patients (36 per cent); however, of these patients, six had false positive results and only three patients were true positives with supportive histology. In the remaining 16 patients, no abnormality was identified on CT, MRI or PET. Of these 16 patients, two eventually displayed a primary carcinoma, the other 14 patients remaining without evidence of any primary. Conclusion: Despite the high number of positive PET scans, the actual true positive rate was 3/9 (33 per cent); conversely, the true negative rate was 14/16 (88 per cent). We conclude from this study that there is a role for FDG-PET in the patient with an unknown head and neck primary, particularly in the context of a negative PET scan. abstract_id: PUBMED:31010622 FDG PET/CT for metastatic squamous cell carcinoma of unknown primary of the head and neck. Objective: The role of FDG PET/CT is uncertain in squamous cell carcinoma of unknown primary (SCCUP). Published data are limited by heterogeneity of pathology, varied diagnostic criteria, and small sample size. Materials/methods: This retrospective study was conducted in 40 patients who had FDG PET/CT for SCCUP. Prior to PET/CT, patients had undergone standard clinical evaluations including flexible rhinolaryngoscopy. The majority of the patients had anatomic imaging. All subjects had histopathological investigations, and follow-up data after PET/CT. Results: FDG PET/CT detected the primary in 16 of 40 patients (40%), and the most common locations of the primary were in the base of the tongue, palatine tonsil, and hypopharynx. There were 10 false positive and 6 false negative PET/CT scans. There was no difference in the mean SUVs of the suspected primary lesions between the true and false positive groups. The sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and overall accuracy of FDG PET/CT for SCCUP in this patient cohort were 72.7% (16/22), 44.4% (8/18), 61.5% (16/26), 57.1% (8/14), and 60% (24/40), respectively. The tonsil was the site with the most false positive and false negative findings. Conclusions: FDG PET/CT might be an effective single shot of whole-body imaging for detection of the primary in SCCUP.
Low specificity remained the most notable weakness of FDG-PET in the work-up for SCCUP. In addition to varied physiologic uptake or inflammation-related uptake in the common locations of the primary, indication bias in image interpretation might be another significant contributor to the high false-positive rate. abstract_id: PUBMED:26549998 Carcinoma of unknown primary in the head and neck: The evaluation of the effectiveness of (18)F-FDG-PET/CT, own experience. Aim: The aim of the present study was to estimate the clinical effectiveness of (18)F-FDG-PET/CT in the detection of the primary tumor in patients with histologically proven cervical lymph node metastasis of squamous cell carcinoma from an unknown primary. Background: (18)F-fluorodeoxyglucose positron emission tomography combined with CT ((18)F-FDG-PET/CT) is believed to be very helpful in localization of the primary tumor in CUP Syndrome patients. Material And Method: 41 patients referred to Poznan Medical University Department of Head and Neck Surgery from January 2010 to December 2013 with CUP Syndrome were included in the study. All patients presented with fine-needle biopsy-proven squamous cell carcinoma metastasis in upper- or mid-neck lymph nodes. The final results were obtained from the histopathologic reports of tissue samples from anatomical regions suspected for the primary tumor, additional imaging exams as well as clinical follow-up data. Results: The (18)F-FDG-PET/CT successfully detected the primary tumor in 7 out of 41 patients (17%). In two more cases, the primary tumor was indicated in the lung. 24 of 41 patients (58.5%) analyzed in our study remained without evidence of a primary tumor. In 4 cases (9.75%), panendoscopy did not reveal any pathology at the sites indicated by PET/CT. In 4 cases we obtained histological confirmation of neoplasm on panendoscopy despite the negative results of PET/CT examinations. Conclusion: These findings suggest a relatively high usefulness of (18)F-FDG-PET/CT in the diagnostic work-up of CUP Syndrome patients. The high NPV may identify patients without an evident primary tumor, making it possible to avoid extensive resection or additional imaging. abstract_id: PUBMED:11894635 Unknown primary detected by FDG-PET. A review of the present indications of FDG-PET in head and neck cancers. To investigate the indications of 18F-2-fluoro-2-Deoxy-D-glucose (FDG) Positron Emission Tomography (PET) in head and neck cancer, the present study focuses on a case of cervical lymph node metastasis in which the primary was not detected by the routine approach. It discusses the potential role of FDG-PET in the detection of unknown primary tumors, lymph node metastasis and post-radiation follow-up, and demonstrates the implications of its findings through a few examples. Based on the literature in the field of head and neck oncology, the paper recommends the following uses for FDG-PET: 1. To guide biopsy or even local resection at the initial stage of examining the unknown primary lesions in case of high clinical suspicion 2. A whole body PET in high-risk patients may prevent unnecessary treatment and reduce the number of examinations 3. To monitor tumor response before full-dose irradiation so as not to delay the salvage surgery when applicable 4. To detect residual, recurrent or secondary neoplasm after definitive radiotherapy at least 4 months post-treatment 5. To revise the necessity of neck treatment in case of a negative PET, in the N0 necks; and 6.
In cases of clinical suspicion for laryngeal cancer recurrence and absence of objective findings before obtaining biopsy. abstract_id: PUBMED:28731808 Clinical Practice in PET/CT for the Management of Head and Neck Squamous Cell Cancer. Objective: The purpose of this article is to summarize the evidence for the value of PET/CT for the management of patients with head and neck squamous cell cancer and suggest best clinical practices. Conclusion: FDG PET/CT is a valuable imaging tool for identifying unknown primary tumors in patients with known cervical node metastases, leading to management change, and is the standard of care for the initial staging of stage III and IV head and neck squamous cell carcinomas (HNSCCs), for assessing therapy response when performed at least 12 weeks after chemoradiation therapy, and for avoiding unnecessary planned neck dissection. Neck dissection is avoided if PET/CT findings are negative, regardless of the size of the residual neck nodes, because survival outcomes are not compromised. FDG PET/CT is valuable in detecting recurrences and metastases during follow-up when suspected because of clinical symptoms and serves as a prognostic marker for patient survival outcomes for up to 5 years. Using FDG PET/CT for routine surveillance of HNSCC after 6 months of treatment without any clinical suspicion should be discouraged. abstract_id: PUBMED:17177026 FDG PET and PET/CT for the detection of the primary tumour in patients with cervical non-squamous cell carcinoma metastasis of an unknown primary. In patients with a neck metastasis from an unknown primary with non-squamous cell cancer (non-SCC) histology, the primary is often located outside the head and neck area. We retrospectively evaluated 326 patient records and found 14 patients with non-SCC neck lymph node metastasis from an unknown primary undergoing whole body F-18-fluorodeoxyglucose (FDG) positron emission tomography (PET) with or without coregistered computed tomography (PET/CT). The PET or PET/CT findings were verified by pathological work-up, additional imaging tests, and clinical follow-up. PET detected pathological FDG uptake suspicious for the primary in eight patients. PET or PET/CT findings were true positive in seven patients, true negative in four, false positive in one, and false negative in two patients. In one patient PET/CT revealed a synchronous ovarian carcinoma. The results suggest that whole body imaging with FDG PET and PET/CT can be useful to identify unknown primaries of non-SCC origin. However, the work-up of patients undergoing PET or PET/CT in our study was very heterogeneous and the primary was more likely found in patients without extensive imaging before PET scanning. Further studies should evaluate if the histology of a neck nodal metastasis should influence the choice of the imaging method and the role of PET and PET/CT imaging for the work-up of patients with a non-SCC neck lymph node metastasis of an unknown primary. abstract_id: PUBMED:37995312 Head and Neck Squamous Cell Carcinoma of Unknown Primary: A Diagnostic Work-Up. The current work-up of the primary tumor site of a head and neck squamous cell carcinoma of unknown primary is not standardized and results in several time-consuming procedures that delay treatment initiation. This article seeks to consolidate contemporary strategies used to identify the primary tumor site of an unknown primary head and neck squamous cell carcinoma and offer recommendations based on a current literature review.
abstract_id: PUBMED:25210484 FDG-PET/CT in the Assessment of Treatment Response after Oncologic Treatment of Head and Neck Squamous Cell Carcinoma. Background: In many centers, (18)F-fluorodeoxyglucose-positron emission tomography/computed tomography (FDG-PET/CT) is used to monitor treatment response after definitive (chemo)radiotherapy [(C)RT] for head and neck squamous cell carcinoma (HNSCC), but its usefulness remains somewhat controversial. We aimed at assessing the accuracy of FDG-PET/CT in detecting residual disease after (C)RT. Method: All HNSCC patients with FDG-PET/CT performed to assess treatment response 10-18 weeks after definitive (C)RT at our institution during 2008-2010 were included. The patient charts were reviewed for FDG-PET/CT findings, histopathologic findings, and follow-up data. The median follow-up time for FDG-PET/CT negative patients was 26 months. Results: Eighty-eight eligible patients were identified. The stage distribution was as follows: I, n = 1; II, n = 15; III, n = 17; IV, n = 55. The negative predictive value, positive predictive value, specificity, sensitivity, and accuracy of FDG-PET/CT in detecting residual disease were 87%, 81%, 94%, 65%, and 85%, respectively. The corresponding figures for the primary tumor site were 91%, 71%, 94%, 59%, and 86% and for the neck 93%, 100%, 100%, 75%, and 94%, respectively. Conclusions: In patients who have received definitive (C)RT for HNSCC, post-treatment FDG-PET/CT has good potential to guide clinical decision-making. Patients with a negative scan can safely be followed up clinically only, while a positive scan necessitates tissue biopsies or a neck dissection to rule out residual disease. abstract_id: PUBMED:16609665 Contribution of 18-F-FDG PET for detection of head and neck carcinomas with an unknown primary tumor. Objectives: To evaluate the usefulness of fused positron emission tomography and computed tomography (PET/CT) for the detection of head and neck carcinomas of unknown primary. Methods: This prospective study included 20 patients with cervical lymph node metastasis of squamous cell carcinoma of unknown primary after standard initial workup (nasofibroscopy and CT scan of the chest and head and neck). Patients underwent PET/CT and panendoscopy of the upper airways and upper digestive tract with PET/CT-directed biopsies. The follow-up was 6 months minimum in all patients. Results: A potential primary tumor was found in 10 of the 20 cases and confirmed by histology in 7 cases (3 bases of tongue, 1 tonsillar pillar, 1 vallecula, 1 tonsillar fossa, 1 piriform sinus). Four of these seven patients had normal endoscopy findings (diagnosis was made with submucosal PET/CT-directed biopsies). PET/CT was normal in 10 of 20 cases with 3 false negatives. The sensitivity of PET/CT was 70%, the specificity was 70% and the accuracy was 70%. Conclusion: PET/CT seems to be of interest in the detection of head and neck carcinoma of unknown primary. PET/CT detected 15% of unknown primary tumors with treatment-related implications. abstract_id: PUBMED:14623746 Staging of head and neck squamous cell cancer with extended-field FDG-PET. Background: Accurate baseline staging is necessary to appropriately treat head and neck squamous cell carcinoma. [F-18]-fluorodeoxyglucose positron emission tomography (FDG-PET) is valuable for locoregional staging of primary head and neck disease. The effectiveness of FDG-PET for the detection of distant metastatic or synchronous disease remains unproven.
Objective: To investigate the utility of FDG-PET extended into the abdomen (extended-field FDG-PET) for wide-field staging of head and neck tumors. Methods: This is a prospective institutional study of 35 consecutive patients diagnosed with American Joint Committee on Cancer (AJCC)-defined stage II-IV squamous cell carcinoma of the oral cavity, oropharynx, or larynx between September 2000 and June 2002. Thirty-three patients (94%) were eligible for analysis. All patients were routinely staged with chest radiography, liver function tests, and extended-field FDG-PET. Chest or abdominal computed tomographic scans were used as corroborative studies and were obtained only when one of the above tests indicated distant disease. Results: Of 33 patients, 7 (21%) had evidence of distant disease by extended-field FDG-PET: 4 with metastases and 3 with synchronous primary cancers of the aerodigestive tract. [F-18]-fluorodeoxyglucose PET detected hepatic, bone, gastrointestinal, and mediastinal disease not identified by chest radiography or liver function tests. Two of the 7 patients with FDG-avid distant disease had false-negative staging by all other tests, including computed tomography. Conclusions: Extended-field FDG-PET is feasible and may improve staging of primary head and neck squamous cell carcinoma. Use of staging FDG-PET must be explicitly described in reports from centers engaged in prospective research to facilitate comparison with historical results. Answer: The use of FDG-PET scanning in patients with an unknown head and neck squamous primary has been evaluated in several studies, and the results suggest that it can be a valuable tool in certain contexts. One study found that FDG-PET identified a primary tumor in 36% of patients, but the true positive rate was only 33%, with a high number of false positives. However, the true negative rate was 88%, suggesting that a negative PET scan could be useful in ruling out a primary tumor (PUBMED:17059628). Another study reported that FDG PET/CT detected the primary in 40% of patients, with a sensitivity of 72.7% and a specificity of 44.4%. The study noted that low specificity was a significant weakness, with high false positive rates due to varied physiologic uptake or inflammatory uptake (PUBMED:31010622). A different study showed that (18)F-FDG-PET/CT successfully detected primary tumors in 17% of patients, indicating its usefulness in the diagnostic process of CUP Syndrome patients (PUBMED:26549998). Furthermore, FDG-PET has been recommended for guiding biopsy or local resection in cases of high clinical suspicion for unknown primary lesions, and for whole-body imaging in high-risk patients to prevent unnecessary treatment (PUBMED:11894635). The clinical practice in PET/CT for the management of head and neck squamous cell cancer suggests that FDG PET/CT is valuable for identifying unknown primary tumors, leading to management changes, and is considered the standard of care for initial staging of advanced HNSCCs (PUBMED:28731808). However, routine surveillance of HNSCC after 6 months of treatment without clinical suspicion should be discouraged. In summary, while FDG PET/CT has limitations such as false positives and false negatives, it can be an effective tool for detecting unknown primary tumors in head and neck squamous cell carcinoma. The decision to use FDG PET/CT should be based on individual patient circumstances, considering the potential benefits and limitations of the scan.
Given the evidence, FDG PET scanning could be considered for routine use in patients with an unknown head and neck squamous primary, particularly when other diagnostic modalities have failed to identify the primary tumor and when clinical suspicion remains high.
Instruction: Do blood constituents in varicose veins differ from the systemic blood constituents? Abstracts: abstract_id: PUBMED:26100448 Do blood constituents in varicose veins differ from the systemic blood constituents? Objective: Varicose veins represent one of the most frequent vascular diseases and are in most cases benign. However, advanced disease is frequently associated with complications such as chronic venous insufficiency and superficial vein thrombosis. The pathogenic mechanisms are not well understood. Besides increased venous pressure, it is suggested that local blood constituents trigger various mechanisms responsible for the progression of the disease and its complications. Design: The aim of this study was to investigate the changes in the blood in varicose veins and to compare them with the systemic markers of inflammation and endothelial damage. Materials And Methods: Forty patients with primary varicose veins were included in the study. Most patients were class C2. Blood samples were taken from the leg from the tortuous and dilated varicose tributaries of the great saphenous vein and from the cubital vein. Results: The values of basic hematologic tests were comparable between blood samples (varicose vs. systemic). In varicose veins, the following parameters were significantly increased in comparison with systemic blood: hsCRP (3.12 ± 2.18 mg/L vs. 2.04 ± 2.21 mg/L, p = .04), IL-6 (3.54 ± 2.59 pg/mL vs. 2.25 ± 1.27 pg/mL, p = .008), vWF (118.4 ± 27% vs. 83.2 ± 22%, p < .05). D-dimer in samples taken from the leg varicose veins was also significantly higher than in the systemic blood (104.3 ± 9.3 ng/mL vs. 89.5 ± 8.3 ng/mL, p = .039). Conclusions: Some inflammatory markers and indicators of endothelial dysfunction are increased in varicose vein blood. This is most probably the consequence of deteriorated blood flow in dilated and tortuous superficial veins, and increased venous pressure. Damage to the venous wall, which causes a chronic inflammatory response, together with the procoagulant properties of local blood may promote further progression of the disease and thrombotic complications. abstract_id: PUBMED:897622 Leg blood flow and central circulation at varying blood volumes. A peroperative study of nine young patients with varicose veins. Cardiac output, central pressures and leg blood flow were studied in a group of nine young individuals undergoing surgery for varicose veins. These subjects did not differ from older ones in an earlier, corresponding study with respect to central or femoral blood flow. The haemodynamic effects of peroperative variations of the blood volume were also the same. Thus, the common femoral arterial blood flow decreased by 40% from a mean of 450 to 260 ml/min during the operation. After transfusion of 900 ml of blood the flow increased to 575 ml/min. As a low blood flow predisposes to thrombosis, it is concluded that, irrespective of age, adequate peroperative volume substitution is of great importance in counteracting such a risk. abstract_id: PUBMED:913059 Leg blood flow and central circulation at various blood volumes: a peroperative study of nine patients with varicose veins. 1. In a group of nine middle-aged patients undergoing varicose vein surgery, cardiac output, right atrial, pulmonary arterial and capillary pressures, and leg blood flow were measured after induction of general anaesthesia but before operation, and also during operation before and after blood substitution. 2.
Under anaesthesia, the mean pre-operative blood flows in the superficial and common femoral arteries were 160 ml/min and 280 ml/min, respectively. These flows are comparable with those obtained in other studies under similar conditions but lower than values obtained in conscious subjects. During the operation the leg blood flow decreased by 24%. As cardiac output remained unchanged, the fractional leg blood flow fell. After transfusion of 900 ml of blood the leg blood flow doubled. 3. It is concluded that anaesthesia, surgical trauma and variations in blood volume greatly influence the leg blood flow and that an adequate substitution of operative blood loss is of utmost importance to achieve an optimum peripheral circulation. abstract_id: PUBMED:6609870 Measurement of azygos venous blood flow by a continuous thermal dilution technique: an index of blood flow through gastroesophageal collaterals in cirrhosis. A method to quantitate blood flow through the gastroesophageal collaterals in portal hypertensive patients was developed. Since gastroesophageal collaterals drain into the azygos system, it is postulated that measurement of blood flow in the azygos vein should provide a quantitative measurement of gastroesophageal collateral blood flow changes in portal hypertensive patients. Azygos blood flow was measured using a double thermodilution catheter directed under fluoroscopy to the azygos vein. Ten patients with alcoholic cirrhosis were studied. Five of these patients had a history of repeated bleeding from gastroesophageal varices (Group I). The azygos blood flow in these patients was 596 +/- 78 ml per min. The other five patients all had decompressive surgery of the portal system (Group II). In these patients the azygos venous blood flow was 305 +/- 29 ml per min (p less than 0.01). The coefficient of variation of repeated baseline measurements was 4.4 +/- 0.6%. The azygos venous blood flow measurement is a rapid, simple and sensitive method to evaluate blood flow changes in the vessels involved in gastroesophageal bleeding due to portal hypertension. abstract_id: PUBMED:1150017 Similarity of arterial and intravenous vasopressin on portal and systemic hemodynamics. The effects of superior mesenteric arterial and intravenous infusions of vasopressin and low- and high-dose intravenous infusions of vasopressin on splanchnic and systemic hemodynamics were compared in 20 anesthetized dogs. The following parameters were evaluated: flow in the superior mesenteric artery and portal vein, portal and systemic blood pressure, and cardiac output. In the comparison of selective arterial and intravenous infusions, no statistically significant difference was found between the degree of changes in portal flow, portal and systemic blood pressure, and cardiac output. Only the superior mesenteric artery flow showed a greater decrease with the selective arterial injection. In a comparison of intravenous high-dose (corresponding to that used clinically) and low-dose (one-fifth) infusions of vasopressin, a relatively high splanchnic and low systemic effectiveness of the low dose was found. It resulted in only a 15 to 20% smaller effect on flow in the superior mesenteric artery and portal vein and portal pressure; however, an approximately 40% smaller systemic effect on arterial blood pressure and cardiac output than the high dose. The results of this experimental work warrant exploration in clinical practice, preferably by a controlled study.
If clinical success in controlling hemorrhage confirms these hemodynamic results, an intravenous low-dose infusion of vasopressin would appear to be the method of choice in the vasoconstrictive therapy of gastrointestinal bleeding from varices. abstract_id: PUBMED:33513121 Non-Invasive Capillary Blood Pressure Measurement Enabling Early Detection and Classification of Venous Congestion. Capillary blood pressure (CBP) is the primary driving force for fluid exchange across microvessels. Subclinical systemic venous congestion prior to overt peripheral edema can directly result in elevated peripheral CBP. Therefore, CBP measurements can enable timely edema control in a variety of clinical cases, including venous insufficiency and heart failure. However, currently CBP measurements can only be performed invasively and with a complicated experimental setup. In this work, we proposed an opto-mechanical system to achieve non-invasive and automatic CBP measurements through modifying the widely implemented oscillometric technique in home-use arterial blood pressure monitors. The proposed CBP system features a blue-light photoplethysmography sensor embedded in finger/toe cuffs to probe skin capillary pulsations. The experimental results demonstrated that the proposed CBP system can track local CBP changes induced by different levels of venous congestion. Leveraging the decision tree technique, we demonstrate the use of a multi-site CBP measurement at fingertips and toes to classify four categories of subjects (total N = 40) including patients with peripheral arterial disease, varicose veins and heart failure. Our work demonstrates promising non-invasive CBP measurement as well as its potential in realizing point-of-care systems for the management of cardiovascular diseases. abstract_id: PUBMED:28074124 Involvement of blood mononuclear cells in the infertility, age-associated diseases and cancer treatment. Blood mononuclear cells consist of T cells and monocyte-derived cells. Besides immunity, blood mononuclear cells belong to the complex tissue control system (TCS), where they exhibit a morphostatic function by stimulating proliferation of tissue stem cells followed by cellular differentiation, which is stopped after the proper functional stage is attained; this stage differs among tissue types. Therefore, the term immune and morphostatic system (IMS) should be applied. The TCS-mediated morphostasis also consists of vascular pericytes controlled by autonomic innervation, which regulates the quantity of distinct tissues in vivo. Lack of proper differentiation of tissue cells by TCS causes either tissue underdevelopment, e.g., muscular dystrophy, or degenerative functional failures, e.g., type 1 diabetes and age-associated diseases. With the gradual IMS regression after 35 years of age, gonadal infertility develops, followed by a growing incidence of age-associated diseases and cancers. Without restoring an altered TCS function in a degenerative disease, the implantation of tissue-specific stem cells alone by regenerative medicine cannot be successful. Transfused young blood could temporarily restore fertility to enable parenthood. The young blood could also temporarily alleviate aging diseases, and this can be extended by substances inducing IMS regeneration, such as honey bee propolis. The local and/or systemic use of honey bee propolis stopped hair and teeth loss, regressed varicose veins, improved altered hearing, and lowered high blood pressure and sugar levels.
Complete regression of stage IV ovarian cancer with liver metastases after a simple elaborated immunotherapy is also reported. abstract_id: PUBMED:12751560 Oxygenation of blood in varicose veins. The aim of this prospective, controlled study is to compare the oxygen tension of blood from varicose veins with that from both the antecubital vein of the same patients and the normal long saphenous vein of normal controls at high altitude in Abha (8,000 feet above sea level), Saudi Arabia. Forty-two subjects (21 normal controls and 21 with uncomplicated primary varicose veins) had blood samples taken from 41 normal long saphenous veins and 35 varicose veins near the ankle, respectively (a total of 76 lower limbs). Samples were also taken simultaneously from the right antecubital vein in all the subjects. There was no difference in either oxygen tension (pO2) or saturation (sO2) between blood from varicose veins and blood from normal long saphenous veins. However, in patients with varicose veins, both pO2 and sO2 of varicose vein blood were significantly higher than those of arm venous blood (P = 0.009 and P = 0.018, respectively). In normal subjects, blood from normal long saphenous veins also had significantly higher sO2 (P = 0.001) than that from arm veins, but pO2 was not higher. In conclusion, the theory of arteriovenous communication is poorly founded. There must be other more important explanations for the pathogenesis of varicose veins. abstract_id: PUBMED:697707 Effect of X-ray contrast medium on peripheral circulation, arterial blood flow and venous capacity in the lower extremities during phlebography (author's transl). The influence of contrast medium (20 ml Meglumine Diatrizoate 65%; Angiografin) on peripheral circulation was studied after injection into the vena saphena magna in 12 patients (mean age 40.5 years) with markedly varicose veins. Pneumatic segment-plethysmography and "mercury-in-rubber" strain-gauge-plethysmography were used to investigate local and systemic effects of contrast medium on intravascular basic volume and pressure-dependent venous capacity. Heart rate was controlled by ECG monitoring, venous blood pressure by means of a Schwarzer pressure transducer, and peripheral blood flow by means of pneumatic segment-plethysmography. In addition, peripheral vascular resistance was calculated. The local effect of contrast medium on large veins was a small, insignificant increase (2.2%) in venous capacity. However, there was a significant additional increase of 10% after recirculation (2 P less than 0.01). Intravascular basic volume showed a significant increase as well (2 P less than 0.01). In addition, the calculated increase in total intravascular volume (intravascular basic volume plus venous capacity) was 16.3%, related to an intravenous pressure of 70 mm Hg. According to these findings, 20 ml contrast medium enlarges the blood volume shifting into the legs with postural changes by approximately 270 ml. Arterial inflow increased significantly (2 P less than 0.01) by 40% after contrast injection, whereas mean arterial blood pressure decreased simultaneously. Consequently a reduction of peripheral vascular resistance of 32% was calculated. The negligible local effect of contrast medium on pressure-dependent venous capacity is probably due to the markedly reduced distensibility of varicose veins.
In contrast, a striking effect of contrast medium is observed after recirculation, which is felt to be due to a general vasodilatation of capillary vessels and pre- and postcapillary resistance vessels. abstract_id: PUBMED:17116611 Revisiting preoperative blood ordering policy: a single institute's experience in Taiwan. Background: If unnecessary blood orders can be reasonably waived, it will reduce both workload and financial expenditure. A review of the surgical blood ordering practice is, therefore, mandatory. Methods: Routine preoperative blood orders were retrospectively audited. After receiving the requests, we usually performed only type and screen tests without cross-matching until an actual need for transfusion occurred. Transfusion probability (number of patients transfused / number of procedures x 100) was calculated. One unit of donation was defined as 500 mL whole blood. If surgical procedures were associated with insignificant blood loss (number of units transfused < or = 1) and transfusion probability was less than 5%, then it was considered to be safe to disregard a preoperative blood order. Results: The blood ordering practices for 5,472 patients who received various surgical procedures were reviewed over a period of 48 operation days. Neither preoperative requests for preparation of red cells nor transfusion was made in 3,482 patients. Preoperative requests for preparation of red cells were made in 1,990 patients, but only 751 (37.74%) actually received blood transfusion on the day of the operation. Analysis showed that it would have been safe to disregard a preoperative blood order for ophthalmic surgery, ear surgery, nose surgery (endoscopic sinus surgery, submucosal turbinectomy), microlaryngoscopic surgery, tracheostomy, thyroidectomy, mastectomy, laparoscopic cholecystectomy, hemicolectomy, hernioplasty, arthroscopic surgery, laminectomy, laparoscopically assisted vaginal hysterectomy, vasectomy and varicose vein surgery. Conclusion: A review of preoperative blood orders has identified certain surgical procedures with insignificant blood loss and low transfusion probability, for which preoperative blood orders may be safely disregarded in order to reduce unnecessary laboratory workload while not jeopardizing patient safety. Answer: Yes, blood constituents in varicose veins do differ from the systemic blood constituents. A study that investigated changes in the blood in varicose veins compared to systemic markers of inflammation and endothelial damage found that certain parameters were significantly increased in varicose vein blood. These parameters included high-sensitivity C-reactive protein (hsCRP), interleukin-6 (IL-6), von Willebrand factor (vWF), and D-dimer. The elevated levels of these markers in varicose veins suggest a chronic inflammatory response and increased procoagulant properties, which may promote the progression of the disease and thrombotic complications (PUBMED:26100448).
Instruction: Muscle function and body composition profile in adolescents with restrictive anorexia nervosa: does resistance training help? Abstracts: abstract_id: PUBMED:26084569 Muscle function and body composition profile in adolescents with restrictive anorexia nervosa: does resistance training help? Purpose: The aim of this study was to analyze the effects of short-term resistance training on the body composition profile and muscle function in a group of Anorexia Nervosa restricting type (AN-R) patients. Methods: The sample consisted of AN-R female adolescents (12.8 ± 0.6 years) allocated into the control and intervention groups (n = 18 each). Body composition and relative strength were assessed at baseline, after 8 weeks, and 4 weeks following the intervention. Results: Body mass index (BMI) increased throughout the study (p = 0.011). Significant skeletal muscle mass (SMM) gains were found in the intervention group (p = 0.045, d = 0.6) that correlated with the change in BMI (r = 0.51, p < 0.031). Meanwhile, fat mass (FM) gains were significant in the control group (p = 0.047, d = 0.6) and correlated (r > 0.60) with change in BMI in both groups. Significant relative strength increases (p < 0.001) were found in the intervention group and were sustained over time. Conclusions: SMM gain is linked to increased relative strength when resistance training is prescribed. Although FM, relative body fat (%BF), BMI and body weight (BW) are used to monitor nutritional progress, based on our results we suggest monitoring SMM and relative strength ratios for a better estimation of body composition profile and muscle function recovery. Implications for Rehabilitation: Anorexia Nervosa Restricting Type (AN-R) is a psychiatric disorder that has a major impact on muscle mass content and function. However, little or no attention has been paid to muscle recovery. High-intensity resistance training is safe for AN-R after hospitalization and enhances the force-generating capacity as well as muscle mass gains. Skeletal muscle mass content and muscular function improvements are partially maintained for a short period of time when the exercise program ceases. abstract_id: PUBMED:6881081 A comparison between muscle function and body composition in anorexia nervosa: the effect of refeeding. Skeletal muscle function, body composition (total body nitrogen and total body potassium) and standard parameters of nutritional assessment were measured in six severely depleted patients with primary anorexia nervosa, both on admission and during oral refeeding. The function of the adductor pollicis muscle was assessed by electrical stimulation of the ulnar nerve. On admission, muscle function was markedly abnormal in the patients with anorexia nervosa (n = 6) compared with normal subjects (n = 22), with a significant increase in the force of contraction at 10 Hz, with a mean +/- SEM of 48.0 +/- 3.7% and 28.8 +/- 1.2%, respectively (p less than 0.001). There was slowing of the maximal relaxation rate, 6.6 +/- 0.6% and 9.6 +/- 0.2%, respectively (p less than 0.001) and increased muscle fatigue, 18.6 +/- 5.9% and 3.5 +/- 0.8%, respectively (p less than 0.01). Initially, the mean serum albumin was normal (4.0 +/- 0.1 g/dl), although there was evidence of severe depletion of somatic protein stores, with a low total body nitrogen and creatinine-height index. Within 4 wk of oral refeeding, maximal relaxation rate and muscle fatigability were restored to normal, and within 8 wk all parameters of muscle function were normal.
During the study, total body nitrogen increased by only 13% and was still 19.4% below the predicted normal total body nitrogen, whereas total body potassium increased by 32% and body fat by 46%. Normalization of muscle function may be related to restoration of muscle electrolytes rather than repletion of body nitrogen. abstract_id: PUBMED:36863049 Association between Body Composition Parameters and Heart Rate in a Sample of Adolescents with Anorexia Nervosa. Objectives: Eating disorders affect the physical and mental health of millions of Americans. Body composition trends in relation to heart rate in adolescents with eating disorders remain understudied. The aim of the present study was to determine whether body composition parameters (percent body fat, percent skeletal muscle mass) are correlated with heart rate in a sample of adolescents with anorexia nervosa. Methods: This study included patients 11 to 19 years old who presented to an outpatient eating disorder clinic (N = 49). Patients underwent bioelectrical impedance analysis to estimate body composition parameters. Descriptive statistics, linear regression, and paired t tests were used to evaluate the data. Results: Heart rate was inversely associated with percent skeletal muscle mass (P < 0.001) and positively associated with percent body fat (P = 0.001). Patients demonstrated significant improvements in weight, body mass index percentile, skeletal muscle mass, percent body fat, and heart rate when comparing results at the first and last visits (P < 0.01). Conclusions: Overall, there was an inverse relation between percent skeletal muscle mass and heart rate and a positive association between body fat and heart rate. Our study demonstrates the importance of assessing percent body fat and skeletal muscle mass rather than weight or body mass index alone in adolescents with eating disorders. abstract_id: PUBMED:11756057 Body composition in adolescents with anorexia nervosa. Background: Malnourished patients with anorexia nervosa have altered body composition characterized by depletion of fat and fat-free mass. Objectives: The objectives of this study were to assess the body composition of adolescents with anorexia nervosa compared with that of control subjects and to investigate the relation between simple anthropometric measures and reference techniques for measuring body composition. Design: Twenty-three adolescent females with anorexia nervosa aged 15.46 +/- 1.34 y (mean +/- SD) were studied. Body composition was measured by anthropometry, dual-energy X-ray absorptiometry (DXA) (for body fat), and prompt gamma in vivo neutron activation analysis [for total body nitrogen (TBN)]. Twenty-five female subjects provided the control DXA data. TBN measurements were compared with prediction equations based on sex, height, age, and weight. Results: Anorexia nervosa patients had significantly lower weight (40.2 +/- 4.6 kg), body mass index (in kg/m(2): 15.3 +/- 1.2), percentage of body fat (DXA) (13.8 +/- 5.8%), percentage of TBN predicted for age (73 +/- 10%), trunk fat (2.1 +/- 1.0 kg), leg fat (2.6 +/- 1.1 kg), and trunk-to-leg fat ratio than did control subjects (P < 0.05). In anorexia nervosa patients, significant correlations were found between triceps skinfold thickness and percentage of body fat (r = 0.83), body mass index and percentage of body fat (r = 0.46), and body weight and TBN (r = 0.84, P < 0.05). Conclusions: Hospitalized adolescent females with anorexia nervosa are depleted of total body fat and protein.
We identified 3 simple anthropometric measures (triceps skinfold thickness, BMI, and body weight) that can be used to assess body composition and nutritional status in malnourished adolescents with anorexia nervosa. abstract_id: PUBMED:37598747 Cross-sectional assessment of body composition and detection of malnutrition risk in participants with low body mass index and eating disorders using 3D optical surface scans. Background: New recommendations for the assessment of malnutrition and sarcopenia include body composition, specifically reduced muscle mass. Three-dimensional optical imaging (3DO) is a validated, accessible, and affordable alternative to dual X-ray absorptiometry (DXA). Objective: To identify strengths and weaknesses of 3DO for identification of malnutrition in participants with low body mass index (BMI) and eating disorders. Design: Participants were enrolled in the cross-sectional Shape Up! Adults and Kids studies of body shape, metabolic risk, and functional assessment and had BMI of <20 kg/m2 in adults or <85% of median BMI (mBMI) in children and adolescents. A subset was referred for eating disorders evaluation. Anthropometrics, scans, strength testing, and questionnaires were completed in clinical research centers. Lin's Concordance Correlation Coefficient (CCC) assessed agreement between 3DO and DXA; multivariate linear regression analysis examined associations between weight history and body composition. Results: Among 95 participants, mean ± SD BMI was 18.3 ± 1.4 kg/m2 in adult women (N = 56), 19.0 ± 0.6 in men (N = 14), and 84.2% ± 4.1% mBMI in children (N = 25). Concordance was excellent for fat-free mass (FFM, CCC = 0.97) and strong for appendicular lean mass (ALM, CCC = 0.86) and fat mass (FM, CCC = 0.87). By DXA, 80% of adults met the low FFM index criterion for malnutrition, and 44% met low ALM for sarcopenia; 52% of children and adolescents were <-2 z-score for FM. 3DO identified 95% of these cases. In the subset, greater weight loss predicted lower FFM, FM, and ALM by both methods; a greater percentage of weight regained predicted a higher percentage of body fat. Conclusions: 3DO can accurately estimate body composition in participants with low BMI and identify criteria for malnutrition and sarcopenia. In a subset, 3DO detected changes in body composition expected with weight loss and regain secondary to eating disorders. These findings support the utility of 3DO for body composition assessment in patients with low BMI, including those with eating disorders. This trial was registered at clinicaltrials.gov as NCT03637855. abstract_id: PUBMED:24810684 Resistance training enhances muscular performance in patients with anorexia nervosa: a randomized controlled trial. Objective: Low-intensity exercise applied in anorexia nervosa patients has been shown to have a harmless effect on body composition and to effect short-term improvements in muscular strength and agility. The aim of this study was to determine the effects of a high-intensity resistance training program designed for adolescents to improve strength and agility in anorexia nervosa restricting-type patients (AN-R). Methods: From a total of 36 female patients with AN-R, one group (intervention, n = 18) underwent a supervised high-intensity resistance training program lasting 8 weeks, while the other group received no exercise (control, n = 18). Body weight, body mass index, whole-body muscular strength, and agility were assessed before, after, and 4 weeks after training (detraining).
Results: Leg-press, bench-press, and lateral row tests improved significantly (p < 0.001) after 8 weeks of training compared with controls. Improvements were maintained after the detraining period. The training program also showed beneficial effects on agility. Discussion: A high-intensity resistance training program adapted to the recommendations for adolescents was effective and safe in AN-R patients, improving muscular strength in the whole body and the ability to perform daily tasks. However, long-term maintenance of gains seems to be linked to the continuance of training or the use of a maintenance program. abstract_id: PUBMED:20307824 Does resistance training improve the functional capacity and well-being of very young anorexic patients? A randomized controlled trial. Purpose: We determined the effects of a 3-month low-moderate-intensity strength training program (2 sessions/week) on functional capacity, muscular strength, body composition, and quality of life (QOL) in 22 young (12-16 yrs) anorexic outpatients. Methods: Patients were randomly assigned to a training or control group (n=11 [10 females] each). Training sessions were of low intensity (loads for large muscle groups ranging between 20%-30% and 50%-60% of six repetitions maximum [6RM] at the end of the program). We measured functional capacity by the timed up and go and the timed up and down stairs tests. Muscular strength was assessed by 6RM measures for seated bench and leg presses. We estimated percent body fat and muscle mass. We assessed patients' QOL with the Short Form-36. Results: The intervention was well tolerated and did not have any deleterious effect on patients' health, and did not induce significant losses in their body mass. The only studied variable for which a significant interaction (group x time) effect was found (p=.009) was the 6RM seated lateral row test. Conclusions: Low-moderate-intensity strength training does not seem to add major benefits to conventional psychotherapy and refeeding treatments in young anorexic patients. abstract_id: PUBMED:38452291 Cross-sectional and longitudinal changes in body composition, anxiety, and depression in a clinical sample of adolescents with anorexia nervosa. Objective: Eating disorders among children and adolescents have increased in prevalence, and mortality rates for anorexia nervosa are among the highest for any psychiatric disorder. Our current study aimed to (a) examine the cross-sectional relationship between body composition and anxiety/depressive symptoms among 97 adolescents and young adults who have been diagnosed with anorexia nervosa, (b) examine the longitudinal changes in body composition and anxiety/depressive symptoms over three months (from baseline to follow-up visit), and (c) examine the longitudinal relationship between change in body composition and change in anxiety/depression over three months. Method: A retrospective chart review was conducted within an interdisciplinary eating disorder clinic between August 2019 and December 2021. In total, 97 adolescents aged 11-20 years old with diagnoses of anorexia nervosa were included in the analyses. Body composition data were collected at each visit along with parent- and youth-report measures of anxiety/depression symptoms. Results: Findings indicated adolescents demonstrated some improvement in body composition, as well as parent-reported reductions in anxiety/depression symptoms.
Based on parent reports, increased BMI percentile was associated with improvements in anxiety/depression symptoms. On the other hand, youth did not report significant changes in anxiety/depressive symptoms. Additionally, improvements in body composition measures were not associated with youth-reported symptoms, which may reflect continued body dissatisfaction or symptoms of anxiety and depression predating the eating disorder. Conclusions: These results suggest the importance of including interventions addressing depression, anxiety, and body image as part of treatment. abstract_id: PUBMED:28562123 The role of total body fat mass and trunk fat mass, combined with other endocrine factors, in menstrual recovery and psychopathology of adolescents with Anorexia Nervosa. Objective: To determine the threshold of total body and trunk fat mass required for menstrual recovery and to assess the impact of body composition on the psychopathology of adolescents with Anorexia Nervosa (AN). Methods: Prospective study of 60 adolescents who presented with secondary amenorrhea and were diagnosed with AN. Anthropometrics, body composition by dual-energy X-ray absorptiometry, hormonal studies and responses to mental health screens (EAT-26) were obtained at the beginning and at complete weight restoration in all adolescents, whether menstruation recovered (Group A) or not (Group B). Results: At weight restoration, Group A total body fat mass, trunk fat mass, and trunk/extremities fat ratio were significantly higher (p < .001) than in Group B. Menstruation was expected at 20% total body fat mass and 20% trunk fat mass (% of total trunk tissue). At the time of menstrual recovery, total body fat mass (%) and trunk fat mass (%) were significantly negatively correlated with EAT-26 (r = -0.363, p = .032) and (r = -0.416, p = .013), respectively, while an increase of 0.40% in trunk fat mass (%) lowered EAT-26 by one unit. Discussion: Trunk fat mass distribution can positively influence the psychopathology of adolescents with AN. abstract_id: PUBMED:30771231 Comparisons of bone density and body composition among adolescents with anorexia nervosa and atypical anorexia nervosa. Objective: To compare bone mineral density (BMD) and body composition among adolescents: (a) with atypical anorexia nervosa (AAN) versus anorexia nervosa (AN) and (b) those with and without a prior history of overweight. Method: Electronic medical records of patients 9-20 years with AN or AAN who underwent dual-energy x-ray absorptiometry scans were retrospectively reviewed and analyzed. Results: A total of 286 adolescents with AN or AAN were included. In linear regression models, AAN was associated with greater Z-scores in whole body bone mineral content (BMC, B = 0.88, p < 0.001), lumbar spine BMD (B = 0.79, p = 0.002), femoral neck BMD (B = 0.670, p = 0.009), fat mass index (B = 1.33, p = 0.003), and lean body mass index (LBMI, B = 1.10, p < 0.001) compared to AN, adjusting for age, sex, and duration of illness. A prior overweight history was associated with greater Z-scores in whole body BMC, lumbar spine BMD, total hip BMD, femoral neck BMD, and LBMI. Discussion: Adolescents with AAN had higher BMD Z-scores than adolescents with AN; adolescents with a prior overweight history had greater BMD Z-scores than adolescents without a prior overweight history. These findings may inform clinical guidelines for the medical management of AAN.
Answer: Yes, resistance training does help improve muscle function and body composition profile in adolescents with restrictive anorexia nervosa (AN-R). Studies have shown that resistance training can be safe and effective for AN-R patients after hospitalization, enhancing the force-generating capacity as well as muscle mass gains (PUBMED:26084569). Specifically, short-term resistance training has been associated with significant skeletal muscle mass (SMM) gains in AN-R female adolescents, which correlated with an increase in body mass index (BMI). These gains in SMM were linked to increased relative strength, suggesting that monitoring SMM and relative strength ratios could provide a better estimation of body composition profile and muscle function recovery in this population (PUBMED:26084569). Another study found that high-intensity resistance training designed for adolescents with AN-R was effective in improving muscular strength in the whole body and the ability to perform daily tasks. The improvements in leg-press, bench-press, and lateral row tests were maintained even after a detraining period, indicating the potential for long-term benefits of resistance training (PUBMED:24810684). However, it is important to note that the benefits of resistance training may not be universal for all young anorexic patients. A study on the effects of a low-moderate-intensity strength training program on young anorexic outpatients did not find significant benefits beyond conventional psychotherapy and refeeding treatments, except for a significant interaction effect found in the 6RM seated lateral row test (PUBMED:20307824). In summary, resistance training can be a valuable component of rehabilitation for adolescents with restrictive anorexia nervosa, contributing to improvements in muscle function and body composition. However, the intensity and type of resistance training, as well as individual patient factors, may influence the extent of these benefits.
Instruction: Cancer family caregiver depression: are religion-related variables important? Abstracts: abstract_id: PUBMED:25110879 Cancer family caregiver depression: are religion-related variables important? Objective: Prevalence estimates for clinical depression among cancer family caregivers (CFC) range up to 39%. Research inconsistently reports risk for CFC depressive symptoms when evaluating age, gender, ethnicity, or length of time as caregiver. The discrepant findings, coupled with emerging literature indicating religiosity may mitigate depression in some populations, led us to investigate religion-related variables to help predict CFC depressive symptoms. Methods: We conducted a cross-sectional study of 150 CFC. Explanatory variables included age, gender, spousal status, length of time as caregiver, attendance at religious services, and prayer. The outcome variable was the Center for Epidemiological Studies Depression Scale score. Results: Compared with large national and state datasets, our sample has lower representation of individuals with no religious affiliation (10.7% vs. 16.1% national, p = 0.07 and 23.0% state, p = 0.001), a higher rate of attendance at religious services (81.3% vs. 67.2% national, p < 0.001 and 30.0% state, p < 0.001), and a higher rate of prayer (65.3% vs. 42.9% national, p < 0.001; no state data available). In unadjusted and adjusted models, prayer is not significantly associated with caregiver depressive symptoms or clinically significant depressive symptomatology. Attendance at religious services is associated with depressive symptoms (p = 0.004), with an inverse linear trend (p = 0.002). Conclusion: The significant inverse association between attendance at religious services and depressive symptoms, despite no association between prayer and depressive symptoms, indicates that social or other factors may accompany attendance at religious services and contribute to the association. Clinicians can consider supporting a CFC's attendance at religious services as a potential preventive measure for depressive symptoms. abstract_id: PUBMED:27032294 Validation of a model of family caregiver communication types and related caregiver outcomes. Objective: Caring for the family is included as one of the eight domains of quality palliative care, calling attention to the importance of the family system and family communications about cancer during care and treatment of the disease. Previously, a model of family caregiver communication defined four caregiver communication types (Manager, Carrier, Partner, Lone), each with a unique communication pattern. The purpose of the present study was to extend the model of family caregiver communication in cancer care to further understand the impact of family communication burden on caregiving outcomes. Method: This mixed-method study employed fieldnotes from a family caregiver intervention focused on quality of life and self-reported caregiver communication items to identify a specific family caregiver type. Caregiver types were then analyzed using outcome measures on psychological distress, skills preparedness, family inventory of needs, and quality-of-life domains. Results: Corroboration between fieldnotes and self-reported communication for caregivers (n = 21, 16 women, mean age of 53 years) revealed a definitive classification of the four caregiver types (Manager = 6, Carrier = 5, Partner = 6, Lone = 4).
Mean scores on self-reported communication items documented different communication patterns congruent with the theoretical framework of the model. Variation in caregiver outcome measures confirmed the model of family caregiver communication types. Partner and Lone caregivers reported the lowest psychological distress, with Carrier caregivers feeling least prepared and Manager caregivers reporting the lowest physical quality of life. Significance Of Results: This study illustrates the impact of family communication on caregiving and increases our knowledge and understanding about the role of communication in caregiver burden. The research provides the first evidence-based validation for a family caregiver communication typology and its relationship to caregiver outcomes. Future research is needed to develop and test interventions that target specific caregiver types. abstract_id: PUBMED:29107528 Understanding Family Caregiver Communication to Provide Family-Centered Cancer Care. Objectives: To describe a family caregiver communication typology and demonstrate identifiable communication challenges among four caregiver types: Manager, Carrier, Partner, and Lone. Data Sources: Case studies based on interviews with oncology family caregivers. Conclusion: Each caregiver type demonstrates unique communication challenges that can be identified. Recognition of a specific caregiver type will help nurses to adapt their own communication to provide tailored support. Implications For Nursing Practice: Family-centered cancer care requires attention to the communication challenges faced by family caregivers. Understanding the challenges among four family caregiver communication types will enable nurses to better address caregiver burden and family conflict. abstract_id: PUBMED:31107193 Patient-family communication mediates the relation between family hardiness and caregiver positivity: Exploring the moderating role of caregiver depression and anxiety. Purpose: Despite the theoretical and empirical significance of positive aspects of caregiving in caregiver well-being, relatively little is known regarding family-related predictors of caregiver positivity. This study examines whether patient-family communication (p-f communication) mediates the relation between family hardiness and caregiver positivity and whether the mediating effects of p-f communication are moderated by the levels of caregiver depression and anxiety. Design/Sample: This study used secondary data obtained from a large-scale cross-sectional national survey conducted in South Korea. Participants were 544 spousal cancer patient-caregiver dyads recruited from the National Cancer Center and nine government-designated regional cancer centers in South Korea. Methods: To test the hypotheses, a simple mediation model and two moderated mediation tests were conducted using the PROCESS macro for SPSS. Findings: Higher family hardiness was related to more positive p-f communication and higher caregiver positivity. The effects of family hardiness were partially mediated by p-f communication, controlling for caregiver sex, education, health status, depression and anxiety, and time spent caregiving, as well as patient depression and anxiety, cancer stage, and time since diagnosis. The mediating effects of p-f communication were not significantly moderated by caregiver depression and anxiety.
Conclusions/Implications: Health care professionals could consider p-f communication as a reasonable target of intervention to increase caregiver positivity, even for caregivers with heightened depression and anxiety. abstract_id: PUBMED:37190779 When Family Life Contributes to Cancer Caregiver Burden in Palliative Care. The difficulties of caring for a family member with advanced cancer are well documented. Support from a caregiver's social network-especially other family-is vital to reducing caregiver burden and psychological distress. However, the family environment is not always supportive as reports of family conflict and dissatisfaction with support are common. Despite knowing that family relationships are complex, little is known about the types of family stress that caregivers of advanced cancer patients face in their daily lives. To address this gap, researchers applied concepts from the double ABCX model to conduct a reflexive thematic analysis of interviews with 63 caregivers of cancer patients receiving outpatient palliative care. Four themes of family stress were identified: failed support, relational tensions, denial, and additional care work. Findings inform clinical assessment and caregiver intervention development by revealing the importance of measuring the mundane machinations of family life for caregivers of adult cancer patients. abstract_id: PUBMED:33962232 Caregiver burden and related factors during active cancer treatment: A latent growth curve analysis. Purpose: Caregiver burden is frequently studied cross-sectionally, but longitudinal studies on family caregiver burden during active cancer treatment are lacking. The goals of this study were to characterize trajectories of caregivers' burden during a 6-month active treatment period, and to examine which predictors are associated with their burden. Method: This study was a secondary analysis of data from a prospective study. A sample of 112 family caregivers of patients receiving cancer treatment were assessed at three time points (the initiation of new treatment regimen, 3-, and 6-month follow-up). Caregivers completed measures: Caregiver Reaction Assessment and Mutuality Scale of the Family Care Inventory. Data were analyzed using latent growth curve modeling. Results: The two highest burdens were subdomains related to disrupted schedule and financial problems. Models showed a decline in schedule burden over time, yet total burden and other subscales (financial problems, health problems, and lack of family support and self-esteem) remained relatively stable. In multivariate analysis, mutuality, the relationship quality between patients and caregivers was inversely related to burden at baseline. Being a spouse, a sole caregiver and lower income were related to higher burden over time. Conclusions: Our findings confirmed significant determinants of caregiver burden over the course of active treatment. It is important for health care providers to be attentive to vulnerable caregivers who are at higher risk of elevated burden over time. Considering the multidimensional nature of caregiver burden, early assessment and tailored support programs may be effective by focusing on patient-caregiver relationships, caregiving roles, and income. abstract_id: PUBMED:29114558 Predictors of caregiver burden in Iranian family caregivers of cancer patients. Background: Caregiver burden threatens the psychological, emotional, functional and even physical health of caregivers. 
The aims of this study were to determine caregiver burden and family distress and the relationship between them, and to explore predictors of caregiver burden in a sample of Iranian family caregivers of cancer patients. Materials And Methods: This is a cross-sectional study with a correlational design. A total of 104 family caregivers of cancer patients were asked to respond to the Caregiver Burden Inventory (CBI) and the Family Distress Index (FDI) together with a sociodemographic questionnaire. For evaluating the relationship between CBI and FDI scores, Pearson's product-moment correlation was used. In addition, multiple linear regression analysis was applied to explore the predictive factors of caregiver burden. Results: A high burden was experienced by almost half of the caregivers (48.1%). The FDI mean score was 9.76 ± 5.40, with scores ranging from 0 to 24. A strong positive correlation was found between caregiver burden and family distress (r = 0.76). Multiple linear regression results showed the predictive role of FDI score (β = 0.71, P = 0.001), patient's gender (β = -0.25, P = 0.001), and early cancer diagnosis (β = 0.13, P = 0.027) in caregiver burden. Together, these predictors explained 65% of the variance in the level of burden in family caregivers. Conclusion: Family nurses should consider the caregiver's burden and the vulnerability of families with a cancer patient, especially if the patient is male or has a new diagnosis. They should also design special programs for the whole family as a system so that the family can adapt to the new situation. abstract_id: PUBMED:34728096 Understanding cancer caregiver burden over time: Dyadic assessments of family cohesion, conflict and communication. Objective: Previously, we identified three distinct cancer communication concordance groups among cancer patient-caregiver dyads. This secondary analysis examined patient and caregiver reports of family functioning (cohesion and conflict) as associated with cancer communication concordance and assessed each as independent predictors of perceived caregiver burden among hematological cancer caregivers. Methods: A case series of hematological cancer patient-caregiver dyads (n = 171) were recruited from oncology clinics in Virginia and Pennsylvania and followed for 2 years. Using the previously identified communication groups, this analysis prospectively examines patient and caregiver perceptions of family cohesion and conflict and the association with burden over time. Results: Caregiver burden decreased over time. Caregiver, but not patient, perceptions of family cohesion decreased over time; decreased cohesion and increased conflict were associated with greater cancer communication discordance. Conclusions: This work lends further support to the use of cancer communication congruence typologies for identifying potentially vulnerable dyads. Discordant cancer communication and declining caregiver perceptions of family cohesion may represent opportunities to intervene using family-focused supportive services. Practice Implications: Only caregiver perceptions of family functioning were associated with burden; therefore, identifying and supporting those caregivers with worsening communication and family function is important. abstract_id: PUBMED:30545383 The impact of depression and anxiety on quality of life in Chinese cancer patient-family caregiver dyads, a cross-sectional study. Background: Cancer and its treatment can result in psychological distress in both adults with cancer and in their family caregivers.
This psychological distress acts as a significant adverse factor in patient-caregiver dyads. The study purposes included: (i) to assess anxiety and depression in adults with cancer and their family caregivers, and examine the dyadic relationship of anxiety and depression in patient-caregiver dyads; (ii) to investigate factors that may modify these relationships; and (iii) to explore the impact of anxiety and depression on patient-caregiver dyad quality of life (QOL). Methods: This was a secondary analysis of a cross-sectional study. Participants consisted of 641 patient-caregiver dyads. Participants completed a survey assessing adults with cancer-related, family caregiver-related, and family-related variables using a demographic/clinical information sheet. In addition, anxiety/depression and QOL were assessed using the Chinese version of the Hospital Anxiety and Depression Scale and the SF-12, respectively. Data were analyzed using descriptive statistics, Pearson correlations, subgroup analysis, and the Actor-Partner Interdependence Model. Results: Nearly one-third of participants had experienced anxiety and depression. Adults with cancer and family caregivers experienced a similar degree of anxiety and depression. Correlations (r) of anxiety and depression between patient-caregiver dyads ranged from 0.25 to 0.32. Various factors influencing the anxiety and depression relationship between patient-caregiver dyads were identified, including adults with cancer-related (e.g., age, gender, marital status, level of being informed about the disease, different types of cancer and treatment), family caregiver-related (e.g., being the spouse of a patient, duration in their role as a family caregiver, and amount of time spent on caregiving each day), and family-related (family relationship pre- and post-cancer, financial burden on the family due to cancer treatment) variables. To some extent, both actor and partner effects were identified for anxiety and depression on the QOL of patient-caregiver dyads. Conclusions: Study findings call attention to anxiety and depression, as well as related factors, in patient-caregiver dyads. They also underline essential components and foci for interventions to be developed to decrease psychological distress and improve QOL in patient-caregiver dyads: the individual characteristics of patient-caregiver dyads, the family relationship, and anxiety and depression in their counterparts. abstract_id: PUBMED:28257110 Cancer Communication and Family Caregiver Quality of Life. Family caregivers have enormous communication responsibilities tied to caregiving, such as sharing the patient's medical history with providers, relaying diagnosis and prognosis to other family members, and making decisions about care with the patient. While caregiver stress and burden have been widely documented in the caregiving literature, little is known about how communication burden (real or perceived communication challenges) impacts caregiver quality of life. In family caregiving, the City of Hope (COH) Quality of Life model proposes that the caregiving experience is reciprocal to the patient experience, impacting physical, social, psychological, and spiritual quality of life. We used data from a pilot study testing a communication coaching call intervention with family caregivers of lung cancer patients to analyze caregiver-reported communication burden and quality of life.
We found variances in each quality of life domain, suggesting that caregiver interventions should range from self-care skill building for physical care to psycho-educational interventions that support caregiver coping and communication skill building. These findings demonstrate the importance of caregiver assessment and attention to communication burden in quality cancer care. Answer: Yes, religion-related variables are important when considering depression among cancer family caregivers (CFC). A study conducted on 150 CFCs found that attendance at religious services was inversely associated with depressive symptoms. This suggests that social or other factors accompanying attendance at religious services may contribute to this association. However, the same study found no significant association between prayer and caregiver depressive symptoms or clinically significant depressive symptomology. Clinicians might consider supporting a CFC's attendance at religious services as a potential preventive measure for depressive symptoms (PUBMED:25110879).
Instruction: Is a planned caesarean section in women with cardiac disease beneficial? Abstracts: abstract_id: PUBMED:25539946 Is a planned caesarean section in women with cardiac disease beneficial? Objectives: In the general population, planned caesarean section is thought to be safer in high-risk situations as it avoids the greater risk of an emergency caesarean section. Only limited data exist on the optimal mode of delivery in women with structural heart disease. We investigated the relationship between mode of delivery and pregnancy outcome in women with pre-existing heart disease. Methods: The Registry on Pregnancy and Cardiac Disease is an on-going, global, prospective observational registry of women with structural heart disease. We report on 1262 deliveries between January 2007 and June 2011. Results: Caesarean section was planned in 393 women (31%): 172 (44%) for cardiac and 221 (56%) for obstetric reasons, of whom 53 delivered by emergency caesarean section. Vaginal delivery was planned in 869 (69%) women, of whom 726 (84%) actually delivered vaginally and 143 (16%) had an emergency caesarean section. Perinatal mortality (1.1 vs 2.7, p=0.14) and low Apgar score (11.9 vs 10.1, p=0.45) were not significantly different in women who had a caesarean section or vaginal delivery; gestational age (37 vs 38 weeks, p=0.003) and birth weight (3073 vs 2870 g, p<0.001) were lower in women delivered by caesarean section compared with women who delivered vaginally. In those delivered by elective or emergency caesarean section, there was no difference in maternal mortality (1.8% vs 1.5%, p=1.0), postpartum heart failure (8.8% vs 8.2%, p=0.79) or haemorrhage (6.2% vs 5.1%, p=0.61). Conclusions: These data suggest that planned caesarean section does not confer any advantage over planned vaginal delivery in terms of maternal outcome, but is associated with an adverse fetal outcome. abstract_id: PUBMED:31310750 Planned vaginal delivery and cardiovascular morbidity in pregnant women with heart disease. Background: Although consensus guidelines on the management of cardiovascular disease in pregnancy reserve cesarean delivery for obstetric indications, there is a paucity of data to support this approach. Objective: The objective of the study was to compare cardiovascular and obstetric morbidity in women with cardiovascular disease according to the plan for vaginal birth or cesarean delivery. Study Design: We assembled a prospective cohort of women delivering at an academic tertiary care center with a protocolized multidisciplinary approach to management of cardiovascular disease between September 2011 and December 2016. Our practice is to encourage vaginal birth in women with cardiovascular disease unless there is an obstetric indication for cesarean delivery. We allow women attempting vaginal birth a trial of Valsalva in the second stage, with the ability to provide operative vaginal delivery if pushing leads to changes in hemodynamics or symptoms. Women were classified according to planned mode of delivery: either vaginal birth or cesarean delivery. We then used univariate analysis to compare adverse outcomes according to planned mode of delivery. The primary composite cardiac outcome of interest included sustained arrhythmia, heart failure, cardiac arrest, cerebral vascular accident, need for cardiac surgery or intervention, or death. Secondary obstetric and neonatal outcomes were also considered.
Results: We included 276 consenting women with congenital heart disease (68.5%), arrhythmias (11.2%), connective tissue disease (9.1%), cardiomyopathy (8.0%), valvular disease (1.4%), or vascular heart disease (1.8%) at or beyond 24 weeks' gestation. Seventy-six percent (n = 210) planned vaginal birth and 24% (n = 66) planned cesarean delivery. Women planning vaginal birth had lower rates of left ventricular outflow tract obstruction, multiparity, and preterm delivery. All women attempting vaginal birth were allowed Valsalva. Among planned vaginal deliveries, 86.2% (n = 181) were successful, with a 9.5% operative vaginal delivery rate. Five women underwent operative vaginal delivery for the indication of cardiovascular disease without another obstetric indication, at the discretion of the delivering provider. Four of these patients tolerated trials of Valsalva ranging from 15 to 75 minutes prior to delivery. Adverse cardiac outcomes were similar between the planned vaginal birth and cesarean delivery groups (4.3% vs 3.0%, P = 1.00). Rates of postpartum hemorrhage (1.9% vs 10.6%, P < .01) and transfusion (1.9% vs 9.1%, P = .01) were lower in the planned vaginal birth group. There were no differences in adverse cardiac, obstetric, or neonatal outcomes in the cohort overall or the subset of women with high-risk cardiovascular disease or a high burden of obstetric comorbidity. Conclusion: These findings suggest that cesarean delivery does not reduce adverse cardiovascular outcomes and lend support to a planned vaginal birth for the majority of women with cardiovascular disease, including those with high-risk disease. abstract_id: PUBMED:26389093 Maternal and newborn outcomes after a prior cesarean birth by planned mode of delivery and history of prior vaginal birth in British Columbia: a retrospective cohort study. Background: As rates for cesarean births continue to rise, more women are faced with the choice to plan a vaginal or a repeat cesarean birth after a previous cesarean. The objective of this population-based retrospective cohort study was to compare the safety of planned vaginal birth with cesarean birth after 1-2 previous cesarean sections. Methods: We identified singleton term births in British Columbia from 2000 to 2008 using data from the British Columbia Perinatal Data Registry. Women carrying a singleton fetus in cephalic presentation at term (37-41 weeks of gestation completed) with 1-2 prior cesarean births were included. Those with gestational hypertension, pre-existing diabetes and cardiac disease were excluded. Maternal and neonatal outcomes were classified as either life-threatening or non-life threatening. We compared outcomes among women with none versus at least 1 previous vaginal birth, by planned method of delivery. We estimated relative risks (RR) and 95% confidence intervals (CI) for composite outcomes using Poisson regression. Results: Of the 33 812 women in the sample, 5406 had a history of vaginal delivery and 28 406 did not. The composite risk for life-threatening maternal outcomes was elevated among women planning vaginal compared with cesarean birth, both with a prior vaginal birth (RR 2.06, 95% CI 1.20-3.52) and without one (RR 2.52, 95% CI 2.04-3.11). Absolute differences (attributable risk [AR]) were 1.01% and 1.31%, respectively. Non-life threatening maternal outcomes were decreased among women planning a vaginal birth if they had had at least 1 prior vaginal delivery (RR 0.51, 95% CI 0.33-0.77; AR 1.17%).
The composite risk of intrapartum stillbirth, neonatal death or life-threatening neonatal outcomes did not differ among women planning vaginal or cesarean birth with a prior vaginal delivery, and non-life threatening neonatal outcomes were decreased (RR 0.67, 95% CI 0.52-0.86; AR 1.92%). Interpretation: After 1 or 2 previous cesarean births, risks for adverse outcomes between planned vaginal and cesarean birth are reduced among women with a prior vaginal birth. Our data offer women and their health care providers the opportunity to consider risk profiles separately for women who have and have not had a prior vaginal delivery. abstract_id: PUBMED:32244032 Evaluation of mode of birth in pregnant women with heart disease. Objective: Maternal heart disease (HD) complicates 1-4% of pregnancies and is associated with adverse maternal and fetal outcomes. Although vaginal birth is generally recommended in the guidelines, cesarean section (CS) rates in women with HD are often high. The aim of the present study was to evaluate mode of birth and pregnancy outcomes in women with HD in a tertiary care hospital in the Netherlands. Study Design: The study population consisted of 128 consecutive pregnancies in 99 women with HD, managed by a pregnancy heart team between 2012 and 2017 and ending in births after 24 weeks' gestation. Pregnancy risk was assessed per modified World Health Organization class. Mode of birth (planned and performed) and maternal and fetal complications (cardiovascular events, postpartum hemorrhage, prematurity, small for gestational age and death) were assessed for each pregnancy. Results: Pregnancy risk was classified as modified World Health Organization class I in 23%, class II in 50%, class III in 21% and class IV in 6% of pregnancies. Planned mode of birth was vaginal in 114 pregnancies (89%) and CS in 14 (11%; nine for obstetric and five for cardiac indications). An unplanned CS was performed in 18 pregnancies (16%; 16 for obstetric and two for cardiac indications). Overall, mode of birth was vaginal in 75% and CS in 25%. Twelve cardiovascular events occurred in eight pregnancies (6%), postpartum hemorrhage in nine (7%) and small for gestational age in 14 (11%). No maternal or fetal deaths occurred. Conclusions: Findings of this study indicate that, provided pregnancies are managed and mode of birth is meticulously planned by a multidisciplinary pregnancy heart team, vaginal birth is a suitable option for women with HD. abstract_id: PUBMED:28006009 Mode of Delivery and Pregnancy Outcome in Women with Congenital Heart Disease. Background: Advances in cardiac surgery and congenital cardiology have led to an increasing number of women with congenital heart disease (CHD) reaching childbearing age. In general, cardiologists recommend vaginal delivery for women with CHD to avoid complications from Caesarean section, as many women with CHD tolerate vaginal delivery well. Methods And Results: This is a single-center study comparing mode of delivery, pregnancy outcome, indications for Caesarean section and induction of labor between women with and without CHD. A historical cohort study was conducted including 116 patients with CHD. An individual threefold matching with 348 women without CHD was carried out. Caesarean section was performed in 46.6% of pregnancies with CHD (33.6% without CHD, P = 0.012). Primary Caesarean section increased with severity of CHD (P = 0.036); 33.3% of women with CHD had a primary planned Caesarean section due to cardiac reasons.
Induction of labor was performed in 45.7% of attempted vaginal deliveries in women with CHD (27.9% without CHD, P = 0.001). Lower mean birth weight (P = 0.004) and small for gestational age (SGA) infants (P < 0.001) were more common in women with CHD. One CHD patient suffered from postpartum hemorrhage. Conclusions: Concerns about maternal deterioration resulting in higher rates of induction of labor seem unjustified in most cases. Along with a possible reduction of Caesarean section on maternal request, a reduction of planned vaginal delivery may be expedient in reducing the rate of Caesarean section in women with CHD. abstract_id: PUBMED:34565724 Prevalence and the influencing factors for critical situation of 6 579 pregnant women with hypertensive disorders complicating pregnancy. Objectives: Hypertensive disorders complicating pregnancy are common conditions that seriously endanger the health of pregnant women and fetuses. This study aims to analyze the prevalence of hypertensive disorders complicating pregnancy and the factors influencing critical illness in these women, and to provide a basis for intervention measures. Methods: In an institution-based cross-sectional study, data were collected on 100 683 pregnant women who gave birth in all maternal and child health hospitals of Hunan Province from January 1, 2012 to December 31, 2019, and 6 579 pregnant women with hypertensive disorders complicating pregnancy were monitored. All data were analyzed with SAS 9.4 software. The basic characteristics, clinical data, outcomes, and complications of the pregnant women were analyzed, as were the risk factors for critical illness among pregnant women with hypertensive disorders complicating pregnancy. Results: The prevalence of hypertensive disorders complicating pregnancy increased from 4.3% in 2012 to 7.1% in 2019, and the proportion of these women who had complications increased from 28.1% in 2012 to 83.7% in 2019. Pregnant women of advanced maternal age accounted for 22.2%, married women for 99.9%, women with a university degree for 49.5%, women in their first pregnancy for 38%, and nulliparous women for 63.5%. Women who had previously undergone more than one cesarean section accounted for 18.4%. About 99.0% of the women had 5-10 antenatal check-ups, 72.6% had complications, and in 93.8% the pregnancy had been terminated by the time of discharge. The three most common complications were anemia in 2 355 cases (29.3%), gestational diabetes in 1 886 cases (23.4%), and subclinical hypothyroidism in 947 cases (11.8%). Logistic regression analysis showed that uterine rupture, placental abruption, placenta previa, anemia, and heart disease were independent risk factors for critical illness in these pregnant women. Conclusions: The prevalence of hypertensive disorders complicating pregnancy is on the rise. Antenatal examination should be enhanced to identify complications such as hypothyroidism, gestational diabetes, and anemia. Prevention and treatment measures should be actively taken for uterine rupture, placental abruption, placenta previa, anemia, and heart disease. abstract_id: PUBMED:36732098 Neonatal outcomes in pregnant women with pulmonary arterial hypertension associated with heart disease. Background: As an increasing number of women with pulmonary arterial hypertension (PAH) choose to become pregnant, outcomes in newborns have become a priority.
The main purpose of this study was to compare neonatal outcomes in pregnant women with PAH associated with heart disease. Methods: A single-center retrospective study was performed. Pregnancies with heart disease were divided into three groups: no PAH, mild PAH and severe PAH. Neonatal outcomes of the pregnant women were compared among groups. Meanwhile, multivariable analyses were used to investigate the association between maternal PAH and adverse neonatal events. Results: A total of 127 pregnant women with heart disease were enrolled. Of these, 82 (64.6%) had no PAH, 19 (15%) had mild PAH and 26 (20.4%) had severe PAH. The offspring of women with severe PAH had a higher risk of preterm delivery, low birth weight, neonatal respiratory distress syndrome (NRDS), neonatal intensive care unit (NICU) admission and recurrence of congenital heart disease (CHD). Compared to women without PAH, only the risks of preterm delivery (32-36 weeks) and NICU admission were slightly higher in mothers with mild PAH; other neonatal events were similar. Multivariate regression analyses showed that the risk of preterm delivery (<37 weeks) increased with the increasing severity of maternal PAH, with an OR of 3.1 (95% CI, 1.1-8.8) for mild and 21.9 (95% CI, 4.8-99.4) for severe PAH. The same pattern was observed for NICU admission. Severe maternal PAH was independently associated with low birth weight (OR 13, 95% CI 4.3-39, P < 0.001), NRDS (OR 17.9, 95% CI 5.5-58.9, P < 0.001) and recurrence of CHD (OR 4.47, 95% CI 1.7-11.6, P = 0.002). Conclusion: Pregnancy in women with severe PAH can significantly increase the risks of neonatal events. While neonatal outcomes in pregnant women with mild PAH were favorable in the present study, multidisciplinary management of PAH in pregnancy would be necessary to achieve consistently good outcomes. abstract_id: PUBMED:32736999 Anesthesia for Pregnant Women with Pulmonary Hypertension. Despite advances in the therapy for pulmonary hypertension over the past decades, the prognosis of pregnant patients with pulmonary hypertension remains poor, with high maternal mortality. This poses a particular challenge for the mother and her medical team. In the present review, the authors have updated the classification and definition of pulmonary hypertension, summarized the current knowledge with regard to perioperative management and anesthesia considerations for these patients, and stressed the importance of a "pregnancy heart team" to improve long-term outcomes of pregnant women with pulmonary hypertension. abstract_id: PUBMED:32242847 IMPACT OF ARRHYTHMIAS ON PREGNANCY COURSE AND OUTCOMES IN WOMEN WITHOUT STRUCTURAL HEART DISEASE The aim of the study was to observe the influence of different types of arrhythmias on pregnancy course and outcomes in women without structural heart disease. Seventy-two pregnant women without structural heart disease with different types of cardiac arrhythmias were studied. The most common types of arrhythmias were supraventricular tachycardia (44%, the 1st group) and premature contractions (40%, the 2nd group). Ventricular tachycardia (6%, the 3rd group) and bradyarrhythmia (10%, the 4th group) were diagnosed less frequently. The course of pregnancy was more unfavorable in the 4th group: myometrial hypertonicity (29%, p=0.001) and a single artery of the fetal umbilical cord (14%, p=0.036). Median pregnancy duration was 39 (38; 39.5) weeks in all groups. Vaginal delivery occurred in 54 (75%) cases and Cesarean delivery in 18 (25%) cases.
Birth weight and 5-minute Apgar score were 3378 (3023; 3917.5) and 8.9 (7.8; 9) in the 1st group, 3460 (3016.5; 4020) and 8.9 (7.8; 8.9) in the 2nd group, 3430 (3300; 3650) and 8.9 (8.9; 8.9) in the 3rd group, and 3334 (2870; 3530) and 8.9 (7.9; 9) in the 4th group (p>0.05). There were no maternal or neonatal deaths. The most common types of arrhythmias in pregnant women without structural heart disease were supraventricular tachycardia (44%) and premature beats (40%); these were associated with a good pregnancy course and outcomes given qualified management. abstract_id: PUBMED:35332050 Cardiac and obstetric outcomes in pregnant women with heart disease: appraisal of the 2018 mWHO classification. Objective: To appraise the application of the 2018 European Society of Cardiology-adapted modified WHO (mWHO) classification to pregnant women with heart disease managed at our maternal-fetal medicine referral centre and to assess whether the lack of a multidisciplinary Pregnancy Heart team has influenced their outcomes. Methods: A retrospective cohort study including all pregnancies with heart disease managed at our centre between June 2011 and December 2020. Cardiac conditions were categorised into five classes according to the mWHO classification. An additional class, named X, was created for conditions not included in this classification. Outcomes were compared among all classes, and factors potentially associated with cardiac complications were assessed. Results: We identified 162 women with 197 pregnancies, for a prevalence of 0.7%. Thirty-eight (19.3%) gestations were included in class X. Caesarean section was performed in 64.9% of pregnancies in class X, a rate similar to that of classes II, II-III, and III/IV, and mostly for obstetric indications; in turn, it was more commonly performed for cardiology reasons in classes II-III and III/IV. Cardiac complications occurred in 10.7%, with class X and class II pregnancies showing the highest number of events (30.8% and 34.6%, respectively). Multiple gestation and urgent caesarean section were associated with a 5-fold and a 6.5-fold increase in complication rates, respectively. Conclusions: Even in a maternal-fetal medicine referral centre, the lack of a multidisciplinary team approach to women with heart disease may negatively impact their outcomes. Answer: The evidence from the provided abstracts suggests that a planned caesarean section in women with cardiac disease does not confer any advantage over planned vaginal delivery in terms of maternal outcomes and may be associated with adverse fetal outcomes. The Registry on Pregnancy and Cardiac Disease reported that perinatal mortality and low Apgar scores were not significantly different between women who had a caesarean section or vaginal delivery, but gestational age and birth weight were lower in women delivered by caesarean section (PUBMED:25539946). Another study supported the approach of encouraging vaginal birth in women with cardiovascular disease unless there is an obstetric indication for caesarean delivery, finding that adverse cardiac outcomes were similar between planned vaginal birth and caesarean delivery groups, with lower rates of postpartum hemorrhage and transfusion in the planned vaginal birth group (PUBMED:31310750). A retrospective cohort study in British Columbia found that risks for adverse outcomes between planned vaginal and caesarean birth are reduced among women with a prior vaginal birth (PUBMED:26389093).
In the Netherlands, a study indicated that vaginal birth is a suitable option for women with heart disease when pregnancies are managed and mode of birth is meticulously planned by a multidisciplinary pregnancy heart team (PUBMED:32244032). A single-center study comparing women with and without congenital heart disease (CHD) suggested that concerns about maternal deterioration resulting in higher rates of induction of labor seem unjustified in most cases, and a reduction of planned vaginal delivery may be expedient in reducing the rate of caesarean section in women with CHD (PUBMED:28006009). In summary, the current evidence does not support a planned caesarean section as being more beneficial than planned vaginal delivery for women with cardiac disease, except in specific obstetric indications. A multidisciplinary approach and individualized care are essential for optimizing outcomes in this population.
Instruction: Does hospital performance on process measures directly measure high quality care or is it a marker of unmeasured care? Abstracts: abstract_id: PUBMED:22568614 Does hospital performance on process measures directly measure high quality care or is it a marker of unmeasured care? Objective: Quality measures may be associated with improved outcomes for two reasons. First, measured activities may directly improve care. Second, success on these measures may be a marker for other unmeasured aspects of high quality care. Our objective is to test the contribution of both possible effects. Data Sources: 2004 Medicare data on hospital performance from Hospital Compare and risk-adjusted mortality rates from Medicare Part A claims. Study Design: We studied 3,657 acute care U.S. hospitals and compared observed differences in condition-specific hospital mortality rates based on hospital performance with expected differences in mortality from the clinical studies underlying the measures. Principal Findings: Differences in observed mortality rates across U.S. hospitals are larger than what would be expected if these differences were due only to the direct effects of delivering measured care. Conclusions: Performance measures reflect care processes that both improve care directly and are also markers of elements of health care quality that are otherwise unmeasured. This finding suggests that process measures capture important information about care that is not directly measured, and that these unmeasured effects are in general larger than the measured effects. abstract_id: PUBMED:28110310 Impact of an Expanded Hospital Recognition Program for Stroke Quality of Care. Background: In 2009, the Get With The Guidelines-Stroke (GWTG-Stroke) program offered additional recognition if hospitals performed well on certain stroke quality measures. We sought to determine whether quality of care for all hospitals participating in GWTG-Stroke improved with this expanded recognition program. Methods And Results: We examined hospital-level performance on 6 quality of care (process) measures and 1 defect-free composite quality measure for stroke following expansion of the existing performance measure recognition program. Compliance with all measures improved following launch of the expanded program, and this rate increased significantly for all 9 measures. When evaluated as the relative rate of increase in use over time, process improvement slowed significantly (P<0.05) following launch of the program for 2 measures, and accelerated significantly for 1 measure. However, when evaluated as a gap in care, the decrease in the quality gap was greater following launch of the program for 5 of 6 (83%) measures. There was no evidence that other processes of stroke care suffered as the result of the increase in measures and expanded recognition program. Conclusions: While care for stroke continues to improve in this country, expanded hospital process performance recognition had mixed results in accelerating this improvement. However, the quality gap continues to shrink among those participating in provider performance programs. abstract_id: PUBMED:22911102 Nurse reported quality of care: a measure of hospital quality. As the primary providers of round-the-clock bedside care, nurses are well positioned to report on hospital quality of care. Researchers have not examined how nurses' reports of quality correspond with standard process or outcomes measures of quality. 
We assess the validity of evaluating hospital quality by aggregating hospital nurses' responses to a single item that asks them to report on quality of care. We found that a 10% increment in the proportion of nurses reporting excellent quality of care was associated with lower odds of mortality and failure to rescue; greater patient satisfaction; and higher composite process of care scores for acute myocardial infarction, pneumonia, and surgical patients. Nurse reported quality of care is a useful indicator of hospital performance. abstract_id: PUBMED:19938724 The relationship between Medicare's process of care quality measures and mortality. Using Medicare inpatient claims and Hospital Compare process of care quality data from the period 2004-2006, we estimate two model specifications to test for the presence of correlational and causal relationships between hospital process of care performance measures and risk-adjusted (RA) 30-day mortality for heart attack, heart failure, and pneumonia. Our analysis indicates that while Hospital Compare process performance measures are correlated with 30-day mortality for each diagnosis, after we account for unobserved heterogeneity, process of care performance is no longer associated with mortality for any diagnosis. This suggests that the relationship between hospital-level process of care performance and mortality is not causal. Implications for pay-for-performance are discussed. abstract_id: PUBMED:26850334 Unfinished nursing care: An important performance measure for nursing care systems. Performance measurement is a core administrative function and an essential component of organizational quality programs. The prevalence of performance measurement initiatives increased significantly after the release of the Institute of Medicine series on quality. Nursing performance measures are limited in their scope, resulting in an underrepresentation of process measures. Development of performance indicators that reflect how effectively organizational units actually transform nursing resources into nursing services should be a high priority. Unfinished nursing care is a nursing process performance measure that reflects the complexity of the nursing care environment and can be useful in comparing process performance across systems and organizations. Unfinished nursing care is congruent with many of the National Quality Forum requirements for endorsement and warrants further refinement as an important nurse-sensitive performance measure. abstract_id: PUBMED:29100860 Development of New Quality Measures for Hospital-Based Care of Suicidal Youth. Objectives: To develop, validate, and test the feasibility of implementation of 4 new quality measures assessing emergency department (ED) and inpatient care for suicidal youth. Methods: Four quality measures were developed to assess hospital-based care for suicidal youth. These measures, focused on counseling caregivers about restricting access to lethal means of self-harm and benefits and risks of antidepressant medications, were operationalized into 2 caregiver surveys that assessed ED and inpatient quality, respectively. Survey field tests included caregivers of youth who received inpatient and/or ED care for suicidality at 1 of 2 children's hospitals between July 2013 and June 2014. We examined the feasibility of obtaining measure scores and variation in scores. 
Multivariate models examined associations between quality measure scores and 4 validation metrics: modified Child Hospital Consumer Assessments of Health Care Providers and Systems, communication composites, hospital readmissions, and ED return visits. Results: Response rates were 35% (ED) and 31% (inpatient). Most caregivers reported receiving counseling to restrict their child's access to lethal means of self-harm (90% in the ED and 96% in the inpatient setting). In the inpatient setting, caregivers reported higher rates of counseling on benefits (95%) of newly prescribed antidepressants than risks (physical adverse effects 85%, increased suicidality 72%). Higher scores on the latter measure were associated with higher nurse (P < .001) and doctor (P < .01) communication composite scores. Measure scores were not associated with readmissions or ED return visits. Conclusions: These new quality measures evaluate key aspects of care for suicidal youth, and they may facilitate assessing quality of care for this vulnerable population. abstract_id: PUBMED:25638252 Performing well in financial management and quality of care: evidence from hospital process measures for treatment of cardiovascular disease. Background: Fiscal constraints faced by U.S. hospitals as a result of the recent economic downturn are leading to business practices that reduce costs and improve financial and operational efficiency in hospitals. There naturally arises the question of how this finance-driven management culture could affect the quality of care. This paper attempts to determine whether the process measures of treatment quality are correlated with hospital financial performance. Methods: Panel study of hospital care quality and financial condition between 2005 and 2010 for cardiovascular disease treatment at acute care hospitals in the United States. Process measures for condition-specific treatment of heart attack and heart failure and hospital-level financial condition ratios were collected from the CMS databases of Hospital Compare and Cost Reports. Results: There is a statistically significant relationship between hospital financial performance and quality of care. Hospital profitability, financial leverage, asset liquidity, operating efficiency, and costs appear to be important factors of health care quality. In general, public hospitals provide lower quality care than their nonprofit counterparts, and urban hospitals report better quality score than those located in rural areas. Specifically, the first-difference regression results indicate that the quality of treatment for cardiovascular patients rises in the year following an increase in hospital profitability, financial leverage, and labor costs. Conclusions: The results suggest that, when a hospital made more profit, had the capacity to finance investment using debt, paid higher wages presumably to attract more skilled nurses, its quality of care would generally improve. While the pursuit of profit induces hospitals to enhance both quantity and quality of services they offer, the lack of financial strength may result in a lower standard of health care services, implying the importance of monitoring the quality of care among those hospitals with poor financial health. abstract_id: PUBMED:33605800 Expert Stakeholder Prioritization of Process Quality Measures to Achieve Patient- and Family-Centered Palliative and End-of-Life Cancer Care. 
Importance: Quality measures of palliative and end-of-life care relevant to patients with advanced cancer have been developed, but few are in routine use. It is unclear which of these measures are most important for providing patient- and family-centered care and have high potential for improving quality of care. Objective: To prioritize process quality measures for assessing delivery of patient- and family-centered palliative and end-of-life cancer care in US Veterans Affairs (VA) health care facilities. Design, Setting, Participants: A panel of 10 palliative and cancer care expert stakeholders (7 physicians, 2 nurses, 1 social worker) rated process quality measure concepts before and after a 1-day meeting. Measures: Panelists rated 64 measure concepts on a nine-point scale on: (1) importance to providing patient- and family-centered care, and (2) potential for quality improvement (QI). Panelists also nominated five highest priority measure concepts ("top 5") on each attribute. Results: Panelists rated most measure concepts (54 premeeting, 56 post-meeting) as highly important to patient- and family-centered care (median rating ≥7). Considerably fewer (17 premeeting, 22 post-meeting) were rated as having high potential for QI. Measure concepts having postpanel median ratings ≥7 and nominated by one or more panelists as "top 5" on either attribute comprised a shortlist of 20 measure concepts. Conclusions: A panel of expert stakeholders helped prioritize 64 measure concepts into a shortlist of 20. Half of the shortlisted measures were related to communication about patient preferences and decision making, and half were related to symptom assessment and treatment. abstract_id: PUBMED:18441608 Hospital performance measures and quality of care. Increasingly, quality improvement initiatives emphasize public reporting of hospital performance measures, to encourage providers to improve, to help consumers pick providers, and to determine provider payments. Although these measures are based on compliance with well-established processes of care, it is unknown whether quality measured in this way is correlated with, or predictive of, clinical outcomes. This Issue Brief summarizes studies that examine and quantify the relationship between frequently used measures of hospital performance and hospital mortality. abstract_id: PUBMED:31760173 Provider-Level and Hospital-Level Factors and Process Measures of Quality Care Delivered in Pediatric Emergency Departments. Objective: Differences in the quality of emergency department (ED) care are often attributed to nonclinical factors such as variations in the structure, systems, and processes of care. Few studies have examined these associations among children. We aimed to determine whether process measures of quality of care delivered to patients receiving care in children's hospital EDs were associated with physician-level or hospital-level factors. Methods: We included children (<18 years old) who presented to any of the 12 EDs participating in the Pediatric Emergency Care Applied Research Network (PECARN) between January 2011 and December 2011. We measured quality of care from medical record reviews using a previously validated implicit review instrument with a summary score ranging from 5 to 35, and examined associations between process measures of quality and physician- and hospital-level factors using a mixed-effects linear regression model adjusted for patient case-mix, with hospital site as a random effect. 
Results: Among the 620 ED encounters reviewed, we did not find process measures of quality to be associated with any physician-level factors such as physician sex, years since medical school graduation, or physician training. We found, however, that process measures of quality were positively associated with delivery at freestanding children's hospitals (1.96 points higher in quality compared to nonfreestanding status, 95% confidence interval: 0.49, 3.43) and negatively associated with higher annual ED patient volume (-0.03 points per thousand patients, 95% confidence interval: -0.05, -0.01). Conclusion: Process measures of quality of care delivered to children were higher among patients treated at freestanding children's hospitals but lower among patients treated at higher volume EDs. Answer: Hospital performance on process measures does not solely measure high quality care directly but also serves as a marker of unmeasured aspects of care. The studies suggest that while process measures are correlated with improved outcomes, they may not directly cause these outcomes. For instance, differences in observed mortality rates across U.S. hospitals were larger than what would be expected if these differences were due solely to the direct effects of delivering measured care, indicating that performance measures reflect both direct improvements in care and are markers for other unmeasured elements of healthcare quality (PUBMED:22568614). Moreover, while the expansion of a hospital recognition program for stroke quality of care led to improvements in compliance with process measures, the impact on accelerating improvement was mixed, suggesting that other factors may be at play (PUBMED:28110310). Similarly, nurse-reported quality of care, which is based on the primary providers of bedside care, has been associated with lower odds of mortality and failure to rescue, greater patient satisfaction, and higher composite process of care scores, suggesting that nurses' perceptions may be indicative of broader aspects of hospital quality (PUBMED:22911102). However, when accounting for unobserved heterogeneity, the correlation between process performance measures and mortality was not found to be causal, implying that other unmeasured factors may influence mortality rates (PUBMED:19938724). Additionally, the development of new quality measures, such as those for hospital-based care of suicidal youth, can facilitate the assessment of quality of care for vulnerable populations, although these measures may not be directly associated with outcomes like readmissions or ED return visits (PUBMED:29100860). Financial management and quality of care also appear to be related, with better financial performance being associated with improved quality of care (PUBMED:25638252). Furthermore, expert stakeholders have prioritized process quality measures that are deemed important for patient- and family-centered palliative and end-of-life cancer care, highlighting the significance of communication and symptom assessment measures (PUBMED:33605800). In summary, while process measures are important indicators of hospital performance, they are not exclusively direct measures of high-quality care but also serve as proxies for other unmeasured aspects of care quality (PUBMED:18441608).
Instruction: Schizophrenia and affective disorder show different visual scanning behavior for faces: a trait versus state-based distinction? Abstracts: abstract_id: PUBMED:12208641 Schizophrenia and affective disorder show different visual scanning behavior for faces: a trait versus state-based distinction? Background: Abnormal visual scanpaths to faces and facial expressions in schizophrenia may underlie schizophrenic subjects' disturbed interpersonal communication. This study is the first to examine the specificity of these impairments to schizophrenia, by including an affective disorder psychiatric control group. Methods: The visual scanpath performance of 65 schizophrenia, 52 affective disordered, and 61 control subjects were compared in two experiments. In the "face recognition" experiment, subjects viewed four identifiable (non-degraded) neutral faces versus four matched non-identifiable (degraded) control faces. In the "facial affect recognition" experiment, subjects viewed positive (happy), negative (sad), and neutral (control) facial emotion stimuli. Concurrent behavioral tasks were face matching (face recognition) and expression matching (facial affect recognition), each under two multiple-choice conditions (7 or 3 options). Results: Scanpath disturbances were most apparent in schizophrenia subjects, who maintained a comparatively "restricted" scanpath style to all face stimuli. Schizophrenics subjects also showed the greatest recognition difficulties, particularly for neutral and happy faces. Scanpath parameters for affective disorder subjects differed only from the schizophrenia (but not the control) group, except for attention to facial features where they generally avoided facial features in all expressions and showed the greatest attentional problems of all groups for degraded faces. Conclusions: Our results suggest that a global restriction of visual scanpaths is specific to schizophrenic psychosis and might be a trait marker for this disorder, whereas scanpath abnormalities in affective disorder might instead reflect severe state-based (or discrete) attentional disturbances. abstract_id: PUBMED:8435388 Visual imagery and visual semantics in the cerebral hemispheres in schizophrenia. Divided visual field tasks were given to normal subjects, and patients with schizophrenia and affective disorder, to investigate hemisphere differences in the visual processing of standardised pictorial stimuli. There were two conditions: in the first, subjects were asked to decide whether a common entity represented by a picture was living or non-living, a task involving a categorical judgement based on semantic information; a left hemisphere task. In the second condition, subjects judged whether these depictions represented entities which were bigger or smaller than a cat; a right hemisphere task requiring visual imagery to compare spatial dimensions. It was found that the patient groups, while showing slower reaction time (RT) overall, both displayed a right hemisphere (RH) advantage on the imagery task. Furthermore, the schizophrenics' RHs showed the normal relationship between closeness of size comparison and RT, additional evidence that the visual imagery mechanism is intact. However, these patients failed to show the expected left hemisphere advantage on the visual-semantic task. There was a suggestion that performance on the semantic task was related to the experience of vivid imagery in normals and visual hallucinations in the schizophrenics. 
The possible contribution of hemispheric imbalance in the production of visual hallucinations from a disordered semantic system is discussed. abstract_id: PUBMED:24126043 Emotion recognition abilities across stimulus modalities in schizophrenia and the role of visual attention. Emotion can be expressed by both the voice and face, and previous work suggests that presentation modality may impact emotion recognition performance in individuals with schizophrenia. We investigated the effect of stimulus modality on emotion recognition accuracy and the potential role of visual attention to faces in emotion recognition abilities. Thirty-one patients who met DSM-IV criteria for schizophrenia (n=8) or schizoaffective disorder (n=23) and 30 non-clinical control individuals participated. Both groups identified emotional expressions in three different conditions: audio only, visual only, and combined audiovisual. In the visual only and combined conditions, time spent visually fixating salient features of the face was recorded. Patients were significantly less accurate than controls in emotion recognition during both the audio and visual only conditions but did not differ from controls on the combined condition. Analysis of visual scanning behaviors demonstrated that patients attended less than healthy individuals to the mouth in the visual condition but did not differ in visual attention to salient facial features in the combined condition, which may in part explain the absence of a deficit for patients in this condition. Collectively, these findings demonstrate that patients benefit from multimodal stimulus presentations of emotion and support hypotheses that visual attention to salient facial features may serve as a mechanism for accurate emotion identification. abstract_id: PUBMED:21333977 Current visual scanpath research: a review of investigations into the psychotic, anxiety, and mood disorders. The human visual system comprises an array of complex organs, which jointly decode information from visible light to construct a meaningful representation of the surrounding environment. The study of visual scanpaths transpired in a bid to enhance our understanding of the role of eye movements underpinning adaptive functioning as well as psychopathology and was further aided by the advent of modern eye-tracking techniques. This review provides a background to the nature of visual scanpaths, followed by an overview and critique of eye movement studies in specific clinical populations involving the psychotic, anxiety, and mood disorders, and concludes with suggested directions for future research. We performed a Medline and PsycInfo literature search, based on variations of the terms "visual scanpath," "eye-tracking," and "eye movements," in relation to articles published from 1986 to the present. Eye-tracking studies in schizophrenia mostly concurred with the existence of a "restricted" scanning strategy, characterized by a smaller number of fixations of increased duration, with shorter scanpath lengths, and a marked avoidance of salient features, especially in relation to facial emotion perception. This has been interpreted as likely reflecting dual impairments in configural processing as well as gestalt perception. Findings from the anxiety and mood disorders have conversely failed to yield coherent results, with further research warranted to provide corroborating evidence and overcome identified methodological limitations.
Future studies should also look toward applying similar techniques to related disorders as well as conducting parallel neuroimaging investigations to elucidate potential neurobiological correlates. abstract_id: PUBMED:16487492 The clinical phenotypes of juvenile bipolar disorder: toward a validation of the episodic-chronic distinction. Background: Recent research has addressed the issue of subtyping juvenile bipolar disorder (JBD). Accordingly, we set out to find out, in a naturalistic sample of bipolar children and adolescents with mania and mixed mania, whether the most useful subtyping should be based on clinical features (elated vs. irritable) or course (episodic vs. chronic). Methods: We studied 136 patients, 81 male patients (59.6%) and 55 female patients (40.4%), mean age 13.5 ± 2.9 years, meeting the DSM-IV diagnosis of bipolar disorder, assessed by a structured clinical interview (Schedule for Affective Disorders and Schizophrenia for School-Age Children-Present and Lifetime Version [K-SADS-PL]). Results: Regarding course, 77 patients (56.6%) had an episodic course and 59 patients (43.4%) had a chronic course. Patients with chronic course were significantly younger, had an earlier onset of JBD, and presented a more frequent comorbidity with disruptive behavior disorders. According to the prevalent mood disturbance, 75 patients (55.1%) showed an elated and 61 patients (44.9%) showed an irritable mood. Elated mood was more frequent in patients with episodic course, whereas irritable mood was more frequent in the patients with chronic course. Conclusions: These findings suggest that chronic versus episodic course may be a putative differential feature. Further validation of such a distinction would require prospective studies, temperament evaluation, gender and neurobiologic approaches, and differential psychopharmacologic assignment and response. abstract_id: PUBMED:1422169 Visual perceptual dysfunction in patients with schizophrenic and affective disorders versus control subjects. Visual perception was evaluated with standard tests for 26 hospitalized patients with schizophrenia, 23 hospitalized patients with affective disorders, and 60 control subjects. Both patient groups differed significantly from the control group on low amplitude of accommodation, esophoria or exophoria, vergence duction suppression, and convergence and divergence recovery ductions. Only the affective group showed significantly reduced fusion at near distance, and only the schizophrenic group differed significantly from control subjects on disorganized left apex formation. These findings appear to confirm the presence of visual perceptual disorders that can cause disability in psychiatric patients, with important research and rehabilitation implications. abstract_id: PUBMED:22021957 Visual hallucinations in mania. Visual hallucinations occur in a wide variety of neurological and psychiatric disorders, including toxic disturbances, drug withdrawal syndromes, focal central nervous system lesions, migraine headaches, blindness, schizophrenia, and psychotic mood disorders. Visual hallucinations are generally assumed to characteristically reflect organic disorders and are very rare in affective disorders. Here, we present a case of visual hallucinations in a young female with bipolar illness during the manic phase. abstract_id: PUBMED:27174389 Visual mismatch negativity (vMMN): A review and meta-analysis of studies in psychiatric and neurological disorders.
The visual mismatch negativity (vMMN) response is an event-related potential (ERP) component, which is automatically elicited by events that violate predictions based on prior events. vMMN experiments use visual stimulus repetition to induce predictions, and vMMN is obtained by subtracting the response to rare unpredicted stimuli from those to frequent stimuli. One increasingly popular interpretation of the mismatch response postulates that vMMN, similar to its auditory counterpart (aMMN), represents a prediction error response generated by cortical mechanisms forming probabilistic representations of sensory signals. Here we discuss the physiological and theoretical basis of vMMN and review thirty-three studies from the emerging field of its clinical applications, presenting a meta-analysis of findings in schizophrenia, mood disorders, substance abuse, neurodegenerative disorders, developmental disorders, deafness, panic disorder and hypertension. Furthermore, we include reports on aging and maturation as they bear upon many clinically relevant conditions. Surveying the literature we found that vMMN is altered in several clinical populations, which is in line with aMMN findings. An important potential advantage of vMMN however is that it allows the investigation of deficits in predictive processing in cognitive domains which rely primarily on visual information; a principal sensory modality and thus of vital importance in environmental information processing and response, and a modality which arguably may be more sensitive to some pathological changes. However, due to the relative infancy of research in vMMN compared to aMMN in clinical populations its potential for clinical application is not yet fully appreciated. The aim of this review and meta-analysis therefore is to present, in a detailed systematic manner, the findings from clinically-based vMMN studies, to discuss their potential impact and application, to raise awareness of this measure and to improve our understanding of the impact of disease upon fundamental aspects of visual information processing. abstract_id: PUBMED:12798258 Depression and suicidal behavior in adolescent inpatients with obsessive compulsive disorder. Background: To investigate the prevalence and correlations of suicidal behavior in obsessive compulsive disorder (OCD) among adolescent psychiatric inpatients. Methods: A total of 348 adolescents, representing consecutive admissions to an adolescent inpatient unit, were assessed. Of these, 40 patients had OCD, 118 had schizophrenia, 59 had an affective disorder, 81 had a conduct disorder and 50 had an eating disorder. In addition, 87 normal community controls were assessed. All subjects were assessed for suicidal behavior by the Childhood Suicide Potential Scale (CSPS), for depression by the Beck Depression Inventory, for impulsiveness by the Impulse Control Scale, for anxiety by the State-Trait Anxiety Scale and for aggression by the Yudofsky Overt Aggression Scale. Results: All the psychiatrically ill subjects, including those with OCD, had high levels of depression, anxiety and impulsiveness, which were far higher than those of the controls. The rate of attempted suicide was, however, much lower in the OCD subjects. In addition, there was a significant inverse correlation between suicidal behavior levels on the CSPS and depression in the OCD subjects, while all other subjects showed the expected significant positive correlation between level of suicidal behavior and depression.
Limitations: This study looked at a referred population and generalization to outpatient and community samples cannot be made. Distinguishing between the primary and the comorbid diagnosis is difficult and some findings are based on small sample size and therefore may be vulnerable to type I error. Conclusions: Although suicidal ideation and depressive symptoms are common in OCD adolescent inpatients, they seem to be protected against suicide attempts. The inverse relationship between suicidal behavior and depression may mean that suicidal behavior is, in some ways, qualitatively different from that seen in other psychiatrically ill adolescents. abstract_id: PUBMED:9783347 Backward masking in schizophrenia: time course of visual processing deficits during task performance. Backward masking deficits have been put forward as potential psychological markers for vulnerability to schizophrenia. This study was conducted to investigate whether schizophrenic patients improve their performance on a backward masking task during a single test session. The ability of a degraded stimulus version of the masking task to act as a specific diagnostic marker for paranoid schizophrenia (versus affective disorder) was also investigated. The backward masking task was performed on 18 paranoid schizophrenic patients, 18 unipolar depressed patients, and 18 non-psychiatric controls. Paranoid schizophrenic patients were included because they tend to show normal performance with traditional masking protocols. Schizophrenic patients made significantly more detection errors compared to depressives and non-psychiatric controls when interstimulus intervals (ISIs) longer than 14 ms were used. Unlike depressed patients and non-psychiatric controls, schizophrenic patients showed no reduction in error rate during the entire period over which the backward masking task was performed. The constant error rate which was observed at an ISI of 114 ms suggests that schizophrenic patients cannot attenuate the disruption effect due to deflection of attention from the target to the mask. The backward masking deficit in schizophrenia appears to arise from a temporally stable visual processing impairment in performance within a single test session. Answer: The study of visual scanning behavior for faces among individuals with schizophrenia and affective disorders suggests that there are distinct patterns between the two groups, which may reflect a trait versus state-based distinction. Schizophrenia subjects exhibited a "restricted" scanpath style to all face stimuli and showed the greatest recognition difficulties, particularly for neutral and happy faces. This global restriction of visual scanpaths might be specific to schizophrenic psychosis and could potentially serve as a trait marker for the disorder (PUBMED:12208641). In contrast, individuals with affective disorders differed from the schizophrenia group, except for attention to facial features where they generally avoided facial features in all expressions and showed the greatest attentional problems of all groups for degraded faces. This suggests that scanpath abnormalities in affective disorder might reflect severe state-based (or discrete) attentional disturbances rather than a trait (PUBMED:12208641).
Further supporting these findings, another study found that schizophrenic patients failed to show the expected left hemisphere advantage on a visual-semantic task, which could be related to the experience of vivid imagery in normals and visual hallucinations in schizophrenics (PUBMED:8435388). Additionally, patients with schizophrenia were less accurate than controls in emotion recognition during audio and visual only conditions but did not differ from controls on the combined condition, indicating that they benefit from multimodal stimulus presentations of emotion (PUBMED:24126043). Moreover, eye-tracking studies in schizophrenia mostly concurred with the existence of a "restricted" scanning strategy, characterized by fewer fixations of increased duration, with shorter scanpath lengths, and a marked avoidance of salient features, especially in relation to facial emotion perception (PUBMED:21333977). This is interpreted as likely reflecting dual impairments in configural processing as well as gestalt perception. In summary, the evidence suggests that schizophrenia is associated with a more pervasive and trait-like visual scanning impairment, while affective disorders may exhibit visual scanning abnormalities that are more state-dependent and related to the severity of attentional disturbances.
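The "restricted scanpath" construct invoked throughout this answer reduces to a handful of summary statistics computed from raw eye-tracking output: the number of fixations, their mean duration, and the total scanpath length. As a minimal illustrative sketch only, with invented fixation data and a hypothetical function name rather than code from any of the cited studies, these measures can be derived as follows in Python:

```python
# Minimal sketch: summary metrics commonly reported in scanpath studies,
# computed from an ordered list of fixations. All data are invented.
from math import hypot

def scanpath_metrics(fixations):
    """fixations: list of (x, y, duration_ms) tuples in temporal order."""
    n = len(fixations)
    mean_duration = sum(f[2] for f in fixations) / n
    # Scanpath length: summed Euclidean distance between successive fixations.
    length = sum(
        hypot(x2 - x1, y2 - y1)
        for (x1, y1, _), (x2, y2, _) in zip(fixations, fixations[1:])
    )
    return {"n_fixations": n, "mean_duration_ms": mean_duration,
            "scanpath_length_px": length}

# A "restricted" profile: few, long fixations covering little distance.
print(scanpath_metrics([(100, 100, 450), (110, 105, 500), (95, 98, 480)]))
```

In these terms, the schizophrenia finding is simply a low fixation count, a high mean duration, and a short scanpath length relative to controls.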
Instruction: Can vouchers deliver? Abstracts: abstract_id: PUBMED:35323489 Good Practices in Sponge Natural Product Studies: Revising Vouchers with Isomalabaricane Triterpenes. Species misidentification in the field of natural products is an acknowledged problem. These errors are especially widespread in sponge studies, albeit rarely assessed and documented. As a case study, we aim to revisit reports of isomalabaricane triterpenes, isolated from four demosponge genera: Jaspis, Geodia, Stelletta and Rhabdastrella. From a total of 44 articles (1981-2022), 27 unique vouchers were listed, 21 of which were accessed and re-examined here: 11 (52.4%) of these were misidentified. Overall, 65.9% of the studies published an incorrect species name: previously identified Jaspis and Stelletta species were all in fact Rhabdastrella globostellata. We conclude that isomalabaricane triterpenes were isolated from only two Rhabdastrella species and possibly one Geodia species. In addition to shedding new light on the distribution of isomalabaricane triterpenes, this study is an opportunity to highlight the crucial importance of vouchers in natural product studies. In doing so, we discuss the impact of species misidentification and poor accessibility of vouchers in the field of sponge natural products. We advocate for stricter voucher guidelines in natural product journals and propose a common protocol of good practice, in the hope of reducing misidentifications in sponge studies, ensuring reproducibility of studies, and facilitating follow-up work on the original material. abstract_id: PUBMED:28922907 Organ Vouchers and Barter Markets: Saving Lives, Reducing Suffering, and Trading in Human Organs. The essays in this issue of The Journal of Medicine and Philosophy explore an innovative voucher program for encouraging kidney donation. Discussions cluster around a number of central moral and political/theoretical themes: (1) What are the direct and indirect health care costs and benefits of such a voucher system in human organs? (2) Do vouchers lead to more effective and efficient organ procurement and allocation or contribute to greater inequalities and inefficiencies in the transplantation system? (3) Do vouchers contribute to the inappropriate commodification of human body parts? (4) Is there a significant moral difference between such a voucher system and a market in human organs for transplantation? This paper argues that while kidney vouchers constitute a step in the right direction, fuller utilization of market-based incentives, including, but not limited to, barter exchanges (e.g., organ exchanges, organ chains, and organ vouchers), would save more lives and further reduce human suffering. abstract_id: PUBMED:32153939 Impact of fruits and vegetables vouchers on food insecurity in disadvantaged families from a Paris suburb. Background: Social inequalities in nutrition lead a high number of families to struggle with food insecurity, even in developed countries. We aimed to assess the impact of fruits and vegetables vouchers on food security among disadvantaged households from a Paris suburb. Methods: We used a pre-post assessment design. Families answered face-to-face questionnaires on food consumption and food security status before and after a randomly assigned intervention. Households in the intervention group received vouchers to buy exclusively fruits and vegetables over one year.
Both intervention and control groups benefitted from nutritional education through workshops performed by dieticians during the study period. The Household Food Security Module (HFSM) was used to assess the food security status of households at inclusion. The Food Insufficiency Indicator (FSI) was used to assess food security at inclusion and follow-up. Evolution of the FSI in both groups was evaluated using the McNemar test. Results: Among the 91 families included between May 2015 and May 2016, 64 completed the post-assessment questionnaire. At inclusion, 68.3% of families were experiencing food insecurity and 78.1% were experiencing food insufficiency. No association was found between food consumption and food security status. After one-year follow-up, the prevalence of food insufficiency was significantly decreased in the intervention group (61.8%, p-value = 0.03), and unchanged in the control group. Conclusion: In this pilot study, food insufficiency was significantly decreased in families receiving vouchers for fruits and vegetables over a one-year period. Trial Registration: NCT02461238, registered 3 June 2015 - Retrospectively registered, https://clinicaltrials.gov/ct2/show/NCT02461238. abstract_id: PUBMED:28847874 Risk factors for redemption of feline neutering vouchers issued by welfare organisations. Animal welfare organisations in the UK invest substantial resources to fund subsidised neutering vouchers to increase feline neutering rates. Little evidence exists to quantify factors influencing voucher redemption. This cross-sectional study assessed risk factors (including voucher value and expiry period) for redemption of neutering vouchers issued by staff of the feline welfare charity Cats Protection. Data were collected using telephone interview-administered questionnaires of cat owners who were issued a neutering voucher(s) and from Cats Protection voucher payment records. During the four-month study, 2655 owners were issued 3935 vouchers. Multivariable logistic regression analysis revealed that owners of cats that had produced ≥1 litter before voucher request, owners eligible for a voucher due to receiving benefits and owners who rented their homes or lived with family were less likely to redeem vouchers than owners whose cats had not had a litter, were eligible for a voucher for other reasons and/or owned their home, respectively. abstract_id: PUBMED:32878448 Heterogeneity in the Effects of Food Vouchers on Nutrition Among Low-Income Adults: A Quantile Regression Analysis. Purpose: To determine whether baseline fruit and vegetable (FV) intake or other predictors are associated with response to food vouchers (change in FV intake) among low-income adults. Design: Secondary analysis of a randomized, 2 × 2 factorial, community-based trial. Setting: San Francisco, California. Subjects: 359 low-income adults aged ≥21 years. Intervention: Participants were mailed $20 of food vouchers monthly for 6 months, and randomized to 1 of 4 arms according to: eligible foods (FV only or any foods) and redemption schedule (weekly or monthly). Measures: Change in FV intake measured in cup equivalents between baseline and month 6 of the trial, based on 24-hour dietary recalls. Analysis: Quantile multivariate regressions were employed to measure associations between key predictors and change in FV intake across study arms.
Results: FV-only weekly vouchers were associated with increased FV intake at the 25th percentile (0.24 cups/day, p = 0.048) and 50th percentile (0.37 cups/day, p = 0.02) of the distribution, but not at lower and higher quantiles. Response to the vouchers diminished 0.10 cups/day for each additional household member (p = 0.02). Conclusion: Response to food vouchers varied along the FV intake distribution, pointing to some more responsive groups and others potentially needing additional support to increase FV intake. Larger households likely need vouchers of higher dollar value to result in similar changes in dietary intake as those observed in smaller households. abstract_id: PUBMED:36596312 An evaluation of the pilot scheme on community care service vouchers for the elderly in Hong Kong. The Pilot Scheme on Community Care Service Vouchers for the Elderly (CCSVs) from 2013 onward aims at empowering users with autonomy in choosing services and service providers, enhancing financial sustainability of community care services (CCSs) and encouraging participation by service providers. This article analyses the effectiveness of the CCSVs in terms of utilization rate, attrition rate and reasons for withdrawal, and provides sound short-, medium- and long-term recommendations for CCSs. Secondary data from the annual survey conducted by the Social Welfare Department with all voucher holders in the first (2013-2017) and second phases (2016-2020) were used. The utilization rate of the CCSVs in the first phase was satisfactory, but the second phase was constrained by a low utilization rate, as 17%-35% of the CCSV holders did not use their vouchers to purchase CCSs. Moreover, the high attrition rate of CCSV holders in both phases warrants special attention. abstract_id: PUBMED:37226746 Transferable Exclusivity Vouchers and Incentives for Antimicrobial Development in the European Union. The European Commission's proposal to address antimicrobial resistance using transferable exclusivity vouchers (TEVs) is fundamentally flawed. European policymakers and regulators should consider alternatives, such as better funding for basic and clinical research, use of advance market commitments funded by a pay-or-play tax, or enacting an EU Fund for Antibiotic Development. abstract_id: PUBMED:34913216 Providing vouchers and value information for already free eye exams increases uptake among a low-income minority population: A randomized trial. We study whether vouchers without and with value information encourage attendance of already free follow-up appointments among low-income minority individuals referred for evaluation of possible eye disease. Between May 2017 and September 2018, 821 individuals referred from 114 screening events across Baltimore City were offered (1) standard referral for a free follow-up appointment and prescription glasses, (2) a paper voucher described as redeemable for free follow-up and prescription glasses, or (3) an otherwise identical paper voucher which also indicated the monetary value of the appointment ($250). Under all three conditions, all referred individuals received the same patient education, counseling, and appointment reminders. We find that vouchers without and with value information increase follow-up by 12.5 and 20.3 percentage points, respectively, corresponding to a 36% and 58% increase compared to the standard referral for free follow-up (i.e., without a voucher).
We conclude that using vouchers is a promising, low-cost approach to increase uptake of already free health services, particularly when the vouchers also provide value information. abstract_id: PUBMED:29956890 A realist qualitative study to explore how low-income pregnant women use Healthy Start food vouchers. Healthy Start is the UK government's food voucher programme for low-income pregnant women and young children. It was introduced in 2006, but the impact of the programme on nutritional outcomes remains understudied. This study sought to explore potential outcomes of the Healthy Start programme (including intended and unintended outcomes) and develop explanations for how and why these outcomes might occur. A realist review preceded this study, in which programme theories were developed and tested using existing evidence. This qualitative study aimed to further refine and consolidate the programme theories from the realist review while remaining open to new and emerging theories (or hypotheses) about how low-income pregnant women use Healthy Start vouchers. Semistructured interviews were conducted with 11 low-income women from North West England, who received Healthy Start vouchers during pregnancy. A realist logic of analysis was applied to generate clear and transparent linkages between outcomes and explanations. The findings suggested that some women used the vouchers to improve their diets during pregnancy (intended outcome), whereas some women were diverted towards alternative or unintended outcomes. Women's circumstances, values, beliefs, and motivations influenced how they perceived and responded to the vouchers. This paper presents four evidence-based programme theories to explain four contrasting (and potentially overlapping) outcomes: dietary improvements (theory refined from review), shared benefits (new theory), financial assistance (theory refined from review), and stockpiling formula (new theory). It considers how the Healthy Start programme could be improved, to increase the possibilities for low-income women to experience the intended outcome of dietary improvements. abstract_id: PUBMED:34794009 The Roles of Nutrition Education and Food Vouchers in Improving Child Nutrition: Evidence from a Field Experiment in Ethiopia. Mothers' lack of knowledge about child nutrition and limited resources lead to poor diets among children in developing countries, increasing their risk of chronic undernutrition. We implemented a cluster randomized control trial that randomly provided four-month-long Behavior Change Communication (BCC) and food vouchers in Ethiopia. We find improvements in child-feeding practices and a reduction in chronic child undernutrition only when BCC and vouchers are provided together. BCC or vouchers alone had limited impacts. Our results highlight the importance of adding an effective educational component to existing transfer programs. Answer: Vouchers can indeed deliver positive outcomes in various contexts, as evidenced by the studies provided. In the field of natural product studies, vouchers play a crucial role in ensuring the correct identification of species, which is essential for reproducibility and follow-up work on original material. A case study focusing on sponge natural product studies found that a significant number of species were misidentified, leading to incorrect reporting of isomalabaricane triterpenes. The study advocates for stricter voucher guidelines to reduce misidentifications in sponge studies (PUBMED:35323489).
In the context of organ donation, a voucher program for encouraging kidney donation is discussed, with the suggestion that market-based incentives, including organ vouchers, could save more lives and reduce human suffering (PUBMED:28922907). Vouchers have also been shown to have a positive impact on food security among disadvantaged households. A study in a Paris suburb demonstrated that fruit and vegetable vouchers significantly decreased food insufficiency among families receiving them over a one-year period (PUBMED:32153939). In animal welfare, the redemption of neutering vouchers issued by welfare organizations was influenced by factors such as voucher value, expiry period, and the owner's circumstances, with certain groups being less likely to redeem vouchers (PUBMED:28847874). Food vouchers were found to have heterogeneous effects on nutrition among low-income adults, with some groups being more responsive to the vouchers and others potentially needing additional support to increase fruit and vegetable intake (PUBMED:32878448). The effectiveness of community care service vouchers for the elderly in Hong Kong was evaluated, revealing issues with utilization and attrition rates, suggesting the need for improvements in the voucher scheme (PUBMED:36596312). In the European Union, the proposal to use transferable exclusivity vouchers for antimicrobial development was criticized, with suggestions for alternative incentives (PUBMED:37226746). Providing vouchers, even for already free services, was shown to increase uptake among a low-income minority population, indicating that vouchers can be a promising approach to increase the use of health services (PUBMED:34913216). The Healthy Start food voucher program in the UK was explored, revealing that while some low-income pregnant women used the vouchers to improve their diets, others experienced alternative outcomes influenced by their circumstances and motivations (PUBMED:29956890).
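As an aside on methods, the Paris suburb study cited above evaluated the change in food insufficiency with a McNemar test, the standard test for a binary outcome measured twice in the same households. A minimal sketch of that kind of analysis, with invented counts rather than the study's data, might look like this:

```python
# Minimal sketch: McNemar's test on paired pre/post binary outcomes.
# The 2x2 table below is invented for illustration.
from statsmodels.stats.contingency_tables import mcnemar

# Rows: food-insufficient at baseline (yes/no);
# columns: food-insufficient at follow-up (yes/no).
table = [[20, 12],   # insufficient -> (still insufficient, recovered)
         [3, 29]]    # sufficient   -> (became insufficient, still sufficient)

result = mcnemar(table, exact=True)  # exact binomial test on discordant pairs
print(result.statistic, result.pvalue)
```

Only the discordant cells (12 recoveries versus 3 new cases here) drive the test, which is why a modest sample can still show a significant pre/post shift.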
Instruction: Early severe HIV disease precedes early antiretroviral therapy in infants: Are we too late? Abstracts: abstract_id: PUBMED:24925044 Early severe HIV disease precedes early antiretroviral therapy in infants: Are we too late? Objective: To describe the degree of HIV disease progression in infants initiating antiretroviral therapy (ART) by three months of age in a programmatic setting in South Africa. Design: This was a programmatic cohort study. Methods: Electronic and manual data extraction from databases and antiretroviral registers in 20 public clinics in Cape Town and electronic data extraction from a large ART service at Chris Hani Baragwanath Hospital in Soweto were performed. Records of all infants initiated on ART by three months of age between June 2007 and September 2010 were extracted. Demographics, immunological and clinical stage at ART initiation were analyzed descriptively by chi-square, two-sample t-test and Kaplan-Meier methods. Results: A total of 403 records were identified: 88 in Cape Town and 315 in Soweto. Median age at ART initiation was 8.4 [interquartile range (IQR): 7.2-9.7] weeks. At ART initiation, 250 infants (62%) had advanced HIV disease (CD4% <25% or absolute CD4 <1500 cells/mm³ or WHO clinical Stage 3 or 4). Median age at ART initiation by site was 10.3 (IQR: 8.2-11.9) weeks in Cape Town and 8.6 (IQR: 7.7-10.0) weeks in Soweto infants (p<0.0001). In Cape Town, 73 infants (83%) had advanced HIV disease at ART initiation, compared to 177 infants (56%) in Soweto (p<0.0001). On logistic regression, each month increase in age at ART initiation lowered the odds of initiating ART in an optimal state (OR: 0.56, CI: 0.36-0.94) and increased the odds of advanced HIV disease at ART initiation (OR: 1.69, CI: 1.05-2.71). Conclusions: ART initiation by three months of age may not adequately prevent disease progression. New emphasis on early diagnosis and rapid initiation of ART in the first weeks of life is essential to further reduce infant mortality. abstract_id: PUBMED:36207834 Significance of initiating antiretroviral therapy in the early stage of HIV infection. A growing number of guidelines now recommend that human immunodeficiency virus (HIV) infected patients should be given early antiretroviral therapy (ART), especially in acute HIV infection. ART during early infection can limit viral reservoirs and improve immune cell function. From a societal perspective, early-infected individuals who achieve a state of viral suppression through ART can reduce the chance of HIV transmission and reduce the acquired immunodeficiency syndrome (AIDS)-related disease burden. However, there are many problems in the early diagnosis and treatment of HIV infection, including personal and social factors, which hinder the implementation and development of early treatment. It is recommended to initiate ART in the early stage of HIV infection, combined with other treatment strategies, so as to achieve a functional cure. abstract_id: PUBMED:25154159 Effect of antiretroviral therapy on pulmonary hypertension in HIV patients. Pulmonary hypertension is an important cause of dyspnoea in human immunodeficiency virus/acquired immune deficiency syndrome (HIV/AIDS) patients. The effect of highly active antiretroviral therapy on pulmonary artery pressure has been an area of controversy. The purpose of this study was to evaluate the effect of antiretroviral therapy on pulmonary hypertension in HIV patients.
This prospective cohort study was performed in a peripheral medical college of West Bengal with 88 patients from October 2008 to September 2011. By echocardiography, pulmonary artery pressure was recorded before initiation of antiretroviral therapy. After one year of antiretroviral therapy repeat echocardiography was done and pulmonary artery pressure was evaluated. The values were analysed. The present study showed that highly active antiretroviral therapy improves pulmonary artery pressure in HIV/AIDS patients if instituted at early stages (WHO classes I and II). However, at more advanced stages of pulmonary artery hypertension, it does not have any significant effect on reduction of the same. The present study documents that early detection of pulmonary artery hypertension in HIV/AIDS patients is essential and prompt institution of highly active antiretroviral therapy should be considered in them even when those patients do not fulfill the conventional criteria for initiation of this treatment. abstract_id: PUBMED:19532045 Renal disease: the effects of HIV and antiretroviral therapy and the implications for early antiretroviral therapy initiation. Purpose Of The Review: This review will summarize the relevant literature supporting the early initiation of antiretroviral therapy among persons with HIV and kidney disease. Recent Findings: Recent guidelines support the initiation of antiretroviral therapy among persons with HIVAN as soon as the diagnosis of kidney disease is made. However, few patients with HIV and kidney disease undergo renal biopsy to determine the histology of their renal lesion. Observational studies, however, suggest that antiretroviral therapy is associated with a lesser risk of new AIDS defining illness and mortality associated with the presence of proteinuria or increased creatinine. These abnormalities are seen in a larger proportion of persons with HIV than only those that undergo biopsy. Therefore, these markers could describe the subgroup of patients at highest risk of poor outcomes and potentially prompt the consideration of earlier initiation of therapy on an individual basis. Summary: Early initiation of antiretroviral therapy probably improves outcomes among persons with HIVAN. The presence of proteinuria or an elevated creatinine could prompt consideration for early initiation of antiretroviral therapy on a case-by-case basis. abstract_id: PUBMED:18784453 Early mortality among adults accessing antiretroviral treatment programmes in sub-Saharan Africa. Two-thirds of the world's HIV-infected people live in sub-Saharan Africa, and more than 1.5 million of them die annually. As access to antiretroviral treatment has expanded within the region, early pessimism concerning the delivery of antiretroviral treatment using a large-scale public health approach has, at least in the short term, proved to be broadly unfounded. Immunological and virological responses to ART are similar to responses in patients treated in high-income countries. Despite this, however, early mortality rates in sub-Saharan Africa are very high; between 8 and 26% of patients die in the first year of antiretroviral treatment, with most deaths occurring in the first few months. Patients typically access antiretroviral treatment with advanced symptomatic disease, and mortality is strongly associated with baseline CD4 cell count less than 50 cells/µl and WHO stage 4 disease (AIDS).
Although data are limited, leading causes of death appear to be tuberculosis, acute sepsis, cryptococcal meningitis, malignancy and wasting syndrome. Mortality rates are likely to depend not only on the care delivered by antiretroviral treatment programmes, but more fundamentally on how advanced disease is at programme enrollment and the quality of preceding healthcare. In addition to improving delivery of antiretroviral treatment and providing it free of charge to the patient, strategies to reduce mortality must include earlier diagnosis of HIV infection, strengthening of longitudinal HIV care and timely initiation of antiretroviral treatment. Health system delays in antiretroviral treatment initiation must be minimized, especially in patients who present with advanced immunodeficiency. abstract_id: PUBMED:23908475 Early immunologic failure is associated with early mortality among advanced HIV-infected adults initiating antiretroviral therapy with active tuberculosis. Background: The relationship between antiretroviral therapy (ART) response and early mortality after ART initiation is unknown. We hypothesized that early mortality is associated with decreased early immunologic response to ART. Methods: We prospectively determined the association between changes in plasma human immunodeficiency virus type 1 (HIV-1) RNA and CD4(+) T-cell counts (CD4 count) after 4 weeks of ART and early mortality in adults with pulmonary tuberculosis and pre-ART CD4 counts ≤ 125 cells/µL. Purified protein derivative (PPD)-specific immune recovery was determined by interferon-γ enzyme-linked immunosorbent spot assays. Levels of interleukin 6, C-reactive protein, and soluble CD14 were assessed. Patients with CD4 count and viral load values at baseline and week 4 were analyzed using multiple logistic regression. Results: Early immunologic response, but not pre-ART CD4 counts or virologic response, was related to early mortality (8 [interquartile range {IQR}, -18 to 43] vs 68 [IQR, 24-131] cells/µL, P = .002). In a logistic regression model, every 20 cells/µL increase in the CD4 count from baseline to week 4 was independently associated with a 40% reduction in the odds of death (odds ratio, 0.59 [95% confidence interval, .41-.87]). PPD-specific immune recovery was lower, whereas levels of immune activation were higher, among deaths. Conclusions: Early immunologic failure despite virologic suppression is associated with early mortality after ART initiation in advanced HIV/tuberculosis. abstract_id: PUBMED:34259393 Cognitive outcomes at ages seven and nine years in South African children from the children with HIV early antiretroviral (CHER) trial: a longitudinal investigation. Introduction: Many children living with HIV (CLWH) display impaired cognition. Although early combination antiretroviral therapy (ART) produces improved cognitive outcomes, more long-term outcome data are needed. After concluding the Children with HIV Early antiRetroviral (CHER) trial in 2011, we investigated cognitive performance at seven and nine years of age. Participants had been randomized to deferred ART (ART-Def; n = 22); immediate time-limited ART for 40 weeks (ART-40W; n = 30) and immediate time-limited ART for 96 weeks (ART-96W; n = 18). We also recruited HIV-exposed uninfected (CHEU; n = 28) and HIV-unexposed (CHU; n = 35) children. Methods: Data were collected between May 2012 and December 2017.
Mixed-model repeated-measures ANOVAs assessed differences over time between CLWH (ART-40W, ART-96W and ART-Def) and CHIV- (CHEU and CHU); between ART-Early (ART-40W and ART-96W), ART-Def, CHEU and CHU; and between ART-40W, ART-96W, ART-Def, CHEU and CHU. Results: All comparisons found significant effects of Time for most outcome variables (better scores at nine than at seven years; ps < 0.05). The first ANOVAs found that for (a) motor dexterity, CLWH performed worse than CHIV- at seven years (p < 0.001) but improved to equivalence at nine years, (b) visual-spatial processing and problem solving, only CLWH (p < 0.04) showed significant performance improvement over time and (c) working memory and executive function, CLWH performed worse than CHIV- at both seven and nine years (p = 0.03 and 0.04). The second ANOVAs found that for (a) working memory, CHU performed better than ART-Early and CHEU (p < 0.01 and <0.04), and (b) motor dexterity, ART-Def performed worse than ART-Early, CHEU and CHU at seven years (p = 0.02, <0.001 and <0.001 respectively) but improved to equivalence at nine years (ps > 0.17). Similarly, for motor dexterity, ART-Def performed worse than ART-96W, CHEU and CHU at seven years (p < 0.04, <0.001 and <0.001) but improved to equivalence at nine years (ps > 0.20). Conclusions: Although neurocognitive developmental trajectories for treatment groups and controls were largely similar (i.e. performance improvements from 7 to 9), all ART-treated children, regardless of treatment arm, remain at risk for cognitive deficits over early school ages. Although the nature of these deficits may change as cognitive development proceeds, there are potential negative consequences for these children's future learning, reasoning and adaptive functioning. abstract_id: PUBMED:37243444 Level of Antiretroviral Therapy Adherence and Associated Factors Among People Living with HIV in the Context of Early Antiretroviral Therapy Initiation in Wolaita Sodo Town, Ethiopia: Cross-Sectional Study. Early antiretroviral therapy (ART) initiation is a WHO recommendation to hasten viral suppression and to prevent further sexual transmission of the virus. There is no evidence showing the level of adherence to ART after initiation of the universal test and treat (UTT) strategy in Ethiopia, including the study area. This study aimed to determine the level of ART adherence and associated factors among HIV/AIDS patients in the context of the UTT strategy. A health facility-based study was conducted on 352 people living with HIV who started their ART follow-up after application of the UTT strategy in Ethiopia from April 15 to June 5, 2020. A systematic random sampling method was used to select the study participants. An interviewer-administered questionnaire was used to collect data, and the collected data were entered directly into SPSS version 21 and analyzed. Both bivariate and multivariate logistic regression analyses were carried out. Strength and direction of association were determined using the adjusted odds ratio (AOR) with its 95% confidence interval. A total of 352 participants were included in the study. The overall level of adherence was 290 (82.4%). The most commonly used ART regimen was TDF+3TC+EFV, 201 (57.1%).
In bivariate analysis, type of health institution, crude odds ratio (COR) = 2.934 (1.388-6.200), age of 18-27 years, COR = 0.357 (0.133-0.959), current viral load on the 3 log scale, COR = 0.357 (0.133-0.959), and change of ART medication, COR = 8.088 (1.973-33.165) were associated with medication adherence. Health institution type, AOR = 2.615 (1.147-5.960), and change of ART medication, AOR = 7.267 (1.683-31.384), were independent predictors of ART adherence. A low level of ART adherence was detected in this study; it was below the recommended good-adherence standard and the 90-90-90 target strategy. Therefore, patients should get adequate and comprehensive ART adherence counseling before initiation of ART treatment and during the follow-up time. abstract_id: PUBMED:19532051 Early antiretroviral therapy: the role of cohorts. Purpose Of Review: To review the data that contribute to the debate on the optimal time to initiate highly active antiretroviral therapy in HIV-infected individuals, with a focus on the information that is available from cohort studies. Recent Findings: The findings from cohort studies generally support initiation of highly active antiretroviral therapy at CD4 cell counts of more than 350 cells/µl. In particular, the findings that death rates among treated HIV-infected individuals are higher than those in the general population, and that the risks of AIDS and serious non-AIDS events are higher in those with lower CD4 cell counts (even when the count remains >350 cells/µl), suggest that earlier initiation of highly active antiretroviral therapy may prevent some excess morbidity and mortality. However, given the lack of adjustment for lead-time bias in many analyses, the potential for residual confounding and the possible incomplete ascertainment of relevant outcomes in cohorts, it cannot be concluded that the benefits of highly active antiretroviral therapy when started at higher CD4 cell counts will outweigh the possible detrimental effects. Summary: Whereas the data from cohort studies currently support initiation of highly active antiretroviral therapy at CD4 cell counts of more than 350 cells/µl, there is an urgent need for data from randomized trials to inform this decision. abstract_id: PUBMED:26940802 Early Treatment in HIV Patients: A Cost-Utility Analysis from the Italian Perspective. Background And Objective: Highly active antiretroviral therapy (HAART) has modified the clinical course of human immunodeficiency virus (HIV) infection, reducing the rate of disease progression, the incidence of opportunistic infections and mortality. Several recent studies show early antiretroviral therapy reduces the risk of AIDS and HIV-related disease. The aim of this study was to perform an economic analysis to estimate the cost-utility of early antiretroviral therapy in Italy for managing HIV-infected patients. Methods: The incremental cost-utility analysis was carried out to quantify the benefits of the early-treatment approach in HIV subjects. A Markov simulation model including direct costs and health outcomes was developed from a third-party (Italian National Healthcare Service) payer's perspective for four CD4 strata. A total of 5000 Monte Carlo simulations were performed on two distinct scenarios: standard of care (SoC), in which 30% of patients started HAART with a CD4 count ≥500 cells/mm³, versus the early-treatment scenario (ETS), where the number of patients starting HAART with a CD4 count ≥500 cells/mm³ increased to 70%.
A systematic literature review was carried out to identify epidemiological and economic data, which were subsequently used to inform the model. In addition, a one-way probabilistic sensitivity analysis was performed in order to measure the relationship between the effectiveness of the treatments and the number of patients to undergo early treatment. Results: The model shows, in terms of the incremental cost-effectiveness ratio (ICER) per quality-adjusted life year (QALY) gained, that early treatment appeared to be the most cost-effective option compared to SoC (ICER = €13,625) over a time horizon of 10 years. The cost effectiveness of ETS becomes more favourable as the time horizon of the analysis is extended (ICER = €7526 per QALY at 20 years and €8382 per QALY at 30 years). The one-way sensitivity analysis on the main variables confirmed the robustness of the model for the early-treatment approach. Conclusion: Our model represents a tool for policy makers and health-care professionals to provide information on the cost effectiveness of the early-treatment approach in HIV-infected patients in Italy. Starting HAART earlier keeps HIV-infected patients in better health and reduces the incidence of AIDS- and non-AIDS-related events, generating a gain in terms of both patients' health and correct resource allocation. Answer: The question "Early severe HIV disease precedes early antiretroviral therapy in infants: Are we too late?" refers to the concern that initiating antiretroviral therapy (ART) by three months of age may not be sufficient to prevent disease progression in infants with HIV. A study conducted in South Africa found that a significant proportion of infants already had advanced HIV disease at the time of ART initiation, despite starting treatment by three months of age. Specifically, 62% of the infants had advanced HIV disease (CD4% <25% or absolute CD4 <1500 cells/mm³ or WHO clinical Stage 3 or 4) when they began ART. The median age at ART initiation was 8.4 weeks, and each month increase in age at ART initiation was associated with a lower likelihood of initiating ART in an optimal state and a higher likelihood of advanced HIV disease at ART initiation (PUBMED:24925044). This suggests that even with early ART initiation, many infants may already be experiencing significant disease progression, raising concerns that the current strategies may not be early enough to prevent the detrimental effects of HIV in this population. The findings underscore the need for new emphasis on early diagnosis and rapid initiation of ART in the first weeks of life to further reduce infant mortality and improve outcomes (PUBMED:24925044).
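Several of the abstracts above report effects as odds ratios with 95% confidence intervals. For readers who want to sanity-check such figures, a Wald interval can be recomputed from a 2x2 table in a few lines. The sketch below is illustrative only: the counts are reconstructed from the Cape Town versus Soweto advanced-disease proportions quoted in the first abstract (73/88 vs 177/315), and the resulting odds ratio is a reader's re-derivation, not a number reported by the paper.

```python
# Minimal sketch: odds ratio with a 95% Wald confidence interval
# from a 2x2 table.
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a,b = group 1 (event, no event); c,d = group 2 (event, no event)."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

# Advanced HIV disease at ART initiation: Cape Town 73/88, Soweto 177/315.
print(odds_ratio_ci(73, 15, 177, 138))
```

A confidence interval excluding 1 is consistent with the p<0.0001 site difference the abstract reports.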
Instruction: Acute care surgery: now that we have built it, will they come? Abstracts: abstract_id: PUBMED:37952190 Statistical primer: using prognostic models to predict the future: what cardiothoracic surgery can learn from Strictly Come Dancing. Objectives: Prognostic models are widely used across medicine and within cardiothoracic surgery, where predictive tools such as EuroSCORE are commonplace. Such models are a useful component of clinical assessment but may be misapplied. In this article, we demonstrate some of the major issues with risk scores by using the popular BBC television programme Strictly Come Dancing (known as Dancing with the Stars in many other countries) as an example. Methods: We generated a multivariable prognostic model using data from the then-completed 19 series of Strictly Come Dancing to predict prospectively the results of the 20th series. Results: The initial model based solely on demographic data was limited in its predictive value (0.25, 0.22; R² and Spearman's rank correlation, respectively) but was substantially improved following the introduction of early judges' scores deemed representative of whether contestants could actually dance (0.40, 0.30). We then utilize our model to discuss the difficulties and pitfalls in using and interpreting prognostic models in cardiothoracic surgery and beyond, particularly where these do not adequately capture potentially important prognostic information. Conclusion: Researchers and clinicians alike should use prognostic models cautiously and not extrapolate conclusions from demographic data alone. abstract_id: PUBMED:2528180 Primary acute peritonitis in adults. The authors report 12 cases of primary acute peritonitis that were operated on over a period of 10 years, representing 2.8% of the total number of acute cases of peritonitis, with the exclusion of cases of postoperative peritonitis. Since it is so rare, it is understandable that primary acute peritonitis of the adult is less well known by the general practitioner in surgery. The particular background of these patients, frequently involving other forms of pathology, and the generally depressed immunological background explain the atypical clinical evolution, with attenuated local abdominal signs, a fact which retards the diagnosis, and hence the therapy. As a general rule adults come rather late to surgical departments, usually transferred from another department (diabetes, internal medicine, gynecology, communicable diseases, etc.). The surgeon also has difficulties in making a diagnosis. When the decision to operate has been taken (in most of the cases this happens at a late stage), peritonitis is usually in the purulent phase and careful drainage of the peritoneal cavity is necessary, combined with antibiotic therapy that should be started on the operating table, in massive doses. Preoperative etiological diagnosis is difficult. Direct bacterioscopy of the peritoneal exudate is decisive and it should be asked for by the surgeon even in the early stage of surgery. Exhaustive visceral surgical exploration, which should, in principle, eliminate secondary peritonitis, is neither easy to perform, nor without risks in these patients, who are usually elderly, obese, and have multiple previous interventions. Appendectomy, as a complementary gesture, is contraindicated. The prognosis in the adult, in contrast with that of children, is severe, with very high perioperative morbidity and mortality (above 50% in the authors' experience).
abstract_id: PUBMED:30243444 Evolution and Current Trends in the Management of Acute Appendicitis. The treatment of appendicitis has evolved since the first appendectomy in the eighteenth century. It seems to have come full circle with nonoperative management in the era before frequent surgical interventions, to open surgical interventions, minimally invasive interventions, and now back to a renewed interest in nonoperative management of acute appendicitis. Scoring systems to help refine the diagnosis of acute appendicitis and advances in medical imaging have also changed the management of this condition. Scientific investigations into the role the microbiome of the appendix plays in this disease process are also being considered. abstract_id: PUBMED:2655362 Acute non-calculous cholecystitis. The authors compared their own clinical experience with the existing literature and noted that acute non-calculous cholecystitis is a clinical picture with explosive evolution and a very high mortality, presenting atypical clinical signs in very ill patients recovering from previous non-surgical disease. In a series of 350 operations on the gallbladder and biliary ducts over a five-year period, 86 cases of acute cholecystitis were confirmed. Only 4 were proven to be acute acalculous cholecystitis. This represents 1.14% of the whole series and 4.65% of the acute cholecystitis series. Early diagnosis and immediate surgery are needed to reduce mortality. abstract_id: PUBMED:15279444 Suppurative complications of acute pancreatitis. Background/aims: This retrospective study investigated the management of the suppurative complications of acute pancreatitis in our clinic. Materials And Methods: A retrospective study was carried out in the Surgery no. 1 Clinic of Timisoara over a period of 6 years (1996-2001) on acute pancreatitis with suppurative complications. Results: The study found 16 cases of suppurative complications out of a total of 224 cases of acute pancreatitis (7.14%). Acute pancreatitis with suppurative complications was most frequently of biliary etiology (56.2%). All suppurations occurred against a background of severe necrotizing hemorrhagic acute pancreatitis. The major elements that contributed to determining the diagnosis were systemic toxicity, persistent sepsis and the association of multiple organ failure, but the element that provided diagnostic certainty was computed tomography (CT scan). The surgical mortality was 18.7%. Conclusions: Acute pancreatitis that evolves unfavorably under medical treatment must be operated on. Necrotic tissues and purulent collections must be evacuated and drained efficiently. Open drainage through laparotomy lowered postoperative mortality. Early surgical intervention had a favorable effect on the prognosis. abstract_id: PUBMED:15560562 False surgical acute abdomen with intermittent acute porphyria. This article presents a case of acute intermittent porphyria admitted to the Surgery Department of C.F. Craiova Hospital between 18.08.2003 and 26.08.2003, then transferred to the Colentina Hospital in Bucharest for diagnosis confirmation and adequate treatment. The purpose of this paper is to bring attention to a rare inherited metabolic disease that, due to its non-specific and often noisy symptoms and the limited possibilities of biochemical, enzymatic and genetic diagnosis, could generate potentially serious confusion.
The presented case illustrates the fact that the acute attack may sometimes be mistaken for an acute surgical condition requiring an emergency operation, with all the aggravating consequences and delay in reaching the real diagnosis. About 1% of acute attacks of porphyria may be fatal. Only the drugs known to be safe should be prescribed. Basic treatment consists of oral and intravenous glucose and hematin administration. abstract_id: PUBMED:2659582 Acute appendicitis, echographic findings. A case of acute appendicitis in which an appendicolith was present is described. The diagnosis was made by plain abdominal radiography and ultrasound, and was confirmed at surgery. The plain abdominal film showed a lamellated, round calcification, projecting over the right iliac wing. Ultrasound revealed a tubular, hypoechogenic structure, containing an oval hyperechogenic area with acoustic shadow. The recent literature was reviewed, and the "graded compression" technique is described. A summary is given of the ultrasound findings of acute appendicitis, and of its most important differential diagnoses in childhood: mesenterial adenitis, and, less frequently, terminal ileitis. abstract_id: PUBMED:9763841 Acute urinary retention in women. Acute urinary retention in women is not a common problem. The incidence is 0.07 per 1000 inhabitants per year. The causes of acute urinary retention can be divided into four groups: obstructive, neurological, pharmacological and psychogenic. More than half of acute urinary retentions occur after surgery or parturition. The most common obstructive cause is a gynaecologic tumor. A psychogenic cause is a diagnosis per exclusionem. The treatment of the acute urinary retention has to be catheterisation before further investigation is done. abstract_id: PUBMED:19468404 Acute postoperative complications of hypospadias repair. Purpose: Complication rates in hypospadias surgery are higher than in other reconstructive procedures. The incidence of complications can be reduced if proper preventive measures are taken. The review aims to highlight the incidences, causes, and preventive measures of acute complications of hypospadias repair. Materials And Methods: Literature reports were reviewed in PubMed using the keywords acute complications of hypospadias repair, wound infection, wound dehiscence, flap necrosis, edema, penile torsion, urethral fistula, bleeding and hematoma, and urethral stent problems. Summaries of all articles were reviewed along with the full text of relevant articles, and the results were analyzed. Results: Besides mentioning the complications of hypospadias repair in individual articles on the subject, we did not come across any separate article on this subject in the published English literature. Fistula is the commonest complication, followed by edema and penile torsion. Conclusions: Most acute complications can be prevented with adherence to the principles of plastic and microsurgery, meticulous preoperative planning, and judicious postoperative care. Deviation from these principles may lead to disaster and even failure of the repair. The aim in hypospadias surgery should be to follow these principles and bring the complication rates below 5% in distal hypospadias and below 10% in proximal hypospadias. abstract_id: PUBMED:7423764 Postischemic disorders in the surgery of acute intestinal obstruction. Acute intestinal obstruction accompanied by disorders of the mesenterial blood circulation is one of the variants of ischemic injury of the intestine.
After recovery of blood circulation in the portion of the intestine subjected to ischemia, the products of autolysis of the intestinal wall enter the liver and the systemic circulation and bring about intoxication and cardiopulmonary insufficiency. Therapeutic measures should be directed at stabilizing hemodynamic and microcirculatory indices, correcting metabolic acidosis and stimulating diuresis. Answer: The provided abstracts do not directly address the question "Acute care surgery: now that we have built it, will they come?" However, they do provide insights into various aspects of acute care surgery, including the management of specific conditions, the evolution of treatment approaches, and the challenges associated with diagnosis and treatment in emergency surgical situations. For instance, the abstract from PUBMED:30243444 discusses the evolution of the management of acute appendicitis, indicating a shift from nonoperative to surgical interventions and back to nonoperative management, suggesting that treatment strategies in acute care surgery are dynamic and responsive to new evidence and trends. The abstract from PUBMED:2655362 highlights the high mortality associated with acute non-calculous cholecystitis and emphasizes the need for early diagnosis and immediate surgery, which is a critical aspect of acute care surgery. PUBMED:15279444 discusses the management of suppurative complications of acute pancreatitis, noting that unfavorable evolution under medical treatment necessitates surgical intervention, which is a decision-making process inherent in acute care surgery. PUBMED:15560562 presents a case of acute intermittent porphyria that was initially mistaken for an acute surgical condition, illustrating the diagnostic challenges that can arise in acute care settings. PUBMED:2659582 and PUBMED:9763841 discuss specific conditions (acute appendicitis and acute urinary retention, respectively) that may present to acute care surgery, while PUBMED:19468404 reviews complications associated with hypospadias repair, highlighting the importance of preventing acute complications in surgical care. Lastly, PUBMED:7423764 addresses postischemic disorders in the surgery of acute intestinal obstruction, emphasizing the importance of managing hemodynamic and metabolic complications in acute surgical care. In summary, while the abstracts do not directly answer the question about the utilization of acute care surgery services ("will they come?"), they collectively underscore the complexity, evolution, and critical nature of acute care surgery in managing emergent and life-threatening conditions. The abstracts suggest that acute care surgery is a vital and evolving field that addresses a wide range of urgent medical conditions, and as such, it is likely that there will be a continued need for these services.
Instruction: Male Urethral, Penile, and Incontinence Surgery: Is Resident Exposure Adequate? Abstracts: abstract_id: PUBMED:26284595 Male Urethral, Penile, and Incontinence Surgery: Is Resident Exposure Adequate? Objective: To evaluate trends in male urethral and penis/incontinence case volumes among urology residents and assess these for adequate surgical training/competency. Methods: Accreditation Council for Graduate Medical Education (ACGME) case logs of urology residents graduating from U.S. programs from 2009 to 2013 were reviewed to determine the surgical volume of select index categories. Male urethral cases encompass urethrectomy and urethroplasty, whereas male penis/incontinence cases include urethral slings and sphincters. Case volumes as "surgeon," "assistant," and "teaching assistant" were reviewed and compared to ACGME minimum requirements. Results: A total of 1032 graduating residents reported case logs. For male urethral surgery, residents reported weighted averages (standard deviation [SD]) of 12.7 (9.0) cases as "surgeon," 1.5 (3.5) cases as "assistant," and 0.2 (1.0) as "teaching assistant." The minimum requirement for these cases is 5. The annual 10th percentiles as "surgeon" ranged from 4 to 5 cases throughout the study period. For male penis/incontinence cases, residents reported weighted averages (SD) of 45.5 (22.7) cases as "surgeon," 3.6 (5.5) cases as "assistant," and 1.5 (3.0) cases as "teaching assistant." The minimum requirement is 10 cases. The 10th percentiles as "surgeon" ranged from 19 to 23 cases. Conclusion: Although the majority of residents met the minimum standard for these cases, about 10% of residents did not meet the requirement for male urethral surgery. In addition, a review of learning curves for these procedures suggests that the ACGME minimum requirements may be insufficient to confer actual competency in skill. Increasing this number in training or specialized postgraduate training programs is needed to provide actual competency. abstract_id: PUBMED:28057997 Male urethral diverticulum uncommon entity: Our experience. Outpouching of the urethral wall can be congenital or acquired. Male urethral diverticulum (UD) is a rare entity. We present 2 cases of acquired and 1 case of congenital male UD. Case 1: A 40-year-old male presented with an SPC and dribbling of urine. Clinically he had a hard perineal swelling. RGU revealed a large diverticulum in the proximal bulbar urethra, an irregular narrow distal urethra and a stricture just beyond the diverticulum. He was managed with perineal exploration, stone removal, diverticulum repair and urethroplasty using the excess diverticular wall. Case 2: A 30-year-old male presented with obstructive lower urinary tract symptoms (LUTS). Retrograde urethrogram (RGU) revealed a bulbar urethral diverticulum akin to an anterior urethral valve, managed endoscopically. At 1-year follow-up the urine stream was satisfactory. Case 3: A 27-year-old male had previously been operated on for a large proximal bulbar urethral stone and presented with incontinence. RGU showed a large proximal bulbar UD with a wide-open sphincter. He was treated with excision of the excess diverticular wall and a penile clamp with pelvic exercises for the incontinence. Congenital UD develops due to imperfect closure of the urethral fold; acquired UDs occur secondary to stricture, infection, trauma, long-standing impacted urethral stones or scrotal/skin flap urethroplasties. RGU and MCU are the best diagnostic techniques to confirm and characterize the UD. Urethral diverticulectomy with urethral reconstruction is the recommended treatment for UD. UD is a rare entity.
Especially in males, congenital ones are even rarer. Management should be individualized. Surgery can involve innovation and/or surgical modifications. We used the excess diverticular flap for stricture urethroplasty in one case. abstract_id: PUBMED:16355730 Prostatopexy as a treatment for urethral sphincter mechanism incompetence in male dogs. Objectives: The incidence of urinary incontinence due to urethral sphincter mechanism incompetence (USMI) in male dogs is relatively rare compared with the incidence in bitches, but the medical management of USMI in male dogs is less rewarding than in bitches. Attempts have been made to manage this condition surgically, either by using urethral bulking agents such as Teflon or by relocating the intrapelvic bladder neck to an intra-abdominal position by vas deferentopexy. This paper reports the response to prostatopexy in male dogs with USMI. Methods: The response to prostatopexy was determined in nine severely incontinent male dogs with USMI that were followed up for periods ranging from 10 months to five years (mean 2.3 years). Results: One dog was cured, four were improved, and no improvement in the frequency or degree of urinary incontinence occurred in the remaining four animals. No complications were seen in any of the dogs. Clinical Significance: Prostatopexy may provide a further method of treating male dogs with USMI that do not respond to medical therapy. abstract_id: PUBMED:30895658 Management of male pelvic fracture urethral injuries: Review and current topics. Pelvic fractures from high-energy blunt force trauma can cause injury to the posterior urethra, known as pelvic fracture urethral injury, which is most commonly associated with unstable pelvic fractures. Pelvic fracture urethral injury should be suspected if a patient with pelvic trauma has blood at the meatus and/or difficulty voiding, and retrograde urethrography should be carried out if the patient is stable. Once urethral injury is confirmed, urinary drainage should be established promptly by placement of a suprapubic tube or primary realignment of the urethra over a urethral catheter. Although pelvic fracture urethral injury is followed by urethral stenosis at a high rate and it has been believed that primary realignment can reduce the risk of developing urethral stenosis, realignment also carries a risk of complicating the stenosis, and its clinical significance remains controversial. Once inflammation and fibrosis have stabilized (generally at least 3 months after the trauma), the optimal management for the resulting urethral stenosis is delayed urethroplasty. Delayed urethroplasty can be carried out via a perineal approach using four ancillary techniques in steps (bulbar urethral mobilization, corporal separation, inferior pubectomy and urethral rerouting). Although pelvic trauma can impair continence mechanisms, continence after repair of pelvic fracture urethral injury is reportedly adequate. Because erectile dysfunction is frequently encountered after pelvic fracture urethral injury and most patients are young with a significant life expectancy, its appropriate management can greatly improve quality of life. In the present article, the key factors in the management of pelvic fracture urethral injury are reviewed and current topics are summarized. abstract_id: PUBMED:9123813 Surgical treatment of male dogs with urinary incontinence due to urethral sphincter mechanism incompetence.
A modified technique for fixation of the deferent ducts to the abdominal wall as a therapy for urinary incontinence caused by urethral sphincter mechanism incompetence (USMI) in male dogs is described, and the results in seven dogs are reported. The goal of this treatment was to achieve an effect similar to colposuspension in female dogs with USMI. An increase in urethral length of an average of 28 mm was obtained (range, 5 to 50 mm, measured radiographically). Preoperatively, the neck of the bladder was located intrapelvically in five of seven dogs. Postoperatively, the neck of the bladder was located intra-abdominally, near the caudo-ventral abdominal wall, in all dogs. After a follow-up period of 12 to 49 months, the response to surgery, based on lack of or decrease of incontinence, was excellent in three dogs, good in another three, and poor in one dog. abstract_id: PUBMED:33040587 A Modified Endoscopic Primary Realignment of Severe Bulbar Urethral Injury. Purpose: Male urethral injury is a common urologic emergency in developing countries. The choice between early and late treatment of urethral injuries is often multifactorial and controversial. The goal of this study is to determine whether early realignment can reduce postsurgical complications and to evaluate the clinical feasibility of emergency endoscopic urethroplasty using a single rigid ureteroscope in the treatment of severe bulbar urethral injury. Patients and Methods: Between September 2013 and March 2019, 15 male adult patients (mean age 35 years; range 21 to 62 years) with severe bulbar urethral injury were enrolled into the current study. The patients mainly presented with dysuria or painful urination (15/15, 100%), urethral bleeding (13/15, 86.7%), and urinary retention (11/15, 73.3%). Six of them had swelling of perineal or scrotal soft tissue, while four had testicular contusion. No pelvic fracture was found in any case on CT scanning of the pelvic cavity. The bulbar urethra was confirmed by endoscopy during the operation to be completely ruptured (grade IV) in all cases. The modified endoscopic primary realignment was performed. Results: This new urethral repair technique was effectively performed in all patients and none converted to open operation. Mean operation time was 42.3 ± 11.5 minutes (28-52 minutes) and the mean Foley catheter indwelling time was 34.5 ± 6.9 days (28-42 days). During a follow-up of 41.3 ± 22.8 months (12-64 months), mild urethral strictures (grade I) developed in 8 patients (53.3%) at 19.7 ± 9.5 weeks (10-27 weeks) postsurgery, and all improved 2.1 ± 0.8 months (1.3-2.9 months) after periodic dilatations of the urethra (4-10 times). Erectile dysfunction (ED) occurred in three patients (20%) after surgery, who recovered from mild ED to normal with oral sildenafil (100 mg, three times a week) for 12 weeks. The International Index of Erectile Function-5 (IIEF-5) score was significantly improved after surgery (M ± SD, 25 ± 3) compared with before (16.4 ± 3.5) (p < 0.05). No incontinence or other complications occurred in any case. Conclusions: Early endoscopic realignment via suprapubic puncture cystostomy with a single rigid ureteroscope provides an effective, feasible, and safe procedure for severe bulbar urethral injury. abstract_id: PUBMED:34175873 New perspectives on the surgical treatment of posterior urethral obstruction. Purpose Of Review: Posterior urethral obstruction (PUO) from prostate surgery for benign and malignant conditions poses a significant reconstructive challenge.
Endoscopic management demonstrates only modest success, and definitive reconstructive solutions are often necessary to limit morbidity and firmly establish posterior urethral continuity. This often demands a combined abdominoperineal approach, pubic bone resection, and even sacrifice of the external urinary sphincter and anterior urethral blood supply. Recently, a robotic-assisted approach has been described. Enhanced instrument dexterity, magnified visualization, and adjunctive measures to assess tissue quality may enable the reconstructive surgeon to engage posterior strictures deep within the confines of the narrow male pelvis and optimize functional outcomes. The purpose of this review is to survey the literature regarding endoscopic, open, and robotic management outcomes for the treatment of PUO, and to provide an updated treatment algorithm based upon the location and complexity of the stricture. Recent Findings: Contingent upon etiology, small case series suggest that robotic bladder neck reconstruction has durable reconstructive outcomes with acceptable rates of incontinence in carefully selected patients. Summary: Initial reports suggest that robotic bladder neck reconstruction for recalcitrant PUO may offer novel reconstructive solutions and durable functional outcomes in select patients. abstract_id: PUBMED:35429421 Characterization of urethral diverticula in males. Introduction: Urethral diverticulum in a male is a rare entity and the literature is limited to case reports and small case series. The aim of our study is to characterize this disease in patients from three Mayo Clinic locations. Materials And Methods: Chart analysis was performed of patients across all three Mayo Clinic sites who had International Classification of Diseases (ICD) codes corresponding to urethral diverticulum or urethral diverticulectomy via CPT code. Data were available for patients who were seen from 6/1/2003 through 10/5/2018. Patients were classified by age, etiology, presenting symptomatology, location, treatment, pathology, and postoperative outcomes. Results: A total of 87 men met the initial search criteria, with 52 having documented urethral diverticula. The most common presenting complaint in these men was incontinence (37%). The majority of diverticula (83%) were within the anterior urethra. The most common diagnostic modalities were retrograde urethrogram (46%) and cystoscopy (50%). Most diverticula were iatrogenic (77%). Of the men who were diagnosed, 42% went on to have diverticulectomy. Median follow-up was 1.5 years. Eighteen percent of patients had persistent urinary symptoms following diverticulectomy, with incontinence being the most common finding. Postoperative complications were experienced by 26% of patients, with the most prevalent complication being urethrocutaneous fistula. The patients who did not undergo diverticulectomy either had other surgical procedures to manage their coexisting conditions or were managed medically. Conclusion: Urethral diverticulum in males is a rare yet important entity that requires special consideration, especially in those who have had prior surgery within the lower urinary tract.
Objectives: Pubovaginal fascial sling along with urethral diverticulectomy has been advised as the most appropriate anti-incontinence procedure for female stress urinary incontinence (SUI) with concomitant urethral diverticula (UD). We believe that a suburethral synthetic mesh tape sling can also be safely used in some patients with concomitant SUI and UD. Herein, we present our experience with simultaneous treatment of UD and SUI by urethral diverticulectomy and suburethral synthetic mesh tape sling. Methods: From 2003 to 2008, there were three patients with UD and SUI in our institution. They received transvaginal urethral diverticulectomy and a suburethral synthetic mesh tape sling simultaneously. Videourodynamics was done before and three months after the surgery. Results: Preoperative pelvic magnetic resonance imaging and videourodynamic study showed UD over the distal urethra and SUI in all three patients. Urinalysis disclosed mild pyuria in two of the patients, and they both received intravenous antibiotic treatment to eradicate the infection prior to the surgery. All underwent urethral diverticulectomy with a suburethral synthetic mesh tape sling. The postoperative videourodynamic study showed no recurrence of UD or SUI. With a mean follow-up of 33.3 months, there was no infection or exposure of the synthetic mesh tape. Conclusions: In patients with UD and SUI, a suburethral sling using synthetic mesh can be as effective and safe as a fascial sling in selected patients. abstract_id: PUBMED:33016661 Anterior urethral trauma in childhood: presentation of two cases. Introduction: The incidence of urethral trauma in children is low given that they have a short, mobile urethra, largely protected by the pubis. Clinical Case: We describe two clinical cases illustrating the variety of presentations anterior urethral lesions can have. The first child had acute urinary retention two months after falling astride, with subsequent diagnosis of urethral stricture following serial voiding cystourethrogram (SVCU). The second child had urinary incontinence after falling off a horse. SVCU showed a urethral diverticulum, characterized through ultrasound imaging. Both were treated with deferred surgery. Remarks: Urethral lesion should always be suspected after perineal trauma. SVCU is the gold standard diagnostic technique. Deferred urethroplasty is the treatment of choice, with good results, but there is a risk of incontinence and impotence. Answer: The study by PUBMED:26284595 aimed to evaluate the adequacy of surgical training and competency in male urethral, penile, and incontinence surgery among urology residents. The study reviewed Accreditation Council for Graduate Medical Education (ACGME) case logs of urology residents graduating from U.S. programs between 2009 and 2013. It focused on select index categories, including urethrectomy, urethroplasty, urethral slings, and sphincters. The findings indicated that while the majority of residents met the minimum standard for these cases, approximately 10% did not meet the requirement for male urethral surgery. Furthermore, the study suggested that the ACGME minimum requirements might be insufficient for actual competency in skill, implying that an increase in the number of these cases during training or specialized postgraduate training programs may be necessary to ensure actual competency.
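The percentile-versus-minimum comparison above can be reproduced from raw case-log counts with a few lines of code. A minimal sketch follows, assuming hypothetical dummy counts drawn from a distribution chosen purely for illustration, not the actual ACGME logs.

```python
# Minimal sketch: comparing simulated per-resident case counts against the
# ACGME minimum of 5 "surgeon" cases for male urethral surgery cited above.
# The counts below are hypothetical dummy data, NOT the actual ACGME case logs.
import numpy as np

ACGME_MINIMUM_URETHRAL = 5

rng = np.random.default_rng(0)
case_counts = rng.poisson(lam=12.7, size=1032)  # illustrative distribution only

tenth_percentile = np.percentile(case_counts, 10)
share_below_minimum = np.mean(case_counts < ACGME_MINIMUM_URETHRAL)

print(f"10th percentile of 'surgeon' cases: {tenth_percentile:.0f}")
print(f"Share of residents below the ACGME minimum: {share_below_minimum:.1%}")
```

Counting the residents who fall below the threshold, rather than looking only at the mean, is what exposes a shortfall of the kind the abstract reports.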
Instruction: Can or should blood pressure be measured at pharmacy offices? Abstracts: abstract_id: PUBMED:26303331 Comparison of ambulatory blood pressure measurement with home, office and pharmacy measurements: is arterial blood pressure measured at pharmacy reliable? Rationale, Aims And Objectives: Standardizing arterial blood pressure (BP) measurement is difficult because it is performed by different people, such as doctors or pharmacy employees. We investigated the reliability of different BP measurement methods. Methods: The study was conducted in an internal medicine service with 160 patients in Ankara, Turkey. First, the subjects' BP was measured by a doctor. Then, 24-hour BP monitoring devices were placed. Participants were then instructed to measure their BP both at home and at a pharmacy. The next day, arterial BP was measured by the doctor for the second time. Results: The prevalence rates of masked and white coat hypertension were 8.8% (n = 14) and 8.1% (n = 13), respectively. There were no statistically significant differences between ambulatory measurement and home, office and pharmacy measurements (P > 0.05). The consistency rate between ambulatory and home measurements was 97.5% (kappa = 0.947, P < 0.001). The consistency rate between ambulatory and pharmacy measurements was 82.5% (kappa = 0.634, P < 0.001). When compared with ambulatory measurement, the most sensitive (98.0%) and most specific (96.8%) method was home measurement. There was a moderate positive correlation between ambulatory and the other measurements in both systolic and diastolic values. There was a positive and very strong correlation between ambulatory and home measurements of systolic and diastolic ABP values (r = 0.926 and r = 0.968, respectively), and there was a statistically significant relation between these measurements (P < 0.001). Conclusion: The results of all measurement methods were close to each other. The most sensitive and specific method was home measurement when compared with ambulatory measurement. However, office and pharmacy measurements also had high sensitivity and specificity. abstract_id: PUBMED:20652831 Clinical value of blood pressure measurement in the community pharmacy. Aim Of The Study: To investigate whether the measurement of blood pressure in the community pharmacy is a valuable method to diagnose hypertension, to assess the need for and the effectiveness of anti-hypertensive treatments, or, in general, to make clinical decisions. Method: Information was extracted from articles published in English and in Spanish, from January 1989 to December 2009, in journals indexed in MEDLINE and EMBASE. To perform the search, multiple specified terms related to the community pharmacy setting, to blood pressure measurement and to the comparison and agreement between blood pressure measurement methods were used. Selected articles were those that: (1) compared and/or measured the agreement (concordance) between community pharmacy blood pressure measurements obtained on repeated occasions, or (2) compared and/or measured the agreement between the community pharmacy blood pressure measurement method and other measurement methods used in clinical practice for decision-making purposes: blood pressure measurement by a physician, by a nurse, and home or ambulatory blood pressure monitoring.
Articles were included and analyzed by two investigators independently, who extracted the main results of the manuscripts, emphasizing the assessment of the blood pressure measurement methods used and the statistical analysis performed. Results: Only three studies comparing the community pharmacy blood pressure measurement method with other methods, and one comparing repeated measurements of community pharmacy blood pressure, were found. Moreover, these works present significant biases and limitations, both in terms of method and statistical analysis, which make it difficult to draw consistent conclusions. Conclusion: Further research of high quality is needed, whose results can guide clinical decision-making based on the community pharmacy blood pressure measurement method. abstract_id: PUBMED:34163281 The Development of an Educational Video on Blood Pressure Measurement for Pharmacy Students. Introduction: High blood pressure is an important worldwide health issue. Pharmacists can perform multifaceted tasks in hypertension management, such as measuring blood pressure. At a time when the use of educational videos in health professions education has increased, an educational video might be an option for teaching blood pressure measurement skills to pharmacy students. This project aimed to develop an educational video tailored to pharmacy students on oscillometric blood pressure measurement in a community pharmacy setting that can be used as a self-instruction video. Methods: The video was created with support from the university's multimedia center. The video development was roughly divided into pre-production, production, and post-production. Students' satisfaction with and perception of the video were surveyed. Results: An 11-minute 33-second self-instruction video in the German language on proper oscillometric blood pressure measurement tailored for pharmacy students was created. Along with descriptive slides, the video delineates the necessary steps of blood pressure measurement in a community pharmacy setting in a role-play, to support students in communication with the patient. Results of a survey on the satisfaction and perception of the video from thirty-seven pharmacy students were included in the analysis and revealed that the video was well accepted by pharmacy students. Moreover, approximately 95% responded that instructional videos should be included in future pharmacy education. Conclusion: We successfully developed an educational video on oscillometric blood pressure measurement for a community pharmacy setting. This work is a valuable form of support for faculty members who intend to develop educational videos. This might be of interest especially during the coronavirus disease 2019 (COVID-19) pandemic, where distance learning has become highly relevant. abstract_id: PUBMED:11412571 Can or should blood pressure be measured at pharmacy offices? Objective: To evaluate concordance between blood-pressure (BP) measurements at the pharmacy office (PhO) and the nurse office (NO) in the health care centre (HCC). Design: Descriptive study. Setting: Community. Methods: 36 PhO in the province of Albacete participated voluntarily, each taking 3 BP measurements, without previous instructions and with their usual measurement devices, from 3-5 subjects, who were then referred to their HCC so that another 3 BP measurements could be taken at the NO with a mercury sphygmomanometer under the usual measurement conditions (blinded to the measurements taken at the PhO).
These subjects were given a stamped envelope to send the BP measurements taken at the NO to the Official Pharmacy College. Results: All 6 BP measurements were completed for 96 subjects with a mean age of 57.3 years (63% women). The differences between PhO and NO were > 5 mmHg in 58 subjects (60.4%) for SBP and in 45 subjects (46.9%) for DBP, and more than 15 mmHg in 17 subjects (17.7%) for SBP and in 9 subjects (9.4%) for DBP. The mean differences were 9.5 mmHg (SD, 8.4 mmHg) and 6.4 mmHg (SD, 5.3 mmHg), respectively. The rate of use of the terminal digit 0 was 22% at the PhO and 46.5% at the NO. In most PhO, electronic measurement devices not validated for clinical use were used to measure BP. Conclusions: The PhO can be a good place for hypertension screening, but the pharmacist must be trained in correct BP measurement and use validated electronic devices. Standardized measurement conditions and the use of validated electronic devices must also be extended to the NO. abstract_id: PUBMED:23788809 Assessing pharmacy students' ability to accurately measure blood pressure using a blood pressure simulator arm. Objective: To compare student accuracy in measuring normal and high blood pressures using a simulator arm. Methods: In this prospective, single-blind study involving third-year pharmacy students, simulator arms were programmed with prespecified normal and high blood pressures. Students measured preset normal and high diastolic and systolic blood pressure using a crossover design. Results: One hundred sixteen students completed both blood pressure measurements. There was a significant difference between the accuracy of high systolic blood pressure (HSBP) measurement and normal systolic blood pressure (NSBP) measurement (mean HSBP difference 8.4 ± 10.9 mmHg vs NSBP 3.6 ± 6.4 mmHg; p < 0.001). However, there was no difference between the accuracy of high diastolic blood pressure (HDBP) measurement and normal diastolic blood pressure (NDBP) measurement (mean HDBP difference 6.8 ± 9.6 mmHg vs. mean NDBP difference 4.6 ± 4.5 mmHg; p = 0.089). Conclusions: Pharmacy students may need additional instruction and experience with taking high blood pressure measurements to ensure they are able to accurately assess this important vital sign. abstract_id: PUBMED:6614279 The effect of the dental setting on blood pressure measurement. Hypertension screening is a common activity in dental offices. However, this setting may elevate blood pressure, resulting in excessive patient concern and referrals. To experimentally test this question, blood pressure was measured in 60 subjects in a dental setting and a medical setting. Systolic and diastolic pressure differences were small but significantly higher (4.5 mmHg and 5.5 mmHg, respectively) in the dental setting. abstract_id: PUBMED:17429501 Evaluation of pharmacy students' blood pressure and heart rate measurement skills after completion of a patient assessment course. Objectives: To evaluate pharmacy students' skills at measuring brachial artery blood pressure and radial heart rate at the completion of a patient assessment course in the second-professional year of a doctor of pharmacy (PharmD) program. Methods: Students enrolled in a required patient assessment laboratory course (n = 83) participated in this study. Each student was randomly matched with a classmate and manually measured the classmate's blood pressure by auscultation of the brachial artery and heart rate by palpation of the radial pulse.
Results: The student-device absolute disagreement was 6.5 ± 4.8 mmHg for systolic blood pressure (SBP), 6.2 ± 4.5 mmHg for diastolic blood pressure (DBP), and 5.3 ± 4.0 beats per minute (BPM) for heart rate. Student and machine measurements of SBP, DBP, and HR significantly correlated. Conclusions: Pharmacy students in the second-professional year of a PharmD program demonstrated competence in but not mastery of manual blood pressure and heart rate measurement. These skills need further refinement during third- and fourth-professional year APPEs. abstract_id: PUBMED:18698382 Assessment of manual blood pressure and heart rate measurement skills of pharmacy students: a follow-up investigation. Objectives: To evaluate the impact of a laboratory course on the manual blood pressure (BP) and heart rate (HR) measurement skills of pharmacy students. Methods: After 1 lecture and 1 laboratory session on vital sign technique, pharmacy students enrolled in a patient assessment laboratory course were randomly paired with a classmate and manually measured the classmate's BP and HR. Within 2 minutes, the BP and HR were measured by an Omron 711-AC automatic monitor. The same assessment procedures with manual and automatic measurements were repeated near the end of the laboratory course. Student skills were also evaluated through direct observation by faculty members. Results: Student and machine measurements of systolic blood pressure (SBP), diastolic blood pressure (DBP), and HR significantly correlated at the final assessment (r = 0.92, 0.83, and 0.91, respectively; p < 0.001 for each). The proportion of student and device values agreeing to within 5 units (mmHg and beats-per-minute) at baseline versus at the final assessment significantly improved from 38% to 67% for SBP, 51% to 77% for DBP, and 52% to 79% for HR (p < 0.001 for each). The percentage of students correctly performing all 13 AHA-endorsed steps for BP measurement improved significantly from 4.6% to 75.6% (p < 0.001). Conclusions: Significant improvement and the attainment of competency in manual vital signs measurement were demonstrated by pharmacy students after 11 weeks of skill rehearsal in a laboratory course. abstract_id: PUBMED:21555294 Accuracy of automated community pharmacy-based blood pressure devices. Objectives: To estimate accuracy and reliability of pharmacy-based fixed-location automated blood pressure devices (ABPDs) and to test the hypothesis that an ABPD is less accurate with more variable results than a home blood pressure device (HBPD). Methods: Randomized study comparing 99 ABPDs with an Omron Digital HBPD in Indiana pharmacies. Each site was visited by one of five study investigators. A questionnaire was used to collect information about ABPDs. To test the ABPD against the HBPD, investigators measured their own blood pressure with each device three times in random order. Results: No significant differences were observed between HBPD and ABPD diastolic readings, whereas a statistically significant difference between HBPD and ABPD systolic readings was found. ABPD measurements are as reliable as HBPD measurements when comparing single measurements from each, but reliability differs with more than one reading. Conclusion: Compared with a valid HBPD, the ABPD produces inaccurate systolic blood pressure values but similar reliability. Regular blood pressure measurement by health professionals remains optimal for managing hypertensive individuals.
abstract_id: PUBMED:22590472 The role of suboptimal home-measured blood pressure control for cognitive decline. Aim: We aim to analyze whether there is any correlation between suboptimal home-/self-measured blood pressure values and the results of neuropsychological screening tests for early cognitive impairment. Methods: We studied 325 patients with treated hypertension. Mean age was 66.12 (±10.1) years. There were 119 (36.6%) male and 206 (63.4%) female patients, among them 52 (16%) with atrial fibrillation. Neuropsychological tests performed were the Mini-Mental State Examination, Montreal Cognitive Assessment, and Hachinski Ischemic Score; additionally, home-measured blood pressure was used. Results: There is a nonlinear age- and risk factor-dependent correlation between early stages of cognitive impairment and suboptimal home-measured blood pressure. Conclusion: The use of specific and sensitive neuropsychological tests for early cognitive impairment in patients with suboptimal home-measured blood pressure is effective in everyday practice. Answer: Blood pressure measurement at pharmacy offices is a topic of interest and debate. According to the study by PUBMED:11412571, pharmacy offices can be suitable locations for hypertension screening, provided that pharmacists are properly trained in accurate blood pressure measurement techniques and use validated electronic devices. The study found that there were significant differences in blood pressure readings between pharmacy offices and nurse offices in health care centers, suggesting that standardization of measurement conditions and the use of validated devices is crucial for obtaining reliable readings. Another study, PUBMED:20652831, highlighted the need for high-quality research to guide clinical decision-making based on blood pressure measurements taken at community pharmacies. This study found that existing research had significant biases and limitations, making it difficult to draw consistent conclusions about the reliability of pharmacy blood pressure measurements. PUBMED:26303331 reported that there was no statistically significant difference between ambulatory blood pressure measurements and those taken at home, office, or pharmacy, with home measurements being the most sensitive and specific compared to ambulatory measurements. This suggests that under certain conditions, pharmacy blood pressure measurements can be reliable. PUBMED:23788809 assessed pharmacy students' ability to measure blood pressure accurately using a simulator arm and found that students may need additional instruction and experience, especially with high blood pressure measurements. PUBMED:17429501 evaluated pharmacy students' skills at measuring blood pressure and heart rate and found that while students demonstrated competence, they did not master the skill, indicating the need for further refinement during their education. PUBMED:18698382 showed that pharmacy students' manual blood pressure and heart rate measurement skills significantly improved after a laboratory course, suggesting that with proper training, individuals in pharmacy settings can accurately measure blood pressure. In summary, blood pressure can be measured at pharmacy offices, but it is essential that pharmacists are well-trained, use validated measurement devices, and follow standardized procedures to ensure the accuracy and reliability of the readings (PUBMED:11412571, PUBMED:20652831, PUBMED:26303331, PUBMED:23788809, PUBMED:17429501, PUBMED:18698382).
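The consistency rates and kappa values cited above (from PUBMED:26303331) are standard agreement statistics. A minimal sketch follows, assuming hypothetical dichotomized readings rather than the study's data, showing how raw agreement and Cohen's kappa are computed once each subject's result from two methods is classified as hypertensive (1) or not (0).

```python
# Minimal sketch with hypothetical data: raw agreement and Cohen's kappa between
# two BP measurement methods after dichotomizing each subject's classification.

ambulatory = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]  # dummy classifications, not study data
pharmacy   = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]

n = len(ambulatory)
p_observed = sum(a == p for a, p in zip(ambulatory, pharmacy)) / n

# Chance agreement: probability both methods say "1" plus both say "0"
p1_amb, p1_pha = sum(ambulatory) / n, sum(pharmacy) / n
p_chance = p1_amb * p1_pha + (1 - p1_amb) * (1 - p1_pha)

kappa = (p_observed - p_chance) / (1 - p_chance)  # chance-corrected agreement
print(f"Raw agreement: {p_observed:.1%}, Cohen's kappa: {kappa:.2f}")
```

Raw agreement alone overstates concordance when one category dominates, which is why the abstracts report kappa alongside the consistency rate.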
Instruction: Does a regular exercise program for ankylosing spondylitis influence body image? Abstracts: abstract_id: PUBMED:12195632 Does a regular exercise program for ankylosing spondylitis influence body image? Objective: To investigate whether participation in a regular exercise programme by patients with ankylosing spondylitis (AS) had a positive impact on their body image. Methods: A semi-structured interview and several questionnaires assessed exercise compliance, mood and other facets of disease impact. Body image was assessed using the Arthritis Body Experience Scale, which measured the concepts of body totality and body self-consciousness. Results: No correlation was found between exercise for AS and body image. However, body totality correlated with acceptance of illness (r = 0.37, p < 0.05) and both measures of body image inversely correlated with depression (p < 0.05). Conclusion: Normally, regular exercise has a positive effect on body image, but in AS, from this small study, it seems that the recommended exercise program does not influence body image, which in these patients seems more closely related to mood. This warrants further evaluation, as a better understanding of influences on body image might improve compliance with therapeutic regimens. abstract_id: PUBMED:20460037 Body image in patients with ankylosing spondylitis. Objectives: Physical disabilities generally cause disturbances in a patient's body image. The aim of the present study is to assess the prevalence of different aspects of body image disturbances (attractiveness/self-confidence, negative body judgement, accentuation of external appearance, worry about possible physical deficits, sexual problems and physical vitality) in patients with ankylosing spondylitis (AS) in the context of the severity of impairment, mood and pain intensity. Methods: Fifty-six patients with AS and 48 healthy controls were investigated using a questionnaire for assessing one's body (1) and the body image questionnaire (2). Furthermore, in the AS patients, medical parameters including BASMI and BASFI, pain intensity and state of well-being were assessed. Results: In comparison to the controls, the AS patients reported significantly more worries about possible physical deficits. The mildly impaired patients (including significantly more women) were even more insecure than the severely impaired patients. Furthermore, the patients considered themselves as being less attractive. The AS patients reported better physical vitality than the controls. This result might be due to patients carrying out regular physical activity and performing exercises with therapeutic value. There were no differences between the groups concerning sexual problems. The total group of AS patients showed normal scores in the state of well-being and did not report pain intensity as being higher than what is expected in chronic pain patients in general. Conclusions: Rheumatologists should address problems in body image in patients with AS, and in severe cases psychological therapy should be initiated. abstract_id: PUBMED:26850804 Investigating the effects of a multidimensional exercise program on symptoms and antiinflammatory status in female patients with ankylosing spondylitis. Objective: The purpose of this study is to investigate the effects of a multidimensional exercise program on symptoms and antiinflammatory status in female patients with ankylosing spondylitis (AS).
Methods: The Bath indexes, Dougados Functional Index (DFI), Health Assessment Questionnaire in Spondyloarthropathies (HAQ-S), Ankylosing Spondylitis Quality of Life (ASQoL) and Beck Depression Inventory (BDI) were used to evaluate twenty-four female AS patients. ESR, CRP, TNF-α and IL-6 were also analyzed. All patients were assessed at baseline and at 3-week intervals until week 12. A multidimensional exercise program was applied three times a week. Results: There were significant differences in Bath Ankylosing Spondylitis Global Index (BAS-G) and Disease Activity Index (BASDAI), HAQ-S, ASQoL and BDI scores (p < 0.05). The levels of ESR, CRP and IL-6 fluctuated slightly. TNF-α values differed significantly from baseline only at 3 and 12 weeks (p = 0.048 and p < 0.001, respectively). Conclusion: We concluded that a multidimensional exercise program should be taken into consideration for AS patients due to its positive symptomatic and antiinflammatory effects. abstract_id: PUBMED:29989439 Correlation between body image perception and spinopelvic parameters in ankylosing spondylitis. Objective: To identify relationships between spinopelvic parameters and body image perception in ankylosing spondylitis (AS). Methods: The study and control groups comprised 107 AS patients (18 women and 89 men) and 40 controls. All underwent anteroposterior and lateral radiography of the whole spine, including the hip joints, and completed clinical questionnaires. The radiographic parameters examined were sagittal vertical axis, sacral slope, pelvic tilt, pelvic incidence, lumbar lordosis, thoracic kyphosis, spinosacral angle (SSA), and spinopelvic angle (SPA). The Trunk Appearance Perception Scale (TAPS) and the Scoliosis Research Society 22 appearance questionnaire (SRSapp) were administered to evaluate body image perception. Results: The patient and control groups were found to differ significantly in terms of sagittal vertical axis, sacral slope, pelvic tilt, pelvic incidence, lumbar lordosis, SSA, and SPA. However, no significant intergroup difference was observed for thoracic kyphosis (p > .05). Correlation analysis revealed significant relationships between radiographic parameters and body image perception. Multiple regression analysis was performed to identify predictors of body image perception scores, and the results revealed that age, SSA, and SPA significantly predicted SRSapp scores and that age and SSA predicted TAPS scores. Conclusions: AS patients and normal controls were found to differ significantly in terms of sagittal spinopelvic parameters. Correlation analysis revealed significant relationships between radiographic parameters and body image perception. In particular, age, SSA, and SPA were found to be significant predictors of body image perception scores in AS patients.
All participants were administered the Coping Orientations to Problems Experienced (COPE) questionnaire in order to evaluate attitudes to coping with stress, and the Multidimensional Body-Self Relations Questionnaire (MBSRQ) to evaluate body image. The Bath Ankylosing Spondylitis Disease Activity Index (BASDAI) and Bath Ankylosing Spondylitis Functional Index (BASFI) were used to evaluate AS patients' disease activities and functional capacities. Results: There was no difference between the two groups in terms of COPE scores. The MBSRQ 'health evaluation' subscale scores were lower and the 'fitness orientation' scores higher in the AS group. The COPE active coping subscale had a weak, positive correlation with MBSRQ total score and a weak, negative correlation with BASFI score. MBSRQ total score had a moderate, negative correlation with BASFI score, and a weak, negative correlation with BASDAI score. Conclusion: Attitudes to coping with stress in AS patients with no accompanying medical disease or psychiatric disorder may not differ from those in healthy controls. Negative health evaluation and fitness orientation must be characteristics considered in psychotherapeutic interventions applied to these patients. In addition, psychotherapeutic interventions directed toward coping with stress and body image may be especially useful in active stages of the disease and in patients with limited functional capacity. abstract_id: PUBMED:18985351 Effects of a home-based exercise program on quality of life, fatigue, and depression in patients with ankylosing spondylitis. The aim of this trial was to investigate the effects of a 12-week home-based exercise program (HEP) on quality of life (QOL) and fatigue in patients with ankylosing spondylitis (AS). Forty-three patients with AS were included in this study. Group 1 was given a HEP; Group 2 served as the control group. The functional capacity (Bath Ankylosing Spondylitis Functional Index), disease activity (Bath Ankylosing Spondylitis Disease Assessment Index), fatigue (Multidimensional Assessment of Fatigue Scale), depression (Beck Depression Inventory scores), and QOL (Short Form 36) of all participants were evaluated. There were significant improvements in all the parameters in both groups after the treatment. The improvements in all the parameters were greater in the exercise group than in the control group. Home-based exercise programs are very effective in improving QOL and reducing fatigue. Because of these advantages, HEP should be advised as part of the management program in AS in addition to medical treatments. abstract_id: PUBMED:31594196 Is the whole-body cryotherapy a beneficial supplement to exercise therapy for patients with ankylosing spondylitis? Background: The treatment of ankylosing spondylitis (AS) patients requires a combination of non-pharmacological (education, exercise and physical therapy) as well as pharmacological treatment modalities. The optimal management of AS still remains unresolved. Objective: The aim was to measure and compare the effects of whole-body cryotherapy (WBC) at -110°C and at -60°C and of exercise therapy alone on disease activity and the functional parameters of patients with AS. Methods: Ninety-two patients were allocated to three groups: WBC at -110°C or at -60°C (each concurrent with exercise therapy), or exercise therapy alone. Disease activity and the functional parameters of the patients were measured at study entry and at the end of the 8-day treatment.
Results: Supervised therapy, irrespective of the program, led to a significant reduction in disease activity (Bath Ankylosing Spondylitis Disease Activity Index: BASDAI, Ankylosing Spondylitis Disease Activity Score: ASDAS-CRP), disease-related back pain, fatigue, and duration and intensity of morning stiffness, and a significant improvement in the patients' functional capacity (Bath Ankylosing Spondylitis Functional Index: BASFI), spine mobility (Bath Ankylosing Spondylitis Metrology Index: BASMI) and chest expandability, with no changes in the levels of CRP. Following therapy, the group that underwent cryotherapy at -110°C manifested significantly reduced disease activity (BASDAI) compared with exercise therapy only (p = 0.024). Conclusion: Adding cryotherapy at -110°C to exercise therapy led to significantly reduced disease activity expressed in BASDAI, compared with exercise therapy alone. abstract_id: PUBMED:16813473 Effects of a multimodal exercise program for people with ankylosing spondylitis. Background And Purpose: Few randomized controlled studies have examined the effects of exercise in patients with ankylosing spondylitis (AS). This study investigated the effects of a 12-week, multimodal exercise program in patients with AS. Subjects: A convenience sample of 30 patients with AS (18 male, 12 female), with a mean age of 34.9 years (SD=6.28), participated in the study. Twenty-six subjects were classified as having stage I AS and 4 subjects were classified as having stage II AS according to the modified New York Criteria. Methods: This study was a randomized controlled trial. Subjects were assigned either to a group that received an exercise program or to a control group. The exercise program consisted of 50 minutes of multimodal exercise, including aerobic, stretching, and pulmonary exercises, 3 times a week for 3 months. Subjects in both groups received medical treatment for AS, but the exercise group received the exercise program in addition to the medical treatment. All subjects received a physical examination at baseline and at 12 weeks. The examinations were conducted under the supervision of a physician who specialized in physical medicine and rehabilitation and included the assessment of spinal mobility using 2 methods: clinical measurements (chin-to-chest distance, Modified Schober Flexion Test, occiput-to-wall distance, finger-to-floor distance, and chest expansion) and inclinometer measurements (gross hip flexion, gross lumbar flexion, and gross thoracic flexion). In addition, vital capacity was measured by a physiologist, and physical work capacity was evaluated by a doctorally prepared exercise instructor. Results: The measurements of the exercise group for chest expansion, chin-to-chest distance, Modified Schober Flexion Test, and occiput-to-wall distance were significantly better than those of the control group after the 3-month exercise period. The spinal movements of the exercise group improved significantly at the end of the exercise program, but those of the control group showed no significant change. In addition, the results showed that the posttraining value of gross thoracic flexion of the exercise group was significantly higher than that of the control group. Physical work capacity and vital capacity values improved in the exercise group but decreased in the control group.
Discussion And Conclusion: In this study, a multimodal exercise program including aerobic, stretching, and pulmonary exercises provided in conjunction with routine medical management yielded greater improvements in spinal mobility, work capacity, and chest expansion. abstract_id: PUBMED:24123522 Body image disturbance and quality of life in Chinese patients with ankylosing spondylitis. Background: Ankylosing spondylitis (AS) is a painful inflammatory disease of the axial skeleton that can cause body image disturbance (BID) and may lead to psychological changes in AS patients. Objective: This study investigated the effect of AS patients' physical and psychological status on BID and quality of life (QOL). Methods: Overall, 112 AS patients (84 males, 28 females; mean age, 32.8 ± 10.8) and 127 healthy age-matched individuals (control group; 78 males, 49 females; mean age, 36.9 ± 12.6) were screened for inclusion in the single-centre study. Multiple instruments assessing physical function, psychological function, body image, and QOL were applied. Multivariate stepwise regression analyses were used to determine factors associated with BID, anxiety, and depression. Results: Of 103 AS patients and 121 control subjects, AS patients exhibited greater BID-associated social function, social life, and role function impairments (BID Questionnaire [BIDQ] 2.42 vs. 1.02; 2.20 vs. 1.19; 2.54 vs. 0.72, respectively), lower behavioural avoidance (BIDQ 3.07 vs. 3.49), and similar decreases in vitality and mental health (p < .01). Impaired social function predicted anxiety and depression in AS patients. Disease status (daily activity, general health, and pain) and psychological status (BID, anxiety, and depression) predicted poor QOL (p < .05). Conclusion: BID may play a significant role in causing clinical psychological dysfunction in AS patients, including anxiety and depression. Further research is required to fully assess whether these observations are similar in patients with variant AS severity. BID may be useful in clinical prognostic assessment and AS management. Practitioner Points: AS patients investigated in this study exhibited BID. There were significant relationships between the BIDQ, disease and psychological variables, and QOL. Disease status, BID, and anxiety and depression indicated a poor QOL. abstract_id: PUBMED:9818658 Exercise in ankylosing spondylitis: how much is optimum? Objective: To evaluate how much exercise patients with ankylosing spondylitis perform on a regular basis, to analyze demographic and clinical variables that might influence adherence to exercise, and to determine the effect of exercise on disease activity and function. Methods: We analyzed 4282 patients who completed a self-administered questionnaire regarding physical exercise. To determine the relationship between exercise, demographic data, and clinical variables, patients who exercised 2 to 4 hours and 10 or more hours per week were compared to nonexercising controls matched for age, sex, and disease duration. Results: Most of the patients reported 2 to 4 hours of exercise or no exercise. The group who performed moderate exercise had improved function and lower disease activity (p < 0.001, p < 0.015, respectively). The group who performed intensive exercise had improved function, but no difference in disease activity was found (p = 0.033, p = 0.394, respectively). Adherence to a regular exercise regime is associated with rheumatologist follow-up, belief in the benefits of exercise, and a higher education level.
Conclusion: This study suggests there is an optimum duration for exercise performed over a weekly period. Consistency, rather than quantity, is of most importance. Individuals most likely to follow this regime attend a rheumatologist, believe that exercise is of benefit, and are in the higher education category. Those who are less educated and followed by general practitioners should be targeted. Answer: The influence of a regular exercise program on body image in patients with ankylosing spondylitis (AS) has been explored in several studies, but the results are not entirely conclusive. One study found no correlation between exercise for AS and body image, as measured by the Arthritis Body Experience Scale. However, it did find that body totality correlated with acceptance of illness and that both measures of body image inversely correlated with depression (PUBMED:12195632). This suggests that while exercise may not directly influence body image, it could have an indirect effect by improving mood and illness acceptance, which in turn could influence body image. Another study reported that AS patients considered themselves less attractive and had more worries about possible physical deficits compared to controls, but they also reported better physical vitality, which might be attributed to regular physical activity and therapeutic exercises (PUBMED:20460037). This indicates that while AS can negatively impact certain aspects of body image, engagement in physical activity may help improve other aspects, such as vitality. Furthermore, a study investigating the effects of a multidimensional exercise program on symptoms and anti-inflammatory status in female AS patients concluded that such a program should be considered due to its positive effects on symptoms (PUBMED:26850804). While this study did not directly measure body image, improvements in symptoms could potentially lead to a better body image. Additionally, a study found significant relationships between radiographic parameters and body image perception in AS patients, with age, SSA, and SPA being significant predictors of body image perception scores (PUBMED:29989439). This suggests that the physical changes associated with AS can influence body image perception. In summary, while a regular exercise program may not directly influence body image in AS patients, it can have positive effects on mood, acceptance of illness, and physical vitality, which could indirectly affect body image. Moreover, the physical manifestations of AS itself can impact body image perception, and exercise may help mitigate some of these effects.
Instruction: Type III home sleep testing versus pulse oximetry: is the respiratory disturbance index better than the oxygen desaturation index to predict the apnoea-hypopnoea index measured during laboratory polysomnography? Abstracts: abstract_id: PUBMED:26129636 Type III home sleep testing versus pulse oximetry: is the respiratory disturbance index better than the oxygen desaturation index to predict the apnoea-hypopnoea index measured during laboratory polysomnography? Objectives: In its guidelines on the use of portable monitors to diagnose obstructive sleep apnoea, the American Academy of Sleep Medicine endorses home polygraphy with type III devices recording, at a minimum, airflow, respiratory effort and pulse oximetry, but advises against simple pulse oximetry. However, oximetry is widely available and simple to use in the home. This study was designed to compare the ability of the oxygen desaturation index (ODI), based on oximetry alone from a stand-alone pulse oximeter (SPO) and from the oximetry channel of the ApneaLink Plus (ALP), with the respiratory disturbance index (RDI), based on four channels from the ALP, to predict the apnoea-hypopnoea index (AHI) from laboratory polysomnography. Design: Cross-sectional diagnostic accuracy study. Setting: Sleep medicine practice of a multispecialty clinic. Participants: Patients referred for laboratory polysomnography with suspected sleep apnoea. We enrolled 135 participants, of whom 123 attempted the home sleep testing and 73 had at least 4 hours of satisfactory data from SPO and ALP. Interventions: Participants had home testing performed simultaneously with both an SPO and an ALP. The 2 oximeter probes were worn on different fingers of the same hand. The ODI for the SPO was calculated using Profox software (ODI(SOX)). For the ALP, RDI and ODI were calculated using both technician scoring (RDI(MAN) and ODI(MAN)) and the ALP computer scoring (RDI(RAW) and ODI(RAW)). Results: The receiver-operator characteristic areas under the curve for AHI ≥ 5 were RDI(MAN) 0.88 (95% confidence limits 0.81-0.96), RDI(RAW) 0.86 (0.76-0.94), ODI(MAN) 0.86 (0.77-0.95), ODI(RAW) 0.84 (0.75-0.93) and ODI(SOX) 0.83 (0.73-0.93). Conclusions: We conclude that the RDI and the ODI, measured at home on the same night, give similar predictions of the laboratory AHI, measured on a different night. The differences between the two methods are small compared with the reported night-to-night variation of the AHI. abstract_id: PUBMED:30565351 Comparison of apnoea-hypopnoea index and oxygen desaturation index when identifying obstructive sleep apnoea using type-4 sleep studies. The concordance of different indices from type-4 sleep studies in diagnosing and categorising the severity of obstructive sleep apnoea is not known. This is a critical gap as type-4 sleep studies are used to diagnose obstructive sleep apnoea in some settings. Therefore, we aimed to determine the concordance between the flow-based apnoea-hypopnoea index (AHIflow50%) and oxygen desaturation index (ODI3%) by measuring them concurrently. Using a random sub-sample of 296 from a population-based cohort who underwent two-channel type-4 sleep studies, we assessed the concordance between AHIflow50% and ODI3%. We compared the prevalence of obstructive sleep apnoea of various severities as identified by the two methods, and determined their concordance using the kappa coefficient (κ). Participants were aged (mean ± SD) 53 ± 0.9 years (48% male).
The body mass index was 28.8 ± 5.2 kg/m2 and neck circumference was 37.4 ± 3.9 cm. The median AHIflow50% was 5 (inter-quartile range 2, 10) and median ODI3% was 9 (inter-quartile range 4, 15). The obstructive sleep apnoea prevalence reported using AHIflow50% was significantly lower than that reported using ODI3% at all severity thresholds. Although 90% of those with moderate-severe obstructive sleep apnoea classified using AHIflow50% were identified by using ODI3%, only 46% of those with moderate-severe obstructive sleep apnoea classified using ODI3% were identified by AHIflow50%. The overall concordance between AHIflow50% and ODI3% in diagnosing and classifying the severity of obstructive sleep apnoea was only fair (κ = 0.32), better for males (κ = 0.42 [95% confidence interval 0.32-0.57] versus 0.22 [95% confidence interval 0.09-0.31]), and lowest for those with a body mass index ≥ 35 (κ = 0.11). In conclusion, ODI3% and AHIflow50% from type-4 sleep studies are at least moderately discordant. Until further evidence is available, the use of ODI3% as the measure of choice for type-4 sleep studies is recommended cautiously. abstract_id: PUBMED:36045914 Impact of Desaturation Patterns versus Apnea-Hypopnea Index in the Development of Cardiovascular Comorbidities in Obstructive Sleep Apnea Patients. Various phenotypes of obstructive sleep apnea (OSA) have been recently described and are poorly assessed by the commonly used polysomnographic indices, such as the apnea-hypopnea index and oxygen desaturation index. Nocturnal hypoxemia is the hallmark of OSA, and new quantitative markers, such as hypoxic burden or desaturation severity, have been shown to be associated with cardiovascular (CV) mortality. The purpose of this overview is to review the endophenotypical and clinical characteristics of OSA, the current metrics, and to analyze different measurements of hypoxemia in OSA to predict the cardiovascular impact (e.g. hypoxic burden). Potential interest of multidimensional models to classify OSA, such as the BAVENO classification, is also discussed, with the goal of focusing on specific endophenotypes that are likely to develop CV comorbidities, in order to guide clinicians to more aggressive management of OSA in these individuals. abstract_id: PUBMED:29564732 Validation of the oxygen desaturation index in the diagnostic workup of obstructive sleep apnea. Introduction: Obstructive sleep apnea (OSA) is common, and diagnosis requires expensive and laborious testing to assess the apnea hypopnea index (AHI). We performed an analysis to explore the relationship between the oxygen desaturation index (ODI) as measured with pulse oximetry and the AHI in our large portable monitoring (PM) database to find an optimal cutoff value for the ODI in order to be able to exclude AHI ≥ 5 on PM. Methods: Three thousand four hundred thirteen PM recordings were randomly divided into a training set (N = 2281) and a test set (N = 1132). The optimal cutoff for the ODI to exclude an AHI ≥ 5 on PM was determined in the training set and subsequently validated in the test set. Results: Area under the curve of the ODI to exclude an AHI ≥ 5 on PM was 0.997 in the training set and 0.996 in the test set. In the training set, the optimal cutoff to predict an AHI < 5 was an ODI < 5. Using this cutoff in the test set provided a sensitivity of 97.7%, a specificity of 97.0%, a positive predictive value of 99.2%, and a negative predictive value of 91.4%.
Conclusion: An ODI < 5 predicts an AHI < 5 with high sensitivity and specificity when measured simultaneously using the same oximeter during PM recording. abstract_id: PUBMED:23732021 Usefulness of desaturation index for the assessment of obstructive sleep apnea syndrome in children. Objective: Obstructive sleep apnea syndrome (OSAS) can result in learning, behavioral, and even growth problems in children. Overnight polysomnography (PSG) is the gold standard for the diagnosis of OSAS, but this diagnostic technique is time consuming and inconvenient. It is well documented that patients with OSAS experience oxygen desaturation during sleep. The purpose of this study was to determine if the level of oxygen desaturation can be used to predict OSAS in children. Methods: This retrospective study included 148 children aged from 3 to 12 years; all were snorers and were suspected of having OSAS. Objective PSG parameters, nocturnal pulse oximeter data and subjective modified Epworth Sleepiness Scale (ESS) scores were assessed. Mild, moderate, and severe OSAS was classified as apnea-hypopnea index (AHI) ≥ 1 and < 5, AHI ≥ 5 and < 10, and AHI ≥ 10, respectively. Results: Of the 148 children, 130 (87.8%) were diagnosed with OSAS. There was no correlation between the AHI and the modified ESS score (r = -0.048, p = 0.563). The desaturation index (DI) had the highest correlation with AHI (r = 0.886, p < 0.001). Using the DI cut-off value of 2.05 to predict OSAS in children had a good positive predictive value of 98.1%. The optimal DI cut-off values for predicting the occurrence of mild, moderate, and severe OSAS were 2.05 (sensitivity: 77.7%; specificity: 88.9%), 3.50 (sensitivity: 83.8%; specificity: 86.5%) and 4.15 (sensitivity: 89.1%; specificity: 86.0%), respectively. Conclusions: These findings suggest that DI, as determined using a nocturnal pulse oximeter, may be a good tool for predicting both the presence and the severity of OSAS in children. abstract_id: PUBMED:37176501 Predictive Power of Oxygen Desaturation Index (ODI) and Apnea-Hypopnea Index (AHI) in Detecting Long-Term Neurocognitive and Psychosocial Outcomes of Sleep-Disordered Breathing in Children: A Questionnaire-Based Study. Pediatric obstructive sleep apnea can negatively affect children's neurocognitive function and development, hindering academic and adaptive goals. Questionnaires are suitable for assessing neuropsychological symptoms in children with sleep-disordered breathing. The study aimed to evaluate the effectiveness of using the Oxygen Desaturation Index compared to the Obstructive Apnea-Hypopnea Index in predicting long-term consequences of sleep-disordered breathing in children. We conducted a retrospective analysis of respiratory polysomnography recordings from preschool and school-age children (mean age: 5.8 ± 2.8 years) and followed them up after an average of 3.1 ± 0.8 years from the home-based polysomnography. We administered three validated questionnaires to the parents/caregivers of the children by phone. Our results showed that children with an Oxygen Desaturation Index (ODI) greater than one event per hour exhibited symptoms in four domains (physical, school-related, Quality of Life [QoL], and attention deficit hyperactivity disorder [ADHD]) at follow-up, compared to only two symptoms (physical and school-related) found in children with an Obstructive Apnea-Hypopnea Index greater than one event per hour at the time of diagnosis.
Our study also found a significant correlation between the minimum SpO2 (%) recorded at diagnosis and several outcomes, including Pediatric Sleep Questionnaire (PSQ) scores, physical, social, and school-related outcomes, and ADHD index at follow-up. These results suggest that the Oxygen Desaturation Index could serve as a valuable predictor of long-term symptoms in children with sleep-disordered breathing, which could inform treatment decisions. Additionally, measuring minimum SpO2 levels may help assess the risk of developing long-term symptoms and monitor treatment outcomes. abstract_id: PUBMED:15572543 Night-to-night variation of the oxygen desaturation index in sleep apnoea syndrome. The current study investigated the night-to-night variability and diagnostic accuracy of the oxygen desaturation index (ODI), as measured by ambulatory monitoring, in the diagnosis of mild and moderate obstructive sleep apnoea-hypopnoea syndrome. To assess the variability of the ODI, 35 patients were monitored at home during 7 consecutive nights by means of a portable recording device, the MESAM-IV. The ODI variability factor and the influence of age, body mass index (BMI), alcohol, and body position were assessed. Furthermore, the diagnostic accuracy of the MESAM-IV was measured by comparison with polysomnographical outcomes in 18 patients. During home recording, the median ODI was 10.9 (interquartile range: 5.8-16.1) across the patients. Although the reliability of the ODI was adequate, the probability of placing the patient in the wrong severity category (ODI ≤ 15 or ODI > 15) when only a single recording was taken was 14.4%. ODI variability was not significantly influenced by age, BMI, time spent in a supine position, or mild dosages of alcohol. A good correlation was found between the apnoea-hypopnoea index and the ODI. In conclusion, the findings suggest that the diagnostic accuracy of the MESAM-IV is strong, since the oxygen desaturation index is correlated with the apnoea-hypopnoea index. In most obstructive sleep apnoea-hypopnoea syndrome patients, oxygen desaturation index variability is rather small, and screening could be reliably based on single 1-night recordings. abstract_id: PUBMED:29988275 The Ignored Parameter in the Diagnosis of Obstructive Sleep Apnea Syndrome: The Oxygen Desaturation Index. Objective: The apnea-hypopnea index (AHI) does not provide information about the apnea depth and length. We aimed to evaluate the correlation of the oxygen desaturation index (ODI) with AHI and the subjective symptoms because it is known that hypoxia plays an important role in morbidity and complications of obstructive sleep apnea syndrome (OSAS). Methods: We reviewed the data of patients who applied to our clinic between 2010 and 2014 and underwent polysomnography (PSG) with a diagnosis of suspected sleep apnea. The demographic and anthropometric data of the patients were recorded. Epworth sleepiness scale (ESS) and values of AHI and ODI were analyzed in PSG. Results: A total of 321 patients were divided into four groups, according to AHI as follows: 82 (25.5%) common snoring, 77 (24%) mild obstructive sleep apnea (OSA), 71 (22.1%) moderate OSA, and 91 (28.3%) severe OSA. A strong correlation was detected between AHI and ODI (p < 0.005 and r = 0.904) in all patient groups. There was a positive correlation between AHI and ESS (p < 0.05 and r = 0.435), but the correlation of ESS with ODI was stronger than that with AHI (p < 0.05 and r = 0.504).
Conclusion: The subjective symptoms of sleep apnea syndrome seem to be closely related to oxygen desaturations. Hypoxia during apnea periods of OSA is important; therefore, we suggest that ODI is as valuable as AHI in diagnosing and grading the OSAS. abstract_id: PUBMED:30736881 Correlation Between Oxygen Saturation and Pulse Tracing Patterns on Overnight Oximetry With Normal Desaturation Index Is an Independent Predictor of Obstructive Sleep Apnea. Study Objectives: Overnight pulse oximetry (OPO) is commonly used as a screening test for obstructive sleep apnea. Heart rate variability (HRV) correlates well with apnea-hypopnea index during polysomnography (PSG). We hypothesized that visual correlation of episodic increase in HRV with minimal oxygen desaturations on normal OPO (oxygen desaturation index less than 5 events/h) is predictive of OSA. Methods: A retrospective analysis of patients undergoing OPO and PSG in 1 year was performed. We included only OPO performed on room air and interpreted as normal. Visual correlation between simultaneous increase in HRV and minimal oxygen desaturation was independently assessed by three raters, resulting in the consensus agreement. The primary outcome was presence of OSA on the subsequent PSG. Results: Of 936 patients with OPO and PSG, 109 patients had normal overnight oximetry study on room air. Of these, 65 (60%) were females, median (interquartile range) age was 54 years (44, 67), body mass index was 29 kg/m2 (25, 32), and the median oxygen desaturation index was 1.8 events/h (1, 2.7). Consensus agreement identified 54 patients with visual correlation between pulse and minimal oxygen desaturations. Thirty-two patients (29%) were found to have OSA on PSG, of which 24 (75%) could have been accurately predicted by the consensus agreement (odds ratio 4.70, 95% confidence interval 1.87-11.8, P < .001). When adjusted for pertinent clinical and demographic variables, consensus agreement was independently associated with diagnosis of OSA on subsequent PSG (odds ratio 5.6, 95% confidence interval 1.76-20.9, P = .003). Conclusions: Visual correlation between episodic increase in HRV and minimal oxygen desaturations on OPO is an independent predictor of OSA, and a promising marker for clinical use. abstract_id: PUBMED:33018585 Is Cumulative Time of Oxygen Desaturation a Better Predictor of Cardiovascular Mortality than Apnoea Hypopnoea Index? In this paper, we explored the link between sleep apnoea and cardiovascular disease (CVD) using a time-series statistical measure of sleep apnoea-related oxygen desaturation. We compared the performance of a hypoxic measure derived from the polysomnogram with the Apnoea Hypopnoea Index (AHI) in predicting CVD mortality in patients of the Sleep Heart Health Study. We estimated the relative cumulative time of SpO2 below 90% (Tr90) using pulse oximetry signals from polysomnogram recordings as the hypoxic measure of desaturation patterns. Then, the survival curves for hypoxia quintiles were evaluated for the prediction of CVD mortality and were compared with the results using AHI for prediction. We also calculated the Cox hazard ratios for Tr90 and AHI. Our results show that the Tr90 was a better predictor of CVD mortality outcomes than AHI. Answer: Based on the provided abstracts, the respiratory disturbance index (RDI) and the oxygen desaturation index (ODI) measured at home give similar predictions of the laboratory apnoea-hypopnoea index (AHI), which is measured during laboratory polysomnography.
The study by PUBMED:26129636 found that the receiver-operator characteristic areas under the curve for AHI ≥ 5 were close for both RDI and ODI, suggesting that the differences between the two methods are small compared to the reported night-to-night variation of the AHI. Therefore, it can be concluded that the RDI is not necessarily better than the ODI to predict the AHI measured during laboratory polysomnography. However, it is important to note that other studies have found varying levels of concordance between different indices used to diagnose and categorize the severity of obstructive sleep apnoea. For instance, PUBMED:30565351 reported that the prevalence of obstructive sleep apnoea reported using a flow-based AHI was significantly lower than that reported using ODI at all severity thresholds, and the overall concordance between the two in diagnosing and classifying the severity of obstructive sleep apnoea was only fair. Additionally, PUBMED:36045914 highlights that various phenotypes of obstructive sleep apnoea are poorly assessed by the commonly used polysomnographic indices such as AHI and ODI, suggesting that other measurements of hypoxemia might be better predictors of cardiovascular impact in OSA patients. In summary, while the RDI and ODI provide similar predictions of the AHI, the choice between these indices may depend on the specific context and the need for additional assessment of hypoxemia or other phenotypic characteristics of obstructive sleep apnoea.
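The cutoff-validation analyses summarized above (for example, testing whether an ODI < 5 excludes an AHI ≥ 5, or reporting an area under the ROC curve for each index) reduce to a 2x2 confusion matrix plus a rank statistic. A minimal Python sketch with entirely hypothetical paired values; the function names, data, and cutoffs are illustrative, not taken from any cited study:

def diagnostic_metrics(index_values, ahi_values, index_cutoff=5.0, ahi_cutoff=5.0):
    # Sensitivity/specificity/PPV/NPV of "index >= index_cutoff"
    # as a test for "AHI >= ahi_cutoff".
    tp = fp = tn = fn = 0
    for idx, ahi in zip(index_values, ahi_values):
        test_pos, disease = idx >= index_cutoff, ahi >= ahi_cutoff
        if test_pos and disease:
            tp += 1
        elif test_pos:
            fp += 1
        elif disease:
            fn += 1
        else:
            tn += 1
    return {"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp), "npv": tn / (tn + fn)}

def roc_auc(index_values, ahi_values, ahi_cutoff=5.0):
    # AUC via its rank interpretation: the probability that a randomly
    # chosen case scores higher than a randomly chosen non-case
    # (ties count one half).
    cases = [i for i, a in zip(index_values, ahi_values) if a >= ahi_cutoff]
    noncases = [i for i, a in zip(index_values, ahi_values) if a < ahi_cutoff]
    wins = sum(1.0 if c > n else 0.5 if c == n else 0.0
               for c in cases for n in noncases)
    return wins / (len(cases) * len(noncases))

odi = [2.1, 7.4, 0.8, 12.0, 4.9, 6.3]   # hypothetical home ODI values
ahi = [3.0, 9.5, 1.2, 15.1, 6.0, 4.0]   # hypothetical laboratory AHI values
print(diagnostic_metrics(odi, ahi))
print(roc_auc(odi, ahi))

A real analysis would also report confidence intervals and validate the chosen cutoff on a held-out set, as in the training/test split used by PUBMED:29564732.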
Instruction: Is Helicobacter pylori infection a risk factor for migraine? Abstracts: abstract_id: PUBMED:18071631 Helicobacter pylori infection as an environmental risk factor for migraine without aura. Helicobacter pylori (H. pylori) infection has recently been associated with various extraintestinal pathologies and migraine. The aim of this study was to investigate the correlation of the H. pylori infection with the pathogenesis of migraine without aura, especially in cases not affected by endogenous risk factors, like hereditary pattern or hormonal fluctuations. A total of 49 outpatients (37 females and 12 males; age range: 19-47 years; mean age: 31 ± 14 years) affected by migraine without aura were evaluated. We divided them into two subgroups: (a) with positive familial history and/or a menstrual type of migraine; (b) with negative familial history and a menstruation-unrelated type of migraine. H. pylori infection was diagnosed by the (13)C-urea breath test (INFAI test). Control subjects consisted of 51 patients without any primary headache history (38 females; mean age of 32 ± 14.4 years; range 21-49 years), who underwent upper gastrointestinal (GI) endoscopy for investigation of anaemia or non-ulcer dyspepsia. H. pylori detection was based on the histologic analysis of gastric mucosa biopsy. The prevalence of H. pylori infection was significantly higher in the migraineurs without aura compared to controls (p=0.016). The prevalence of H. pylori infection was significantly high in the mixed and in the female group of our patients without other predisposing factors for migraine without aura (81 and 87% respectively), while in the same groups with predisposing factors (menstruation and/or family history) the prevalence was only 36 and 37% respectively (p=0.001 for the first group and p=0.002 for the second group). Our results seem to highlight the role of H. pylori infection as a probable independent environmental risk factor for migraine without aura, especially in patients that are not genetically or hormonally susceptible to migraine. abstract_id: PUBMED:15742606 Is Helicobacter pylori infection a risk factor for migraine? A case-control study. Background: The relation between Helicobacter pylori and migraine and the usefulness of eradication treatment on headache is controversial. The aim of this study was to determine whether Helicobacter pylori infection is a risk factor for migraine and whether the eradication of the bacterium can reduce frequency, duration and severity of clinical attacks of the disease. Methods: A total of 70 consecutive patients with migraine with aura and without aura who came to the Gastroenterology polyclinic with various complaints were enrolled in the study and compared with a group of 60 matched controls. Helicobacter pylori infection was diagnosed by histopathological biopsies, which were taken by endoscopy (Olympus-GIFXQ240 endoscope). The diagnosis and classification of migraine was made according to the International Headache Society (IHS) criteria. We assessed the frequency, duration and severity of clinical attacks of migraine before and after eradication treatment. The eradication control of Helicobacter pylori was made by Helicobacter pylori fecal antigen test by PCR after 2 months. Results: Helicobacter pylori positivity was more frequent in the migrainous patients compared with controls. 84.6% of patients with eradication treatment and 75% of classically treated patients reported benefit from the treatment.
Conclusion: Helicobacter pylori should be examined in migrainous patients, and eradication of the infection may be helpful for the treatment of the disease. abstract_id: PUBMED:30299058 Extragastric symptoms associated with Helicobacter pylori infection. This review article highlights the modern views of manifestations associated with Hp infection. The data are presented about the negative impact of the infection on the state of the musculoskeletal system, the development of migraine, progression of diffuse liver disease and the risk of developing liver cancer. The paper also provides information on the possible effects of Helicobacter pylori infection on the formation of halitosis, tympanosclerosis, male reproductive health disorders, colorectal cancer, Alzheimer's and Parkinson's disease, pre-eclampsia during pregnancy, and idiopathic chronic urticaria. In addition, the negative relationship between Hp infection, bronchial asthma, and inflammatory bowel disease is considered. abstract_id: PUBMED:10669671 Characteristics and prognosis of myocardial infarction in patients with normal coronary arteries. Study Objectives: Myocardial infarction with angiographically normal coronary arteries (MINC) is a life-threatening event with many open questions for physicians and patients. There are little data concerning the prognosis for patients with MINC. Design: Retrospective follow-up study. Setting: Tertiary referral center. Patients: Patients with MINC were investigated and compared to age- and sex-matched control subjects with myocardial infarction due to coronary artery disease (CAD). The patients were examined clinically using stress exercise and hyperventilation tests. Migraine and Raynaud's symptoms were determined by means of a standardized questionnaire. Serum lipoproteins; the seroprevalence of cytomegalovirus, Helicobacter pylori, and Chlamydia pneumoniae infections; and the most frequent causes of thrombophilia were assessed. Measurements And Results: From >4,300 angiographies that were performed between 1989 and 1996, 21 patients with MINC were identified. The mean ± SD patient age at the time of myocardial infarction was 42 ± 7.5 years. When compared to control subjects (n = 21), patients with MINC had fewer risk factors for CAD. In contrast, MINC patients had more frequent febrile reactions prior to myocardial infarction (six patients vs zero patients; p < 0.05), and the migraine score was significantly higher (7.1 ± 6.3 vs 2.2 ± 4.1; p < 0.01). The seroprevalence of antibodies against cytomegalovirus, C pneumoniae, and H pylori tended to be higher in patients with MINC and CAD as compared to matched healthy control subjects. Three patients with MINC vs none with CAD had coagulopathy. During follow-up (53 ± 37 months), no major cardiac event occurred in the MINC group; no patients with MINC vs nine with CAD (p = 0.0001) underwent repeated angiography. Conclusion: High migraine score and prior febrile infection together with a lower cardiovascular risk profile are compatible with an inflammatory and a vasomotor component in the pathophysiology of the acute coronary event in MINC patients. The prognosis for these patients is excellent. abstract_id: PUBMED:11135029 Chronic Helicobacter pylori infection and migraine: a case-control study. Objective: To determine whether chronic Helicobacter pylori infection is a risk factor for migraine. Background: Preliminary studies have shown a high prevalence of Helicobacter pylori infection in patients with primary headaches.
Methods: One hundred three consecutive patients with migraine were enrolled in the study and compared with a group of 103 matched controls. Helicobacter pylori infection was diagnosed by means of both (13)C-urea breath test and serology. Results: Of patients with migraine, 30.1% were positive for Helicobacter pylori, compared with 31.1% of controls (P = NS). The odds ratio for migraine associated with chronic Helicobacter pylori infection was 0.96 (95% confidence interval, 0.51 to 1.80). Demographic, clinical, and psychological characteristics of Helicobacter pylori-positive migraineurs were compared with those of migrainous patients without infection. Helicobacter pylori-positive patients had a significantly (P < .05) lower incidence of food sensitivity than Helicobacter pylori-negative patients. No significant difference was found in any other feature examined. Conclusions: Our study suggests that chronic Helicobacter pylori infection is not more frequent in patients with migraine than in controls and that infection does not modify clinical features of the disease. abstract_id: PUBMED:11472381 Migraine and function of the immune system: a meta-analysis of clinical literature published between 1966 and 1999. Mechanisms underlying migraine precipitation are largely unknown. A role of the immune system in migraine precipitation is a matter of debate because of the association of atopic disorders and migraine. Recently, it was demonstrated that migraineurs benefit from eradication of a Helicobacter pylori infection, which substantiates a possible role for (sub-clinical) infections in precipitation of migraine. Since 1966, about 45 clinical investigations have reported on alterations of immune function in migraine patients, which we present in this review. Changes of serum levels of complement and immunoglobulins, histamine, cytokines and immune cells were found in some of these studies but in most cases not corroborated by others. Migraineurs suffering from comorbid atopic disorders show elevated plasma IgE levels, but patients without a type I hypersensitivity do not. Histamine plasma levels are chronically elevated in migraineurs, and interictally decreased lymphocyte phagocytotic function and increased plasma tumor necrosis factor alpha (TNFα) levels were found, and may be related to increased infection susceptibility. The cause of this increased susceptibility is unclear but most likely is a result of chronic stress, a well-known suppressor of the immune system. Stress relief enhances immune activity and triggers a burst of circulating vasoactive compounds that function as mediators of inflammation and potential precipitators of a migraine attack in vulnerable subjects. In conclusion, in the clinical literature of the past decades, there is no clear-cut evidence of an immune dysfunction in migraineurs, but we cannot totally exclude the possibility of an altered immune function in migraineurs. Discrepancies in the literature most likely are caused by the divergent patterns of sample collection relative to the time of the attack. We propose stringent definition of sample collection times for future studies of immune function in migraine patients. abstract_id: PUBMED:25356058 Association between Helicobacter pylori infection and migraine: a meta-analysis. Aim: To quantify the association between Helicobacter pylori (H. pylori) infection and migraine. Methods: A systematic literature search of PubMed and EMBASE was conducted from inception to December 2013.
Studies that provided data dealing with H. pylori infection in patients with migraine, as well as healthy controls, were selected. Meta-analysis was carried out using the odds ratio (OR) with a fixed or random effects model, and a 95%CI for the OR was calculated. An unconditional logistic regression model was used to analyze potential parameters related to H. pylori prevalence. Subgroup analyses were conducted as methods of detection and evidence grade. Results: Five case-control studies published between 2000 and 2013 were finally identified, involving 903 patients, with a total H. pylori infection rate of 39.31%. The prevalence of H. pylori infection was significantly greater in migraineurs than in controls (44.97% vs 33.26%, OR = 1.92, 95%CI: 1.05-3.51, P = 0.001). A sensitivity test indicated that no single study dominated the combined results. Univariate regression analysis found that publication year, geographical distribution and evidence grade were relevant to the results and were the main reason for the heterogeneity. Subgroup analysis found a significantly greater infection rate of H. pylori in Asian patients with migraine, but no statistically significant infection rate in European patients. The ORs were 3.48 (95%CI: 2.09-5.81, P = 0.000) and 1.19 (95%CI: 0.86-1.65, P = 0.288), respectively. Conclusion: The pooled data suggest a trend of more frequent H. pylori infections in patients with migraine. abstract_id: PUBMED:35643992 A cross-sectional study on the association between Helicobacter pylori infection and headache. Background: The relationship between chronic Helicobacter pylori (HP) infection and headache has been discussed for long; nevertheless, the results of the studies are still contrasting. Objective: This cross-sectional study is aimed to investigate a possible association between HP and headache, mainly migraine. Methods: We screened, by a self-administered questionnaire, the subjects undergoing a breath test or an esophagogastroduodenoscopy. Migraine was diagnosed according to the international criteria. Results: A total of 3914 patients underwent a breath test and 2200 an esophagogastroduodenoscopy at two hospitals, in Piedmont (Italy), in a 5-year period; a total of 1362 questionnaires were included in the study. The mean age of the subjects was 53 years; there were 777 women (57%). HP was detected in 364 (27%) subjects. A total of 702 (51%) subjects suffered from headache: migraine with aura was diagnosed in 176 subjects (176/702, i.e., 25% of the headache group; 176/1362, i.e., 13% of the total population); migraine without aura in 98 subjects (98/702, i.e., 14% of the headache group; 98/1362, i.e., 7% of the total). The logistic regression model did not detect any significant association between HP infection and headache, while a significant association between HP and headache frequency (p =0.009) was found, independently of age, gender, comorbidity, and diagnostic category. Conclusion: Our study does not reveal an association between chronic HP infection and migraine. However, since HP is significantly associated with higher headache frequency, a role for HP as a risk factor for headache chronification, possibly underlain by inflammatory mechanisms, may be supposed. abstract_id: PUBMED:10479964 Helicobacter pylori and headache N/A abstract_id: PUBMED:35597897 Relationship between Helicobacter pylori infection and white matter lesions in patients with migraine. 
Background/aim: White matter lesions (WML) are more frequently observed in migraine patients than in the average population. Associations between Helicobacter pylori (H. pylori) infection and different extraintestinal pathologies have been identified. Here, we aimed to investigate the association between H. pylori infection and WML in patients diagnosed with episodic migraine. Materials And Methods: A retrospective study was conducted with 526 subjects with a diagnosis of episodic migraine. Hyperintensity of WML had been previously evaluated in these patients with brain magnetic resonance imaging (MRI) examinations. Previous endoscopic gastric biopsy histopathological examination of the same patients and reports on H. pylori findings were recorded. The demographic characteristics of the patients, such as age, gender and chronic systemic diseases such as hypertension and diabetes mellitus (DM), were recorded. Statistical evaluation was made. Results: Evaluation was made among 526 migraine patients who met the inclusion criteria, comprising 397 (75.5%) females and 129 (24.5%) males with a mean age of 45.57 ± 13.46 years (range, 18-69 years). WML was detected on brain MRI in 178 (33.8%) patients who were also positive for H. pylori (p < 0.05). In H. pylori-positive subjects with migraine, WML were observed at a 2.5-fold higher incidence on brain MRI (odds ratio: 2.562, 95% CI 1.784-3.680). WML was found to be more significant in patients with hypertension and migraine than those without (p < 0.001). Older age was also found to be associated with WML (OR = 1.07, 95% CI: 0.01-0.04, p < 0.001). Age (p < 0.001), H. pylori (p < 0.001), hypertension (p < 0.001), and hypertension + DM (p < 0.05) had significant associations in predicting WML according to the multivariate logistic regression analysis. The presence of hypertension had a higher odds ratio value than the other variables. Conclusion: It was concluded that H. pylori infection, as a chronic infection, can be considered a risk factor in developing WML in subjects with migraine. Answer: The relationship between Helicobacter pylori (H. pylori) infection and migraine has been explored in various studies, with some suggesting a potential association. A study by PUBMED:18071631 found a significantly higher prevalence of H. pylori infection in migraineurs without aura compared to controls, particularly in patients without other predisposing factors for migraine, suggesting H. pylori infection as a probable independent environmental risk factor for migraine without aura. Similarly, PUBMED:15742606 reported that H. pylori positivity was more frequent in migrainous patients compared with controls, and that eradication of the infection may be helpful for the treatment of the disease. A meta-analysis by PUBMED:25356058 also supported the trend of more frequent H. pylori infections in patients with migraine, with a significantly greater infection rate in Asian patients with migraine. However, the study by PUBMED:11135029 did not find a higher frequency of chronic H. pylori infection in patients with migraine compared to controls, suggesting that the infection does not modify clinical features of the disease. PUBMED:35643992 did not reveal an association between chronic H. pylori infection and migraine but found a significant association between H. pylori and higher headache frequency, which may suggest a role for H. pylori as a risk factor for headache chronification. Additionally, PUBMED:35597897 found that H.
pylori infection can be considered a risk factor in developing white matter lesions (WML) in subjects with migraine, which are more frequently observed in migraine patients. In summary, while some studies indicate a potential link between H. pylori infection and migraine, particularly in certain populations or under specific conditions, other studies do not support a strong association. The evidence is mixed, and further research may be needed to fully understand the relationship between H. pylori infection and migraine.
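The pooled odds ratio reported by the meta-analysis above (PUBMED:25356058) is conventionally obtained by combining per-study log odds ratios with inverse-variance weights. A minimal fixed-effect Python sketch; the 2x2 counts below are invented for illustration, and a random-effects model (e.g. DerSimonian-Laird) would additionally estimate between-study variance, which the heterogeneity noted in that meta-analysis would call for:

import math

def study_log_or(a, b, c, d):
    # Log odds ratio and its (Woolf) variance for one 2x2 table:
    # a = exposed cases, b = unexposed cases,
    # c = exposed controls, d = unexposed controls.
    return math.log((a * d) / (b * c)), 1 / a + 1 / b + 1 / c + 1 / d

def pooled_or(tables):
    # Fixed-effect (inverse-variance) pooled OR with a 95% CI.
    num = den = 0.0
    for table in tables:
        log_or, var = study_log_or(*table)
        weight = 1.0 / var
        num += weight * log_or
        den += weight
    mean = num / den
    half_width = 1.96 / math.sqrt(den)
    return (math.exp(mean), math.exp(mean - half_width), math.exp(mean + half_width))

# Hypothetical counts: (H. pylori+ migraineurs, H. pylori- migraineurs,
#                        H. pylori+ controls,   H. pylori- controls)
tables = [(45, 55, 30, 70), (60, 90, 50, 100), (35, 40, 25, 50)]
print(pooled_or(tables))  # (pooled OR, lower 95% bound, upper 95% bound)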
Instruction: Do gastrointestinal transit times and colonic dimensions change with time since spinal cord injury? Abstracts: abstract_id: PUBMED:21102573 Do gastrointestinal transit times and colonic dimensions change with time since spinal cord injury? Study Design: Long-term follow-up study. Objectives: To determine whether gastrointestinal transit times (GITTs) and colonic dimensions change during the first or subsequent decades after spinal cord injury (SCI). Setting: Aarhus University Hospital, Denmark. Methods: GITT and colonic dimensions were evaluated by means of radio-opaque markers. Group A (n=12) was investigated 1 year after SCI and again 13 (range 11-14) years later. Group B (n=10) was studied 19 (range 9-36) years after injury and again 12 (range 11-12) years later. All had been treated with conservative bowel management. Results: In group A, the median GITT 1 year after injury was 4.3 (range 1.1-6.5) days and 13 years later, it was 3.2 (range 1.3-6.5) days, P=0.96. In group B, the median GITT 19 years after injury was 3.4 (range 0.6-5.9) days and 12 years later, it was 3.2 (range 1.9-5.5) days, P=0.77. Neither of the two groups experienced a significant change in the diameter of the caecum/ascending colon, transverse colon, descending colon or the sigmoid during long-term follow-up. Megacolon was present in four patients at baseline and in five at follow-up. Conclusion: GITTs and colonic dimensions did not change, either during the first decade or long after SCI. abstract_id: PUBMED:11093323 Gastrointestinal and segmental colonic transit times in patients with acute and chronic spinal cord lesions. Study Design: Longitudinal study among patients with acute and chronic spinal cord injuries (SCI). Objectives: To compare total gastrointestinal transit times (GITT) and segmental colorectal transit times (CTT) in SCI patients with acute and chronic lesions to those of healthy volunteers. Furthermore, to examine the impact of time elapsed since injury on GITT and CTT, and finally to compare the pattern of colorectal dysfunction in patients with supraconal versus conal/cauda equina lesions. Setting: Surgical Research Unit and Department of Neurosurgery, University Hospital of Aarhus, Denmark. Methods: Patients took 10 radio-opaque markers on six consecutive days and an abdominal X-ray was taken on day 7. GITT and CTTs were computed from the number of markers in the entire colorectum and in each colorectal segment respectively. We studied 26 patients with acute spinal cord lesions (15 supraconal and 11 conal/cauda equina lesions; time since injury = 11-24 days) and 18 patients were available for follow-up 6-14 months later. Results were compared to 24 healthy volunteers. Results: In patients with acute supraconal or conal/cauda equina lesions, GITT and CTTs of the ascending, transverse, and descending colon were significantly prolonged, but rectosigmoid transit time was only significantly prolonged in patients with conal/cauda equina lesions. In patients with chronic supraconal lesions, GITT and CTTs of the transverse colon and the descending colon were significantly prolonged. In patients with chronic conal/cauda equina lesions, GITT and CTT of the transverse, the descending colon and the rectosigmoid were significantly prolonged. Thus, supraconal SCI resulted in generalized colonic dysfunction whereas chronic conal/cauda equina lesions resulted in severe rectosigmoid dysfunction. Conclusion: SCI results in severely prolonged colonic transit times both in the acute and chronic phase.
However, the type of colorectal dysfunction depends on the level of SCI. abstract_id: PUBMED:18645578 Reproducibility and validity of radiographically determined gastrointestinal and segmental colonic transit times in spinal cord-injured patients. Background: Total gastrointestinal transit times (GITT) and segmental colonic transit times (CTT) are commonly used to describe bowel function in individuals with spinal cord injury (SCI). Study Design: Reproducibility study. Objectives: To describe inter- and intrasubjective as well as interobserver variations in GITT and segmental CTT in patients with SCI. Furthermore, to study associations between GITT or segmental CTT and colorectal symptoms. Setting: Spinal Cord Unit, Viborg Hospital, Denmark. Methods: Thirty SCI patients took 10 radio-opaque markers on 6 consecutive days and an abdominal X-ray was taken on day 7. The same procedure was repeated after 3 months. GITT and CTT were computed from the number of markers in the entire colorectum and in the left and the right colon. Intra- and interobserver variations were described as dispersion (numerical difference/mean). Results: Intersubjective variation was large for GITT (range: 0.6-6.3 days), right CTT (range: 0.1-5.5 days) and left CTT (range: 0-4.9 days). Intrasubjective variation was acceptable for GITT (dispersion: 0.28) but less good for right (dispersion: 0.63) and left CTT (dispersion: 0.68). Interobserver variation was very small. No correlations were found between GITT, right or left CTT and colorectal symptoms. Conclusion: GITT and right and left CTT are of limited value for clinical decision-making in individual patients but may be useful for comparison of groups of patients with SCI. abstract_id: PUBMED:2229888 Colonic transit time and anorectal manometric anomalies in 19 patients with complete transection of the spinal cord. In order to determine the relative importance of sympathetic and parasympathetic centers in the control of colorectal motility, colonic transit and anorectal motility were studied in 19 patients with complete spinal cord transection: group 1 (n = 5) where transection was above T9; group 2 (n = 6) where transection was between T9 and L2; group 3 (n = 8) where transection involved S2-S4. Colonic transit time was calculated by the radiopaque marker method, and results were compared with those of eight controls, all bedridden for non-digestive surgery. Anorectal motility was investigated by anorectal manometry, and results were compared with those of 17 healthy controls. Both mean right and left colonic transit times were not different in the three groups of patients and the bedridden control group. Mean rectosigmoid and total transit time increased in groups 2 and 3. In five patients of group 3 the mean anal canal (lower part) pressure was higher than in controls. The recto-anal inhibitory reflex was present in all patients, but there was no correlation in group 3 between the volume of rectal distension and both the amplitude and duration of the recto-anal inhibitory reflex, and in group 1 between rectal distension and the duration of the recto-anal inhibitory reflex.
Rectosigmoid transit time increased when sympathetic and parasympathetic spinal centers were injured, suggesting the importance of this extrinsic nervous control; right and left colonic transit were not affected by spinal cord lesion, suggesting that the distal colon, but not the proximal colon, is under spinal cord nervous control. (ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:7847302 The effect of cisapride on segmental colonic transit time in patients with spinal cord injury. Objectives: The pathophysiological consequences of spinal cord injury (SCI) on function of the colon are complex and poorly understood. Regardless of the mechanism, many patients with SCI have deficient bowel control, which is frustrating and difficult to treat. We designed a study to assess whether a new prokinetic medication, cisapride, might be useful in this setting. Methods: Total and segmental colonic transit time were measured using the radiopaque marker technique in nine subjects with spinal cord injury and seven control subjects after the double-blind administration of cisapride (10 mg q.i.d.) or placebo. Results: In five quadriplegic subjects with prolonged colonic transit time, administration of cisapride was found to reduce left-sided colonic transit time from 24.2 to 13.8 h. In three of these five subjects, cisapride administration resulted in subjective improvement. No effect of cisapride on right-sided, rectosigmoid, or total colonic transit time was observed. Conclusion: The data suggest that cisapride might be a useful adjunctive measure in treating a subset of SCI patients with colonic inertia, but a larger study is needed before this can be routinely recommended. abstract_id: PUBMED:16751583 A comparison of methods of assessment of scintigraphic colon transit. Unlabelled: There is no standard method of analysis of scintigraphic colonic transit investigation. This study was designed to compare 4 techniques. Methods: Sixteen subjects (median age, 37.5 y; range, 21-61 y), who had sustained a spinal cord injury more than a year before the study, were given a pancake labeled with 10-18 MBq of (111)In bound to resin beads to eat. Anterior and posterior images were acquired with a gamma-camera 3 h after the meal and then 3 times a day for the next 4 d. Seven regions of interest, outlining the ascending colon, hepatic flexure, transverse colon, splenic flexure, descending colon, rectosigmoid, and total abdominal activity at each time point, were drawn on the anterior and posterior images. The counts were decay corrected, and the geometric mean (GM) for each region at each time point was calculated. The GM was used to calculate the percentage of the initial total abdominal activity in each region, at each time point. Colonic transit was assessed in 4 ways: (a) Three independent nuclear medicine physicians visually assessed transit on the analog images and classified subjects into 5 categories of colonic transit (rapid, intermediate, generalized delay, right-sided delay, or left-sided delay). (b) Parametric images were constructed from the percentage activity in each region at each time point. (c) The arrival and clearance times of the activity in the right and left colon were plotted as time-activity curves. (d) The geometric center of the distribution of the activity was calculated and plotted on a graph versus time. The results of these 4 methods were compared using an agreement matrix. Results: Though simple to perform, the visual assessment was unreliable.
The best agreement occurred between the parametric images and the arrival and clearance times of the activity in the right and left colon. Conclusion: The different methods of assessment do not produce uniform results. The best option for evaluating colonic transit appears to be a combination of the analog images, which provide a general overview of colonic transit, and a quantitative method that demonstrates segmental transit. abstract_id: PUBMED:1442685 Gastrointestinal transit after spinal cord injury: effect of cisapride. Heartburn, bloating, and abdominal discomfort are common problems in patients with spinal cord injury but, despite its clinical significance, little is known about the gastrointestinal effects of spinal transections. To address the potential gastrointestinal pathophysiology of spinal cord injury, we measured mouth-to-cecum transit time (MCTT) in seven subjects with paraplegia and seven with quadriplegia. Gastric emptying was studied in six subjects with quadriplegia. MCTT was significantly prolonged in patients with quadriplegia, an abnormality corrected by the administration of cisapride. Patients with paraplegia, in contrast to those with quadriplegia, have normal mouth-to-cecum transit time. In addition, patients with quadriplegia had neither a prolonged gastric emptying time nor a change in gastric emptying time with cisapride. Changes in gastrointestinal transit after spinal cord injury and the improvement of mouth-to-cecum transit time in subjects with quadriplegia, but not in those with paraplegia, may be explained by an imbalance between parasympathetic and sympathetic outflows to the gastrointestinal tract in this group of subjects. abstract_id: PUBMED:9360223 Colonic transit time after spinal cord injury. Colonic transit time (CTT) was measured with abdominal radiographs using Chaussade's technique in 30 spinal cord injured patients (ASIA A and B) following ingestion of 20 radiomarkers per day for three days. A significant increase in total CTT (p = 0.0001) and segmental CTT of the right colon (p = 0.0004) and of the left colon (p = 0.0001) was shown. While using on average only 2.3 films of the abdomen per patient, we obtained results comparable with other radiologic techniques which use radiomarkers to measure CTT. The clinical relevance of these results is not clear and their correlation with intestinal symptoms remains to be investigated. abstract_id: PUBMED:9881734 The effect of electrical stimulation on colonic transit following spinal cord injury in cats. The effect of direct electrical stimulation on colonic transit and manometric recordings following spinal cord injury was assessed in five adult male cats. Intra-colonic catheters were surgically placed, stimulating electrodes were sutured to the colonic serosa, and a laminectomy with spinal cord clamping at a T4 level was done to induce spinal cord injury (SCI). Twenty radiopaque markers were inserted through an intra-colonic catheter located 1 cm distal to the cecum and were monitored with daily fluoroscopy as a measure of colonic transit. Transit measurements were compared before SCI, after SCI and after SCI with electrical stimulation of 40 pps, 1 ms, and 0-50 mA. Colonic transit following SCI was significantly prolonged (P < 0.05) when compared to the transit before SCI. Electrical stimulation following SCI improved colonic transit to values not significantly different from those before SCI. Spontaneous colonic phasic motor activity was similar both before and after SCI.
Manometric defecation patterns were also observed to be similar before SCI and after SCI with electrical stimulation. Based on our scoring criteria, the most frequent response to electrical stimulation was an abdominal contraction. These findings demonstrate that colonic transit is prolonged following SCI and that direct electrical stimulation of the colon following SCI improves colonic transit in an animal model. abstract_id: PUBMED:12214902 Colonic transit time after spinal cord injury: any clinical significance? Background: Both bowel dysfunction and increases in colonic transit time (CTT) are frequently observed in individuals with spinal cord injury; however, it is unknown whether there is an association between chronic intestinal problems and changes in CTTs. The current study investigates a possible relationship between the main intestinal symptoms of SCI patients and CTT values. Methods: The following clinical variables and symptoms were investigated and collected in 30 individuals with SCI: total time for bowel care, abdominal pain, abdominal gas, success of rectal emptying, fecal incontinence, and decrease in quality of life. Total and segmental CTTs (right colon, left colon, and rectosigmoid colon) were assessed using radiopaque markers. The effects of the sociodemographic variables and the clinical symptoms on the different CTTs (total and segmental) were analyzed. Results: The assessed clinical conditions were observed in the following percentages of subjects: abdominal gas symptoms (70%), fecal incontinence (56%), abdominal pain (63%), total time for bowel care > 1 hour (11%), difficult rectal emptying (66%), and decrease in quality of life (36%). We also observed an increase in total CTT in 47% of subjects; increases in segmental CTT were found in the right colon (23%), the left colon (60%), and the rectosigmoid segment (23%). Statistical analyses failed to show a significant difference in mean CTT values between the group of symptomatic patients (1 or more symptoms) and the group of asymptomatic patients. No significant difference could be detected in the incidence of each intestinal symptom between the group of participants with normal CTT values and those with abnormal CTT values. For each of the clinical data assessed separately, a significantly longer CTT (left colon) was associated with the lack of abdominal pain (P < .03) and the presence of fecal incontinence (P < .01); successful rectal emptying was associated with significantly shorter total (P < .02) and segmental CTTs for the left colon (P < .01) and rectosigmoid colon (P < .05). Conclusions: Besides an association between shorter CTT and successful rectal emptying, there was little relationship between CTTs and intestinal symptoms in this study of patients with SCI. Answer: Gastrointestinal transit times (GITTs) and colonic dimensions do not appear to change significantly with time since spinal cord injury (SCI). A long-term follow-up study conducted at Aarhus University Hospital, Denmark, found that GITTs and colonic dimensions did not change during the first decade or in the long term after SCI. This was observed in two groups of patients, one studied 1 year after SCI and again 13 years later, and the other studied 19 years after injury and again 12 years later. Both groups had been treated with conservative bowel management, and no significant changes were found in the diameter of various colonic segments or in the presence of megacolon at follow-up (PUBMED:21102573).
In contrast, another study comparing patients with acute and chronic SCI to healthy volunteers found that SCI results in severely prolonged colonic transit times both in the acute and chronic phase. However, the type of colorectal dysfunction depended on the level of SCI, with supraconal SCI resulting in generalized colonic dysfunction and chronic conal/cauda equina lesions resulting in severe rectosigmoid dysfunction (PUBMED:11093323). Furthermore, a reproducibility study indicated that while GITT and segmental colonic transit times (CTT) are of limited value for clinical decision-making in individual patients, they may be useful for comparing groups of patients with SCI. This study also highlighted the large intersubjective variation in GITT and CTT among SCI patients (PUBMED:18645578). In summary, while individual variations exist and the level of SCI can influence the pattern of colorectal dysfunction, the overall evidence suggests that GITTs and colonic dimensions do not undergo significant changes over time in individuals with SCI who are managed conservatively (PUBMED:21102573).
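Several of the abstracts above use the same marker protocol (10 radio-opaque markers swallowed daily for six days, a single abdominal film on day 7), under which transit time follows from a steady-state argument: each marker seen on the film represents 1/10 of a day of transit. A minimal Python sketch of that arithmetic; the segment counts below are invented, and exact constants and segment definitions vary between published protocols:

def transit_days(marker_count, markers_per_day=10):
    # Mean transit time in days contributed by one region: under
    # steady-state daily ingestion, each retained marker stands for
    # 1/markers_per_day of a day.
    return marker_count / markers_per_day

# Hypothetical marker counts on the day-7 film:
counts = {"right colon": 9, "left colon": 14, "rectosigmoid": 6}

segmental_ctt = {region: transit_days(n) for region, n in counts.items()}
gitt = transit_days(sum(counts.values()))  # total transit, in days
print(segmental_ctt)  # {'right colon': 0.9, 'left colon': 1.4, 'rectosigmoid': 0.6}
print(gitt)           # 2.9

Note the ceiling this implies: with 60 markers ingested in total, a single film cannot register more than 6 days of transit, which is one reason published protocols differ in ingestion duration when slower transit must be quantified.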
Instruction: Identifying predictors of interferon-γ release assay results in pediatric latent tuberculosis: a protective role of bacillus Calmette-Guerin? Abstracts: abstract_id: PUBMED:22700862 Identifying predictors of interferon-γ release assay results in pediatric latent tuberculosis: a protective role of bacillus Calmette-Guerin?: a pTB-NET collaborative study. Rationale: Interferon-γ (IFN-γ) release assays are widely used to diagnose latent infection with Mycobacterium tuberculosis in adults, but their performance in children remains incompletely evaluated to date. Objectives: To investigate factors influencing results of IFN-γ release assays in children using a large European data set. Methods: The Pediatric Tuberculosis Network European Trials group pooled and analyzed data from five sites across Europe comprising 1,128 children who were all investigated for latent tuberculosis infection by tuberculin skin test and at least one IFN-γ release assay. Multivariate analyses examined age, bacillus Calmette-Guérin (BCG) vaccination status, and sex as predictor variables of results. Subgroup analyses included children who were household contacts. Measurements And Main Results: A total of 1,093 children had a QuantiFERON-TB Gold In-Tube assay and 382 had a T-SPOT.TB IFN-γ release assay. Age was positively correlated with a positive blood result (QuantiFERON-TB Gold In-Tube: odds ratio [OR], 1.08 per year increasing age [P < 0.0001]; T-SPOT.TB: OR, 1.14 per year increasing age [P < 0.001]). A positive QuantiFERON-TB Gold In-Tube result was shown by 5.5% of children with a tuberculin skin test result less than 5 mm, by 14.8% if less than 10 mm, and by 20.2% if less than 15 mm. Prior BCG vaccination was associated with a negative IFN-γ release assay result (QuantiFERON-TB Gold In-Tube: OR, 0.41 [P < 0.001]; T-SPOT.TB: OR, 0.41 [P < 0.001]). Young age was a predictor of indeterminate IFN-γ release assay results, but indeterminate rates were low (3.6% in children < 5 yr, 1% in children > 5 yr). Conclusions: Our data show that BCG vaccination may be effective in protecting children against Mycobacterium tuberculosis infection. To restrict use of IFN-γ release assays to children with positive skin tests risks underestimating latent infection. abstract_id: PUBMED:26199259 Discordance Between Tuberculin Skin Test and Interferon-γ Release Assay in Children Younger Than 5 Years Who Have Been Vaccinated With Bacillus Calmette-Guérin. Background: Interferon-γ release assays (IGRAs) offer the possibility of improved detection of latent tuberculosis infection (LTBI). Objective: To analyze discordant tuberculin skin testing (TST) and IGRA results in ethnic Croatian children as old as 5 years for whom there is documented exposure to an adult with active tuberculosis (TB) and who have been vaccinated with Bacillus Calmette-Guérin. Methods: In specimens from our cohort individuals, we tested the performances of the QuantiFERON-TB Gold In-Tube (QFT-GIT) test and TST and analyzed discordant results. Results: At the TST cutoff value of 10 mm or greater, the estimated prevalence of M. tuberculosis infection was 18.1% (31/171) using TST and 15.2% (26/171) using QFT-GIT. The results of these 2 tests showed an overall concordance of 87.7%. There was no evidence that subjects' age correlated with discordant results.
Conclusions: The reasons for discordant results in young children are still unclear, which highlights the importance of further longitudinal studies to better understand the interpretation and any possible clinical implications of the results of these tests. abstract_id: PUBMED:25422024 Interferon-γ release assays for diagnosis of tuberculosis infection and disease in children. Tuberculosis (TB) remains an important problem among children in the United States and throughout the world. Although diagnosis and treatment of infection with Mycobacterium tuberculosis (also referred to as latent tuberculosis infection [LTBI] or TB infection) remain the lynchpins of TB prevention, there is no diagnostic reference standard for LTBI. The tuberculin skin test (TST) has many limitations, including difficulty in administration and interpretation, the need for a return visit by the patient, and false-positive results caused by significant cross-reaction with Mycobacterium bovis-bacille Calmette-Guérin (BCG) vaccines and many nontuberculous mycobacteria. Interferon-γ release assays (IGRAs) are blood tests that measure ex vivo T-lymphocyte release of interferon-γ after stimulation by antigens specific for M tuberculosis. Because these antigens are not found on M bovis-BCG or most nontuberculous mycobacteria, IGRAs are more specific tests than the TST, yielding fewer false-positive results. However, IGRAs have little advantage over the TST in sensitivity, and both methods have reduced sensitivity in immunocompromised children, including children with severe TB disease. Both methods have a higher positive predictive value when applied to children with risk factors for LTBI. Unfortunately, neither method distinguishes between TB infection and TB disease. The objective of this technical report is to review what IGRAs are most useful for: (1) increasing test specificity in children who have received a BCG vaccine and may have a false-positive TST result; (2) using with the TST to increase sensitivity for finding LTBI in patients at high risk of developing progression from LTBI to disease; and (3) helping to diagnose TB disease. abstract_id: PUBMED:19771743 Advances in clinical applications of T-SPOT.TB for interferon-gamma release assay in the immunodiagnosis of tuberculosis. T-SPOT.TB is an interferon-gamma release assay to detect T-cell response to early secreting antigen target 6 and culture filtrate protein 10 peptides by enzyme-linked immunospot assay for tuberculosis diagnosis. It is highly sensitive and specific, and will not be affected by the subject's immune status and Bacillus Calmette-Guerin vaccination. This assay has been licensed for in-vitro diagnosis in Europe and the United States. Its potential roles in distinguishing active tuberculosis from latent tuberculosis infection and predicting active tuberculosis among individuals with latent tuberculosis have been increasingly studied. This article reviews the advances in the clinical application of T-SPOT.TB. abstract_id: PUBMED:25149414 Performance of the QuantiFERON-TB gold interferon gamma release assay among HIV-infected children in Botswana. Interferon gamma release assays (IGRAs) are poorly studied in HIV-infected children. The authors prospectively evaluated QuantiFERON-TB Gold results and family-described tuberculosis (TB) risk factors in 100 HIV-infected children in Botswana.
Median age was 10.2 years; 58 were girls, 92 had received the Bacillus Calmette-Guérin (BCG) vaccine, 98 were receiving antiretroviral therapy, and the median body mass index was 15.8 kg/m². Eighty-nine children had undetectable viral loads and the median CD4 count was 962 cells/mm³. Eighteen children had been treated for TB in the last 3 years. In the last 3 years, 36 (including 9 with TB) had contact with persons with TB (26 within/15 outside the home and 5 had >1 contact). In all, 96 children had negative IGRAs, 3 were indeterminate, and 1 was positive. The positive IGRA was reported in a child treated for TB more than 3 years earlier. Interferon γ release assay positivity was rare in this pediatric cohort living in an area with a high prevalence of TB. abstract_id: PUBMED:20616761 Whole blood interferon-γ release assay is a useful tool for the diagnosis of tuberculosis infection particularly among Bacille Calmette-Guérin-vaccinated children. The performance of the QuantiFERON-tuberculosis (TB) Gold-In-Tube assay was compared with the tuberculin skin test for the diagnosis of TB among children. It was shown that among non-Bacille Calmette-Guérin-immunized children, agreement between tests was excellent both in those with TB disease and in TB contacts. Among Bacille Calmette-Guérin-immunized children, agreement was fair in those with active disease and poor among TB contacts. It is concluded that QuantiFERON-TB Gold-In-Tube compares with the tuberculin skin test in the diagnosis of TB disease and latent tuberculosis infection in TB contacts among children and has enhanced specificity. abstract_id: PUBMED:21144240 Diagnosis of latent tuberculosis infection in bacille Calmette-Guérin vaccinated subjects in China by interferon-gamma ELISpot assay. Objective: To evaluate the performance of an in-house interferon-gamma (IFN-γ) enzyme-linked immunospot (ELISpot) assay for the detection of latent tuberculosis infection (LTBI) in bacille Calmette-Guérin (BCG) vaccinated individuals with or without human immunodeficiency virus (HIV) infection. Methods: A total of 202 participants (93 HIV-infected and 109 non-infected) who received BCG vaccination at birth underwent tuberculin skin testing (TST) and T cell-based ELISpot assays. The performance of an in-house IFN-γ ELISpot assay (ELISpot) was evaluated by parallel comparison with the commercial IFN-γ release assay (IGRA) kit, T-SPOT®.TB, in 127 subjects. The effect of BCG vaccination on the ELISpot assay was prospectively determined by comparing the responses of IGRAs before and 3 months after BCG vaccination in 27 individuals. Results: High agreement between ELISpot and T-SPOT.TB was observed in both HIV-infected (92.47%, κ = 0.754) and non-HIV-infected subjects (97.06%, κ = 0.653). CD4+ T-cell count does not affect Mycobacterium tuberculosis IFN-γ response as measured by both ELISpot and T-SPOT.TB. BCG vaccination resulted in 100% conversion of TST, but did not change responses to IGRAs. Conclusion: The performance of this in-house ELISpot assay is comparable to commercial T-SPOT.TB in diagnosing LTBI. Both assays are useful for diagnosing LTBI in BCG-vaccinated subjects with or without HIV infection in a setting with a high prevalence of tuberculosis. abstract_id: PUBMED:22105266 Interferon gamma release assay compared with the tuberculin skin test for latent tuberculosis detection in pregnancy.
Objective: To estimate agreement and correlation between the tuberculin skin test and an interferon gamma release assay for detecting latent tuberculosis (TB) infection in pregnant women. Methods: We conducted a cross-sectional study of pregnant women initiating prenatal care at a university-affiliated public hospital between January 5, 2009, and March 15, 2010. Eligible women received a questionnaire about TB history and risk factors as well as the tuberculin skin test and phlebotomy for the interferon gamma release assay. Agreement and correlation between tests were estimated, and different cutoffs for interferon gamma release assay positivity were used to assess effect on agreement. Furthermore, predictors of test positivity and test discordance were evaluated using multivariable analysis. Results: Of the 220 enrolled women, 199 (90.5%) returned for tuberculin skin test evaluation. Over 70% were Hispanic and 65% were born in a country with high TB prevalence. Agreement between the tuberculin skin test and interferon gamma release assay was 77.39% (κ=0.26). This agreement was not significantly changed using different cutoffs for the assay. Birth bacille Calmette-Guérin vaccination was associated with tuberculin skin test positivity (odds ratio [OR] 4.33, 95% confidence interval [CI] 1.4-13.48, P=.01), but not interferon gamma release assay positivity. There were no statistically significant predictors of the tuberculin skin test and interferon gamma release assay result discordance; however, birth in a high-prevalence country was marginally associated with tuberculin skin test-positive and interferon gamma release assay-negative results (OR 2.94, 95% CI 0.86-9.97, P=.08). Conclusion: Comparing the tuberculin skin test and interferon gamma release assay results in pregnancy, concordance and agreement were poor. Given that much is still unknown about the performance of interferon gamma release assays in pregnancy, further research is necessary before the tuberculin skin test is abandoned for screening of latent TB infection in pregnancy. Level Of Evidence: III. abstract_id: PUBMED:22077219 Interferon gamma release assay in diagnosis of pediatric tuberculosis: a meta-analysis. Although interferon gamma release assays (IGRAs) have been widely used for the diagnosis of latent and active tuberculosis in adults, a relative lack of validation studies in children has led to caution in their clinical interpretation. This meta-analysis systematically evaluated two IGRAs (ELISA and ELISPOT) and the tuberculin skin test (TST). We searched databases (PubMed, MEDLINE, Ovid) between January 2000 and January 2011 using search terms of latent tuberculosis infection or tuberculosis and interferon gamma release assay, or T-SPOT.TB test, or QuantiFERON-TB Gold, or ESAT-6, or CFP-10, and child, or childhood, or pediatrics. We also collected data by performing a manual search of references from relevant articles and communicating with selected authors. The meta-analysis was conducted with random effects models to account for heterogeneity between selected studies. The sensitivities of all three tests in active tuberculosis were similar. The pooled sensitivity was 70% for ELISA studies, 62% for ELISPOT studies and 71% for TST. Calculated sensitivities for IGRAs and the TST differ in culture-confirmed tuberculosis [ELISA (85%) vs. ELISPOT (76%) vs. TST (85%)] and clinically diagnosed cases [ELISA (64%) vs. ELISPOT (58%) vs. TST (66%)].
The pooled specificity was 100% for ELISA and 90% for ELISPOT, but was much lower for TST [56% in all included studies and 49% in children with bacillus Calmette-Guerin (BCG) vaccination]. The agreement between the TST and IGRAs in non-BCG-vaccinated children is higher than that in BCG-vaccinated children. In the diagnosis of active tuberculosis in children, the TST and IGRAs have similar sensitivity. By contrast, the specificity of IGRAs is far greater than the TST, particularly in children with previous BCG vaccination. abstract_id: PUBMED:25861575 The Usefulness of the Tuberculosis Skin Test and the Interferon-gamma Release Assay in the Diagnosis of Latent Tuberculosis Infection in South Korea. Objectives: South Koreans receive the bacillus Calmette-Guerin (BCG) vaccination, which influences the result of the tuberculin skin test (TST); however, only a few studies have described the usefulness of the TST and interferon-γ release assay (IGRA) for diagnosing latent TB infection (LTBI). Therefore, our aim was to determine the usefulness of the TST and IGRA for diagnosing LTBI in a household contacts investigation. Methods: We reviewed the 329 household contacts who visited Chung-Ang University Hospital (Seoul, Korea) from May 1, 2011 to February 28, 2014. To evaluate the effectiveness of TST and IGRA for the diagnosis of LTBI, we examined the concordance rate between the two tests, based on age. We also evaluated the risk factors for LTBI. Results: The concordance rates between the two tests in individuals 0-24 years, 25-54 years, and over 55 years were 82.6% (κ = 0.64, p < 0.01), 68.9% (κ = 0.40, p < 0.01), and 68.4% (κ = 0.35, p < 0.01), respectively. The ratio of positive TST to negative IGRA was higher in individuals 25-44 years old, whereas the ratio of negative TST to positive IGRA was higher in individuals older than 55 years old. Based on the TST, the risk factor for LTBI was a cavity (p < 0.01). When using IGRA, the risk factors were contact time (p = 0.04) and age over 55 years old (p = 0.02). Conclusion: The concordance rate between TST and IGRA was not good after the age of 25 years. The IGRA test reflects the known risk factors more exactly. Answer: The studies provided offer insights into the factors influencing interferon γ (IFN-γ) release assay results in the context of pediatric latent tuberculosis (TB) and the potential protective role of Bacillus Calmette-Guerin (BCG) vaccination. One study found that prior BCG vaccination was associated with a negative IFN-γ release assay result, suggesting that BCG vaccination may be effective in protecting children against Mycobacterium tuberculosis infection (PUBMED:22700862). This study also highlighted that using IFN-γ release assays only in children with positive skin tests could underestimate latent TB infection. Another study reported no evidence that subjects' age correlated with discordant results between the tuberculin skin test (TST) and IFN-γ release assay in children younger than 5 years who had been vaccinated with BCG (PUBMED:26199259). This suggests that factors other than age may contribute to discordant results in this population. Further research indicated that IGRAs are more specific than the TST, yielding fewer false-positive results due to their lack of cross-reaction with BCG vaccines and most nontuberculous mycobacteria (PUBMED:25422024). This specificity is particularly useful in BCG-vaccinated children, where the TST may give false-positive results.
The T-SPOT.TB, an IFN-γ release assay, is noted for its high sensitivity and specificity and is not affected by BCG vaccination status (PUBMED:19771743). Similarly, the QuantiFERON-TB Gold-In-Tube assay has shown enhanced specificity compared to the TST in diagnosing TB disease and latent TB infection in BCG-vaccinated children (PUBMED:20616761). An in-house IFN-γ ELISpot assay was found to perform comparably to the commercial T-SPOT.TB in diagnosing latent TB infection in BCG-vaccinated subjects, regardless of HIV infection status (PUBMED:21144240). However, another study found that IFN-γ release assay positivity was rare in a pediatric cohort living in an area with a high prevalence of TB, suggesting that the test may have limitations in certain populations (PUBMED:25149414). In summary, BCG vaccination appears to be a protective factor against a positive IFN-γ release assay result in children, indicating a potential protective role against TB infection. The specificity of IGRAs is particularly advantageous in BCG-vaccinated populations, as it reduces the likelihood of false-positive results associated with the TST (PUBMED:25422024). However, the overall concordance between the TST and IGRAs in diagnosing latent TB infection can vary, with some studies reporting poor agreement, especially in populations vaccinated with BCG (PUBMED:22105266). A meta-analysis found that while the sensitivity of IGRAs and the TST for active TB in children was similar, the specificity of IGRAs was significantly higher, especially in BCG-vaccinated children (PUBMED:22077219). This suggests that IGRAs may be more reliable for ruling out TB infection in this group. Lastly, a study conducted in South Korea, where BCG vaccination is common, found that the concordance rate between the TST and IGRA was not good after the age of 25 years, and that IGRA more accurately reflected known risk factors for latent TB infection (PUBMED:25861575). In conclusion, BCG vaccination is a predictor of a negative IFN-γ release assay result in children, which may indicate a protective effect against TB infection. IGRAs offer a more specific alternative to the TST in BCG-vaccinated populations, although their sensitivity may be limited in certain high-prevalence settings. The use of IGRAs in conjunction with risk factor assessment and clinical judgment is important for the accurate diagnosis of latent TB infection in children.
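A side note on the odds ratios quoted in this record (for example, OR 1.08 per year of age and OR 0.41 for prior BCG in PUBMED:22700862): in a logistic-type model the OR for a predictor is the exponential of its fitted coefficient. The following is a minimal sketch of that computation on synthetic data using the statsmodels library; the variable names and data-generating values are assumptions for illustration, not the pTB-NET data.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 1000
    age = rng.uniform(0, 15, n)        # years (assumed range)
    bcg = rng.integers(0, 2, n)        # 1 = BCG-vaccinated
    # Assumed truth mirroring the reported directions of effect
    logit_p = -2.0 + 0.08 * age - 0.9 * bcg
    y = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))   # 1 = positive IGRA

    X = sm.add_constant(np.column_stack([age, bcg]))
    fit = sm.Logit(y, X).fit(disp=0)
    print(np.exp(fit.params[1:]))      # OR per year of age, OR for prior BCG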
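The agreement figures in this record (the 87.7% concordance in PUBMED:26199259 and the κ values in PUBMED:21144240 and PUBMED:25861575) all rest on 2×2 tables. As a worked example, the cell counts below are inferred from the marginals reported in PUBMED:26199259 (31/171 TST-positive, 26/171 QFT-GIT-positive, 87.7% overall concordance); the resulting κ is a derived value, not one the study reports.

    # 2x2 cell counts inferred from the reported marginals of PUBMED:26199259
    a, b = 18, 13      # TST+/IGRA+, TST+/IGRA-
    c, d = 8, 132      # TST-/IGRA+, TST-/IGRA-
    n = a + b + c + d  # 171 children

    po = (a + d) / n   # observed agreement: 150/171 = 0.877, as reported
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2   # chance agreement
    kappa = (po - pe) / (1 - pe)
    print(round(po, 3), round(kappa, 2))   # ~0.877 and kappa ~0.56 (derived)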
Instruction: Distinguishing pelvic phleboliths from distal ureteral stones on routine unenhanced helical CT: is there a radiolucent center? Abstracts: abstract_id: PUBMED:9888730 Distinguishing pelvic phleboliths from distal ureteral stones on routine unenhanced helical CT: is there a radiolucent center? Objective: On radiographs of the abdomen and pelvis, phleboliths often have a characteristic radiolucent center that helps to distinguish them from ureteral stones. On unenhanced CT, the distinction between pelvic phleboliths and distal ureteral stones can be problematic. The objective of this study was to compare the appearance of phleboliths on routine clinical CT studies with their appearance on radiography and to determine if the radiolucent center seen on radiographs is revealed on CT. Subjects And Methods: During a 3-month interval, we identified 50 patients with acute flank pain who underwent both unenhanced CT and abdominal radiography. Patients with a radiograph of the pelvis and an unenhanced CT scan obtained within 1 month of each other were included. CT was performed with a collimation of 5 mm and a pitch of 1. Each phlebolith was examined using soft-tissue and bone settings and was also retrospectively pixel mapped. Results: Seventy-nine (66%) of 120 phleboliths revealed radiolucent centers on abdominal radiography. On CT, 119 (99%) of 120 phleboliths failed to reveal a low-attenuation center on both visual inspection and pixel mapping. Conclusion: Pelvic phleboliths were shown to lack a radiolucent center on routine clinical CT examinations despite their appearance on radiography. A radiolucent center therefore cannot be used to differentiate phleboliths from distal ureteral stones on unenhanced CT in patients with acute flank pain and suspected ureteral obstruction. abstract_id: PUBMED:9577482 Unenhanced helical CT criteria to differentiate distal ureteral calculi from pelvic phleboliths. Purpose: To identify imaging features at unenhanced helical computed tomography (CT) that help differentiate distal ureteral calculi from pelvic phleboliths. Materials And Methods: Retrospective analysis was performed of 184 pelvic calcifications identified at unenhanced helical CT in 113 patients. The size, shape, and attenuation of each calcification were recorded in addition to the presence of a central lucency and the appearance of the adjacent soft tissues. With profile analysis, a graphic representation was generated of attenuation in each pixel along a line drawn through each calcification. Results: Geometric configuration was seen in eight (21%) calculi but not in any phleboliths. Differences were significant (P < .0001) between the mean attenuation of calculi and that of phleboliths. Among phleboliths, none had a mean attenuation greater than 278 HU, 13 (9%) had a visible central lucency, 31 (21%) had a bifid peak at profile analysis, 30 (21%) had the "comet sign" (adjacent eccentric, tapering soft-tissue mass corresponding to the noncalcified portion of a pelvic vein), and three (2%) had the soft-tissue rim sign (edema of the ureteral wall). Among calculi, none had a central lucency, bifid peak, or comet sign, but 29 (76%) had the soft-tissue rim sign. Conclusion: Analysis of pelvic calcifications at unenhanced helical CT can help differentiate calculi from phleboliths. abstract_id: PUBMED:11483423 Central lucency of pelvic phleboliths: comparison of radiographs and noncontrast helical CT.
To compare how often the central lucency of pelvic phleboliths is seen on plain radiographs and noncontrast helical computed tomography (CT), both images in 70 patients with renal colic were analyzed. Both images revealed the same number of phleboliths in all cases except one in which one of two phleboliths demonstrated by CT was not seen on radiographs. Radiography revealed central lucency in 95 of 149 pelvic phleboliths (63%), but CT failed to depict a hypodense center in any of 150 phleboliths. The presence or absence of central lucency on CT images cannot, therefore, be used to differentiate phleboliths from distal ureteral calculi. abstract_id: PUBMED:15448998 Distinguishing pelvic phleboliths from distal ureteral calculi: thin-slice CT findings. The objective is to evaluate the appearance of phleboliths and distal ureteral stones by determining their roundness and presence of central lucency on thin-slice CT. Seventy-seven patients with pelvic radioopacities at unenhanced CT were selected. Those patients consequently underwent thin-slice CT with 1-mm collimation, a FOV of 10 cm, a pitch of 1:1 and a bone reconstruction algorithm. Three radiologists reviewed all images for roundness and the presence of central radiolucency. No calculi showed central lucency on either imaging technique used. Thin-slice CT revealed central lucency in 60% of phleboliths with 100% specificity. Three of 40 ureteral stones showed round contour. Thin-slice CT revealed a round contour in 97% of phleboliths with 93% specificity. Central lucency is a characteristic finding of pelvic phleboliths on thin-slice CT. This finding can therefore be used in combination with roundness as a problem-solving tool in differentiating phleboliths from distal ureteral stones. abstract_id: PUBMED:28114785 Pelvic Phlebolith: A Trivial Pursuit for the Urologist? Objective: Pelvic phleboliths are commonly encountered on plain and CT imaging and remain a source of frustration when attempting to differentiate them from ureteral calculi. Given their frequency, surprisingly little is known about their significance. We review the literature on pelvic phleboliths, specifically in relation to their history, demography, clinical significance, and methods to distinguish them from ureteral calculi. Methods: A comprehensive literature search was performed for all articles concerning pelvic phleboliths. Results: Pelvic phleboliths were first described in the 19th century when the presence of calcified intravenous nodules was observed in human dissection. With the discovery of X-ray imaging in 1895, they have caused much diagnostic controversy since. Histologically they are composed of calcified laminated fibrous tissue, with a surface layer continuous with vein endothelium. Prevalence of pelvic phleboliths in adults is reported to be 38.9%-48%. They are more common in adults aged over 40, and appear to equally affect both genders. They may be associated with diverticulitis, vascular abnormalities, and are more commonly seen in individuals from economically developed countries. The soft-tissue "rim" sign (50%-77% sensitivity and 92%-100% specificity) and a geometric shape (100% positive predictive value [PPV]) are radiological signs predictive of ureteral calculi on unenhanced CT scanning. Radiological signs suggestive of phleboliths include the presence of central lucency (8%-60% sensitivity and 100% specificity), rounded shape (91% PPV), and the comet-tail sign (21%-65% sensitivity and 100% specificity).
Phleboliths appear to have a significantly lower Hounsfield unit enhancement than ureteral calculi (160-350 HU). Conclusions: Pelvic phleboliths are a common radiological finding, especially in the older population, which continue to present diagnostic challenges in those with suspected ureteral calculi. With greater awareness, the uncertainty can be overcome by identifying defining characteristics when interpreting radiological investigations. abstract_id: PUBMED:32107579 Differentiation of distal ureteral stones and pelvic phleboliths using a convolutional neural network. The objectives were to develop and validate a Convolutional Neural Network (CNN) using local features for differentiating distal ureteral stones from pelvic phleboliths, compare the CNN method with a semi-quantitative method and with radiologists' assessments and to evaluate whether the assessment of a calcification and its local surroundings is sufficient for discriminating ureteral stones from pelvic phleboliths in non-contrast-enhanced CT (NECT). We retrospectively included 341 consecutive patients with acute renal colic and a ureteral stone on NECT showing either a distal ureteral stone, a phlebolith or both. A 2.5-dimensional CNN (2.5D-CNN) model was used, where perpendicular axial, coronal and sagittal images through each calcification were used as input data for the CNN. The CNN was trained on 384 calcifications, and evaluated on an unseen dataset of 50 stones and 50 phleboliths. The CNN was compared to the assessment by seven radiologists who reviewed a local 5 × 5 × 5 cm image stack surrounding each calcification, and to a semi-quantitative method using cut-off values based on the attenuation and volume of the calcifications. The CNN differentiated stones and phleboliths with a sensitivity, specificity and accuracy of 94%, 90% and 92% and an AUC of 0.95. This was similar to a majority vote accuracy of 93% and significantly higher (p = 0.03) than the mean radiologist accuracy of 86%. The semi-quantitative method accuracy was 49%. In conclusion, the CNN differentiated ureteral stones from phleboliths with higher accuracy than the mean of seven radiologists' assessments using local features. However, more than local features are needed to reach optimal discrimination. abstract_id: PUBMED:10715342 Pearls and pitfalls in the diagnosis of ureterolithiasis with unenhanced helical CT. Several signs to assist interpretation of unenhanced helical computed tomographic (CT) scans obtained for suspected ureterolithiasis have been described. Because signs such as perinephric stranding are not always readily apparent, a methodical approach to interpretation of CT studies is important in determining the presence or absence of ureterolithiasis. Evaluation of the poles of the kidneys is helpful in detecting subtle stranding of the perinephric fat. Inspection of the intrarenal collecting system within the poles of the kidneys is helpful in identifying subtle collecting system dilatation and can help prevent mistaking an extrarenal pelvis for hydronephrosis. Careful inspection of the ureter throughout its course is the most reliable method of distinguishing between ureteral stones and phleboliths. However, when the ureter cannot be followed antegrade, the pelvic portion can often be identified in a retrograde fashion. 
When secondary signs of obstruction are present but no stone is identified, differential diagnostic considerations include a recently passed stone, pyelonephritis, urinary tract obstruction unrelated to stone disease, and protease inhibitor deposition disease. abstract_id: PUBMED:35058092 Is This Your Stone? Distinguishing Phleboliths and Nephroliths on Imaging in the Emergency Department Setting. Background: Ureteral stones are a common diagnosis in the emergency department (ED) setting, often found with computed tomography (CT). The high frequency of phleboliths can confound ureteral stone diagnosis on CT imaging. Objective: This article provides a narrative review of the etiological, epidemiological, and radiological distinctions between ureteral stones and phleboliths for appropriate ED diagnosis and management. Discussion: Multiple studies demonstrate phleboliths, benign calcified thrombi largely in pelvic vasculature, are found at frequencies equal to or greater than those of ureteral stones on CT imaging. The calcified foundation of phleboliths and most ureteral stones, as well as the close proximity of the distal ureters to pelvic vasculature, can cause consternation in the proper diagnosis of ureteral stones in the patient with abdominal pain. Radiological findings of circumferential ureteral soft tissue edema ("rim sign") and an irregular tapering soft tissue mass ("comet tail sign") can suggest a ureteral and phlebolith etiology, respectively. These signs demonstrate high specificity but low sensitivity, and clinicians must use history, examination, and secondary signs on imaging to differentiate ureteral stones from phleboliths. Conclusions: Phleboliths are a common finding on CT imaging. Radiological findings of rim sign and comet tail sign may help to differentiate phleboliths and ureteral stones; however, their low sensitivity and inconsistent presentation should prompt greater reliance on other signs of ureteral obstruction to aid in diagnosis of undifferentiated pelvic calcifications. abstract_id: PUBMED:9754108 Interobserver variability in the interpretation of unenhanced helical CT for the diagnosis of ureteral stone disease. Purpose: The purpose of this study was to analyze interobserver agreement in the interpretation of unenhanced helical CT (UHCT) for the evaluation of ureteral stone disease and obstruction. Method: One hundred three UHCT examinations were independently and retrospectively reviewed by five readers including attending radiologists, a radiology resident, and an attending urologist. Examinations were interpreted as positive, negative, or indeterminate for ureteral stone disease and obstruction. The Cohen kappa test was used to measure interobserver agreement. The accuracy of the readers was also assessed. Results: The kappa value ranged from 0.67 to 0.71 among the three attending radiologists and from 0.65 to 0.67 among the radiology attending physicians and radiology resident. Although the urologist tended to agree less well with the other readers (kappa range: 0.33-0.46), there was no statistically significant difference (p < 0.05) in the accuracy among all five readers. The percentage of cases interpreted as indeterminate ranged from 8 to 25% and almost invariably involved difficulty distinguishing phleboliths from minimally obstructing distal ureteral calculi. The percentage of UHCT scans correctly interpreted as positive and correctly interpreted as negative ranged from 73% (n = 27) to 86% (n = 32) and 63% (n = 22) to 86% (n = 30), respectively.
Conclusion: Interobserver agreement was very good among the radiology attending physicians and resident and moderate with the urologist. The examination is an accurate technique in the evaluation of ureteral stone disease, although limitations exist, particularly in the diagnosis of minimally obstructing distal ureteral calculi. abstract_id: PUBMED:9051021 Soft-tissue "rim" sign in the diagnosis of ureteral calculi with use of unenhanced helical CT. Purpose: To determine the value of the ureteric soft-tissue "rim" sign for differentiation of ureteral calculi from phleboliths on unenhanced helical computed tomographic (CT) scans and to identify factors that are associated with the presence of the rim sign. Materials And Methods: Of 442 patients with ureteral obstruction confirmed at CT, 200 had documented ureteral stone disease; 136 of these patients had ureteral calculi, excluding the ureterovesical or ureteropelvic junctions. Each calculus and phlebolith along the course of the ureter were evaluated retrospectively and blindly for the presence of the rim sign, stone location, and stone size; the duration of symptoms was documented when available. Results: The rim sign was present in 105 of 136 ureteral calculi (77%) and in 20 of 259 phleboliths (8%) and yielded a sensitivity of 77% and a specificity of 92% for distinguishing a calculus from a phlebolith. Calculi associated with the rim sign had a mean size of 4.3 mm, and calculi without a rim sign had a mean size of 6.3 mm (P < .001). There was no difference in duration of symptoms among patients who did or did not exhibit a rim sign. Conclusion: In patients with flank pain, the presence of a rim sign is a strong indicator that a calcification along the course of the ureter is a stone. Absence of the rim sign indicates that a calcification remains indeterminate. Answer: On routine unenhanced helical CT, distinguishing pelvic phleboliths from distal ureteral stones can be challenging. The presence of a radiolucent center, which is often seen in phleboliths on abdominal radiography, could potentially aid in differentiation. However, studies have shown that on CT, pelvic phleboliths typically lack a radiolucent center. Specifically, one study found that 119 out of 120 phleboliths did not reveal a low-attenuation center on CT, despite 79 of these phleboliths showing a radiolucent center on abdominal radiography (PUBMED:9888730). Another study confirmed that none of the 150 phleboliths examined on CT depicted a hypodense center, indicating that the presence or absence of central lucency on CT images cannot be used to differentiate phleboliths from distal ureteral calculi (PUBMED:11483423). However, thin-slice CT has been found to be more effective in revealing the central lucency characteristic of phleboliths. A study using thin-slice CT with 1-mm collimation demonstrated central lucency in 60% of phleboliths, with 100% specificity, suggesting that this finding can be used in combination with roundness as a problem-solving tool in differentiating phleboliths from distal ureteral stones (PUBMED:15448998). In conclusion, while a radiolucent center is not typically visible on routine unenhanced helical CT, thin-slice CT may provide better visualization of this feature and aid in distinguishing pelvic phleboliths from distal ureteral stones.
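The sign-based discriminators in this record reduce to 2×2 arithmetic. As a worked check on the rim-sign counts from PUBMED:9051021 (rim sign present in 105 of 136 calculi and in 20 of 259 phleboliths), the sketch below recomputes the reported sensitivity and specificity; the PPV line is a value derived for this particular case mix, not a reported figure.

    # Rim-sign counts reported in PUBMED:9051021
    tp, fn = 105, 136 - 105   # calculi with / without the rim sign
    fp, tn = 20, 259 - 20     # phleboliths with / without the rim sign

    sensitivity = tp / (tp + fn)   # ~0.77, matching the reported 77%
    specificity = tn / (tn + fp)   # ~0.92, matching the reported 92%
    ppv = tp / (tp + fp)           # ~0.84 for this sample's case mix (derived)
    print(round(sensitivity, 2), round(specificity, 2), round(ppv, 2))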
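For the CNN result (PUBMED:32107579), the described 2.5D design feeds perpendicular axial, coronal, and sagittal patches through a single network. The abstract does not specify the architecture, so the PyTorch sketch below only illustrates the general idea, with the three orthogonal patches stacked as input channels and a single stone-versus-phlebolith logit; the layer sizes and patch size are assumptions, not the authors' model.

    import torch
    import torch.nn as nn

    class CNN25D(nn.Module):
        # Toy 2.5D classifier: axial/coronal/sagittal patches as 3 channels
        def __init__(self, patch=32):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
            )
            self.head = nn.Linear(32 * (patch // 4) ** 2, 1)  # binary logit

        def forward(self, x):   # x: (batch, 3, patch, patch)
            return self.head(self.features(x).flatten(1))

    patches = torch.randn(1, 3, 32, 32)           # one hypothetical calcification
    prob_stone = torch.sigmoid(CNN25D()(patches)) # P(calcification is a stone)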
Instruction: Is sleep apnea a winter disease? Abstracts: abstract_id: PUBMED:22700779 Is sleep apnea a winter disease?: meteorologic and sleep laboratory evidence collected over 1 decade. Background: The severity of obstructive sleep apnea increases under the influence of conditions that are more frequent in winter. The hypothesis that the apnea-hypopnea index (AHI) of different patients undergoing polysomnography may be seasonally affected was tested. Methods: The retrospectively analyzed database included 7,523 patients of both sexes who underwent in-laboratory baseline polysomnography to investigate any complaint of disordered sleep, during 1 decade, between January 2000 and December 2009. Data on climate and air pollution were obtained from official organizations. AHI was the main outcome variable. Cosinor analysis, a statistical method for the investigation of time series, was used to detect seasonality. Results: The cosinor analysis confirmed the existence of a circannual pattern of AHI, with acrophase in winter and nadir during the summer. The seasonality is significant even after adjusting for sex, age, BMI, neck circumference, and relative air humidity. Median (25-75 interquartile range) AHI in the 6 months with colder weather was 17.8 (6.5-40.6/h), and in the warmer weather was 15.0 (5.7-33.2/h). The AHI correlated inversely with ambient temperature and directly with atmospheric pressure, relative air humidity, and carbon monoxide levels. Correlations with precipitation, particulate air matter < 10 μm, sulfur dioxide, and ozone were nonsignificant. Conclusions: More sleep-disordered breathing events were recorded in winter than in other seasons. Cosinor analysis uncovered a significant seasonal pattern in the AHI of different patients undergoing polysomnography, independent of sex, age, BMI, neck circumference, and relative air humidity. This finding suggests that obstructive sleep apnea severity may be associated with other seasonal epidemiologic phenomena. abstract_id: PUBMED:27218012 Extrahepatic Manifestations of Nonalcoholic Fatty Liver Disease. Nonalcoholic fatty liver disease (NAFLD) is the most common cause of chronic liver disease worldwide with an increased prevalence of metabolic, macro- and microvascular complications. The primary causes of mortality in NAFLD are cardiovascular disease (CVD), malignancy and liver disease. NAFLD is a multisystem disease that affects a variety of extra-hepatic organ systems. The main focus of this review is to summarize the reported extra-hepatic associations, which include CVD, chronic kidney disease, obstructive sleep apnea, osteoporosis, psoriasis, colorectal cancer, iron overload and various endocrinopathies (e.g. type 2 diabetes mellitus, thyroid dysfunction, and polycystic ovarian syndrome). Due to the systemic manifestations of NAFLD, patients require a multidisciplinary assessment and may benefit from more rigorous surveillance and early treatment interventions to decrease mortality related to malignancy or cardiometabolic diseases. abstract_id: PUBMED:25156766 Sleep in patients with restrictive lung disease. Restrictive lung disease leads to ventilatory defects and diffusion impairments. These changes may contribute to abnormal nocturnal pathophysiology, including sleep architecture disruption and impaired ventilation and oxygenation. Patients with restrictive lung disease may suffer significant daytime fatigue and dysfunction. Hypercarbia and hypoxemia during sleep may impact progression of lung disease and related symptoms.
Little is known about the impact of treatment of sleep disruption on sleep quality and overall prognosis in restrictive lung disease. This review discusses the pathophysiology of sleep and comorbid sleep disorders in restrictive lung diseases including interstitial lung disease, neuromuscular disease, and obesity hypoventilation syndrome. abstract_id: PUBMED:31234200 Sleep in Parkinson's disease. Sleep disturbances are common in Parkinson's disease and comprise the entire spectrum of sleep disorders. On the one hand, regulation of sleep and wakefulness is affected in Parkinson's disease, leading to the development of disorders, such as insomnia and daytime sleepiness. On the other hand, control of motor activity during sleep is impaired, with subsequent manifestation of parasomnias (mainly REM sleep behavior disorders, but also, albeit more rarely, sleepwalking, and overlap parasomnia). Restless legs syndrome has been reported to be frequent in patients with Parkinson's disease, although there is no consensus on whether it is more frequent in Parkinson's disease than in the general population. The same is true for sleep-related breathing disorders. Regarding the diagnosis of sleep disorders in patients with Parkinson's disease, one of the main challenges is correctly identifying excessive daytime sleepiness as there are many potential confounding factors; for example, it is necessary to distinguish sleep-related breathing disorders from medication effects, and to distinguish restless legs syndrome from the concomitant presence of potential mimics specific to Parkinson's disease, such as akathisia, nocturnal leg cramps, nocturnal hypokinesia, early morning dystonia, etc. The correct diagnosis of REM sleep behavior disorder is also not always easy, and video-polysomnography should be performed in order to exclude mimic-like movements at the end of sleep apneas or violent periodic leg movements of sleep. These aspects and specific considerations about diagnosis and treatment of sleep disorders in patients with Parkinson's disease will be reviewed. abstract_id: PUBMED:25041352 Heart failure and Alzheimer's disease. It has recently been proposed that heart failure is a risk factor for Alzheimer's disease. Decreased cerebral blood flow and neurohormonal activation due to heart failure may contribute to the dysfunction of the neurovascular unit and cause an energy crisis in neurons. This leads to the impaired clearance of amyloid beta and hyperphosphorylation of tau protein, resulting in the formation of amyloid beta plaques and neurofibrillary tangles. In this article, we will summarize the current understanding of the relationship between heart failure and Alzheimer's disease based on epidemiological studies, brain imaging research, pathological findings and the use of animal models. The importance of atherosclerosis, myocardial infarction, atrial fibrillation, blood pressure and valve disease as well as the effect of relevant medications will be discussed. abstract_id: PUBMED:26053902 Systemic manifestations of chronic obstructive pulmonary disease. Chronic obstructive pulmonary disease is a complex multisystem disease with comorbidities and systemic manifestations that affect respiratory symptoms, exacerbation frequency and mortality. This article gives an overview of these systemic manifestations and their importance, and offers strategies for managing them. abstract_id: PUBMED:24123966 Polysomnographic findings in infantile Pompe disease.
Infantile Pompe disease is a rare, autosomal recessive disorder due to deficiency of the enzyme acid α-glucosidase that degrades lysosomal glycogen. Clinical features of diffuse hypotonia, cardiomyopathy, and weakness are present within the first days to months of life in patients with classic infantile Pompe disease. Progression of the disease often leads to respiratory failure. Although sleep apnea is reported in late-onset Pompe disease, sleep pathology is not well characterized in infantile disease. In this retrospective study, we analyzed nocturnal polysomnography results from 17 patients with infantile-onset Pompe disease. Obstructive sleep apnea and hypoventilation were common among this cohort, even in those who did not have symptoms of sleep-disordered breathing. All patients with infantile-onset Pompe disease should undergo polysomnography as a routine part of their care. abstract_id: PUBMED:26814720 Chronic obstructive pulmonary disease comorbidities. Purpose Of Review: Classic descriptions of chronic obstructive pulmonary disease (COPD) centered on its impact on respiratory function. It is currently recognized that comorbidities contribute to the severity of symptoms and COPD progression. Understanding COPD-comorbidities associations could provide innovative treatment strategies and identify new mechanistic pathways to be targeted. Recent Findings: Some comorbidities are clustered with specific COPD phenotypes. There are stronger associations between airway-predominant disease and cardio-metabolic comorbidities, whereas in emphysema-predominant COPD sarcopenia and osteoporosis are frequent. These patterns suggest different inflammatory pathways acting by COPD phenotype. Osteoporosis is a major concern in COPD, particularly among men. Although β-blocker use for cardiac indications in COPD remains low, recent evidence suggests that this medication group could decrease COPD exacerbations. Gastroesophageal reflux is consistently associated with poor COPD outcomes, but mechanisms and impact of treatment are still unclear. Nontraditional comorbid conditions, such as cognitive impairment, anxiety, and depression, have a significant impact on COPD outcomes. Summary: Clinicians should screen their COPD patients for the presence of cardiovascular disease, diabetes, osteoporosis, sleep apnea, and sarcopenia, comorbidities for which specific treatments are available and associated with better COPD outcomes. The impact of interventions to treat gastroesophageal reflux disease, anxiety and depression is still to be defined. abstract_id: PUBMED:31464350 Associations between cerebrovascular risk factors and Parkinson disease. Objective: To determine whether cerebrovascular risk factors are associated with subsequent diagnoses of Parkinson disease, and whether these associations are similar in magnitude to those with subsequent diagnoses of Alzheimer disease. Methods: This was a retrospective cohort study using claims data from a 5% random sample of Medicare beneficiaries from 2008 to 2015. The exposures were stroke, atrial fibrillation, coronary disease, hyperlipidemia, hypertension, sleep apnea, diabetes mellitus, heart failure, peripheral vascular disease, chronic kidney disease, chronic obstructive pulmonary disease, valvular heart disease, tobacco use, and alcohol abuse. The primary outcome was a new diagnosis of idiopathic Parkinson disease. The secondary outcome was a new diagnosis of Alzheimer disease.
Marginal structural Cox models adjusting for time-dependent confounding were used to characterize the association between exposures and outcomes. We also evaluated the association between cerebrovascular risk factors and subsequent renal colic (negative control). Results: Among 1,035,536 Medicare beneficiaries followed for a mean of 5.2 years, 15,531 (1.5%) participants were diagnosed with Parkinson disease and 81,974 (7.9%) were diagnosed with Alzheimer disease. Most evaluated cerebrovascular risk factors, including prior stroke (hazard ratio = 1.55; 95% confidence interval = 1.39-1.72), were associated with the subsequent diagnosis of Parkinson disease. The magnitudes of these associations were similar, but attenuated, to the associations between cerebrovascular risk factors and Alzheimer disease. Confirming the validity of our analytical model, most cerebrovascular risk factors were not associated with the subsequent diagnosis of renal colic. Interpretation: Cerebrovascular risk factors are associated with Parkinson disease, an effect comparable to their association with Alzheimer disease. ANN NEUROL 2019;86:572-581. abstract_id: PUBMED:37495333 Pathobiology of Metabolic-Associated Fatty Liver Disease. Metabolic-associated fatty liver disease (MAFLD), formerly known as nonalcoholic fatty liver disease, is highly associated with the metabolic syndrome. Given its high heterogeneity in patients along with unpredictable clinical outcomes, MAFLD is difficult to diagnose and manage. MAFLD is associated with obesity, diabetes, metabolic derangements, lipid disorders, cardiovascular disorders, sleep apnea, sarcopenia, gut dysbiosis, and sex hormone-related disorders. Identification of risk factors is imperative in understanding disease heterogeneity and clinical presentation to reliably diagnose and manage patients. The complexity of MAFLD pathobiology is discussed in this review in relation to its association with common metabolic and nonmetabolic disorders. Answer: The hypothesis that sleep apnea may be seasonally affected, with a higher severity in winter, was tested in a study that analyzed a database of 7,523 patients who underwent in-laboratory baseline polysomnography over a decade. The study found a significant seasonal pattern in the apnea-hypopnea index (AHI), with the highest values (acrophase) occurring in winter and the lowest (nadir) during the summer. This seasonality remained significant even after adjusting for factors such as sex, age, BMI, neck circumference, and relative air humidity. The median AHI was higher in the colder months compared to the warmer months. Additionally, AHI correlated inversely with ambient temperature and directly with atmospheric pressure, relative air humidity, and carbon monoxide levels. The study concluded that more sleep-disordered breathing events were recorded in winter than in other seasons, suggesting that obstructive sleep apnea severity may be associated with seasonal epidemiologic phenomena (PUBMED:22700779). Based on this evidence, sleep apnea shows a seasonal variation with increased severity in the winter, which supports the notion of it being a "winter disease."
The correlations found between AHI and various environmental factors such as temperature, atmospheric pressure, and air quality components like carbon monoxide suggest that these elements could play a role in the exacerbation of sleep apnea symptoms during colder weather. The use of cosinor analysis, a statistical method for investigating time series, helped to confirm the existence of this circannual pattern. Despite potential confounding factors, the study was able to demonstrate that the seasonal variation in AHI was independent of demographic and physical characteristics like sex, age, BMI, and neck circumference (PUBMED:22700779). This evidence supports the concept that sleep apnea can be considered a "winter disease" due to its increased severity during that season.
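Since both the abstract and this answer lean on cosinor analysis, a minimal sketch of the technique may help. A single-harmonic cosinor fits AHI = M + A·cos(2π(t - t0)/365.25) by ordinary least squares on cosine and sine regressors, then recovers the amplitude A and the acrophase t0 from the two fitted coefficients. Everything below is synthetic, with parameter values assumed only to mimic a winter peak; it is not the cited study's data or code.

    import numpy as np

    rng = np.random.default_rng(1)
    t = rng.uniform(0, 3 * 365.25, 2000)       # study day of each sleep test
    omega = 2 * np.pi / 365.25                 # circannual frequency
    # Assumed truth: mesor 17, amplitude 2, acrophase near mid-January (day 15)
    ahi = 17 + 2 * np.cos(omega * (t - 15)) + rng.normal(0, 5, t.size)

    # Cosinor as linear least squares: AHI ~ M + b*cos(wt) + g*sin(wt)
    X = np.column_stack([np.ones_like(t), np.cos(omega * t), np.sin(omega * t)])
    mesor, b, g = np.linalg.lstsq(X, ahi, rcond=None)[0]

    amplitude = np.hypot(b, g)                           # seasonal swing in AHI
    acrophase_day = (np.arctan2(g, b) / omega) % 365.25  # day of the peak
    print(round(mesor, 1), round(amplitude, 1), round(acrophase_day))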
Instruction: Should bilateral internal thoracic artery grafting be used in elderly patients undergoing coronary artery bypass grafting? Abstracts: abstract_id: PUBMED:34958349 Bilateral internal thoracic artery coronary grafting: risks and benefits in elderly patients. Aims: Whether bilateral internal thoracic artery (BITA) grafting benefits elderly patients in coronary artery bypass grafting (CABG) remains unclear since they tend to have a limited life expectancy and severe comorbidities. We aim to evaluate the outcomes of BITA vs. single internal thoracic artery (SITA) grafting in elderly patients. Methods And Results: A meta-analysis was performed by database searching until May 2021. Studies comparing BITA and SITA grafting among elderly patients were included. One randomized controlled trial, nine propensity score-matched, and six unmatched studies were identified, with a total of 18,146 patients (7422 received BITA grafting and 10,724 received SITA grafting). Compared with SITA grafting, BITA grafting had a higher risk of deep sternal wound infection (DSWI) [odds ratio: 1.67; 95% confidence interval (CI): 1.22-2.28], and this risk could not be significantly reduced by the skeletonization technique. Meanwhile, BITA grafting was associated with a higher long-term survival [hazard ratio: 0.83; 95% CI: 0.77-0.90], except for the octogenarian subgroup. Reconstructed Kaplan-Meier survival curves revealed 4-year, 8-year, and 12-year overall survival rates of 85.5%, 66.7%, and 45.3%, respectively, in the BITA group and 79.3%, 58.6%, and 34.9%, respectively, in the SITA group. No significant difference was observed in early mortality, perioperative myocardial infarction, perioperative cerebral vascular accidents, or re-exploration for bleeding. Conclusion: Compared with SITA grafting, BITA grafting could provide a long-term survival benefit for elderly patients, although this benefit remained uncertain in octogenarians. Meanwhile, elderly patients who received BITA were associated with a higher risk of DSWI and such a risk could not be eliminated by the skeletonization technique. abstract_id: PUBMED:30952540 Is the era of bilateral internal thoracic artery grafting coming for diabetic patients? An updated meta-analysis. Objective: Because of an increased risk of sternal wound complications, the use of bilateral internal thoracic artery grafting in diabetic patients remains controversial. The objective of the present meta-analysis is to compare the safety and efficacy of single internal thoracic artery and bilateral internal thoracic artery grafting in the diabetic population. Methods: Four electronic databases, including PubMed, the Cochrane Library, Embase, and ISI Web of Knowledge, were comprehensively searched. Prospective randomized trials or observational studies comparing single internal thoracic artery and bilateral internal thoracic artery were considered eligible for the current study. Results: A literature search yielded 1 randomized controlled trial and 17 observational studies (129,871 diabetic patients: 124,233 single internal thoracic arteries and 5638 bilateral internal thoracic arteries). Pooled analysis demonstrated that the overall incidence of deep sternal wound infection in the bilateral internal thoracic artery grafting group was significantly higher than in the single internal thoracic artery grafting group (3.26% for bilateral internal thoracic artery vs 1.70% for single internal thoracic artery).
No significant difference was found between both groups in terms of risk of deep sternal wound infection when the skeletonized harvesting technique was adopted. Furthermore, in-hospital mortality was comparable between both groups (2.80% for bilateral internal thoracic artery vs 2.36% for single internal thoracic artery). However, compared with single internal thoracic artery grafting, bilateral internal thoracic artery grafting could confer a lower risk for long-term overall mortality (hazard ratio, 1.41; 95% confidence interval, 1.18-1.67; P < .001; I² = 63%) and cardiac mortality (hazard ratio, 3.15; 95% confidence interval, 2.23-4.46; P < .001; I² = 0%). Conclusions: Compared with single internal thoracic artery grafting, bilateral internal thoracic artery grafting is associated with enhanced long-term survival among diabetic patients. Skeletonization of bilateral internal thoracic artery is not associated with an increased risk of deep sternal wound infection. Therefore, surgeons should be encouraged to adopt bilateral internal thoracic artery grafting in a skeletonized manner more routinely in diabetic patients. abstract_id: PUBMED:35967226 Minimally invasive coronary artery bypass grafting with ultrasonically skeletonized internal thoracic artery. Objective: This study aimed to examine the feasibility and safety of minimally invasive cardiac surgery coronary artery bypass grafting using an ultrasonically skeletonized internal thoracic artery in the authors' initial experience. Methods: From February 2012 to May 2021, 247 consecutive patients who underwent minimally invasive coronary artery bypass grafting using an ultrasonically skeletonized internal thoracic artery were reviewed retrospectively. Internal thoracic arteries were harvested in a full skeletonized fashion using an ultrasonic scalpel via left minithoracotomy. Bilateral internal thoracic arteries were used in 108 patients, and the internal thoracic arteries as in situ grafts were used in 393 anastomoses. Total arterial revascularization was performed in 126 patients, and 142 patients underwent aortic nontouch minimally invasive coronary artery bypass grafting. Results: The patients' mean (range) age was 65.9 ± 11.5 (30-90) years. The mean (range) number of anastomoses performed was 2.6 ± 1.1 (1-6). Forty-six patients (18.6%) had 4 grafts, 94 patients (38.1%) had 3 grafts, and 60 patients (24.3%) had 2 grafts. Minimally invasive coronary artery bypass grafting was completed without conversion to sternotomy in all patients. Cardiopulmonary bypass was performed in 3 patients (1.2%), reinterventions due to bleeding were performed in 7 patients (2.8%), and chest wound infections were observed in 5 patients (2.0%). There was 1 (0.4%) mortality. Conclusions: Minimally invasive coronary artery bypass grafting using an ultrasonically skeletonized internal thoracic artery is feasible and has shown good perioperative outcomes. This approach has the potential for further optimization with revascularization strategies. abstract_id: PUBMED:31638700 Bilateral internal thoracic artery grafting: propensity analysis of the left internal thoracic artery versus the right internal thoracic artery as a bypass graft to the left anterior descending artery. Objectives: To compare different configurations of the bilateral internal thoracic arteries for the left coronary system and examine early and late outcomes, including mid-term graft patency.
Methods: We reviewed 877 patients who underwent primary isolated coronary artery bypass grafting using in situ bilateral internal thoracic arteries [in situ right internal thoracic artery (RITA)-to-left anterior descending artery (LAD) grafting, n = 683; in situ left internal thoracic artery (LITA)-to-LAD grafting, n = 194]. We compared mid-term patency between the grafts. Propensity score matching was performed to investigate early and long-term outcomes. Results: The 2-year patency rates for RITA-to-LAD and LITA-to-LAD grafts were similar. Multivariate analysis revealed that RITA-to-non-LAD anastomosis (P = 0.029), postoperative length of stay (P = 0.003) and chronic obstructive pulmonary disease (P = 0.005) were associated with graft failure. After statistical adjustment, 176 propensity-matched pairs were available for comparison. RITA-to-LAD grafting enabled a more distal anastomosis. Kaplan-Meier analysis revealed that the incidences of death, repeat revascularization and myocardial infarction were significantly higher in the LITA-to-LAD group among both the unmatched and matched samples (P = 0.045 and 0.029, respectively). Conclusions: The mid-term patency and outcomes of RITA-to-LAD grafting are good, and it reduces future cardiac events compared with LITA-to-LAD grafting. abstract_id: PUBMED:25841772 Off-pump Bilateral Internal Thoracic Artery Grafting. Background: The long-term benefit of bilateral internal thoracic grafts (BITA) is well established. BITA grafting is often avoided in diabetic, female, obese, elderly and other high-risk patients because of concerns for deep sternal wound infection. The objective of this study is to analyse early results of our BITA grafting and to establish the safety of BITA use in all patients. Methods: All cases of isolated consecutive unselected CABG were included in this retrospective study. BITA were used in-situ - one was used to graft left anterior descending artery and the other was used as inflow for a composite graft with radial artery. Dual inflow using in situ BITA grafting allows easy bailout by using an additional vein graft. Results: BITA was used in 574 patients out of 602 (95.35%). Incidence of early death was 1.33% (8/602), stroke 0.5% (3/602), reoperation for bleeding 0.17% (1/602). Deep sternal wound infection was not seen in any patient but nine patients (1.5%) had superficial wound infection which healed with dressing. Conclusion: We have used BITA in 95% of our unselected, consecutive off-pump CABG patients without any major wound complications. Our limited experience has shown off-pump CABG using BITA grafting can be safely adopted routinely with excellent early results. abstract_id: PUBMED:38336822 Off-pump bilateral internal thoracic artery grafting in patients with left main coronary artery disease. Background: To compare postoperative outcomes in patients with left main coronary artery disease who underwent off-pump isolated coronary artery bypass grafting for multivessel disease using either skeletonized bilateral or single internal thoracic artery (ITA). Methods: Among 1583 patients who underwent isolated coronary artery bypass grafting (CABG) in our hospital between 2002 and 2022, 604 patients with left main coronary artery disease underwent single (n = 169) or bilateral (n = 435) ITA grafting. We compared postoperative outcomes between the two groups after adjusting preoperative characteristics using inverse probability of treatment weighting.
Results: After adjustment using the inverse probability of treatment weighting method, the sum of weights was 599.74 in the BITA group and 621.64 in the SITA group. There was no significant difference in postoperative deep sternal wound infection (p = 0.227) and 30-day mortality (p = 0.612). Follow-up was completed in 98.7% (596/604) of the patients, and the mean follow-up duration was 6.7 years. At 10 years, the overall survival following bilateral versus single ITA grafting was 71.2% and 60.6%, respectively (log-rank test, p = 0.040), and freedom from major adverse cardiac and cerebrovascular events (MACCE) was 63.3% and 46.3%, respectively (log-rank test, p = 0.008). In multivariate Cox proportional hazard models, bilateral ITA grafting was significantly associated with a lower risk of all-cause death (hazard ratio [HR]: 0.706, 95% confidence interval [CI]: 0.504-0.987; p = 0.042) and MACCE (HR: 0.671, 95% CI: 0.499-0.902; p = 0.008). Conclusions: Bilateral skeletonized ITA grafting is associated with lower rates of all-cause death and MACCE than single ITA grafting in patients with left main coronary artery disease undergoing off-pump CABG. abstract_id: PUBMED:35012781 Additional survival benefit of bilateral in situ internal thoracic artery grafting with composite radial artery graft in total arterial off-pump coronary artery bypass grafting. Objective: This study aimed to elucidate whether the use of bilateral internal thoracic arteries (BITAs) confers additional survival benefits compared with a single internal thoracic artery (SITA) in total arterial grafting with the radial artery. Methods: Between 2002 and 2016, 617 patients underwent bilateral in situ internal thoracic artery grafting with the radial artery as a composite I-graft (BITA-I group) and 516 patients underwent single in situ internal thoracic artery grafting with the radial artery as a composite Y-graft (SITA-Y group). All anastomoses were performed without cardiopulmonary bypass and aortic manipulation. Propensity score matching was performed to adjust covariates and compare the outcomes between the 2 groups. Subanalysis was also performed to evaluate the effects of the BITA-I group on survival according to the covariates using Cox proportional hazards regression analysis. Results: Propensity score matching yielded 348 well-matched pairs. Early postoperative outcomes were similar in the 2 groups. The BITA-I group showed significantly better survival than the SITA-Y group (79.3% vs 70.2% at 10 years, P = .015). The subanalysis revealed significantly better survival in the BITA-I group among overall patients (hazard ratio, 0.68; 95% confidence interval, 0.49-0.93). There was a significant positive effect on survival in the BITA-I group among patients without comorbidities or those aged <77 years. Conclusions: BITA grafting with the radial artery provides better long-term survival than SITA grafting with the radial artery, which is enhanced among patients aged <77 years with minimum comorbidities. abstract_id: PUBMED:37576079 Coronary artery bypass grafting using bilateral internal thoracic arteries in patients with diabetes and obesity: A systematic review and meta-analysis. Background: Patients with diabetes and obesity are at higher risk of adverse long-term outcomes following coronary artery bypass grafting.
The use of bilateral internal thoracic arteries (BITA) can potentially offer survival benefit in higher risk patients compared to single internal thoracic artery (SITA), but BITA is not routinely used due to a lack of clear evidence of efficacy and concerns over sternal wound complications. Methods: Medline, Embase and the Cochrane Library were searched for studies comparing the efficacy and safety of BITA and SITA grafting in patients with diabetes and obesity. Meta-analysis of mortality and sternal wound complications was performed. Results: We identified eight observational and ten propensity matched studies, and one RCT, comparing BITA and SITA which included patients with diabetes (n = 19,589); two propensity matched studies and one RCT which included patients with obesity (n = 6,972); mean follow up was 10.5 and 11.3 years respectively. Meta-analysis demonstrated a mortality reduction for BITA compared to SITA in patients with diabetes (risk ratio [RR] 0.79; 95% confidence interval [CI] 0.70-0.90; p = 0.0003). In patients with obesity there was a non-significant reduction in mortality in the BITA group (RR 0.73, 95% CI 0.47-1.12; p = 0.15). There was a significantly higher rate of sternal wound complications following BITA observed in patients with diabetes (RR 1.53, 95% CI 1.23-1.90; p = 0.0001) and obesity (RR 2.24, 95% CI 1.63-3.07; p < 0.00001). Conclusions: BITA is associated with better long-term survival in patients with diabetes. The effects of BITA grafting in patients with obesity are uncertain. BITA is associated with higher rates of sternal wound complications compared to SITA in both patients with diabetes and obesity. abstract_id: PUBMED:29054190 Bilateral internal thoracic artery grafting in India-Time to raise the bar. Objectives: Bilateral internal thoracic artery (BITA) grafting strategy is the current trend in coronary artery bypass grafting for multivessel coronary artery disease. Although better long-term outcomes have been shown, BITA grafting is underutilized as a main strategy for revascularization by most of the surgeons. The survey was conducted to ascertain the current usage and concerns of BITA grafting in India. Methods: A database of 856 Indian cardiac surgeons currently with predominantly adult practice was prepared and a questionnaire was sent about use of single and bilateral ITA grafts and BITA grafting strategy in different clinical scenarios. Results: A total of 112 surgeons (13.08%) from 75 institutions responded and 92 surgeons (10.7%) completed the survey. Single ITA is used by 79% of surgeons in more than 90% of their patients. 31% and 29% of surgeons use BITA grafting in 5-10% and 11-98% of their patients respectively. 53% of surgeons avoided the usage of BITA grafting in patients with smoking, 35% of surgeons in chronic obstructive pulmonary disease, 58% of surgeons in obesity and 62% of surgeons in acute coronary syndrome, 36% of surgeons in patients with left ventricular dysfunction and 61% of surgeons in patients with poor coronary anatomy. The concerns for BITA usage are risk of deep sternal wound infection (DSWI) (40%), increased operative time (27%), unknown superiority (12%) and limited length of right ITA (5%). Conclusions: The usage of BITA grafting is restricted to 10% in India and main concerns are DSWI and increased operative time. abstract_id: PUBMED:33388437 Bilateral internal thoracic artery use in coronary artery bypass grafting in the post-ART era - Perspective.
There is still a lack of convincing evidence about the superiority of bilateral internal thoracic artery (BITA) use in coronary artery bypass grafting (CABG) and BITA grafts continue to be underutilized. The Arterial Revascularization Trial (ART) did not demonstrate the superiority of BITA versus single ITA grafting after 10 years. We have reviewed the most recent literature and assessed the current status as well as the indications of BITA grafting in the post-ART era. We believe that BITA grafting is not appropriate for all patients, especially in light of the findings of ART. However, the use of BITA is justified in patients of younger age and those without comorbidities (poorly controlled diabetes, obesity, chronic obstructive pulmonary disease, previous mediastinal irradiation, long-term steroid use, elderly women). Further prospective randomized studies with long-term follow-up are needed to validate the benefits of BITA grafting. Answer: The use of bilateral internal thoracic artery (BITA) grafting in elderly patients undergoing coronary artery bypass grafting (CABG) is a subject of ongoing debate. A meta-analysis aimed at evaluating the outcomes of BITA versus single internal thoracic artery (SITA) grafting in elderly patients found that BITA grafting was associated with a higher risk of deep sternal wound infection (DSWI), but also with a higher long-term survival benefit, except for the octogenarian subgroup (PUBMED:34958349). This suggests that while there are benefits to BITA grafting in terms of survival, the increased risk of DSWI is a significant concern, particularly in elderly patients who may have severe comorbidities. A study focusing on diabetic patients, who are at increased risk of sternal wound complications, found that BITA grafting is associated with enhanced long-term survival, and when the skeletonized harvesting technique is used, the risk of DSWI is not significantly increased (PUBMED:30952540). This indicates that with careful surgical technique, the risks associated with BITA grafting can be mitigated to some extent in certain high-risk populations. A study on minimally invasive CABG using ultrasonically skeletonized internal thoracic artery reported good perioperative outcomes, suggesting that less invasive approaches might reduce complications (PUBMED:35967226). However, this study did not specifically focus on elderly patients. In contrast, a study comparing different configurations of BITA for the left coronary system found that RITA-to-LAD grafting reduces future cardiac events compared to LITA-to-LAD grafting (PUBMED:31638700), which may be relevant when considering grafting strategies in elderly patients. An analysis of off-pump BITA grafting showed that it can be safely adopted with excellent early results and without major wound complications, even in high-risk patients (PUBMED:25841772). Similarly, another study found that off-pump bilateral skeletonized ITA grafting is associated with lower rates of all-cause death and major adverse cardiac and cerebrovascular events (MACCE) than single ITA grafting in patients with left main coronary artery disease (PUBMED:38336822).
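A quick way to sanity-check the pooled hazard ratios quoted above (e.g. HR 1.41, 95% CI 1.18-1.67, P < .001) is to recover the log-scale standard error from the reported interval. The sketch below is illustrative only and is not taken from any of the cited papers; the function name is mine, and it assumes the usual symmetric normal confidence interval on the log scale.

```python
import math

def hr_summary(hr, ci_low, ci_high, z_crit=1.96):
    """Recover the log-HR, its standard error, and a z-statistic from a
    reported hazard ratio and 95% CI (symmetric-on-log-scale assumption)."""
    log_hr = math.log(hr)
    # The CI spans 2 * z_crit standard errors on the log scale.
    se = (math.log(ci_high) - math.log(ci_low)) / (2 * z_crit)
    return log_hr, se, log_hr / se

log_hr, se, z = hr_summary(1.41, 1.18, 1.67)
print(f"log-HR = {log_hr:.3f}, SE = {se:.3f}, z = {z:.2f}")
# z comes out around 3.9, consistent with the reported P < .001
```

The same check applies to the risk ratios in the diabetes/obesity meta-analysis: an interval whose bounds straddle 1 (e.g. RR 0.73, 95% CI 0.47-1.12) corresponds to a non-significant effect, exactly as reported.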
Instruction: Does use of the CPREzy involve more work than CPR without feedback? Abstracts: abstract_id: PUBMED:18374466 Does use of the CPREzy involve more work than CPR without feedback? Aim: Feedback during CPR may facilitate quality in chest compressions, but has also been associated with caregiver complaints such as stiff wrists, the need for more force and increased fatigue. This concern about extra work is, when using the CPREzy with its own spring-loaded surface, particularly relevant in the face of an increased number of successive compressions. This manuscript evaluates the objective workloads for caregivers with and without the CPREzy. Materials And Methods: An air pressure driven, piston device was used to generate controlled compressions in a manikin model. The pressure was applied for chest compressions with each of the following: the cylindrical end of the piston, a wooden block as dummy for the CPREzy, and the CPREzy itself. Three manikins with subjectively different spring compliances were selected for the tests. Series of 20 compressions were performed over a wide range of pressures. Results: No additional force is required to achieve a given depth of compression with or without the CPREzy. However, some additional work is required, ranging from 21 to 26.5%. This work is caused by the longer compression distance associated with the need to compress two springs (e.g. the CPREzy and the chest wall) instead of one (e.g. the chest wall). Conclusion: The subjective feeling of increased rescuer fatigue with the CPREzy can, at least in part, be attributed to the extra work required for compressing the spring of the CPREzy. Improved accuracy in chest compression depth is likely to be another, more significant, factor in rescuer fatigue. abstract_id: PUBMED:16457935 The quality of chest compressions by trained personnel: the effect of feedback, via the CPREzy, in a randomized controlled trial using a manikin model. Even after training, the ability to perform effective cardiac compressions has been found to be poor and to decrease rapidly. We assessed this ability with and without a non-invasive feedback device, the CPREzy, during a 270-s CPR session in an unannounced, single-blinded manikin study using 224 hospital employees and staff chosen at random and using a non-crossover design. The two groups self-assessed their knowledge and skills as adequate. However, the control group (N=111) had significantly more difficulty in delivering chest compressions deeper than 4 cm (25 versus 1 candidate in the CPREzy group), P=0.0001. The control group compressed ineffectively in 36% (±41%) of all compressions as opposed to 6 ± 13% in the CPREzy group (N=112, P=0.0001). If compressions were effective initially, the time until >50% of compressions were less than 4 cm deep was 75 ± 81 s in the control group versus 194 ± 87 s in the CPREzy group (P=0.0001 [-180 to -57.5]). After a few seconds of training in its use, our candidates used the CPREzy effectively. Against the background knowledge that estimation of compression depth by the rescuer or other team members is difficult, and that performing effective compressions is the cornerstone of any resuscitation attempt, our data suggest that a feedback device such as the CPREzy should be used consistently during resuscitation. abstract_id: PUBMED:34223369 Do automated real-time feedback devices improve CPR quality? A systematic review of literature.
Aim: Automated real-time feedback devices have been considered a potential tool to improve the quality of cardiopulmonary resuscitation (CPR). Despite previous studies supporting the usefulness of such devices during training, others have conflicting conclusions regarding their efficacy during real-life CPR. This systematic review aimed to assess the effectiveness of automated real-time feedback devices for improving CPR performance during training, simulation and real-life resuscitation attempts in the adult and paediatric population. Methods: Articles published between January 2010 and November 2020 were searched from BVS, Cinahl, Cochrane, PubMed and Web of Science, and reviewed according to a pre-defined set of eligibility criteria which included healthcare providers and randomised controlled trial studies. CPR quality was assessed based on guideline compliance for chest compression rate, chest compression depth and residual leaning. Results: The selection strategy led to 19 eligible studies, 16 in training/simulation and three in real-life CPR. Feedback devices during training and/or simulation resulted in improved acquisition of skills and enhanced performance in 15 studies. One study resulted in no significant improvement. During real resuscitation attempts, three studies demonstrated significant improvement with the use of feedback devices in comparison with standard CPR (without a feedback device). Conclusion: The use of automated real-time feedback devices enhances skill acquisition and CPR performance during training of healthcare professionals. Further research is needed to better understand the role of feedback devices in the clinical setting. abstract_id: PUBMED:28761704 CPR performance in the presence of audiovisual feedback or football shoulder pads. Objective: The initiation of cardiopulmonary resuscitation (CPR) can be complicated by the use of protective equipment in contact sports, and the rate of success in resuscitating the patient depends on the time from incident to start of CPR. The aim of our study was to see if (1) previous training, (2) the presence of audiovisual feedback and (3) the presence of football shoulder pads (FSP) affected the quality of chest compressions. Methods: Six basic life support certified athletic training students (BLS-ATS), six basic life support certified emergency medical service personnel (BLS-EMS) and six advanced cardiac life support certified emergency medical service personnel (ACLS-EMS) participated in a crossover manikin study. A quasi-experimental repeated measures design was used to measure the chest compression depth (cm), rate (cpm), depth accuracy (%) and rate accuracy (%) under four different conditions by using feedback and/or FSP. Real CPR Help manufactured by ZOLL (Chelmsford, Massachusetts, USA) was used for the audiovisual feedback. Three participants from each group performed 2 min of chest compressions at baseline first, followed by compressions with FSP, with feedback and with both FSP and feedback (FSP+feedback). The other three participants from each group performed compressions at baseline first, followed by compressions with FSP+feedback, feedback and FSP. Results: CPR performance did not differ between the groups at baseline (median (IQR), BLS-ATS: 5.0 (4.4-6.1) cm, 114 (96-131) cpm; BLS-EMS: 5.4 (4.1-6.4) cm, 112 (99-131) cpm; ACLS-EMS: 6.4 (5.7-6.7) cm, 138 (113-140) cpm; depth p=0.10, rate p=0.37).
A statistically significant difference in the percentage of depth accuracy was found with feedback (median (IQR), 13.8 (0.9-49.2)% vs 69.6 (32.3-85.8)%; p=0.0002). Rate accuracy changed from 17.1 (0-80.7)% without feedback to 59.2 (17.3-74.3)% with feedback (p=0.50). The use of feedback was effective for depth accuracy, especially in the BLS-ATS group, regardless of the presence of FSP (median (IQR), 22.0 (7.3-36.2)% vs 71.3 (35.4-86.5)%; p=0.0002). Conclusions: The use of audiovisual feedback positively affects the quality of the depth of CPR. Both feedback and FSP do not alter the rate measurements. Medically trained personnel are able to deliver the desired depth regardless of the presence of FSP even though a shallower chest compression depth can be seen in CPR with FSP. A feedback device must be introduced into the athletic training settings. abstract_id: PUBMED:29525478 CPR feedback/prompt device improves the quality of hands-only CPR performed in manikin by laypersons following the 2015 AHA guidelines. Purpose: We investigated the effects of a cardiopulmonary resuscitation (CPR) feedback/prompt device on the quality of chest compression (CC) during hands-only CPR following the 2015 AHA guidelines. Methods: A total of 124 laypersons were randomly assigned into three groups. The first (n=42) followed the 2010 guidelines, the second (n=42) followed the 2015 guidelines with no feedback/prompt device, the third (n=40) followed the 2015 guidelines with a feedback/prompt device (2015F). Participants underwent manual CPR training and took a written basic life support examination, and were then required to perform 2 min of hands-only CPR monitored by a CPR feedback/prompt device. The quality of CPR was quantified as the percentage of correct CCs (mean CC depth and rate, complete recoil and chest compression fraction (CCF)) per 20 s, as recorded by the CPR feedback/prompt device. Results: Significantly higher correct ratios of CC, CC depth, and rate were achieved in the 2010 group in each minute vs the 2015 group. Greater mean CC depth and rate were observed in the 2015F group vs the 2015 group. The correct ratio of CC was significantly higher in the 2015F group vs the 2015 group. CCF was also significantly higher in the 2015F group vs the 2015 group in the last 20 s of CPR. Conclusions: It is difficult for a large percentage of laypersons to achieve the targets of CC depth and rate following the 2015 AHA guidelines. CPR feedback/prompt devices significantly improve the quality of hands-only CPR performance by laypersons following the standards of the 2015 AHA guidelines.
New technologies promise to improve the resuscitation process: mechanical devices standardize chest compressions, capnography guides resuscitation efforts and signals the return of spontaneous circulation, and intraosseous devices minimize interruptions to gain vascular access. This review aims at identifying a discrete group of interventions that health-care systems can use to raise their standard of cardiac resuscitation. abstract_id: PUBMED:34223393 Can hospital adult code-teams and individual members perform high-quality CPR? A multicenter simulation-based study incorporating an educational intervention with CPR feedback. Aims: A multicenter simulation-based research study to assess the ability of interprofessional code-teams and individual members to perform high-quality CPR (HQ-CPR) at baseline and following an educational intervention with a CPR feedback device. Methods: Five centers recruited ten interprofessional teams of AHA-certified adult code-team members with a goal of 200 participants. Baseline testing of chest compression (CC) quality was measured for all individuals. Teams participated in a baseline simulated cardiac arrest (SCA) where CC quality, chest compression fraction (CCF), and peri-shock pauses were recorded. Teams participated in a standardized HQ-CPR and abbreviated TeamSTEPPS® didactic, then engaged in deliberate practice with a CPR feedback device. Individuals were assessed to determine if they could achieve ≥80% combined rate and depth within 2020 AHA guidelines. Teams completed a second SCA and CPR metrics were recorded. Feedback was disabled for assessments except at one site where real-time CPR feedback was the institutional standard. Linear regression models were used to test for site effect and paired t-tests to evaluate significant score changes. Logistic univariate regression models were used to explore characteristics associated with the individual achieving competency. Results: Data from 184 individuals and 45 teams were analyzed. Baseline HQ-CPR mean score across all sites was 18.5% for individuals and 13.8% for teams. Post-intervention HQ-CPR mean score was 59.8% for individuals and 37.0% for teams. There was a statistically significant improvement in HQ-CPR mean scores of 41.3% (36.1, 46.5) for individuals and 23.2% (17.1, 29.3) for teams (p < 0.0001). CCF increased at 3 out of 5 sites and there was a mean 5-s reduction in peri-shock pauses (p < 0.0001). Characteristics with a statistically significant association were height (p = 0.01) and the number of times CPR had been performed (p = 0.01). Conclusion: Code-teams and individuals struggle to perform HQ-CPR but show improvement after deliberate practice with feedback as part of an educational intervention. Only one site that incorporated real-time CPR feedback devices routinely achieved ≥80% HQ-CPR. abstract_id: PUBMED:24215730 Comparing three CPR feedback devices and standard BLS in a single rescuer scenario: a randomised simulation study. Background: Efficiently performed basic life support (BLS) after cardiac arrest is proven to be effective. However, cardiopulmonary resuscitation (CPR) is strenuous and rescuers' performance declines rapidly over time. Audio-visual feedback devices reporting CPR quality may prevent this decline. We aimed to investigate the effect of various CPR feedback devices on CPR quality.
Methods: In this open, prospective, randomised, controlled trial we compared three CPR feedback devices (PocketCPR, CPRmeter, iPhone app PocketCPR) with standard BLS without feedback in a simulated scenario. 240 trained medical students performed single rescuer BLS on a manikin for 8 min. Effective compression (compressions with correct depth, pressure point and sufficient decompression) as well as compression rate, flow time fraction and ventilation parameters were compared between the four groups. Results: Study participants using the PocketCPR performed 17±19% effective compressions compared to 32±28% with CPRmeter, 25±27% with the iPhone app PocketCPR, and 35±30% applying standard BLS (PocketCPR vs. CPRmeter p=0.007, PocketCPR vs. standard BLS p=0.001, others: ns). PocketCPR and CPRmeter prevented a decline in effective compression over time, but overall performance in the PocketCPR group was considerably inferior to standard BLS. Compression depth and rate were within the range recommended in the guidelines in all groups. Conclusion: While we found differences between the investigated CPR feedback devices, overall BLS quality was suboptimal in all groups. Surprisingly, effective compression was not improved by any CPR feedback device compared to standard BLS. All feedback devices caused substantial delay in starting CPR, which may worsen outcome. abstract_id: PUBMED:36863107 Using real-time device-based visual feedback in CPR recertification programs: A prospective randomised controlled study. Background: Receiving regular training to maintain CPR skills is critical for in-service health-care professionals, especially because motor skills diminish over time. Objectives: To compare the effects of real-time device-based visual feedback and conventional instructor-based feedback on the chest compression skills and self-efficacy of nurses receiving a CPR recertification program. Design: A prospective randomised controlled study with repeated measurements was conducted according to the CONSORT 2010 guidelines. Methods: A total of 109 nurses were recruited, and 98 nurses were eligible for random allocation. The control group (CG, n = 49) was advised by instructors for skill correction, and the experimental group (EG, n = 49) adjusted their skills according to on-screen real-time feedback data. The study outcomes were CPR performance metrics and self-efficacy that were assessed immediately after the training session (T1) and retested after 12 weeks (T2). Results: In the EG, the percentage of the appropriate rate, depth, and chest recoil at T1 significantly improved by 24.47% (P < .001), 19.63% (P < .001), and 11.52% (P = .001), respectively. The EG exhibited significantly higher chest compression total scores at T1, and the difference remained significant at T2 (P < 0.001). Moreover, the self-efficacy in the EG significantly improved at T1 (2.76; P < .001) and T2 (2.58; P < .001). Conclusion: Compared with instructor-based feedback, real-time device-based visual feedback improved chest compression quality and CPR self-efficacy.
Twenty medical student volunteers were randomised using a crossover trial design to perform 3 min of continuous ECC using CPREzy and standard CPR. There was a significant improvement in ECC depth with CPREzy compared to standard CPR: 42.9 (4.4) mm versus 34.2 (7.6) mm, P = 0.001; 95% CI d.f. 4.4-12.9 mm. This translated to a reduction in the percentage of shallow compressions (<38 mm) with CPREzy 16 (23)% compared to standard CPR 59 (44)%, P = 0.003. There was a small increase in the percentage of compression regarded excessive (>51 mm): CPREzy 6.5 (19)% versus standard CPR 0 (0.1)%, P = 0.012. There was no difference in compression rate or duty cycle between techniques. Equal numbers of participants (40% in each group) performed one or more incorrectly placed chest compressions. However, the total number of incorrect compressions was higher for the CPREzy group (26% versus 3.9% standard CPR, P < 0.001). This was due to a higher number of low compressions (26% of total compressions for CPREzy versus 1% for standard CPR, P < 0.001). In conclusion, CPREzy was associated with significant improvements in ECC performance. Further animal and clinical studies are required to validate this finding in vivo and to see if it translates to an improvement in outcome in human victims of cardiac arrest. Answer: Yes, the use of the CPREzy involves more work than CPR without feedback. The manuscript evaluating the objective workloads for caregivers with and without the CPREzy found that although no additional force is required to achieve a given depth of compression with or without the CPREzy, some additional work is required, ranging from 21 to 26.5%. This extra work is attributed to the longer compression distance associated with the need to compress two springs (the CPREzy and the chest wall) instead of one (the chest wall alone). The subjective feeling of increased rescuer fatigue with the CPREzy can be attributed to the extra work required for compressing the spring of the CPREzy, as well as the improved accuracy in chest compression depth which is likely a more significant factor in rescuer fatigue (PUBMED:18374466).
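The extra-work finding has a simple mechanical reading. If the chest and the CPREzy spring are treated as two linear springs in series, both experience the same force, so compressing the chest to a given depth also stores energy in the device spring. The sketch below is a minimal model of my own, not the paper's method, and the stiffness values are hypothetical; under these assumptions the extra-work fraction is just the stiffness ratio k_chest / k_device, so the reported 21-26.5% would correspond to a device spring roughly four to five times stiffer than the manikin chest.

```python
def extra_work_fraction(k_chest, k_device):
    """Two linear springs in series under the same force F = k_chest * d:
    W_chest  = 0.5 * k_chest * d**2
    W_device = 0.5 * F**2 / k_device = W_chest * (k_chest / k_device)
    so the extra work relative to chest-only compression is k_chest / k_device.
    """
    return k_chest / k_device

# Hypothetical stiffnesses: a ~5 kN/m manikin chest and a ~20 kN/m device
# spring give 25% extra work, inside the reported 21-26.5% range.
print(f"{extra_work_fraction(5e3, 20e3):.0%}")
```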
Instruction: Do patients with systemic sclerosis have abnormal gallbladder function? Abstracts: abstract_id: PUBMED:9262975 Do patients with systemic sclerosis have abnormal gallbladder function? Objective: To determine gallbladder motility in patients with systemic sclerosis. Design: Case-control study. Setting: University hospital, out-patient department of rheumatology. Patients: Ten patients with systemic sclerosis according to the criteria of the American Rheumatism Association with documented involvement of the gastrointestinal tract and 10 healthy controls matched for age, sex and body mass index. Intervention: Cephalic vagal cholinergic stimulation by modified sham feeding and hormonal stimulation by infusion of cholecystokinin. Measurements: Gallbladder volume obtained by ultrasonography and determination of plasma cholecystokinin concentrations. Results: Fasting gallbladder volumes were not significantly different between patients with systemic sclerosis and controls (19.6 ± 1.9 cm3 and 23.3 ± 2.9 cm3, respectively, mean plus or minus standard error of the mean). Neither were there significant differences in reduction of gallbladder volume in response to modified sham feeding (35 ± 4% and 33 ± 4%, respectively) nor during cholecystokinin infusion (56 ± 4% and 60 ± 6%, respectively). The increase in plasma cholecystokinin levels during infusion was not different in the two groups. Conclusion: Gallbladder motility in patients with systemic sclerosis is preserved in response to both cholinergic and hormonal stimulation, even when other gastrointestinal motor disturbances are present. These results suggest that patients with systemic sclerosis are not at increased risk for cholelithiasis because of gallbladder dysmotility. abstract_id: PUBMED:8853227 Gallbladder motility in systemic sclerosis. In 20 patients with systemic sclerosis (SSc) and 24 healthy controls, gallbladder motility was evaluated by abdominal ultrasonography after stimulation by a standard liquid meal. Results from patients with normal and disturbed oesophageal function were analysed separately in order to investigate the significance of gallbladder motility as a parameter for gastrointestinal involvement in SSc. All patients showed a marked decrease in gallbladder size after stimulation (patients 61 ± 13%; controls 48 ± 12%). Patients with oesophageal dysfunction (n = 12) had a slightly lower gallbladder contraction (maximal decrease = 58 ± 13%) when compared to patients with normal oesophageal function (n = 8; 66 ± 13%); however, this difference was not statistically significant. Gallbladder motility in patients with SSc was not reduced when compared with healthy controls. SSc-induced oesophageal dysfunction was not associated with impaired gallbladder motility. Thus, measurement of gallbladder emptying is not a helpful tool when looking for gastrointestinal involvement in SSc. abstract_id: PUBMED:8351230 Volume of the functioning gallbladder and its emptying in patients with progressive systemic sclerosis. The aim of the study was to measure the gallbladder volume (GV) and its emptying after meal stimulus in patients with progressive systemic sclerosis (PSS). 18 women with PSS, aged 49.8 ± 9.5 yr, participated in the study. Control values were obtained from 30 healthy women aged 26.5 ± 3.5 yr. The duration of PSS symptoms ranged from 2 to 25 yr.
Subjects with gallstones or cholecystitis as well as those receiving drugs affecting the gastroduodenal motor function were excluded from the study. GV was measured according to Dodds et al. (1985) with the real-time sonographic apparatus (Sonoline SL Siemens, Germany), after an overnight fast and after a test meal. Measurements were repeated at 10-min intervals over the 40-min duration of the study. Similar volumes of fasted gallbladder were found in the PSS group and in the control group: 25.4 cm3, SEM 3.0 (PSS), vs. 20.9 cm3, SEM 1.6 (control), p > 0.05. There was no correlation between the fasted GV and duration of the disease symptoms. The test meal evoked in both groups a similar trend of the decrease in the GV during 40 min and the increase in the ejection fraction at this time. Although the changes observed were statistically not significant (p > 0.05) there were small differences in the gallbladder ejection fraction in two patients, not exceeding 14% and 18%, and being apparently below any individual value in the control group. Conclusion: The gallbladder motor function is very rarely disturbed in patients with PSS. abstract_id: PUBMED:37214354 Incidence and Predictors of an Abnormal Liver Function Test Among 674 Systemic Sclerosis Patients: A Cohort Study. Background: Abnormal liver function tests (LFTs) can indicate cirrhosis or liver cancer leading to mortality among systemic sclerosis (SSc) patients. No recent studies have investigated the clinical predictors of an abnormal LFT in SSc. We aimed to determine the incidence of abnormal LFT (including from hepatitis and cholestasis) and to identify its clinical predictors in SSc patients. Methods: A historical cohort study was conducted on 674 adult SSc patients who attended the Scleroderma Clinic, Khon Kaen University, between January 2012 and November 2019 and who underwent routine screening for LFT. A Cox regression was used to analyze the clinical predictors of abnormal LFT. Results: Four hundred and thirty cases, representing 4190 person-years, had abnormal LFTs (viz, from hepatitis, cholestasis, and cholestatic hepatitis) for an incidence rate of 10.2 per 100 person-years. The respective incidence of hepatitis, cholestasis, and cholestatic hepatitis was 20.5, 12.9, and 20.4 per 100 person-years. The respective median first-time detection of hepatitis, cholestasis, and cholestatic hepatitis was 3.0, 5.9, and 2.8 years, and none had signs or symptoms suggestive of liver disease. According to the Cox regression analysis, the predictors of an abnormal LFT in SSc were elderly onset of SSc (hazard ratio (HR) 1.02), alcoholic drinking (HR 1.74), high modified Rodnan Skin Score (mRSS) (HR 1.03), edematous skin (HR 2.94), Raynaud's phenomenon (HR 1.39), hyperCKaemia (HR 1.88), and methotrexate use (HR 1.55). In contrast, current sildenafil treatment (HR 0.63) and high serum albumin (HR 0.70) were protective factors. Conclusion: Occult hepatitis, cholestasis, and cholestatic hepatitis can be detected in SSc patients using LFT screening, especially in cases of early disease onset. The long-term outcome is uncertain, and more longitudinal research is required. abstract_id: PUBMED:10606308 Motor activity of the gallbladder in systemic sclerosis. Objective: We sought to measure gallbladder emptying in scleroderma patients, when stimulated by exogenous cholecystokinin. Methods: Twenty-eight consecutive scleroderma patients were evaluated. Ten were excluded for the presence of gallstones.
Gallbladder motor function was studied in 18 patients and 18 controls, using specific parameters for the quantification of gallbladder emptying dynamics. Resting gallbladder volumes were compared using the Dodds method with real-time ultrasound. Cholecystokinin (CCK)-stimulated gallbladder function (0.02 µg/kg CCK intravenous infusion/30 min) was assessed by a scintigraphic technique using 99mTc-DISIDA. Five patients presented with CREST syndrome, 13 with the diffuse form of scleroderma. Four were men, 14 women (average age = 46.6 ± 15.4 yr). Patients and controls were paired by gender, age, and weight. Results: Resting gallbladder volumes were larger in the four men with scleroderma than in the women with this disease (p < 0.03, Mann-Whitney). The mean gallbladder resting volume in scleroderma patients was not different from the mean volume detected among controls (p = 0.25), even when controlling for gender (p = 0.78 for women, p = 0.08 for men), scleroderma disease subtype (p = 0.50), or disease duration (p = 0.48). Latency period, ejection period, ejection rate, or ejection fraction as measured during cholecystokinin-stimulated scintigraphic studies were not significantly different between patients and controls. A trend was detected for reduction of the ejection period in scleroderma women (p = 0.70) when compared with scleroderma men. More than 35% of the scleroderma patients presented biliary lithiasis. Conclusions: There was no significant difference in gallbladder dynamics measured by a scintigraphic technique in scleroderma patients, compared with controls, when gallbladder motor function was evaluated by intravenous CCK. abstract_id: PUBMED:1020742 The pancreatic involvement in disseminated "collagen" disorders. Studies of pancreatic secretion in patients with scleroderma and Sjögren's "disease". In this study of 66 patients with Sjögren's syndrome and/or scleroderma abnormal pancreatic secretory patterns were observed in 43 patients, 17 of 23 with Sjögren's and 19 of 31 with scleroderma. Pancreatic insufficiency was often associated with depressed gastric secretion and/or abnormal gallbladder function. While in both disorders digestive complaints are attributable to one or more dysfunctions, the pathogenetic factors in scleroderma appear to be more varied and require careful investigation for complete elucidation. Antibiotics, antimetabolites and steroid therapies are also additional causal factors involved in the initiation of gastrointestinal dysfunction and pathology. abstract_id: PUBMED:8147358 Effect of erythromycin on gastric and gallbladder emptying and gastrointestinal symptoms in scleroderma patients is maintained medium term. Objectives: Scleroderma patients frequently present esophageal and gastric emptying abnormalities and small bowel dysfunction. Erythromycin, a macrolide antibiotic, has been found to accelerate gastric and gallbladder emptying in both healthy subjects and diabetic patients. Our objective was to investigate the effects of 4-wk oral erythromycin administration on the gastric and gallbladder emptying, gastrointestinal symptoms (early satiety, abdominal pain, nausea, bloating, vomiting, and constipation), and motilin plasma levels of patients with scleroderma. Methods: 12 scleroderma patients were investigated before and after 4-wk treatment with 250 mg oral erythromycin three times a day. The effect of a single i.v. dose of 2 mg/kg/h erythromycin on gastric and gallbladder emptying before starting the oral treatment was also evaluated.
Gastric and gallbladder emptying after a solid meal were evaluated by sonography. Results: Single i.v. administration of erythromycin before the meal reduced gastric emptying T1/2 from 121.3 ± 14.0 to 45.5 ± 7.3 min (p < 0.01) and accelerated gallbladder emptying without affecting the peak. Four-week oral administration of erythromycin reduced gastric emptying T1/2 from 121.3 ± 14.0 min to 46.5 ± 8.3 min (p < 0.01). Peak gallbladder emptying was also significantly accelerated, while total emptying remained unchanged (p < 0.01). Furthermore, 4-wk erythromycin administration reduced both motilin plasma levels (from 223.4 ± 53.8 to 145.4 ± 67.2 pmol/L, p < 0.01) and symptoms of nausea, vomiting, and abdominal pain (p < 0.01), and increased bowel movements in a subset of scleroderma patients with intestinal pseudo-obstruction. Conclusions: Erythromycin stimulates gastrointestinal motility in patients with scleroderma. Administered medium-term, it accelerates gastric and gallbladder emptying and alleviates gastrointestinal symptoms. abstract_id: PUBMED:30554151 Abnormal Nailfold Capillaroscopy Is Common in Patients with Connective Tissue Disease and Associated with Abnormal Pulmonary Function Tests. Objective: To assess the presence of a systemic sclerosis (SSc) pattern on nailfold capillary microscopy (NCM) in patients with Raynaud phenomenon (RP) and to explore its association with abnormal pulmonary function tests (PFT). Methods: NCM patterns were assessed in 759 consecutive patients with RP. Patterns were classified as normal (n = 354), nonspecific (n = 159), or SSc pattern (n = 246). Abnormal PFT was defined as forced vital or diffusion capacity < 70%. Patients were classified as primary RP (n = 245), or secondary: no definite diagnosis (n = 391), SSc (n = 40), primary Sjögren syndrome (pSS; n = 30), systemic lupus erythematosus (SLE; n = 30), mixed connective tissue disease (MCTD; n = 7), rheumatoid arthritis (RA; n = 15). Results: An SSc pattern on NCM was frequently observed in most patients with a definite diagnosis: SSc (88%), pSS (33%), SLE (17%), MCTD (71%), and RA (13%). In patients without definite diagnosis, 17% had a normal NCM pattern, 35% nonspecific, and 48% SSc pattern. Abnormal PFT was more frequent in patients with an SSc pattern (35.9% vs 19.5%, p = 0.002), even when corrected for SSc diagnosis (p = 0.003). Absence of an SSc pattern had high negative predictive value (88%); positive predictive values were low. Conclusion: SSc pattern on NCM is common in patients with RP, and in those with connective tissue diseases other than SSc. It is associated with a higher prevalence of abnormal PFT, independent of the presence of an SSc diagnosis. Although these data need validation in a prospective setting, they underline the importance of NCM in RP and putative value to stratify the risk of pulmonary involvement in early stages of disease. abstract_id: PUBMED:7824860 Effect of erythromycin administration on upper gastrointestinal motility in scleroderma patients. Background: Gastrointestinal involvement is frequent in patients with scleroderma. Erythromycin, a macrolide antibiotic, has been shown to accelerate gastric emptying in normal subjects and diabetic patients. The present study investigated the effects of acute erythromycin administration on gastric and gallbladder motility in patients with scleroderma and gastrointestinal involvement. Methods: Twelve scleroderma patients and 14 healthy subjects were investigated.
Each subject was investigated on 4 different days. Gastric and gallbladder emptying and gastric motility were determined by sonography and manometry, and the effect of 2 mg/kg/h erythromycin in fasted patients or after a semisolid meal was evaluated. Results: The half-time of gastric emptying in response to a semisolid meal was 121.3 ± 14.0 min (SE) in scleroderma patients and 45.7 ± 10.4 min in healthy subjects (P < 0.01). The peak of gallbladder emptying occurred later in scleroderma patients (95.0 ± 5.0 min) than in healthy subjects (45.0 ± 8.0 min) (P < 0.01). Erythromycin stimulated gastric and gallbladder motility in fasted subjects, as shown by manometry and sonography, and accelerated gastric and gallbladder emptying when administered immediately before the meal (P < 0.01). Conclusions: Erythromycin accelerates gastric and gallbladder emptying in scleroderma patients and might be helpful in the treatment of gastrointestinal motor abnormalities in these patients. abstract_id: PUBMED:37216899 The presence of abnormal septal motion on echocardiography is a predictor of abnormal cardiac magnetic resonance in systemic sclerosis. Objectives: We aimed to perform a comprehensive analysis of the ECG, two-dimensional echocardiography (2DE) and cardiac MRI (CMR) findings in patients with systemic sclerosis (SSc), and also to investigate correlations between CMR findings and some ECG and echocardiography (ECHO) results. Methods: We retrospectively analysed data from patients with SSc who were regularly seen at our outpatient referral centre, all assessed with ECG, Doppler ECHO and CMR. Results: Ninety-three patients were included; mean (s.d.) age of 48.5 (10.3) years, 86% female, 52% diffuse SSc. Eighty-four (90%) of the patients had sinus rhythm. The most common ECG finding was the left anterior fascicular block, recorded in 26 patients (28%). Abnormal septal motion (ASM) was found in 43 (46%) patients on ECHO. Myocardial involvement (inflammation or fibrosis), as assessed by multiparametric CMR, was present in >50% of our patients. The age- and sex-adjusted model showed that ASM on ECHO significantly increased the odds of increased extracellular volume [odds ratio (OR) 4.43, 95% CI 1.73, 11.38], increased T1 relaxation time (OR 2.67, 95% CI 1.09, 6.54), increased T2 relaxation time (OR 2.56, 95% CI 1.05, 6.22), increased signal intensity ratio in T2-weighted imaging (OR 2.56, 95% CI 1.05, 6.22), presence of late gadolinium enhancement (OR 3.85, 95% CI 1.52, 9.76) and mid-wall fibrosis (OR 3.64, 95% CI 1.48, 8.96). Conclusion: This study indicates that the presence of ASM on ECHO is a predictor of abnormal CMR in SSc patients, and a precise assessment of ASM may serve as an important point for selecting the patients that should be evaluated by CMR for early detection of myocardial involvement. Answer: Based on the provided abstracts, patients with systemic sclerosis (SSc) do not appear to have significantly abnormal gallbladder function. Several studies have investigated gallbladder motility in SSc patients using different methods of stimulation and measurement, including ultrasonography after a standard liquid meal (PUBMED:8853227), ultrasonography with cephalic vagal cholinergic stimulation and hormonal stimulation by infusion of cholecystokinin (PUBMED:9262975), and scintigraphic techniques using 99mTc-DISIDA after cholecystokinin stimulation (PUBMED:10606308).
These studies have generally found that gallbladder motility in SSc patients is preserved and not significantly different from healthy controls. For instance, one study found that fasting gallbladder volumes and the reduction of gallbladder volume in response to modified sham feeding or cholecystokinin infusion were not significantly different between SSc patients and controls (PUBMED:9262975). Another study reported that gallbladder motility was not reduced in SSc patients compared to healthy controls, and that SSc-induced oesophageal dysfunction was not associated with impaired gallbladder motility (PUBMED:8853227). Similarly, another study concluded that the gallbladder motor function is very rarely disturbed in patients with progressive systemic sclerosis (PUBMED:8351230). Furthermore, a study that evaluated the effect of erythromycin on gastric and gallbladder emptying in scleroderma patients found that erythromycin stimulates gastrointestinal motility and accelerates gastric and gallbladder emptying, suggesting that any potential motility issues can be addressed pharmacologically (PUBMED:8147358, PUBMED:7824860). However, it is important to note that while gallbladder motility may not be significantly affected, systemic sclerosis patients can still experience other gastrointestinal issues, and abnormal liver function tests (LFTs) can indicate cirrhosis or liver cancer leading to mortality among SSc patients (PUBMED:37214354). Therefore, while gallbladder dysmotility may not be a common issue, SSc patients can still have other significant gastrointestinal and hepatic complications.
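For reference, the emptying figures in these studies come from serial volume estimates. The Dodds method cited in PUBMED:8351230 reconstructs volume from sonographic sections; a common shorthand stand-in is the prolate-ellipsoid formula V = π/6 × L × W × H. The sketch below uses that approximation with hypothetical dimensions (none of these numbers are taken from the papers) to show how an ejection fraction like the ~61% mean decrease in PUBMED:8853227 is computed.

```python
import math

def ellipsoid_volume_cm3(length_cm, width_cm, height_cm):
    # Prolate-ellipsoid approximation of gallbladder volume.
    return math.pi / 6 * length_cm * width_cm * height_cm

def ejection_fraction_pct(v_fasting, v_residual):
    # Percentage of the fasting volume expelled after stimulation.
    return (v_fasting - v_residual) / v_fasting * 100

v_fast = ellipsoid_volume_cm3(8.0, 3.0, 2.8)  # fasting, ~35 cm3 (hypothetical)
v_post = ellipsoid_volume_cm3(6.0, 2.2, 2.0)  # residual after a test meal
print(f"EF = {ejection_fraction_pct(v_fast, v_post):.0f}%")  # ~61%
```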
Instruction: Does tumoral (111)In-ibritumomab accumulation correlate with therapeutic effect and outcome in relapsed or refractory low-grade B-cell lymphoma patients undergoing (90)Y-ibritumomab radioimmunotherapy? Abstracts: abstract_id: PUBMED:25117746 Does tumoral (111)In-ibritumomab accumulation correlate with therapeutic effect and outcome in relapsed or refractory low-grade B-cell lymphoma patients undergoing (90)Y-ibritumomab radioimmunotherapy? Objectives: The aim of this study was to determine whether tumoral (111)In-ibritumomab accumulation on pre-treatment imaging correlates with therapeutic responses and progression-free survival (PFS) in patients with non-Hodgkin's lymphoma (NHL) undergoing (90)Y-ibritumomab radioimmunotherapy (RIT). Methods: This was a retrospective study of 39 patients with low-grade B-cell NHL treated with RIT. We classified the patients into positive and negative groups according to the presence or absence of tumoral (111)In-ibritumomab accumulation on pre-treatment (111)In-ibritumomab examinations. We then determined the correlation between the (111)In-ibritumomab imaging findings and the patients' therapeutic responses and PFS. Results: Tumoral (111)In-ibritumomab accumulation was positive in 64.1% and negative in 35.9% of the patients. The (111)In-positive patients had a significantly higher overall response rate (ORR) compared to the (111)In-negative patients (100.0% vs. 78.6%, p = 0.02). The (111)In-negative patients with advanced disease (stages III/IV) had a significantly lower ORR (40%) and a significantly higher rate of progressive disease (40.0%) compared to those of the (111)In-negative patients with limited disease (stages I/II) (100% and 0%, p = 0.009 each). However, these two groups had similar 2-year PFS rates (65.0% vs. 50.0%, p = 0.80). Conclusions: (111)In-ibritumomab imaging findings seem to correlate with ORR and the progressive disease rate after RIT, but not with PFS. Key Points: All 39 NHL patients had tumoral accumulation on pretreatment FDG-PET/CT. 64.1% of NHL patients had tumoral accumulation on a pretreatment (111)In-ibritumomab examination. (90)Y-ibritumomab RIT resulted in high overall response and complete remission rates. (111)In-ibritumomab avidity of lymphoma lesions could predict a strong therapeutic effect. (111)In-ibritumomab imaging findings did not correlate with progression-free survival. abstract_id: PUBMED:12901152 Radioimmunotherapy of relapsed or refractory low-grade, follicular, or transformed B-cell non-Hodgkin's lymphoma. Radioimmunotherapy consists of radiolabeled monoclonal antibodies for the treatment of malignancy. For more than a decade, radioimmunotherapy has shown great promise for the treatment of B-cell lymphoma. During the past decade, two products targeted to the CD20 antigen on B cells, iodine-131 tositumomab and yttrium-90 (90Y) ibritumomab tiuxetan, have been tested extensively in registration trials for potential licensing approval by the US Food and Drug Administration (FDA). Both products have produced response rates of 70% to 80% in low-grade and follicular lymphoma, and response rates of 50% to 60% in low-grade or follicular lymphoma that has transformed into an intermediate or high-grade lymphoma. Median duration of response to a single course of treatment has been about 1 year, with complete remission rates in one quarter to one third of patients.
In February 2002, 90Y ibritumomab tiuxetan was formally approved by the FDA for the treatment of relapsed or refractory low-grade, follicular, or transformed B-cell lymphoma, including rituximab-refractory transformed lymphoma, thus becoming the first radioimmunotherapeutic agent approved by the agency. The product became available for commercial use in April 2002. Clinical trials will determine how radioimmunotherapy will be integrated into the treatment of lymphoma. abstract_id: PUBMED:12011122 Randomized controlled trial of yttrium-90-labeled ibritumomab tiuxetan radioimmunotherapy versus rituximab immunotherapy for patients with relapsed or refractory low-grade, follicular, or transformed B-cell non-Hodgkin's lymphoma. Purpose: Radioimmunotherapy combines biologic and radiolytic mechanisms to target and destroy tumor cells, thus offering a needed therapeutic alternative for refractory non-Hodgkin's lymphoma (NHL) patients. This phase III randomized study compares the novel radioimmunotherapy yttrium-90 ((90)Y) ibritumomab tiuxetan with a control immunotherapy, rituximab, in 143 patients with relapsed or refractory low-grade, follicular, or transformed CD20(+) NHL. Patients And Methods: Patients received either a single intravenous (IV) dose of (90)Y ibritumomab tiuxetan 0.4 mCi/kg (n = 73) or rituximab 375 mg/m(2) IV weekly for four doses (n = 70). The radioimmunotherapy group was pretreated with two rituximab doses (250 mg/m(2)) to improve biodistribution and one dose of indium-111 ibritumomab tiuxetan for imaging and dosimetry. The primary end point, overall response rate (ORR), was assessed by an independent, blinded, lymphoma expert panel. Results: ORR was 80% for the (90)Y ibritumomab tiuxetan group versus 56% for the rituximab group (P = .002). Complete response (CR) rates were 30% and 16% in the (90)Y ibritumomab tiuxetan and rituximab groups, respectively (P = .04). An additional 4% achieved an unconfirmed CR in each group. Kaplan-Meier estimated median duration of response was 14.2 months in the (90)Y ibritumomab tiuxetan group versus 12.1 months in the control group (P = .6), and time to progression was 11.2 versus 10.1 months (P = .173) in all patients. Durable responses of ≥6 months were 64% versus 47% (P = .030). Reversible myelosuppression was the primary toxicity noted with (90)Y ibritumomab tiuxetan. Conclusion: Radioimmunotherapy with (90)Y ibritumomab tiuxetan is well tolerated and produces statistically and clinically significant higher ORR and CR compared with rituximab alone. abstract_id: PUBMED:14762740 Logistics of radioimmunotherapy with yttrium 90 ibritumomab tiuxetan (Zevalin). Radioimmunotherapy is a promising new therapeutic option for the treatment of B-cell non-Hodgkin's lymphoma. Several monoclonal antibody and radionuclide conjugates, including I-131 tositumomab and Y-90 ibritumomab, have been investigated in clinical trials. Yttrium 90 ibritumomab tiuxetan (Zevalin; Biogen Idec Inc, Cambridge, MA) is indicated for the treatment of patients with relapsed or refractory low-grade, follicular, or transformed B-cell non-Hodgkin's lymphoma, including patients with rituximab-refractory follicular non-Hodgkin's lymphoma. The ibritumomab tiuxetan regimen requires coordination of a multidisciplinary team on which the nuclear medicine physician (or radiation oncologist) plays a major role.
The nuclear medicine physician (or radiation oncologist) is responsible for overseeing the administration of the radioimmunoconjugates and for interpreting the imaging scans to determine the expected or altered biodistribution of the radioimmunoconjugate. abstract_id: PUBMED:12663713 Safety of yttrium-90 ibritumomab tiuxetan radioimmunotherapy for relapsed low-grade, follicular, or transformed non-Hodgkin's lymphoma. Purpose: Radioimmunotherapy (RIT) with yttrium-90 ((90)Y)-labeled anti-CD20 antibody ((90)Y ibritumomab tiuxetan; Zevalin, IDEC Pharmaceuticals Corporation, San Diego, CA) has a high rate of tumor response in patients with relapsed or refractory, low-grade, follicular, or transformed B-cell non-Hodgkin's lymphoma (NHL). This study presents the safety data from 349 patients in five studies of outpatient treatment with (90)Y ibritumomab tiuxetan. Patients And Methods: Patients received rituximab 250 mg/m(2) on days 1 and 8, and either 0.4 mCi/kg (15 MBq/kg) or 0.3 mCi/kg (11 MBq/kg) of (90)Y ibritumomab tiuxetan on day 8 (maximum dose, 32 mCi). Patients were observed for up to 4 years after therapy or until progressive disease. Results: Infusion-related toxicities were typically grade 1 or 2 and were associated with rituximab. No significant organ toxicity was noted. Toxicity was primarily hematologic, with nadir counts occurring at 7 to 9 weeks and lasting approximately 1 to 4 weeks depending on the method of calculation. After the 0.4-mCi/kg dose, grade 4 neutropenia, thrombocytopenia, and anemia occurred in 30%, 10%, and 3% of patients, respectively, and after the 0.3-mCi/kg dose, these grade 4 toxicities occurred in 35%, 14%, and 8% of patients, respectively. The risk of hematologic toxicity increased with degree of baseline bone marrow involvement with NHL. Seven percent of patients were hospitalized with infection (3% with neutropenia) and 2% had grade 3 or 4 bleeding events. Myelodysplasia or acute myelogenous leukemia was reported in five patients (1%) 8 to 34 months after treatment. Conclusion: Single-dose (90)Y ibritumomab tiuxetan RIT has an acceptable safety profile in relapsed NHL patients with less than 25% lymphoma marrow involvement, adequate marrow reserve, platelets greater than 100,000 cells/µL, and neutrophils greater than 1,500 cells/µL. abstract_id: PUBMED:18040144 Radioimmunotherapy for non-Hodgkin lymphoma: historical perspective and current status. Radioimmunotherapy (RIT) treatment for lymphoma is a novel targeted therapeutic approach. Several years of development of radioimmunotherapeutic compounds came to fruition in February of 2002 when (90)Y-ibritumomab tiuxetan (Zevalin, Y2B8) was approved in the USA and later in Europe, for the treatment of relapsed or refractory, low grade or transformed B-cell lymphoma. (90)Y-ibritumomab tiuxetan utilizes a monoclonal anti-CD20 antibody to deliver beta-emitting yttrium-90 to the malignant B-cells. Clinical trials have demonstrated its efficacy, which is largely independent of the intrinsic activity of the anti-CD20 antibody. A similar anti-CD20 radiotherapeutic compound, (131)I-tositumomab, was subsequently approved in the USA. The advantages of increased efficacy compared to the naked antibody are gained at the expense of myelotoxicity which is dose limiting but reversible. Studies exploring expanded applications of radioimmunotherapy have been recently completed or are under way.
It is hoped that RIT will be an ideal agent for consolidation after chemotherapy for both indolent and aggressive non-Hodgkin lymphoma as well as a useful addition to preparatory high-dose regimens prior to transplant. RIT has been shown to be an effective and clinically relevant complementary therapeutic approach for patients with lymphoma. abstract_id: PUBMED:18814766 Clinical indications of radioimmunotherapy in lymphoma. In their later stages, low-grade lymphomas are incurable. Because these lymphomas are radiosensitive, a form of metabolic radioimmunotherapy using monoclonal anti-CD20 antibodies as the carrier has been developed. The aim of this treatment is to use these antibodies to target radiation to tumor tissues while limiting toxicity to normal cells. Ibritumomab tiuxetan (Zevalin) is currently prescribed for patients with relapsed or refractory low-grade follicular lymphoma after rituximab treatment. This outpatient treatment produces a high overall response rate, including durable complete responses. The side effects are essentially hematological and reversible. In the near future, further indications should emerge, and Zevalin should become an important tool in both low- and high-grade B-cell lymphoma. abstract_id: PUBMED:12939722 Radioimmunotherapy for non-Hodgkin's lymphoma. Radioimmunotherapy (RIT) treatment for lymphoma is a novel targeted therapeutic approach. Several years of development of radioimmunotherapeutic compounds came to fruition in February of 2002 when the US Food and Drug Administration (FDA) approved yttrium 90 ((90)Y)-ibritumomab tiuxetan ((90)Y-IT) for the treatment of relapsed or refractory, low-grade, or transformed B-cell lymphoma. (90)Y-IT uses a monoclonal anti-CD20 antibody to deliver beta-emitting (90)Y to the malignant B cells. Clinical trials have demonstrated its efficacy, which is largely independent of the intrinsic activity of the anti-CD20 antibody. A similar anti-CD20 radiotherapeutic compound, iodine 131 ((131)I)-tositumomab ((131)I-T), is also under consideration for approval. The advantages of increased efficacy compared to the native antibody are gained at the expense of myelotoxicity, which is dose-limiting but reversible. Other radioimmunoconjugates (RIC), including products for Hodgkin's lymphoma, are in earlier stages of development. Studies exploring expanded applications of RIT are under way. RIT has been shown to be an effective and clinically relevant complementary therapeutic approach for patients with lymphoma. abstract_id: PUBMED:16540013 Radioimmunotherapy for non-Hodgkin lymphoma: historical development and current status. Radioimmunotherapy treatment for lymphoma is a novel targeted therapeutic approach. Several years of development of radioimmunotherapeutic compounds came to fruition in February of 2002 when 90Y-ibritumomab tiuxetan (Zevalin, Y2B8) was approved in the USA, and later in Europe, for the treatment of relapsed or refractory, low-grade or transformed B-cell lymphoma. 90Y-ibritumomab tiuxetan utilizes a monoclonal anti-CD20 antibody to deliver beta-emitting yttrium-90 to the malignant B-cells. Clinical trials have demonstrated its efficacy, with observed clinical responses in the 80% range. This product has become available in Europe, with simplified administration, for the treatment of relapsed follicular lymphoma. A similar anti-CD20 radiotherapeutic compound, 131I-tositumomab, was subsequently approved in the USA.
Promising studies exploring expanded applications of radioimmunotherapy as consolidation, as part of transplant regimens, or in other histologic types have been recently completed or are under way. Radioimmunotherapy has been shown to be an effective and clinically relevant complementary therapeutic approach for patients with lymphoma, bringing nuclear medicine into lymphoma therapeutics. abstract_id: PUBMED:16997175 Radioimmunotherapy for B-cell non-Hodgkin lymphoma. Radioimmunotherapy (RIT) combines the targeting advantage of a monoclonal antibody with the radiosensitivity of non-Hodgkin lymphoma (NHL) cells. There are now two radioimmunoconjugates (RICs) - ibritumomab tiuxetan (Zevalin) and tositumomab (Bexxar) - that are approved by the FDA in the US for relapsed low-grade or follicular B-cell NHL. Both agents target the CD20 antigen on B-cell lymphoma cells. In relapsed disease, single doses of RIT produce an 80% overall response rate, with approximately 20% of patients achieving durable responses. RIT is very well tolerated and is delivered on an outpatient basis over 1 week. The only significant toxicity is reversible myelosuppression. Both RIT agents have demonstrated high anti-tumor activity in patients who are refractory to rituximab. Current trials are testing RIT as initial therapy with rituximab maintenance, as adjuvant therapy after chemotherapy, or in high-dose protocols with stem-cell support. Answer: Yes, tumoral ^111In-ibritumomab accumulation does correlate with the therapeutic effect in relapsed or refractory low-grade B-cell lymphoma patients undergoing ^90Y-ibritumomab radioimmunotherapy (RIT). A study found that patients with positive tumoral ^111In-ibritumomab accumulation on pre-treatment imaging had a significantly higher overall response rate (ORR) compared to those with negative accumulation (100.0% vs. 78.6%, p = 0.02). However, the study also noted that the ^111In-ibritumomab imaging findings did not correlate with progression-free survival (PFS) (PUBMED:25117746). Radioimmunotherapy has been shown to be effective in treating B-cell lymphoma, with response rates of 70% to 80% in low-grade and follicular lymphoma, and 50% to 60% in transformed lymphoma. The median duration of response to a single course of treatment has been about one year, with complete remission rates in one quarter to one third of patients (PUBMED:12901152). In a phase III randomized study, ^90Y ibritumomab tiuxetan demonstrated higher ORR and complete response (CR) rates that were both statistically and clinically significant compared with rituximab alone, which supports the therapeutic effect of RIT in this patient population (PUBMED:12011122). Overall, the evidence suggests that tumoral ^111In-ibritumomab accumulation can be a predictor of a strong therapeutic effect in patients undergoing ^90Y-ibritumomab RIT, although it may not be a predictor of long-term outcomes such as PFS.
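As a quick consistency check on the dosing quoted in the safety study above (PUBMED:12663713), the curie and becquerel figures can be reconciled with one line of arithmetic. This is a back-of-envelope sketch assuming only the standard definition 1 mCi = 37 MBq; the 80 kg figure is derived here and is not stated in the abstract:

\[
0.4\ \tfrac{\text{mCi}}{\text{kg}} \times 37\ \tfrac{\text{MBq}}{\text{mCi}} = 14.8 \approx 15\ \tfrac{\text{MBq}}{\text{kg}},
\qquad
0.3\ \tfrac{\text{mCi}}{\text{kg}} \times 37\ \tfrac{\text{MBq}}{\text{mCi}} = 11.1 \approx 11\ \tfrac{\text{MBq}}{\text{kg}},
\qquad
\frac{32\ \text{mCi}}{0.4\ \text{mCi/kg}} = 80\ \text{kg}.
\]

The quoted MBq/kg values are thus rounded conversions of the mCi/kg doses, and the 32 mCi ceiling implies that patients heavier than about 80 kg would receive a capped rather than strictly weight-based activity.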
Instruction: Is chemotherapy necessary for premenopausal women with lower-risk node-positive, endocrine responsive breast cancer? Abstracts: abstract_id: PUBMED:18259856 Is chemotherapy necessary for premenopausal women with lower-risk node-positive, endocrine responsive breast cancer? 10-year update of International Breast Cancer Study Group Trial 11-93. Introduction: International Breast Cancer Study Group (IBCSG) Trial 11-93 is the largest trial evaluating the role of the addition of chemotherapy to ovarian function suppression/ablation (OFS) and tamoxifen in premenopausal patients with endocrine-responsive early breast cancer. Methods: IBCSG Trial 11-93 is a randomized trial comparing four cycles of adjuvant chemotherapy (AC: doxorubicin or epirubicin, plus cyclophosphamide) added to OFS and 5 years of tamoxifen versus OFS and tamoxifen without chemotherapy in premenopausal patients with node-positive, endocrine-responsive early breast cancer. There were 174 patients randomized from May 1993 to November 1998. The trial was closed before the target accrual was reached due to a low accrual rate. Results: Patients randomized tended to have lower-risk node-positive disease, and the median age was 45. After 10 years' median follow-up, there remains no difference between the two randomized treatment groups for disease-free (hazard ratio=1.02 (0.57-1.83); P=0.94) or overall survival (hazard ratio=0.97 (0.44-2.16); P=0.94). Conclusion: This trial, although small, offers no evidence that AC chemotherapy provides additional disease control for premenopausal patients with lower-risk node-positive endocrine-responsive breast cancer who receive adequate adjuvant endocrine therapy. A large trial is needed to determine whether chemotherapy adds benefit to endocrine therapy for this population. abstract_id: PUBMED:34660302 Axillary Lymph Node Dissection Rates and Prognosis From Phase III Neoadjuvant Systemic Trial Comparing Neoadjuvant Chemotherapy With Neoadjuvant Endocrine Therapy in Pre-Menopausal Patients With Estrogen Receptor-Positive and HER2-Negative, Lymph Node-Positive Breast Cancer. In this study, we aimed to evaluate axillary lymph node dissection (ALND) rates and prognosis with neoadjuvant chemotherapy (NCT) compared with neoadjuvant endocrine therapy (NET) in estrogen receptor-positive (ER+)/human epidermal growth factor receptor 2-negative (HER2-), lymph node (LN)-positive, premenopausal breast cancer patients (NCT01622361). The multicenter, phase 3, randomized clinical trial enrolled 187 women from July 5, 2012, to May 30, 2017. The patients were randomly assigned (1:1) to either 24 weeks of NCT (adriamycin plus cyclophosphamide followed by intravenous docetaxel) or NET (goserelin acetate and daily tamoxifen). ALND was performed based on the surgeon's decision. The primary endpoint was ALND rate and surgical outcome after preoperative treatment. The secondary endpoint was long-term survival. Among the 187 randomized patients, pre- and post-neoadjuvant systemic therapy (NST) assessments were available for 170 patients. After NST, 49.4% of NCT patients and 55.4% of NET patients underwent mastectomy after treatment completion. The rate of ALND was significantly lower in the NCT group than in the NET group (55.2% vs. 69.9%, P=.046). Following surgery, the NET group showed a significantly higher mean number of removed LNs (14.96 vs. 11.74, P=.003) and positive LNs (4.84 vs. 2.92, P=.000) than the NCT group.
The axillary pathologic complete response (pCR) rate was significantly higher in the NCT group than in the NET group (13.8% vs. 4.8%, P=.045). During a median follow-up of 67.3 months, 19 patients in the NCT group and 12 patients in the NET group reported recurrence. The 5-year ARFS (97.5% vs. 100%, P=.077), DFS (77.2% vs. 84.8%, P=.166), and OS (97.5% vs. 94.7%, P=.304) rates did not differ significantly between the groups. In conclusion, although survival did not differ significantly, more NCT patients might be able to avoid ALND, with fewer LNs removed and lower LN positivity. Clinical Trial Registration: https://clinicaltrials.gov/ct2/show/NCT01622361, identifier NCT01622361. abstract_id: PUBMED:18325918 Premenopausal endocrine-responsive early breast cancer: who receives chemotherapy? Background: The role of chemotherapy in addition to combined endocrine therapy for premenopausal women with endocrine-responsive early breast cancer remains an open question, yet trials designed to answer it have repeatedly failed to adequately accrue. The International Breast Cancer Study Group initiated two concurrent trials in this population: in the Premenopausal Endocrine Responsive Chemotherapy (PERCHE) trial, chemotherapy use is determined by randomization, and in the Tamoxifen and Exemestane Trial (TEXT), by physician choice. Methods: From 2003 to 2006, 1317 patients (890 with baseline data) were randomly assigned to receive ovarian function suppression (OFS) plus tamoxifen or OFS plus exemestane for 5 years in TEXT. We explore patient-related factors according to whether or not chemotherapy was given, using descriptive statistics and classification and regression trees. Results: Adjuvant chemotherapy was chosen for 64% of patients. Lymph node status was the predominant determinant of chemotherapy use (88% of node-positive patients treated versus 46% of node-negative). Geography, patient age, tumor size and grade were also determinants, but degree of receptor positivity and human epidermal growth factor receptor 2 status were not. Conclusions: A perceived increase in the risk of relapse is the primary determinant for using chemotherapy, despite uncertainties regarding the degree of benefit it offers when added to combined endocrine therapy in this population. abstract_id: PUBMED:34277393 Patient-Reported Outcomes From Phase III Neoadjuvant Systemic Trial Comparing Neoadjuvant Chemotherapy With Neoadjuvant Endocrine Therapy in Pre-Menopausal Patients With Estrogen Receptor-Positive and HER2-Negative, Lymph Node-Positive Breast Cancer. We aimed to evaluate the patient-reported outcomes (PROs) in a prospective phase III clinical trial comparing neoadjuvant endocrine therapy (NET) with conventional neoadjuvant chemotherapy (NCT) in patients with hormone receptor-positive, lymph node-positive premenopausal breast cancer (NCT01622361). The patients were randomized prospectively to either 24 weeks of NCT with adriamycin plus cyclophosphamide followed by taxane, or NET with a gonadotropin-releasing hormone agonist and tamoxifen. The patients were examined at the surgery unit of a large tertiary care hospital with a comprehensive cancer center. PROs were assessed on the first day of the trial (day 1, baseline) and at the end of treatment, using the breast cancer module of the European Organisation for Research and Treatment of Cancer Quality of Life Questionnaire (EORTC QLQ-BR23).
One hundred and eighty-seven patients were randomly assigned to chemotherapy (n=95) or endocrine therapy (n=92), and 174 patients completed the 24-week neoadjuvant treatment period (n=87 in each group). Baseline scores were similar between the groups. After treatment, there were no statistically significant differences between the groups in the function scales, including body image, sexual functioning, and sexual enjoyment, although the endocrine treatment group showed a significant improvement in future perspective (hazard ratio, 8.3; 95% confidence interval, 1.72-18.38; P = 0.021). Similarly, there were no statistically significant differences in the symptom scales between the groups, including adverse effects of systemic therapy, breast symptoms, arm symptoms, and upset about hair loss. In conclusion, overall PROs were similar in both treatment groups, except for "future perspective," which was significantly better in the NET group than in the NCT group. Clinical Trial Registration: ClinicalTrials.gov, identifier NCT01622361. abstract_id: PUBMED:24095609 Adjuvant treatment of premenopausal women with endocrine-responsive early breast cancer: design of the TEXT and SOFT trials. Objectives: In 2003 the International Breast Cancer Study Group (IBCSG) initiated the TEXT and SOFT randomized phase III trials to answer two questions concerning adjuvant treatment for premenopausal women with endocrine-responsive early breast cancer: (1) What is the role of aromatase inhibitors (AI) for women treated with ovarian function suppression (OFS)? (2) What is the role of OFS for women who remain premenopausal and are treated with tamoxifen? Methods: TEXT randomized patients to receive exemestane or tamoxifen with OFS. SOFT randomized patients to receive exemestane with OFS, tamoxifen with OFS, or tamoxifen alone. Treatment was for 5 years from randomization. Results: TEXT and SOFT successfully met their enrollment goals in 2011. The 5738 enrolled women had lower-risk disease and lower observed disease-free survival (DFS) event rates than anticipated. Consequently, 7 and 13 additional years of follow-up for TEXT and SOFT, respectively, were required to reach the targeted DFS events (median follow-up about 10.5 and 15 years). To provide timely answers, protocol amendments in 2011 specified analyses based on chronological time and median follow-up. To assess the AI question, exemestane + OFS versus tamoxifen + OFS, a combined analysis of TEXT and SOFT became the primary analysis (n = 4717). The OFS question became the primary analysis from SOFT, assessing the unique comparison of tamoxifen + OFS versus tamoxifen alone (n = 2045). The first reports are anticipated in mid- and late-2014. Conclusions: We present the original designs of TEXT and SOFT and adaptations to ensure timely answers to two questions concerning optimal adjuvant endocrine treatment for premenopausal women with endocrine-responsive breast cancer. Trial Registration: TEXT, ClinicalTrials.gov NCT00066703; SOFT, ClinicalTrials.gov NCT00066690. abstract_id: PUBMED:22749924 A risk score based on histopathological features predicts higher risk of distant recurrence in premenopausal patients with lymph node-negative endocrine-responsive breast cancer. Purpose: To develop a Risk Score (RS) to predict distant recurrence among premenopausal women with node-negative endocrine-responsive early breast cancer.
Methods: The Cox model was used to develop the RS using clinical and histopathological features from 378 women participating in IBCSG Trial VIII who received endocrine therapy alone or following chemotherapy. The performance of the resulting model was validated on a cohort of 1005 patients from a single institution who received endocrine therapy alone. Results: In a multivariable analysis, the risk of distant recurrence was associated with tumor size, ER, Ki-67 and peritumoral vascular invasion. In the validation cohort, patients with a high RS were at greater risk of distant recurrence compared to patients with a low RS (HR, 17.41; 95% CI, 5.72-52.95). Conclusion: In premenopausal women with node-negative endocrine-responsive early breast cancer, the RS identifies patients at higher risk of distant recurrence. abstract_id: PUBMED:26255743 Adjuvant endocrine therapy for premenopausal women with hormone-responsive breast cancer. Multiple strategies for endocrine treatment of premenopausal women with hormone-responsive breast cancer have been assessed, and results have been presented over the last two years. These include tamoxifen for 5-10 years (ATLAS and aTTom); tamoxifen for 5 years followed by an aromatase inhibitor (AI) for 5 years for women who have become postmenopausal (MA-17); ovarian ablation (OA) by surgery (EBCTCG overview); ovarian function suppression (OFS) by LHRH agonist (LHRH agonist meta-analysis); or combinations of approaches including OFS plus tamoxifen or an AI (SOFT, TEXT, ABCSG 12 and E3193). Many of these trials have taken place against the backdrop of (neo)adjuvant chemotherapy, which can confound interpretation because such therapy can suppress ovarian function either transiently or permanently. Nonetheless, these trials suggest in aggregate that 10 years of tamoxifen are better than 5 years and that a program of extended adjuvant therapy of tamoxifen for 5 years followed by an aromatase inhibitor for 5 years is effective for suitable candidates. The SOFT and E3193 trials do not show a major advantage for use of OFS + tamoxifen compared to tamoxifen alone. The joint SOFT/TEXT analysis and the ABCSG 12 trial both suggest that outcomes can be excellent with the use of combined endocrine therapy alone in properly selected patients but give conflicting results with regard to potential benefits for OFS + AI compared with OFS + tamoxifen. Further work will be needed to ascertain long-term outcomes, identify factors that predict who will benefit from extended adjuvant endocrine therapy, and assess the role of OFS by medical or surgical means. It is clear, however, that endocrine therapy is a critical part of the adjuvant regimen for most premenopausal women with hormone-responsive breast cancer, and a subset of these women with luminal A-type tumors can be safely treated with endocrine therapy alone. abstract_id: PUBMED:26688676 Chemotherapy in Premenopausal Breast Cancer Patients. Evidence has long demonstrated that premenopausal women obtain the greatest benefit from adjuvant chemotherapy overall, with risk reduction increasing with decreasing age. The chemoendocrine effect of chemotherapy has only more recently been documented as affecting outcomes for women with hormone receptor-positive breast cancer, and recent data have elucidated the optimal strategies for manipulating the menopausal status to improve disease outcomes, without necessarily including cytotoxic chemotherapy.
Still, many premenopausal women will require adjuvant cytotoxic chemotherapy, and the effects of treatment on women diagnosed with breast cancer in the premenopausal setting can have important implications both for their breast cancer outcomes and for comorbidities and psychosocial outcomes. This article describes the most recent information and issues surrounding the indications, effects, and special considerations for adjuvant chemotherapy in premenopausal women with breast cancer, in an effort to inform their care. abstract_id: PUBMED:18953651 Is adjuvant chemotherapy of benefit for postmenopausal women who receive endocrine treatment for highly endocrine-responsive, node-positive breast cancer? International Breast Cancer Study Group Trials VII and 12-93. To compare the efficacy of chemoendocrine treatment with that of endocrine treatment (ET) alone for postmenopausal women with highly endocrine-responsive breast cancer. In the International Breast Cancer Study Group (IBCSG) Trials VII and 12-93, postmenopausal women with node-positive, estrogen receptor (ER)-positive or ER-negative, operable breast cancer were randomized to receive either chemotherapy or endocrine therapy or combined chemoendocrine treatment. Results were analyzed overall in the cohort of 893 patients with endocrine-responsive disease, and according to prospectively defined categories of ER, age and nodal status. STEPP analyses assessed the chemotherapy effect. The median follow-up was 13 years. Adding chemotherapy reduced the relative risk of a disease-free survival event by 19% (P = 0.02) compared with ET alone. STEPP analyses showed little effect of chemotherapy for tumors with high levels of ER expression (P = 0.07), or for the cohort with one positive node (P = 0.03). Chemotherapy significantly improves disease-free survival for postmenopausal women with endocrine-responsive breast cancer, but the magnitude of the effect is substantially attenuated if ER levels are high. abstract_id: PUBMED:29886394 Adjuvant endocrine therapy in premenopausal patients with hormone receptor-positive early breast cancer: Evidence evaluation and GRADE recommendations by the Italian Association of Medical Oncology (AIOM). Premenopausal women with hormone receptor-positive early breast cancer are candidates for adjuvant endocrine therapy, as recommended by the major international guidelines. To date, adjuvant endocrine options for premenopausal women include tamoxifen with or without ovarian function suppression (OFS) or an aromatase inhibitor with OFS. Multiple strategies for endocrine treatment of premenopausal women with hormone-responsive breast cancer have been assessed, and the results of randomised clinical trials have been reported over recent years. Despite this evidence, the optimal algorithm for endocrine therapy for premenopausal women with hormone receptor-positive early stage invasive breast cancer leaves open questions regarding the role of OFS in addition to tamoxifen and the optimal use of hormonal agents. The panel of the Italian Association of Medical Oncology (AIOM) Clinical Practice Guidelines on Breast Cancer applied the Grades of Recommendation, Assessment, Development and Evaluation (GRADE) methodology to three critical questions on the choice of adjuvant hormonal therapy in premenopausal breast cancer patients, to summarise available evidence and to create recommendations to help physicians in their clinical practice.
Answer: The necessity of chemotherapy for premenopausal women with lower-risk node-positive, endocrine-responsive breast cancer is a subject of ongoing research and debate. According to the International Breast Cancer Study Group (IBCSG) Trial 11-93, there is no evidence that adding adjuvant chemotherapy (AC) to ovarian function suppression/ablation (OFS) and tamoxifen provides additional disease control for premenopausal patients with lower-risk node-positive endocrine-responsive breast cancer who receive adequate adjuvant endocrine therapy. After a 10-year median follow-up, there was no difference in disease-free or overall survival between the groups that received chemotherapy and those that did not (PUBMED:18259856). However, the role of chemotherapy in addition to combined endocrine therapy for this population remains an open question, as trials designed to answer it have repeatedly failed to adequately accrue (PUBMED:18325918). A perceived increase in the risk of relapse is the primary determinant for using chemotherapy, despite uncertainties regarding the degree of benefit it offers when added to combined endocrine therapy in this population. In a phase III clinical trial comparing neoadjuvant chemotherapy (NCT) with neoadjuvant endocrine therapy (NET) in premenopausal patients with estrogen receptor-positive and HER2-negative, lymph node-positive breast cancer, although survival did not differ significantly between the groups, more NCT patients were able to avoid axillary lymph node dissection (ALND), with fewer lymph nodes removed and lower lymph node positivity. This suggests that NCT might be beneficial for reducing the extent of surgery in the axilla (PUBMED:34660302). Patient-reported outcomes from the same trial were similar in both treatment groups, except for "future perspective," which was significantly better in the NET group than in the NCT group (PUBMED:34277393). In summary, while some evidence suggests that chemotherapy may not be necessary for all premenopausal women with lower-risk node-positive, endocrine-responsive breast cancer, particularly when adequate endocrine therapy is provided, the decision to include chemotherapy in the treatment regimen should be individualized based on the patient's risk of relapse and other factors. Further research is needed to clarify the subsets of patients who may benefit from chemotherapy in addition to endocrine therapy.
Instruction: Survival from haematological malignancy in childhood, adolescence and young adulthood in Australia: is the age-related gap narrowing? Abstracts: abstract_id: PUBMED:20712542 Survival from haematological malignancy in childhood, adolescence and young adulthood in Australia: is the age-related gap narrowing? Objectives: To examine 5-year survival from haematological malignancies in children, adolescents and young adults in Australia and determine if there has been any improvement in survival for the older age groups compared with children (the age-related "survival gap"). Design, Setting And Participants: Population-based study of all Australian children (aged 0-14 years), adolescents (15-19 years) and young adults (20-29 years) diagnosed with acute lymphoblastic leukaemia (ALL), acute myeloid leukaemia (AML), Hodgkin lymphoma (HL) and non-Hodgkin lymphoma (NHL) between 1982 and 2004, with follow-up to 2006. Main Outcome Measures: 5-year survival from ALL, AML, HL and NHL analysed for four periods of diagnosis (1982-1989, 1990-1994, 1995-1999 and 2000-2004). Results: During 1982-2004, 13 015 people aged ≤29 years were diagnosed with primary leukaemia or lymphoma in Australia. For those with ALL, 5-year survival for adolescents improved from 40% (1982-1989) to 74% (2000-2004); the improvement for young adults was smaller (31% to 47%), and both these groups still had lower survival than children, whose 5-year survival improved from 74% to 88%. There was a larger narrowing of the gap for AML: for cases diagnosed in 2000-2004, 5-year survival was similar for young adults (63%), adolescents (74%) and children (69%). For lymphoma cases diagnosed in 2000-2004, 5-year survival in all age groups was greater than 95% for HL and greater than 81% for NHL, although children fared better than adolescents and young adults. Conclusions: These Australian population-based data confirm an improvement in survival from haematological malignancies across all three age groups, but an age-related survival gap remains for adolescents and young adults compared with children, especially for young adults with ALL. Greater participation of adolescents and young adults in clinical trials and more detailed data collection are needed to provide evidence about optimal treatment regimens in these age groups. abstract_id: PUBMED:27047419 Pediatric Blood Cancer Survivors and Tobacco Use across Adolescence and Emerging Adulthood: A Narrative Review. Scholars underline the pivotal role of tobacco smoking in the carcinogenesis of blood tumors. The spread of tobacco use among young survivors who were diagnosed with a blood tumor in childhood remains a matter of debate. Scientific evidence highlights that, compared with their peers, pediatric survivors find it more difficult to give up cigarette smoking. Furthermore, tobacco smoking is frequently linked with other risk behaviors such as drinking or substance abuse. In reviewing the main knowledge on this topic, the authors affirm the need for more research on blood cancer survivors in order to characterize the psychological profile of pediatric blood cancer survivors. Improving health decision-making skills in young survivors could reduce the risk of adopting unhealthy behaviors and increase psychological wellbeing. Furthermore, the authors propose tailored antismoking interventions based on knowledge of the psychological and cognitive factors that sustain smoking during the transition toward emerging adulthood.
abstract_id: PUBMED:34918953 Preserved Long-Term Lung Function in Young Adult Survivors of Common Childhood and Adolescence Malignancies. Purpose: We aimed to evaluate long-term lung function and respiratory outcomes in young adults who survived common pediatric malignancies treated in a single center. Methods: We enrolled young adults who had been treated during their childhood or adolescence for hematological or solid cancer at our Pediatric Oncology Unit, and performed pulmonary function tests (PFT) and clinical evaluation. PFT included spirometry and Diffusing Capacity of Lung for Carbon Monoxide (DLCO). Results: We included 121 survivors, mean age 23 years at follow-up, median 15 years from diagnosis. The most common diagnoses were hematological malignancies, mainly acute lymphoblastic leukemia, whereas 31% of the patients were treated for nonhematological cancer, mainly neuroblastoma. Treatments consisted of chemotherapy alone or in combination with radiotherapy and/or hematopoietic stem cell transplantation. Most of the patients denied respiratory symptoms throughout the years. In the whole group only eight patients (6%) had abnormal PFT, consisting mainly of a restrictive pattern and reduced DLCO. PFT abnormalities were of mild degree in most of the cases. Overall, the mean values of forced vital capacity, forced expiratory volume in 1 second, and DLCO were normal, but lower in females, in those who received radiotherapy and in those treated for lymphoma. Conclusion: In a group of young adults surviving the most common childhood malignancies, we found preserved lung function after a median follow-up of 15 years. The constantly increasing survival in childhood cancer is now associated with a trend toward an improvement in long-term respiratory outcomes. abstract_id: PUBMED:34166497 Parenthood among men diagnosed with cancer in childhood and early adulthood: trends over time in a Danish national cohort. Study Question: Is the rate of fatherhood among men diagnosed with cancer in childhood and early adulthood different from men without cancer, and, if so, have the differences changed over time? Summary Answer: Men diagnosed with cancer have had significantly reduced rates of fatherhood compared with undiagnosed men; however, the rates of fatherhood among the cancer survivors have increased markedly over time. What Is Known Already: The number of children and young adolescents who survive cancer has steadily increased over recent decades, with a current 5-year survival rate of approximately 80%. Consequently, life circumstances after cancer have gained increasing importance, including the desire among survivors to have children and a family. ARTs to aid reproduction among cancer survivors have been developed, and fertility preservation is increasingly a topic being discussed before undergoing cancer treatment. But the potential for fertility preservation differs depending on age at diagnosis and type of cancer. Earlier studies have shown a decreased fertility rate among survivors of child and adolescent cancer compared to those diagnosed in early adulthood. Study Design, Size, Duration: This study is a national, register-based cohort study. Men diagnosed with cancer in childhood and early adulthood (<30 years of age) were registered in the Danish Cancer Register in 1978-2016 (n = 9353). According to the time of diagnosis, each cancer-diagnosed man was randomly matched with 150 undiagnosed men from the background population within the same birth year.
The men were followed in medical registers and socio-demographic population registers until having their first child, death, migration or the end of the study (31 December 2017). Participants/Materials, Setting, Methods: Fatherhood among the boys and young men diagnosed with cancer was compared with the age-matched comparison group in all statistical analyses. Cancer diagnoses were categorised as central nervous system (CNS) cancers, haematological cancers or solid cancers. Analyses were stratified by age at diagnosis (0-9, 10-19, 20-29 years) and time of diagnosis (1978-1989, 1990-1999, 2000-2009, 2010-2016). Death was incorporated as a competing risk in all analyses. Main Results And The Role Of Chance: The study population consisted of 9353 boys and young men diagnosed with cancer between 1978 and 2016 and 1 386 493 men in the age-matched comparison group. Those surviving CNS cancer as young men had the lowest hazard ratio (HR) of fatherhood compared with the age-matched comparison group (HR 0.67, 95% CI 0.57-0.79), followed by survivors of haematological cancers (HR 0.90, 95% CI 0.81-1.01), while the highest chance of fatherhood was among survivors of solid cancers (HR 1.16, 95% CI 1.12-1.20), with a slightly increased HR compared with undiagnosed males. The HR of becoming a father increased over time. From the first decade to the last decade 30 years later, the HR of becoming a father increased for solid tumours (HR 0.78, 95% CI 0.73-0.83 to HR 1.08, 95% CI 0.95-1.22), haematological cancers (HR 0.64, 95% CI 0.53-0.79 to HR 0.97, 95% CI 0.73-1.30) and CNS cancers (HR 0.44, 95% CI 0.34-0.57 to HR 0.98, 95% CI 0.49-1.95) compared to the age-matched comparison group. Also, men diagnosed with cancer when aged 20-29 years became increasingly likely, relative to the age-matched comparison group, to become fathers over the course of the study (HR 0.80, 95% CI 0.74-0.86 to HR 1.08, 95% CI 0.96-1.22). Limitations, Reasons For Caution: The study was based on register data, and information was not available about the men's fertility potential, whether they had a desire to have children and whether it was possible for them to find a partner. Information about fertility preservation, e.g. sperm freezing, could also have provided additional insights. Furthermore, information about diagnosis and ART treatment would have been beneficial. Wider Implications Of The Findings: Information and education of male patients diagnosed with cancer about fertility preservation options and their chances of creating their own family are crucial. Reassuringly, time trends showed more men with a previous cancer diagnosis becoming fathers in recent years than in earlier years, reflecting that survival and fertility preservation have improved over time. Study Funding/Competing Interest(s): R.S. received a PhD grant from the Rosa Ebba Hansen Foundation and from the Health Foundation (J.nr. 15-B-0095). The funding for the establishment of the DANAC II Cohort was obtained from the Rosa Ebba Hansen Foundation. The authors have no conflicts of interest to declare. Trial Registration Number: N/A. abstract_id: PUBMED:26937602 Survival From Childhood Hematological Malignancies in Denmark: Is Survival Related to Family Characteristics? Background: Due to diverse findings as to the role of family factors in childhood cancer survival, even within Europe, we explored a nationwide, register-based cohort of Danish children with hematological malignancies.
Methods: All children born between 1973 and 2006 and diagnosed with a hematological malignancy before the age of 20 years (N = 1,819) were followed until 10 years from diagnosis. Kaplan-Meier curves and Cox proportional hazards models estimating hazard ratios (HR) and 95% confidence intervals (CI) were used to assess the impact of family characteristics on overall survival in children with hematological malignancies. Results: Having siblings and increasing birth order were associated with reduced survival from acute lymphoblastic leukemia (ALL) and acute myeloid leukemia (AML). Associations with AML were strongest and statistically significant. HRs of 1.62 (CI 0.85; 3.09) and 5.76 (CI 2.01; 16.51) were observed for fourth- or later-born children with ALL (N = 41) and AML (N = 9), respectively. Children with older parents showed a tendency toward inferior ALL survival, while for AML young maternal age was related to poorer survival. Based on small numbers, a trend toward poorer survival from non-Hodgkin lymphoma was observed for children having siblings and for children of younger parents. Conclusions: Further research is warranted to gain more knowledge on the impact of family factors on childhood cancer survival in other populations and to elaborate the potential underlying mechanisms and pathways of those survival inequalities. abstract_id: PUBMED:19705347 Trends in survival after diagnosis with hematologic malignancy in adolescence or young adulthood in the United States, 1981-2005. Background: There are few population-based studies of long-term survival of adolescents and young adults with hematologic malignancies; most pertain to patients diagnosed in the 1990s or earlier. Period analysis was used to obtain up-to-date information on survival expectations of adolescents and young adults diagnosed with hematologic malignancies through the early 21st century. Methods: Period analysis was used to calculate 5- and 10-year relative survival for adolescents and young adults diagnosed with Hodgkin lymphoma (HL), non-Hodgkin lymphoma (NHL), acute lymphoblastic leukemia (ALL), acute myeloblastic leukemia (AML), and chronic myelocytic leukemia (CML) for five 5-year periods from 1981-1985 to 2001-2005, using data from the Surveillance, Epidemiology, and End Results database. Results: Survival strongly improved for each of the 5 hematologic malignancies. Increases in 10-year relative survival between 1981-1985 and 2001-2005 were as follows: HL, from 80.4% to 93.4%; NHL, from 55.6% to 76.2%; ALL, from 30.5% to 52.1%; AML, from 15.2% to 45.1%; CML, from 0 to 74.5% (P < .001 in all cases). However, although survival improved steadily throughout the period examined for the lymphomas and CML, survival was stable during the late 1990s and early 21st century for the acute leukemias. Conclusions: Survival expectations for adolescents and young adults with hematologic malignancies have strongly improved since the 1980s. However, with the exception of HL, survival rates have not reached the levels observed for children diagnosed with these malignancies, and survival expectations for patients with acute leukemia have stabilized at relatively low levels. abstract_id: PUBMED:36292046 Nutritional Status at Diagnosis as Predictor of Survival from Childhood Cancer: A Review of the Literature.
Few studies so far have examined the impact of nutritional status on the survival of children with cancer, with the majority of them focusing on hematological malignancies. We summarized published evidence reporting the association of nutritional status at diagnosis with overall survival (OS), event-free survival (EFS), relapse, and treatment-related toxicity (TRT) in children with cancer. Published studies on children with leukemia, lymphoma, and solid tumors have shown that both under-nourished and over-nourished children at cancer diagnosis had worse OS and EFS. In particular, the risk of death and relapse increased by 30-50% among children with leukemia with increased body mass index at diagnosis. Likewise, the risk of TRT was higher among malnourished children with osteosarcoma and Ewing sarcoma. Nutritional status seems to play a crucial role in the clinical outcomes of children with cancer, thus providing a significant modifiable prognostic tool in childhood cancer management. Future studies with adequate power and longitudinal design are needed to further evaluate the association of nutritional status with childhood cancer outcomes, using a more standardized definition to measure nutritional status in this population. The use of new technologies is expected to shed further light on this understudied area and open the way to person-targeted intervention strategies. abstract_id: PUBMED:27133800 Childhood, adolescent and young adult non-Hodgkin lymphoma: state of the science. The 5th International Symposium on Childhood, Adolescent and Young Adult (CAYA) Non-Hodgkin Lymphoma (NHL) was held in Varese, Italy, from 21-25 October 2015. This review represents a summary of the scientific sessions of this international symposium, including childhood, adolescent and young adult (AYA) NHL in countries with limited socio-economic resources, AYA NHL, anaplastic large cell lymphoma, post-transplant lymphoproliferative disease, B-cell NHL, lymphoblastic lymphoma, T/natural killer cell NHL and immunological therapies in NHL. Most importantly, the new International Paediatric NHL Staging System (IPNHLSS) and International Paediatric NHL Response Criteria (IPNHLRC) were introduced during the symposium. The symposium brought together a multinational and multidisciplinary group of clinicians and basic scientists focused on this field of haematological malignancies. abstract_id: PUBMED:26695739 Comparison of survival of adolescents and young adults with hematologic malignancies in Osaka, Japan. The survival gap between adolescents and young adults (AYAs) with hematological malignancies persists in many countries. To determine to what extent it does in Japan, we investigated survival and treatment regimens in 211 Japanese AYAs (15-29 years) in the Osaka Cancer Registry diagnosed during 2001-2005 with hematological malignancies, and compared adolescents (15-19 years) with young adults (20-29 years). AYAs with acute lymphoblastic leukemia (ALL) had poor 5-year survival (44%), particularly young adults (29% vs. 64% in adolescents, p = 0.01). Additional investigation for patients with ALL revealed that only 19% of young adults were treated with pediatric treatment regimens compared with 45% of adolescents (p = 0.05). Our data indicate that we need to focus on young adults with ALL and to consider establishing an appropriate cancer care system and guidelines for them in Japan. abstract_id: PUBMED:30474007 Survival After Childhood Cancer-Social Inequalities in High-Income Countries.
Despite substantial improvements in survival from childhood cancer during the last decades, there are indications that survival rates for several cancer types are no longer improving. Moreover, evidence is accumulating that socioeconomic and sociodemographic factors may affect survival even in high-income countries. The aim of this review is to summarize the findings from studies on social factors and survival in childhood cancer. Several types of cancer and social factors are included in order to shed light on potential mechanisms and identify particularly affected groups. A literature search conducted in PubMed identified 333 articles published from December 2012 until June 2018, of which 24 fulfilled the inclusion criteria. The findings are diverse; some studies found no associations, but several indicated a social gradient with higher mortality among children from families of lower socioeconomic status (SES). There were no clear suggestions of particularly vulnerable subgroups, but hematological malignancies were most commonly investigated. A wide range of social factors have been examined and seem to vary in importance between studies. However, the potential underlying mechanisms linking a specific social factor to childhood cancer survival were seldom described. This review provides some support for a relationship between lower parental SES and worse survival after childhood cancer, a finding that needs further attention. Studies investigating predefined hypotheses involving specific social factors within homogeneous cancer types are lacking and would increase the understanding of the mechanisms involved and allow targeted interventions to reduce health inequalities. Answer: The age-related survival gap for haematological malignancies in Australia has shown some improvement, but disparities persist, particularly for adolescents and young adults compared with children. For acute lymphoblastic leukaemia (ALL), 5-year survival for adolescents improved from 40% in 1982-1989 to 74% in 2000-2004, while for young adults, it increased from 31% to 47% over the same periods. However, both groups still had lower survival rates than children, whose 5-year survival improved from 74% to 88%. For acute myeloid leukaemia (AML), the survival gap has narrowed more substantially, with 5-year survival being similar for young adults (63%), adolescents (74%), and children (69%) for cases diagnosed in 2000-2004. For lymphoma cases diagnosed in 2000-2004, 5-year survival was greater than 95% for Hodgkin lymphoma (HL) and greater than 81% for non-Hodgkin lymphoma (NHL) across all age groups, although children still fared better than adolescents and young adults. The data suggest that while there has been an overall improvement in survival from haematological malignancies across all age groups, an age-related survival gap remains, especially for young adults with ALL. Greater participation of adolescents and young adults in clinical trials and more detailed data collection are needed to provide evidence about optimal treatment regimens for these age groups (PUBMED:20712542).
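A brief reading note on the interval estimates quoted in the two Danish studies above (PUBMED:34166497 and PUBMED:26937602) - a standard interpretation of ratio measures, not something stated in the abstracts themselves: for a hazard ratio, significance at the two-sided 5% level corresponds, approximately, to the 95% confidence interval excluding 1,

\[
1 \notin \mathrm{CI}_{95\%}(\mathrm{HR}) \;\Longleftrightarrow\; p < 0.05 \quad (\text{approximately, for Wald-type intervals}).
\]

On this reading, the CNS-cancer estimate HR 0.67 (0.57-0.79) and the solid-cancer estimate HR 1.16 (1.12-1.20) are both significant, in opposite directions, while HR 0.90 (0.81-1.01) for haematological cancers just includes 1. Likewise, in the family-characteristics study, HR 5.76 (2.01-16.51) for fourth- or later-born children with AML excludes 1, whereas HR 1.62 (0.85-3.09) for ALL does not, matching the authors' statement that only the AML association was statistically significant.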
Instruction: Can convulsions alone cause retinal haemorrhages in infants? Abstracts: abstract_id: PUBMED:9797668 Can convulsions alone cause retinal haemorrhages in infants? Aim: To evaluate the likelihood that, in children under the age of 2 years, convulsions alone may cause retinal haemorrhages. Methods: Children under the age of 2 years admitted to hospital following convulsions were examined within 48 hours of admission. The convulsions were classified by a paediatric neurologist, and detailed ocular examination, including indirect ophthalmoscopy, was performed by an ophthalmologist. Statistical analysis was undertaken using Hanley's rule of three. Results: 32 consecutive children admitted with convulsions were examined; 10 of them were admitted following epileptic seizures and 22 following febrile convulsions. Two of the children with febrile convulsions were admitted in status epilepticus. None of these children had retinal haemorrhages. Therefore, using Hanley's rule of three, the upper limit of the 95% confidence interval for the rate of retinal haemorrhages following convulsions in children under the age of 2 years is less than 10/100. Conclusions: In children under the age of 2 years, convulsions alone are unlikely to cause retinal haemorrhages. By combining the results of this study with those previously reported from this unit in older children, the upper limit of the 95% confidence interval for the rate of retinal haemorrhages following convulsions in children under the age of 14 years is less than 5/100. Therefore, the finding of retinal haemorrhages in a child admitted with a history of convulsion should trigger a meticulous search for other causes of these haemorrhages, particularly non-accidental injury. abstract_id: PUBMED:19666457 Do retinal haemorrhages occur in infants with convulsions? Aim: To determine the prevalence of retinal haemorrhages in infants presenting with convulsions and admitted to hospital, and to consider whether this finding indicates shaken baby syndrome. Methods: Prospective study of children aged 15 days to 2 years admitted with a diagnosis of first convulsion over a 2-year period (May 2004-May 2006). All infants were examined by an experienced ophthalmologist using indirect ophthalmoscopy within 72 h of admission. Results: 182 of 389 children seen in the accident and emergency department were admitted, and two were found to have retinal haemorrhages. Both children were eventually diagnosed as being abused. Conclusions: Convulsions alone are unlikely to cause retinal haemorrhages in children under 2 years of age. abstract_id: PUBMED:15926388 Accidental and nonaccidental head injuries in infants: a prospective study. Object: Head injury is a major cause of morbidity and death in infants, and child abuse is among its chief causes. Retinal hemorrhages (RHs) are of paramount importance for the diagnosis of child abuse; however, their sensitivity and specificity are poorly estimated. Subdural hematoma (SDH) is a common feature; however, its incidence and causative factors are poorly documented. Methods: To study the epidemiology of head injuries in infants, factors predisposing to SDH, the value of RH for the diagnosis of child abuse, and prognostic factors, the authors prospectively collected data from cases of head injury in infants who were hospitalized at their institution over a 3-year period. One hundred fifty cases were collected, 57 of which were due to child abuse.
This prospective study allows the comparison of clinical, radiological, and ophthalmological features in accidental and nonaccidental trauma in infants. Conclusions: Subdural hematomas were significantly correlated with RH and with child abuse but not with idiopathic macrocranium. The sensitivity and specificity of RH for the diagnosis of child abuse were 75% and 93.2%, respectively. Retinal hemorrhages associated with accidental trauma were always mild, and the specificity of more severe RH for the diagnosis of child abuse was 100%. The grading of RH requires the expertise of a trained neuroophthalmologist. Child abuse was also significantly associated with antecedents of perinatal illness, absence of signs of impact, and seizures on presentation. Although child abuse represented just 38% of traumas, it was the cause of 71% of deaths and 90% of severe disability in this series. Abuse and the clinical severity on presentation were two significant and independent factors determining outcome. abstract_id: PUBMED:9832664 Occult intracranial injury in infants. Study Objectives: The objectives of this study were as follows: (1) to determine whether clinical symptoms and signs of brain injury are sensitive indicators of intracranial injury (ICI) in infants admitted with head trauma, (2) to describe the clinical characteristics of infants who have ICI in the absence of symptoms and signs of brain injury, and (3) to determine the clinical significance of those ICIs diagnosed in asymptomatic infants. Methods: We conducted a retrospective analysis of all infants younger than 2 years of age admitted to a tertiary care pediatric hospital with acute ICI during a 6.5-year period. Infants were considered symptomatic if they had loss of consciousness, history of behavior change, seizures, vomiting, bulging fontanel, retinal hemorrhages, abnormal neurologic examination, depressed mental status, or irritability. All others were considered to have occult ICI. Results: Of 101 infants studied, 19 (19%; 95% confidence interval [CI] 12%, 28%) had occult ICI. Fourteen of 52 (27%) infants younger than 6 months of age had occult ICI, compared with 5 of 34 (15%) infants 6 months to 1 year, and none of 15 (0%) infants older than 1 year. Eighteen (95%) infants with occult ICI had scalp contusion or hematoma, and 18 (95%) had skull fracture. Nine (47%) infants with occult ICI received therapy for the ICI. No infants with occult ICI (0%) (95% CI 0, 14%) required surgery or medical management for increased intracranial pressure. Only 1 subject (5%) with occult ICI had any late symptoms or complications: a brief, self-limited convulsion. Conclusion: We found that 19 of 101 ICIs in infants admitted with head trauma were clinically occult. All 19 occult ICIs occurred in infants younger than 12 months of age, and 18 of 19 had skull fractures. None experienced serious neurologic deterioration or required surgical intervention. Physicians cannot depend on the absence of clinical signs of brain injury to exclude ICI in infants younger than 1 year of age. abstract_id: PUBMED:33163433 Abusive Head Traumas in 4 Infants. Pediatric abusive head trauma (AHT) is a serious, often repeated form of child abuse that causes grave brain damage. In Korea, AHT cases have been reported rarely, especially in infants. We present 4 cases of AHT in infants diagnosed at our institution during the last 2 years. We collected demographic data, ophthalmologic examinations, imaging studies, and outcomes.
The mean age was 7.2 months; 2 infants were girls and the others were boys. All four were admitted with no history of head trauma, and among them 2 patients presented with an episode of seizure and respiratory arrest. The initial mental status was semi-coma in 3 cases and stupor in 1 case. There were multiple retinal hemorrhages in both eyes in 2 cases; one had multiple old fractures of the extremities and another child showed multiple skull fractures. All patients underwent emergent surgery for acute bilateral subdural hemorrhages; 3 had craniotomy and another had burr-hole drainage. Two children died and the other 2 remain in a vegetative state. AHT has recently become more frequent in Korea, so neurosurgeons must be alert to AHT even in infants with head trauma. abstract_id: PUBMED:20486311 Medical, social and societal issues in infants with abusive head trauma. Abusive head trauma (AHT) is the leading cause of death from traumatic brain injury in children under 2 years of age. AHT presents with acute encephalopathy, subdural hemorrhages and retinal hemorrhages occurring in the context of an inappropriate or inconsistent history. We retrospectively analyzed, over a 10-year period, admissions and transfers to our hospital with suspected AHT to assess patterns of presentation, presenting symptoms, investigations, subsequent confirmation, social work input and both neurological and social outcomes. We analyzed all suspected AHT infants and children looking for the time of presentation, presenting symptoms, caregivers' concerns prior to presentation, a family profile including stressors, investigations (in particular neuroradiology and ophthalmology assessments), treatment in hospital, length of stay in hospital, social work involvement, subsequent discharge, neurological outcome and subsequent social work follow-up. Data were collected from the hospital HIPE system, RIS (radiology reports system) and records from the social work department for the period October 1998 to January 2009 inclusive. Of 22 patients with confirmed AHT, the most common presenting symptoms were seizures and irritability, followed by vomiting, poor feeding, a bulging fontanelle and lethargy. The father was the sole minder in 5 cases. There was a delayed history in 4 cases. One had multiple visits to his GP. All cases had subdural hemorrhages proven by either CT or MRI scans and retinal hemorrhages diagnosed by ophthalmology. One infant presented with a torn frenulum. Four had suspicious bruising. All had normal coagulation profiles, skeletal surveys and extensive metabolic tests. Hospital stays ranged from 1 to 124 days (the median was 28 days and the mean 33 days). Ten (45%) infants required ventilatory support. Sixteen infants had social work involvement within 4 days of admission (7 of these were interviewed immediately). Outcomes after case conferences were that 6 returned home with parents and 9 were placed in foster care. Four parents (18%) admitted to shaking their infants. There was 1 death. Thirteen (60%) were normal on follow-up. Two had ADHD. Two had language delay. Two had motor delay. One criminal prosecution has ensued as yet. Children with suspected AHT should undergo appropriate investigations, which should include brain imaging, ophthalmic examination, skeletal survey and blood investigations. Early social work assessment is a priority as part of the multidisciplinary approach. A prospective national study of AHT is required. abstract_id: PUBMED:15851071 Clinical analysis of non-accidental head injury in infants.
We present the clinical features, radiological findings and outcome of infants with non-accidental head injury presenting to our department between 2001 and 2003. There were 26 male and 13 female infants, aged between 7 days and 5 months. Presenting symptoms included seizures, vomiting, bulging fontanel, decreased level of consciousness, focal neurological signs, anemia and respiratory distress. We classified the patients into three groups: mild head injury (8 patients), moderate head injury (11 patients) and severe head injury (20 patients). Detailed history taking and thorough physical examination did not reveal obvious injury or abuse by the parents, other external visible injury, or a significant incidence of retinal bleeding. CT scan showed intracranial hemorrhage in all patients, most commonly including acute or acute on chronic subdural hematoma (SDH) and subarachnoid hemorrhage (SAH) in the parafalcine and tentorial regions and over the brain convexity, associated with focal or diffuse hypodensity of the parenchyma. Intracranial lesions were mostly bilateral (80%). These patients generally had a poor outcome; 31% died, 23% had a good outcome, and the remainder recovered with neurological impairment of varying types and degrees. After comprehensive assessment, we concluded that all the cases in our series were "non-accidental" head injuries. However, the absence of retinal bleeding, visible external injury and recognised abuse in our series differed from reports by other investigators. In our opinion, these features are not absolute requirements in diagnosing "non-accidental" injury, and detailed history taking, thorough clinical examination and radiological studies were the key indicators of the diagnosis. abstract_id: PUBMED:35213611 Infantile subdural hematoma in Japan: A multicenter, retrospective study by the J-HITs (Japanese head injury of infants and toddlers study) group. Objective: Subdural hematoma in infants or toddlers has often been linked to abuse, but it is not clear how many cases actually occur and how many are suspected of abuse. The purpose of this study was to investigate subdural hematoma in infants and toddlers in Japan. Methods: This multicenter, retrospective study reviewed the clinical records of children younger than 4 years with head trauma who were diagnosed with any finding on head computed tomography (CT) and/or magnetic resonance imaging (MRI), such as skull fracture and/or intracranial injury. A total of 452 children were included. The group suspected to have been abused was classified as nonaccidental, and the group considered to have been caused by an accident was classified as accidental. Subdural hematoma and other factors were examined on multivariate analysis to identify which factors increase the risk of nonaccidental injuries. Results: Of the 452 patients, 158 were diagnosed with subdural hematoma. Subdural hematoma was the most common intracranial finding in head trauma in infants and toddlers. A total of 51 patients were classified into the nonaccidental group, and 107 patients were classified into the accidental group. The age of patients with subdural hematoma showed a bimodal pattern. The mean age of the accidental group with subdural hematoma was significantly older than that in the nonaccidental group (10.2 months vs 5.9 months, respectively; p < 0.001).
Multivariate analysis showed that age under 5 months, retinal hemorrhage, and seizure were significant risk factors for nonaccidental injury (odds ratio (OR) 3.86, p = 0.0011; OR 7.63, p < 0.001; OR 2.49, p = 0.03, respectively). On the other hand, the odds ratio for subdural hematoma was 1.96, and no significant difference was observed (p = 0.34). Conclusions: At least in Japanese children, infantile subdural hematoma was frequently observed not only in nonaccidental but also in accidental injuries. In infants with head trauma, age, the presence of retinal hemorrhage, and the presence of seizures should be considered when determining whether they were abused. Subdural hematoma is also a powerful finding to detect abuse, but care should be taken because, in some ethnic groups, such as the Japanese, there are many accidental cases. abstract_id: PUBMED:235749 Central nervous system findings in the newborn monkey following severe in utero partial asphyxia. Seizures, brain swelling, and cortical necrosis have been observed in the newborn rhesus monkey following a 2- to 4-hour period of intrauterine partial asphyxia produced by halothane-induced maternal hypotension. These clinical and neuropathologic findings are similar to those seen in human newborn infants who have experienced an episode of intrauterine asphyxia from such a cause as premature placental separation. The present study strongly indicates that fetal partial asphyxia, from any cause, in the absence of fetal circulatory collapse or fetal head compression, may be the primary event that sets in motion a vicious cycle of brain swelling and impaired cerebral blood flow, leading finally to cerebral necrosis. abstract_id: PUBMED:23108326 Shaken baby syndrome: pathogenetic mechanism, clinical features and preventive aspects. The shaken baby syndrome (SBS) is an extremely serious form of child abuse and a leading cause of death and disability in childhood. The syndrome usually occurs in infants younger than 1 year when a parent or a care-giver tries to stop the baby from crying by vigorous manual shaking. The repetitive oscillations with rotational acceleration of the head can result in injuries of both vascular and neuronal structures. The most frequent injuries associated with SBS include encephalopathy, retinal hemorrhages, and subdural hemorrhage. Fractures of the vertebrae, long bones, and ribs may also be associated with the syndrome. Victims of abuse have various presenting signs and symptoms ranging from irritability, decreased responsiveness and lethargy to convulsions, and death. Diagnosis is often difficult because usually parents or caregivers do not tell the truth about what has happened to their child and because usually there is no external evidence of trauma. However, the syndrome might be suspected if the information provided is vague or changing and when the child presents with retinal hemorrhages, subdural hematoma, or fractures that cannot be explained by accidental trauma or other medical conditions. Of infants who are victims of SBS, approximately 15% to 38% die and 30% are at risk of long-term neurologic sequelae, including cognitive and behavioural disturbances, motor and visual deficits, learning deficits and epilepsy. Parents and caregivers must be warned about the dangers of shaking infants. Answer: Based on the information provided in the abstracts, convulsions alone are unlikely to cause retinal haemorrhages in infants.
A study examining children under the age of 2 years admitted to the hospital following convulsions found that none of the children, including those with febrile convulsions or admitted in status epilepticus, had retinal haemorrhages (PUBMED:9797668). Another prospective study of children aged 15 days to 2 years admitted with a diagnosis of first convulsion also concluded that convulsions alone are unlikely to cause retinal haemorrhages in this age group (PUBMED:19666457). Furthermore, retinal hemorrhages (RHs) are of paramount importance for the diagnosis of child abuse, and their presence, especially when severe, is highly specific for non-accidental trauma rather than accidental causes or medical conditions like convulsions (PUBMED:15926388). In cases where retinal haemorrhages are found in infants, it is important to consider non-accidental injury, such as abusive head trauma (AHT), as a potential cause. AHT is a serious form of child abuse that can lead to grave brain damage and is associated with acute encephalopathy, subdural hemorrhages, and retinal hemorrhages (PUBMED:33163433; PUBMED:20486311; PUBMED:23108326). Therefore, the finding of retinal haemorrhages in a child admitted with a history of convulsion should prompt a thorough investigation for other causes, particularly non-accidental injury (PUBMED:9797668). In summary, the evidence suggests that convulsions alone do not typically cause retinal haemorrhages in infants, and the presence of such haemorrhages should raise suspicion for other underlying causes, including the possibility of child abuse.
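The diagnostic statistics quoted above (sensitivity 75% and specificity 93.2% for retinal hemorrhage, and the odds ratios with confidence intervals in PUBMED:35213611) all derive from simple 2x2-table arithmetic. The following minimal Python sketch shows how such figures are computed; the counts are hypothetical, chosen only so that the resulting sensitivity and specificity land near the reported values, and none of the numbers or variable names come from the studies themselves.

```python
# Illustrative only: sensitivity, specificity, and an odds ratio with a 95% CI
# from a 2x2 table. The counts are hypothetical, not data from the abstracts.
import math

tp, fn = 30, 10   # hypothetical: abused infants with / without retinal hemorrhage
fp, tn = 5, 68    # hypothetical: non-abused infants with / without retinal hemorrhage

sensitivity = tp / (tp + fn)   # P(finding present | abuse)
specificity = tn / (tn + fp)   # P(finding absent | no abuse)

# Odds ratio with a Woolf (log-scale) 95% confidence interval, the usual way
# figures such as "OR 7.63, p < 0.001" are reported.
odds_ratio = (tp * tn) / (fp * fn)
se_log_or = math.sqrt(1 / tp + 1 / fn + 1 / fp + 1 / tn)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```

With these hypothetical counts the sketch prints a sensitivity of 75.0% and a specificity of 93.2%, matching the order of magnitude of the figures reported above.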
Instruction: Do students' perceptions of school smoking policies influence where students smoke? Abstracts: abstract_id: PUBMED:20703936 Do students' perceptions of school smoking policies influence where students smoke?: Canada's Youth Smoking Survey. Objective: The objective of this study was to explore students' perceptions of school policy characteristics that influence the location of smoking while at school. Methods: Data were collected from a nationally representative sample of Canadian youth in grades 7-12 as part of the 2006-2007 Youth Smoking Survey. We used multilevel logistic regression to examine how students' perceptions of school policies predicted smoking behavior on and off school grounds in 11,881 students who had ever smoked. Separate analyses were conducted for grades 7-9 and 10-12. Results: In both grades 7-9 and 10-12, perceiving clear rules about smoking decreased the likelihood that a student would smoke on school grounds, while perceiving that a high percentage of peers smoke, that there are school rules about smoking, that students obey the rules, and that students can be fined for smoking increased the likelihood that a student would smoke off school grounds. Conclusions: Clearly perceived rules about smoking encourage students not to smoke on school grounds; however, perceptions of rules, along with strong enforcement, may displace behavior off of school grounds. Non-smoking policies should be part of a comprehensive approach that supports cessation. abstract_id: PUBMED:31200563 Adolescent Smoking in Secondary Schools that Have Implemented Smoke-Free Policies: In-Depth Exploration of Shared Smoking Patterns. Large numbers of adolescents smoke during school hours, despite the implementation of smoke-free school policies (SFSPs). Studies about SFSPs predominantly analyse smoking as individual behaviour, yet there is increasing recognition that smoking should be understood as social behaviour. We explored shared smoking patterns specifying where, when, and with whom, and social meanings about why groups of adolescents smoke in two Dutch schools that have implemented SFSPs. Surveys among adolescents were held to obtain contextual information about the schools. Four focus group discussions and fourteen individual interviews were held with adolescents to identify shared smoking patterns in each school. Two shared patterns were identified at a school where 17% of students smoked daily: Dependent smoking and Rebellious smoking. Both built on pro-smoking norms and underscored the benefits of smoking. Three shared patterns were identified at a school where 3% of students smoked daily: Social bonding smoking, Low-profile smoking and Smoking-friendly event smoking. These built on anti-smoking norms and helped smokers cope with negative social judgements related to smoking. We conclude that adolescent smoking during school hours is embedded in diverse shared smoking patterns. Future studies should develop more understanding about how to deal with adolescents' shared smoking patterns that decrease the effectiveness of tobacco policies. abstract_id: PUBMED:37465255 Passive exposure and perceptions of smoke-free policies in hospital and university campuses among nursing students: A cross-sectional multicenter study. Introduction: Outdoor smoke-free regulations reduce exposure to secondhand smoke (SHS) and help to denormalize tobacco use. As future key agents in health promotion, nursing students' attitudes should agree with tobacco-control policies.
The objectives of this study were: 1) to assess nursing students' exposure to SHS in nursing schools; 2) to explore their perceptions of compliance with the existing smoke-free regulations in acute-care hospitals; and 3) to describe their support for indoor and outdoor smoking bans on hospital and university campuses. Methods: This was a cross-sectional multicenter study conducted in 2015-2016 in all 15 university nursing schools in Catalonia, Spain. A questionnaire gathered information on SHS exposure, awareness of the smoke-free regulation in acute-care hospitals, and support for smoke-free policies in indoor and outdoor areas of hospitals and university campuses. Participants were nursing students attending classes on the day of the survey. We performed descriptive analyses and calculated adjusted prevalence ratios (APR) and 95% confidence interval (CI). Results: Of 4381 respondents, 99.1% had seen people smoking in outdoor areas of their university campus, and 75.2% had been exposed to SHS on the campus (6.0% indoors and 69.2% outdoors). Nearly 60% were aware of the smoking regulation in place in acute-care hospitals. There was widespread support for smoke-free indoor hospital regulation (98.7%), but less support (64.8%) for outdoor regulations. Approximately 33% supported the regulation to make outdoor healthcare campuses smoke-free, which was higher among third-year students compared to first-year students (APR=1.41; 95% CI: 1.24-1.62), among never smokers (41.4%; APR=2.84; 95% CI: 2.21-3.64) compared to smokers, and among those who were aware of the regulation (38.4%; 95% CI: 1.37-1.75). Conclusions: Exposure to SHS on university campuses is high. Nursing students express low support for strengthening outdoor smoking bans on hospital and university campuses. Interventions aiming to increase their support should be implemented. abstract_id: PUBMED:33302665 The prevalence of smoking in male medical students from 16 Chinese medical colleges and the associated factors. Background: The prevalence of smoking among Chinese medical students in medical colleges is a matter of significant concern. This study aimed to explore the prevalence of smoking and the associated factors among these students. Methods: A total of 7,728 students from 16 colleges in 12 provinces were surveyed to measure their smoking behaviors, attitudes toward tobacco control, and knowledge about smoking risk. An evaluation of campus environment and policy was conducted at the same time. Frequencies were calculated using weighted estimators for complex sampling. A two-level logistic regression model was used to assess factors associated with current smoking. Results: The prevalence of smoking among male medical students in the 16 medical colleges was 12.8%. Students' participation in smoking, smoking initiation after school enrollment, and intention-to-quit-smoking rates may be influenced by their anti-tobacco attitude, the belief that smoking can help to relieve stress, their parents' smoking behaviors, their best friend's attitude towards smoking and second-hand smoke exposure (SHSE), and their awareness of tobacco hazards. Conclusions: This study provides sufficient evidence to support the implementation of systematic intervention strategies to decrease the smoking rate among male medical students. These strategies may include tobacco control education, smoking cessation technologies, communication, and the enforcement of smoke-free policies to create a tobacco-controlled campus environment.
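PUBMED:37465255 above reports adjusted prevalence ratios (APRs) such as APR=2.84 (95% CI: 2.21-3.64) for never smokers versus smokers. The adjusted values come from regression models, but the underlying statistic is a ratio of two proportions. The sketch below, in Python, computes an unadjusted prevalence ratio with a log-method 95% CI; the counts are hypothetical and merely chosen to reproduce the 41.4% support figure quoted above.

```python
# Illustrative only: unadjusted prevalence ratio with a 95% CI (log method).
# The APRs in PUBMED:37465255 are regression-adjusted; these counts are
# hypothetical placeholders, not data from the study.
import math

supp_never, n_never = 207, 500    # hypothetical: 41.4% support among never smokers
supp_smoker, n_smoker = 73, 500   # hypothetical: 14.6% support among smokers

p1 = supp_never / n_never
p0 = supp_smoker / n_smoker
pr = p1 / p0                      # prevalence ratio

# Wald-type confidence interval on the log scale
se = math.sqrt((1 - p1) / supp_never + (1 - p0) / supp_smoker)
lo = math.exp(math.log(pr) - 1.96 * se)
hi = math.exp(math.log(pr) + 1.96 * se)

print(f"PR = {pr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```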
abstract_id: PUBMED:16267148 School tobacco control policies related to students' smoking and attitudes toward smoking: national survey results, 1999-2000. The belief that schools can play a powerful role in preventing tobacco use among adolescents has led to the implementation of various tobacco-related policies and practices. This study examines the association between school policies regarding monitoring student behavior, severity of action taken for infraction of policies, and tobacco use by staff, and student smoking behavior and attitudes. Data on students' smoking behavior and attitudes were obtained from the 1999 and 2000 Monitoring the Future surveys of nationally representative samples of 8th-, 10th-, and 12th-grade students. Data on school policies and practices were obtained from administrators in those same schools. Hierarchical analyses using HLM5 were conducted. Strictness of monitoring was significantly negatively associated with daily cigarette use by middle school students. Permitting staff to smoke was significantly positively associated with students' daily cigarette use and negatively with their disapproval of cigarette use. Policy implications are discussed. abstract_id: PUBMED:22826066 Cigarette smoking among students and the influence of legal regulations on passive smoking. Research suggests that reducing the degree of nicotine addiction in the population cannot be achieved only by prevention programs. Legislative measures by the state are also necessary. The aim of this study was to assess the degree of tobacco abuse in three groups of students. It also assesses the influence of the ban on smoking in public places on students' passive contact with tobacco. A customized survey made up of open and closed questions was conducted among 102 students of the electrical faculty, 109 medical students, and 71 students of the animal husbandry faculty. The results showed that significantly more women from the electrical faculty smoked. Among the students of animal husbandry, men smoke significantly more cigarettes than women. Women studying animal husbandry start smoking significantly earlier (by about 2 years) than women from other faculties. They are also significantly less likely to smoke cigarettes at school and at home. According to the study, the Polish law to ban smoking in public places, in force since the 15th of November 2010, did not make students quit smoking, although the rate of smoking students decreased. Students did not observe restrictions on smoking in their environment. The study indicates a positive influence of the anti-nicotine legislation on passive smoking as early as 3 months after its introduction.
Results revealed policy intention and implementation subscales did not significantly predict school smoking prevalence but resulted in moderate prediction of tobacco use on school property (R² = 0.21-0.27). Students' perceptions of policy enforcement significantly predicted school smoking prevalence (R² = 0.36) and location of tobacco use (R² = 0.23-0.63). The research findings emphasize (i) the need to consider both written policy intention and actual policy implementation, and (ii) that the existence of a policy is not effective in controlling tobacco use unless the policy is implemented and is perceived to be strongly enforced. abstract_id: PUBMED:29111947 Clearing the air: American Indian tribal college students' exposure to second hand smoke & attitudes towards smoke free campus policies. Objective: The purpose of this study was to examine knowledge, awareness, and support for campus smoke-free policies. Participants: 1,256 American Indian tribal college students from three tribal colleges in the Midwest and Northern Plains. Methods: Data are from an observational cross-sectional study of American Indian tribal college students, collected through a web-based survey. Results: Only 40% of tribal college students reported not being exposed to second hand smoke in the past 7 days. A majority of nonsmokers (66%) agreed or strongly agreed with having a smoke-free campus, while 34.2% of smokers also agreed or strongly agreed. Overall, more than a third (36.6%) of tribal college students were not aware of their campus smoking policies. Conclusions: Tribal campuses serving American Indian students have been much slower in adopting smoke-free campus policies. Our findings show that tribal college students would support a smoke-free campus policy. abstract_id: PUBMED:29084167 Prevalence and Determinants of Current Smoking and Intention to Smoke among Secondary School Students: A Cross-Sectional Survey among Han and Tujia Nationalities in China. Objectives: This study examined the patterns and determinants of current smoking and intention to smoke among secondary school students of Han and Tujia nationalities in China. Methods: A cross-sectional survey was conducted in three regions, namely, Chongqing, Liaocheng, and Tianjin, of China in 2015. A structured self-administered questionnaire was used for data collection. Results: Of the total subjects (n = 1805), 78.9% were ethnic Han and 21.1% were ethnic Tujia. Overall, 9.4% (Han: 7.7%; Tujia: 15.5%) of secondary school students were smokers and 37.28% smoked more than once per day. Of the non-smoker students (n = 1636), 17.4% had an intention to smoke. A total of 81.1% of students reportedly had never been taught throughout school about smoking or tobacco prevention. When compared to the students who were taught in the school about smoking or tobacco prevention (18.90%), students who were never taught were more likely to smoke (OR = 2.39; 95% CI = 1.14-5.01). Compared with Han nationality students, those from the Tujia nationality were more likely to smoke (OR = 2.76; 95% CI = 1.88-4.04) and were more likely to have a higher frequency of smoking (95% CI (0.88, 0.88), p = 0.010). Non-smokers who were high school students (OR = 4.29; 95% CI = 2.12-8.66), whose academic performance was situated in the last 25% (OR = 2.23; 95% CI = 1.48-3.34) and lower than 50% (OR = 1.50; 95% CI = 1.02-2.20) were more likely to have an intention of smoking.
Conclusions: About one in ten secondary school students was a smoker, one in three smokers smoked more than once per day, and a quarter of non-smokers had an intention of smoking in China. The smoking rate was higher among students from the Tujia than the Han nationality. This study provided some important information for future tobacco control programs among secondary school students in the ethnic minority autonomous region and minority settlements in a multi-ethnic country. abstract_id: PUBMED:16889312 Texas college students' opinions of no-smoking policies, secondhand smoke, and smoking in public places. The authors examined college student opinions of no-smoking policies, secondhand smoke, and smoking in public places. A convenience sample of 1,188 (66.4% female; 26.9% White, 64.1% Black, and 9.0% Hispanic) students attending 5 Texas colleges volunteered to complete a 60-item anonymous survey on tobacco attitudes and behaviors. Results of our study showed that women, Black students, and nonsmoking students reported the most favorable attitudes toward no smoking. Male students, White students, and students who smoked in the past 30 days (current smokers) reported the least favorable attitudes. The lowest level of agreement by all subgroups was provided for prohibiting smoking everywhere on campus (inside and outside of buildings). Implications for tobacco control on college campuses are discussed. Answer: Yes, students' perceptions of school smoking policies do influence where students smoke. Studies have shown that when students perceive clear rules about smoking, they are less likely to smoke on school grounds. For example, in a study involving Canadian youth, perceiving clear rules about smoking decreased the likelihood of students smoking on school grounds. However, perceiving that there are school rules about smoking, that students obey the rules, and that students can be fined for smoking increased the likelihood that a student would smoke off school grounds (PUBMED:20703936).
Instruction: Are aldehydes in heat-sterilized peritoneal dialysis fluids toxic in vitro? Abstracts: abstract_id: PUBMED:8785233 Are aldehydes in heat-sterilized peritoneal dialysis fluids toxic in vitro? Objective: Chemical analysis of several brands of peritoneal dialysis fluids (PD fluids) has revealed the presence of 2-furaldehyde, 5-HMF (5-hydroxymethylfuraldehyde), acetaldehyde, formaldehyde, glyoxal, and methylglyoxal. The aim of this study was to investigate if the in vitro side effects caused by glucose degradation products, mainly formed during heat sterilization, are due to any of these recently identified aldehydes. Design: Cell growth media or sterile filtered PD fluids were spiked with different concentrations of thealdehydes. Measurements: In vitro side effects were determined as the inhibition of cell growth of cultured mouse fibroblasts or stimulated superoxide radical release from human peritoneal cells. Results: Our results demonstrate that the occurrences of 2-furaldehyde, 5-HMF, acetaldehyde, formaldehyde, glyoxal, or methylglyoxal in heat-sterilized PD fluids are probably not the direct cause of in vitro side effects. In order to induce the same magnitude of cell growth inhibition as the heat-sterilized PD fluids, the concentrations of 2-furaldehyde, glyoxal, and 5-HMF had to be 50 to 350 times higher than those quantified in the PD fluids. The concentrations of acetaldehyde, formaldehyde, and methylglyoxal observed in the heat-sterilized PD fluids were closer to the cytotoxic concentrations although still 3 to 7 times lower. Conclusion: Since none of these aldehydes caused in vitro toxicity at the tested concentrations, the toxicity found in PD fluids is likely to be due to another glucose degradation product, not yet identified. However, it is possible that these aldehydes may still have adverse effects for patients on peritoneal dialysis. abstract_id: PUBMED:8369351 Heat sterilization of fluids for peritoneal dialysis gives rise to aldehydes. Objective: To chemically identify and quantify glucose degradation products in heat sterilized fluids for peritoneal dialysis. Design: Three different brands of commercial PD-fluids and one laboratory made fluid, sterilized either by heat or filtration, were investigated for the presence of aldehydes. Measurements: Aldehydes were identified and quantified using high performance liquid chromatography and gas chromatography. Results: The tested brands of heat sterilized PD-fluids were found to contain several different aldehydes while the sterile filtered PD-fluid contained none. The highest concentrations in commercial PD-fluids of these aldehydes were: acetaldehyde (420 microns), glyoxal (14 microns), methylglyoxal (12 microns) and formaldehyde (11 microns). Valeraldehyde was also identified but not quantified. The presence of 5-HMF (15 microns) and 2-furaldehyde (2 microns), which has been identified by others, was confirmed. Conclusions: The heat sterilization of commercial PD-fluids gives rise to several aldehydes which may contribute to adverse effects of PD-fluids on patients. abstract_id: PUBMED:26495307 Cross-omics comparison of stress responses in mesothelial cells exposed to heat- versus filter-sterilized peritoneal dialysis fluids. Recent research suggests that cytoprotective responses, such as expression of heat-shock proteins, might be inadequately induced in mesothelial cells by heat-sterilized peritoneal dialysis (PD) fluids. 
This study compares transcriptome data and multiple protein expression profiles to provide new insight into regulatory mechanisms. Two-dimensional difference gel electrophoresis (2D-DIGE) based proteomics and topic-defined gene expression microarray-based transcriptomics techniques were used to evaluate stress responses in human omental peritoneal mesothelial cells in response to heat- or filter-sterilized PD fluids. Data from selected heat-shock proteins were validated by 2D Western blot analysis. Comparison of proteomics and transcriptomics data discriminated differentially regulated protein abundance into groups depending on correlating or noncorrelating transcripts. Inadequate abundance of several heat-shock proteins following exposure to heat-sterilized PD fluids is not reflected at the mRNA level, indicating interference beyond transcriptional regulation. For the first time, this study describes evidence for posttranscriptional inadequacy of heat-shock protein expression by heat-sterilized PD fluids as a novel cytotoxic property. Cross-omics technologies introduce a novel way of understanding PDF bioincompatibility and searching for new interventions to reestablish adequate cytoprotective responses. abstract_id: PUBMED:1457884 Toxicity of heat sterilized peritoneal dialysis fluids is derived from degradation of glucose. Heat sterilization makes peritoneal dialysis (PD) solutions cytotoxic. Two compounds in the solutions, lactate and glucose, can be degraded by heat. This study's goal was to discover which of the compounds was responsible for the cytotoxicity. The influence of sterilization temperature on degradation of the compounds was also subjected to investigation. Solutions of glucose and lactate and a mixture of lactate and glucose were prepared. These were sterilized in glass ampules in an oil bath at different temperatures for varying times. Toxicity was determined as inhibition of cell growth with a fibroblast cell line (L929), and ultraviolet (UV) absorbance was measured at 284 nm. Lactate solutions did not show cytotoxicity after heat sterilization. Glucose solutions that were heat sterilized showed an increase in UV absorbance at 284 nm and were cytotoxic. The mixture of lactate and glucose exhibited the same cytotoxicity as glucose alone. Lower sterilization temperatures led to increased cytotoxicity and an increase in UV absorbance at 284 nm. Results indicate that the toxic products formed during heat sterilization of PD fluids are derived from glucose. abstract_id: PUBMED:10505558 Acute in vivo toxicity of heat-sterilized glucose peritoneal dialysis fluids to rat peritoneal macrophages. Objective: To evaluate the in vivo effects of heat-sterilized peritoneal dialysis (PD) fluids on the respiratory burst response of rat peritoneal leukocytes. Design: Rats were exposed to intraperitoneal injections of a laboratory-made PD fluid that was either heat-sterilized (H-PD) or filtered (F-PD). Control groups of animals were given Hanks' buffer (HBSS) or saline (NaCl). Leukocytes were harvested by intraperitoneal lavage at different times in different animals and analyzed with respect to cell numbers, differential counts, and production of superoxide (chemiluminescence) in response to opsonized zymosan. The chemiluminescence responses of the macrophage and the neutrophil populations, respectively, were obtained by curve-fitting techniques from the responses of the mixed populations.
Results: All fluids induced a recruitment of neutrophils, the PD fluids causing a cell number increase that was more transient than that caused by NaCl and HBSS. Macrophage numbers were only slightly influenced, but were generally higher after NaCl and HBSS injections than after PD fluid injections. The H-PD exposure induced a significant inhibition of the macrophage chemiluminescence response after 2 and 12 hours, compared with the exposure to F-PD. The neutrophil chemiluminescence response was not significantly affected. Conclusion: The toxins produced by heat sterilization of glucose-containing PD fluids inhibit in vivo the respiratory burst response of peritoneal macrophages. abstract_id: PUBMED:12472800 Affinity adsorption of glucose degradation products improves the biocompatibility of conventional peritoneal dialysis fluid. Background: Reactive carbonyl compounds (RCOs) present in peritoneal dialysis (PD) fluid have been incriminated in the progressive deterioration of the peritoneal membrane in long-term PD patients. They are initially present in fresh conventional heat-sterilized glucose PD fluid and are supplemented during dwell time by the diffusion of blood RCOs within the peritoneal cavity. In the present study, RCO entrapping agents were immobilized on affinity beads to adsorb RCOs both in fresh PD fluid and in PD effluent. Methods: The RCO trapping potential of various compounds was assessed in vitro first by dissolving them in the tested fluid and subsequently after coupling with either epoxy- or amino-beads. The tested fluids include fresh heat-sterilized glucose and non-glucose PD fluids, and PD effluent. Their RCO contents, that is, glyoxal (GO), methylglyoxal (MGO), 3-deoxyglucosone (3-DG), formaldehyde, 5-hydroxymethylfuraldehyde, acetaldehyde, and 2-furaldehyde, were monitored by reverse-phase high-pressure liquid chromatography. The biocompatibility of PD fluid was assessed by a cytotoxic assay with either human epidermoid cell line A431 cells or with primary cultured human peritoneal mesothelial cells. Results: Among the tested RCO entrapping agents, hydrazine coupled to epoxy-beads proved the most efficient. It lowered the concentrations of three dicarbonyl compounds (GO, MGO, and 3-DG) and those of aldehydes present in fresh heat-sterilized glucose PD fluid toward the low levels observed in filter-sterilized glucose PD fluid. It did not change the glucose and electrolyte concentrations of the PD fluid but raised its pH from 5.2 to 5.9. Hydrazine-coupled epoxy-beads also lowered the PD effluent content of total RCOs, measured by the 2,4-dinitrophenylhydrazone (DNPH) method. The cytotoxicity of heat-sterilized PD fluid incubated with hydrazine-coupled epoxy-beads was decreased to the level observed in filter-sterilized PD fluid as the result of the raised pH and the lowered RCO levels. Conclusion: Hydrazine-coupled epoxy-beads reduce the levels of a variety of dicarbonyls and aldehydes present in heat-sterilized glucose PD fluid to those in filter-sterilized PD fluid, without altering glucose, lactate, and electrolyte contents but with a rise in pH. Incubated with PD effluents, it is equally effective in reducing the levels of serum-derived RCOs. RCO entrapping agents immobilized on affinity beads improve in vitro the biocompatibility of conventional heat-sterilized glucose PD fluid. Their clinical applicability requires further studies. abstract_id: PUBMED:1921158 Toxicity of peritoneal dialysis fluids on cultured fibroblasts, L-929.
Peritoneal dialysis (PD) fluids are known to suppress the reactions of inflammatory cells in vitro. PD-fluids have also been shown to have a cytotoxic influence on mesothelial cells. The combination of these factors may have a detrimental effect on the peritoneum or may impair cellular defence against bacterial peritonitis. Some authors have discussed the relevance of heat sterilization to both so-called peritoneal side effects and chemical decomposition of the fluids. Four commercial PD-fluids and one laboratory-made PD-fluid were tested for cytotoxicity on a cultured fibroblast cell line, L-929. Cytotoxicity was determined as an inhibition of cell growth by quantification of total protein. The laboratory-made PD-fluid was sterilized either by filtration or by filtration and heat. The commercial and the heat-sterilized laboratory-made PD-fluids caused significant inhibition of cell growth (53 to 76%) in contrast to saline and the filter-sterilized laboratory-made PD-fluid. Since the pH values of all the test solutions were neutral, low pH was not the cause of toxicity. Our results regarding the L-929 cells indicate that the cytotoxicity of PD-fluids is of a general nature. Furthermore, the results indicate that the heat sterilization process might be partially responsible for causing toxicity in PD-fluids. abstract_id: PUBMED:7612737 In vitro biocompatibility of a heat-sterilized, low-toxic, and less acidic fluid for peritoneal dialysis. Objective: The aim of this study was to investigate a peritoneal dialysis (PD) fluid (PD-Bio), produced with the intention of reducing the amount of glucose degradation products and increasing the final pH. The heat sterilization of the fluid was performed with the glucose separated from the electrolytes. After sterilization the two solutions were combined. Methods: The in vitro biocompatibility of PD-Bio was measured as the inhibition of cell growth of a cultured fibroblast cell line and as the stimulated release of interleukin-1 beta from cultured human mononuclear cells. The glucose degradation products were measured as UV absorbance at 228 nm or 284 nm and the concentration of aldehydes was estimated with high-performance liquid chromatography and gas chromatography. Results: Our results demonstrate that in comparison to conventional PD fluids, the pH of PD-Bio was increased to about 6.5. Due to less contaminating glucose degradation products in PD-Bio, basal cytotoxicity was significantly decreased for both 1.5% and 4% glucose-containing fluids, and the stimulated release of interleukin-1 beta was normalized compared to sterile filtered controls with the same pH. UV absorbance measured at 228 nm was decreased, whereas the absorbance at 284 nm was equal to that of a conventional fluid. In PD-Bio the concentrations of formaldehyde, acetaldehyde, methylglyoxal, and 2-furaldehyde were found to be below the detection limit, whereas glyoxal was present in the same and 5-hydroxymethylfurfural (5-HMF) in higher concentrations than in conventionally produced PD fluid. Conclusions: The results demonstrate that it is possible to improve biocompatibility of PD fluids by simply changing the way the fluid is produced.
The present study was performed to characterize the kinetics and the dose-response of the respiratory burst inhibition by GDPs and to compare different fluids in this respect. The zymosan-induced respiratory burst of rat peritoneal neutrophils and macrophages was measured by chemiluminescence (CL) after incubation in vitro for 1, 2, and 4 hours in different homemade and commercially available PD fluids, followed by one hour of recovery in Hanks' buffer. Heat sterilized fluids were compared with their filter sterilized equivalents at two different pH levels. The results revealed that the inhibitory effect of heat sterilized fluids on the respiratory burst of peritoneal neutrophils is additive to that of low pH, but faster-acting and, in contrast to the pH effect, similar in magnitude to its in vivo equivalent. The effect developed within 1 hour and had a linear dose response. The low GDP fluid Gambrosol-Bio was less toxic than the conventional fluid Gambrosol, but the difference was smaller than expected in relation to measured concentrations of known GDPs. Macrophages were less sensitive than neutrophils to the GDP effect. abstract_id: PUBMED:11229608 Biological significance of reducing glucose degradation products in peritoneal dialysis fluids. Carbohydrates are not stable when exposed to energy; they degrade into new molecules. In peritoneal dialysis (PD) fluids, degradation of glucose occurs during the heat sterilization procedure. The biological consequences of this degradation are side effects such as impaired proliferation and impaired host defense mechanisms, demonstrated in vitro for a great variety of cells. Several highly toxic compounds, such as formaldehyde and 3-deoxyglucosone, have been identified in PD fluids. Carbonyl compounds, apart from being cytotoxic, are also well-known promoters of irreversible advanced glycation end-products (AGEs), which might participate in the long-term remodeling of the peritoneal membrane. Various approaches can be used to reduce the formation of glucose degradation products (GDPs) during heat sterilization. Some examples are shortening the sterilization time, lowering the pH, removing catalyzing substances, and increasing glucose concentration. The latter three factors are employed in the multi-compartment bag with a separate chamber containing pure glucose at high concentration and low pH. Gambrosol trio, a PD fluid produced in this way, shows reduced cytotoxicity, normalized host defense reactions, less AGE formation, and reduced concentrations of formaldehyde and 3-deoxyglucosone. Moreover, in the clinical situation, the fluid turns out to be more biocompatible for the patient, causing less mesothelial cell damage, which in the long term could lead to a more intact peritoneal membrane. Conclusion: Glucose degradation products in heat-sterilized fluids for peritoneal dialysis are cytotoxic, promote AGE formation, and cause negative side effects for the patient. Using improved and well-controlled manufacturing processes, it is possible to produce sterile PD fluids with glucose as the osmotic agent but without the negative side effects related to GDPs. Answer: The presence of aldehydes in heat-sterilized peritoneal dialysis (PD) fluids has been a concern due to their potential toxicity. However, the studies provided offer mixed insights into the toxicity of these aldehydes in vitro.
One study investigated the in vitro side effects of aldehydes such as 2-furaldehyde, 5-HMF, acetaldehyde, formaldehyde, glyoxal, and methylglyoxal, which are found in heat-sterilized PD fluids. The results demonstrated that these aldehydes, at the concentrations found in the PD fluids, did not directly cause in vitro side effects, such as inhibition of cell growth or stimulated superoxide radical release from human peritoneal cells. To induce the same magnitude of cell growth inhibition as the heat-sterilized PD fluids, much higher concentrations of 2-furaldehyde, glyoxal, and 5-HMF were required. Although the concentrations of acetaldehyde, formaldehyde, and methylglyoxal were closer to cytotoxic levels, they were still 3 to 7 times lower than what would be required to cause toxicity. Therefore, the study concluded that the toxicity observed in PD fluids is likely due to another unidentified glucose degradation product, rather than the aldehydes tested (PUBMED:8785233). Another study confirmed the presence of several aldehydes in heat-sterilized commercial PD fluids, suggesting that these aldehydes could contribute to adverse effects on patients. The study did not, however, directly assess the in vitro toxicity of these aldehydes (PUBMED:8369351). In summary, while aldehydes are present in heat-sterilized PD fluids, the evidence from the provided abstracts suggests that, at the concentrations found in these fluids, they are not the direct cause of in vitro toxicity. However, the potential for adverse effects in patients cannot be ruled out, and further research may be needed to fully understand the impact of these aldehydes and other glucose degradation products on both in vitro and in vivo systems.
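The quantitative reasoning in this answer rests on fold-difference arithmetic: comparing the aldehyde concentrations measured in PD fluids (PUBMED:8369351) with the concentrations needed to produce cytotoxicity (PUBMED:8785233). A minimal Python sketch of that comparison follows; the measured values are taken from the abstract above, while the cytotoxic thresholds are hypothetical placeholders inserted only to illustrate the calculation.

```python
# Illustrative only: fold-difference between measured aldehyde levels in
# heat-sterilized PD fluid (PUBMED:8369351) and cytotoxic thresholds. The
# thresholds below are hypothetical, not values reported by the studies.
measured_uM = {
    "acetaldehyde": 420,
    "glyoxal": 14,
    "methylglyoxal": 12,
    "formaldehyde": 11,
}
hypothetical_cytotoxic_uM = {
    "acetaldehyde": 2100,
    "glyoxal": 4900,
    "methylglyoxal": 60,
    "formaldehyde": 55,
}

for aldehyde, measured in measured_uM.items():
    threshold = hypothetical_cytotoxic_uM[aldehyde]
    fold = threshold / measured  # how far the measured level sits below toxicity
    print(f"{aldehyde}: measured {measured} uM is {fold:.0f}x below the threshold")
```

Under these assumed thresholds, glyoxal sits 350-fold below toxicity while the other three sit only about 5-fold below, mirroring the ranges ("50 to 350 times" and "3 to 7 times") described in PUBMED:8785233.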
Instruction: Do we defend a brain oriented view of death? Abstracts: abstract_id: PUBMED:12235573 Do we defend a brain oriented view of death? Aims: To propose a new formulation of death based on the mechanisms involved in consciousness generation in human beings. Development: Any complete formulation of death must include three different elements: a definition of death, its anatomofunctional substratum and the tests required to diagnose death. The three brain oriented formulations of death are: the whole brain, the brain stem, and the neocortical formulations of death. In this paper I review and discuss each of these formulations, and I propose a new formulation about human death based on the physiopathological mechanisms involved in the generation of consciousness. Consciousness contains two physiological components: arousal and awareness. Since the structures of the brain stem, the diencephalon and the cerebral cortex interact to generate consciousness, it would be a mistake to make a rigid distinction between their functions from the point of view of waking and content. Important interconnections between the brain stem, other subcortical structures and the neocortex give rise to both components of consciousness. The generation of consciousness, then, is based on the anatomy and the physiology of pathways throughout the whole brain. None of the three formulations above is wholly satisfactory. Conclusion: I propose a new formulation of death that identifies consciousness as the most important function of the organism, because it provides the essential human attributes and the highest level of control within the hierarchy of integrating functions of the organism. abstract_id: PUBMED:23784534 Reviving brain death: a functionalist view. Recently both whole brain death (WBD) and higher brain death (HBD) have come under attack. These attacks, we argue, are successful, leaving supporters of both views without a firm foundation. This state of affairs has been described as "the death of brain death." Returning to a cardiopulmonary definition presents problems we also find unacceptable. Instead, we attempt to revive brain death by offering a novel and more coherent standard of death based on the permanent cessation of mental processing. This approach works, we claim, by being functionalist instead of being based in biology, consciousness, or personhood. We begin by explaining why an objective biological determination of death fails. We continue by similarly rejecting current arguments offered in support of HBD, which rely on consciousness and/or personhood. In the final section, we explain and defend our functionalist view of death. Our definition centers on mental processing, both conscious and preconscious or unconscious. This view provides the philosophical basis of a functional definition that most accurately reflects the original spirit of brain death when first proposed in the Harvard criteria of 1968. abstract_id: PUBMED:1949843 The subject of death. On the current controversy about brain-oriented determination of death. Recently, new concepts of partial brain death have seriously challenged the well-established whole-brain definition of death. In the present paper, we propose a 4-level-model of death, which differentiates the levels of attribution, definition, criteria, and tests. It is argued that whole-brain concepts of death are susceptible to partial-brain oriented criticism, mainly because they do not provide a precise determination of the subject of death.
abstract_id: PUBMED:30584866 A Conceptual Justification for Brain Death. Among the old and new controversies over brain death, none is more fundamental than whether brain death is equivalent to the biological phenomenon of human death. Here, I defend this equivalency by offering a brief conceptual justification for this view of brain death, a subject that Andrew Huang and I recently analyzed elsewhere in greater detail. My defense of the concept of brain death has evolved since Bernard Gert, Charles Culver, and I first addressed it in 1981, a development that paralleled advances in intensive care unit treatment. The century-old concept of the organism as a whole provides the fundamental justification for the equivalency of brain death and human death. In our technological age, in which increasing numbers of components and systems of an organism can be kept alive, and for longer intervals, the permanent cessation of functioning of the organism as a whole is the phenomenon that best corresponds to its death. abstract_id: PUBMED:37380827 Abortion, Brain Death, and Coercion. A "universalist" policy on brain death holds that brain death is death, and neurologic criteria for death determination are rightly applied to all, without exemptions or opt outs. This essay argues that advocates of a universalist brain death policy defend the same sort of coercive control of end-of-life decision-making as "pro-life" advocates seek to achieve for reproductive decision-making, and both are grounded in an illiberal political philosophy. Those who recognize the serious flaws of this kind of public policy with respect to abortion must apply the same logic to brain death. abstract_id: PUBMED:26912933 A Thomistic defense of whole-brain death. Michel Accad critiques the currently accepted whole-brain criterion for determining the death of a human being from a Thomistic metaphysical perspective and, in so doing, raises objections to a particular argument defending the whole-brain criterion by Patrick Lee and Germain Grisez. In this paper, I will respond to Accad's critique of the whole-brain criterion and defend its continued validity as a criterion for determining when a human being's death has occurred in accord with Thomistic metaphysical principles. I will, however, join Accad in criticizing Lee and Grisez's proposed defense of the whole-brain criterion as potentially leading to erroneous conclusions regarding the determination of human death. Lay summary: Catholic physicians and bioethicists currently debate the legally accepted clinical standard for determining when a human being has died, known as the "whole-brain criterion," which has also been morally affirmed by the Magisterium. This paper responds to physician Michel Accad's critique of the whole-brain criterion based upon St. Thomas Aquinas's metaphysical account of human nature as a union of a rational soul and a material body. I defend the whole-brain criterion from the same Thomistic philosophical perspective, while agreeing with Accad's objection to an alternative Thomistic defense of whole-brain death by philosophers Patrick Lee and Germain Grisez. abstract_id: PUBMED:35844817 Brain death debates: from bioethics to philosophy of science. Fifty years after its introduction, brain death remains controversial among scholars. The debates focus on one question: is brain death a good criterion for determining death? This question has been answered from various perspectives: medical, metaphysical, ethical, and legal or political.
Most authors either defend the criterion as it is, propose some minor or major revisions, or advocate abandoning it and finding better solutions to the problems that brain death was intended to solve when it was introduced. Here I plead for a different approach that has been overlooked in the literature: the philosophy of science approach. Some scholars claim that human death is a matter of fact, a biological phenomenon whose occurrence can be determined empirically, based on science. We should take this claim seriously, whether we agree with it or not. The question is: how do we know that human death is a scientific matter of fact? Taking the philosophy of science approach means, among other things, examining how the determination of human death became an object of scientific inquiry, exploring the nature of the brain death criterion itself, and analysing the meaning of its core concepts such as "irreversibility" and "functions". abstract_id: PUBMED:31586180 The Organism as a Whole in an Analysis of Death. Although death statutes permitting physicians to declare brain death are relatively uniform throughout the United States, academic debate persists over the equivalency of human death and brain death. Alan Shewmon demonstrated that the formerly accepted integration rationale was conceptually incomplete by showing that brain-dead patients retain a degree of integration. We provide a more complete rationale for the equivalency of human death and brain death by defending a deeper understanding of the organism as a whole (OaaW) and by using a novel strategy with shared objectives to justify death determination criteria. Our OaaW account describes different types of OaaW, defining human death as the loss of status as a human OaaW. We defend human death as similar to nonhuman death in terms of wakefulness, but also distinct in terms of the sui generis properties, particularly conscious awareness. We thereby defend the equivalency of brain death and human death using a resulting neurocentric rationale. abstract_id: PUBMED:11653058 The metaphysics of brain death. The dominant conception of brain death as the death of the whole brain constitutes an unstable compromise between the view that a person ceases to exist when she irreversibly loses the capacity for consciousness and the view that a human organism dies only when it ceases to function in an integrated way. I argue that no single criterion of death captures the importance we attribute both to the loss of the capacity for consciousness and to the loss of functioning of the organism as a whole. This is because the person or self is one thing and the human organism is another. We require a separate account of death for each. Only if we systematically distinguish between persons and human organisms will we be able to provide plausible accounts both of the conditions of our ceasing to exist and of when it is that we begin to exist. This paper, in short, argues for a form of mind-body dualism and draws out some of its implications for various practical moral problems. abstract_id: PUBMED:35886508 Early Adolescents' Motivations to Defend Victims of Cyberbullying. The aim of this study was to investigate how different types of motivation to defend victims of bullying would be associated with various bystander behaviors in cyberbullying situations among early adolescents in Sweden. Data were collected from 460 Swedish adolescents aged between 11 and 15 years who completed a survey in their classroom.
Results showed that autonomous motivation to defend was positively associated with defender behavior and negatively associated with pro-bully and passive behavior, while extrinsic motivation was positively associated with pro-bully and passive behavior. Age was positively associated with increased passive behavior and dampened defensive behavior, while no effect of gender was found on defender behavior. Our findings suggest that students' autonomous motivation to defend victims is important in cyberbullying situations. Answer: The abstracts provided reflect a range of perspectives on the concept of brain death and its acceptance as a criterion for determining human death. Some propose new formulations or defend existing ones, while others critique the concept from various philosophical, ethical, and scientific angles. One abstract proposes a new formulation of death based on the mechanisms involved in consciousness generation, suggesting that consciousness is the most important function of the organism and that the structures throughout the whole brain interact to generate consciousness (PUBMED:12235573). Another abstract revives brain death by offering a functionalist view based on the permanent cessation of mental processing, both conscious and preconscious or unconscious, rather than relying on biology, consciousness, or personhood (PUBMED:23784534). A different perspective is presented in an abstract that proposes a 4-level-model of death, differentiating between levels of attribution, definition, criteria, and tests, and argues that whole-brain concepts of death are susceptible to criticism for not providing a precise determination of the subject of death (PUBMED:1949843). Another abstract provides a conceptual justification for brain death, arguing that the permanent cessation of functioning of the organism as a whole best corresponds to its death in our technological age (PUBMED:30584866). There is also an abstract that compares universalist policies on brain death to coercive control of end-of-life decision-making, suggesting that both are grounded in an illiberal political philosophy (PUBMED:37380827). A Thomistic defense of whole-brain death is presented in another abstract, which responds to critiques of the whole-brain criterion from a Thomistic metaphysical perspective (PUBMED:26912933). An abstract advocating for a philosophy of science approach to the brain death debates suggests examining how the determination of human death became an object of scientific inquiry and analyzing the meaning of core concepts such as "irreversibility" and "functions" (PUBMED:35844817). Another abstract defends the equivalency of human death and brain death by using a novel strategy with shared objectives to justify death determination criteria, focusing on the organism as a whole (PUBMED:31586180). Finally, an abstract discusses the metaphysics of brain death, arguing for a form of mind-body dualism and suggesting that we require separate accounts of death for persons and human organisms (PUBMED:11653058). In summary, while there is no consensus among the abstracts, there is a defense of a brain-oriented view of death, with various formulations and justifications provided. However, there are also critiques and alternative perspectives that challenge the adequacy of brain death as the sole criterion for determining human death.
Instruction: Are Inflammatory Biomarkers Increased in Varicose Vein Blood? Abstracts: abstract_id: PUBMED:27103338 Are Inflammatory Biomarkers Increased in Varicose Vein Blood? Objectives: To test for the presence of inflammatory biomarkers in blood taken from varicose veins versus antecubital blood of the same patient and compare this to levels in healthy controls. Methods: Using a multiplex biochip array method (Randox, United Kingdom), the interleukins (ILs) IL-1α, IL-1β, IL-2, IL-4, IL-6, IL-8, and IL-10; vascular endothelial growth factor; interferon γ; tumor necrosis factor α; monocyte chemotactic protein 1 (MCP-1); and epidermal growth factor were measured in citrated plasma samples drawn from the arms and legs of 24 patients with varicose veins and 24 controls. Results: Expressed as median (interquartile range) in pg/mL, leg samples from patients with varicose veins had significantly higher levels of IL-8 and MCP-1 compared to their own arm samples (IL-8: local 2.3 [1.71-3.3] vs systemic 2.3 [1.62-2.98], P = .023; MCP-1: local 114.42 [84.29-139.05] vs systemic 103.56 [79.75-126.42], P < .0005). This was not observed in the control group. Leg samples from both patients with varicose veins and controls had higher levels of IL-6 compared to their own arm samples (patients: local 1.67 [0.82-4.48] vs systemic 1.24 [0.58-3.26], P = .002; controls: local 1.23 [0.83-1.7] vs systemic 1.03 [1.7-1.52], P = .005). No significant differences were detected with the other biomarkers. Conclusions: Blood drawn from the site of varicose veins appears to have significantly increased concentrations of IL-6, IL-8, and MCP-1 when compared to the same patient's arm blood. This supports the hypothesis that inflammation is activated from the tissues drained by the varicose veins. abstract_id: PUBMED:25753870 Biomarkers of low-grade inflammation in primary varicose veins of the lower limbs. Objective: To analyze serum biomarkers of CVD in selected patients with primary axial reflux of the great saphenous vein in one or both lower limbs. Patients And Methods: Ninety-six patients affected by uncomplicated varicose veins were enrolled in the study. A unilateral, primary axial reflux in great saphenous veins was detected in 54 patients (U-CVD group) and a bilateral one in 42 (B-CVD group). Sixty-five age- and sex-matched subjects without venous reflux were enrolled as controls. Mean venous pressure of both lower limbs at the distal great saphenous vein (mGSVP) and venous reflux were measured by continuous-wave Doppler ultrasound and echoduplex scanning, respectively. Reactive Oxygen Species (ROS), tissue Plasminogen Activator (t-PA) and its Inhibitor 1 (PAI-1) activities, Hematocrit (HTC), White Blood Cells (WBC), Neutrophils (NEU), Platelets (PLT), Fibrinogen (FIB) and Blood Viscosity (BV) were assessed in blood samples drawn from the antecubital vein. Results: The B-CVD group showed higher fibrinogen values (p < 0.005) and higher mean venous pressure (p < 0.0001) in comparison to controls, while the U-CVD group did not. No difference was found between both groups and controls for all the other parameters. Conclusions: Increased fibrinogen levels in patients with bilateral varicose veins may represent an early warning signal, as they could be associated with the long-term progression of chronic venous disease. abstract_id: PUBMED:31636748 Appropriate Surgical Treatment of Symptomatic Primary Varicose Veins Decreases Systemic Inflammatory Biomarkers.
Objective: To evaluate the relationship between systemic inflammatory biomarkers and the efficacy of surgical treatment of primary varicose veins of the lower extremities. Methods: A total of 12 patients who underwent endovenous laser ablation or stripping of varicose veins and six healthy subjects were enrolled. Structural and molecular changes of varices were assessed by immunohistochemical staining with anti-monocyte chemotactic protein-1 (MCP-1). MCP-1 and interleukin-6 (IL-6) levels in systemic antecubital blood were measured before and at 12 weeks after treatment. Results: Immunohistochemical staining revealed prominent MCP-1-positive endothelial cells in the walls of varices. Preoperative serum MCP-1 and IL-6 levels in the patients were significantly higher than those in the control (166±12 pg/mL vs 99±10 pg/mL, p=0.003; 5.1±0.95 pg/mL vs 0.0±0.0 pg/mL, p=0.001, respectively). The values were significantly correlated with the severity of chronic venous insufficiency (CVI). Postoperative serum MCP-1 level significantly decreased compared with the preoperative level (152±10 pg/mL vs 166±12 pg/mL, p=0.048). The values after endovenous laser ablation did not significantly decrease compared with those after stripping. Conclusion: Varicose veins with CVI increase inflammatory biomarker levels in the local tissue and systemic blood. Appropriate treatment of symptomatic varicose veins decreases inflammatory biomarker levels. abstract_id: PUBMED:24115225 Novel inflammatory biomarkers of portal pressure in compensated cirrhosis patients. The rationale for screening inflammatory serum biomarkers of the hepatic vein pressure gradient (HVPG) is based on the fact that portal hypertension is pathogenically related to liver injury and fibrosis, and that in turn these are associated with the activation of inflammatory pathways. This was a nested cohort study in the setting of a randomized, clinical trial to assess the development of gastroesophageal varices (GEV) (N Engl J Med 2005;353:2254). Patients had cirrhosis and portal hypertension but did not have GEV. A total of 90 patients who had baseline day-1 sera available were enrolled in the present study. The objective of this study was to determine whether inflammatory biomarkers in conjunction with clinical parameters could be used to develop a predictive paradigm for HVPG. The correlations between HVPG and interleukin (IL)-1β (P=0.0052); IL-1R-α (P=0.0085); Fas-R (P=0.0354), and serum VCAM-1 (P=0.0007) were highly significant. By using multivariate logistic regression analysis and selected parameters (transforming growth factor beta [TGFβ]; heat shock protein [HSP]-70; at-risk alcohol use; and Child class B) we could exclude HVPG ≥ 12 mmHg with 86% accuracy (95% confidence interval [CI]: 67.78 to 96.16%) and the sensitivity was 87.01% (95% CI: 69.68 to 96.34%). Therefore, the composite test could identify 86% of compensated cirrhosis patients with HVPG below 12 mmHg and prevent unnecessary esophagogastroduodenoscopy with its associated morbidity and costs in these patients. Our diagnostic test was not efficient in predicting HVPG ≥ 12 mmHg. Conclusion: A blood test for HVPG could be performed in cirrhosis patients to prevent unnecessary esophagogastroduodenoscopy. abstract_id: PUBMED:26100448 Do blood constituents in varicose veins differ from the systemic blood constituents? Objective: Varicose veins represent one of the most frequent vascular diseases and are in most cases benign.
However, advanced disease is frequently associated with complications such as chronic venous insufficiency and superficial vein thrombosis. The pathogenic mechanisms are not well understood. Besides increased venous pressure, it is suggested that local blood constituents trigger various mechanisms responsible for the progression of the disease and its complications. Design: The aim of this study was to investigate the changes in the blood in varicose veins and to compare them with the systemic markers of inflammation and endothelial damage. Materials And Methods: Forty patients with primary varicose veins were included in the study. Most patients were class C2. Blood samples were taken from the leg from the tortuous and dilated varicose tributaries of the great saphenous vein and from the cubital vein. Results: The values of basic hematologic tests were comparable between blood samples (varicose vs. systemic). In varicose veins, the following parameters were significantly increased in comparison with systemic blood: hsCRP (3.12 ± 2.18 mg/L vs. 2.04 ± 2.21 mg/L, p = .04), IL-6 (3.54 ± 2.59 pg/mL vs. 2.25 ± 1.27 pg/mL, p = .008), vWF (118.4 ± 27% vs. 83.2 ± 22%, p < .05). D-dimer, in samples taken from the leg varicose veins, was also significantly higher than in the systemic blood (104.3 ± 9.3 ng/mL vs. 89.5 ± 8.3 ng/mL, p = .039). Conclusions: Some inflammatory markers and indicators of endothelial dysfunction are increased in varicose vein blood. This is most probably the consequence of deteriorated blood flow in dilated and tortuous superficial veins, and increased venous pressure. Damage to the venous wall, which causes a chronic inflammatory response, together with the procoagulant properties of local blood may promote further progression of the disease and thrombotic complications. abstract_id: PUBMED:32529904 Study of association of varicose veins and inflammation by inflammatory markers. Objective: In varicose veins, increased levels of inflammatory markers are indicators of endothelial damage and increased procoagulant activity. These findings support the assumption that the constitution of blood in varicose veins differs from that of systemic blood. The purpose was a correlative study of blood constituents in varicose veins and peripheral (normal) veins in the same individual, comparing the concentrations of interleukin-6, fibrinogen, and haemoglobin in blood from varicose veins and a normal peripheral vein (antecubital vein). Method: Citrated plasma samples were withdrawn from the arms and legs of the same patient; plasma obtained by centrifugation of citrated venous blood at 5000 r/min for 10 min was used for the correlation. Serum concentrations of interleukin-6 and fibrinogen were determined with human enzyme-linked immunosorbent assay kits for interleukin-6 and fibrinogen, based on the standard sandwich enzyme-linked immunosorbent assay technology. This assay employs a monoclonal antibody specific for human interleukin-6 coated on a 96-well plate. Result: Expressed as median (interquartile range) in pg/mL, leg samples from patients with varicose veins had significantly increased interleukin-6 compared to controls (p value of <0.001). Leg samples from patients with varicose veins had significantly increased fibrinogen concentrations compared with their arm samples (p value of <0.001). The concentration of haemoglobin was significantly increased in leg samples as compared to blood withdrawn from the arms (p value of 0.012).
Conclusion: Blood withdrawn from the site of varicose veins appears to have significantly increased concentrations of interleukin-6, fibrinogen and haemoglobin when compared to the same patient's antecubital blood sample, supporting the hypothesis that inflammation is increased in tissues drained by varicose veins. abstract_id: PUBMED:22642402 Markers of coagulation, fibrinolysis and inflammation in relation to post-thrombotic syndrome. Background: Post-thrombotic syndrome (PTS) occurs in 20-50% of patients after a deep venous thrombosis (DVT). It is difficult to accurately predict which patients will develop PTS. Biomarkers could be a valuable tool for PTS risk assessment. Objectives: To investigate whether increased levels of factor (F)VIII, C-reactive protein (CRP) or D-dimer, over time, are associated with the development of PTS in patients after an acute DVT. Methods: PTS status was assessed using the Villalta scale. Blood sampling was performed at three points during follow-up. Results: A cohort of 228 consecutive patients was included after an acute DVT. At T1 (12 months after index DVT), both levels of D-dimer (median 725 ng mL(-1) [interquartile range, IQR 400-1400] vs. 378 ng mL(-1) [251-652], P=0.004) and CRP (median 3.9 mg L(-1) [IQR 1.6-8.5] vs. 2.4 mg L(-1) [1.0-4.3], P=0.018) were increased in patients with PTS, compared with patients without PTS. Factor (F)VIII was not associated with PTS. In the multivariate logistic regression analysis, varicosities (odds ratio [OR] 13.4, 95% confidence interval [CI] 3.0-59.1, P=0.001), a previous ipsilateral DVT (OR 6.3, 95% CI 1.5-26.9, P=0.012) and CRP >5 mg L(-1) at T1 (OR 8.0, 95% CI 2.4-26.4, P=0.001) were significantly associated with PTS. Conclusions: Besides previous ipsilateral DVT and varicosities, CRP >5 mg L(-1) at T1 was strongly and independently associated with PTS. Persistent inflammation rather than hypercoagulability might be the most important etiological factor in PTS, and may be a target for future therapy. The development of a risk score for PTS, including both clinical risk factors and biomarker levels, such as CRP, might be desirable. abstract_id: PUBMED:32679199 Compression stockings attenuate the expression of proteins associated with vascular damage in human varicose veins. Objective: The objective of this study was to analyze whether compression stocking therapy in the human varicose vein wall may change the levels of biomarkers associated with vein insufficiency. Methods: Dilated collateral varicose vein samples were obtained from patients showing chronic venous disease (class 2 of the Clinical, Etiology, Anatomy, and Pathophysiology classification). Before elective surgery, 12 patients underwent compression stocking therapy (for 1 month) and 9 patients did not (control group). Expression levels of biomarkers associated with endothelial functionality (nitric oxide synthase 3), inflammation (interleukin-6, interleukin-10), oxidative stress (Gp91phox subunit of NADPH oxidase), and coagulation (factor Xa) were determined. P-selectin, an inflammatory and thrombosis-related biomarker, was also measured.
Results: Compression stockings increased the content of nitric oxide synthase 3 (control, 16.48 [16.04-17.40] AU; compression, 83.71 [67.70-91.85] AU; P < .001) in the varicose vein wall, which was accompanied by a reduction of both interleukin-6 levels (control, 38.72 [33.48-48.52] pg/μg protein; compression, 14.49 [11.05-17.41] pg/μg protein; P = .001) and the expression of the Gp91phox subunit of NADPH oxidase (control, 63.24 [53.79-77.03] AU; compression, 36.85 [35.66-52.27] AU; P < .010). P-selectin (control, 77.37 [61.86-85.00] AU; compression, 54.31 [49.60-67.50] AU; P = .017) and factor Xa (control, 90.78 [75.02-100.00] AU; compression, 14.50 [13.77-36.20] AU; P < .001) were also reduced in the varicose vein wall of compression stocking-treated patients. However, P-selectin lost its statistical significance after adjustment for dyslipidemia. Conclusions: In the varicose vein wall, compression stocking therapy improved the levels of biomarkers associated with endothelial functionality, inflammation, oxidative stress, and coagulation. abstract_id: PUBMED:26004513 Purinergic transmission in blood vessels. There are nineteen different receptor proteins for adenosine, adenine and uridine nucleotides, and nucleotide sugars, belonging to three families of G protein-coupled adenosine and P2Y receptors, and ionotropic P2X receptors. The majority are functionally expressed in blood vessels, as purinergic receptors in perivascular nerves, smooth muscle and endothelial cells, and roles in regulation of vascular contractility, immune function and growth have been identified. The endogenous ligands for purine receptors, ATP, ADP, UTP, UDP and adenosine, can be released from different cell types within the vasculature, as well as from circulating blood cells, including erythrocytes and platelets. Many purine receptors can be activated by two or more of the endogenous ligands. Further complexity arises because of interconversion between ligands, notably adenosine formation from the metabolism of ATP, leading to complex integrated responses through activation of different subtypes of purine receptors. The enzymes responsible for this conversion, ectonucleotidases, are present on the surface of smooth muscle and endothelial cells, and may be coreleased with neurotransmitters from nerves. What selectivity there is for the actions of purines/pyrimidines comes from differential expression of their receptors within the vasculature. P2X1 receptors mediate the vasocontractile actions of ATP released as a neurotransmitter with noradrenaline (NA) from sympathetic perivascular nerves, and are located on the vascular smooth muscle adjacent to the nerve varicosities, the sites of neurotransmitter release. The relative contribution of ATP and NA as functional cotransmitters varies with species, type and size of blood vessel, neuronal firing pattern, the tone/pressure of the blood vessel, and in ageing and disease. ATP is also a neurotransmitter in non-adrenergic non-cholinergic perivascular nerves and mediates vasorelaxation via smooth muscle P2Y-like receptors. ATP and adenosine can act as neuromodulators, with the most robust evidence being for prejunctional inhibition of neurotransmission via A1 adenosine receptors, but also prejunctional excitation and inhibition of neurotransmission via P2X and P2Y receptors, respectively.
P2Y2, P2Y4 and P2Y6 receptors expressed on the vascular smooth muscle are coupled to vasocontraction, and may have a role in pathophysiological conditions, when purines are released from damaged cells, or when there is damage to the protective barrier that is the endothelium. Adenosine is released during hypoxia to increase blood flow via vasodilator A2A and A2B receptors expressed on the endothelium and smooth muscle. ATP is released from endothelial cells during hypoxia and shear stress and can act at P2Y and P2X4 receptors expressed on the endothelium to increase local blood flow. Activation of endothelial purine receptors leads to the release of nitric oxide, hyperpolarising factors and prostacyclin, which inhibits platelet aggregation and thus ensures patent blood flow. Vascular purine receptors also regulate endothelial and smooth muscle growth, and inflammation, and thus are involved in the underlying processes of a number of cardiovascular diseases. abstract_id: PUBMED:28074870 MicroRNA-155 is upregulated in ascites in patients with spontaneous bacterial peritonitis. MicroRNAs (miRs) have been recognised as important modulators of gene expression and potential biomarkers. However, they have rarely been investigated in biofluids apart from blood. We investigated the association of miR-125b and miR-155 with complications of cirrhosis. Ascites was prospectively collected from patients with cirrhosis undergoing paracentesis at our department. miRs were determined in the supernatant using qPCR and normalized by SV-40. Clinical parameters were assessed at paracentesis and during follow-up. 76 specimens from 72 patients were analysed. miRs were not associated with age, sex or aetiology of cirrhosis. MiR-125b levels differed between patients with low and high MELD scores, and miR-125b levels showed an inverse correlation with serum creatinine (r2 = -0.23; p = 0.05). MiR-155 was elevated in patients with spontaneous bacterial peritonitis (SBP) (n = 10; p = 0.04). MiR-155 levels differed between patients with and without 30-day survival (p = 0.02). No association of ascites levels of the investigated miRs with size of varices, episodes of gastrointestinal bleeding or hepatorenal syndrome was observed. While miR-125b levels in ascites seem to be associated with liver and renal dysfunction, miR-155 might be implicated in the local immune response in SBP. Answer: Yes, inflammatory biomarkers are increased in varicose vein blood. Studies have shown that blood drawn from the site of varicose veins has significantly increased concentrations of certain inflammatory biomarkers when compared to the same patient's arm blood. Specifically, leg samples from patients with varicose veins had significantly higher levels of interleukin-8 (IL-8) and monocyte chemotactic protein 1 (MCP-1) compared to their own arm samples. This was not observed in the control group. Additionally, leg samples from both patients with varicose veins and controls had higher levels of interleukin-6 (IL-6) compared to their own arm samples (PUBMED:27103338). Other studies have also found increased levels of inflammatory markers such as fibrinogen in patients with bilateral varicose veins, which may represent an early warning signal for the progression of chronic venous disease (PUBMED:25753870).
Furthermore, surgical treatment of primary varicose veins has been shown to decrease systemic inflammatory biomarker levels, suggesting that varicose veins with chronic venous insufficiency (CVI) increase inflammatory biomarker levels in both the local tissue and systemic blood (PUBMED:31636748). Additional research indicates that some inflammatory markers and indicators of endothelial dysfunction, such as high-sensitivity C-reactive protein (hsCRP), IL-6, and von Willebrand factor (vWF), are increased in varicose vein blood. This is likely a consequence of deteriorated blood flow and increased venous pressure in dilated and tortuous superficial veins, which may promote further progression of the disease and thrombotic complications (PUBMED:26100448). Moreover, a study found that blood withdrawn from the site of varicose veins has significantly increased concentrations of interleukin-6, fibrinogen, and hemoglobin when compared to the same patient's antecubital blood sample, supporting the hypothesis that inflammation is increased in tissues drained by varicose veins (PUBMED:32529904).
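For readers who want to reproduce this kind of analysis, the recurring design in the studies above is a paired comparison of biomarker levels between the varicose (leg) vein and the antecubital (arm) vein of the same patient, summarized as median (interquartile range) and tested non-parametrically. The sketch below illustrates that workflow under stated assumptions: it assumes Python with numpy and scipy available, and all IL-6 values are invented for illustration, not taken from any of the cited studies.

```python
# Paired leg-vs-arm biomarker comparison, as in the varicose vein studies above:
# the same patient is sampled at two sites, so a paired (signed-rank) test is used.
import numpy as np
from scipy import stats

# Hypothetical IL-6 concentrations (pg/mL); one pair per patient.
il6_leg = np.array([1.9, 2.4, 3.1, 1.2, 4.8, 2.2, 1.7, 3.5])
il6_arm = np.array([1.3, 1.8, 2.6, 0.9, 3.9, 1.6, 1.5, 2.8])

def median_iqr(x):
    """Return 'median (Q1-Q3)' in the format the abstracts report."""
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    return f"{med:.2f} ({q1:.2f}-{q3:.2f})"

# Wilcoxon signed-rank test on the paired differences.
stat, p = stats.wilcoxon(il6_leg, il6_arm)
print(f"leg IL-6: {median_iqr(il6_leg)} pg/mL")
print(f"arm IL-6: {median_iqr(il6_arm)} pg/mL")
print(f"Wilcoxon signed-rank: W = {stat:.1f}, p = {p:.4f}")
```

The paired design matters: each patient serves as their own control, which removes between-patient variability that would otherwise swamp a comparison of two independent groups of this size.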
Instruction: Major cerebral events in Staphylococcus aureus infective endocarditis: is anticoagulant therapy safe? Abstracts: abstract_id: PUBMED:19690411 Major cerebral events in Staphylococcus aureus infective endocarditis: is anticoagulant therapy safe? Objectives: To study the impact of anticoagulation on major cerebral events in patients with left-sided Staphylococcus aureus infective endocarditis (IE). Methods: A prospective cohort study; the use of anticoagulation and its relation to major cerebral events were evaluated separately at onset of admission and during hospitalization. Results: Overall, 70 out of 175 patients (40%; 95% CI: 33-47%) experienced major cerebral events during the course of the disease: cerebral ischaemic stroke occurred in 59 patients (34%; 95% CI: 27-41%), cerebral infection in 23 patients (14%; 95% CI: 9-19%), and cerebral haemorrhage in 5 patients (3%; 95% CI: 0.5-6%). Patients receiving anticoagulation were less likely to have experienced a major cerebral event at the time of admission (15%) compared with those without anticoagulation (37%, p = 0.009; adjusted OR: 0.27; 95% CI: 0.075-0.96; p = 0.04). In-hospital mortality was 23% (95% CI: 17-29%), and there was no significant difference between those with or without anticoagulation. Conclusions: We found no increased risk of cerebral haemorrhage in S. aureus IE patients receiving anticoagulation. Anticoagulation was associated with a reduced risk of cerebral events before initiation of antibiotics. Data support the continuance of anticoagulation in S. aureus IE patients when indicated. abstract_id: PUBMED:34846922 Successful endovascular coiling of infectious cerebral aneurysm following Staphylococcus haemolyticus endocarditis. Recent reports suggest that Staphylococcus haemolyticus can cause infective endocarditis (IE). However, no data are available regarding infectious intracranial aneurysm (IIA) following S. haemolyticus endocarditis. Endovascular coiling is a challenging approach for the treatment of IIA. We describe the case of a 63-year-old woman who suddenly developed aphasia and dysarthria following an acute cerebral infarction in her left insular and temporal cortex. After a total hysterectomy at the age of 39, the patient had suffered from recurrent bacterial pyomyositis in her legs. At admission, there was no evidence of cerebral aneurysm, as assessed by magnetic resonance angiography, and no vegetation, as assessed by transesophageal echocardiography (TEE), resulting in an incorrect diagnosis. However, subarachnoid hemorrhage and development of cerebral aneurysm in the left middle cerebral artery occurred within 1 week of hospitalization. Continuous positive blood culture results and a second TEE finally revealed that IE was caused by S. haemolyticus. Coil embolization of the IIA was successful on day 26 after symptom onset; after this procedure, the patient began to recover. This case demonstrates that S. haemolyticus-induced endocarditis can cause IIA. Endovascular coiling is a potentially effective approach to treat IIA. abstract_id: PUBMED:8341834 Cerebral embolism as the first manifestation of Staphylococcus aureus endocarditis: the role of transesophageal echocardiography in the therapeutic decision. In patients with endocarditis due to Staphylococcus aureus, the issue of medical or surgical treatment still remains controversial. Likewise, there is no agreement on the management of those patients who have suffered cerebral embolism and show large vegetations on echocardiography.
Moreover, transthoracic echocardiography does not always disclose endocarditis vegetations as accurately as transesophageal echocardiography. We describe a patient with cerebral embolism due to Staphylococcus aureus endocarditis in whom adequate medical diagnosis and management were achieved by means of transesophageal echocardiography. abstract_id: PUBMED:19621620 Necrotising endocarditis of mitral valve due to Staphylococcus lugdunensis. Staphylococcus lugdunensis is an infrequent cause of infective endocarditis (IE) and usually involves native valves of the heart. It causes life-threatening events such as rupture of a cardiac valve or cerebral or pulmonary embolism due to necrosis of the endocardial tissue involved by the bacteria. Antibiotic therapy without cardiac surgery, or with delayed cardiac surgery, usually follows a fatal course in S. lugdunensis endocarditis. In this report, the first case of S. lugdunensis endocarditis from Turkey is presented. A 37-year-old man was admitted to the emergency department with a 2-week history of fever, chills and accompanying intermittent pain on the left side of the thorax. Other than recurrent folliculitis continuing for 20 years, his history was unremarkable. Echocardiography revealed vegetation on the mitral valve of the patient, and vancomycin plus gentamicin were initiated with the diagnosis of IE. All blood cultures (5 sets) taken on admission and within the initial 48 hours of the antibiotic therapy yielded S. lugdunensis. According to the susceptibility test results, the antibiotic therapy was switched to ampicillin-sulbactam plus rifampin. Blood cultures became negative after the third day of therapy; however, cardiac failure emerged due to rupture of the mitral valve and chordae tendineae on the 12th day of therapy. Cardiac surgery revealed that the mitral valve and the surrounding tissue were evidently necrotic and fragile; the anterior leaflet of the mitral valve was covered with vegetation, and the posterior leaflet and chordae tendineae were ruptured. The vegetation was removed and the destroyed mitral valve was replaced with a mechanical valve. Vegetation culture remained sterile; however, antibiotics were switched to vancomycin plus rifampin due to persistent fever on the 21st day of therapy (9th day after the operation). Fever resolved four days after the antibiotic switch. Antibiotics were stopped in the 9th week after admission and the patient was discharged. He had no problems in follow-up controls for one year. In conclusion, proper antibiotic therapy combined with early cardiac surgery seems to be the optimal therapeutic approach in IE caused by S. lugdunensis. abstract_id: PUBMED:3120665 Staphylococcus aureus endocarditis. Among 409 cases of bacterial endocarditis (BE) observed from 1972 to 1985, 142 were caused by Staphylococcus aureus. Of these 142 cases, 59 affected native valves of the left heart (left BE), 47 affected native valves of the right heart (right BE) (including 36 drug addicts), and 36 involved prosthetic valves (BEP) and were associated with mediastinitis in 11 cases. Symptoms were acute in 122 cases and subacute in 2 cases. 91 of the BE on native valves were primary (86%). Cutaneous manifestations were present in 38 cases (27%). Of the 67 patients who died (47%), 28/59 had left BE (47%), 7/47 had right BE (15%) (including 2 drug addicts) and 32/36 had BEP (86%); all differences were statistically significant.
Complications consisted of: heart failure in 78 cases (55%), including 40 cases of left BE, 8 cases of right BE and 30 cases of BEP; systemic peripheral embolism in 29 cases (left BE 17, BEP 12) and neurological accidents in 58 cases (left BE 34, right BE 24). Thirty of these accidents occurred before the 4th day (left BE 13, BEP 17). Documented neurological accidents included cerebral haemorrhage (13 cases), cerebral infarction (14 cases) and cerebral abscess (4 cases); 4 of the 12 patients who underwent arteriography were found to have one or several aneurysms. Thirty-nine of these 58 patients died, death being directly due to a neurological cause in 20 cases (left BE 10, right BE 10). (ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:31681776 Aspirin Effect on Staphylococcus aureus-Platelet Interactions During Infectious Endocarditis. Infectious endocarditis (IE) is a rare disease associated with high mortality and morbidity rates. The platelet-bacterial interaction represents the cornerstone of the development of endocardial vegetation. The epidemiology of IE has undergone profound changes between the last decade and the present one, with Staphylococcus aureus becoming the main incriminated species. Despite improvements in antibiotic and surgical therapies, embolic disorders remain highly associated with IE and can be fatal. Antiplatelet drugs have been widely proposed to overcome embolic events associated with IE. This proposal has been supported by numerous in vitro, experimental, and clinical studies. However, other studies have yielded conflicting results. In this review, we focus on the effect of aspirin on the genesis of S. aureus endocarditic vegetation, as well as on the management of embolic and hemorrhagic events related to it, starting with its influence on the platelet-bacteria interaction. abstract_id: PUBMED:34894129 Safety and Pharmacokinetics of Exebacase in an Infant With Disseminated Staphylococcus aureus Infection. Exebacase, an antistaphylococcal lysin produced from a bacteriophage-encoded gene, is a promising adjunctive therapy for severe methicillin-resistant Staphylococcus aureus infections. We describe the first infant to receive exebacase, along with its dosing and pharmacokinetics. Exebacase may be safe and efficacious in children; however, further clinical trials are needed to optimize dosing. abstract_id: PUBMED:19763060 A patient in whom survival was achieved by acute-stage surgery for infective endocarditis complicated by a cerebral hemorrhage. We report a patient who underwent mitral valve replacement despite an acute-stage cerebral hemorrhage related to a complication of infective endocarditis (IE) and rupture of an intracerebral mycotic aneurysm, achieving survival. The patient was a 24-year-old female. She consulted our hospital because of a fever and was diagnosed with IE and a cerebral hemorrhage. A blood culture test on admission revealed methicillin-sensitive Staphylococcus aureus. Transthoracic echocardiography revealed giant vegetation in the left atrium. Besides the management of the acute-stage cerebral hemorrhage, medical therapy for IE was performed. However, the severe infection was refractory, and mitral-regurgitation-related heart failure deteriorated. Despite the acute-stage cerebral hemorrhage, mitral valve replacement was performed 10 days after admission because advanced disseminated intravascular coagulation syndrome made conservative management difficult.
After surgery, neither exacerbation of cerebral symptoms nor recurrent IE was noted, and the patient was discharged. abstract_id: PUBMED:38487127 Daptomycin and Ceftaroline Combination Therapy in Complicated Endovascular Infections Caused by Methicillin-Resistant Staphylococcus epidermidis. Background: In complicated endovascular infections by methicillin-resistant Staphylococcus aureus (MRSA) or Staphylococcus epidermidis (MRSE), when first-line therapy with vancomycin (VAN) or daptomycin (DAP) fails, combination therapy with ceftaroline (CFT) and DAP has been shown to be a useful approach as salvage therapy for persistent MRSA bacteremia. Objectives: This study aimed to describe experience with daptomycin and ceftaroline combination therapy in MRSE-complicated endovascular infections. Methods: A single-center retrospective review of consecutive patients with MRSE-complicated endovascular infections treated with ≥72 hours of DAP+CFT at any time during the course of treatment, from January 1, 2016 to December 31, 2020, at Centro Hospitalar Universitário São João (CHUSJ), Porto, Portugal, was conducted. The exclusion criteria were known resistance to daptomycin or ceftaroline, total time of combination therapy <72 hours, and loss to follow-up. Results: We identified seven cases that matched our criteria: five endocarditis and two central venous catheter infections. Six patients switched to combination therapy due to treatment failure with first-line agents - three due to persistent bacteremia and three due to progression of infection despite negative blood cultures. Effective surgical source control took one to four weeks to occur. Three patients died during the treatment, one from progression of the disease and two due to another infection. Conclusions: We consider the DAP+CFT combination therapy to be a valid and safe therapeutic choice in complicated patients, such as those with severe infection, poor functional status, and impossibility or delay of surgical source control. However, conclusions on the role of combination therapy should be drawn cautiously given the low number of patients and several confounding factors. abstract_id: PUBMED:34596881 The Effect of Combination Therapy on Mortality and Adverse Events in Patients with Staphylococcus aureus Bacteraemia: A Systematic Review and Meta-analysis of Randomized Controlled Trials. Introduction: The findings of randomized controlled trials (RCTs), observational studies, and meta-analyses vary regarding the effectiveness and safety of combination therapy for patients with Staphylococcus aureus bacteraemia (SAB). We aimed to identify the effectiveness and safety of combination therapy in patients with SAB compared with those of monotherapy. Methods: We performed a systematic review and meta-analysis to compare combination therapy versus monotherapy in patients with SAB. Two authors independently searched PubMed, Embase, and the Cochrane Library of clinical trials until 17 February 2021. Any RCT comparing mortality or adverse events (AEs) of combination therapy versus monotherapy for patients with SAB was eligible. Summary risk ratios (RRs) and 95% confidence intervals (CIs) were evaluated using a random-effects model. The primary outcome was all-cause mortality at any time point. This meta-analysis is registered with the PROSPERO database (CRD42020188176) and reported according to PRISMA guidelines. Results: In total, 1906 articles were identified and screened, and 14 studies (2367 patients) were included in the meta-analysis.
There was no significant difference in the risk of all-cause mortality between the two groups (RR = 1.00; 95% CI 0.83-1.20; P = 0.99; I² = 0%). Similar results were obtained by subgroup analysis of mortality recording time, endocarditis, pathogen resistance, article publication time, number of patients, and adjuvant antibiotics. Notably, combination treatment might significantly increase the risk of drug-related AEs (RR = 1.68; 95% CI 1.06-2.66; P = 0.03; I² = 67%) and nephrotoxicity (RR = 2.30; 95% CI 1.68-3.16; P < 0.00001; I² = 0%), although the occurrences of AEs leading to treatment discontinuation and serious AEs were not significantly different between the two groups. Conclusions: The meta-analysis suggested that combination therapy could not reduce mortality but might increase the risk of drug-related AEs and nephrotoxicity and should be applied very cautiously. Future studies on combined drug therapy for SAB need careful and rigorous design for specific antibiotic combinations. Answer: The safety of anticoagulant therapy in patients with left-sided Staphylococcus aureus infective endocarditis (IE) who experience major cerebral events has been a subject of investigation. A prospective cohort study found that patients receiving anticoagulation were less likely to have experienced a major cerebral event at the time of admission compared with those without anticoagulation (15% vs. 37%, p = 0.009; adjusted OR: 0.27; 95% CI: 0.075-0.96; p = 0.04). Moreover, there was no significant difference in in-hospital mortality between those with or without anticoagulation, and no increased risk of cerebral hemorrhage was observed in patients receiving anticoagulation. These findings suggest that anticoagulation is associated with a reduced risk of cerebral events before the initiation of antibiotics and support the continuance of anticoagulation in S. aureus IE patients when indicated (PUBMED:19690411). However, it is important to note that the management of patients with S. aureus endocarditis can be complex, especially when complicated by cerebral events such as embolism or hemorrhage. The role of transesophageal echocardiography in the therapeutic decision is crucial, as it can accurately disclose endocarditis vegetations and guide medical or surgical treatment (PUBMED:8341834). In cases of cerebral hemorrhage related to IE, acute-stage surgery for infective endocarditis has been performed successfully, as reported in a patient with mitral valve replacement despite acute-stage cerebral hemorrhage (PUBMED:19763060). In summary, the evidence suggests that anticoagulant therapy may be safe in patients with S. aureus IE and could be beneficial in reducing the risk of cerebral events prior to antibiotic treatment. However, individual patient circumstances, including the presence of cerebral complications, should be carefully considered when making therapeutic decisions.
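For readers who want to see where figures like "adjusted OR: 0.27; 95% CI: 0.075-0.96" come from, the sketch below computes a crude (unadjusted) odds ratio with a Wald confidence interval from a 2x2 table. It assumes Python; the counts are invented, chosen only to be roughly consistent with the reported percentages (about 15% of anticoagulated and 37% of non-anticoagulated patients with an event at admission, in a cohort of 175), and a crude OR cannot reproduce the published covariate-adjusted estimate.

```python
# Crude odds ratio with 95% Wald CI from a 2x2 table.
# Rows: anticoagulated vs. not; columns: cerebral event at admission yes/no.
# Counts are hypothetical, chosen only to match the reported percentages.
import math

a, b = 4, 22    # anticoagulated: events, non-events (4/26 ~ 15%)
c, d = 55, 94   # not anticoagulated: events, non-events (55/149 ~ 37%)

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
log_or = math.log(odds_ratio)
ci_low = math.exp(log_or - 1.96 * se_log_or)
ci_high = math.exp(log_or + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")
# -> roughly OR 0.31, CI 0.10-0.95: close to, but not identical with, the
#    published adjusted OR of 0.27 (0.075-0.96), which accounts for covariates.
```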
Instruction: Do parental ratings on cognition reflect neuropsychological outcome in congenital heart disease? Abstracts: abstract_id: PUBMED:18053001 Do parental ratings on cognition reflect neuropsychological outcome in congenital heart disease? Aim: To describe the parental view of the cognitive skills of their child with a surgically corrected congenital heart disease (CHD) and compare it to objectified cognitive measures in children with CHD 6-12 years postoperatively. Methods: Parents completed a questionnaire on several cognitive functions of their child. Children with CHD and healthy controls (n=86, aged 8 years 8 months +/- 1 year 6 months) underwent abbreviated IQ testing and a neurodevelopmental assessment. Results: Parents of the children with CHD more frequently indicated lower sustained attention (p<0.05), lower divided attention (p<0.001), more problems with memory and learning skills (p<0.05), and deficient gross motor functioning (p<0.01) compared to the parents of healthy controls. Intellectual and neuropsychological assessment revealed a lower estimated full-scale IQ (p<0.01), worse sensorimotor functioning (p<0.001), and lower performances on language (p<0.001), attention/executive functioning (p<0.05), and memory (p<0.05) in the CHD group. Several items of the questionnaire were significant predictors for worse neurodevelopmental outcome. Conclusion: Overall, the objective and subjective measures on cognitive functioning are in agreement and indicate the presence of neurocognitive deficits in children with CHD. This study endorses the accuracy and usefulness of a parental questionnaire to report on the cognitive functioning of the child and urges the investigation of neurocognitive functioning in children with CHD at follow-up. abstract_id: PUBMED:35679183 Social cognition and competence in preschoolers with congenital heart disease. Objective: Children born with congenital heart disease (CHD) are at an increased risk for various neurodevelopmental impairments. However, little is known regarding social outcomes associated with CHD, particularly during early childhood. The present study aimed to characterize the sociocognitive profile and to assess the contribution of language, executive functions (EF), and social cognition to social competence (SC) in preschoolers with CHD. Method: Five-year-old children with CHD (n = 55) completed a standardized neuropsychological assessment. Performance on sociocognitive skills was compared to test norms using one-sample t tests. Hierarchical regression was conducted to examine the associations between language skills, affect recognition (AR), theory of mind (ToM), EF (performance-based and parent-rated), and social competence. Results: Children with CHD performed significantly worse than norms in language and ToM, whereas EF and social competence appeared generally preserved in our sample. In hierarchical regression analysis, cognitive functions (language score, AR, ToM, EF performance) accounted for a significant 24.3% of the variance. Parent-rated EF added another 24.8% to the total explained variance. Conclusions: These findings provide new evidence for understanding social cognition and competence among preschoolers with CHD, showing vulnerability in social cognition and language skills but not in social competence more generally. The model suggests a combined contribution of social cognition, language, and EF on social outcomes.
Remedial programs addressing these intervention targets could be useful in promoting social development in this vulnerable population. abstract_id: PUBMED:26210851 Neurodevelopmental outcome after surgery for acyanotic congenital heart disease. Background: Inconsistent results on neuropsychological outcome in patients treated for acyanotic congenital heart disease (aCHD) have questioned the clinical relevance of possible neurobehavioral sequelae in this group. This study was designed to objectify the neuropsychological profile and evaluate associations with medical data. Methods: Patients with a corrected atrial or ventricular septal defect, ASD-II or VSD (n=46; mean age 9 years, 2 months), and a matched control group were submitted to an intelligence test (Wechsler Intelligence Scale for Children, third edition, Dutch version) and evaluated with a neuropsychological test battery (Developmental Neuropsychological Assessment, second edition, Dutch version). Hospitalization variables were retrieved to evaluate associations with cognitive outcome. Parents completed a behavioral checklist (Achenbach Child Behavior Checklist for Children aged 6-18). Results: ASD-II patients showed lower scores in domains of visuospatial processing, language, attention, and social perception. VSD patients displayed subtle problems in attention and visuospatial information processing. Only a few perioperative medical factors, but also socioeconomic variables, were associated with cognitive outcomes. Parents of ASD-II patients reported more school problems when compared to controls. Conclusions: After treatment for aCHD, subtle cognitive difficulties can present in domains of visuospatial information processing, language, attention, and social perception. These shortcomings might hamper school performance, as is suggested by lower school competence ratings. Ongoing follow-up and cognitive screening are warranted to promote developmental progress, in which both parents and clinicians share responsibility. abstract_id: PUBMED:20660542 Neuropsychological status in children after repair of acyanotic congenital heart disease. Objectives: The majority of previous studies that described the neuropsychological effects of cardiopulmonary bypass (CPB) in children were performed after surgery in infancy for complex congenital heart disease (CHD). We sought to limit confounding variables and isolate potential independent effects of CPB by describing neuropsychological function in school-aged children after repair of acyanotic CHD. Methods: This was a prospective study of patients who were aged 5 to 18 years and undergoing repair of acyanotic CHD. The neuropsychological testing battery included assessment of intelligence, memory, motor, attention, executive function, and behavior before and 6 months after CPB. The independent effects of anesthesia, surgery, and hospitalization on neuropsychological function were assessed by testing a surgical control group of patients who were undergoing repair of pectus deformities. In addition, an outpatient group of children with mild CHD were enrolled to assess the practice effects of serial testing. Results: Patients included CPB (n = 35), surgical control (n = 19), and nonsurgical (n = 12). Groups were comparable in age, gender, and race and demonstrated similar unadjusted group mean scores on baseline and 6-month follow-up neuropsychological testing.
When adjusted for practice effects, the CPB group performed similarly to the non-CPB groups in all assessed neuropsychological domains, with the exception of 1 of 4 tests of executive function. Conclusions: When controlling for the non-CPB effects of surgery (eg, hospitalization, anesthesia, thoracotomy) and the practice effects of serial testing, there were no consistent independent effects of CPB on neuropsychological status in a cohort of children and adolescents 6 months after repair of acyanotic CHD. abstract_id: PUBMED:7928100 Neuropsychological and behavioral status of Chinese children with acyanotic congenital heart disease. Thirty-nine children aged 5 to 8 years (N = 27) and 9 to 14 years (N = 12) who were undergoing cardiac surgery for repair of ventricular or atrial septal defect, and 39 controls matched for age, educational level and social class, were administered the age-appropriate Chinese versions of the Wechsler Intelligence scales and the Halstead-Reitan Neuropsychological test batteries. Parents completed the Achenbach Behavioral Rating Scale. Surgical subjects underwent extracorporeal circulation and hypotensive anesthesia. The intellectual development of children with heart disease was significantly lower than that of normal controls and showed particular impairment of complex integrative functions of the brain. Parents of surgical subjects also rated their children as having greater behavioral disturbance than did parents of healthy children. Acyanotic heart disease was found to negatively impact both neuropsychological and behavioral functioning. abstract_id: PUBMED:31713990 Predictors of neuropsychological functioning and medication adherence in pediatric heart transplant recipients referred for neuropsychological evaluation. Children who undergo heart transplantation are at risk for long-term neurodevelopmental sequelae secondary to heart disease and its treatment. Detailed neuropsychological profiles in clinical samples after pediatric heart transplantation are sparse in the literature, and there is little information regarding predictors of neuropsychological functioning or how it relates to medication adherence in this population. The present study examined these questions in a retrospective analysis of 27 pediatric heart transplantation recipients referred for clinical neuropsychological evaluation. The sample demonstrated mild-to-moderate decrements across domains of neuropsychological functioning. Children with premorbid congenital heart disease performed more poorly in working memory, word reading, and parent-rated conceptual adaptive skills compared to children with premorbid cardiomyopathy. Additionally, a higher number of rejection episodes was related to poorer verbal memory. Children with parent-reported attention problems had better adherence to immunosuppressant medication, which may have represented greater caregiver involvement in medication management. Taken together, clinically referred children with a history of heart transplantation showed broad-based difficulties across neuropsychological domains according to formal testing and parent rating scales. This population requires routine neuropsychological monitoring and intervention. abstract_id: PUBMED:33105067 Early school-age cognitive performance post-pediatric heart transplantation.
Background: As survival in pediatric heart transplantation (HTx) has improved due to medical advances, the analysis of long-term outcomes impacting quality of life such as cognition and development becomes increasingly important. Neuropsychological assessments provide a comprehensive understanding of individual needs, allowing for the development of tailored recommendations and interventions. Methods: Routine neuropsychological assessment was completed between 5 and 7 years of age in this cohort of pediatric HTx recipients at our center (Jan 2014-Oct 2018), including tests of general intellect (WPPSI-IV, WISC-V), academics (WIAT-II/III), perceptual-motor abilities (Beery VMI), and memory (CMS). Relevant medical variables were collected. Results: Among 25 children, the median age at testing was 6.7 (IQR:5.8-7.4) years, with a median time since HTx of 5.2 (IQR:4.8-6.8) years. Medical diagnoses included congenital heart disease (CHD; 56%) and cardiomyopathy (44%). Cognitive functioning across the intellectual, academic, and perceptual-motor domains fell within the low-average range, while memory abilities fell within the average range. DSM-5 clinical diagnoses were provided for 14 (56%) children: Intellectual Disability-Mild (20%), Learning Disability (20%), Language Disorder (8%), and Attention-Deficit/Hyperactivity Disorder (12%). The presence of neurological issues and/or CHD predicted poorer performance on various neuropsychological domains. Conclusions: Over 50% of this cohort of pediatric heart transplant recipients seen for routine post-HTx neuropsychological assessment received a clinical psychological diagnosis, notably higher than rates in the general population. This population requires monitoring to ensure that high risk children are identified and successfully supported in school and their community. abstract_id: PUBMED:20456979 Neurocognitive function of pediatric heart transplant recipients. Background: Pediatric heart transplant recipients exhibit cognitive delays, as evident in assessments of their general intelligence. Less is known about their specific neurocognitive impairments. Methods: All 19 children in Finland aged 6 to 16 years who had undergone heart transplantation (HTx) participated. Of these, 12 (63%) had cardiomyopathy (CM) and 7 (37%) had congenital heart disease (CHD). They were assessed on average 5.5 (SD, 3.6) years post-operatively at a mean age of 12.0 (SD, 3.1) years. A standardized test of intelligence (Wechsler Intelligence Scale for Children [WISC]-III), a neuropsychological test battery (NEPSY-II), and a parental developmental questionnaire (FTF) were administered. The neuropsychological test profile of the HTx group was compared with that of a matched control group. Results: HTx children had a lower mean Performance Intelligence Quotient (PIQ; 82.2, p = 0.001) and Full-Scale IQ (FSIQ; 85.6, p = 0.004) compared with population norms. HTx children scored generally lower than the control group on the neuropsychological tests (p = 0.002). Seven patients with pre-HTx neurologic sequelae (n = 6) or extreme prematurity (n = 1) had lower mean FSIQ (72.1) than did children without major pre-HTx risk factors (93.5, p = 0.012). The latter group scored below average on only 1 of 6 WISC-III sub-tests and 2 of 10 NEPSY-II sub-tests, all measuring visuoconstructional performance. Conclusions: Children without major neurologic risk factors pre-HTx may have normal general intelligence after HTx but deficits in the visuoconstructional domain. 
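Before the remaining abstracts, a brief note on the statistic these transplant and CHD studies keep reporting: a cohort mean IQ tested against the Wechsler normative mean of 100 (SD 15), as in "lower mean Performance Intelligence Quotient (82.2, p = 0.001) compared with population norms". A minimal sketch, assuming Python with numpy and scipy and using invented scores:

```python
# One-sample comparison of a cohort's IQ scores against the normative mean.
import numpy as np
from scipy import stats

NORM_MEAN = 100  # Wechsler scales are standardized to mean 100, SD 15
iq_scores = np.array([78, 85, 91, 70, 88, 95, 74, 83, 80, 90])  # hypothetical

t_stat, p_value = stats.ttest_1samp(iq_scores, popmean=NORM_MEAN)
print(f"cohort mean = {iq_scores.mean():.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

A one-sample test like this only shows that the cohort differs from the norm; it cannot separate treatment effects from referral bias, which is why several of the studies above also include matched control groups.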
abstract_id: PUBMED:31200623 Neuropsychological Impairment in Children With Class 1 Congenital Heart Disease. Children with congenital heart disease (CHD) are at increased risk of neuropsychological impairments, but few studies are available on the nonlinguistic neuropsychological abilities of children with CHD. We conducted neuropsychological evaluations using the NEPSY II on a cohort of 17 school-age Italian children with diagnoses of Class 1 CHD, no genetic or chromosomal abnormalities, and normal intelligence scores, and we compared them with 34 matched controls. Children with CHD in this study had undergone at least one cardiac surgery with cardiopulmonary bypass support. We found that children with CHD scored significantly lower than their matched controls on memory and learning domain tasks, and they displayed subtle attention/executive dysfunctions and deficits in sensorimotor skills. abstract_id: PUBMED:34362206 Fetal Cardiac Services during the COVID-19 Pandemic: How Does It Affect Parental Counseling? The COVID-19 pandemic impacts health care providers in multiple ways, even specialties that do not seem to be affected primarily, such as fetal cardiac services. We aimed to assess the effects on parental counseling for fetal congenital heart disease (CHD). In this multicenter study, we used a validated questionnaire. Parents were recruited from four national tertiary medical care centers (n = 226); n = 169 had been counseled before and n = 57 during the pandemic. Overall counseling success including its dimensions did not differ between the two groups (p = n.s.). However, by applying the sorrow scale, we could demonstrate that parents counseled during the pandemic were significantly more concerned (p = 0.025) and unsure (p = 0.044) about their child's diagnosis, therapy and outcome. Furthermore, parents expressed a significantly increased need for written and/or online information on fetal heart disease (p = 0.034). Other modifiers did not affect counseling success (p = n.s.). We demonstrate that the COVID-19 pandemic impacts effectiveness of parental counseling for fetal CHD, possibly by altering parental perceptions. This needs to be taken into consideration when counseling. Implementing alternative and innovative approaches (e.g., online conference or virtual reality tools) may aid in facilitating high-quality services in critical times such as in the present pandemic. Answer: Yes, parental ratings on cognition do reflect neuropsychological outcomes in congenital heart disease (CHD). A study described the parental view of the cognitive skills of their child with surgically corrected CHD and compared it to objectified cognitive measures in children with CHD 6-12 years postoperatively. The results indicated that parents of children with CHD more frequently reported lower sustained attention, divided attention, problems with memory and learning skills, and deficient gross motor functioning. These parental observations were in agreement with the results from intellectual and neuropsychological assessments, which revealed a lower estimated full-scale IQ, worse sensorimotor functioning, and lower performances on language, attention/executive functioning, and memory in the CHD group. Several items of the parental questionnaire were significant predictors for worse neurodevelopmental outcomes, endorsing the accuracy and usefulness of parental questionnaires to report on the cognitive functioning of the child (PUBMED:18053001). 
Furthermore, other studies have shown that children with CHD can exhibit various neurodevelopmental impairments, including deficits in language, attention, executive functions, and social cognition, which can impact social competence. These findings suggest that parental observations and concerns regarding their child's cognitive and social functioning are indeed reflective of the neuropsychological challenges faced by children with CHD (PUBMED:35679183, PUBMED:26210851). In summary, parental ratings on cognition are consistent with the neuropsychological outcomes observed in children with CHD, highlighting the importance of considering parental input when assessing the cognitive and developmental status of children with CHD.
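As a closing note on this entry: the agreement claim in the answer is, statistically, a correlation between parent-reported ratings and performance-based scores. A minimal sketch of how such agreement could be checked, assuming Python with numpy and scipy; the rating scale, the score range, and all values are invented for illustration:

```python
# Agreement between parent ratings and measured performance, via Spearman rank
# correlation. Higher parent rating = more reported problems; higher measured
# score = better performance, so agreement shows up as a negative correlation.
import numpy as np
from scipy import stats

parent_rating = np.array([3, 1, 4, 2, 5, 2, 4, 1, 3, 5])             # hypothetical 1-5 scale
measured_score = np.array([45, 62, 38, 55, 30, 58, 41, 65, 50, 33])  # hypothetical test score

rho, p = stats.spearmanr(parent_rating, measured_score)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")
```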
Instruction: Is antibiotic prophylaxis mandatory after the insertion of levonorgestrel-releasing intrauterine system in order to decrease the risk of pelvic inflammatory disease? Abstracts: abstract_id: PUBMED:24868262 Is antibiotic prophylaxis mandatory after the insertion of levonorgestrel-releasing intrauterine system in order to decrease the risk of pelvic inflammatory disease? Objective: This study was undertaken in order to determine if antibiotic prophylaxis is mandatory after the insertion of the levonorgestrel-releasing intrauterine system in order to decrease the risk of pelvic inflammatory disease. Materials And Methods: We prospectively evaluated 44 patients, admitted to the Bucharest Emergency Hospital between the 1st of February 2012 and the 1st of October 2012, in whom the levonorgestrel-releasing intrauterine system was inserted. The patients enrolled were divided into two groups. In group A, 22 patients received, after the insertion of the levonorgestrel-releasing intrauterine system, 875 mg Amoxicillin Trihydrate + 125 mg Potassium Clavulanate, one dose every 12 hours for 5 days. Group B comprised the other 22 patients, who did not receive antibiotic prophylaxis. All patients were reevaluated at 4 and 12 weeks after the insertion of the levonorgestrel-releasing intrauterine system. Results: During the first 4 weeks after the insertion of the levonorgestrel-releasing intrauterine system, only two patients, one from group A and one from group B, were diagnosed with pelvic inflammatory disease. At a second follow-up visit, 12 weeks after the insertion of the levonorgestrel-releasing intrauterine system, no other patient was diagnosed with pelvic inflammatory disease. Conclusion: Antibiotic prophylaxis is not mandatory after the insertion of the levonorgestrel-releasing intrauterine system in order to decrease the risk of pelvic inflammatory disease. abstract_id: PUBMED:27032746 Canadian Contraception Consensus (Part 3 of 4): Chapter 7--Intrauterine Contraception. Objective: To provide guidelines for health care providers on the use of contraceptive methods to prevent pregnancy and on the promotion of healthy sexuality. Outcomes: Overall efficacy of cited contraceptive methods, assessing reduction in pregnancy rate, safety, ease of use, and side effects; the effect of cited contraceptive methods on sexual health and general well-being; and the relative cost and availability of cited contraceptive methods in Canada. Evidence: Published literature was retrieved through searches of Medline and The Cochrane Database from January 1994 to January 2015 using appropriate controlled vocabulary (e.g., contraception, sexuality, sexual health) and key words (e.g., contraception, family planning, hormonal contraception, emergency contraception). Results were restricted to systematic reviews, randomized controlled trials/controlled clinical trials, and observational studies published in English from January 1994 to January 2015. Searches were updated on a regular basis and incorporated in the guideline to June 2015. Grey (unpublished) literature was identified through searching the websites of health technology assessment and health technology-related agencies, clinical practice guideline collections, clinical trial registries, and national and international medical specialty societies. Values: The quality of the evidence in this document was rated using the criteria described in the Report of the Canadian Task Force on Preventive Health Care (Table 1).
CHAPTER 7: INTRAUTERINE CONTRACEPTION: Summary Statements: 1. Intrauterine contraceptives are as effective as permanent contraception methods. (II-2) 2. The use of levonorgestrel-releasing intrauterine system (LNG-IUS) 52 mg by patients taking tamoxifen is not associated with recurrence of breast cancer. (I) 3. Intrauterine contraceptives have a number of noncontraceptive benefits. The levonorgestrel-releasing intrauterine system (LNG-IUS) 52 mg significantly decreases menstrual blood loss (I) and dysmenorrhea. (II-2) Both the copper intrauterine device and the LNG-IUS significantly decrease the risk of endometrial cancer. (II-2) 4. The risk of uterine perforation decreases with inserter experience but is higher in postpartum and breastfeeding women. (II-2) 5. The risk of pelvic inflammatory disease (PID) is increased slightly in the first month after intrauterine contraceptive (IUC) insertion, but the absolute risk is low. Exposure to sexually transmitted infections and not the IUC itself is responsible for PID occurring after the first month of use. (II-2) 6. Nulliparity is not associated with an increased risk of intrauterine contraceptive expulsion. (II-2) 7. Ectopic pregnancy with an intrauterine contraceptive (IUC) is rare, but when a pregnancy occurs with an IUC in situ, it is an ectopic pregnancy in 15% to 50% of the cases. (II-2) 8. In women who conceive with an intrauterine contraceptive (IUC) in place, early IUC removal improves outcomes but does not entirely eliminate risks. (II-2) 9. Intrauterine contraceptives do not increase the risk of infertility. (II-2) 10. Immediate insertion of an intrauterine contraceptive (10 minutes postplacental to 48 hours) postpartum or post-Caesarean section is associated with a higher continuation rate compared with insertion at 6 weeks postpartum. (I) 11. Immediate insertion of an intrauterine contraceptive (IUC; 10 minutes postplacental to 48 hours) postpartum or post-Caesarean section is associated with a higher risk of expulsion. (I) The benefit of inserting an IUC immediately postpartum or post-Caesarean section outweighs the disadvantages of increased risk of perforation and expulsion. (II-C) 12. Insertion of an intrauterine contraceptive in breastfeeding women is associated with a higher risk of uterine perforation in the first postpartum year. (II-2) 13. Immediate insertion of an intrauterine contraceptive (IUC) post-abortion significantly reduces the risk of repeat abortion (II-2) and increases IUC continuation rates at 6 months. (I) 14. Antibiotic prophylaxis for intrauterine contraceptive insertion does not significantly reduce postinsertion pelvic infection. (I) RECOMMENDATIONS: 1. Health care professionals should be careful not to restrict access to intrauterine contraceptives (IUC) owing to theoretical or unproven risks. (III-A) Health care professionals should offer IUCs as a first-line method of contraception to both nulliparous and multiparous women. (II-2A) 2. In women seeking intrauterine contraception (IUC) and presenting with heavy menstrual bleeding and/or dysmenorrhea, health care professionals should consider the use of the levonorgestrel intrauterine system 52 mg over other IUCs. (I-A) 3. Patients with breast cancer taking tamoxifen may consider a levonorgestrel-releasing intrauterine system 52 mg after consultation with their oncologist. (I-A) 4. 
Women requesting a levonorgestrel-releasing intrauterine system or a copper intrauterine device should be counseled regarding changes in bleeding patterns, sexually transmitted infection risk, and duration of use. (III-A) 5. A health care professional should be reasonably certain that the woman is not pregnant prior to inserting an intrauterine contraceptive at any time during the menstrual cycle. (III-A) 6. Health care providers should consider inserting an intrauterine contraceptive immediately after an induced abortion rather than waiting for an interval insertion. (I-B) 7. In women who conceive with an intrauterine contraceptive (IUC) in place, the diagnosis of ectopic pregnancy should be excluded as early as possible. (II-2A) Once an ectopic pregnancy has been excluded, the IUC should be removed without an invasive procedure. The IUC may be removed at the time of a surgical termination. (II-2B) 8. In the case of pelvic inflammatory disease, it is not necessary to remove the intrauterine contraceptive unless there is no clinical improvement after 48 to 72 hours of appropriate antibiotic treatment. (II-2B) 9. Routine antibiotic prophylaxis for intrauterine contraceptive (IUC) insertion is not indicated. (I-B) Health care providers should perform sexually transmitted infection (STI) testing in women at high risk of STI at the time of IUC insertion. If the test is positive for chlamydia and/or gonorrhea, the woman should be appropriately treated postinsertion and the IUC can remain in situ. (II-2B) 10. Unscheduled bleeding in intrauterine contraception users, when persistent or associated with pelvic pain, should be investigated to rule out infection, pregnancy, gynecological pathology, expulsion or malposition. (III-A) abstract_id: PUBMED:16413845 Does insertion and use of an intrauterine device increase the risk of pelvic inflammatory disease among women with sexually transmitted infection? A systematic review. Concerns exist as to whether the insertion of copper and levonorgestrel-releasing intrauterine devices (IUDs) increases the risk of pelvic inflammatory disease (PID) among women with sexually transmitted infection (STI). We searched the MEDLINE database for all articles published between January 1966 and March 2005 that included evidence relevant to IUDs and STIs and PID. None of the studies that examined women with STIs compared the risk of PID between those with insertion or use of an IUD and those who had not received an IUD. We reviewed indirect evidence from six prospective studies that examined women with insertion of a copper IUD and compared risk of PID between those with STIs at the time of insertion with those with no STIs. These studies suggested that women with chlamydial infection or gonorrhea at the time of IUD insertion were at an increased risk of PID relative to women without infection. The absolute risk of PID was low for both groups (0-5% for those with STIs and 0-2% for those without). abstract_id: PUBMED:30981616 Streptococcal Toxic Shock Syndrome After Insertion of a Levonorgestrel Intrauterine Device. Background: The insertion of intrauterine devices (IUDs) is associated with a small increased risk of pelvic inflammatory disease and ascending infection, particularly in patients with risk factors. However, sepsis and toxic shock syndrome after insertion of an IUD are rare events, described only through case reports in the literature.
Case: This report describes the case of a 40-year-old woman who presented with high fever, myalgias, and abdominal pain 2 days following the insertion of a levonorgestrel-releasing IUD. She was found to have group A Streptococcus infection in the vagina that led to a diagnosis of streptococcal toxic shock syndrome and required admission to the intensive care unit. Conclusion: Toxic shock syndrome is a rare but potentially catastrophic complication after the insertion of an IUD. Health care providers should be conscious of this complication, particularly among patients presenting with fever and sepsis soon after IUD insertion. abstract_id: PUBMED:8968696 A risk-benefit assessment of the levonorgestrel-releasing intrauterine system. The levonorgestrel-releasing intrauterine system (LNG-IUS) has been developed by Leiras Pharmaceuticals, Turku, Finland. It is a new systemic hormonal contraceptive that releases levonorgestrel 20 micrograms every 24 hours. The device provides fertility control comparable with that of female sterilisation, complete reversibility and convenience, and has an excellent tolerability record. The low dosage of levonorgestrel released by its unique delivery system ensures minimal hormone-related systemic adverse effects, which tend to be in the category of 'nuisance' rather than hazardous, and gradually diminish after the first few months of use. In some respects, the contraceptive characteristics of the LNG-IUS have overshadowed a substantial range of noncontraceptive beneficial effects that are rarely seen with inert or copper-releasing intrauterine contraceptive devices (IUDs), and have important and positive gynaecological and public health implications. This applies particularly to the profound reduction in duration and quantity of menstrual bleeding, and alleviation of dysmenorrhoea, which are associated with the use of the device. Recent studies have shown that the LNG-IUS is effective in preventing endometrial proliferation associated with oral or transdermal estradiol therapy, and in inducing regression of endometrial hyperplasia. Further research is required to determine whether it has a role in regulating the growth of uterine fibroids, and preventing pelvic inflammatory disease. The unique unwanted noncontraceptive effects of the system, including possible development of functional ovarian cysts, and the relationship between menstrual bleeding pattern and ovarian function, also require better understanding, in order to offer appropriate patient counselling and maximise acceptability and continuation of use of the method. abstract_id: PUBMED:20556754 Immediate postabortal insertion of intrauterine devices. Background: Insertion of an intrauterine device (IUD) immediately after an abortion has several advantages. The woman is known not to be pregnant. Many clinicians refuse to insert an IUD in a woman who is not menstruating. After induced abortion, a woman's motivation to use contraception may be high. However, insertion of an IUD immediately after a pregnancy ends carries risks, such as spontaneous expulsion due to recent cervical dilation. Objectives: To assess the safety and efficacy of IUD insertion immediately after spontaneous or induced abortion. Search Strategy: We searched MEDLINE, CENTRAL, POPLINE, EMBASE, ClinicalTrials.gov, and ICTRP. We also contacted investigators to identify other trials.
Selection Criteria: We sought all randomized controlled trials with at least one treatment arm that involved IUD insertion immediately after an induced abortion or after curettage for spontaneous abortion. We identified 11 trials that described random assignment. Data Collection And Analysis: We evaluated the methodological quality of each report and abstracted the data. We focused on discontinuation rates for accidental pregnancy, perforation, expulsion, and pelvic inflammatory disease. We computed the weighted average of the rate ratios. We computed relative risks (RR) with 95% Confidence Intervals (CI). Main Results: Three trials randomized to immediate or delayed insertion. One showed no significant differences. Meta-analysis of two showed use of levonorgestrel-releasing intrauterine system or CuT380A was more likely for immediate versus delayed insertion (RR 1.18; 95% CI 1.08 to 1.28). Another trial randomized to the levonorgestrel IUD or Nova T; discontinuation rates due to pregnancy were 0.8 and 9.5, respectively. Sub-analysis showed higher expulsion rates for postabortal than interval insertions (levonorgestrel: 2.8 versus 6.8; Nova T: 3.0 versus 8.3). Seven trials examined immediate insertion. From meta-analysis of two multicenter trials, pregnancy was less likely for the TCu 220C versus the Lippes Loop (RR 0.38; 95% CI 0.20 to 0.72) as was expulsion (RR 0.51; 95% CI 0.30 to 0.88). Estimates for the TCu 220 versus the Copper 7 were 0.52 (95% CI 0.36 to 0.77) and 0.58 (95% CI 0.39 to 0.87), respectively. In other work, adding copper sleeves to the Lippes Loop improved efficacy (RR 3.82; 95% CI 1.41 to 10.36) and reduced expulsion (RR 3.37; 95% CI 1.65 to 6.90). Authors' Conclusions: Insertion of an IUD immediately after abortion is safe and practical. IUD expulsion rates appear higher than after interval insertions. However, IUD use is higher at six months with immediate than with interval insertion.
We entered the data into RevMan 3.1 for analysis of Peto odds ratios. Main Results: In large multicenter trials, the TCu 220C device proved superior to either the Lippes Loop D or the Copper 7 IUDs for immediate postabortal insertion. In single-center trials, the Nova T IUD had a significantly higher discontinuation rate for pregnancy than did the Multiload 250 (OR 4.5; 95% CI 1.0-19.8), while the Nova T had a significantly lower discontinuation rate for pregnancy than did the TCu 200 (OR 0.3; 95% CI 0.1-0.9). The levonorgestrel-releasing device was more effective in preventing pregnancy than was the Nova T. Only one trial compared immediate vs. delayed insertion. In this trial, the performance of the Copper 7 IUD inserted immediately after abortion was inferior to that after interval insertion (remote from pregnancy), although the differences were not statistically significant. Reviewers' Conclusions: Insertion of an IUD immediately after abortion is both safe and practical. IUD expulsion rates appear higher than after interval insertions. abstract_id: PUBMED:8489751 Hormonal intrauterine devices. Intrauterine administration of progestogens from an IUD was introduced more than 15 years ago in order to improve the contraceptive efficacy and to reduce side-effects. A device releasing 20 micrograms levonorgestrel daily (Levonova) with a life span of at least 5 years is now available and has been subjected to an extensive study including 1821 women followed for 5 years. The safety and efficacy were evaluated in comparison to Cu-T IUDs. The pregnancy rate (Pearl index) was 0.09. During the first two cycles after insertion intermenstrual bleeding was common; otherwise the frequency of side-effects was low. The results seem to justify the conclusion that intrauterine administration of progestogens represents a further development of IUDs and hormonal contraception. Furthermore, these devices have proven to be effective in the treatment of heavy menstrual blood loss. In many menorrhagic women, use of these IUDs can replace more invasive surgical methods such as hysterectomy or endometrial resection. Intrauterine administration of progestogens is also effective in opposing the proliferative effect of oestrogen on the endometrium in women on hormonal replacement therapy. abstract_id: PUBMED:9238300 Intrauterine contraception in adolescent women. The GyneFix intrauterine implant. Pregnancy rates among adolescents have not decreased over the last 10 years, despite numerous efforts. To solve this important health problem, the major strategy recommended is to encourage contraceptive use among sexually active teenagers. An important means of achieving this is to promote methods that are not dependent on daily administration, in order to avoid noncompliance. One such method (Norplant) has already been shown to be much more effective than the combination pill in preventing pregnancy in adolescent women. The frameless intrauterine implant system (fixed, frameless, and completely flexible) has been studied since 1985 in women between 14 and 50 years of age. The results in young nulligravid women confirm its very high effectiveness (cumulative pregnancy rate at 36 months: 1.4%), its low expulsion rate (cumulative rate at 36 months: 0.9%) and its optimal tolerance (cumulative removal rate for medical reasons at 36 months: 2.4%), resulting in high acceptance of the implant and high continued use.
The system (GyneFix) offers long-term protection (5 years), and its insertion, with or without anesthetic, is easily accomplished in the office. The GyneFix should therefore be recommended as an excellent alternative to birth control pills for young women with low risk for STDs, especially when compliance is a problem, without an increased risk for complications and without systemic side effects. Removal of the device is accomplished by traction on the tail. It can also be used for emergency contraception and for insertion immediately after termination of pregnancy. abstract_id: PUBMED:11293937 Levonorgestrel releasing intrauterine spiral--contraception and therapeutic indications. The levonorgestrel-releasing intrauterine system (LNG-IUS) is a highly reliable contraceptive device. It is a suitable method of contraception also for breast-feeding mothers, and for women who have completed their family it is a real alternative to sterilisation. In comparison with the copper-releasing intrauterine devices, the rates of ectopic pregnancies and adnexitis/pelvic peritonitis (PID) are reduced. In menstruating women the menstrual blood loss is lower and, correspondingly, dysmenorrhoea and/or iron-deficiency anaemia occur less frequently and when they do occur they are less severe. In hypermenorrhoea/menorrhagia the LNG-IUS is a therapeutic alternative to the surgical procedure. In the menopause and postmenopause the LNG-IUS is suitable for protection of the endometrium in women undergoing continuous estrogen replacement therapy. Answer: Antibiotic prophylaxis is not mandatory after the insertion of a levonorgestrel-releasing intrauterine system (LNG-IUS) to decrease the risk of pelvic inflammatory disease (PID). A study that prospectively evaluated 44 patients who had an LNG-IUS inserted found that only two patients, one who received antibiotic prophylaxis and one who did not, were diagnosed with PID during the first 4 weeks after insertion. No other patients were diagnosed with PID at the 12-week follow-up. This led to the conclusion that antibiotic prophylaxis is not mandatory after LNG-IUS insertion to decrease the risk of PID (PUBMED:24868262). Additionally, the Canadian Contraception Consensus states that routine antibiotic prophylaxis for intrauterine contraceptive insertion does not significantly reduce post-insertion pelvic infection, and health care providers should perform sexually transmitted infection testing in women at high risk of STI at the time of IUC insertion (PUBMED:27032746).
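A side note on the effect measures quoted in the abstracts above (e.g., RR 1.18; 95% CI 1.08 to 1.28 for continued IUD use after immediate versus delayed insertion): a relative risk and its confidence interval are derived from event counts on a log scale. The following minimal Python sketch shows the standard single-table calculation; the counts are hypothetical and not taken from the cited trials, and real pooled estimates (e.g., from RevMan) additionally weight across studies.

```python
import math

def relative_risk(events_a, total_a, events_b, total_b, z=1.96):
    """Relative risk of group A vs group B with a Wald 95% CI on the log scale.

    Hypothetical single 2x2 table only; meta-analyses pool study-level
    estimates with inverse-variance or Mantel-Haenszel weights.
    """
    risk_a = events_a / total_a
    risk_b = events_b / total_b
    rr = risk_a / risk_b
    # Standard error of log(RR) for a single 2x2 table
    se_log_rr = math.sqrt(1 / events_a - 1 / total_a + 1 / events_b - 1 / total_b)
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lower, upper

# Hypothetical example: 59 of 100 users continuing at six months after
# immediate insertion vs 50 of 100 after delayed insertion.
rr, lower, upper = relative_risk(59, 100, 50, 100)
print(f"RR {rr:.2f}; 95% CI {lower:.2f} to {upper:.2f}")
```

An interval that excludes 1, as in the pooled continuation estimate quoted above, is read as a statistically significant difference between the immediate- and delayed-insertion arms; the hypothetical counts here give the same RR but a wider interval, since a single small table carries less information than a pooled analysis.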
Instruction: Do diabetologists recognise self-management problems in their patients? Abstracts: abstract_id: PUBMED:15533583 Do diabetologists recognise self-management problems in their patients? Objective: The aim of this study was to determine whether diabetologists recognise patients' needs for additional intensive multidisciplinary care due to glycaemic and diabetes-related psychosocial difficulties. Research Design And Methods: We compared 114 participants in a diabetes intervention programme with 201 as yet non-referred outpatients, of whom 54 outpatients were considered eligible for the intervention by their diabetologists; thus, 147 outpatients were considered non-eligible. Results: Analysis revealed that the eligible patients had poorer glycaemic control but all other parameters were similar to non-eligible patients. Significantly, 22 (15%) of the 147 non-eligible patients clearly had diabetes distress and could potentially benefit from the intervention. Conclusion: The results suggest that in regular care, patients' needs with respect to glycaemic control are recognised by their diabetologists, but patients with high psychosocial diabetes-related distress are often overlooked, though they also may be in need of additional care. Integrated monitoring of diabetes-related distress in outpatients could improve this area of diabetes care. abstract_id: PUBMED:36641785 Self-management, self-care, and self-help in adolescents with emotional problems: a scoping review. This study aimed to review the existing published and grey literature describing the concepts of self-management, self-care, and self-help, and to capture strategies or techniques related to these concepts, for adolescents with emotional problems. Emotional problems are rising amongst adolescents, yet timely access to specialist mental health treatment is limited to those with greater severity of mental health difficulties. Self-management, self-care, and self-help strategies may be used by adolescents with emotional problems both while waiting for treatment and to prevent relapse. Given the overlap in existing definitions and the lack of clarity around these concepts in an adolescent mental health context, a scoping review of the literature is warranted to provide clarity. Eligible studies were those involving adolescents aged 10 to 19 years with symptoms of emotional problems. Studies referenced self-management, self-care, or self-help, not involving a professional, in this population. Quantitative, qualitative, economic, and mixed methods studies, as well as systematic, scoping, and literature reviews, from 2000 onwards and in the English language, were eligible for inclusion. A systematic search was conducted of both published and grey literature. Databases searched included PsycINFO, Medline, Embase, Web of Science, and CINAHL Plus. Mednar was also searched for unpublished studies and grey literature. Tables of themes, terms, and associated strategies are presented alongside a thematic analysis of the results. 62 articles were included. These were 20 quantitative studies, 14 systematic reviews, 10 qualitative studies, five review papers, four book chapters, four mixed methods studies, two dissertations, two meta-analyses and one scoping review and systematic review. Most of the included articles referenced self-help (n = 51), followed by self-management (n = 17) and self-care (n = 6).
A total of 12 themes were identified from a reflexive thematic analysis of descriptions (and associated strategies) of self-management, self-help, or self-care in included texts. This scoping review provides clarity on the similarities and differences between how these concepts are discussed, and the strategies which are associated with each of these concepts in the relevant literature. Implications for policy and intervention development for adolescents' self-management, self-help, and self-care of their mental health are discussed. There is considerable overlap in both the ways in which these concepts are described, and the strategies or approaches proposed in relation to them, supporting previous research suggesting these strategies should be grouped under a single term, such as "self or community approaches." More research is needed for self-management, self-help, and self-care amongst marginalized groups as these adolescents may have the highest unmet need for mental health support. abstract_id: PUBMED:35407634 Current Knowledge and Perceptions of Bariatric Surgery among Diabetologists and Internists in Poland. Perioperative care and follow-up after bariatric surgery (BS) engage various medical professionals. It is key for them to be well informed about these procedures. However, knowledge and attitudes may not be satisfactory enough to provide proper care. We aimed to assess knowledge and perceptions of BS among diabetologists and internists. A total of 34 diabetologists and 30 internists completed the electronic questionnaire. There were no differences in self-estimated knowledge between them, except regarding items related to the treatment of diabetes and metabolic control. Several misconceptions were identified in the questions testing the understanding of key issues in BS. Most participants considered BS effective in weight loss and metabolic control. A total of 75% highlighted the lack of appropriate equipment for dealing with morbidly obese patients. Interestingly, in a multivariable linear regression model, self-estimated knowledge was the only variable associated with frequency of referrals to bariatric surgeons. A total of 92% of respondents were interested in broadening their knowledge. Guidelines for long-term follow-up and funding were the most frequently chosen topics to explore. The study showed a positive attitude of diabetologists and internists towards surgical treatment of obesity and identified some significant gaps in knowledge. The results may be helpful in planning training to provide the best care for patients suffering from morbid obesity. abstract_id: PUBMED:35259705 Role of diabetologists in the management of nonalcoholic fatty liver disease: Primary prevention and screening/management of fibrosis and cirrhosis. Background And Aims: Nonalcoholic fatty liver disease (NAFLD) is a common condition, especially among individuals with type 2 diabetes (T2D). Presence of T2D increases the risk of progression of simple steatosis to more severe liver conditions, such as nonalcoholic steatohepatitis (NASH) and fibrosis (NASH-fibrosis). Since the majority of patients with T2D are managed by diabetologists (including physicians and endocrinologists), their roles in the management of coexisting NAFLD are not well defined, partly due to a lack of unambiguous guidelines.
Methods: A literature search was performed in the Medline (PubMed), Scopus and Google Scholar electronic databases up to January 2022, using relevant keywords (nonalcoholic fatty liver disease and diabetologist; screening of NASH; management of NASH) to extract relevant studies describing prevention and screening of NAFLD/NASH, especially in people with T2D. Results: Diabetologists have two main roles for the management of patients with T2D and coexisting NAFLD. The most important role is to prevent the development of NASH-fibrosis in patients with simple steatosis (primary prevention). This can be achieved by reinforcing the importance of lifestyle measures, and by early use of glucose-lowering agents with beneficial effects on the liver. The second important role of diabetologists is to screen all patients with T2D for liver fibrosis and compensated cirrhosis, and provide appropriate referral for timely management of complications (secondary prevention). Conclusion: Diabetologists can play a central role in mitigating the epidemic of NAFLD in individuals with T2D. However, diabetologists need to be aware of their roles in NASH-fibrosis prevention and screening. Furthermore, longitudinal studies should explore the role of newer glucose-lowering drugs in the primary prevention of NASH-fibrosis in individuals with coexisting T2D and simple steatosis. abstract_id: PUBMED:18360173 The impact of primary care patients' pain and emotional problems on their confidence with self-management. There is a paucity of information about confidence with self-management in primary care practice. This study examines changes over time in patient-reported confidence with self-management on the basis of 1047 patients aged 50-69 who had common chronic diseases, bothersome pain, or emotional problems. We examined the relationship between patients' self-reported confidence, their experiences of medical care, and health outcomes after adjustment for baseline characteristics. We observed that, over a 2-year period, about a third of the patients remained confident and a third remained not confident. Change in pain or emotional problems was strongly associated with whether a patient was confident or not at the end of the follow-up period (P < .001). Persistently good confidence or improved confidence was strongly associated with measures of high-quality medical care. For patients with diabetes, persistent confidence was more often associated with control of blood glucose (P = .004) than was the case for patients who were not as confident. Confident patients were likely to be fully engaged in everyday work and activities (P < .001). The results suggest that for the majority of patients in primary care practices, the status of their self-reported confidence with self-management persists over time. Their confidence is impacted by their pain or emotional state and strongly associated with their medical care experiences and some outcomes of care. abstract_id: PUBMED:37297685 The Development of Recommendations for Healthcare Providers to Support Patients Experiencing Medication Self-Management Problems. Medication self-management problems such as the inability to correctly obtain, understand, organize, administer or monitor medication can result in negative patient outcomes. However, supportive tools for healthcare providers to assist patients with medication self-management problems are lacking.
This study aimed to develop recommendations for healthcare providers to support patients with polypharmacy who experience medication self-management problems. A three-phase study was conducted starting with (1) the mapping of medication self-management problems, followed by (2) a scoping review providing a list of relevant interventions and actions for each respective problem and (3) a three-round modified e-Delphi study with experts to reach consensus on the relevance and clarity of the recommended interventions and actions. The cut-off for consensus on the relevance and clarity of the recommendations was set at 80% expert agreement. Experts could propose additional recommendations based on their professional experience and expertise. The experts (n = 23) involved were healthcare professionals (i.e., nurses, pharmacists, and physicians) with specific expertise in medication management of patients with polypharmacy. Simultaneously with the second e-Delphi round, a panel of patients with polypharmacy (n = 8) evaluated the usefulness of recommendations. Results obtained from the patient panel were fed back to the panel of healthcare providers in the third e-Delphi round. Descriptive statistics were used for data analysis. Twenty medication self-management problems were identified. Based on the scoping review, a list of 66 recommendations for healthcare providers to support patients with the identified medication self-management problems was composed. At the end of the three-round e-Delphi study, the expert panel reached consensus on the relevance and clarity of 67 recommendations, clustered according to the six phases of the medication self-management model by Bailey et al. In conclusion, this study resulted in a guidance document including recommendations that can serve as a resource for healthcare providers to support patients with polypharmacy in case of medication self-management problems. Future research should focus on the evaluation of the feasibility and user-friendliness of the guide with recommendations in clinical practice. abstract_id: PUBMED:23565448 Profile of vitamin D in a cohort of physicians and diabetologists in Kolkata. Introduction: Vitamin D deficiency has been documented across all age groups and both sexes from India. However, there is a paucity of data on vitamin D deficiency in particular cohorts of the population. Objectives: To assess the vitamin D status in a cohort of physicians and diabetologists in Kolkata. Material And Methods: An observational cross-sectional study was carried out in the month of December 2011 in a cohort of 40 physicians and diabetologists in Kolkata. Results: A total of 40 subjects were studied. Mean age of the cohort was 52.22 ± 10.91 years. Mean serum vitamin D level was 13.02 ± 4.77 ng/ml. Overall, 92.5% and 5.0% of subjects had vitamin D deficiency and insufficiency, respectively. Conclusions: Vitamin D deficiency is highly prevalent in physicians and diabetologists in Kolkata. abstract_id: PUBMED:38268261 Correlation of health literacy, health management self-efficacy, and attitude in elderly patients with schizophrenia: A cross-sectional study. Aim: The study investigated the relationship between chronic diseases and health problems, emotional health status, health literacy, health self-management attitude, and health management self-efficacy in older patients with schizophrenia. It is expected to provide multiple health perspectives in the care of older patients with schizophrenia and improve their health literacy and self-management.
Design: A cross-sectional study. Methods: Data were collected from September 2020 to October 2021 from patients diagnosed with schizophrenia who were over 50 years of age and living in residential mental rehabilitation institutions in eastern Taiwan. Data were collected on demographic variables, chronic diseases, health problems, health literacy, health management self-efficacy, and attitudes. Results: The mean age of participants was 61.49 years (SD = 6.57). Gender, health literacy, and health management self-efficacy together explained 18.5% of the variance in health self-management status (F(3,116) = 10.01, p < 0.001), with health literacy (β = 0.28, t = 3.07, p = 0.003) and health management self-efficacy (β = 0.19, t = 2.03, p = 0.04) as significant predictors. In the analysis of predictors, emotional health status and health literacy not only had significant explanatory power for health management self-efficacy and the performance of physical functions but were also important factors affecting quality of life among the elderly; health literacy was the most important factor influencing health management self-efficacy and health self-management behaviour. Therefore, health literacy is crucial to the health attitudes of older patients with schizophrenia. Conclusion: This study found that increased health literacy among older patients with schizophrenia can enhance their attitude toward and ability to manage their health. IMPLICATIONS FOR THE PROFESSION AND NO PATIENT OR PUBLIC CONTRIBUTION: Nursing measures targeting individual health literacy can help patients improve their health literacy, maintain the health of older patients with chronic schizophrenia, reduce disability, enhance self-efficacy and health self-management attitudes, and improve patients' health and quality of life. abstract_id: PUBMED:38268258 Self-management theories, models and frameworks in patients with chronic heart failure: A scoping review. Aim: The aim of this study was to synthesize the self-management theories, models and frameworks for patients with chronic heart failure, focusing on their construction process, methods and existing problems. Background: Although self-management theories have been created and verified for patients with chronic heart failure, no reviews have been performed to integrate these theories. Design: A scoping review of recent literature (without a date limit) was conducted. Methods: A comprehensive literature search was performed. If a study reported the construction of a self-management theory, model or framework for chronic heart failure cases, it was included in the review. Results: Fourteen studies were included, which could be categorized into situation-specific theory, middle-range theory and other theory models (including conceptual model, hypothetic regression model and identity description model). The included work also covers the update and validation of theories, the situation-specific theory of caregiver contributions extended from situation-specific theories, and the nurse-led situation-specific theory in different contexts. Conclusion: Self-management might contribute to starting an education programme before patients with chronic heart failure (CHF) begin to live with their chronic disease as individuals.
Our scoping review indicates that a series of self-management theories, models and frameworks for CHF patients have been developed, but more studies are still needed to validate and support these theories according to their cultural contexts. abstract_id: PUBMED:38192561 Instruments for measuring self-management and self-care in geriatric patients - a scoping review. Introduction: With demographic changes, prioritizing effective care for geriatric patients to maintain functionality, independence, and quality of life is crucial. Well-developed self-management or self-care abilities, which can be maintained and improved through interventions, are of the utmost importance. To implement these interventions in a tailored and effective manner, a thorough assessment of the individual's self-management and self-care abilities is required. Objective: This scoping review aimed to identify self-management and self-care instruments suitable for geriatric patients, their underlying theories and definitions of self-management and self-care, and their similarities and differences in item content. Methods: A systematic search of the PubMed and CINAHL databases was conducted to identify retrievable full-text articles published in English in the medical and nursing fields since the 1970s, which were validated on a sample with an average age of at least 70 years, used generic rather than disease-specific items, and addressed the broad range of self-management and self-care abilities. Results: Of the 20 included articles, six instruments were identified that were based on different theories and offered varying definitions of self-management or self-care. Despite these differences, all emphasize empowered individuals taking an active role in their care. Most address actual behavior and abilities referring to lifestyle factors and (anticipated) adjustment behavior. However, crucial aspects, such as psychological factors, (instrumental) activities of daily living, and social environment, are not fully addressed in these instruments, nor are the types of execution to which the items refer, such as wants, feelings, confidence, or attitudes. Conclusion: To fully understand how geriatric patients implement daily self-management or self-care, a combination of instruments covering the important factors of self-management and self-care and addressing multiple types of item execution, such as behaviors, abilities, wants, or attitudes, is recommended. This review provides the first comprehensive overview of self-management and self-care instruments suitable for geriatric patients. Answer: Diabetologists do recognize some self-management problems in their patients, particularly those related to glycemic control. A study aimed at determining whether diabetologists recognize patients' needs for additional intensive multidisciplinary care due to glycaemic and diabetes-related psychosocial difficulties found that diabetologists were able to identify eligible patients who had poorer glycaemic control. However, the same study also revealed that diabetologists often overlooked patients with high psychosocial diabetes-related distress, even though these patients could potentially benefit from additional care. This suggests that while diabetologists are attuned to issues of glycaemic control, they may not always recognize other self-management problems, particularly those related to psychosocial distress (PUBMED:15533583).
The recognition of self-management problems by diabetologists is crucial because these problems can have significant impacts on patients' health outcomes. For instance, primary care patients' pain and emotional problems can affect their confidence with self-management, which in turn is associated with their medical care experiences and some outcomes of care, such as control of blood glucose in patients with diabetes (PUBMED:18360173). To support patients experiencing medication self-management problems, recommendations for healthcare providers have been developed. These recommendations are designed to assist patients with polypharmacy, indicating an awareness of the importance of supporting self-management in patient care (PUBMED:37297685). In conclusion, while diabetologists do recognize some self-management problems in their patients, particularly those related to glycemic control, there is evidence that they may not always identify problems related to psychosocial distress. There is a need for integrated monitoring of diabetes-related distress in outpatient care to improve this aspect of diabetes care (PUBMED:15533583). Additionally, the development of recommendations for healthcare providers to support patients with medication self-management problems suggests a growing recognition of the broader range of self-management issues that patients may face (PUBMED:37297685).
Instruction: Is systemic right ventricular function by cardiac MRI related to the degree of tricuspid regurgitation in congenitally corrected transposition of the great arteries? Abstracts: abstract_id: PUBMED:24814545 Is systemic right ventricular function by cardiac MRI related to the degree of tricuspid regurgitation in congenitally corrected transposition of the great arteries? Background And Methods: Systemic right ventricular dysfunction and tricuspid regurgitation (TR) are frequently encountered in patients with congenitally corrected transposition of the great arteries (CCTGA). Studies using echocardiography have suggested a relationship between the degree of TR and systemic right ventricular dysfunction; however, assessment of systemic right ventricular function by echocardiography is limited. Cardiac MRI (CMR) is the gold standard for volumetric assessment of the systemic right ventricle. We performed a retrospective cohort study at our center evaluating all adult patients with CCTGA who underwent a CMR between 1/1999 and 1/2013 to determine the relationship between the degree of TR and systemic right ventricular function. Results: Of the 33 patients identified, 12 had ≤ mild TR (37%), 13 had moderate TR (40%), and 8 had severe TR (24%). Mean age at CMR was 38 years (23-64). Mean right ventricular ejection fraction (45% vs. 41% vs. 42%, p=0.68) and mean indexed right ventricular end diastolic volume (122 ml/m(2) vs. 136 ml/m(2) vs. 138 ml/m(2), p=0.36) were not significantly different for patients with ≤ mild TR, moderate TR or severe TR. The degree of TR was not associated with additional congenital lesions, prior procedures, presence of an intraventricular conduction delay, or decreased left ventricular function. Conclusion: No association between the degree of TR and right ventricular volume or ejection fraction by CMR was identified. Failure to show worsening function or increased volume with greater degrees of TR suggests that the degree of regurgitation alone may not fully explain the heterogeneity in right ventricular size and function. abstract_id: PUBMED:23377901 Systemic right ventricles rarely show myocardial scars in cardiac magnetic resonance delayed-enhancement imaging. Objectives: To test the hypothesis that myocardial scars cause systolic dysfunction in patients with transposition of the great arteries and a systemic right ventricle. Methods: We retrospectively analyzed 20 consecutive patients (10 male, mean age 27.3 years) with a systemic right ventricle who underwent cardiac magnetic resonance imaging with 1.5 T. Cine steady-state free-precession sequences were performed to obtain volumes and function. Phase-sensitive inversion-recovery (PSIR) delayed-enhancement imaging was performed to detect myocardial scars. Tricuspid insufficiency was detected with echocardiography. Furthermore, the presence of arrhythmias and New York Heart Association (NYHA) class were assessed. Results: Mean ejection fraction of systemic right ventricles was 43 ± 11 %, mean end-diastolic volume index was 111 ± 37 ml/m(2). Delayed-enhancement imaging revealed only one myocardial scar in the wall of a right ventricular aneurysm. All patients but one (95 %) presented with tricuspid insufficiency. Clinically relevant arrhythmias were present in 13/20 patients (65 %). The majority of patients (90 %) were NYHA class I or II. Arrhythmias, tricuspid insufficiency and NYHA class were not associated with right ventricular ejection fraction.
Conclusions: Although right ventricular function was clearly impaired in our patient cohort, there was only one myocardial scar. Our results show that myocardial scarring assessed by PSIR delayed-enhancement imaging is not the underlying pathology of systemic right ventricular failure. abstract_id: PUBMED:27542349 Echocardiography vs magnetic resonance imaging in assessing ventricular function and systemic atrioventricular valve status in adults with congenitally corrected transposition of the great arteries. Background: The survival in adults with congenitally corrected transposition of the great arteries (ccTGA) might be reduced due to dysfunction of the systemic right ventricle (sRV). The quantitative echocardiographic assessment of sRV function and tricuspid (systemic atrioventricular valve) regurgitation (TR) is still a diagnostic challenge. Thus, the aim of this study was to compare echocardiographic indices of sRV function and the degree of TR with corresponding MRI (magnetic resonance imaging)-derived parameters in adults with ccTGA. Methods: A prospective cross-sectional study of adults with ccTGA referred to a tertiary congenital heart disease center was conducted. All patients underwent transthoracic echocardiography and MRI examinations. Thirty-three adults (19F/14M, mean age 34.1 years) were included. Results: We found significantly lower fractional area change (FAC) and global longitudinal strain (GLS) values in patients with MRI-derived RV ejection fraction (EF) <45%. A cutoff GLS < -16.3% identified sRV EF ≥45% with a sensitivity of 77.3% and specificity of 72.7%. A very strong correlation between MRI- and echocardiography-derived TR volume was observed (r=.84; P < .0001). Conclusions: GLS is the variable with the best sensitivity but less specificity to distinguish between systemic RV EF ≥45% and below 45%, and it seems to be the preferred echocardiographic index of systemic RV function in adults with ccTGA. The quantitative assessment of TR by MRI and echocardiography showed a very strong agreement in patients with ccTGA. abstract_id: PUBMED:16087087 Management of the failing systemic right ventricle. Conditions in which the right ventricle serves as the systemic pumping chamber are frequently complicated by the development of right ventricular failure and tricuspid valve regurgitation. The right ventricle is the systemic ventricle in conditions of ventriculoarterial discordance with atrioventricular concordance (transposition of the great arteries) or with atrioventricular discordance (congenitally corrected transposition of the great arteries). Concerns regarding actual or potential systemic right ventricular failure in these cases may lead to surgical evaluation and treatment designed to reestablish the left ventricle as the systemic pump. In cases where the left ventricle has prolonged exposure to low pressures in the pulmonary circulation, the left ventricle must be "retrained" to assume a systemic pressure load. Anatomic repair, with or without a preparatory period of left ventricular retraining, is a consideration for three clinically relevant scenarios: (1) patients with transposition of the great arteries after an atrial level switch (Senning or Mustard procedure), (2) patients with congenitally corrected transposition who are unoperated or who have undergone physiologic ("classic") repair, and (3) unoperated patients with transposition who present after the neonatal period.
abstract_id: PUBMED:31286415 The impact of pulmonary artery banding and cardiac resynchronization therapy for the adult patient with congenitally corrected transposition of the great arteries. It has been controversial how to manage severe dysfunction of the systemic right ventricle and severe tricuspid regurgitation for adult patients with congenitally corrected transposition of the great arteries. We reported a 43-year-old man with these disorders, who received pulmonary artery banding and cardiac resynchronization therapy without cardiopulmonary bypass. The degree of the tricuspid regurgitation was improved to mild-moderate and systemic right ventricular ejection fraction was improved from 28% to 45% after surgery. Cardiopulmonary exercise testing showed that the predictive value of anaerobic threshold and peak oxygen uptake improved from 65% to 99% and from 59% to 92%, respectively. In conclusion, pulmonary artery banding and cardiac resynchronization therapy can be the first choice of surgery for severe tricuspid regurgitation with severely impaired systemic right ventricular function. After improving systemic right ventricular dysfunction, tricuspid surgery could be the next choice as a surgical intervention in the future. abstract_id: PUBMED:30833832 Clinical Evaluation of Exercise Capacity in Adults with Systemic Right Ventricle. The right ventricle provides systemic circulation in individuals with congenitally corrected transposition of the great arteries (CCTGA) and in those with complete transposition who have had an atrial switch repair (DTGA). The aim of this study was to evaluate how the systemic right ventricle adapts to increased workload and oxygen demand during exercise. From November 2005 through December 2015, 3,358 adult patients with congenital heart disease were treated at our institution; we identified 48 (26 females, 22 males; median age, 25.4 ± 8.1 yr) who met the study criteria; 37 had DTGA and atrial switch repair, and 11 had CCTGA. We studied their echocardiographic and cardiopulmonary exercise test results. A control group consisted of 29 healthy sex- and age-matched volunteers. On exercise testing, oxygen uptake at anaerobic threshold, peak oxygen uptake, peak heart rate, and percentage of maximal heart rate were significantly lower in the group with systemic right ventricle than in the control group (all P < 0.001); in contrast, the peak ventilatory equivalent for carbon dioxide was higher in the study group (P=0.013). Impaired systemic right ventricular function reduced peak oxygen uptake. The peak heart rate was lower in the CCTGA group than in the DTGA group. Our results indicate that reduced exercise capacity is related to impaired systemic right ventricular function, severe tricuspid valve regurgitation, and chronotropic incompetence. There was no correlation between cardiopulmonary exercise test results and time after surgery. Chronotropic efficiency is lower in individuals with CCTGA than in those with DTGA. abstract_id: PUBMED:22572412 Preservation of systemic tricuspid valve function by pulmonary conduit banding in a patient with corrected transposition of the great arteries. Systemic tricuspid valve regurgitation increases mortality and morbidity in patients with a corrected transposition of the great arteries. A 17-year-old male with a physiologically corrected transposition after the closure of a ventricular septal defect and conduit placement between a morphological left ventricle and pulmonary artery presented with exertional dyspnoea.
Transthoracic echocardiography showed a severe conduit stenosis, and cardiac catheterization revealed a pressure gradient of 114 mmHg. The patient underwent conduit re-replacement using a pulmonary heterograft. Intraoperative transoesophageal echocardiography revealed acute severe tricuspid regurgitation after conduit re-replacement. Pulmonary conduit banding was performed under transoesophageal echocardiography guidance, during which the left ventricular to right ventricular pressure ratio increased from 0.33 to 0.60 and the degree of tricuspid regurgitation decreased mildly. The patient was discharged uneventfully on postoperative day 16. Conduit banding might be a useful technique to preserve the systemic tricuspid valve function during conduit re-replacement in patients with a corrected transposition. abstract_id: PUBMED:31666144 In congenitally corrected transposition of great arteries following Fontan surgery, a left ventricle with suprasystemic systolic and high end-diastolic pressures paradoxically preserves right ventricular and tricuspid valve function. Systemic right ventricular function in congenitally corrected transposition depends on septal geometry. Suprasystemic left ventricular systolic pressures and high end-diastolic pressures after Fontan surgery paradoxically preserve right ventricular function. abstract_id: PUBMED:23350927 Predicting functional capacity in patients with a systemic right ventricle: subjective patient self-assessment is better than B-type natriuretic peptide levels and right ventricular systolic function. Background: Many adults with transposition of the great arteries have an anatomic right ventricle functioning as the systemic ventricle and are known to be at risk of congestive heart failure, premature cardiac death, and the need for cardiac transplantation. Predictors of poor clinical outcome and functional status in patients with left ventricular failure do not always apply to these patients. We aimed to identify predictors of poor functional status in those patients with a systemic right ventricle. Methods: We performed a prospective study of 51 adults with transposition of the great arteries and systemic right ventricles. Demographic, clinical, laboratory, and imaging data were collected, and patients completed a Minnesota Living with Heart Failure Questionnaire (MLHFQ). Comparisons were made between those patients with d-type transposition of the great arteries (dTGA) who have undergone prior atrial switch and those with congenitally corrected transposition (ccTGA). A correlation analysis was performed to identify predictors of poor functional status, as determined by a 6-minute walk distance test. Results: Median age was 30 years (range 19-65). Median B-type natriuretic peptide was 48 pg/mL (range 16-406). There were 27 patients (53%) with moderate-severe right ventricular dysfunction and 10 (20%) with moderate-severe tricuspid valve regurgitation. The median MLHFQ score was 9 (range 0-78) and 6-minute walk test was 510 m (range 231-703). Forty-one patients had a diagnosis of dTGA atrial switch and 11 patients had ccTGA. Patients with ccTGA were significantly older (40 vs. 28 years, P =.004) and had more tricuspid valve regurgitation (P =.02). Despite this, their MLHFQ scores were significantly lower (2.5 vs. 17, P =.04) and they walked further (635 vs. 504 m, P =.02). Predictors of a short 6-minute walk distance included short stature (P =.009) and dTGA (P =.002).
The patient's self-assessment of poor health, as measured by an increased New York Heart Association class (P =.003) and a decreased MLHFQ score (P < .0001), also correlated. B-type natriuretic peptide levels, right ventricular dysfunction, severity of tricuspid valve regurgitation, need for pacemaker, and clinical signs of heart failure did not correlate with exercise tolerance. Conclusions: Traditional parameters used to predict outcomes in patients with left ventricular failure are not predictive in patients with a systemic right ventricle. Instead, patients' self-assessment of functional status did correlate with objective functional status. abstract_id: PUBMED:34895701 Ventricular Function in Physiologically Repaired and Unrepaired Congenitally Corrected Transposition of the Great Arteries. In patients with congenitally corrected transposition of the great arteries (ccTGA) and hemodynamically significant concomitant lesions, physiologic repair may be undertaken, in which the circulation is septated but the morphologic right ventricle (RV) remains the systemic ventricle. Patients without significant concomitant lesions may be observed without surgery, with a similar physiologic result. We compared cardiovascular magnetic resonance measures of ventricular size and function in patients with physiologically repaired and unrepaired ccTGA. Patients with ccTGA who underwent cardiovascular magnetic resonance at our center between September 2007 and July 2019 were analyzed. In 38 patients identified (12, physiologically repaired; 26, unrepaired; mean age 34.5 [18.7 to 52.0] years), there was a higher proportion of RV ejection fraction ≤45% in physiologically repaired (75% vs unrepaired 35%, p = 0.02). Physiologically repaired patients had worse left ventricle global longitudinal strain (-14.9% ± 5.0% vs unrepaired patients -18.4% ± 2.7%, p = 0.04). The difference in tricuspid regurgitant fraction between groups did not achieve statistical significance (physiologically repaired 27.4 ± 11.1% vs unrepaired patients 19.2 ± 13.0%, p = 0.08). Evaluation for late gadolinium enhancement was more commonly undertaken in physiologically repaired patients (8 of 12 vs unrepaired 7 of 26, p = 0.03) and present more frequently in the left ventricle in physiologically repaired patients among those evaluated (6 of 8 vs unrepaired 0 of 7, p = 0.01). In conclusion, ventricular function is decreased in patients with ccTGA undergoing physiologic repair compared with those without previous surgery. These cohorts should be considered separately when using ventricular function as an outcome. RV dysfunction is concerning for long-term outcomes following physiologic repair. Answer: No, systemic right ventricular function as assessed by cardiac MRI (CMR) is not related to the degree of tricuspid regurgitation (TR) in congenitally corrected transposition of the great arteries (CCTGA). A retrospective cohort study evaluating adult patients with CCTGA who underwent CMR found no association between the degree of TR and right ventricular volume or ejection fraction. The study concluded that the degree of regurgitation alone may not fully explain the heterogeneity in right ventricular size and function (PUBMED:24814545). This finding is consistent with another study where myocardial scarring assessed by PSIR delayed-enhancement imaging was not the underlying pathology of systemic right ventricular failure, despite the presence of TR in the majority of patients (PUBMED:23377901).
Additionally, echocardiography and MRI showed a very strong agreement in the quantitative assessment of TR in patients with CCTGA, suggesting that both methods can be reliably used to assess the degree of TR (PUBMED:27542349).
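As an aside for readers who want to see the shape of such analyses, the following is a minimal, hypothetical sketch (not taken from any of the cited studies; every value, sample size, and variable name is invented). It illustrates the two statistical questions the answer above turns on: an association test between TR severity and RV ejection fraction (Spearman's rank correlation) and a Bland-Altman summary of agreement between two imaging modalities.

```python
# Hypothetical illustration only: invented data mirroring the kinds of tests
# reported in the cited abstracts (association and method agreement).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 30
tr_fraction = rng.uniform(5, 45, size=n)   # TR regurgitant fraction (%) by CMR
rvef = rng.normal(45, 8, size=n)           # RV ejection fraction (%) by CMR

# 1) Association between TR severity and systemic RV function (expected: none).
rho, p = stats.spearmanr(tr_fraction, rvef)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")

# 2) Echo vs. CMR agreement: Bland-Altman bias and 95% limits of agreement.
tr_echo = tr_fraction + rng.normal(0, 3, size=n)  # echo estimate of the same TR
diff = tr_echo - tr_fraction
bias, sd = diff.mean(), diff.std(ddof=1)
print(f"bias = {bias:.2f}%, limits of agreement = "
      f"[{bias - 1.96 * sd:.2f}, {bias + 1.96 * sd:.2f}]")
```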
Instruction: Endocrine factors associated with non-alcoholic fatty liver disease in women with polycystic ovary syndrome: do androgens play a role? Abstracts: abstract_id: PUBMED:19347015 Endocrine and liver interaction: the role of endocrine pathways in NASH. This article reviews evidence that causally links hormonal disorders with hepatobiliary disease, and gives particular focus to nonalcoholic steatohepatitis (NASH). The downstream mechanisms by which endocrine disturbances cause liver disease might be similar to those involved in the development of primary liver disease. Hypothyroidism, for example, might lead to NASH, cirrhosis and potentially liver cancer via the development of hyperlipidemia and obesity. Patients with growth hormone deficiency have a metabolic-syndrome-like phenotype that is also associated with the development of NASH. Polycystic ovary syndrome is a common endocrine disorder that is often associated with insulin resistance, the metabolic syndrome, altered levels of liver enzymes and the development of NASH. Recent findings support a role of dehydroepiandrosterone sulfate deficiency in the development of advanced NASH. In addition, adrenal failure is increasingly reported in patients with end stage liver disease and in patients who have received a liver transplant, which suggests a bidirectional relationship between liver and endocrine functions. Clinicians should, therefore, be aware of the potential role of endocrine disorders in patients with cryptogenic liver disease and of the effects of liver function on the endocrine system. abstract_id: PUBMED:22382613 Neuroendocrine and endocrine dysfunction in the hyperinsulinemic PCOS patient: the role of metformin. Metformin is a widely used and extensively studied insulin sensitising drug for the treatment of women with polycystic ovary syndrome (PCOS), with various actions in tissues responding to insulin that include the liver, skeletal muscle, adipose tissue, the endothelium of blood vessels, and the ovaries. Treatment of PCOS women with metformin has been shown to reduce fasting glucose levels, blood pressure, and serum androgens; further effects of metformin in women with PCOS may include direct effects on the central nervous system; and indirect effects via the modification of gut hormone and adipokine synthesis and/or secretion. A number of "novel" adipokines and metabolic factors have been recently identified which may play a role both in the pathogenesis and the treatment of women with PCOS. We here discuss recent advances in the area, with a focus on neuroendocrine and endocrine dysfunctions in women with PCOS and the potential role of metformin in this context. abstract_id: PUBMED:35448486 Nonalcoholic Fatty Liver Disease and Endocrine Axes-A Scoping Review. Nonalcoholic fatty liver disease (NAFLD) is the leading cause of chronic liver disease. NAFLD often occurs associated with endocrinopathies. Evidence suggests that endocrine dysfunction may play an important role in NAFLD development, progression, and severity. Our work aimed to explore and summarize the crosstalk between the liver and different endocrine organs, their hormones, and dysfunctions. For instance, our results show that hyperprolactinemia, hypercortisolemia, and polycystic ovary syndrome seem to worsen NAFLD's pathway. Hypothyroidism and low growth hormone levels also may contribute to NAFLD's progression, and a bidirectional association between hypercortisolism and hypogonadism and the NAFLD pathway looks likely, given the current evidence. 
Therefore, we concluded that it appears likely that there is a link between several endocrine disorders and NAFLD other than the typically known type 2 diabetes mellitus and metabolic syndrome (MS). Nevertheless, there is controversial and insufficient evidence in this area of knowledge. abstract_id: PUBMED:22082482 The emerging role of endocrine disruptors in pathogenesis of insulin resistance: a concept implicating nonalcoholic fatty liver disease. Endocrine disruptors or endocrine-disrupting chemicals (EDCs) represent a highly heterogeneous group of molecules found in the environment or in consumer products. Toxicology and epidemiology studies have suggested the involvement of diverse EDCs in an increasing number of metabolic disorders, including insulin resistance (IR) and IR-related comorbidities, such as obesity, type 2 diabetes mellitus (T2DM) and polycystic ovary syndrome. Nonalcoholic fatty liver disease (NAFLD), another IR-related condition, is emerging as a significant public health concern, affecting 30-45% of the general population in the Western world. To evaluate whether EDCs may also play a role in the pathogenesis of NAFLD, we reviewed the literature on well-studied EDCs, such as dioxins, bisphenol A, phthalates and other persistent organic pollutants, in relation to pathways that might contribute to the pathogenesis of fatty liver/NAFLD. Certain EDCs may be responsible for inducing alterations similar to those encountered in NAFLD either directly through a hepatotoxic effect and/or indirectly by triggering hepatic and systemic IR. Considering these effects, which act in concert with the effects of the epidemics of obesity and T2DM, EDCs may play a significant role in the pathogenesis of fatty liver, thereby increasing the prevalence of NAFLD worldwide. Translational studies and clinical trials investigating the association between EDCs and NAFLD are required to confirm and extend these studies. abstract_id: PUBMED:20001571 Endocrine factors associated with non-alcoholic fatty liver disease in women with polycystic ovary syndrome: do androgens play a role? Objective: To characterise the metabolic profile of women with polycystic ovary syndrome (PCOS) and non-alcoholic fatty liver disease (NAFLD) and to determine whether circulating androgens differ in PCOS women with NAFLD compared to PCOS subjects without NAFLD. Methods: Retrospective study of 21 women with PCOS, elevated liver enzymes and ultrasound evidence of hepatic steatosis matched with 32 PCOS women with normal liver enzymes. Extensive demographic, endocrine and metabolic data were compared. Pearson's correlation coefficients were calculated to assess for potential relationships between the free androgen index (FAI) and other dependent variables. Results: PCOS subjects with NAFLD demonstrate greater insulin resistance but have similar circulating androgen levels. Conclusion: In this pilot study, insulin resistance was the most prominent feature characterising NAFLD complicating PCOS. Total testosterone, FAI, DHEAS and 17-hydroxyprogesterone levels were similar between PCOS patients with and without NAFLD. abstract_id: PUBMED:26494962 Endocrine causes of nonalcoholic fatty liver disease. Nonalcoholic fatty liver disease (NAFLD) is the most common cause of chronic liver disease in the industrialized world. The prevalence of NAFLD is increasing, becoming a substantial public health burden.
NAFLD includes a broad spectrum of disorders, from simple conditions such as steatosis to severe manifestations such as fibrosis and cirrhosis. The relationship of NAFLD with metabolic alterations such as type 2 diabetes is well described and related to insulin resistance, with NAFLD being recognized as the hepatic manifestation of metabolic syndrome. However, NAFLD may also coincide with endocrine diseases such as polycystic ovary syndrome, hypothyroidism, growth hormone deficiency or hypercortisolism. It is therefore essential to remember, when discovering altered liver enzymes or hepatic steatosis on radiological exams, that endocrine diseases can cause NAFLD. Indeed, the overall prognosis of NAFLD may be modified by treatment of the underlying endocrine pathology. In this review, we will discuss endocrine diseases that can cause NAFLD. Underlying pathophysiological mechanisms will be presented and specific treatments will be reviewed. abstract_id: PUBMED:19923096 Role of the endocrine system in the pathogenesis of non-alcoholic fatty liver disease. The most frequent liver disorder in metabolic syndrome is nonalcoholic fatty liver disease. Its pathogenesis is a complex, multifactorial process, characterized by insulin resistance and involvement of the endocrine system. Hypothyroidism may lead to nonalcoholic steatohepatitis via hyperlipidemia and obesity. Adult patients with growth hormone deficiency have a metabolic syndrome-like phenotype with obesity and many characteristic metabolic alterations. The chronic activation of the hypothalamic-pituitary-adrenal axis results in metabolic syndrome as well. Cushing's syndrome also has features of metabolic syndrome. Mild elevation of transaminase activities is commonly seen in patients with adrenal failure. Non-alcoholic steatosis is twice as common in postmenopausal as in premenopausal women, and hormonal replacement therapy decreases the risk of steatosis. Insulin resistance, type 2 diabetes mellitus, sleep apnoea syndrome, cardiovascular disorders and non-alcoholic fatty liver disease are more frequent in polycystic ovary syndrome. Hypoandrogenism in males and hyperandrogenism in females may lead to fatty liver via obesity and insulin resistance. Adipokines (leptin, acylation stimulating protein, adiponectin) have a potential role in the pathogenesis of nonalcoholic fatty liver. Alterations of the endocrine system must be considered in the background of cryptogenic liver diseases. The endocrine perspective may help the therapeutic approaches in the future.
The keys to management of NAFLD at this time are dietary changes and exercise to achieve weight loss. Unfortunately, a large proportion of the patients with these endocrine disorders are unable to achieve either. This review aims to examine and summarize the current published literature that has evaluated the association between NAFLD and the above endocrine disorders and potential therapeutic interventions in each case. abstract_id: PUBMED:33101202 Dynamic Changes in Circulating Endocrine FGF19 Subfamily and Fetuin-A in Response to Intralipid and Insulin Infusions in Healthy and PCOS Women. Background: The fibroblast growth factor (FGF) 19 subfamily, also referred to as the endocrine FGFs, includes FGF19, FGF21, and FGF23, which are metabolic hormones involved in the regulation of glucose and lipid metabolism. Fetuin-A is a hepatokine involved in the regulation of beta-cell function and insulin resistance. Endocrine FGFs and fetuin-A are dysregulated in metabolic disorders including obesity, type 2 diabetes, non-alcoholic fatty liver disease and polycystic ovary syndrome (PCOS). Our study was designed to examine the response of endocrine FGFs and fetuin-A to acute intralipid and insulin infusions and to exercise in PCOS and healthy women. Subjects and Measurements: Ten healthy and 11 PCOS subjects underwent 5-h saline infusions with a hyperinsulinemic-euglycemic clamp (HIEC) performed during the final 2 h. One week later, intralipid infusions were undertaken with a HIEC performed during the final 2 h. After an 8-week exercise intervention, the saline, intralipid, and HIEC protocols were repeated. Plasma levels of endocrine FGFs and fetuin-A were measured. Results: Baseline fetuin-A was higher in PCOS women, but FGF19, FGF21, and FGF23 did not differ and were unaffected by exercise. Insulin administration elevated FGF21 in control and PCOS subjects, suppressed FGF19 in controls, and had no effect on FGF23 and fetuin-A. Intralipid infusion suppressed FGF19 and increased FGF21. Insulin with intralipid synergistically increased FGF21 and had no effect on lipid-mediated suppression of FGF19 in either group. Conclusion: Our study provides evidence for insulin and lipid regulation of endocrine FGFs in healthy and PCOS women, suggesting that FGF family members play a role in lipid and glucose metabolism. Clinical Trial Registration: www.isrctn.org, Identifier: ISRCTN42448814. abstract_id: PUBMED:33449917 Pathophysiology of NASH in endocrine diseases. Nonalcoholic fatty liver disease (NAFLD) is the most common cause of chronic liver disease in the industrialized world. NAFLD encompasses a whole spectrum ranging from simple steatosis to nonalcoholic steatohepatitis (NASH) and cirrhosis. The latter can lead to hepatocellular carcinoma. Furthermore, NASH is the most rapidly increasing indication for liver transplantation in western countries and therefore represents a global health issue. The pathophysiology of NASH is complex and includes multiple parallel hits. NASH is notably characterized by steatosis as well as evidence of hepatocyte injury and inflammation, with or without fibrosis. NASH is frequently associated with type 2 diabetes and conditions associated with insulin resistance. Moreover, NASH may also be found in many other endocrine diseases such as polycystic ovary syndrome, hypothyroidism, male hypogonadism, growth hormone deficiency or glucocorticoid excess, for example. In this review, we will discuss the pathophysiology of NASH associated with different endocrinopathies.
Answer: The relationship between endocrine factors and nonalcoholic fatty liver disease (NAFLD) in women with polycystic ovary syndrome (PCOS) has been explored in various studies. PCOS is often associated with insulin resistance, the metabolic syndrome, altered levels of liver enzymes, and the development of NAFLD (PUBMED:19347015). However, when it comes to the specific role of androgens in women with PCOS who have NAFLD, the evidence suggests that circulating androgen levels are similar in PCOS women with and without NAFLD (PUBMED:20001571). This indicates that while insulin resistance is a prominent feature characterizing NAFLD in PCOS, androgens may not play a significant role in this context. Furthermore, the metabolic profile of women with PCOS and NAFLD shows greater insulin resistance, but no significant difference in total testosterone, free androgen index (FAI), dehydroepiandrosterone sulfate (DHEAS), or 17-hydroxyprogesterone levels compared to PCOS subjects without NAFLD (PUBMED:20001571). This further supports the notion that androgens may not be the key endocrine factor associated with NAFLD in women with PCOS. In summary, while PCOS is commonly associated with endocrine dysfunctions such as insulin resistance and metabolic syndrome, which are linked to the development of NAFLD, the role of androgens in the development of NAFLD in women with PCOS does not appear to be significant based on the circulating levels of androgens in PCOS women with and without NAFLD (PUBMED:20001571).
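To make the analytic logic concrete, here is a minimal, hypothetical sketch of the two kinds of analysis summarized above: a between-group comparison of androgen levels and a regression linking NAFLD status to insulin resistance. It is not the authors' code; the group sizes echo PUBMED:20001571 (21 vs. 32), but every value and variable name is invented.

```python
# Hypothetical illustration only: invented data showing the style of analysis
# (group comparison plus logistic regression) used in studies of this kind.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(7)
n_nafld, n_ctrl = 21, 32
df = pd.DataFrame({
    "nafld": [1] * n_nafld + [0] * n_ctrl,
    # Free androgen index: simulated as similar in both groups.
    "fai": np.r_[rng.normal(8.0, 2.5, n_nafld), rng.normal(7.9, 2.5, n_ctrl)],
    # HOMA-IR: simulated as higher with NAFLD (greater insulin resistance).
    "homa_ir": np.r_[rng.normal(4.5, 1.2, n_nafld), rng.normal(2.5, 1.0, n_ctrl)],
})

# Androgen levels compared between PCOS women with and without NAFLD.
u, p = stats.mannwhitneyu(df.loc[df.nafld == 1, "fai"],
                          df.loc[df.nafld == 0, "fai"],
                          alternative="two-sided")
print(f"FAI: Mann-Whitney U = {u:.0f}, p = {p:.3f}")

# Logistic regression of NAFLD status on insulin resistance and androgens.
fit = smf.logit("nafld ~ homa_ir + fai", data=df).fit(disp=0)
print(fit.summary().tables[1])
```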
Instruction: Is cyclophosphamide effective in patients with IgM-positive minimal change disease? Abstracts: abstract_id: PUBMED:22729760 Is cyclophosphamide effective in patients with IgM-positive minimal change disease? Background: We analyzed the impact of immunoglobulin M (IgM) positivity on the relapse-free interval after a completed course of cyclophosphamide (CYC) treatment in patients with steroid-dependent nephrotic syndrome (SDNS) and minimal change disease (MCD). Methods: This was a retrospective chart review of all children who received CYC for SDNS and MCD between 1988 and 2009. Patients were divided into three groups based on kidney biopsy: MCD without immunoglobulin M (IgM) positivity (IgM-), MCD with IgM-positive immunofluorescence (IF) only (IgM+), and MCD with IgM-positive IF and electron-dense deposits on electron microscopy (IgM++). The relapse-free time interval to the first relapse post-CYC therapy or up to 48 months of follow-up (if no relapse occurred) was used for survival analysis. Results: Forty children aged 1.5-12.3 years (15 were IgM-, 16 were IgM+, 9 were IgM++) received a cumulative CYC dose of 175 ± 30 mg/kg. The overall relapse-free survival was 75% at 12 months, 64% at 24 months, 59% at 36 months, and 56% at 48 months, with no significant differences between the IgM groups (p = 0.80). Conclusions: Based on our results, we conclude that more than 50% of our SDNS patients with MCD remained relapse-free 4 years post-CYC treatment. No significant difference in the response to CYC was observed between patients with or without IgM positivity. abstract_id: PUBMED:19219463 Minimal change disease with IgM+ immunofluorescence: a subtype of nephrotic syndrome. Immunoglobulin (Ig) M nephropathy is defined by electron-dense mesangial deposits and mesangial IgM visible by immunofluorescence (IF) without other histopathologic and immunofluorescent microscopic abnormalities. Certain patients have only immuno-positive (IgM+) IF. Children presenting with steroid-dependent or steroid-resistant nephrotic syndrome have a high prevalence of IgM+ IF with or without electron-dense deposits. We reviewed the clinical course of children with steroid-dependent or steroid-resistant nephrotic syndrome who underwent renal biopsy at Texas Children's Hospital from 1989 to 2006 to further characterize IgM+ IF in children with nephrotic syndrome. Of the 55 children with steroid-resistant or -dependent minimal change disease (MCD), 23 had IgM+ IF. Of these 23 children, 61% had microscopic hematuria at presentation, 48% (11/23) were steroid-dependent, and 48% (11/23) steroid-resistant (one underwent biopsy prior to steroid therapy). We compared the efficacy of adjuvant treatment with cyclophosphamide and cyclosporine: 18% initially treated with cyclophosphamide obtained remission, while 55% had no response; 83% obtained subsequent remission with cyclosporine. Of those initially treated with cyclosporine, 88% obtained complete or partial remission. IgM+ IF may be a surrogate marker for the severity of MCD. Based on our results, children with MCD and IgM+ IF have a better response to cyclosporine than cyclophosphamide. abstract_id: PUBMED:7054586 Significance of mesangial IgM deposition in "minimal change" nephrotic syndrome. During a 6-year period, 64 patients with a nephrotic syndrome, shown histologically to result from minimal change nephropathy, were studied for a mean of 4.5 years.
Fifty of the patients showed no glomerular immunoglobulin on immunofluorescent or immunoperoxidase study, but 14 had mesangial deposits of IgM. The onset characteristics, response to treatment, and long-term course were similar in the two groups. We conclude that IgM deposition in the mesangium in patients with minimal change nephrotic syndrome is of no prognostic value, and our data do not suggest that this group of patients, although a histopathologic entity, represents a clinically distinct group. abstract_id: PUBMED:13680329 Intravenous pulse cyclophosphamide--is it effective in children with steroid-resistant nephrotic syndrome? Treatment of steroid-resistant nephrotic syndrome (SRNS) remains a challenge to pediatric nephrologists. Recently, intravenous cyclophosphamide (IV-CPM) infusion was shown to be effective, safe, and economical for the treatment of SRNS, particularly minimal change disease (MCD), as it results in more sustained remissions, longer periods without proteinuria, and fewer significant side effects at a lower cumulative dose. A prospective study was conducted to evaluate IV-CPM infusions in the management of children with SRNS secondary to MCD or IgM nephropathy. Five patients with SRNS (4 IgM nephropathy and 1 MCD) received six monthly IV-CPM infusions at a dose of 500 mg/m². No patient achieved complete or sustained remission. Three patients attained partial remission, which was not sustained for more than 1 month post therapy. One patient progressed rapidly to end-stage renal disease during treatment. Side effects included vomiting in four patients and alopecia in one patient. Conclusion: IV-CPM pulse therapy at a dose of 500 mg/m² is unsuccessful in obtaining complete or sustained remission in children with SRNS secondary to IgM nephropathy or MCD. Further randomized controlled studies with higher doses are required. abstract_id: PUBMED:30369188 Clinical analysis of six cases with de novo glomerulitis after allogeneic hematopoietic stem cell transplantation. Objective: To explore the occurrence, clinical characteristics, diagnosis and treatment of glomerulitis after allogeneic hematopoietic stem cell transplantation (allo-HSCT). Methods: Analyses were carried out based on the clinical data of 6 patients with de novo glomerulitis following allo-HSCT hospitalized in Henan Tumor Hospital from January 2008 to December 2016, and the clinical manifestations, pathology, diagnosis, treatment and outcomes were investigated. Results: The occurrence of glomerulitis was 1.26% (6/478). The median time of onset was 447 (272-1495) d after allo-HSCT. Proteinuria and varying degrees of edema were present in all patients. Of the 6 patients, 4 had impaired renal function, 3 had hypertension, 5 were positive for urine occult blood, and 2 had hyperlipidemia. Five patients had experienced acute graft-versus-host disease (GVHD), and 4 had chronic GVHD at diagnosis. Kidney pathology showed typical features of minimal change disease in 1 patient, membranous nephropathy in 4 patients and mesangial proliferative glomerulonephritis in 1 case. Immunohistochemistry of glomerular lesions revealed that the immune complex deposition included IgG in 4 patients, C3 in 3 patients, and IgM and C1q in 1 patient. Serum ANA was positive in 2 patients, and serum IgG and IgM were elevated in 1 patient each. Only 1 case responded to glucocorticoid therapy.
Five cases were treated with low-dose cyclophosphamide combined with mycophenolate mofetil (MMF); 2 achieved complete remission and 3 achieved partial remission. To date, 2 patients have died of lung infection and 4 have survived. Conclusion: The predominant pathological type of glomerulitis was membranous nephropathy. Low-dose cyclophosphamide combined with MMF was an effective treatment. abstract_id: PUBMED:18396625 Clinical analysis of late-onset nephrotic syndrome after allogeneic hematopoietic stem cell transplantation. Objective: To investigate the clinical manifestations, pathology, diagnosis, treatment, and pathogenesis of late-onset nephrotic syndrome (NS) after allogeneic hematopoietic stem cell transplantation (allo-HSCT). Methods: NS post-HSCT was investigated in 167 patients with hematopoietic malignancies who survived more than 3 months after allo-HSCT. The clinical manifestations, pathology, diagnosis, and treatment were investigated in a retrospective study. The association of the onset of NS post-HSCT with sex, age, transplant type, conditioning regimens, human leucocyte antigen (HLA) matching, donor-recipient relationship, acute graft versus host disease (aGVHD), chronic graft versus host disease (cGVHD), and cytomegalovirus infection was analyzed. Results: Five patients (2.99%) were diagnosed with NS post-HSCT, 4 with membranous glomerulonephritis (MGN) and 1 with minimal change disease (MCD). Immunohistochemistry of glomerular lesions revealed that the immunoglobulin (Ig) of immune complex deposition included IgG in 3 patients, IgM in 1 patient, and co-existence of IgG/IgM in 1 patient. Serum ANA was positive in 2 patients and serum IgG and IgM were elevated in 2 patients. Glucocorticoid combined with cyclophosphamide (CTX) was a fundamentally effective treatment for NS post-HSCT: 1 patient achieved a complete response, 3 achieved a partial response, and 1 remained stable after treatment. Binary logistic regression showed that the onset of NS post-HSCT was not significantly associated with sex, age, transplant type, conditioning regimen, HLA matching, donor-recipient relation, aGVHD, cGVHD or CMV infection. Conclusion: The predominant pathological type of NS post-HSCT is MGN, followed by MCD. The pathogenesis of NS post-HSCT may be related to an abnormality of humoral immunity. Glucocorticoid combined with CTX is an effective treatment. abstract_id: PUBMED:3147413 Cyclosporin in the treatment of steroid-responsive and steroid-resistant nephrotic syndrome in adults. The effect of cyclosporin on proteinuria was studied in 11 patients with steroid-responsive nephrotic syndrome (10 minimal change nephropathy, one IgM nephropathy) and in four patients with steroid-resistant nephrotic syndrome from focal segmental glomerulosclerosis. Cyclosporin (mean initial dose 7.7 mg/kg per day) produced a complete remission of proteinuria in 15 nephrotic episodes in the ten patients with minimal-change nephropathy after a mean 14.3 days (range 7-23 days) of therapy. All patients remained in remission while receiving cyclosporin (mean duration of follow-up 147 days; range 40-230 days). However, when cyclosporin was discontinued on nine occasions in five patients, all relapsed after a mean 47.8 days (range 7-180 days). Four of the five patients were subsequently rechallenged with cyclosporin and all responded.
Maintenance cyclosporin therapy to prevent relapse was not associated with any adverse effects, and there was no significant difference between the creatinine clearance before and after 30 days of therapy (86.9 ± 19.3 and 81.7 ± 23.5 ml/min respectively, P > 0.1). The patient with steroid-responsive IgM nephrotic syndrome did not respond to cyclosporin, and there was no significant effect of cyclosporin on proteinuria in the four patients with FSGS. Cyclosporin is an effective agent for the treatment of patients with frequently relapsing minimal-change nephropathy who became steroid dependent when cyclophosphamide is contraindicated. However, unlike cyclophosphamide, long-term remissions which persist after treatment is withdrawn are not obtained, and patients may be said to be cyclosporin dependent. abstract_id: PUBMED:4047274 Efficacy of cyclophosphamide in steroid-sensitive childhood nephrotic syndrome with different morphological lesions. We have reviewed the efficacy of cyclophosphamide in 39 steroid-sensitive frequently relapsing nephrotic children. Cyclophosphamide was used because of heavy steroid dependence and steroid toxicity. A percutaneous renal biopsy done prior to administration of cyclophosphamide showed the lesion to be minimal change in 7 children, IgM nephropathy in 17 children, and evolving from minimal change nephrotic syndrome to focal segmental sclerosis (FSGS) in 15 children. 100% of patients with minimal change nephrotic syndrome responded to cyclophosphamide, but only 58% of IgM patients responded (p < 0.05). Only 1 of 15 FSGS patients responded (p < 0.001 vs. minimal change nephrotic syndrome and p < 0.01 vs. IgM). In view of the failure of cyclophosphamide to produce a remission in FSGS and its potential for long-term impairment of suppressor T cell function, we suggest that it should not be used in patients whose disease has evolved from minimal change to FSGS. abstract_id: PUBMED:20017458 Renal biopsy in children with steroid-dependent nephrotic syndrome. Background: There is a lack of consensus on the necessity of renal biopsy in children with steroid-dependent nephrotic syndrome (SDNS) prior to cytotoxic therapy. Objectives: To retrospectively evaluate: (a) the benefit of renal biopsy (RB) prior to cyclophosphamide therapy; (b) the relationship between histopathologic results of RB samples and clinical course in children with SDNS. Patients, Materials And Methods: RB was performed in 18 SDNS patients (11 boys and 7 girls). The mean age of the children at the time of nephrotic syndrome (NS) diagnosis was 6.4 ± 3.9 years and 9.7 ± 4.3 years at the time of RB. Following the RB, all children received prednisone and cyclophosphamide treatment for 12 weeks. Duration of remission and relapse rate were recorded. Results: The histologic evaluation revealed minimal change disease (MCD; n = 14) and IgM nephropathy (n = 4). These results did not affect the ongoing therapy. MCD patients had longer remission compared to IgM nephropathy (3.2 ± 1.5 vs 1.7 ± 0.8 years; p = 0.05). Relapse rate did not differ significantly between MCD and IgM nephropathy (p = 0.22). The duration of remission was inversely correlated to relapse rate after the treatment (r = -0.66, p = 0.01). Conclusion: We suggest that RB prior to cyclophosphamide therapy is not necessary in patients with SDNS.
abstract_id: PUBMED:10862635 Adult-onset idiopathic nephrotic syndrome associated with pure diffuse mesangial hypercellularity. Background: Pure diffuse mesangial hypercellularity (DMH), in its primary form, is a relatively rare histological finding and few data exist in the literature regarding its clinical course and prognosis in nephrotic adults with this diagnosis. Methods: We retrospectively analysed the clinical and histological data of 28 adult nephrotic patients (13 male) with this diagnosis with regard to response to the treatment, outcome and prognostic indicators. Results: Of 25 patients treated with prednisolone (Pred), nine (36%) showed complete remission (CR) of proteinuria, eight (32%) partial remission (PR) and eight (32%) did not respond at all (NR). The combination of cyclosporin treatment with prednisolone in those with PR or NR produced one further complete and two partial remissions. At the end of follow-up (mean 64 months), 10 patients (40%) were in CR, nine (36%) in PR and six (24%) were NR and remained nephrotic. Renal function remained unchanged in patients with CR or PR. In contrast, the six non-responders progressed to end-stage renal disease (ESRD). Compared with non-responders, patients who responded to Pred were older and had normal renal function at presentation. This group also had less mesangial sclerosis and severe tubulointerstitial fibrosis and none showed synechiae with Bowman's capsule. IgM mesangial deposits were observed in 22% of patients with CR in response to Pred, in 37% of those with PR and in 100% of non-responders, who finally progressed to ESRD. A multivariate analysis of clinical and histological features at biopsy showed persistent nephrotic syndrome (P<0.001), the severity of DMH (P<0.03) and the presence of mesangial IgM (P<0.01) to have independent predictive value for ESRD. This analysis also demonstrated that only mesangial sclerosis (P<0.03) and the presence of mesangial IgM (P<0.002) independently predicted the response to therapy. Conclusions: DMH associated with idiopathic nephrotic syndrome is a heterogeneous entity. Patients who respond to therapy (completely or partially) have a benign course similar to that of minimal change nephrotic syndrome. They are usually older and have normal renal function at presentation, whereas 'sclerotic' lesions are less frequent findings in initial biopsies. Non-responders tend to be younger and progress to ESRD. Most of them have impaired renal function at first assessment and more prominent 'sclerotic' lesions on initial biopsies. Mesangial IgM is an independent marker of poorer response to treatment and progression to ESRD but it lacks specificity. Answer: Cyclophosphamide appears to have varying degrees of effectiveness in patients with IgM-positive minimal change disease (MCD). According to the studies, the response to cyclophosphamide in this patient group is not significantly different from that in patients without IgM positivity. One study found that more than 50% of steroid-dependent nephrotic syndrome (SDNS) patients with MCD remained relapse-free 4 years post-cyclophosphamide treatment, regardless of IgM positivity (PUBMED:22729760). However, another study reported that children with MCD and IgM-positive immunofluorescence had a better response to cyclosporine than to cyclophosphamide, with only 18% initially treated with cyclophosphamide obtaining remission, while 55% had no response (PUBMED:19219463).
Furthermore, a study that looked at the significance of mesangial IgM deposition in minimal change nephrotic syndrome concluded that IgM deposition in the mesangium is of no prognostic value, suggesting that the presence of IgM does not necessarily predict the response to cyclophosphamide (PUBMED:7054586). On the other hand, a prospective study evaluating intravenous cyclophosphamide infusions in the management of children with steroid-resistant nephrotic syndrome (SRNS) secondary to MCD or IgM nephropathy found that no patient achieved complete or sustained remission, indicating limited effectiveness in this context (PUBMED:13680329). In summary, while cyclophosphamide can be effective in some patients with IgM-positive MCD, the response is variable and may not be as favorable as in those with IgM-negative MCD. Other treatment options, such as cyclosporine, may be more effective for certain patients with IgM-positive MCD (PUBMED:19219463).
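For readers unfamiliar with the relapse-free survival analysis cited above (PUBMED:22729760), the sketch below shows what a Kaplan-Meier comparison across IgM groups looks like in code. It is purely illustrative: the follow-up times and events are invented, and the open-source lifelines library is one of several that could be used.

```python
# Hypothetical illustration only: invented follow-up data analyzed in the style
# of the cited study (Kaplan-Meier curves plus a log-rank test across groups).
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

# Months to first relapse (censored at 48 months if no relapse) by biopsy group.
df = pd.DataFrame({
    "months":  [12, 48, 30, 48, 9, 48, 24, 48, 36, 48, 15, 48],
    "relapse": [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
    "group":   ["IgM-"] * 4 + ["IgM+"] * 4 + ["IgM++"] * 4,
})

kmf = KaplanMeierFitter()
for name, grp in df.groupby("group"):
    kmf.fit(grp["months"], event_observed=grp["relapse"], label=name)
    print(name, "relapse-free at 48 months:", round(float(kmf.predict(48)), 2))

# Log-rank test across the three groups (analogous to the reported p = 0.80).
result = multivariate_logrank_test(df["months"], df["group"], df["relapse"])
print("log-rank p =", round(result.p_value, 3))
```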
Instruction: Does provision of food in school increase girls' enrollment? Abstracts: abstract_id: PUBMED:24683673 Does provision of food in school increase girls' enrollment? Evidence from schools in sub-Saharan Africa. Background: The problem of low female literacy rates in Africa starts with low primary school enrollment, particularly in areas of high food insecurity. The provision of food in, and through, schools is considered to be one way of enrolling more girls in school, keeping them enrolled, and enhancing their adult well-being and productivity as a result. Objective: To investigate the effects of provision of food and additional take-home rations in schools on girls' enrollment. Methods: A retrospective cross-sectional study was designed based on school-level surveys in 32 African countries between 2002 and 2005. The study population consisted of girls and boys in primary schools targeted by the World Food Programme (WFP) and located in food-insecure areas that also suffered from lack of access to education. Results: Provision of food in schools through the Food for Education (FFE) program contributed to increasing absolute enrollment in WFP-assisted schools by 28% for girls and 22% for boys in the first year. Post year-one enrollment patterns varied according to the type of FFE program. Where provision of take-home rations for girls was combined with on-site feeding for all pupils, the increase in girls' absolute enrollment was sustained at 30% after the first year. However, in schools providing on-site feeding alone, the rate of increase in absolute enrollment after the first year reverted to the rates of increase found in the year prior to FFE implementation. The provision of take-home rations also appeared to reduce the dropout rate of female students, particularly in the higher grades. Conclusions: FFE programs can have a lasting positive influence on school enrollment and, by providing extra take-home rations to girls, in addition to on-site feeding, can make a strong contribution to the Millennium Development Goals. abstract_id: PUBMED:26052509 School Feeding and Girls' Enrollment: The Effects of Alternative Implementation Modalities in Low-Income Settings in Sub-Saharan Africa. Background: School feeding interventions are implemented in nearly every country in the world, with the potential to support the education, health and nutrition of school children. In terms of impact on school participation, there is little evidence to show that different school feeding modalities have different effect sizes. Objective: To examine the influence of different school feeding modalities on primary school enrollment, particularly for girls, in 32 countries across sub-Saharan Africa. Methods: An observational study involving a meta-analysis of published data was developed to examine program effect. Schools were divided according to the type and length of the program: those with existing programs, those that had had school feeding for less than 1 year, and a counterfactual including schools without a program but that were going to initiate school feeding within the survey year. The intervention consisted of two different types of school feeding: onsite meals alone or onsite meals plus take-home rations. Changes in enrollment, both total and disaggregated by grade and gender, over a 1-year period, were used to assess effects of school feeding. 
To control for pre-program characteristics in the beneficiary population, data on covariates were also examined before the school feeding intervention began and after one year of implementation. Using this design, a comparison of enrollment levels was made between the different types of treatment schools and the control schools during the period school feeding was first introduced. Standard multiple regression models were used to analyze program effect. Results: School feeding programs were found to have statistically significant increases in enrollment, with an effect size of about 10%. The changes in enrollment varied by modality of school feeding provision and by gender, with onsite meals appearing to have stronger effects in the first year of treatment in the lower primary grades, and onsite combined with take-home rations also being effective post-year 1, particularly for girls who were receiving the extra take-home rations. Conclusion: School feeding programs had a positive impact on school enrollment. The operational nature of the survey data used in the meta-analysis, however, limits the robustness of the design and validity of the findings. Nevertheless, this analysis is the first to study possible links between enrollment and length of program duration using multivariable models, examining whether programs reach a saturation point or steady state beyond which school feeding may in fact have no further benefits on school enrollment. Further research is required to examine this issue in more detail. abstract_id: PUBMED:35002057 A Note on the Effect of COVID-19 and Vaccine Rollout on School Enrollment in the US. The COVID-19 pandemic forced families to decide on the safest and most effective learning environments for their children because of the virus's threat to health and life. Hence, because of the nationwide school closures, policymakers have raised concerns about missing children cases: those who have not enrolled in school at all because of the pandemic. The present study investigates whether there is a difference in school enrollment during the pre-COVID-19 period, the COVID-19 period, and the vaccine rollout period. We employed the U.S. Current Population Survey (CPS), covering January 2020 to May 2021, and used both logistic and multinomial regression models for the empirical analysis. Our results showed that school enrollment was lower during the COVID-19 pandemic compared to the pre-COVID-19 period. Other results showed that school enrollment was higher after the vaccine rollout compared to the COVID-19 period. A possible explanation for this could be that families consider the vaccine the safe path to enrolling their children in school. We also found that school enrollment varies significantly across race and ethnic groups. abstract_id: PUBMED:34462037 Impact of the community eligibility provision program on school meal participation in Texas. Objective: To estimate the impact of opting into the community eligibility provision (CEP) on school meal participation among students in Texas. Design: A quasi-experimental design using a two-way fixed effects panel difference-in-difference model and the variation in adoption timing to estimate the impact of opting into CEP on student breakfast and lunch participation in eligible, ever-adopting schools.
Setting: All public and charter K-12 schools in Texas participating in national school meals (breakfast and/or lunch) from 2013 to 2019 that were eligible for the CEP program in at least 1 year and chose to opt into the program in at least 1 year (n = 2797 unique schools and 16,103 school-years). Participants: School-level administrative data from the Texas Department of Agriculture on meal counts, enrollment and summary characteristics of students merged with district-level educational and socio-demographic data from the Texas Education Authority. Results: We find opting into CEP increased school breakfast participation by 4.59 percentage points (P < 0.001) and lunch participation by 4.32 percentage points (P < 0.001), on average. The effect is slightly larger (4.64 and 4.61, respectively) and still statistically significant when excluding summer months. Conclusion: Our findings suggest that opting into CEP modestly increases school meal participation in Texas, with a similar impact on breakfast and lunch. abstract_id: PUBMED:32754916 Community Eligibility Provision and School Meal Participation among Student Subgroups. Background: The National School Lunch and Breakfast Programs help to reduce food insecurity and improve nutrition. The Community Eligibility Provision (CEP) enables high-poverty schools to offer breakfast and lunch at no cost to all students. This study examines associations between CEP and participation among students eligible for free or reduced-price meals ("FRPM"), possibly eligible ("near-cutoff"), or ineligible ("full-price"). Methods: Using data from the 2013-2015 Healthy Communities Study, we compared school breakfast and lunch participation between 842 students in K-8 at 80 CEP schools and 1463 students at 118 schools without CEP. Cross-sectional difference-in-difference (DID) models compared meal participation among near-cutoff and full-price groups to that in the FRPM group. Results: Overall, FRPM students had high participation in school lunch and breakfast at both types of schools. In adjusted DID models, lunch participation among near-cutoff students was 12 points higher in CEP versus comparison schools (p < .05). Among full-price students, breakfast participation was 20 points higher and lunch participation 19 points higher in CEP than comparison schools (p < .001). Conclusions: Community Eligibility Provision improves access to school breakfast and lunch in high-poverty schools, particularly for students who are near or above the cutoff for FRPM eligibility. abstract_id: PUBMED:37351530 Effects of a single cash transfer on school re-enrollment during COVID-19 among vulnerable adolescent girls in Kenya: Randomized controlled trial. COVID-19-related school closures in Kenya were among the longest in Africa, putting older adolescent girls nearing the end of secondary school at risk of permanent dropout. Using a randomized controlled trial, we evaluated a logistically simple cash transfer intervention in urban areas designed to promote their return to school. There were no required conditions for receiving the transfer and the intervention is interpreted as a labeled cash transfer. It had substantial significant effects on re-enrollment of adolescent girls, with greater effectiveness for older girls and even for some not enrolled earlier in the school year.
The program's effectiveness demonstrates the feasibility of the approach and underscores the potential importance of additional resources for schooling during the pandemic, when a large majority of households had suffered income losses. abstract_id: PUBMED:33114649 Education or Provision? A Comparison of Two School-Based Fruit and Vegetable Nutrition Education Programs in the Netherlands. A healthy diet is important for optimal child growth and development. School-based opportunities to encourage children to achieve healthy eating behaviors should be explored. Nutrition education programs can provide school children with classroom-based nutrition education and access to fruits and vegetables (FV). However, the effectiveness of specific program components implemented separately has not yet been comprehensively evaluated. The current study examined the effectiveness of individual components of two programs targeting primary school children (n = 1460, n = 37 schools) aged 7-12 years. Nutrition knowledge and FV consumption were measured using a student questionnaire, and the presence of school food policies was measured with the teachers' questionnaire. A quasi-experimental design with three arms compared: (1) schools that implemented both programs: FV provision + education (n = 15), (2) schools that implemented the FV provision program only (n = 12), (3) schools that did not implement either program (n = 10). Outcomes were assessed pre-intervention (T0), during the intervention (T1), and 6 months post-intervention (T2). Results indicated a significant increase in nutrition knowledge for children attending schools that had participated in both programs, compared to control schools (p < 0.01), but no significant increase in FV intake. In schools without food policies, FV provision alone contributed to an increase in child FV intake (p < 0.05). abstract_id: PUBMED:26034342 Is Child Labor a Barrier to School Enrollment in Low- and Middle-Income Countries? Achieving universal primary education is one of the Millennium Development Goals. In low- and middle-income developing countries (LMIC), child labor may be a barrier. Few multi-country, controlled studies of the relations between different kinds of child labor and schooling are available. This study employs 186,795 families with 7- to 14-year-old children in 30 LMIC to explore relations of children's work outside the home, family work, and household chores with school enrollment. Significant negative relations emerged between each form of child labor and school enrollment, but relations were more consistent for family work and household chores than work outside the home. All relations were moderated by country and sometimes by gender. These differentiated findings have nuanced policy implications. abstract_id: PUBMED:33138030 Stakeholder Generated Ideas for Alternative School Food Provision Models in Australia Using the Nominal Group Technique. Good nutrition is important for children's learning, growth, and development, yet food intake during school hours does not align with recommendations. In Australia, most school children currently bring a packed lunch from home, but what if there was a different way? This project aimed to engage a diverse range of stakeholders to (1) generate, refine and prioritize ideas for novel models of food provision to Australian children within school hours, and (2) to determine and rank the potential barriers and facilitators to changing the school food provision system.
This study used nominal group technique virtual workshops: three idea generation workshops (n = 21 participants) and one consensus workshop (n = 11 participants). School lunch prepared onsite was the top-ranked food provision model option based on impact and achievability. Potential barriers (n = 26) and facilitators (n = 28) to changing the school food system were generated. The top-ranked barrier and facilitator related to government support. This study highlights that there is an opportunity to explore partnerships and utilize existing skills and infrastructure to introduce a universal school-provided lunch model in Australia. The next steps should focus on building the business case capturing the social value of investing in school lunches, including considering parent-paid and subsidized options. abstract_id: PUBMED:31049487 Identifying Nutrition and Health-Relevant Behaviors, Beliefs, and Values of School-Going Adolescent Girls in Rural Bangladesh: Context for Interventions. Background: Focusing on adolescent schoolgirls in rural Bangladesh, this study aimed to describe the nutrition-relevant context required for effective intervention planning. It included attention to dietary behaviors, daily schedules and activities, and girls' beliefs and values. We placed a special emphasis on iron because anemia has been identified as a significant problem in adolescent girls in Bangladesh. Objectives: The study was undertaken to inform a larger project to develop a replicable model for integrating nutrition activities into the multiple social programs in BRAC, a large nongovernmental service delivery organization in Bangladesh. Methods: Following an initial phase of exploratory key informant interviewing, data collection was conducted through the use of focused ethnographic methods in 2 additional phases. Phase II consisted of in-depth interviews with school-going adolescents in rural communities in Rangpur District (n = 23). Interview modules included sociodemographic information; a qualitative 24-h dietary recall; respondents' concepts of "health," "healthy foods," and "anemia"; exposure to nutrition messages and ratings of their importance and feasibility; and actions to maintain health. In phase III, ratings and other data were collected systematically from a separate sample of 20 adolescent girls. Analyses included thematic analysis of transcribed and translated text and quantitative analysis of numeric data. Results: Key findings include evidence that school-going girls consume a variety of foods and have substantial knowledge about good health and nutrition practices. However, we also found beliefs and practices that challenge the development of nutritionally sound food practices, including iron nutrition. The study revealed the importance of teachers and mothers as information sources, and the dietary and cultural roles of purchased snacks, which occur in connection with school attendance. Conclusions: These findings, together with insights about girls' values, particularly conceptions related to "a healthy life," have implications for expanding BRAC activities to support the nutrition of adolescent school girls. Answer: Yes, the provision of food in school does increase girls' enrollment. Evidence from schools in sub-Saharan Africa indicates that the provision of food through the Food for Education (FFE) program contributed to a significant increase in absolute enrollment, with a 28% increase for girls and 22% for boys in the first year.
When take-home rations for girls were combined with on-site feeding for all pupils, the increase in girls' absolute enrollment was sustained at 30% after the first year. In contrast, schools providing on-site feeding alone saw the rate of increase in absolute enrollment revert to pre-FFE implementation rates after the first year. The provision of take-home rations also appeared to reduce the dropout rate of female students, particularly in higher grades (PUBMED:24683673). Additionally, school feeding programs were found to have statistically significant increases in enrollment, with an effect size of about 10%. The changes in enrollment varied by the modality of school feeding provision and by gender, with onsite meals having stronger effects in the first year of treatment in the lower primary grades. Onsite meals combined with take-home rations were also effective post-year 1, particularly for girls receiving the extra take-home rations (PUBMED:26052509). These findings support the conclusion that school feeding programs can have a lasting positive influence on school enrollment and can make a strong contribution to educational goals, such as the Millennium Development Goals, by providing extra take-home rations to girls in addition to on-site feeding (PUBMED:24683673).
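Several of the abstracts above estimate program effects with regression-based designs, for example the two-way fixed effects difference-in-difference model in PUBMED:35002057. The sketch below is a minimal, hypothetical illustration of that estimator on invented school-by-year panel data; it is not the authors' code, and the simulated 4.5-point adoption effect is arbitrary.

```python
# Hypothetical illustration only: a two-way fixed effects difference-in-
# difference regression on invented school-by-year panel data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for school in range(40):
    adopt_year = rng.choice([2015, 2017, 9999])  # 9999 = never adopts
    for year in range(2013, 2020):
        cep = int(year >= adopt_year)
        # Participation = school effect + time trend + 4.5-point CEP effect.
        rate = (50 + (school % 5) + 0.5 * (year - 2013) + 4.5 * cep
                + rng.normal(0, 2))
        rows.append({"school": school, "year": year, "cep": cep, "rate": rate})
df = pd.DataFrame(rows)

# C(school) and C(year) absorb school and year fixed effects; the `cep`
# coefficient is the difference-in-difference estimate of the adoption effect.
fit = smf.ols("rate ~ cep + C(school) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["school"]})
print(f"DiD estimate: {fit.params['cep']:.2f} "
      f"(cluster-robust SE {fit.bse['cep']:.2f})")
```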
Instruction: Language delay in severely neglected children: a cumulative or specific effect of risk factors? Abstracts: abstract_id: PUBMED:20413156 Language delay in severely neglected children: a cumulative or specific effect of risk factors? Objectives: This research sought to determine if the language delay (LD) of severely neglected children under 3 years old was better explained by a cumulative risk model or by the specificity of risk factors. The objective was also to identify the risk factors with the strongest impact on LD among various biological, psychological, and environmental factors. Methods: Sixty-eight severely neglected children and their mothers participated in this cross-sectional study. Children were between 2 and 36 months of age. Data included information about the child's language development and biological, psychological, and environmental risk factors. Results: Prevalence of LD is significantly higher in this subgroup of children than in the population as a whole. Although we observed that the risk of LD significantly increased with an increase in the cumulative count of the presence of the child's biological-psychological risk factors, the one-by-one analysis of the individual factors revealed that the cumulative effect mainly reflected the specific impact of the child's cognitive development. When we also considered the environmental risk factors, multivariate logistic regression established that cognitive development, the mother's own physical and emotional abuse experience as a child, and the mother's low acceptability level towards her child are linked to LD in severely neglected children. Conclusions: Language development is the result of a complex interaction between risk factors. LD in severely neglected children is better explained by the specificity of risk factors than by the cumulative risk model. Practice Implications: Most prevention and early intervention programs promote and target an increase in the quantity and quality of language stimulation offered to the child. Our results suggest that particular attention should be given to other environmental factors, specifically the mother's psychological availability and her sensitivity towards the child. It is essential to suggest interventions targeting various ecological dimensions of neglectful mothers to help break the intergenerational neglect transmission cycle. It is also important to develop government policies and ensure that efforts among the various response networks are concerted since in-depth changes to neglect situations can only come about when all interested parties become involved. abstract_id: PUBMED:30782015 The Pragmatic Language Skills of Severely Neglected 42-Month-Old Children: Results of the ELLAN Study. The goals of this study were twofold: (1) to compare the pragmatic language skills (i.e., social communication skills) of 42-month-old neglected children with those of same-aged non-neglected children and (2) to measure the prevalence of pragmatic difficulties among the neglected children. The study sample was composed of 45 neglected and 95 non-neglected 42-month-old French-speaking children. The Language Use Inventory: French (LUI-French) was completed with all parents. This measure, comprising 159 scored items divided into 10 subscales, was used to assess the children's pragmatic skills. The 10th percentile on the LUI-French (95% confidence interval) was used to identify children with pragmatic difficulties.
The neglected children had lower scores than the non-neglected children on all 10 dimensions of pragmatics evaluated (p < .01), as well as lower LUI-French Total Scores (p < .001). The effect sizes of these differences varied between 0.84 and 2.78. Forty-four percent of the neglected children presented significant pragmatic difficulties compared to 4.2% of their non-neglected peers (p < .001). It can be concluded that exposure to neglect significantly compromises children's pragmatic skills. These results support the need for interventions geared toward neglected children and their families to support the early development of their pragmatic skills. abstract_id: PUBMED:9208366 Epidemiology of specific language impairment: prenatal and perinatal risk factors. The prenatal and perinatal risk factors likely to be associated with specific language impairment (SLI) were examined in this study. A review of existing research showed that there have been few studies on this topic. Among children with SLI, greater rates of near relatives with language learning problems have been found. Data obtained from a case-control study of 177 children with SLI and 925 children without sensory disorders, developmental disorders, or language impairment were studied using a parental questionnaire concerning exposures during the prenatal and perinatal period for the index children. Differences between the children with SLI and controls were found for parental characteristics regarding education, positive history of language and learning problems, tobacco smoking, and breastfeeding. No support was found for elevated rates of maternal exposure to disease or occupational chemical substances. abstract_id: PUBMED:24686440 Prenatal, perinatal, and neonatal risk factors for specific language impairment: a prospective pregnancy cohort study. Purpose: Although genetic factors are known to play a causal role in specific language impairment (SLI), environmental factors may also be important. This study examined whether there are prenatal, perinatal, and neonatal factors that are associated with childhood SLI. Method: Participants were members of the Raine Study, a prospective cohort investigation of pregnant women and their offspring. Parent report indicated that 26 children had received a clinical diagnosis of SLI. Data from antenatal and birth medical records were compared between the children with SLI and typically developing comparison children (N = 1,799). Results: There were no statistically significant differences between the SLI and comparison groups in the individual prenatal, perinatal, and neonatal factors examined. Aggregate risk scores were calculated for each period on the basis of factors known to be associated with neurodevelopmental disorder. There were no group differences in aggregate risk scores in the prenatal and perinatal periods. However, significantly more children in the SLI group (50%) compared with the comparison group (27.6%) experienced 2 or more risk factors during the neonatal period. Conclusion: The vast majority of prenatal, perinatal, and neonatal complications do not play a clear causal role in childhood SLI. However, poor neonatal health may signify increased risk for SLI. abstract_id: PUBMED:29216607 Frequency of risk factors and coexisting abnormalities in a population of Egyptian children with autism spectrum disorder. Background: Many risk factors interact together during the critical period of development and govern the future phenotype of autism spectrum disorder (ASD).
Furthermore, co-occurring abnormalities among individuals with ASD vary a lot, as do their abilities. Aim Of Work: To investigate possible risk factors and to determine the prevalence of coexisting abnormalities in a sample of Egyptian ASD children and their influence on the severity and their communication performance. Methods: The diagnosis and severity of ASD for participants (N=80) was performed by DSM-5, ADIR and CARS. They were investigated regarding the possible risk factors and coexisting abnormalities. A detailed history taking, clinical examination, the Arabic preschool language scale, cognitive abilities assessment and other additional instrumental measures such as EEG were used. Results: Caesarean section and neonatal jaundice were the most common risk factors. The severity of ASD was positively related to maternal and paternal ages. Developmental language disorder, intellectual disability, attention deficit hyperactivity disorder, sleep disorder and EEG changes were more frequently detected among studied cases. The CARS scores were significantly higher in the ADHD and EEG changes groups. The most severely affected CARS items in the groups with these disorders were determined. Conclusion: High parental ages have an impact on the severity of ASD. ADHD, sleep disorder, and EEG changes seem to have an impact on certain elements of adaptive behavior, especially the communicative performance of ASD individuals. We recommend seriously investigating co-morbid abnormalities and considering them during the process of management of ASD for proper intervention plans. abstract_id: PUBMED:17164505 Factors associated with specific language impairment and later language development during early life: a literature review. It has been reported that 50% of children with specific language impairment (SLI) have persistent SLI, which has been associated with various risk factors. To date, however, there has not been a comprehensive review of studies into different risk factors that could be used by clinicians to facilitate parental counseling and individual case-management. Several studies about the factors associated with SLI were reviewed based on study design. This article presents a review of factors associated with later language development and SLI, and reviews the risk for children who have SLI during early life. The summary provides data including specific biologic and environmental factors that are significantly associated with SLI, to ensure early intervention for children with SLI in the presence of identified risk factors. abstract_id: PUBMED:12578298 Risk factors for language impairment in Swedish bilingual and monolingual children relative to severity. Aim: To explore potential risk factors for language impairment (LI) relative to degree of severity in bilingual and monolingual children. Methods: Two groups of clinically assessed children were compared, 252 bilinguals whose parents were both non-Swedish and 446 monolinguals, who were referred over a period of 24 mo to the University Hospital in Malmö for suspected LI. Results: A multivariate linear regression for both groups showed that two risk factors were the same: parental distress (p < 0.0001 in both groups) and short attention span (p < 0.0001 in both groups). Male gender (p < 0.0001) was an additional predictor for LI relative to degree of severity in the monolingual group.
Specific risk factors for LI relative to degree of severity in the bilingual children were maternal arrival within 1 y in relation to birth (p < 0.002) and parental need for an interpreter after > 5 y in Sweden (p < 0.040). Conclusion: Most risk factors for LI relative to degree of severity in this study seemed to apply to both groups, although they sometimes appeared in another form in the bilingual group. Gender and possibly also hereditary factors seemed to need to interact with environmental factors to appear as risk factors for the bilingual children in this study. abstract_id: PUBMED:23506460 Among perinatal factors, only the Apgar score is associated with specific language impairment. Aim: The purpose of this study was to assess the relation of perinatal risk factors with later development of specific language impairment (SLI). Method: In a case-control study, 179 children attending special needs schools for SLI were matched with non-affected children attending mainstream schools. Both groups consisted of 134 males and 45 females (age range 4-13 y; mean age 9 y, SD 2 y 4 mo). Data on duration of pregnancy, birthweight, delivery complications, birth characteristics, and Apgar scores were collected from the Preventive Child Health Care files of the Municipal Health Service. Results: The gestational age of the children with SLI (mean 39.6 wks, SD 0.1 wk) and for the comparison group (mean 39.4 wks, SD 0.1 wk) and the birthweight of children with SLI (mean 3330.4 g; SD 41.4 g) and for the comparison group (mean 3388.1 g; SD 39.8 g) were not statistically different; neither were other pregnancy and birth characteristics, with the exception of the Apgar scores (effect of group for Apgar score after 1 min p=0.045; after 5 min p=0.001). The difference in Apgar scores was larger for females than for males (effect of group × sex for Apgar score after 1 min p=0.049; after 5 min p=0.043). Interpretation: The relation between Apgar scores and SLI together with the influence of sex may be meaningful for predictive modelling and for understanding the causal pathway for SLI. abstract_id: PUBMED:31198730 Speech and language delay in children: Prevalence and risk factors. Context: Intelligible speech and language is a useful marker for the child's overall development and intellect. Timely identification of delay by primary care physicians can allow early intervention and reduce disability. Data from India on this subject is limited. Aims: To study the prevalence and risk factors of speech-language delay among children aged 1-12 years. Settings And Design: A cross-sectional study was conducted at the Pediatric outpatient department of a teaching hospital. Materials And Methods: Eighty-four children (42 children with delayed speech and 42 controls) aged 1-12 years were included. The guardians of these children were requested to answer a questionnaire. History of the child's morbidity pattern and the risk factors for speech delay were recorded. The child's developmental milestones were assessed. Statistical Analysis Used: Data entry was analyzed using SPSS software, version 16. Standard statistical tests were used. A p value of less than 0.05 was taken as statistically significant. Results: Speech and language delay was found in 42 out of 1658 children who attended the OPD.
The risk factors found to be significant were seizure disorder (P < 0.001), birth asphyxia (P = 0.019), oro-pharyngeal deformity (P = 0.012), multilingual family environment (P < 0.001), family history (P = 0.013), low paternal education (P = 0.008), low maternal education (P < 0.001), consanguinity (P < 0.001) and inadequate stimulation (P < 0.001). Conclusions: The prevalence of speech and language delay was 2.53%, and the medical risk factors were birth asphyxia, seizure disorder and oro-pharyngeal deformity. The familial causes were low parental education, consanguinity, positive family history, multilingual environment and inadequate stimulation. abstract_id: PUBMED:28321888 Specific language impairment is associated with maternal and family factors. Background: This study aimed to identify risk factors associated with specific language impairment (SLI). Methods: In a nested case-control design, 253 children attending special needs schools for severe speech and language difficulties in the Netherlands were matched for sex and date of birth with 253 children attending mainstream education. Data on perinatal, maternal and family issues were retrieved from well-child care files registered shortly after birth. Results: Children with SLI had younger mothers than children in the control group (mean 30 years 9 months vs. mean 31 years 9 months) (P = 0.02). Children with SLI were less frequently breastfed directly after birth (55% vs. 71%) (P = 0.0007) and were less frequently firstborns (33.3% vs. 46.2%) (P = 0.002). No statistically significant differences were found for any of the other risk factors. Conclusions: A relationship was found between SLI and maternal age, being breastfed and place in the birth order. Perinatal risk factors do not seem to be strongly associated with SLI. Answer: The language delay (LD) in severely neglected children under 3 years old is better explained by the specificity of risk factors rather than a cumulative risk model. A study involving sixty-eight severely neglected children and their mothers found that while the risk of LD increased with the cumulative count of the child's biological-psychological risk factors, the cumulative effect mainly reflected the specific impact of the child's cognitive development. When environmental risk factors were also considered, multivariate logistic regression established that cognitive development, the mother's own physical and emotional abuse experience as a child, and the mother's low acceptability level towards her child were linked to LD in severely neglected children (PUBMED:20413156). Furthermore, the pragmatic language skills (i.e., social communication skills) of 42-month-old neglected children were significantly compromised compared to their non-neglected peers, with 44% of the neglected children presenting significant pragmatic difficulties (PUBMED:30782015). This supports the need for interventions geared toward neglected children and their families to support the early development of their pragmatic skills. In contrast, a study examining prenatal, perinatal, and neonatal risk factors for specific language impairment (SLI) found that the vast majority of these complications do not play a clear causal role in childhood SLI. However, poor neonatal health may signify increased risk for SLI (PUBMED:24686440).
Additionally, factors such as maternal and paternal ages, developmental language disorder, intellectual disability, attention deficit hyperactivity disorder, sleep disorder, and EEG changes were found to be associated with the severity of autism spectrum disorder (ASD) and could impact communicative performance (PUBMED:29216607). Overall, these findings suggest that language development in severely neglected children is influenced by specific risk factors, particularly those related to cognitive development and the child's environmental context, including the mother's psychological availability and sensitivity towards the child. These insights highlight the importance of targeted interventions and government policies to address these specific risk factors and support language development in neglected children (PUBMED:20413156).
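A minimal sketch may help make the cumulative-versus-specific distinction in this answer concrete. The Python snippet below uses synthetic data and hypothetical variable names; it is not the analysis code from PUBMED:20413156. It fits both a cumulative-risk count model and a factor-by-factor logistic regression: when only one factor truly drives the outcome, the count model still shows a gradient, while the factor-wise model isolates the specific effect.

```python
# Illustrative sketch (not the authors' code): contrasting a cumulative-risk
# model with a specific-factor model for a binary outcome such as language
# delay. All data below are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
# Three hypothetical binary risk factors; only the first truly drives the outcome.
risk = rng.binomial(1, 0.4, size=(n, 3))
logit = -1.5 + 2.0 * risk[:, 0]          # specific effect of factor 1 only
delay = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Cumulative-risk model: outcome ~ count of risk factors present.
cum = sm.Logit(delay, sm.add_constant(risk.sum(axis=1))).fit(disp=0)

# Specific-factor model: outcome ~ each factor entered separately.
spec = sm.Logit(delay, sm.add_constant(risk)).fit(disp=0)

print("cumulative-count OR:", np.exp(cum.params[1]).round(2))
print("per-factor ORs:", np.exp(spec.params[1:]).round(2))
```

In a run like this the count model reports an apparent dose-response gradient even though only one factor matters, which mirrors the paper's observation that a cumulative effect can mainly reflect one specific factor.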
Instruction: Do people recognise mental illness? Abstracts: abstract_id: PUBMED:14504994 Do people recognise mental illness? Factors influencing mental health literacy. Background: Mental health literacy has been defined as the public's knowledge and the beliefs about mental disorders enhancing the ability to recognise specific disorders. Aims: Firstly, to determine whether the public recognises a person depicted in a vignette as mentally ill or as experiencing a crisis. Secondly, to reveal the factors influencing the correct recognition. Methods: Multiple logistic regression analysis of an opinion survey conducted in a representative population sample in Switzerland (n=844). Results: The depression vignette was correctly recognised by 39.8% whereas 60.2% of the respondents considered the person depicted as having a 'crisis.' The schizophrenia vignette was correctly identified by 73.6% of the interviewees. A positive attitude to psychopharmacology positively influenced the recognition of the two vignettes whereas a positive attitude to community psychiatry had the inverse effect. Moreover, for the depression vignette previous contact with mentally ill people had a positive influence on recognition. For the schizophrenia vignette, rigidity and interest in mass media had a negative influence. Conclusions: The low knowledge about mental disorders, particularly depression, confirms the importance and the need to increase mental health literacy. Furthermore, professionals must openly discuss illness models with their patients, especially emphasising the differences between illness and crisis. abstract_id: PUBMED:30590273 Associations between ability to recognise a mental health disorder and lived experience of mental illness in an Australian sample. Community awareness of mental illness is crucial for helping people access mental health support. The association between ability to recognise mental health disorders and lived experience is not well examined in Australian populations. Australian adults (n = 1265) provided responses to a vignette depicting a person with depression and self-reported a previous diagnosis of depression and/or anxiety. People who had lived experience with depression and/or anxiety had significantly higher levels of this aspect of mental health literacy than people without lived experience. Public mental health literacy promotion efforts should consider incorporating insights from people with lived experience of mental illness. abstract_id: PUBMED:35346301 Developing a post-discharge suicide prevention intervention for children and young people: a qualitative study of integrating the lived-experience of young people, their carers, and mental health clinicians. Background: Suicide in young people is a leading cause of death. Interventions that are reflexive, tailored, and developed in concert with this at-risk population are needed. This study aimed to integrate lived-experience into the design of a suicide prevention intervention delivered by phone to young people post-discharge from an emergency department (ED) for suicide risk or self-harm. Methods: A qualitative study was conducted at the Queensland Children's Hospital, Brisbane, Australia. Four focus groups with young people with lived-experience, parents or carers and ED mental health clinicians were conducted.
In total, 5 young people with lived-experience of suicidality (17-21 years, Mage = 19.20), 3 parents and carers with a lived-experience of caring for a young person with mental illness, and 10 ED mental health clinicians participated in focus groups. The first phase of qualitative analysis involved a phenomenological analysis and the second phase included a deductive content analysis. The paper follows the Consolidated Criteria for Reporting Qualitative Research. Results: In the first phase, a phenomenological analysis identified three foundational themes to structure future follow-up phone interventions: a person-centred focus, the phone-call dynamic, and the phone-call purpose. In the second phase, a deductive content analysis found that participants preferred an intervention that was structured, consistent, and finite. Moreover, an intervention that was authentic, able to facilitate and empower growing independence, and achievable for young people after an ED presentation was desired. Conclusions: Participants expressed their desire for a responsive, structured, and clearly focused phone call that would recognise the young person and parent/carer's needs while providing tailored support to ease transition from the ED to available community and family-led care. abstract_id: PUBMED:27552002 The health needs and healthcare experiences of young people trafficked into the UK. Young people who have been trafficked may have experienced significant trauma and violence but little is known about their health and healthcare needs. This UK study aimed to address that gap. It included a health survey and qualitative interviews with 29 young people aged 16-21 trafficked into the UK from other countries who were recruited through voluntary organisations and children's social services. These data were supplemented by interviews with relevant professionals. Over half the young people had been trafficked for sex work but sexual violence had also been experienced by those trafficked for domestic servitude and labour exploitation. Physical violence, threats, restrictions of liberty and deprivation were also widespread, as were experiences of physical and sexual violence prior to being trafficked. Five young women had become pregnant whilst trafficked; three were parents when interviewed. Two-thirds screened positive for high levels of psychological distress, including PTSD. Twelve reported suicidal thinking. Whilst some were keen for opportunities to talk to health professionals confidentially and wanted practitioners to treat their accounts as credible, others wanted to forget abusive experiences. Complex gatekeeping systems, language barriers and practitioners who failed to take them seriously limited access to healthcare. Support and advocacy were helpful in assisting these young people to navigate healthcare systems. Health professionals need to recognise and respond appropriately to trafficked young people's often complex mental health needs and refer them to relevant services, as well as facilitating care at later times when they might need support or be more ready to receive help. abstract_id: PUBMED:31755831 Managing health changes for people with a learning disability in a residential care home setting. Background: People with learning disability (LD) have complex comorbidities that develop at an earlier age than the general population and with which they are now living longer. Identification, assessment and management of these conditions are important but challenging.
Aim: To develop resources with care staff to enable them to recognise and manage changes and decline in the health of a person with an LD. Methods: Two resources (PIP-LD and CIRC) were developed through undertaking a literature review; networking with experts; and collaborating with staff in the care homes for people with an LD. Care staff then used these resources to review their residents. Findings: The PIP-LD and CIRC were used in 39 care homes. The PIP-LD empowered staff to meet people's immediate health needs, and the CIRC helped them to recognise changes or a decline. Conclusions: The combined use of the PIP-LD and the CIRC enabled care staff to recognise the signs and symptoms of each person's comorbidities early, and to identify and manage changes when their health declined. abstract_id: PUBMED:30746830 How well do older adults recognise mental illness? A literature review. Older adults tend to underutilise mental health services. Mental health literacy plays a critical role in identifying and overcoming barriers to accessing mental health care. The ability to recognise mental illness is an essential component of mental health literacy, with important implications for whether the person will seek professional help. We conducted a review of the literature on older adults' abilities to recognise mental illness. Of the 421 papers that were retrieved in the comprehensive search in PubMed, 32 studies met inclusion criteria. Studies were heterogeneous in terms of target population and methodology, yet findings show that older adults are less likely to correctly recognise mental disorders. Cueing older participants with mental labels improved their recognition abilities. Recognition was particularly poor among immigrant and ethnic/racial older adults, likely due to linguistic and cultural barriers. Our findings demonstrate that older adults show low levels of mental illness recognition and tend to view some illnesses as normal parts of aging. Findings emphasise the need for developing educational programs tailored by the specific phenomenology, conceptualisations and cultural meanings of mental illness among older adults, with attention to informal sources of information and social networks. abstract_id: PUBMED:31983937 Does Mental Illness Impact the Incidence of Crime and Victimisation among Young People? While the high prevalence of offending and victimisation among young people is well established, no study to date has compared official crime records of young people with mental illness with those without mental illness. This case linkage study sought to determine whether young people with a formal history of mental illness were more likely to have official histories of offending and victimisation than young people who had no recorded histories of mental illness. Results suggested that young people with a history of mental illness are particularly vulnerable to violence. While a weak association was established between violent offending and mental illness, a stronger relationship was found between victimisation and mental illness, especially violent victimisation. Consistent with literature regarding the victim-offender overlap, a history of offending placed young people at a substantially higher risk of victimisation. Implications of these results are discussed in the context of how public mental health services could better protect these vulnerable young people. abstract_id: PUBMED:37008456 Rural healthy people 2030: New decade, new challenges.
Rural Healthy People is a companion piece to the federal Healthy People initiative released once a decade to identify the most important Healthy People priorities for rural America, as identified by rural stakeholders, for the current decade. This study reports on the findings of Rural Healthy People 2030. The study relied on a survey of rural health stakeholders collected from July 12, 2021, to February 14, 2022, and: 1) identified the 20 Healthy People priorities most frequently selected as priorities for rural America, 2) studied the priorities that were most frequently selected as a "top 3" priority within each Healthy People 2030 category, and 3) investigated Healthy People 2030 priorities in terms of ranked importance for rural Americans. The analysis finds that for the first time across 3 decades of Rural Healthy People, a greater proportion of respondents selected "Mental Health and Mental Disorders" and "Addiction" as Healthy People priorities for rural America than did "Health Care Access and Quality". Even so, respondents ranked "Health Care Access and Quality" as the single-most important rural priority. "Economic Stability," a new priority within the Social Determinant of Health category, debuted within the 10 most frequently selected priorities for rural America for the coming decade. As public health practitioners, researchers, and policymakers work toward closing the urban-rural divide, the most important rural priorities to address in the coming decade are mental health and substance use disorders, access to high-quality health care services, and social determinants of health, such as economic stability. abstract_id: PUBMED:25953431 Rural Healthy People 2020: New Decade, Same Challenges. Purpose: The health of rural America is more important than ever to the health of the United States and the world. Rural Healthy People 2020's goal is to serve as a counterpart to Healthy People 2020, providing evidence of rural stakeholders' assessment of rural health priorities and allowing national and state rural stakeholders to reflect on and measure progress in meeting those goals. The specific aim of the Rural Healthy People 2020 national survey was to identify rural health priorities from among the Healthy People 2020 (HP2020) national priorities. Methods: Rural health stakeholders (n = 1,214) responded to a nationally disseminated web survey soliciting identification of the top 10 rural health priorities from among the HP2020 priorities. Stakeholders were also asked to identify objectives within each national HP2020 priority and express concerns or additional responses. Findings And Conclusions: Rural health priorities have changed little in the last decade. Access to health care continues to be the most frequently identified rural health priority. Within this priority, emergency services, primary care, and insurance generate the most concern. A total of 926 respondents identified access as the no. 1 rural health priority, followed by no. 2 nutrition and weight status (n = 661), no. 3 diabetes (n = 660), no. 4 mental health and mental disorders (n = 651), no. 5 substance abuse (n = 551), no. 6 heart disease and stroke (n = 550), no. 7 physical activity and health (n = 542), no. 8 older adults (n = 482), no. 9 maternal, infant and child health (n = 449), and no. 10 tobacco use (n = 429). abstract_id: PUBMED:37029492 Quality of Life of people with Schizophrenia: A meta-analysis.
Background: Schizophrenia is a severe, chronic mental disorder that causes many psychosocial problems. In order to reveal these problems, it is necessary to measure the quality of life of people with schizophrenia. Aim: The aim of this meta-analysis is to compare the quality of life of people with schizophrenia and healthy subjects. Methods: A literature search was conducted in the Web of Science Core Collection database, covering January 2000 to March 2021. The systematic search provided 464 potentially relevant studies. The final sample consisted of 18 studies. Results: The results of using a random effects model for analysis indicated that schizophrenia subjects showed considerably lower quality of life scores compared to healthy controls. Conclusion: Determining the quality of life of people with schizophrenia will help us to create effective psychosocial intervention programs. Answer: The recognition of mental illness by the general public varies and is influenced by several factors. A study conducted in Switzerland found that only 39.8% of respondents correctly recognized a vignette depicting depression, while 73.6% correctly identified a vignette depicting schizophrenia (PUBMED:14504994). Factors that positively influenced the recognition of mental illness included a positive attitude towards psychopharmacology and, in the case of depression, previous contact with mentally ill people. Conversely, a positive attitude to community psychiatry had an inverse effect on recognition, and for schizophrenia, rigidity and interest in mass media negatively influenced recognition (PUBMED:14504994). In Australia, individuals with lived experience of depression and/or anxiety were found to have significantly higher levels of mental health literacy, suggesting that personal experience with mental health issues can improve the ability to recognize mental disorders (PUBMED:30590273). Similarly, integrating the lived experiences of young people, their carers, and mental health clinicians is seen as beneficial in developing tailored suicide prevention interventions (PUBMED:35346301). However, recognition of mental illness can be particularly challenging among certain populations. For instance, older adults are less likely to correctly recognize mental disorders, and recognition is even poorer among immigrant and ethnic/racial older adults, likely due to linguistic and cultural barriers (PUBMED:30746830). Additionally, young people with a history of mental illness are more vulnerable to violence and victimization, which may be associated with their mental health status (PUBMED:31983937). In the context of rural health, mental health and mental disorders have emerged as top priorities, indicating a recognition of the importance of mental health issues in these communities (PUBMED:37008456, PUBMED:25953431). Overall, while there is some level of recognition of mental illness among the public, it is clear that knowledge and understanding of mental health issues are not uniform and can be influenced by various factors, including personal experience, attitudes towards mental health care, and cultural and linguistic backgrounds (PUBMED:14504994, PUBMED:30590273, PUBMED:35346301, PUBMED:30746830, PUBMED:31983937, PUBMED:37008456, PUBMED:25953431).
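The quality-of-life meta-analysis above (PUBMED:37029492) reports pooling with a random effects model. One standard estimator for such pooling is DerSimonian-Laird; the sketch below walks through its arithmetic on made-up effect sizes, since the abstract does not provide the 18 studies' data.

```python
# Minimal DerSimonian-Laird random-effects pooling, for illustration only.
# The effect sizes and variances below are hypothetical, not the studies
# from PUBMED:37029492.
import numpy as np

y = np.array([-0.9, -1.2, -0.7, -1.0])   # hypothetical standardized mean differences
v = np.array([0.04, 0.06, 0.05, 0.03])   # their sampling variances

w = 1.0 / v                               # fixed-effect weights
y_fe = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fe) ** 2)           # Cochran's Q heterogeneity statistic
k = len(y)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (k - 1)) / c)        # method-of-moments between-study variance

w_re = 1.0 / (v + tau2)                   # random-effects weights
pooled = np.sum(w_re * y) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled effect {pooled:.2f} (95% CI {pooled - 1.96*se:.2f} to {pooled + 1.96*se:.2f})")
```

The random-effects weights shrink toward equality as between-study heterogeneity (tau2) grows, which is why a random effects model gives wider intervals than a fixed-effect analysis of the same studies.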
Instruction: Does adolescent weight status predict problematic substance use patterns? Abstracts: abstract_id: PUBMED:24933140 Does adolescent weight status predict problematic substance use patterns? Objectives: To identify underlying patterns of cigarette smoking, alcohol use, and marijuana use in young adulthood, and ascertain whether adolescent overweight or obesity status predicts problematic substance use patterns. Methods: The study included 15,119 participants from the National Longitudinal Study of Adolescent Health (Add Health) at Wave 1 (11-19 years) and Wave 3 (18-26 years). Latent class analysis was conducted. Results: Participants were classified into a Low Substance Use (35%), Regular Smokers (12%), High-risk Alcohol use (33%), or High Substance Use (20%) class. Overweight/obese adolescents had a greater likelihood of belonging to the Regular Smokers class. Conclusions: Overweight/obese adolescents are at higher risk of engaging in regular cigarette smoking without problematic alcohol or marijuana use. abstract_id: PUBMED:33190110 Altered resting functional connectivity patterns associated with problematic substance use and substance use disorders during adolescence. Background: Adolescence is typified by increasing rates of substance use and the development of substance use disorders (SUD). Aberrant connectivity between cortical regions involved in executive control, and subcortical regions has been suggested to be associated with SUD and problematic substance use among adolescents. Few studies, however, have investigated system-level or whole-brain functional connectivity (FC) in order to test this hypothesis. Methods: In a sample of 114 adolescents (mean age = 17.62 years, SD = 1.23, 61F) from the community, the present study used resting-state functional magnetic resonance imaging and independent component analysis to study executive control-subcortical network (ECN-SCN) coupling in adolescent SUD (n=18) and problematic substance use (n=34). In addition, whole-brain FC analyses were also conducted. Results: Problematic substance use, but not SUD, was associated with increased negative ECN-SCN coupling (p = 0.026). The whole-brain FC analysis showed insula-associated hypoconnectivity in the SUD group (p = 0.037), which was negatively correlated with frequency of substance use. Conclusions: Findings implicate different neural circuitry underlying adolescent SUD versus problematic use. Greater negative coupling between the SCN and ECN in adolescents with problematic substance use could underlie risk for future development of SUD or other mental health problems. Although we cannot infer directionality, hypoconnectivity within the insula in adolescents with SUD may indicate addiction-related alterations in interoceptive awareness or impairments in decision-making. abstract_id: PUBMED:25662370 Problematic Internet use is associated with substance use in young adolescents. Aim: This study examined whether problematic Internet use was associated with substance use among young adolescents and assessed whether this association accounted for the use of tobacco, alcohol, cannabis and other drugs. Methods: Using the Internet Addiction Test, we divided a representative sample of 3067 adolescents in Switzerland (mean age 14 years) into regular and problematic Internet users. We performed a bivariate analysis and two logistic regression models, to analyse substances separately and simultaneously, and developed a log-linear model to define the associations between significant variables. 
Results: Problematic Internet users were more likely to be female, to use substances, to come from nonintact families, to report poor emotional well-being and to be below average students. The first model showed significant associations between problematic users and each substance, with adjusted odds ratios of 2.05 for tobacco, 1.72 for alcohol, 1.94 for cannabis and 2.73 for other drugs. Only smoking remained significant in the second model, with an adjusted odds ratio of 1.71. Conclusion: Problematic Internet use is associated with other risky behaviours and may be an important early predictor of adolescent substance use. Therefore, it should be included in the psychosocial screening of adolescents. abstract_id: PUBMED:37741073 The role of child maltreatment and adolescent victimization in predicting adolescent psychopathology and problematic substance use. Background: Maltreated children are more likely to experience adolescent victimization, which may underlie the association between maltreatment and adolescent psychopathology and substance use. Objective: To determine whether number of adolescent victimization types predicts adolescent psychopathology and problematic substance use over and above number of child maltreatment subtypes; whether adolescent victimization mediates the relations between maltreatment and change in adolescent psychopathology and problematic substance use; and whether maltreatment moderates the relation between adolescent victimization and changes in these outcomes. Participants And Setting: Participants were 545 (295 maltreated, 250 non-maltreated; 328 males, 217 females) racially and ethnically diverse (52.8% Black, 27.5% White, 12.8% Bi-racial; 13.4% Latino/a) children and families from the Rochester, New York, USA area assessed across three waves of data (Wave 1, Mage = 7.6 years; Wave 2, Mage = 13.8 years; Wave 3, Mage = 16.2 years). Methods: Maltreatment was coded at Wave 1 using Department of Human Services records. Adolescents self-reported psychopathology, problematic substance use, and victimization at Waves 2 and 3. Results: Structural equation modeling revealed that adolescent victimization predicted adolescent psychopathology (β = 0.24, p < .001) and problematic substance use (β = 0.27, p < .001) over and above child maltreatment. Adolescent victimization did not mediate the association between child maltreatment and change in psychopathology and problematic substance use, and child maltreatment did not moderate the association between adolescent victimization and these outcomes. Conclusions: We discuss the importance of future research utilizing multi-wave designs to examine relations between these constructs and of assessing for more proximal victimization. abstract_id: PUBMED:31181470 Unhealthy weight control behaviors and substance use among adolescent girls: The harms of weight stigma. Rationale: Unhealthy weight control behaviors (UWCBs) often predict future substance use, resulting in a comorbidity that predicts later health consequences. Objective: The current study aimed to examine whether weight stigma and its associated harms magnify or attenuate the effect of UWCBs at baseline on substance use at 10-year follow-up among girls and to elucidate factors related to the perceived harms of weight stigma. Methods: Data from 1147 adolescent girls from Project EAT (Eating and Activity in Teens and Young Adults) were analyzed at baseline and 10-year follow-up.
Participants were split into three groups at baseline: no weight stigma; weight stigma only (being teased by peers or parents); and weight stigma with perceived harms (e.g., being bothered by teasing). Moderation and stratified regression analyses examined the role of weight stigma and its perceived harms on the relationship between UWCBs and substance use. ANOVA models aimed to elucidate factors related to the perceived harms of weight stigma. Results: Moderation analyses revealed marginally significant effects of group membership on the effect of UWCB on substance use (p = .08). Stratified regression results indicated that UWCBs at baseline predicted substance use at 10-year follow-up among girls in the weight stigma with perceived harms group (p = .005), but not in the no weight stigma or weight stigma only groups. Girls in the weight stigma with perceived harms group reported higher weight concern, depressive symptoms and BMIs than girls who reported no weight stigma or weight stigma only. Conclusions: Health initiatives should seek to reduce weight stigma and its associated harms to prevent substance use in girls and women. Clinicians working with adolescent girls with UWCB should inquire about experiences with weight stigma and its harms to assess substance use risk. Due to the marginally significant moderation, results should be interpreted with caution. abstract_id: PUBMED:24999059 Problematic substance use in urban adolescents: role of intrauterine exposures to cocaine and marijuana and post-natal environment. Background: Linkages between intrauterine exposures to cocaine and marijuana and adolescents' problematic substance use have not been fully delineated. Methods: Prospective longitudinal study with assessors unaware of intrauterine exposure history followed 157 urban participants from birth until late adolescence. Level of intrauterine exposures was identified by mother's report and infant's meconium. Problematic substance use, identified by the Voice Diagnostic Interview Schedule for Children (V-DISC) or the Audio Computer Assisted Self-Interview (ACASI) and urine assay, was a composite encompassing DSM-IV indication of tolerance, abuse, and dependence on alcohol, marijuana, and tobacco and any use of cocaine, glue, or opiates. Results: Twenty percent (32/157) of the sample experienced problematic substance use by age 18 years, of whom the majority (22/157) acknowledged abuse, tolerance or dependence on marijuana with or without other substances. Structural equation models examining direct and indirect pathways linking a Cox survival model for early substance initiation to a logistic regression model found effects of post-natal factors including childhood exposure to violence and household substance use, early youth substance initiation, and ongoing youth violence exposure contributing to adolescent problematic substance use.
Conclusion: We did not identify direct relationships between intrauterine cocaine or marijuana exposure and problematic substance use, but did find potentially modifiable post-natal risk factors also noted to be associated with problematic substance use in the general population, including earlier substance initiation, exposure to violence and to household substance use. abstract_id: PUBMED:32638695 Different factors predict adolescent substance use versus adult substance abuse: Lessons from a social-developmental approach. This 17-year prospective study applied a social-developmental lens to the challenge of distinguishing predictors of adolescent-era substance use from predictors of longer term adult substance use problems. A diverse community sample of 168 individuals was repeatedly assessed from age 13 to age 30 using test, self-, parent-, and peer-report methods. As hypothesized, substance use within adolescence was linked to a range of likely transient social and developmental factors that are particularly salient during the adolescent era, including popularity with peers, peer substance use, parent-adolescent conflict, and broader patterns of deviant behavior. Substance abuse problems at ages 27-30 were best predicted, even after accounting for levels of substance use in adolescence, by adolescent-era markers of underlying deficits, including lack of social skills and poor self-concept. The factors that best predicted levels of adolescent-era substance use were not generally predictive of adult substance abuse problems in multivariate models (either with or without accounting for baseline levels of use). Results are interpreted as suggesting that recognizing the developmental nature of adolescent-era substance use may be crucial to distinguishing factors that predict socially driven and/or relatively transient use during adolescence from factors that predict long-term problems with substance abuse that extend well into adulthood. abstract_id: PUBMED:33652883 Substance Use Disorder Status Moderates the Association between Personality Traits and Problematic Mobile Phone/Internet Use. Background: Associations between personality traits and problematic smartphone use (PSU) among individuals with substance use disorder (SUD) have not been widely investigated. The current study aims to assess whether SUD status moderates the association between personality traits and PSU. Methods: The study group included 151 individuals with SUD and a normative sample (NS) comprised of 554 non-SUD students. The following self-report questionnaires were used: the Mobile Phone Problem Use Scale (MPPUS-10) to assess problematic smartphone use (PSU), the Internet Addiction Test (IAT) to assess intensity of internet use, and the NEO Five-Factor Inventory (NEO-FFI) to assess personality traits. Results: SUD status moderated the association between neuroticism and openness to new experiences on PSU. That is, greater neuroticism and openness were significantly associated with more excessive PSU among the NS. In the SUD group, greater openness was a significant protective factor against PSU. Moderation results were similar when using the IAT (which was significantly correlated with MPPUS) as an outcome. Conclusions: The presence of SUD may influence how personality traits are associated with problematic mobile phone/internet use. Given that this is among the first studies examining this topic, findings should be replicated with additional studies.
abstract_id: PUBMED:29670501 Psychoactive Substance Use and Problematic Internet Use as Predictors of Bullying and Cyberbullying Victimization. Research exploring the relationship between addictions and experiences of bullying suggests that problem behaviors may generally be associated with an increased risk of victimization. The aim of the present study was to examine the role of psychoactive substance use, excessive Internet use, and social support in both traditional offline bullying and online "cyberbullying" victimization in a nationally representative sample of adolescents (N = 6237; 51% male; Mage = 16.62 years, SD = 0.95). Results demonstrated that traditional bullying victimization was associated with cyberbullying victimization. Furthermore, psychoactive substance use and problematic Internet use predicted both traditional bullying and cyberbullying victimization. Finally, perceived social support was found to be an important protective factor against both traditional and cyberbullying victimization. However, psychoactive substance use and problematic Internet use accounted for only a small proportion of variance in victimization. abstract_id: PUBMED:24188039 Dyadic coregulation and deviant talk in adolescent friendships: interaction patterns associated with problematic substance use in early adulthood. In a sample of 711 ethnically diverse adolescents, the observed interpersonal dynamics of dyadic adolescent friendship interactions were coded to predict early adulthood tobacco, alcohol, and marijuana use. Deviant discussion content within the interactions was coded along with dyadic coregulation (i.e., interpersonal coordination, attention synchrony). Structural equation modeling revealed that, as expected, deviant content in adolescent interactions at age 16-17 years was strongly predictive of problematic use of tobacco, alcohol, and marijuana at ages 22 and 23. Although dyadic coregulation was not directly predictive of early adulthood substance use, it did moderate the impact of deviant talk within the dyad on future alcohol and marijuana use. For these substances, high levels of dyadic coregulation increased the risk associated with high levels of deviant talk for problematic use in early adulthood. Results held when comparing across genders and across ethnic groups. The results suggest that these interpersonal dynamics are associated with developmental trajectories of risk for or resilience to peer influence processes. Answer: Yes, adolescent weight status does predict problematic substance use patterns. According to a study that included 15,119 participants from the National Longitudinal Study of Adolescent Health, overweight or obese adolescents had a greater likelihood of belonging to the Regular Smokers class, indicating that they are at higher risk of engaging in regular cigarette smoking without problematic alcohol or marijuana use (PUBMED:24933140).
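The answer above rests on a latent class analysis (PUBMED:24933140), which assigns respondents to unobserved classes based on their patterns of binary indicators. The toy expectation-maximization sketch below uses hypothetical data; it is not the authors' model, which would also handle survey weights and model selection, but it illustrates the mechanics.

```python
# Toy EM for a latent class model on binary substance-use indicators,
# sketched to illustrate the kind of analysis named in PUBMED:24933140.
# Data, class count, and starting values are all hypothetical; degenerate
# (empty) classes are possible in toy runs like this.
import numpy as np

rng = np.random.default_rng(1)
X = rng.binomial(1, 0.3, size=(500, 3))   # stand-in for smoking/alcohol/marijuana flags
K = 4                                      # e.g., four latent use classes

pi = np.full(K, 1.0 / K)                   # class prevalences
theta = rng.uniform(0.2, 0.8, size=(K, 3)) # P(item = 1 | class)

for _ in range(100):
    # E-step: posterior class membership for each respondent (Bayes rule).
    like = np.prod(theta[None] ** X[:, None] * (1 - theta[None]) ** (1 - X[:, None]), axis=2)
    resp = like * pi
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: update prevalences and item-response probabilities.
    pi = resp.mean(axis=0)
    theta = (resp.T @ X) / resp.sum(axis=0)[:, None]

print("estimated class sizes:", np.round(pi, 2))
```

The reported class proportions in the study (35%, 12%, 33%, 20%) correspond to the converged prevalences pi, and the class labels ("Regular Smokers", etc.) are read off the item-response probabilities theta.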
Instruction: Intravenous regional anesthesia administered by the operating plastic surgeon: is it safe and efficient? Abstracts: abstract_id: PUBMED:18040193 Intravenous regional anesthesia administered by the operating plastic surgeon: is it safe and efficient? Experience of a medical center. Background: Intravenous regional anesthesia (Bier's block) is an effective method of providing anesthesia for extremity surgery. This technique is most suitable for short-duration, less than 60-minute surgical procedures in distal extremities. Earlier studies recommended that intravenous regional anesthesia be performed by anesthesiologists who are familiar with the technique and fully trained to treat its complications. This study was conducted to demonstrate that intravenous regional anesthesia administered by the operating plastic surgeon is safe, efficient, and simple to perform. Methods: A 5-year retrospective chart review (January of 2000 to December of 2004) was undertaken. The study included patients who underwent surgical procedures and were administered intravenous regional anesthesia by the surgeon. Results: Four hundred forty-eight patients were included in the study, and 479 operations of 483 scheduled were completed under intravenous regional anesthesia performed by the operating surgeon. Intravenous regional anesthesia was efficient in 478 of 479 cases (99.8 percent). Tourniquet-related technical problems were noted in five of 483 cases (1 percent), resulting in cancellation of four operations, with no reported consequent anesthetic toxicity. Minor complications were reported in six of 479 cases (1.2 percent). No major complications occurred. Conclusions: Despite earlier worries about the safety of intravenous regional anesthesia if not administered by anesthesiologists, the authors found that intravenous regional anesthesia represents a safe and efficient technique of anesthesia for extremity surgery when performed carefully and appropriately by the operating surgeon. The authors suggest that plastic surgeons familiarize themselves with this technique, which is simple to perform. This could probably result in lower overall costs and greater autonomy for the surgeon. abstract_id: PUBMED:24156887 Intravenous regional anesthesia with long-acting local anesthetics. An update. Intravenous regional anesthesia is a widely used technique for brief surgical interventions, primarily on the upper limbs and, less frequently, on the lower limbs. It began being used at the beginning of the 20th century, when Bier injected procaine as a local anesthetic. The technique to accomplish anesthesia has not changed much since then, although different drugs, particularly long-acting local anesthetics, such as ropivacaine and levobupivacaine in low concentrations, were introduced. Additionally, drugs like opioids, muscle relaxants, paracetamol, neostigmine, magnesium, ketamine, clonidine, and ketorolac, have all been investigated as adjuncts to intravenous regional anesthesia, and were found to be fairly useful in terms of a faster onset of operative anesthesia and longer lasting perioperative analgesia. The present article provides an overview of current knowledge with emphasis on long-acting local anesthetic drugs. abstract_id: PUBMED:26647089 Comparison of tramadol and lornoxicam in intravenous regional anesthesia: a randomized controlled trial. Background And Objectives: Tourniquet pain is one of the major obstacles for intravenous regional anesthesia.
We aimed to compare tramadol and lornoxicam used in intravenous regional anesthesia as regards their effects on the quality of anesthesia, tourniquet pain and postoperative pain as well. Methods: After ethics committee approval, 51 patients of ASA physical status I-II aged 18-65 years were enrolled. The patients were divided into three groups. Group P (n=17) received 3mg/kg 0.5% prilocaine; group PT (n=17) 3mg/kg 0.5% prilocaine+2mL (100mg) tramadol and group PL (n=17) 3mg/kg 0.5% prilocaine+2mL (8mg) lornoxicam for intravenous regional anesthesia. Sensory and motor block onset and recovery times were noted, as well as tourniquet pain and postoperative analgesic consumption. Results: Sensory block onset times in the groups PT and PL were shorter, whereas the corresponding recovery times were longer than those in the group P. Motor block onset times in the groups PT and PL were shorter than that in the group P, whereas recovery time in the group PL was longer than those in the groups P and PT. Tourniquet pain onset time was shortest in the group P and longest in the group PL. There was no difference regarding tourniquet pain among the groups. Group PL displayed the lowest analgesic consumption postoperatively. Conclusion: Adding tramadol and lornoxicam to prilocaine for intravenous regional anesthesia produces favorable effects on sensory and motor blockade. Postoperative analgesic consumption can be decreased by adding tramadol and lornoxicam to prilocaine in intravenous regional anesthesia. abstract_id: PUBMED:24093649 Costs of regional and general anesthesia: what the plastic surgeon needs to know. Although regional anesthesia techniques seem to expand the opportunity for cost savings when executing plastic surgery procedures, cost allocation is not a simple business. Equivalence must first be demonstrated, and the patient's perception is integral to assigning value to an intervention. Opportunity costs cannot be ignored when the plastic surgeon assumes the role of the anesthesiologist. Most importantly, the system must be modified to optimize the cost savings realized through the intervention. This article presents an in-depth look into the multiple factors that must be taken into consideration when assessing costs related to anesthesia. abstract_id: PUBMED:864892 Intravenous regional anesthesia. A series of 33 patients who underwent intravenous regional anesthesia for treatment of orthopedic and surgical procedures is reported. Intravenous regional anesthesia is a safe and reliable alternative to general or other regional anesthesia techniques for use in the emergency department. The technique is easily mastered and requires minimal special equipment. The agent of choice is lidocaine in a dosage of 3 mg/kg administered as a 0.5% solution. Complications are few and are usually related to rapid systemic absorption of the anesthetic agent or minor mistakes in technique. abstract_id: PUBMED:23335648 Office-based outpatient plastic surgery utilizing total intravenous anesthesia. Background: Office-based plastic surgery procedures continue to increase in popularity and a range of anesthetic techniques can be utilized, from light conscious sedation to general anesthesia requiring intubation. Total intravenous anesthesia (TIVA) is well suited for the office environment because it allows for moderate to deep sedation without the need for intubation.
Objective: The authors review plastic surgery procedures performed in an outpatient office-based operating room under TIVA to assess patient outcomes and complications. Methods: A retrospective chart review was conducted of patients who underwent surgical procedures performed by 2 senior surgeons at American Association for Accreditation of Ambulatory Surgery Facilities-certified outpatient operating rooms between 2003 and 2011. TIVA was always administered by a board-certified anesthesiologist because it required the use of propofol. Conscious sedation with midazolam and fentanyl was always administered by the plastic surgeon. Patient outcomes and complications were analyzed to assess the safety of TIVA in an office operating room. Results: A total of 2611 procedures were performed on 2006 patients. No deaths, cardiac events, or transfers to the hospital occurred in any patients, regardless of the type of sedation utilized. Six hundred forty-two patients were given TIVA, which included propofol and/or ketamine, in addition to midazolam and fentanyl. The remaining 1364 patients received conscious sedation. There was 1 documented case (0.05%; 1/2006) of deep vein thrombosis/pulmonary embolism in a patient who had an implant exchange under TIVA; this patient was taking oral contraceptive pills at the time of surgery. Conclusions: Office-based surgery is an attractive option for many patients. This review suggests that a variety of procedures can be performed in a safe manner under TIVA. Although patient selection for outpatient surgery is paramount, TIVA offsets the risks of general anesthesia and is associated with minimal postoperative complications. Level Of Evidence: 4. abstract_id: PUBMED:4091609 Intravenous regional anesthesia. Intravenous regional anesthesia is a simple and effective method of analgesia of an extremity by intravenous injection of a local anesthetic while the circulation is interrupted. Although the pharmacokinetics of the injected drug is relatively well known, its precise mode of action is still not well understood. The complications of the technique are uncommon and generally minor ones. They occur immediately after tourniquet release. The relative and absolute contra-indications to the use of intravenous regional anesthesia are presented. abstract_id: PUBMED:2486181 Ketamine for intravenous regional anesthesia. We studied ketamine intravenous regional anesthesia of the upper extremity in volunteers using concentrations of 0.5%, 0.3%, and 0.2%. Ketamine 0.5 and 0.3% produced adequate intravenous regional anesthesia. Anesthesia was inadequate when a 0.2% concentration was used. However, although the 0.3% concentration provides complete sympathetic, sensory, and motor blockade when injected into the isolated extremity, unpleasant psychotomimetic effects after the release of the tourniquet limit the usefulness of this use of ketamine. Ketamine cannot be recommended for intravenous regional anesthesia unless these unpleasant side effects are abolished or controlled by means of pharmacologic adjuvants. abstract_id: PUBMED:26768929 Comparison of tramadol and lornoxicam in intravenous regional anesthesia: a randomized controlled trial. Background And Objectives: Tourniquet pain is one of the major obstacles for intravenous regional anesthesia. We aimed to compare tramadol and lornoxicam used in intravenous regional anesthesia as regards their effects on the quality of anesthesia, tourniquet pain and postoperative pain as well. 
Methods: After ethics committee approval, 51 patients of ASA physical status I-II aged 18-65 years were enrolled. The patients were divided into three groups. Group P (n = 17) received 3mg/kg 0.5% prilocaine; group PT (n = 17) 3mg/kg 0.5% prilocaine + 2 mL (100mg) tramadol and group PL (n = 17) 3mg/kg 0.5% prilocaine + 2 mL (8 mg) lornoxicam for intravenous regional anesthesia. Sensory and motor block onset and recovery times were noted, as well as tourniquet pain and postoperative analgesic consumption. Results: Sensory block onset times in the groups PT and PL were shorter, whereas the corresponding recovery times were longer than those in the group P. Motor block onset times in the groups PT and PL were shorter than that in the group P, whereas recovery time in the group PL was longer than those in the groups P and PT. Tourniquet pain onset time was shortest in the group P and longest in the group PL. There was no difference regarding tourniquet pain among the groups. Group PL displayed the lowest analgesic consumption postoperatively. Conclusion: Adding tramadol and lornoxicam to prilocaine for intravenous regional anesthesia produces favorable effects on sensory and motor blockade. Postoperative analgesic consumption can be decreased by adding tramadol and lornoxicam to prilocaine in intravenous regional anesthesia. abstract_id: PUBMED:21897500 Clonidine as an adjunct to intravenous regional anesthesia: A randomized, double-blind, placebo-controlled dose ranging study. Background: The addition of clonidine to lidocaine intravenous regional anesthesia (IVRA) has been previously reported to improve postoperative analgesia in patients undergoing upper extremity surgery. Our objective was to perform a dose ranging study in order to determine the optimal dose of clonidine used with lidocaine in IVRA. Design & Setting: We performed a double-blinded randomized placebo-controlled study with 60 patients scheduled for elective endoscopic carpal tunnel release under IVRA with 50 ml lidocaine 0.5%. University-affiliated outpatient surgery center. Data collected in operating rooms, recovery room, and by telephone after discharge from surgery center. Materials & Methods: Sixty adult ASA I or II patients undergoing outpatient endoscopic carpal tunnel release under intravenous regional anesthesia. Patients were randomized into five study groups receiving different doses of clonidine in addition to 50 ml 0.5% lidocaine in their IVRA. Group A received 0 mcg/kg, group B 0.25 mcg/kg, group C 0.5 mcg/kg, group D 1.0 mcg/kg and group E 1.5 mcg/kg of clonidine. Intraoperative fentanyl, recovery room pain scores, time to first postsurgical analgesic, total number of acetaminophen/codeine tablets consumed postsurgery, incidence of sedation, hypotension and bradycardia. Results & Conclusions: There was no benefit from any dose of clonidine compared to placebo. There were no clonidine-related side effects seen within the dose range studied. In short-duration minor hand surgery, the addition of clonidine to lidocaine-based intravenous regional anesthesia provides no measurable benefit. Answer: Intravenous regional anesthesia, also known as Bier's block, is a technique used for providing anesthesia for extremity surgery, particularly suitable for short-duration procedures in distal extremities.
A study conducted to assess the safety and efficiency of intravenous regional anesthesia administered by the operating plastic surgeon found that it is indeed a safe and efficient method. The retrospective chart review included 448 patients and 479 operations, with a high efficiency rate of 99.8 percent and no major complications reported. Only minor complications occurred in 1.2 percent of the cases, and tourniquet-related technical problems were noted in 1 percent of cases, leading to the cancellation of four operations without any consequent anesthetic toxicity. The study concluded that intravenous regional anesthesia can be safely and efficiently performed by the operating surgeon when done carefully and appropriately, suggesting that plastic surgeons should familiarize themselves with this technique for potential cost savings and greater autonomy (PUBMED:18040193).
Instruction: Night-time frequency, sleep disturbance and general health-related quality of life: is there a relation? Abstracts: abstract_id: PUBMED:19054172 Night-time frequency, sleep disturbance and general health-related quality of life: is there a relation? Objectives: We conducted a community-based study to determine the relationship among night-time frequency, sleep disturbance and general health-related quality of life (GHQL). Methods: A total of 2271 participants, men and women, aged 41-70 and randomly selected in three Japanese towns completed a postal questionnaire survey. This questionnaire included the International Prostate Symptom Score, the overall incontinence score of the International Consultation of Incontinence Questionnaire Short Form for lower urinary tract symptoms, the Pittsburgh Sleep Quality Index for sleep problems, the Medical Outcomes Study Short Form-8 for GHQL, and medical history of disease, cigarette smoking, and alcohol consumption. A multiple regression model was used for statistical analysis, and P < 0.05 was considered significant. Results: Although night-time frequency by itself was closely associated with most aspects of GHQL, this association disappeared in four domains (general health perception, vitality, mental health and emotional role) and in the two summary scores of the Medical Outcomes Study Short Form-8 after inclusion of the influence of sleep problems represented by the total score on the Pittsburgh Sleep Quality Index. However, three domains (physical function, physical role, and social function) remained significantly associated with night-time frequency. Sleep problems were by far the worst risk factor for the deterioration of GHQL. Conclusions: Night-time frequency appeared to be associated with GHQL mainly by affecting sleep conditions, a symptom that independently influenced some aspects of GHQL. abstract_id: PUBMED:36612478 Health, Occupational Stress, and Psychosocial Risk Factors in Night Shift Psychiatric Nurses: The Influence of an Unscheduled Night-Time Nap. Background: Occupational stress and shift work (including night shift work) are associated with physical and psychological health consequences in healthcare providers in general and those working in psychiatric establishments in particular. The aim of this study was to assess the impact of occupational risk factors and unscheduled night-time naps on self-reported health disorders among nurses working in a French psychiatric hospital. Methods: We performed a 12-month observational field study of experienced nurses working at Philippe Pinel Psychiatric Hospital (Amiens, France) between September 2018 and September 2019. A comparative descriptive study of two groups of nurses who filled out a questionnaire on health and occupational stress was performed: nurses working permanently on the night shift (the night shift group, who took unscheduled naps), and nurses rotating weekly between morning and afternoon shifts (the day shift group). Results: The night and day shift groups comprised 53 and 30 nurses, respectively. There were no intergroup differences in health disorders, sleep quality, occupational stress, and risk factor perception. Correlation analyses showed that in the day shift group, a low level of support from supervisors was associated with elevated levels of distress, anxiety, and gastrointestinal disorders. In the night shift group, a greater overall workload was associated with elevated levels of anxiety and distress.
These findings indicated that the nurses on the night shift had adapted well to their working conditions. Conclusions: An organizational strategy including an unscheduled night-time nap might improve health among night shift nurses. abstract_id: PUBMED:38235926 Latent class analysis of the sleep quality of night shift nurses and impact of shift-related factors on the occupational stress and anxiety. Aims: The objective of this study is to explore the various latent categories within the sleep quality of night shift nurses and to investigate whether shift-related factors predispose nurses to higher levels of occupational stress and anxiety. Design: This is a cross-sectional study. Methods: From November to December 2020, registered nurses from 18 tertiary hospitals and 16 secondary hospitals in Chongqing were selected through convenience sampling for this study. Latent class analysis was used to investigate the sleep quality of nurses working night shifts. Furthermore, univariate analysis and logistic multivariate analysis were utilized to identify the contributing factors to occupational stress and anxiety. Results: The four latent categories of Pittsburgh Sleep Quality Index for night shift nurses were identified as 'Low Sleep Disorder Group' (56.34%), 'Moderate Sleep Disorder Group' (37.27%), 'High Sleep Disorder Non-Reliant on Sleeping medication Group' (4.89%) and 'High Sleep Disorder Reliant on Sleeping medication Group' (1.50%). The results showed that a night-shift frequency of 3-4 times per month, night-shift durations of 9-12 h, a sleep time delay after the night shift of ≥2 h, and a total sleep time after the night shift of less than 4 h were shift-related factors that increased the levels of occupational stress and anxiety. Conclusion: The sleep quality of night shift nurses demonstrates heterogeneity and can be classified into four latent categories. Higher frequency of night shifts, extended work hours and insufficient rest time are all associated with increased levels of occupational stress and anxiety. Impact: By identifying the four latent categories of sleep quality among night shift nurses, this study sheds light on the relationship between sleep patterns and levels of occupational stress and anxiety. These findings have important implications for healthcare institutions in the management of nurse well-being and work schedules. Patient Or Public Contribution: No patient or public contribution. abstract_id: PUBMED:32243038 Sleep among primary care physicians-Association with overtime, night duties and strategies to counteract poor sleep quality. Physicians belong to the risk group for sleep disorders as a result of work-related stress, excessive working time, a large amount of on-call duties and shift work. Poor sleep quality of healthcare providers threatens not only their safety, but also the safety of their patients. This study examines whether there are any differences in how working long hours or night duties relates to self-reported sleep time, sleep quality and daytime sleepiness in primary care physicians. We analyzed data from a survey of 807 participants. Respondents were divided into four groups based on their reported work hours per week and number of nights on duty per month. Overtime was associated with shorter sleep time and worse subjective sleep quality even when compared with that of participants who work on night duties.
All investigated groups of participants reported short sleep latencies, which suggests that sleep debt is a common problem in this population. Surprisingly, in the case of poor sleep quality, participants rarely used recommended methods such as regular physical activity or specialist advice. The most frequently reported answer was that they do nothing to improve sleep, and every third participant uses hypnotics to do so. These results suggest that primary care physicians, despite being at high risk for sleep disorders due to working overtime and at night, pay little attention to their sleep quality and do not use recommended strategies to improve it. abstract_id: PUBMED:28243077 A cross-sectional survey of night-time symptoms and impact of sleep disturbance on symptoms and health status in patients with COPD. Background: Sleep disturbance has been termed the forgotten dimension of chronic obstructive pulmonary disease (COPD), but it is clinically important as most patients are affected. This study examined the incremental burden of illness associated with sleep disturbance in COPD, with reference to health status and disease impact, and the degree of concordance between physicians and patients in reporting night-time COPD symptoms. Methods: Real-world data from >2,500 patients with COPD consulting for routine care were derived from respiratory Disease-Specific Programs conducted in Europe, the USA, and China. Night-time COPD symptom burden was assessed from patient and physician perspectives. Patients completed the Jenkins Sleep Evaluation Questionnaire (JSEQ), COPD assessment test (CAT), and EuroQol five-dimension questionnaire (EQ-5D). A regression approach was used to analyze the relationship between sleep disturbance (JSEQ score) and health status (EQ-5D score), adjusting for confounding variables. Results: Frequency of night-time symptoms was high and was higher when reported by patients than physicians (69.7% and 65.7%, respectively). According to the JSEQ, 73.3% of patients had trouble falling asleep, 75.3% experienced night-time awakenings, 70.6% had trouble staying asleep, and 67.7% woke after a usual amount of sleep feeling worn out. Over half (52.7%) of patients received maintenance treatment where night-time symptom relief was stated by the physician as a treatment aim. A one-unit increase in JSEQ score was associated with increased CAT score (0.7 units in Europe and the USA; 0.2 units in China). Sleep disturbance was significantly associated with worse health status (odds ratio [OR]: 1.27, 95% confidence interval [CI]: 1.18, 1.36, P<0.001 for Europe; OR: 1.23, 95% CI: 1.12, 1.38, P<0.001 for the USA; and OR: 1.19, 95% CI: 1.10, 1.28, P<0.001 for China). Conclusions: Night-time symptoms and sleep disturbance are common among patients with COPD, and sleep disturbance has a detrimental impact on COPD symptoms and health status. abstract_id: PUBMED:24348032 Impact of night-time symptoms in COPD: a real-world study in five European countries. Background: Sleep quality is often poor in patients with chronic obstructive pulmonary disease (COPD). A cross-sectional European survey investigated the prevalence of night-time symptoms in COPD to evaluate the level of disconnect between physician and patient perceptions of the presence of night-time symptoms, and to compare the characteristics of patients with and without night-time symptoms. Methods: A total of 251 primary care physicians and 251 respiratory specialists completed record forms on 2,807 patients with COPD.
The forms captured information on patient demographics, lung function, COPD severity, and symptoms. Patients completed questionnaires on the time of day when their COPD symptoms bothered them, and the impact of COPD on their ability to get up in the morning and on sleep. Data were compared between groups (those with and without night-time symptoms) using t-tests or Wilcoxon signed rank tests. The kappa statistic was used to assess the level of disconnect between physician and patient perceptions of the impact of night-time symptoms. Results: Most patients (78%) reported night-time disturbance. Patients with night-time symptoms experienced more daytime breathlessness (mean modified Medical Research Council dyspnea scale score 2.4 versus 1.1) and exacerbations in the previous 12 months (mean 1.7 versus 0.4), and received more maintenance therapy (mean of 2.8 versus 2.3 products) than those without. Concordance between the frequency of physician-reported (67.9% of patients) and patient-reported (68.5% of patients) night-time symptoms was good. Physicians significantly underestimated the impact of COPD on the patient's ability to get up in the morning and on sleep (fair-moderate agreement). Physician-reported night-time symptoms were present for 41.2% of patients who could be categorized by Global Initiative for Chronic Obstructive Lung Disease (GOLD) group (n=937), increasing from 20.9% of those in the low-risk group to 77.4% of those in the high-risk group. Conclusion: Patients with COPD experience night-time symptoms regardless of GOLD group, which impact their ability to get up in the morning and their sleep quality. abstract_id: PUBMED:19375656 The night shift: a risk factor for health and quality of life in nursing staff. Objectives: To study shift-related differences (day shift vs. night shift) in health and quality of life in nursing staff in hospitals in the Catalan public health system. Method: We performed a cross-sectional multicenter study in a sample of 476 nursing staff in the wards and special services of five Catalan public hospitals working for at least 6 consecutive months on the day shift or night shift. The nurses completed a validated, self-administered questionnaire on quality of life (M. Ruiz and E. Baca) and another questionnaire on health-related aspects such as sleep, working conditions, and demographic variables. Results: Nurses working on the night shift showed a higher prevalence of appetite disturbance (45.2% vs 34.4%; p=0.01) and varicose veins (46.6% vs 36.4%; p=0.008). Sleeping disorders were also more frequent on the night shift, including insomnia and sleep fragmentation, with no differences in those who slept during the day (22.3% vs 33.7%) or night (17.6% vs 30%) with respect to the day shift (12.2% vs 22.6%). Multivariate analysis of the results of the quality of life questionnaire revealed the night shift to be associated with the dimensions of social support (OR: 1.17; 95% CI, 1.01-3.01), physical/psychological well-being (OR: 1.04; 95% CI, 1.004-1.07) and leisure time (OR: 1.07; 95% CI, 1.003-1.1), although the overall score was similar. Conclusions: The night shift is associated with a higher incidence of varicose veins, appetite disturbance and sleep disorders, as well as alterations related to social support, leisure time, and physical and physiological well-being.
Aims And Objectives: The purpose of this study was to examine the sleep quality and health-related quality of life (HRQOL) in patients after renal transplantation and to explore the relationship between the quality of sleep and the HRQOL. Background: Sleep disorders are still an important clinical problem after renal transplantation. Previous studies mainly focused on patients' sleep quality before kidney transplant. More studies are needed to document sleep quality after renal transplantation. Design: A cross-sectional design was used in this study. Methods: A convenience sample of renal transplant recipients was recruited at an outpatient transplant clinic of a general hospital in Beijing, China. The Pittsburgh Sleep Quality Index (PSQI) was used to measure quality of sleep. The Medical Outcomes Study 36-item Short Form (MOS SF-36) was used to measure health-related quality of life. Results: The average PSQI score of the 204 renal transplant recipients was 5.81±3.52, significantly higher than the norm. Fifty (24.5%) recipients were classified as having poor sleep quality (global PSQI > 7). The mean scores of renal transplant recipients for SF-36 Mental Component Summary (MCS) and Physical Component Summary (PCS) were 47.57±6.71 and 48.26±9.66 respectively. Compared with residents in Sichuan province, recipients' scores for SF-36 dimensions were significantly lower, except for the dimension of mental health. SF-36 scores of poor sleepers (PSQI > 7) were significantly lower than those of good sleepers (PSQI ≤ 7) in both the MCS and PCS. Significant differences exist between the groups in physical function, bodily pain, vitality, and mental health dimensions. Conclusions: Sleep quality and HRQOL of patients after renal transplantation were lower than the norm. Poor sleep is associated with lower HRQOL. Relevance To Clinical Practice: Health professionals need to pay attention to sleep quality and HRQOL in renal transplant recipients and take appropriate measures to improve patients' sleep quality and HRQOL. abstract_id: PUBMED:37728682 Association of participants who screened positive for night eating syndrome with physical health, sleep problems, and weight status in an Australian adult population. Background: Night eating syndrome (NES) is a unique eating disorder characterised by evening hyperphagia and nocturnal ingestions which cause significant distress and/or impairment in functioning. Despite the growing literature, NES remains poorly understood and underdiagnosed. As such, this study aims to compare the prevalence of physical health conditions in participants with NES when compared to participants without an eating disorder (ED) and participants with other eating disorders (including anorexia nervosa (AN), binge eating disorder (BED) and bulimia nervosa (BN)) in a general population Australian sample of adults. Methods: The data for this study were obtained from the 2017 Health Omnibus Survey (HOS), a multi-stage, cross-sectional survey conducted by Harrison Research in South Australia. This current study focused on 2547 participants over 18 years of age and specific questions from this population survey including those related to participant demographics and health. Results: This study identified that participants who screened positive for night eating syndrome (spNES), when compared to participants with other eating disorders (ED) or no ED diagnosis, were significantly more likely to be older, be female, have lower levels of education and have lower household income.
Additionally, the spNES group was significantly associated with sleep apnoea (p = 0.031), insomnia or other sleep problems (p < 0.0001), increased BMI (p < 0.0001), increased levels of pain/discomfort and lower physical health-related quality of life. Hypertension, hypercholesterolemia, and diabetes were not significantly associated with the spNES group or the "other ED" group, which included participants with AN, BED and BN. Conclusions: Several physical health problems were found to be significantly associated with the spNES group including sleep problems, increased BMI, increased levels of pain and lower self-reported physical health-related quality of life. Consequently, future research exploring the complex interaction between NES and these medical conditions may provide further insight into the diagnosis, screening tools and management of NES. Additionally, this study highlights the need for future studies which use larger population-based samples. Level Of Evidence: Level III. Evidence obtained from well-designed cohort or case-control analytic studies. abstract_id: PUBMED:28619184 Sleep quality subtypes predict health-related quality of life in children. Objective: This paper aimed to investigate whether distinct sleep quality subtypes predicted health-related quality of life in a nonclinical sample of children. Methods: This paper utilized data from two waves of the Longitudinal Study of Australian Children, a cohort study that follows a representative population of children in Australia. This paper examined data from Waves 4 and 5 of the LSAC (covering the period 2010-2012) and included 3974 children aged 10-11 years at Wave 4 (51.4% male). Multiple dimensions of sleep quality were assessed using a combination of child- and parent-reported measures. Health-related quality of life (HRQOL) was assessed through the Pediatric Quality of Life Inventory. Results: Latent class analysis indicated six distinct sleep quality classes in children, namely good sleep, moderate sleep quality, mild sleep disturbances, short sleep, long sleep, and disordered sleep. In general, the disordered sleep and minor sleep disturbance classes had poorer HRQOL, which worsened over time. The long sleep and moderate sleep quality classes also showed some decreases in HRQOL over time. Conclusions: This study demonstrates that there are distinct sleep quality subtypes in children that could have implications for HRQOL. These findings may inform future strategies to promote improved sleep and HRQOL in children. Answer: Yes, there is a relation between night-time frequency, sleep disturbance, and general health-related quality of life (GHQL). A community-based study in Japan found that night-time frequency was closely associated with most aspects of GHQL. However, when the influence of sleep problems, as represented by the total score on the Pittsburgh Sleep Quality Index, was included in the analysis, the association between night-time frequency and GHQL disappeared in several domains (general health perception, vitality, mental health, and emotional role) and in the two summary scores of the Medical Outcomes Study Short Form-8. Despite this, three domains (physical function, physical role, and social function) remained significantly associated with night-time frequency, indicating that sleep problems were a major risk factor for the deterioration of GHQL (PUBMED:19054172).
Furthermore, studies on healthcare professionals, such as nurses working night shifts, have shown that occupational stress and shift work are associated with physical and psychological health consequences. An observational field study of nurses in a French psychiatric hospital suggested that an organizational strategy including an unscheduled night-time nap might improve health among night shift nurses (PUBMED:36612478). Another study on night shift nurses found that higher frequency of night shifts, extended work hours, and insufficient rest time were associated with increased levels of occupational stress and anxiety (PUBMED:38235926). In primary care physicians, overtime and night duties were associated with shorter sleep time and worse subjective sleep quality, suggesting that sleep debt is a common problem in this population (PUBMED:32243038). Similarly, in patients with chronic obstructive pulmonary disease (COPD), night-time symptoms and sleep disturbance were common and had a detrimental impact on COPD symptoms and health status (PUBMED:28243077, PUBMED:24348032). In nursing staff, the night shift was associated with a higher incidence of varicose veins, appetite disturbance, sleep disorders, and alterations related to social support, leisure time, and physical and physiological well-being (PUBMED:19375656). Renal transplant recipients with poor sleep quality also had lower health-related quality of life (HRQOL) (PUBMED:26629133). Additionally, participants who screened positive for night eating syndrome were significantly more likely to have sleep problems, increased BMI, increased levels of pain, and lower physical health-related quality of life (PUBMED:37728682).
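To make the adjustment logic in PUBMED:19054172 concrete: the crude association between night-time frequency and GHQL can vanish once sleep quality enters the regression model. The following minimal Python sketch is not from the study; the variables, coefficients and sample size are invented solely to illustrate how adjusting for a mediator (here, PSQI) attenuates a crude regression coefficient.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 2000
    # Hypothetical mechanism: nocturia worsens sleep (higher PSQI),
    # and GHQL is driven by sleep quality alone.
    nocturia = rng.poisson(1.5, n)
    psqi = 3 + 1.2 * nocturia + rng.normal(0, 2, n)
    ghql = 70 - 2.5 * psqi + rng.normal(0, 5, n)
    df = pd.DataFrame({"nocturia": nocturia, "psqi": psqi, "ghql": ghql})

    crude = smf.ols("ghql ~ nocturia", data=df).fit()
    adjusted = smf.ols("ghql ~ nocturia + psqi", data=df).fit()
    print(crude.params["nocturia"])     # about -3: strong crude association
    print(adjusted.params["nocturia"])  # near 0 once PSQI is in the model

Under this simulated mechanism the night-time frequency coefficient is large in the crude model and collapses in the adjusted one, mirroring the paper's conclusion that night-time frequency affects GHQL mainly through its effect on sleep.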
Instruction: Does polyploidy facilitate long-distance dispersal? Abstracts: abstract_id: PUBMED:24694830 Does polyploidy facilitate long-distance dispersal? Background And Aims: The ability of plant lineages to reach all continents contributes substantially to their evolutionary success. This is exemplified by the Poaceae, one of the most successful angiosperm families, in which most higher taxa (tribes, subfamilies) have global distributions. Due to the old age of the ocean basins relative to the major angiosperm radiations, this is only possible by means of long-distance dispersal (LDD), yet the attributes of lineages with successful LDD remain obscure. Polyploid species are over-represented in invasive floras and in the previously glaciated Arctic regions, and often have wider ecological tolerances than diploids; thus polyploidy is a candidate attribute of successful LDD. Methods: The link between polyploidy and LDD was explored in the globally distributed grass subfamily Danthonioideae. An almost completely sampled and well-resolved species-level phylogeny of the danthonioids was used, and the available cytological information was assembled. The cytological evolution in the clade was inferred using maximum likelihood (ML) as implemented in ChromEvol. The biogeographical evolution in the clade was reconstructed using ML and Bayesian approaches. Key Results: Numerous increases in ploidy level are demonstrated. A Late Miocene-Pliocene cycle of polyploidy is associated with LDD, and in two cases (the Australian Rytidosperma and the American Danthonia) led to secondary polyploidy. While it is demonstrated that successful LDD is more likely in polyploid than in diploid lineages, a link between polyploidization events and LDD is not demonstrated. Conclusions: The results suggest that polyploids are more successful at LDD than diploids, and that the frequent polyploidy in the grasses might have facilitated the extensive dispersal among continents in the family, thus contributing to their evolutionary success. abstract_id: PUBMED:33811320 Molecular signatures of long-distance oceanic dispersal and the colonization of Pacific islands in Lycium carolinianum. Premise: Long-distance dispersal has been important in explaining the present distributions of many plant species. Despite being infrequent, such dispersal events have considerable evolutionary consequences, because bottlenecks during colonization can result in reduced genetic diversity. We examined the phylogeographic history of Lycium carolinianum, a widespread taxon that ranges from southeastern North America to several Pacific islands, with intraspecific diversity in sexual and mating systems. Methods: We used Bayesian, likelihood, and coalescent approaches with nuclear and plastid sequence data and genome-wide single nucleotide polymorphisms to reconstruct the dispersal history of this species. We also compared patterns of genetic variation in mainland and island populations using single nucleotide polymorphisms and allelic diversity at the S-RNase mating system gene. Results: Lycium carolinianum is monophyletic and dispersed once from the North American mainland, colonizing the Pacific islands ca. 40,100 years ago. This dispersal was accompanied by a loss of genetic diversity in SNPs and the S-RNase locus due to a colonization bottleneck and the loss of self-incompatibility. 
Additionally, we documented at least two independent transitions to gynodioecy: once following the colonization of the Hawaiian Islands and loss of self-incompatibility, and a second time associated with polyploidy in the Yucatán region of Mexico. Conclusions: Long-distance dispersal via fleshy, bird dispersed fruits best explains the unusually widespread distribution of L. carolinianum. The collapse of diversity at the S-RNase locus in island populations suggests that self-fertilization may have facilitated the subsequent colonization of Pacific islands following a single dispersal from mainland North America. abstract_id: PUBMED:22323427 Diverse spore rains and limited local exchange shape fern genetic diversity in a recently created habitat colonized by long-distance dispersal. Background And Aims: Populations established by long-distance colonization are expected to show low levels of genetic variation per population, but strong genetic differentiation among populations. Whether isolated populations indeed show this genetic signature of isolation depends on the amount and diversity of diaspores arriving by long-distance dispersal, and time since colonization. For ferns, however, reliable estimates of long-distance dispersal rates remain largely unknown, and previous studies on fern population genetics often sampled older or non-isolated populations. Young populations in recent, disjunct habitats form a useful study system to improve our understanding of the genetic impact of long-distance dispersal. Methods: Microsatellite markers were used to analyse the amount and distribution of genetic diversity in young populations of four widespread calcicole ferns (Asplenium scolopendrium, diploid; Asplenium trichomanes subsp. quadrivalens, tetraploid; Polystichum setiferum, diploid; and Polystichum aculeatum, tetraploid), which are rare in The Netherlands but established multiple populations in a forest (the Kuinderbos) on recently reclaimed Dutch polder land following long-distance dispersal. Reference samples from populations throughout Europe were used to assess how much of the existing variation was already present in the Kuinderbos. Key Results: A large part of the Dutch and European genetic diversity in all four species was already found in the Kuinderbos. This diversity was strongly partitioned among populations. Most populations showed low genetic variation and high inbreeding coefficients, and were assigned to single, unique gene pools in cluster analyses. Evidence for interpopulational gene flow was low, except for the most abundant species. Conclusions: The results show that all four species, diploids as well as polyploids, were capable of frequent long-distance colonization via single-spore establishment. This indicates that even isolated habitats receive dense and diverse spore rains, including genotypes capable of self-fertilization. Limited gene flow may conserve the genetic signature of multiple long-distance colonization events for several decades. abstract_id: PUBMED:29158342 Fates of angiosperm species following long-distance dispersal: Examples from American amphitropical Polemoniaceae. Premise Of The Study: Following establishment after long-distance dispersal, species may experience stasis, accumulate changes leading to new species identity, diversify into multiple species, interact with related species to form novel species, and even become extirpated. 
We examined each species of temperate Polemoniaceae in South America via the literature and new analyses to better understand the fates of species in this family after their dispersal from North America. Methods: We reviewed the literature for the 15 species of Polemoniaceae in South America amphitropically disjunct from their relatives in North America. We conducted DNA sequence analyses to infer relationships, timing of dispersal, and processes involved since dispersal in Microsteris gracilis, three Gilia, two Giliastrum, and three Collomia. Analyses included construction of haplotype networks and phylogenetic trees using maximum likelihood and Bayesian inference. Key Results: For all species examined in detail, origins in South America are compatible with dispersal via epizoochory from ca. 0.092-19.46 million years ago. Most species in South America are unique relative to their North American congeners, yet few have radiated into two or more species. Relative stasis, divergence, and hybridization with, and without, allopolyploid formation have occurred postdispersal in Polemoniaceae, as well as extirpation following at least brief establishment. Conclusions: Polemoniaceae that have established in South America share many features likely inherited from their North American progenitors, but some traits may have arisen in situ in specific taxa, such as cleistogamy, self-incompatibility, and the annual habit, evidencing the rich nature of diversification processes. abstract_id: PUBMED:29578295 Intercontinental dispersal and whole-genome duplication contribute to loss of self-incompatibility in a polyploid complex. Premise Of The Study: Angiosperm species often shift from self-incompatibility to self-compatibility following population bottlenecks. Across the range of a species, population bottlenecks may result from multiple factors, each of which may affect the geographic distribution and magnitude of mating-system shifts. We describe how intercontinental dispersal and genome duplication facilitate loss of self-incompatibility. Methods: Self and outcross pollinations were performed on plants from 24 populations of the Campanula rotundifolia polyploid complex. Populations spanned the geographic distribution and three dominant cytotypes of the species (diploid, tetraploid, hexaploid). Key Results: Loss of self-incompatibility was associated with both intercontinental dispersal and genome duplication. European plants were largely self-incompatible, whereas North American plants were intermediately to fully self-compatible. Within both European and North American populations, loss of self-incompatibility increased as ploidy increased. Ploidy change and intercontinental dispersal both contributed to loss of self-incompatibility in North America, but range expansion did not affect self-incompatibility within Europe or North America. Conclusions: When species are subject to population bottlenecks arising through multiple factors, each factor can contribute to self-incompatibility loss. In a widespread polyploid complex, the loss of self-incompatibility can be predicted by the cumulative effects of whole-genome duplication and intercontinental dispersal. abstract_id: PUBMED:27520931 Historical biogeography of the fern genus Deparia (Athyriaceae) and its relation with polyploidy. The wide geographical distribution of many fern species is related to their high dispersal ability. However, few studies have surveyed biological traits that could contribute to colonization success after dispersal.
In this study, we applied phylogenetic approaches to infer historical biogeography of the fern genus Deparia (Athyriaceae, Eupolypods II). Because polyploids are suggested to have better colonization abilities and are abundant in Deparia, we also examined whether polyploidy could be correlated with long-distance dispersal events and whether polyploidy could play a role in these dispersals/establishment and range expansion. Maximum likelihood and Bayesian phylogenetic reconstructions were based on a four-region combined cpDNA dataset (rps16-matK IGS, trnL-L-F, matK and rbcL; a total of 4252 characters) generated from 50 ingroup (ca. 80% of the species diversity) and 13 outgroup taxa. Using the same sequence alignment and maximum likelihood trees, we carried out molecular dating analyses. The resulting chronogram was used to reconstruct ancestral distribution using the DEC model and ancestral ploidy level using ChromEvol. We found that Deparia originated around 27.7 Ma in continental Asia/East Asia. Vicariant speciation might account for the disjunctive distribution of East Asia-northeast North America. There were multiple independent long-distance dispersals to Africa/Madagascar (at least once), Southeast Asia (at least once), south Pacific islands (at least twice), Australia/New Guinea/New Zealand (at least once), and the Hawaiian Islands (at least once). In particular, the long-distance dispersal to the Hawaiian Islands was associated with polyploidization, and the dispersal rate was slightly higher in polyploids than in diploids. Moreover, we found five species showing recent infraspecific range expansions, all of which took place concurrently with polyploidization. In conclusion, our study provides the first investigation using phylogenetic and biogeographic analyses to explore the link between historical biogeography and ploidy evolution in a fern genus, and our results imply that polyploids might be better colonizers than diploids. abstract_id: PUBMED:22334449 Tracing the origin of Gulf Coast Phragmites (Poaceae): a story of long-distance dispersal and hybridization. Premise Of The Study: Long-distance dispersal can affect speciation processes in two opposing ways. Dispersal can promote geographic isolation or it can bring together geographically distant and distantly related genotypes, thus counteracting local differentiation. We used the Gulf Coast of North America (GC), a "hot spot" of reed diversity and evolutionary dynamics, as a model system to study the diversification processes within the invasive, cosmopolitan, polyploid grass Phragmites. Methods: Genetic diversity was studied using collections representing all species of the genus and from all continents (except Antarctica). A range of molecular markers, including chloroplast and nuclear sequences, microsatellites, and AFLPs, was analyzed to detect DNA variation from the population to the species level and to infer phylogenetic relationships across continents. Key Results: An interspecific hybrid, Phragmites mauritianus × P. australis, and four P. australis cp-DNA haplotypes from Africa, Europe, and North America have been dispersed to the GC and interbreed with each other. Conclusions: Long-distance dispersal and weak breeding barriers appear to be recurring phenomena, not only in the GC, but worldwide.
We present data strongly suggesting that interspecific hybridization and introgression among different Phragmites species take place and appear to have contributed significantly to the diversification processes within the genus. Hence, the application of traditional species concepts within Phragmites might be inappropriate. abstract_id: PUBMED:30591708 Phylogeography of western Mediterranean Cymbalaria (Plantaginaceae) reveals two independent long-distance dispersals and entails new taxonomic circumscriptions. The Balearic Islands, Corsica and Sardinia (BCS) constitute biodiversity hotspots in the western Mediterranean Basin. Oligocene connections and long-distance dispersal events have been suggested to explain the presence of endemic species shared among BCS. One of them is Cymbalaria aequitriloba, which, together with three additional species, constitutes a polyploid clade endemic to BCS. Combining amplified fragment length polymorphism (AFLP) fingerprinting, plastid DNA sequences and morphometrics, we inferred the phylogeography of the group and evaluated the species' current taxonomic circumscriptions. Based on morphometric and AFLP data we propose a new circumscription for C. fragilis to additionally comprise a group of populations with intermediate morphological characters previously included in C. aequitriloba. Consequently, we suggest changing the IUCN category of C. fragilis from critically endangered (CR) to near threatened (NT). Both morphology and AFLP data support the current taxonomy of the single island endemics C. hepaticifolia and C. muelleri. The four species had a common origin in Corsica-Sardinia, and two long-distance dispersal events to the Balearic Islands were inferred. Finally, plastid DNA data suggest that interspecific gene flow took place where two species co-occur. abstract_id: PUBMED:29024107 Multispeed genome diploidization and diversification after an ancient allopolyploidization. Hybridization and genome doubling (allopolyploidy) have led to evolutionary novelties as well as to the origin of new clades and species. Despite the importance of allopolyploidization, the dynamics of postpolyploid diploidization (PPD) at the genome level has been only sparsely studied. The Microlepidieae (MICR) is a crucifer tribe of 17 genera and c. 56 species endemic to Australia and New Zealand. Our phylogenetic and cytogenomic analyses revealed that MICR originated via an intertribal hybridization between ancestors of Crucihimalayeae (n = 8; maternal genome) and Smelowskieae (n = 7; paternal genome), both native to the Northern Hemisphere. The reconstructed ancestral allopolyploid genome (n = 15) probably originated in northeastern Asia or western North America during the Late Miocene (c. 10.6-7 million years ago) and reached the Australian mainland via long-distance dispersal. In Australia, the allotetraploid genome diverged into at least three main subclades exhibiting different levels of PPD and diversity: 1.25-fold descending dysploidy (DD) of n = 15 → n = 12 (autopolyploidy → 24) in perennial Arabidella (3 species), 1.5-fold DD of n = 15 → n = 10 in the perennial Pachycladon (11 spp.) and 2.1-3.75-fold DD of n = 15 → n = 7-4 in the largely annual crown-group genera (42 spp. in 15 genera). These results are among the first to demonstrate multispeed genome evolution in taxa descending from a common allopolyploid ancestor.
It is suggested that clade-specific PPD can operate at different rates and efficacies and can be tentatively linked to life histories and the extent of taxonomic diversity. abstract_id: PUBMED:15770232 To succeed globally, disperse locally: effects of local pollen and seed dispersal on tetraploid establishment. Newly formed tetraploid plants in sympatry with their diploid progenitors should face significant obstacles to persistence and population establishment because of low-fitness triploids formed by cross-ploidy pollinations. Prior models have found restrictive conditions for a minority tetraploid subpopulation to persist. A stochastic spatial model, parameterized using snow buttercups (Ranunculus adoneus), was used to examine the influence of limited seed and pollen dispersal distances on the success of minority tetraploids and the interaction of these factors with different rates of self-pollination and tetraploid advantage. Higher rates of self-pollination and increased tetraploid advantage increase the probability of tetraploid persistence. Limiting the dispersal of seeds and pollen further increases the positive impact of any given level of self-pollination and tetraploid advantage. Taxa with short-distance seed and pollen dispersal should face much less stringent barriers to sympatric polyploid speciation than taxa with long-distance dispersal patterns. With short-distance seed and pollen dispersal, polyploid speciation should be possible in the absence of ecological differentiation or recurrent polyploid formation through unreduced gametes. Answer: Yes, polyploidy does appear to facilitate long-distance dispersal (LDD) in plants. Studies have shown that polyploid species are more successful at LDD than diploids. For instance, in the globally distributed grass subfamily Danthonioideae, a Late Miocene-Pliocene cycle of polyploidy is associated with LDD, suggesting that the frequent polyploidy in grasses might have facilitated their extensive dispersal among continents, contributing to their evolutionary success (PUBMED:24694830). Similarly, in the fern genus Deparia, polyploidy has been correlated with long-distance dispersal events, and the dispersal rate was slightly higher in polyploids than in diploids (PUBMED:27520931). Moreover, the collapse of diversity at the S-RNase locus in island populations of Lycium carolinianum suggests that self-fertilization, which is often associated with polyploidy, may have facilitated the subsequent colonization of Pacific islands following a single dispersal from mainland North America (PUBMED:33811320). Furthermore, the study of the fern genus Deparia also found that polyploids might be better colonizers than diploids, implying that polyploidy could play a role in successful dispersal/establishment and range expansion (PUBMED:27520931). In the case of the Campanula rotundifolia polyploid complex, loss of self-incompatibility was associated with both intercontinental dispersal and genome duplication, suggesting that polyploidy can contribute to self-compatibility, which in turn may facilitate successful colonization after LDD (PUBMED:29578295). Additionally, the phylogeography of western Mediterranean Cymbalaria revealed two independent long-distance dispersals, and the study of the Microlepidieae tribe showed that an ancestral allopolyploid genome reached the Australian mainland via long-distance dispersal (PUBMED:30591708; PUBMED:29024107). 
Lastly, the study of Gulf Coast Phragmites indicated that long-distance dispersal and weak breeding barriers, which are often associated with polyploidy, are recurring phenomena that have contributed significantly to the diversification within the genus (PUBMED:22334449). In summary, the evidence from various studies supports the idea that polyploidy can facilitate long-distance dispersal in plants, potentially contributing to their successful colonization and diversification. Polyploidy may enhance the ability of plant species to establish in new environments after LDD by providing genetic and reproductive advantages, such as increased ecological tolerances, self-fertilization, and hybrid vigor (PUBMED:24694830; PUBMED:27520931; PUBMED:29578295). Moreover, the study of tetraploid establishment in Ranunculus adoneus suggests that taxa with short-distance seed and pollen dispersal, which is a condition that can be associated with polyploidy, face less stringent barriers to sympatric polyploid speciation than taxa with long-distance dispersal patterns. This indicates that even with local dispersal, polyploids can succeed globally (PUBMED:15770232). In conclusion, the body of research indicates that polyploidy is a trait that can enhance the capacity for long-distance dispersal in plants, thereby playing a significant role in their global distribution and evolutionary success.
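The ChromEvol analyses cited above (PUBMED:24694830; PUBMED:27520931) treat chromosome-number evolution as a continuous-time Markov chain with rates for single gains, single losses and whole-genome duplication. The sketch below is a toy Python version of that model class, not ChromEvol's actual interface: the state range, rate values and branch length are invented for illustration, and the real software additionally estimates the rates by maximum likelihood over a phylogeny.

    import numpy as np
    from scipy.linalg import expm

    counts = np.arange(4, 31)           # states: haploid numbers n = 4..30
    k = len(counts)
    gain, loss, dup = 0.05, 0.05, 0.02  # assumed rates per unit branch length

    Q = np.zeros((k, k))                # infinitesimal rate matrix
    for i, n in enumerate(counts):
        if i + 1 < k:
            Q[i, i + 1] += gain         # ascending dysploidy: n -> n + 1
        if i - 1 >= 0:
            Q[i, i - 1] += loss         # descending dysploidy: n -> n - 1
        j = np.searchsorted(counts, 2 * n)
        if j < k and counts[j] == 2 * n:
            Q[i, j] += dup              # polyploidization: n -> 2n
        Q[i, i] = -Q[i].sum()           # each row must sum to zero

    P = expm(Q * 10.0)                  # transition probabilities along a branch of length 10
    i7, i14 = 7 - 4, 14 - 4             # offsets of states n = 7 and n = 14
    print(P[i7, i14])                   # chance a lineage starting at n = 7 sits at n = 14

Fitting a model of this kind to tip chromosome counts on a dated tree is what allows studies such as the Deparia analysis to place polyploidization events along the phylogeny and relate them to dispersal.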
Instruction: Do Homeopathic Pathogenetic Trials generate recognisable and reproducible symptom pictures? Abstracts: abstract_id: PUBMED:24685415 Do Homeopathic Pathogenetic Trials generate recognisable and reproducible symptom pictures?: Results from a pilot pathogenetic trial of Ozone 30c. Background: Homeopathic Pathogenetic Trials (HPTs) are a pillar of homeopathy, a key source of the symptoms characteristic of a particular homeopathic medicine. Homeopaths choose homeopathic medicines by comparing these remedy pictures with the symptoms the patient is presenting. Thus, recognition of these symptom sets underpins the clinical practice of homeopathy. Objective: To test whether HPTs generate consistent and recognisable sets of symptoms in consecutive trials. Design: Practising homeopaths, blinded to the homeopathic medicine under investigation, were given the set of symptoms generated during an unpublished HPT and asked to identify the homeopathic medicine used. Homeopathic Trial Substance: Ozone, prepared by the homeopathic method to the ultramolecular dilution of 30c (a 10^-60 dilution), was chosen at random from twenty potential medicines. Results: Seven practising homeopaths were asked to make three guesses as to the identity of the remedy, initially from the full list of possible remedies (N = 2372). Two of the seven homeopaths guessed the identity of the remedy correctly (p < 0.0001). Subsequently, when their choice of possible medicines was restricted to a list of 20, the same two homeopaths selected the correct medicine; however, none of the other practising homeopaths did so (p = 0.2). Discussion: The selection of the correct homeopathic medicine from the unrestricted list (N = 2372 medicines) by two homeopaths is noteworthy given that the homeopathic medicine used during the HPT was diluted well beyond Avogadro's number and would not be expected to produce any detectable or recognisable symptomatology. Possible reasons why the remaining five homeopaths did not guess correctly are discussed. Conclusion: The results show that practising homeopaths may be able to correctly identify a homeopathic medicine from the set of symptoms generated during an HPT. This suggests that such symptom pictures generated by taking an ultramolecular homeopathic medicine are recognisable and specific to the substance taken. Since identification of the remedy was based on past HPT information held in the materia medica, this demonstrates that HPT-generated symptom pictures are reproducible, thus validating the HPT methodology. These promising preliminary findings warrant replication; possible improvements to the trial design to be incorporated in future studies were identified. abstract_id: PUBMED:18955242 Brief homeopathic pathogenetic experimentation: a unique educational tool in Brazil. In homeopathy, many difficulties are encountered in understanding theoretical presuppositions because they represent different paradigms from those of hegemonic science. In our medical school, we developed a brief homeopathic pathogenetic experimentation course to be added as curricular content and a didactic method in homeopathic disciplines to add practical experience to the theoretical approach of homeopathic presuppositions. In accordance with the premises laid out by Hahnemann, the father of homeopathy, brief pathogenetic experimentation was offered on a voluntary basis for students who were free of chronic diseases and who had not regularly used medication in the last 3 months.
The clinical test, either crossed or sequential (randomized and blind), was used as a study model. Single weekly doses of a homeopathic medicine at 30 cH or placebo were taken by participants for 4 weeks, after which crossover of the tested substances took place for another 4 weeks. Polycrest medicines were used so that symptoms developed by the participants could be compared to those described in Homeopathic Materia Medica. Thirty-three of the 50 students who studied homeopathy as an elective discipline over the last 4 years at the School of Medicine of the University of São Paulo (FMUSP) participated. Participants described symptoms according to a specific methodology, including many comments with peculiar characteristics and notable idiosyncrasies. All these students endorsed the course because it contributed to their understanding of how dynamized substances produced symptoms in healthy participants as well as the cure of symptoms according to the causal similitude principle. Brief homeopathic pathogenetic experimentation proved to be an effective method to observe the idiosyncratic manifestations of human individuality based on qualitative methodology, thus building a basis of understanding of homeopathy. abstract_id: PUBMED:28552174 A clinical history of Zincum metallicum: homeopathic pathogenetic trials and case reports. In the present study, we investigated the experimental basis for the indications of the homeopathic drug Zincum metallicum. The current body of knowledge about Zinc met has a core composed of pathogenetic and clinical data collected in the 19th century surrounded by layers of clinical observations reported over time. In the description, we prioritized poorly known sources, especially the ones that were never translated from the original German. We also performed quantitative and statistical analysis of repertory data. Through a literature survey and a call to practicing homeopathic doctors from many countries, we were able to put together a relevant case-series that represents homeopathic indications of Zinc. abstract_id: PUBMED:36662492 Safety and Evaluation of the Immune Response of Coronavirus Nosode (BiosimCovex) in Healthy Volunteers: A Preliminary Study Extending the Homeopathic Pathogenetic Trial. Objectives: Regulatory clinical Phase I studies are aimed at establishing the human safety of an active pharmaceutical agent to be later marketed as a drug. Since homeopathic medicines are prepared by a potentizing method using alcohol, past a certain dilution, their toxicity/infectivity is assumed to be unlikely. We aimed to develop a bridge study between homeopathic pathogenetic trials and clinical trials. The primary purpose was to evaluate the safety of a nosode, developed from clinical samples of a COVID-19 patient. The secondary objectives were to explore whether a nosode developed for a specific clinical purpose, such as use during an epidemic, may elicit laboratory signals worthy of further exploration. Methods: An open-label study was designed to evaluate the safety and immune response of the Coronavirus nosode BiosimCovex, given orally on three consecutive days to ten healthy volunteers. Clinical examinations, laboratory safety and immune parameters were established. Interferon-gamma, Interleukin-6, and CD4 were measured. (CTRI registration number: CTRI/2020/05/025496). Results: No serious/fatal adverse events were reported. Laboratory tests to measure safety were unchanged.
Three subjects showed elevated Interleukin-6 (IL-6) on day 17 in comparison to the baseline, and ten subjects showed elevated IL-6 on day 34. The difference between IL-6 observations, assessed by repeated-measures ANOVA, was highly significant. On day 60, the IL-6 values of nine subjects were found to return to normal. Corresponding CD4 cell elevation was observed on day 60, when compared to day 34. Conclusions: HPTs may extend into physiological changes with regard to the immune response and should encourage future studies. abstract_id: PUBMED:23870381 Hepatitis C Nosode: The preparation and homeopathic pathogenetic trial. Background: A double-blind, randomized, placebo-controlled homeopathic pathogenetic trial (proving) of Hepatitis C (Hep C) nosode was conducted with the aim of introducing the new nosode into the homeopathic pharmacopeia. Method: Documentation included approval by the Ethics Committee, Informed Consent Form, laboratory investigations, safety and ethical measures. The volunteers were trained to write data in prescribed diaries and the data were analyzed. A fifteen-step method was used in the preparation of Hep C nosode (genotype I and III), allowing future preparation of an identical nosode. Twenty-two volunteers were entered: 15 received Hep C nosode in 30c potency and 7 received placebo, once a week for four weeks. Results: The Hep C nosode was associated with qualitatively and quantitatively distinct symptoms, which can be applied in clinical practice. A significantly higher incidence of pathogenetic effect of homeopathic medicine compared to placebo was observed. Safety was documented. The nosode produced symptoms comparable with Hep C disease. Conclusion: An improved method of nosode preparation was used. A double-blind, randomized, placebo-controlled pathogenetic trial of the Hep C nosode generated guiding symptoms, which may facilitate its prescription in practice. The nosode should be further explored for the treatment of immunologically mediated diseases, infections including Hep C, fibrotic pathology and chronic inflammatory disorders. abstract_id: PUBMED:16815514 Double-blind, placebo-controlled homeopathic pathogenetic trials: symptom collection and analysis. Background: Homeopathic pathogenetic trials (provings) are fundamental to homeopathy. Since most of the data from available provings have not been statistically evaluated, it is unclear how specific reported symptoms are and how they differ from those reported by people taking placebo. Method: We combine and analyse data from two different homeopathic pathogenetic trials, including 10 and 11 provers, respectively, and both including 30% placebo, to test the null hypothesis that there is no significant difference between the number of symptoms in placebo and verum groups. Results: The principal results were: Placebo provers reported fewer symptoms than verum groups. Symptom distribution according to predefined classes (common symptoms increased in intensity and/or duration, cured, old, new and exceptional) was statistically different between the placebo and verum groups at a high level of significance (P<0.001). Compared to verum, placebo provers reported fewer new and old but more common (increased in duration or intensity) symptoms. Within repertory categories, other differences were detected.
The two groups differ in terms of the duration of each symptom and kinetics of symptoms: most symptoms were more persistent in verum than in placebo groups, and verum provers recorded a decreasing number of symptoms with time. Placebo provers did not show such a temporal pattern. Conclusions: If confirmed by other studies, these results would demonstrate the non-equivalence between homeopathic medicines in high dilution and placebo and contribute to the improvement of proving methodology and evaluation. abstract_id: PUBMED:11212086 Clinical verification of symptom pictures of homeopathic medicines. This invitation for co-operation is addressed to all interested groups. Its purpose is data collection for clinical verification of symptoms from homeopathic proving and collection of clinical symptoms not derived from homeopathic pathogenetic trials (provings). To date no consensus exists on this topic. The European Committee for Homoeopathy (ECH) research sub-committee has created a working group on this topic. This paper is the result of a review of papers on the topic, exchanges between members and a meeting. abstract_id: PUBMED:34959246 An Attempt to Estimate the Noises in Homeopathic Pathogenetic Trials by Employing the Jaccard Similarity Index and Noise Index. Background: The critical task of researchers conducting double-blinded, randomized, placebo-controlled homeopathic pathogenetic trials is to segregate the signals from the noises. The noises are signs and symptoms due to factors other than the trial drug; signals are signs and symptoms due to the trial drug. Unfortunately, the existing tools (criteria for a causal association of symptoms only with the tested medicine, qualitative pathogenetic index, quantitative pathogenetic index, pathogenic index) have limitations in analyzing the symptoms of the placebo group as a comparator, resulting in inadequate segregation of the noises. Hence, the Jaccard similarity index and the Noise index are proposed for analyzing the symptoms of the placebo group as a comparator. Methods: The Jaccard similarity index is the ratio of the number of common elements among the placebo and intervention groups to the aggregated number of elements in these groups. The Noise index is the ratio of common elements among the placebo and intervention groups to the total elements of the intervention group. Homeopathic pathogenetic trials of Plumbum metallicum, Piper methysticum and Hepatitis C nosode were selected for experimenting with the computation of the Jaccard similarity index and the Noise index. Results: Jaccard similarity index calculations show that 8% of Plumbum metallicum's elements, 10.7% of Piper methysticum's elements, and 19.3% of Hepatitis C nosode's elements were similar to the placebo group when elements of both the groups (intervention and placebo) were aggregated. Noise index calculations show that 10.7% of Plumbum metallicum's elements, 13.9% of Piper methysticum's elements and 25.7% of Hepatitis C nosode's elements were similar to those of the placebo group. Conclusion: The Jaccard similarity index and the Noise index might be considered an additional approach for analyzing the symptoms of the placebo group as a comparator, resulting in better noise segregation in homeopathic pathogenetic trials. abstract_id: PUBMED:30144788 Quantitative and Qualitative Pathogenetic Indices for Review of Data Derived from Homeopathic Pathogenetic Trials.
Introduction: Analysis of data derived from homeopathic pathogenetic trials (HPTs, homeopathic drug provings) has been a challenge. Most parts of the homeopathic pharmacopeia were sourced from Hahnemann's Materia Medica Pura (1825-1833), TF Allen's Encyclopedia (1874) and Constantine Hering's Materia Medica (1879-1891), well before randomised controlled trials were in use. As a result, such studies and their outcomes harbour a large risk of inclusion of unreliable symptoms. Aims And Objective: The main purpose of this article is to introduce Quantitative and Qualitative Pathogenetic Indices to improve the method of analysis of symptoms. Materials And Methods: The data from HPTs for human immunodeficiency virus nosode, hepatitis C nosode, capsaicin alkaloids (capsaicin and dihydrocapsaicin) and hydroquinone (HQ) were extracted and analysed in terms of novel Qualitative and Quantitative Pathogenetic Indices. Taken into consideration were the qualitative aspect of a symptom (i.e., its intensity) and the quantitative aspect, calculated as the number of symptoms per volunteer per day. The pathogenetic effects and data evaluation indices were calculated for each HPT. A comparison was made of symptoms of verum versus placebo provers in terms of their quantity and quality. Results: Four HPTs involving 81 volunteers (56 on verum and 25 on placebo) generated 555 symptoms or pathogenetic effects (excluding run-in phase symptoms), of which 448 (81%) were reported by volunteers who were in the verum arm, and 107 (19%) were reported by volunteers on placebo. The overall mean incidence of pathogenetic effects for the four HPTs was thus 8 per verum prover and 4.28 per placebo prover. The corresponding mean Quantitative Pathogenetic Index was 0.23 symptoms per volunteer per day for the verum arm and 0.12 symptoms per volunteer per day for the placebo arm. The overall mean incidence of pathogenetic effects in the run-in phase was lower. The overall mean Qualitative Pathogenetic Index (number of symptoms, of a given intensity, per volunteer per day) for the verum arm was 0.09 versus 0.05 for the placebo arm. Conclusion: The symptoms exhibited by volunteers in the verum arm were more numerous and more intense than those in the placebo arm. An innovative and logical method of reporting and analysing symptoms has been introduced through these pathogenetic indices, which can be used in future as measurement tools for analysis of data from HPTs. abstract_id: PUBMED:11479778 Homeopathic pathogenetic trials of Acidum malicum and Acidum ascorbicum. Two homeopathic pathogenetic trials (HPTs, provings) of identical design were conducted: of Acidum malicum 12 cH and Acidum ascorbicum 12 cH. Each trial included 20 healthy volunteers. Both were of double-blind, placebo-controlled, randomised, four-period crossover design, with two sequences. 'Healthy' was defined in terms of SF-36 scores, medical judgement and blood tests. All volunteers had regular interviews with the same supervisor. No serious adverse reactions occurred. The causal relationship of each symptom with treatment was judged, blind, by the volunteer, the supervisor and a 9-item pathogenetic index. For Acidum malicum, 79 symptoms were identified by the supervisor; 57 were included in the final analysis and 22 occurred in verum treatment periods. For Acidum ascorbicum, 39 of 55 symptoms were included in the analysis; 16 occurred in verum treatment periods.
Answer: Yes, Homeopathic Pathogenetic Trials (HPTs) are reported to generate recognisable and reproducible symptom pictures. A pilot pathogenetic trial of Ozone 30c found that practising homeopaths were able to correctly identify a homeopathic medicine from the set of symptoms generated during an HPT, suggesting that such symptom pictures are specific to the substance taken and reproducible (PUBMED:24685415). Additionally, brief homeopathic pathogenetic experimentation has been used as an educational tool in Brazil, where students reported symptoms that contributed to their understanding of how dynamized substances produced symptoms in healthy participants, further supporting the reproducibility and recognition of symptom pictures in HPTs (PUBMED:18955242). Moreover, a clinical history of Zincum metallicum included a literature survey and case reports that represent homeopathic indications of Zinc, indicating a body of knowledge that supports the reproducibility of symptoms associated with this homeopathic drug (PUBMED:28552174). Similarly, a preliminary study on the safety and immune response of a Coronavirus nosode in healthy volunteers suggested that HPTs may extend into physiological changes with regards to immune response, implying that the trials can produce consistent and measurable effects (PUBMED:36662492). Furthermore, a double-blind, randomized placebo-controlled HPT of Hepatitis C nosode produced symptoms comparable with Hep C disease, indicating that the nosode generated guiding symptoms that may facilitate its prescription in practice (PUBMED:23870381). Another study combining data from two different HPTs found that placebo reported fewer symptoms than verum groups, and the symptom distribution was statistically different between the two, suggesting specificity of symptoms to the homeopathic medicine (PUBMED:16815514). Finally, the use of the Jaccard similarity index and the Noise index in analyzing symptoms of placebo groups as a comparator in HPTs of Plumbum metallicum, Piper methysticum, and Hepatitis C nosode showed that a percentage of elements were similar to the placebo group, indicating that while there is some noise, there is also a distinct signal that can be attributed to the homeopathic substance (PUBMED:34959246). Quantitative and Qualitative Pathogenetic Indices have also been introduced to improve the analysis of symptoms from HPTs, providing tools for better evaluation of the data (PUBMED:30144788).
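To make the arithmetic behind the evaluation tools discussed above concrete, here is a minimal Python sketch of the Jaccard similarity index and Noise index (PUBMED:34959246) and the Quantitative Pathogenetic Index (PUBMED:30144788). It reads the "aggregated number of elements" in the Jaccard definition as the set union, the standard Jaccard formulation; the symptom sets and the 35-day observation window are invented for illustration and are not data reported by the cited trials.

```python
def jaccard_similarity_index(verum: set, placebo: set) -> float:
    # Common elements divided by the union of both groups' elements.
    return len(verum & placebo) / len(verum | placebo)

def noise_index(verum: set, placebo: set) -> float:
    # Common elements divided by the total elements of the intervention (verum) group.
    return len(verum & placebo) / len(verum)

def quantitative_pathogenetic_index(n_symptoms: int, n_volunteers: int, n_days: int) -> float:
    # Symptoms per volunteer per day.
    return n_symptoms / (n_volunteers * n_days)

# Hypothetical symptom sets -- illustrative placeholders, not trial data.
verum_symptoms = {"headache", "nausea", "irritability", "insomnia", "vertigo"}
placebo_symptoms = {"headache", "fatigue"}

print(jaccard_similarity_index(verum_symptoms, placebo_symptoms))  # 1/6, about 0.167
print(noise_index(verum_symptoms, placebo_symptoms))               # 1/5 = 0.200

# The reported verum QPI of 0.23 (448 symptoms, 56 provers) is reproduced only if one
# assumes an observation window of roughly 35 days -- an assumption, not a stated figure.
print(round(quantitative_pathogenetic_index(448, 56, 35), 2))      # 0.23
```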
Instruction: The Global Fund and tuberculosis in Nicaragua: building sustainable capacity? Abstracts: abstract_id: PUBMED:35317539 TB training in Kenya: building capacity for care and prevention. Background: Devolution of healthcare services in Kenya resulted in a large number of newly recruited tuberculosis (TB) coordinators. We describe a unique collaboration between a national tuberculosis program (NTP), a local, and an international non-governmental organization to build human resource capacity in TB care and prevention. Methods: From 2016 to 2021, the Kenya Division of National Tuberculosis, Leprosy and Lung Disease Program, Centre for Health Solutions-Kenya, and the International Union Against Tuberculosis and Lung Disease developed and conducted a series of 7-day training courses. A key focus of training was the introduction of TBData4Action, an approach involving the local use of routinely available data to strengthen decision-making and support supervision. Results: Implementation outcomes included training 331 (96%) of 344 coordinators, representing all 47 counties, as well as 37 national officers and 21 other stakeholders, using the country-tailored curriculum, which included hands-on group work by county teams and field practicals. Thirty-five national facilitators were identified and mentored as local faculty. Training costs were reduced by 75% compared with international alternatives. Conclusion: The collaboration resulted in the training of the majority of the coordinators in a standardized approach to TB care. A sustainable approach to capacity building in local data use was found feasible; the model could be adapted by other NTPs. abstract_id: PUBMED:29553308 Operational research within a Global Fund supported tuberculosis project in India: why, how and its contribution towards change in policy and practice. Background: The Global Fund encourages operational research (OR) in all its grants; however, very few reports describe this aspect. In India, Project Axshya was supported by a Global Fund grant to improve the reach and visibility of the government Tuberculosis (TB) services among marginalised and vulnerable communities. OR was incorporated to build research capacity of professionals working with the national TB programme and to generate evidence to inform policies and practices. Objectives: To describe how Project Axshya facilitated building OR capacity within the country, helped in addressing several TB control priority research questions, documented project activities and their outcomes, and influenced policy and practice. Methods: From September 2010 to September 2016, three key OR-related activities were implemented. First, practical output-oriented modular training courses were conducted (n = 3) to build research capacity of personnel involved in the TB programme, co-facilitated by The Union, in collaboration with the national TB programme, WHO country office and CDC, Atlanta. Second, two large-scale Knowledge, Attitude and Practice (KAP) surveys were conducted at baseline and mid-project to assess the changes pertaining to TB knowledge, attitudes and practices among the general population, TB patients and health care providers over the project period. Third, studies were conducted to describe the project's core activities and outcomes. Results: In the training courses, 44 participant teams were supported to develop research protocols on topics of national priority, resulting in 28 peer-reviewed scientific publications.
The KAP surveys and description of project activities resulted in 14 peer-reviewed publications. Of the published papers, at least 12 have influenced change in policy or practice. Conclusions: OR within a Global Fund supported TB project has resulted in building OR capacity, facilitating research in areas of national priority and influencing policy and practice. We believe this experience will provide guidance for undertaking OR in Global Fund projects. abstract_id: PUBMED:29485727 Shortening the decade-long gap between adult and paediatric drug formulations: a new framework based on the HIV experience in low- and middle-income countries. Introduction: Despite the coordinated efforts by several stakeholders to speed up access to HIV treatment for children, development of optimal paediatric formulations still lags 8 to 10 years behind that of adults, due mainly to lack of market incentives and technical complexities in manufacturing. The small and fragmented paediatric market also hinders launch and uptake of new formulations. Moreover, the problems affecting HIV similarly affect other disease areas where development and introduction of optimal paediatric formulations is even slower. Therefore, accelerating processes for developing and commercializing optimal paediatric drug formulations for HIV and other disease areas is urgently needed. Discussion: The Global Accelerator for Paediatric Formulations (GAP-f) is an innovative collaborative model that will accelerate availability of optimized treatment options for infectious diseases, such as HIV, tuberculosis and viral hepatitis, affecting children in low- and middle-income countries (LMICs). It builds on the HIV experience and existing efforts in paediatric drug development, formalizing collaboration between normative bodies, research networks, regulatory agencies, industry, supply and procurement organizations and funding bodies. Upstream, the GAP-f will coordinate technical support to companies to design and study optimal paediatric formulations, harmonize efforts with regulators and incentivize manufacturers to conduct formulation development. Downstream, the GAP-f will reinforce coordinated procurement and communication with suppliers. The GAP-f will be implemented in a three-stage process: (1) development of a strategic framework and promotion of key regulatory efficiencies; (2) testing of feasibility and results, building on the work of existing platforms such as the Paediatric HIV Treatment Initiative (PHTI), including innovative approaches to incentivize generic development; and (3) launch as a fully functioning structure. Conclusions: GAP-f is a key partnership example enhancing North-South and international cooperation on and access to science and technology and capacity building, responding to Sustainable Development Goal (SDG) 17.6 (technology) and 17.9 (capacity-building). By promoting access to the most needed paediatric formulations for HIV and high-burden infectious diseases in low- and middle-income countries, GAP-f will support achievement of SDG 3.2 (infant mortality), 3.3 (end of AIDS and combat other communicable diseases) and 3.8 (access to essential medicines), and be an essential component of meeting the global Start Free, Stay Free, AIDS Free super-fast-track targets.
The Global Fund to fight AIDS, Tuberculosis and Malaria provides over three-quarters of all international financing towards TB programs, with US$4.7 billion disbursed, supporting provision of treatment for 13.2 million patients with smear-positive TB and 210 000 patients with multidrug-resistant TB in over 100 countries since 2002. In 2013, the Global Fund launched a new funding model that, among other things, is advancing strategic investments to maximize impact, addressing 'missing' TB cases, enhancing a synergistic response to TB/HIV dual epidemics, and building resilient and sustainable systems for health. A new Global Fund Strategy is under development through consultation with various stakeholders, under which the Global Fund will work to play a more catalytic role and foster innovations to end the TB epidemic. abstract_id: PUBMED:19858945 Promoting country ownership and stewardship of health programs: The global fund experience. The Global Fund to Fight AIDS, Tuberculosis and Malaria was established in 2002 to provide large-scale financing to middle- and low-income countries to intensify the fight against the 3 diseases. Its model has enabled strengthening of local health leadership to improve governance of HIV programs in 5 ways. First, the Global Fund has encouraged development of local capacity to generate technically sound proposals reflecting country needs and priorities. Second, through dual-track financing, where countries are encouraged to nominate at least one government and one nongovernment principal recipient to lead program implementation, the Global Fund has enabled civil society and other nongovernmental organizations to play a critical role in the design, implementation, and oversight of HIV programs. Third, investments to strengthen community systems have enabled greater involvement of community leaders in effective mobilization of demand and scale-up for services to reach vulnerable groups. Fourth, capacity building outside the state sector has improved community participation in governance of public health. Finally, an emphasis on inclusiveness and diversity in planning, implementation, and oversight has broadly enhanced country coordination capacity. Strengthening local leadership capacity and governance is critical to building efficient and equitable health systems to deliver universal coverage of HIV services. abstract_id: PUBMED:33736629 A feasibility study using time-driven activity-based costing as a management tool for provider cost estimation: lessons from the national TB control program in Zimbabwe in 2018. Background: Insufficient cost data and limited capacity constrain the understanding of the actual resources required for effective TB control. This study used process maps and time-driven activity-based costing to document TB service delivery processes. The analysis identified the resources required to sustain TB services in Zimbabwe, as well as several opportunities for more effective and efficient use of available resources. Methods: A multi-disciplinary team applied time-driven activity-based costing (TDABC) to develop process maps and measure the cost of clinical pathways used for Drug Susceptible TB (DS-TB) at urban polyclinics, rural district and provincial hospitals, and community-based targeted screening for TB (Tas4TB). The team performed interviews and observations to collect data on the time taken by health care worker-patient pairs at every stage of the treatment pathway.
The personnel's practical capacity and capacity cost rates were calculated across five cost domains. An MS Excel model calculated diagnostic and treatment costs. Findings: Twenty-five stages were identified in the TB care pathway across all health facilities except for community targeted screening for TB. Considerable variations were observed among the facilities in how health care professionals performed client registration, taking of vital signs, treatment follow-up, dispensing of medicines and processing of samples. The average cost per patient for the entire DS-TB care was USD324, with diagnosis costing USD69 and treatment costing USD255. The average cost for diagnosis and treatment was higher in clinics than in hospitals (USD392 versus USD256). Nurses in clinics were 1.6 times more expensive than in hospitals. The main cost components were personnel (USD130) and laboratory (USD119). Diagnostic cost in Tas4TB was twice that of the health facility setting (USD153 vs USD69), with major cost drivers being demand creation (USD89) and sputum specimen transportation (USD5 vs USD3). Conclusion: TDABC is a feasible and effective costing and management tool in low-resource settings. The TDABC process maps and treatment costs revealed several opportunities for innovative improvements in the NTP under public health programme settings. Re-engineering laboratory testing processes and synchronising TB treatment follow-up with antiretroviral treatments could produce better and more uniform TB treatments at significantly lower cost in Zimbabwe. abstract_id: PUBMED:36314548 Exploring the Impact of the COVID-19 Pandemic on Tuberculosis Care and Prevention. The COVID-19 pandemic has set back the global tuberculosis (TB) response by several years. In 2020, access to TB prevention and care declined sharply, with TB notifications dropping by 18% compared to 2019. Declines were more pronounced in children, with a 24% drop in 0-14 year-olds and a 28% drop in 0-4 year-olds. As a result, in 2020 the number of deaths due to TB increased to 1.5 million across all ages, reversing a decade-long declining trend. Progress toward the UN High Level Meeting targets for 2022 is at risk, including the targets related to children for TB and drug-resistant TB treatments, and TB preventive therapy. Nonetheless, ending TB by 2030 as envisaged in the Sustainable Development Goals (SDGs) is still possible, but requires increased investments in accelerated case detection, subclinical TB, preventive therapy and an effective vaccine. Investing in TB could prepare the world better for fighting a future airborne pandemic. abstract_id: PUBMED:30866870 Sub-national prevalence survey of tuberculosis in rural communities of Ethiopia. Background: Tuberculosis is a major public health problem with varying prevalence in different settings. National prevalence surveys provide evidence for planning and decision making. However, they lack the capacity to estimate the subnational burden, which limits the ability to target interventions based on prevalence. Ethiopia is among high TB burden countries, with an estimated prevalence of 108 per 100,000 population varying by region. We aimed to study the subnational prevalence of smear-positive TB in rural communities of southern Ethiopia. Methods: This cross-sectional study enrolled community members aged over 14 years who had a cough of at least two weeks' duration. Two sputum samples were collected and examined by using smear microscopy.
Results: 38,304 eligible people were enumerated (10,779 from Hadiya, 10,059 from Gurage and 17,466 from Sidama), and 960 presumptive cases were identified. Sixteen, 14 and 14 smear-positive pulmonary TB cases were identified in these zones, respectively. The point prevalence of smear-positive TB was 148 per 100,000 population (95% CI: 91-241) in Hadiya, 139 per 100,000 population (95% CI: 83-234) in Gurage and 80/100,000 population (95% CI: 48-135) in Sidama zone. Gurage zone had the highest ratio of prevalent to notified cases, at seven to one. Conclusions: The prevalence of smear-positive TB varies by district and is high in rural southern Ethiopia compared to the estimated national prevalence. More TB patients remain missed and unreached, impacting negatively on health outcomes. TB case finding approaches should be revisited, and innovative approaches and tools to identify missing people with TB should be scaled up. abstract_id: PUBMED:29209537 Health extension workers improve tuberculosis case finding and treatment outcome in Ethiopia: a large-scale implementation study. Background: Tuberculosis (TB) is a major cause of death in Ethiopia. One of the main barriers for TB control is the lack of access to health services. Methods: We evaluated a diagnostic and treatment service for TB based on the health extension workers (HEW) of the Ethiopian Health Extension Programme in Sidama Zone, with 3.5 million population. We added the services to the HEW routines and evaluated their effect over 4.5 years. 1024 HEWs were trained to identify individuals with symptoms of TB, request sputum samples and prepare smears. Smears were transported to designated laboratories. Individuals with TB were offered treatment at home or the local health post. A second zone (Hadiya) with 1.2 million population was selected as control. We compared TB case notification rates (CNR) and treatment outcomes in the zones 3 years before and 4.5 years after intervention. Results: HEWs identified 216 165 individuals with symptoms and 27 918 (12%) were diagnosed with TB. Smear-positive TB CNR increased from 64 (95% CI 62.5 to 65.8) to 127 (95% CI 123.8 to 131.2) and all forms of TB increased from 102 (95% CI 99.1 to 105.8) to 177 (95% CI 172.6 to 181.0) per 100 000 population in the first year of intervention. In subsequent years, the smear-positive CNR declined by 9% per year. There was no change in CNR in the control area. Treatment success increased from 76% before the intervention to 95% during the intervention. Patients lost to follow-up decreased from 21% to 3% (p<0.001). Conclusion: A community-based package significantly increased case finding and improved treatment outcome. Implementing this strategy could help meet the Ethiopian Sustainable Development Goal targets. abstract_id: PUBMED:33929335 Collect Once, Use Many Times: Attaining Unified Metrics for Tuberculosis Preventive Treatment for People Living With HIV. The World Health Organization (WHO) recommends providing tuberculosis preventive treatment (TPT) to all persons living with HIV and to all household contacts of persons with bacteriologically confirmed pulmonary tuberculosis disease. Regrettably, the absence of a harmonized data collection and management approach to TPT indicators has contributed to programmatic challenges at local, national, and global levels. However, in April 2020, the WHO launched the Consolidated HIV Strategic Information Guidelines, with an updated set of priority indicators.
These guidelines recommend that Ministries of Health collect, report, and use data on TPT completion in addition to TPT initiation. Both indicators are reflected in the WHO's list of 15 core indicators for program management and are also required by the US President's Emergency Plan for AIDS Relief's Monitoring, Evaluation, and Reporting (MER) guidance. Although not perfectly harmonized, both frameworks now share essential indicator characteristics. Aligned indicators are necessary for robust strategic and operational planning, resource allocation, and data communication. "Collect once, use many times" is a best practice for strategic information management. Building harmonized and sustainable health systems will enable countries to successfully maintain essential HIV, tuberculosis, and other health services while combatting new health threats. Answer: The provided abstracts do not specifically mention the Global Fund's activities in Nicaragua related to tuberculosis (TB). However, they do provide insights into the Global Fund's general approach to TB control and capacity building in various countries, which could be extrapolated to understand its potential impact in Nicaragua. The Global Fund has been a major international financier for TB programs, providing over three-quarters of all international financing towards TB programs and supporting the treatment of millions of patients with TB and multidrug-resistant TB in over 100 countries since 2002 (PUBMED:26884492). The Global Fund's model has enabled strengthening of local health leadership to improve governance of health programs, including TB, by encouraging the development of local capacity to generate technically sound proposals, enabling civil society and nongovernmental organizations to play a critical role in program design and implementation, and investing in community systems to enable greater community involvement (PUBMED:19858945). Additionally, the Global Fund supports operational research (OR) within its grants to build research capacity of professionals working with national TB programs and to generate evidence to inform policies and practices, as seen in Project Axshya in India (PUBMED:29553308). This approach could be relevant to building sustainable capacity in Nicaragua, as it emphasizes the importance of local research and evidence-based decision-making. Furthermore, the Global Fund's new funding model advances strategic investments to maximize impact, address 'missing' TB cases, enhance responses to TB/HIV dual epidemics, and build resilient and sustainable systems for health (PUBMED:26884492). This strategic approach could be applied to Nicaragua to build sustainable capacity for TB control. In summary, while the abstracts do not directly address the Global Fund's work in Nicaragua, they highlight the organization's commitment to building sustainable capacity for TB control through financial support, strengthening local leadership and governance, promoting operational research, and advancing strategic investments. These principles are likely to be part of the Global Fund's approach in Nicaragua as well.
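The TDABC arithmetic described in PUBMED:33736629 reduces each stage of a care pathway to the resource's capacity cost rate (cost per minute of practical capacity) multiplied by the minutes the stage consumes, summed across the pathway. The sketch below illustrates that calculation in Python; the staff costs, capacities, stage names and step times are invented assumptions, since the abstract does not report the study's raw inputs.

```python
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    cost_per_month: float     # salary plus overheads, USD (hypothetical)
    practical_minutes: float  # available working minutes per month (hypothetical)

    @property
    def capacity_cost_rate(self) -> float:
        # USD per minute of practical capacity.
        return self.cost_per_month / self.practical_minutes

nurse = Resource("nurse", cost_per_month=600.0, practical_minutes=8000.0)
lab_tech = Resource("lab technician", cost_per_month=500.0, practical_minutes=8000.0)

# (stage, resource, minutes consumed) for a simplified DS-TB pathway.
pathway = [
    ("client registration", nurse, 10),
    ("vital signs", nurse, 5),
    ("sputum processing", lab_tech, 30),
    ("treatment follow-up", nurse, 15),
]

total = 0.0
for stage, resource, minutes in pathway:
    cost = resource.capacity_cost_rate * minutes
    total += cost
    print(f"{stage}: {cost:.2f} USD")
print(f"Personnel cost per patient for this simplified pathway: {total:.2f} USD")
```

A real TDABC exercise would repeat this over all twenty-five observed stages and add non-personnel domains such as laboratory consumables, which is where figures like the USD324 total cost per patient come from.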
Instruction: Does subjective tinnitus cause sexual disturbance? Abstracts: abstract_id: PUBMED:17459276 Does subjective tinnitus cause sexual disturbance? Objective: Tinnitus can cause psychological problems, which can affect sexual performance. The aim of this study was to investigate sexual disturbance related to the psychological problems of patients with subjective tinnitus. Materials And Method: The subjective tinnitus group with normal hearing levels consisted of 20 patients (10 male, 10 female) who were nonpsychiatric. The control group consisted of 20 healthy patients (10 male, 10 female) with normal hearing levels who did not have tinnitus and were nonpsychiatric. All subjects were married and had an active sexual life. The subjective tinnitus loudness level score (STLL-Sc) was obtained using a questionnaire. Self-rating depression scale (SDS) scores were obtained for the study and control groups using the Zung Anxiety and Depression Scale. Sexual function was assessed in all male subjects with the International Index of Erectile Functions (IIEF) and in all female subjects with the Female Sexual Function Index (FSFI). Results: In females, the satisfaction subscore was slightly lower than normal limits in both the study and control groups. In males, the IIEF showed an insignificant, negative correlation with the STLL-Sc and the SDS and a positive correlation with tinnitus duration. In females, the FSFI showed an insignificant negative correlation with the STLL-Sc and a positive correlation with tinnitus duration and the SDS. Conclusion: Sexual disturbance is seen in very quiet- and intermediate-level tinnitus sufferers in the early period of the disease. Over time, they become used to living with their tinnitus, and no loss in sexual performance is seen. In the future, we plan to investigate the sexual disturbance of patients with severe STLL-Scs. abstract_id: PUBMED:29687619 Understanding jugular venous outflow disturbance. Extracranial venous abnormalities, especially jugular venous outflow disturbance, were originally viewed as nonpathological phenomena due to a lack of realization and exploration of their features and clinical significance. The etiology and pathogenesis are still unclear, although several causal factors have been conjectured. The clinical presentation of this condition is highly variable, ranging from insidious to symptomatic, such as headaches, dizziness, pulsatile tinnitus, visual impairment, sleep disturbance, and neck discomfort or pain. Standard diagnostic criteria are not available, and current diagnosis largely depends on a combinatory use of imaging modalities. Although little research has been conducted to establish an evidence-based therapeutic approach, several recent advances indicate that intravenous angioplasty in combination with stent implantation may be a safe and efficient way to restore normal blood circulation, alleviate the discomfort symptoms, and enhance patients' quality of life. In addition, surgical removal of structures that constrain the internal jugular vein may serve as an alternative or adjunctive management when endovascular intervention is not feasible. Notably, discussion on every aspect of this newly recognized disease entity is in its infancy, and efforts with more rigorously designed, randomized controlled studies attempting to identify the pathophysiology, diagnostic criteria, and effective approaches to its treatment will provide profound insight into this issue.
abstract_id: PUBMED:34366185 Sleep disturbance and psychological distress in adult patients with tinnitus. Background/purpose: Sleep disturbance and psychological distress are among the most prevalent comorbidities of tinnitus. We aimed to clarify the dose-response relationships of these phenomena with tinnitus severity. Methods: This study, conducted from January 2017 to December 2018 in one tertiary medical center and one local hospital, enrolled adult patients with subjective tinnitus lasting more than 6 months. Data collected included demographic data and questionnaires, namely the Tinnitus Handicap Inventory (THI), Pittsburgh Sleep Quality Index (PSQI), Epworth Sleepiness Scale (ESS), and Hospital Anxiety and Depression Scale (HADS). Results: In total, 1610 patients with tinnitus (1105 male, 68.6%) with a mean age of 48.3 ± 14.3 years completed all questionnaires. The average THI score was 9.2 ± 19.4, and 82.4% of patients reported slight tinnitus (THI range 0-16). The mean PSQI score was 8.4 ± 4.3, and 70.8% of participants had sleep difficulty (PSQI > 5). Compared with patients with slight tinnitus, those with catastrophic tinnitus were mostly older women with lower body mass index, and had higher scores in ESS, PSQI, and HADS (all P < 0.05). In 1140 patients with sleep difficulty, independent factors influencing THI were age, ESS, and HADS, and positive correlations were observed between age-adjusted THI and ESS, HADS-A, and HADS-D (all P < 0.001). Conclusion: Old age, daytime sleepiness, and psychological distress are highly associated with tinnitus severity among patients with sleep difficulty. Management of sleep disturbance and psychological distress is necessary to control tinnitus. abstract_id: PUBMED:25162023 The effects of acute stress-induced sleep disturbance on acoustic trauma-induced tinnitus in rats. Chronic tinnitus is a debilitating condition, often accompanied by anxiety, depression, and sleep disturbance. It has been suggested that sleep disturbance, such as insomnia, may be a risk factor/predictor for tinnitus-related distress, and that the two conditions may share common neurobiological mechanisms. This study investigated whether acute stress-induced sleep disturbance could increase the susceptibility to acoustic trauma-induced tinnitus in rats. The animals were exposed to unilateral acoustic trauma 24 h before sleep disturbance was induced using the cage exchange method. Tinnitus perception was assessed behaviourally using a conditioned lick suppression paradigm 3 weeks after the acoustic trauma. Changes in the orexin system in the hypothalamus, which plays an important role in maintaining long-lasting arousal, were also examined using immunohistochemistry. Cage exchange resulted in a significant reduction in the number of sleep episodes, and acoustic trauma induced tinnitus with acoustic features similar to a 32 kHz tone at 100 dB. However, sleep disturbance did not exacerbate the perception of tinnitus in rats. Neither tinnitus alone nor tinnitus plus sleep disturbance altered the number of orexin-expressing neurons. The results suggest that acute sleep disturbance does not cause long-term changes in the number of orexin neurons and does not change the perception of tinnitus induced by acoustic trauma in rats. abstract_id: PUBMED:8373941 Sleep disturbance associated with chronic tinnitus. Sleep disturbance is a frequent complaint reported by tinnitus sufferers.
Recent studies have shown that when insomnia and depression are associated with tinnitus, there is decreased tolerance and increased discomfort with the tinnitus. The purpose of this study was to assess the reported prevalence and severity of sleep disturbance in chronic tinnitus patients. Patients (n = 80) were military personnel without major psychiatric disturbance, and their tinnitus was associated with noise-induced permanent hearing loss (NIHL). Mini Sleep Questionnaire (MSQ) scores for sleep disturbance were found to be higher than those of normal controls in 77% of the patients. The highest MSQ scores in tinnitus patients with a sleep complaint were for delayed sleep, morning awakenings, mid-sleep awakenings, morning fatigue, and chronic fatigue. In contrast, a complaint of excessive daytime sleep (EDS) was not common. The self-rated severity of the tinnitus was greater in subjects with higher sleep disturbance scores. Self-rated depressive symptomatology was also highly correlated with sleep disturbance. Retrospective examination of sleep records and polysomnographic data for 10 patients with a complaint of chronic tinnitus revealed a combined effect for the tinnitus condition when associated with another conventional sleep disorder. In spite of the common complaint of sleep disturbance in tinnitus, only a minority seek a sleep examination. abstract_id: PUBMED:8975999 Correlates of sleep disturbance in chronic distressing tinnitus. Sleep disturbance was recorded for a 1-week period prior to a follow-up assessment of 26 patients who had completed a study of psychological therapy reported elsewhere (Davies et al., 1995). The absence of reported sleep disturbance was associated with essentially normal hearing over the range 250 Hz to 2 kHz; hearing impairment was present in mild or severe sleep disturbance. Patients whose sleep was most disturbed rated significantly greater tinnitus annoyance in the evening. Measures of mood and emotional distress did not relate to degree of sleep disturbance. abstract_id: PUBMED:38469674 The association between tinnitus and risk of cardiovascular events and all-cause mortality: insight from the UK Biobank. Background: The potential influence of tinnitus on cardiovascular disease (CVD) and all-cause mortality has yet to be explored. We aim to examine the correlations between tinnitus and the risk of cardiovascular events and all-cause mortality. Methods: We conducted a prospective cohort study utilising data from the UK Biobank. The presence of tinnitus was evaluated through a questionnaire. The primary outcome was defined as a composite of cardiovascular events, including myocardial infarction (MI), stroke, and mortality from CVD, as well as all-cause mortality. Cox proportional hazard models were employed to examine the associations between tinnitus and both the primary outcome and its individual components. Sensitivity analyses were conducted to evaluate the robustness of the primary analysis. Results: A total of 140,146 participants were included in the study. The presence of tinnitus was found to be associated with a higher incidence rate of the primary outcome (HR = 1.057, 95%CI: 1.017-1.099, p = 0.005), MI (HR = 1.139, 95%CI: 1.061-1.222, p < 0.001) and all-cause mortality (HR = 1.053, 95%CI: 1.003-1.105, p = 0.038) after adjusting for confounders. However, there was no significant association between tinnitus and stroke or mortality from CVD.
Subgroup analysis revealed that the association between tinnitus and the primary outcome was significant in females, participants with abnormal BMI, and those without hearing difficulty, depression or anxiety. Sensitivity analyses yielded consistent results. Conclusion: The findings from this study contribute to the existing body of evidence suggesting an association between tinnitus and an increased risk of cardiovascular events and all-cause mortality. abstract_id: PUBMED:26298574 Sexual orientation disparities in physical health: age and gender effects in a population-based study. Background: Recent studies have identified substantial health disparities between lesbian, gay, and bisexual (LGB) individuals and heterosexuals. However, possible variation in sexual orientation health disparities by age and according to gender remains largely unexplored. Purpose: To examine physical health disparities between LGB and heterosexual individuals in a general population sample in Sweden, to explore potential age and gender differences in these disparities, and to test potential mechanisms underlying any observed disparities. Method: Between 2008 and 2013, 60,922 individuals (16-84 years of age) responded to nationwide population-based health surveys. In the sample, 430 (0.7%) individuals self-identified as gay/lesbian and 757 (1.3%) self-identified as bisexual. Logistic and negative binomial regression analyses were used to explore health disparities based on sexual orientation. Results: Overall, LGB individuals were more likely to report worse self-rated health as well as more physical health symptoms (e.g., pain, insomnia, dermatitis, tinnitus, intestinal problems) and conditions (e.g., diabetes, asthma, high blood pressure) compared to heterosexuals. However, these physical health disparities differed by age. Disparities were largest among adolescents and young adults and generally smallest in older age groups. Health behaviors and elevated reports of exposure to perceived discrimination, victimization, and threats of violence among sexual minorities partially explained the sexual orientation disparities in physical health. Conclusions: Age emerged as an important effect modifier of physical health disparities based on sexual orientation. Gender-specific findings suggest that sexual orientation disparities persist into adulthood for women but are gradually attenuated for older age groups; in contrast, for men, these disparities disappear starting with young adults. These results support a developmental model of minority stress and physical health among LGB individuals. abstract_id: PUBMED:17210339 Audiovestibular disturbance in patients with systemic lupus erythematosus. Objective: The aim of this study was to evaluate audiovestibular disturbance in patients with systemic lupus erythematosus (SLE). Study Design And Setting: Twenty-eight patients with SLE and 28 healthy control subjects were included. Pure-tone audiometry, impedance audiometry, and electronystagmography (ENG) were used for baseline evaluation. Laboratory tests were carried out. Cranial and brain stem magnetic resonance imaging (MRI) was undertaken. Results: Nineteen (67%) patients reported audiovestibular symptoms. Sensorineural hearing loss was found in 6 (21%) patients. The rate of abnormal results on ENG was significantly higher (50%) (P < 0.01). Abnormal laboratory data were available from 26 patients (P < 0.05). MRI did not show any pathosis.
Conclusion: The audiovestibular disturbances in SLE are more prevalent than previously recognized. Although no cause-and-effect relationship can be established by this type of study, it appears that a relationship exists. Significance: Audiologic research should be directed toward routine pure-tone audiometry and ENG assessment for patients with SLE to enable crucial treatment. abstract_id: PUBMED:15021779 Vertigo and hearing disturbance as the first sign of a glioblastoma (World Health Organization grade IV). Objective: To describe vertigo and hearing disturbance as a first sign of glioblastoma. Study Design: Case report. Setting: Ear, Nose, and Throat Department of the University of Regensburg, Germany. Primary Care Center. Patients: A patient with a left temporal glioblastoma. Results: A 67-year-old man presented with a 2-month history of vertigo and hearing disturbance. Radiological imaging revealed a left temporal tumor with dural infiltration and erosion of the petrous bone and superior semicircular duct. The surgery involved total resection of the tumor and resurfacing of the gap in the superior canal. The histopathological examination revealed a World Health Organization grade IV glioblastoma. Postoperatively, the debilitating symptoms were relieved and the patient received radiation therapy. Tumor progression indicated a recraniotomy and a mastoidectomy. The tumor was only partially resected, and chemotherapy was required. The patient subsequently developed otoliquorrhea and required a remastoidectomy. Histopathology of a pathological fracture of the tenth thoracic vertebra revealed a metastasis of the known glioblastoma. The patient died from respiratory distress syndrome. Conclusion: To the best of our knowledge, we are presenting the first case with transdural infiltration of bony structures by a glioblastoma at the moment of diagnosis. The transdural spread could be via the sinus petrosus and along the nervus petrosus major in the petrous bone. Superior canal dehiscence syndrome should be considered in the differential diagnosis of vertigo and hearing disturbance. Two different processes for the etiology of superior canal dehiscence syndrome have been discussed previously in the literature; however, we present a new entity with a tumor-caused dehiscence of the bone overlying the superior canal. Answer: The study conducted to investigate sexual disturbance related to the psychological problems of patients with subjective tinnitus found that sexual disturbance is seen in very quiet- and intermediate-level tinnitus sufferers in the early period of the disease. However, over time, as patients become accustomed to living with their tinnitus, no loss in sexual performance is observed (PUBMED:17459276). This suggests that while subjective tinnitus may initially cause sexual disturbance due to associated psychological problems, this effect may diminish as individuals adapt to the condition.
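For readers curious how an estimate like the UK Biobank hazard ratio for tinnitus (HR = 1.057 for the composite outcome, PUBMED:38469674) is produced mechanically, the following is a minimal sketch of fitting a Cox proportional hazards model with the Python lifelines library. The data frame, its column names and its values are invented for illustration; they are not UK Biobank fields, and a few toy rows cannot yield a meaningful estimate.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Toy cohort extract -- all columns and values are hypothetical.
df = pd.DataFrame({
    "follow_up_years": [5.2, 3.1, 8.4, 6.0, 2.5, 7.7, 4.9, 6.8],
    "event":           [1,   0,   1,   0,   1,   0,   1,   0],  # 1 = outcome observed
    "tinnitus":        [1,   0,   1,   1,   0,   0,   1,   0],  # exposure of interest
    "age":             [61,  54,  67,  59,  48,  63,  70,  57], # one example confounder
})

cph = CoxPHFitter()
cph.fit(df, duration_col="follow_up_years", event_col="event")
cph.print_summary()  # the exp(coef) column is the hazard ratio for each covariate
```

Adjusting for confounders, as the study reports doing, amounts to adding further covariate columns before fitting.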
Instruction: Should few retrieved oocytes be as an indication for intracytoplasmic sperm injection? Abstracts: abstract_id: PUBMED:22949362 Should few retrieved oocytes be as an indication for intracytoplasmic sperm injection? Objective: To reevaluate whether relatively few oocytes obtained in one cycle are an indication for intracytoplasmic sperm injection (ICSI). Methods: A total of 406 cycles with three or fewer retrieved oocytes, performed in 396 couples without male-factor infertility, were retrospectively reviewed. Cycles were classified into three groups by fertilization technique: the in vitro fertilization (IVF) group, insemination with conventional IVF; the ICSI group, insemination with ICSI even though semen parameters were normal; and the rescue ICSI group, re-insemination with ICSI after conventional IVF failure. Results: The ICSI group resulted in higher normal fertilization compared with the conventional IVF group. Correspondingly, the cycle cancellation rate was decreased in the ICSI group, though the difference was not statistically significant. The clinical pregnancy rate and implantation rate were lower in the ICSI group compared with the conventional IVF group. Rescue ICSI was a method to avert total fertilization failure in conventional IVF, increasing fertilization and ensuring embryo availability for transfer, but normal fertilization was lowest due to delayed insemination and the chance of pregnancy was very low. Conclusions: Obtaining only a few oocytes in one cycle is not considered an indication for ICSI when the sperm sample is apparently normal. Rescue ICSI is likewise not recommended if conventional insemination fails. Such patients should not be subjected to the unnecessary costs and potential risks of ICSI. abstract_id: PUBMED:26473112 Pregnancy and fertilization potential of immature oocytes retrieved in intracytoplasmic sperm injection cycles. Objective: The goal of this study was to evaluate the pregnancy potential of immature (metaphase I or germinal vesicle stage) oocytes retrieved in intracytoplasmic sperm injection (ICSI) cycles. Methods: A total of 1,871 couples with infertility underwent 2,984 ICSI cycles. Cycles in which three or fewer oocytes were retrieved were included in this study in order to evaluate the pregnancy potential of immature oocytes. Cycles were divided into five groups (groups I-V), according to the maturation status of the oocytes at the time of cumulus cell removal and ICSI. The fertilization and pregnancy rates after ICSI were analyzed and compared among the study groups based on the maturation status of the retrieved oocytes. Results: The retrieval of only immature oocytes was associated with a significant decrease in the fertilization rate (76.1%±37.3% vs. 49.0%±49.1%, 66.7%±48.7%; group I vs. group II, group III, respectively) and the average number of transferred embryos (1.5±0.7 vs. 1.1±0.4, 1.1±0.6). The cycle cancellation rate was significantly higher when only immature oocytes were retrieved. The clinical pregnancy rate decreased significantly when the transferred embryos had originated from immature oocytes (16.9% vs. 10.3%, 1.2%). Conclusion: The fertilization and pregnancy potential of immature oocytes retrieved in ICSI cycles were inferior to those of mature oocytes. Therefore, increasing the number of injectable oocytes and transferrable embryos by using immature oocytes after their spontaneous in vitro maturation does not necessarily improve pregnancy outcomes.
abstract_id: PUBMED:32510642 Spermatozoa retrieved by electroejaculation: Should we prefer fresh or cryopreserved spermatozoa for intracytoplasmic sperm injection? We aim to evaluate our experience, comparing intracytoplasmic sperm injection (ICSI) outcomes of cycles using fresh versus thawed electroejaculated spermatozoa. All consecutive couples undergoing ICSI cycles using electroejaculated spermatozoa during a 16-year period were evaluated. Embryological/laboratory variables of the ICSI cycles were assessed and compared between those utilising fresh (fresh group) versus thawed (thawed group) electroejaculated spermatozoa. Fifty-seven couples were evaluated: 30 used fresh electroejaculated spermatozoa in 55 ICSI cycles, while 27 used a thawed sperm sample in 41 ICSI cycles. There were no between-group differences in the mean number of oocytes retrieved per oocyte retrieval or the percentage of MII oocytes. The fresh group demonstrated significantly higher fertilisation (71.5% vs. 64.1%, respectively, p < .05), top-quality embryos (66.5% vs. 54.9%, respectively, p < .02), clinical pregnancy per transfer (41.3% and 21.2%, respectively, p < .05) and cumulative clinical pregnancy (58.2% vs. 26.8%, respectively, p < .001) rates, as compared to the thawed group. Independent of the source of spermatozoa used, no pregnancy was achieved following ICSI utilising immotile spermatozoa. In conclusion, outcomes of ICSI cycles using ejaculated spermatozoa of patients suffering from neurologic or psychogenic anejaculation are reassuring. The use of fresh ejaculated spermatozoa retrieved on the day of the female spouse's oocyte retrieval might improve outcome. Whenever thawed electroejaculated spermatozoa yield no motile spermatozoa, emergency electroejaculation is mandatory. abstract_id: PUBMED:36762323 Pregnancy outcomes in infertile patients undergoing intracytoplasmic sperm injection and two interval programs of hCG-to-oocyte aspiration. This study aimed to determine the outcomes of intracytoplasmic sperm injection (ICSI) if oocyte retrieval was done 32-34 hours or 34:05-36 hours after human chorionic gonadotropin (hCG) injection. A randomized sample involving 186 patients with tubal failure was divided into groups A (96 patients) and B (90 patients). Intracytoplasmic sperm injection was performed on all patients according to described protocols. The number of oocytes retrieved, oocyte cumulus complex quality, number of fertilized eggs, and pregnancy rates were compared between groups. The total number of oocytes retrieved in group B was higher than in group A, but the difference was not statistically significant (P=0.068). The oocyte maturation rate was also significantly higher in the long-interval group B than in the short-interval group A (P=0.039). There was a significant difference between the two groups in terms of fertilization rate (P=0.040), and the pregnancy rate in group B was higher than that in group A, but the difference was not significant (P=0.055). The prolonged interval could also increase the pregnancy rate, although not significantly. It seems that if the interval between hCG priming and oocyte retrieval is prolonged, the numbers of oocytes retrieved, mature (MII) oocytes, and fertilized oocytes increase.
abstract_id: PUBMED:38228973 Reproductive Outcomes of Conventional In Vitro Fertilization and Intracytoplasmic Sperm Injection in Patients with Non-Severe Male Infertility Across Poor and Different Sub-Optimal Ovarian Response Categories: A Cohort Study Based on 30,352 Fresh Cycles from 2009-2019. Reproductive problems are becoming increasingly common under the influence of economic, social and many other factors. Originally introduced for managing male factor infertility, intracytoplasmic sperm injection has become the most commonly used fertilization treatment in the world, with broadened indications including low oocyte yield and prior fertilization failure with conventional in vitro fertilization. However, academic evidence for better live-birth outcomes of intracytoplasmic sperm injection over conventional in vitro fertilization is limited. Thus, we aimed to compare the reproductive outcomes of conventional in vitro fertilization and intracytoplasmic sperm injection in patients with non-severe male factor infertility across poor and different sub-optimal ovarian response categories. The fertilization rate, implantation rate, clinical pregnancy rate, live birth rate and other obstetric outcomes were mainly compared. Our results showed that, independent of the number of oocytes retrieved, intracytoplasmic sperm injection significantly increased the fertilization rate, while conventional in vitro fertilization cycles showed a higher implantation rate, clinical pregnancy rate, and live birth rate. No differences were observed in most obstetric outcomes. Our study indicates that poor ovarian response is not an indication for intracytoplasmic sperm injection in couples with non-severe male infertility. abstract_id: PUBMED:12419038 Fertilization, embryonic development and pregnancy losses with intracytoplasmic sperm injection for surgically-retrieved spermatozoa. Prior to the development of intracytoplasmic sperm injection (ICSI), azoospermic and severely oligozoospermic men had little to no chance of having a biological child. In this study, ICSI outcomes in 454 transfers with ejaculated spermatozoa and 59 transfers with surgically retrieved spermatozoa were evaluated. The normal fertilization rate was 75% of 5995 oocytes using ejaculated spermatozoa and 73% of 751 oocytes for surgically retrieved spermatozoa, with ongoing pregnancy rates of 53% (242/454) and 61% (36/59), respectively. Use of surgically retrieved spermatozoa was associated with significantly (P < 0.05) higher rates of 1PN oocytes (6.1%, 46/751), severely fragmented embryos (8.8%, 46/550) and pregnancy loss (11%, 4/36). When using ejaculated spermatozoa, the incidence of 1PN oocytes, severely fragmented embryos and pregnancy loss was 2.9% (177/5995), 4.5% (200/4365) and 2.4% (6/242), respectively. abstract_id: PUBMED:26627517 FSH/LH ratio in females and intracytoplasmic sperm injection. Objective: To observe the effect of the follicle stimulating hormone-luteinizing hormone ratio on oocyte parameters, embryo quality, implantation and clinical pregnancy rate after intracytoplasmic sperm injection. Methods: The retrospective study was conducted at a fertility clinic in Islamabad, Pakistan, and comprised data from females with primary infertility who underwent intracytoplasmic sperm injection from June 2011 to March 2013. All subjects had a duration of infertility of more than two years, and the age range was 20-40 years. Follicle stimulating hormone and luteinizing hormone were estimated by enzyme-linked immunosorbent assay on day 3 of the cycle, and the ratio was calculated.
Groups were stratified on the basis of the median ratio into group I (<1.26) and group II (>1.26). SPSS 20 was used for data analysis. Results: Of the 282 females, 143 (51%) were in group I and 139 (49%) in group II. Pregnancy was achieved by 79 (55%) and 22 (16%) females in groups I and II, respectively. The numbers of retrieved, metaphase and fertilised oocytes and cleaved embryos, as well as endometrial thickness, were significantly greater in group I (p < 0.0001). Conclusions: A follicle stimulating hormone-luteinizing hormone ratio of less than 1.26 was associated with good oocyte parameters, top-quality embryos and implantation after intracytoplasmic sperm injection. abstract_id: PUBMED:33349808 The effect of seasonality on reproductive outcome of patients undergoing intracytoplasmic sperm injection: A descriptive cross-sectional study. Background: There is conflicting evidence regarding the impact of season on assisted reproductive technology outcomes. Objective: To retrospectively compare three-year outcomes of women undergoing their first intracytoplasmic sperm injection cycle across seasons. Materials And Methods: In this descriptive cross-sectional study, 3,670 women who underwent their first intracytoplasmic sperm injection cycle at Mehr Medical Institute, Rasht, Iran between April 2010 and May 2014 were studied. Women were divided into four groups according to the day of oocyte retrieval as: spring (n = 808), summer (n = 994), autumn (n = 1066), and winter (n = 802). Basal and stimulation characteristics were compared among groups. Results: While sperm concentration and motility were significantly lower during summer, the total numbers of retrieved and metaphase II oocytes were significantly higher (p = 0.0001, p = 0.0001, p = 0.004, p = 0.02, respectively). The fertilization rate was significantly higher during autumn (p = 0.0001). Also, the number of high-quality transferred embryos was significantly higher during summer and winter (p = 0.03). A similar pattern was observed in implantation rate and pregnancy over the four seasons. Conclusion: Although intracytoplasmic sperm injection minimizes the seasonal effect on pregnancy outcome, changes in pregnancy rate still occur among different seasons without a particular pattern. It seems that the season in which assisted reproductive technology procedures are performed should be considered an influential factor.
The results showed that oocyte morphology did not affect embryo quality. The number of gonadotrophin injections used showed a direct relation with the presence of a large perivitelline space. No significant difference was observed among the four groups (women with only normal morphologic oocytes [group 1], women with only extracytoplasmic dysmorphic oocytes [group 2], women with only cytoplasmic dysmorphic oocytes [group 3] and women with only oocytes containing multiple dysmorphic features [group 4]) in terms of implantation and clinical pregnancy rates. Conclusions: Metaphase II oocyte morphology had minor impacts on fertilization rate, pronuclear morphology and embryo quality in women with normal ovarian response. abstract_id: PUBMED:9763816 The ICSI (Intracytoplasmic Sperm Injection) ICSI (Intracytoplasmic Sperm Injection) is the most recent assisted reproduction technique (ART), and it already appears to be mature. Analysis of the results presented by researchers over the years has shown that the most specific indication for this ART is couple sterility involving serious male pathology, up to ejaculatory azoospermia, in which MESA, PESA or TESE can be performed. Any kind of sterility could be addressed with ICSI, whose only currently known limitation is the advanced technology, and therefore the high costs, involved. The percentage of oocytes that undergo ICSI without being damaged varies from 87% to 94%, and the fertilization rate varies from 33% to 71%. The transfer rate is 59-100%. The pregnancy rate ranges from 12% to 40% per couple and from 11% to 41% per transfer. Since miscarriage rates are similar to those of the normal population (10-15%), ICSI is currently the assisted fertilization technique with the highest incidence of pregnancies and "take-home babies". The sex ratio, the incidence of malformations and the types of malformations correspond to those observed in the population with spontaneous pregnancies. Since there is no natural selection of the gametes in ICSI, one may be sure that when spermatozoa with any kind of pathology are injected, the pregnancy does not take place at all. Answer: Based on the information provided in the abstracts, few retrieved oocytes should not be considered an indication for intracytoplasmic sperm injection (ICSI) when the sperm sample is apparently normal. A study involving 406 cycles with three or fewer retrieved oocytes in non-male infertile couples found that while the ICSI group had higher normal fertilization compared to the conventional IVF group, the clinical pregnancy rate and implantation rate were lower in the ICSI group. Rescue ICSI, which is re-insemination with ICSI after conventional IVF failure, was not recommended as it resulted in the lowest normal fertilization and very little chance of pregnancy (PUBMED:22949362). Furthermore, another study showed that the fertilization potential and pregnancy potential of immature oocytes retrieved in ICSI cycles were inferior to those of mature oocytes, indicating that using immature oocytes after their spontaneous in vitro maturation does not necessarily improve pregnancy outcomes (PUBMED:26473112).
Additionally, a cohort study based on 30,352 fresh cycles from 2009-2019 indicated that poor ovarian response is not an indication for ICSI in couples with non-severe male infertility, as ICSI significantly increased the fertilization rate but conventional IVF cycles showed a higher implantation rate, clinical pregnancy rate, and live birth rate (PUBMED:38228973). In conclusion, the evidence suggests that having few retrieved oocytes is not in itself an indication for ICSI, especially when sperm parameters are normal. Instead, the decision to use ICSI should be based on other factors, such as male infertility or previous fertilization failures with conventional IVF.
Instruction: HIV incidence and poverty in Manicaland, Zimbabwe: is HIV becoming a disease of the poor? Abstracts: abstract_id: PUBMED:18040166 HIV incidence and poverty in Manicaland, Zimbabwe: is HIV becoming a disease of the poor? Background: In Zimbabwe, socioeconomic development has a complicated and changeable relationship with HIV infection. Longitudinal data are needed to disentangle the cyclical effects of poverty and HIV as well as to separate historical patterns from contemporary trends of infection. Methods: We analysed a large population-based cohort in eastern Zimbabwe. The wealth index was measured at baseline on the basis of household asset ownership. The associations of the wealth index with HIV incidence and mortality, sexual risk behaviour, and sexual mixing patterns were analysed. Results: The largest decreases in HIV prevalence were in the top third of the wealth index distribution (tercile) in both men (25%) and women (21%). In men, HIV incidence was significantly lower in the top wealth index tercile (15.4 per 1000 person-years) compared with the lowest tercile (27.4 per 1000 person-years), especially among young men. Mortality rates were significantly lower in both men and women of higher wealth index. Men of higher wealth index reported more sexual partners, but were also more likely to use condoms. Better-off women reported fewer partners and were less likely to engage in transactional sex. Partnership data suggest increasing like-with-like mixing in higher wealth groups, resulting in a reduced probability of serodiscordant couples. Conclusion: HIV incidence and mortality, and perhaps sexual risk, are lower in higher socioeconomic groups. Reduced vulnerability to infection, led by the relatively well off, is a positive trend, but in the absence of analogous developments in vulnerable groups, HIV threatens to become a disease of the poor. abstract_id: PUBMED:26593453 The effects of household wealth on HIV prevalence in Manicaland, Zimbabwe - a prospective household census and population-based open cohort study. Introduction: Intensified poverty arising from economic decline and crisis may have contributed to reductions in HIV prevalence in Zimbabwe. Objectives: To assess the impact of the economic decline on household wealth and prevalent HIV infection using data from a population-based open cohort. Methods: Household wealth was estimated using data from a prospective household census in Manicaland Province (1998 to 2011). Temporal trends in summed asset ownership indices for sellable, non-sellable and all assets combined were compared for households in four socio-economic strata (small towns, agricultural estates, roadside settlements and subsistence farming areas). Multivariate logistic random-effects models were used to measure differences in individual-level associations between prevalent HIV infection and place of residence, absolute wealth group and occupation. Results: Household mean asset scores remained similar at around 0.37 (on a scale of 0 to 1) up to 2007 but decreased to below 0.35 thereafter. Sellable assets fell substantially from 2004 while non-sellable assets continued increasing until 2008. Small-town households had the highest wealth scores but the gap to other locations decreased over time, especially for sellable assets. Concurrently, adult HIV prevalence fell from 22.3 to 14.3%. HIV prevalence was highest in better-off locations (small towns) but differed little by household wealth or occupation.
Initially, HIV prevalence was elevated in women from poorer households and lower in men in professional occupations. However, most recently (2009 to 2011), men and women in the poorest households had lower HIV prevalence and men in professional occupations had similar prevalence to unemployed men. Conclusions: The economic crisis drove more households into extreme poverty. However, HIV prevalence fell in all socio-economic locations and sub-groups, and there was limited evidence that increased poverty contributed to HIV prevalence decline. abstract_id: PUBMED:26687174 Rethinking the Poverty-disease Nexus: the Case of HIV/AIDS in South Africa. While it is well-established that poverty and disease are intimately connected, the nature of this connection and the role of poverty in disease causation remain contested in scientific and social studies of disease. Using the case of HIV/AIDS in South Africa and drawing on a theoretically grounded analysis, this paper reconceptualises disease and poverty as ontologically entangled. In the context of the South African HIV epidemic, this rethinking of the poverty-disease dynamic enables an account of how social forces such as poverty become embodied in the very substance of disease to produce ontologies of HIV/AIDS unique to South Africa. abstract_id: PUBMED:22964936 Tuberculosis, HIV, and poverty: temporal trends in Brazil, the Americas, and worldwide. Objective: To analyze the temporal trends of the incidence and prevalence of tuberculosis, with and without HIV co-infection, as well as of the associated mortality, in Brazil, the Americas, and worldwide. Methods: We collected data related to tuberculosis, with and without HIV co-infection, between 1990 and 2010, in Brazil, the Americas, and worldwide. Temporal trends were estimated by linear regression. Results: We identified a trend toward a decrease in tuberculosis prevalence and mortality, and that trend was more pronounced in Brazil and the Americas than worldwide. There was also a trend toward an increase in the incidence of tuberculosis/HIV co-infection, as well as in the rates of detection of new cases of active and latent tuberculosis. The incidence of tuberculosis was found to trend downward in Brazil, whereas it trended upward worldwide. Tuberculosis incidence rates correlated positively with poverty rates and with HIV incidence rates. Conclusions: Social inequality and the advent of AIDS are the major factors that aggravate the current situation of tuberculosis. In this context, methodical approaches to the assessment of surveillance activities are welcome, because they will identify situations in which the reported tuberculosis data do not reflect the true incidence of this disease.
9,109 of 11,453 (79.5%) eligible adults (men 17-54 years; women 15-44 years) were recruited in a phased household census between July 1998 and January 2000. Five rounds of follow-up of the prospective household census and the open cohort were conducted at 2-year or 3-year intervals between July 2001 and November 2013. Follow-up rates among surviving residents ranged between 77.0% (over 3 years) and 96.4% (2 years). Findings To Date: HIV prevalence was 25.1% at baseline and had a substantial demographic impact, with 10-fold higher mortality in HIV-infected adults than in uninfected adults and a reduction in the growth rate in the worst-affected areas (towns) from 2.9% to 1.0% per annum. HIV infection rates have been highest in young adults with earlier commencement of sexual activity and in those with older sexual partners and larger numbers of lifetime partners. HIV prevalence has since fallen to 15.8% and HIV incidence has also declined from 2.1% (1998-2003) to 0.63% (2009-2013) largely due to reduced sexual risk behaviour. HIV-associated mortality fell substantially after 2009 with increased availability of ART. Future Plans: We plan to extend the cohort to measure the effects on the epidemic of current and future HIV prevention and treatment programmes. Proposals for access to these data and for collaboration are welcome. abstract_id: PUBMED:28761606 Malaria morbidity and mortality trends in Manicaland province, Zimbabwe, 2005-2014. Introduction: Zimbabwe targets reducing malaria incidence from 22/1000 in 2012 to 10/1000 by 2017, and malaria deaths to near zero by 2017. As the country moves forward with its malaria elimination efforts, it is crucial to monitor trends in malaria morbidity and mortality in the affected areas. In 2013, Manicaland Province contributed 51% of all malaria cases and 35% of all malaria deaths in Zimbabwe. This analysis describes the trends in malaria incidence, case fatality and malaria outpatient workload compared to the general outpatient workload. Methods: We analyzed routinely captured malaria data in Manicaland Province for the period 2005 to 2014. Epi Info version 7 was used to calculate chi-square tests for trend, and Microsoft Excel was used to generate graphs. Permission to analyze the data was sought and granted by the Provincial Medical Directorate Institutional Review Board of Manicaland and the Health Studies office. Results: Malaria morbidity data for the period 2005-2014 were reviewed and a total of 947,462 cases were confirmed during this period. However, malaria mortality data were only available for the period 2011-2014, during which 696 deaths were cumulatively reported. Malaria incidence increased from 4.4/1,000 persons in 2005 to 116.3/1,000 persons in 2014 (p < 0.001). The incidence was higher among females compared to males (p-trend < 0.001) and among the above-five-years age group compared to the under-fives (p-trend < 0.001). The proportion of all Outpatient Department attendances that were malaria cases increased 30-fold from 0.3% in 2005 to 9.1% in 2014 (p-trend < 0.001). The Case Fatality Rate also increased 2-fold from 0.05 in 2011 to 0.1 in 2014 (p-trend < 0.001). Conclusion: Despite current malaria control strategies, the morbidity and mortality of malaria increased over the period under review. There is a need for further strengthening of malaria control interventions to reduce the burden of the disease.
We analyzed demographic and behavioral risk factors for HIV seropositivity using data from 3601 clients of the main HIV counseling and testing clinic for high-risk people in Seattle, Washington, USA. Clients with lower income were found to be more likely to be HIV seropositive, both before and after controlling for other demographic and risk factors with logistic regression. This result supports the hypothesis that the impoverished are at increased risk for HIV infection due to the physical and social circumstances in which their poverty places them. These may include poor access to risk-reduction information and less support for implementation of risk-reduction strategies. abstract_id: PUBMED:10533533 Poverty, disease progression and employment among people living with HIV/AIDS in Australia. A national survey of 925 people living with HIV/AIDS (PLWHA) in Australia is used to examine the relationship between disease progression, employment status, poverty and economic hardship. While disease progression has some impact on economic hardship, employment status is found to be the strongest determinant of both poverty and economic hardship. The most commonly cited reasons for leaving work were psychosocial (71%), with declining health cited by half of respondents. It is therefore argued that psychosocial issues are at least as important as changes in health in causing unemployment and therefore poverty and economic hardship among PLWHA in Australia. abstract_id: PUBMED:12846265 The world-wide increase in tuberculosis: how demographic changes, HIV infection and increasing numbers in poverty are increasing tuberculosis. After more than a century of decline, in the mid-1980s tuberculosis began to increase in some developed countries. Health care workers were then forced to look to the developing world, where they found tuberculosis to be out of control in many countries. It is now appreciated that tuberculosis is not only increasing globally but is likely to do so beyond the next decade for three principal reasons. First, demographics: the expected population increase will be greatest in areas of the world where tuberculosis is most prevalent, particularly middle Africa and South Asia. Secondly, the increase of HIV, which renders the host uniquely susceptible to tuberculosis, is occurring in the same areas of the world and is already causing an increase in tuberculosis case rates of up to tenfold. Thirdly, as more and more people are forced to live in poverty, where poor nutrition and crowded conditions lead to the spread of tuberculosis, the disease risk will be compounded. Sound medical management, particularly the use of the five components of directly observed therapy, will relieve the situation. But until world conditions of poverty and HIV spread are addressed, it is unlikely that tuberculosis can be controlled. abstract_id: PUBMED:36149912 Status of the HIV epidemic in Manicaland, east Zimbabwe prior to the outbreak of the COVID-19 pandemic. Background: Manicaland province in eastern Zimbabwe has a high incidence of HIV. Completion of the seventh round of the Manicaland Survey in 2018-2019 provided the opportunity to assess the state of the epidemic prior to the start of the COVID-19 pandemic. The study aims were to: a) estimate HIV seroprevalence and assess whether prevalence has declined since the last round of the survey (2012-2013), b) describe and analyse the socio-demographic and behavioural risk factors for HIV infection and c) describe the HIV treatment cascade.
Methods: Participants were administered individual questionnaires collecting data on socio-demographic characteristics, sexual relationships, HIV prevention methods and treatment access, and were tested for HIV. Descriptive analyses were followed by univariate and multivariate analyses of risk factors for HIV seropositivity using logistic regression modelling based on the proximate-determinants framework. Results: HIV prevalence was 11.3% [95% CI: 10.6-12.0] and was higher in females than males up to 45-49 years. Since 2012-2013, HIV prevalence has significantly declined in 30-44-year-old males and 20-44-year-old females. The HIV epidemic has aged since 2012-2013, with an increase in the mean age of HIV-positive persons from 38 to 41 years. Socio-demographic determinants of HIV prevalence were church denomination in males, site-type, wealth-status, employment sector and alcohol use in females, and age and marital status in both sexes. Behavioural determinants associated with increased odds of HIV were a higher number of regular sexual partners (lifetime), non-regular sexual partners (lifetime) and condom use in both sexes, and early sexual debut and concomitant STIs in females; medical circumcision was protective in males. HIV status awareness among participants testing positive in our study was low at 66.2%. ART coverage amongst all participants testing positive for HIV in our study was 65.0% and was lower in urban areas than rural areas, particularly in males. Conclusions: Prevalence has declined, and ART coverage increased, since 2012-2013. The majority of the associations with prevalence hypothesised by the theoretical framework were not observed in our data, likely due to underreporting of sexual risk behaviours or the treatment-as-prevention effect of ART curtailing the probability of transmission despite high levels of sexual risk behaviour. Further reductions in HIV incidence require strengthened primary prevention, HIV testing and linkage to risk behaviour counselling services. Our results serve as a valuable baseline against which to measure the impact of the COVID-19 pandemic on HIV prevalence and its determinants in Manicaland, Zimbabwe, and target interventions appropriately. Answer: The relationship between HIV incidence and poverty in Manicaland, Zimbabwe, suggests that HIV may be increasingly becoming a disease of the poor. Analysis of a large population-based cohort in eastern Zimbabwe revealed that the largest decreases in HIV prevalence were observed in the wealthiest third of the population, with men and women in the top wealth index tercile experiencing significantly lower HIV incidence and mortality rates compared to those in the lowest tercile (PUBMED:18040166). Men of higher wealth reported more sexual partners but were also more likely to use condoms, while better-off women reported fewer partners and were less likely to engage in transactional sex. This suggests that individuals in higher socioeconomic groups may have reduced vulnerability to HIV infection due to safer sexual practices and reduced probability of serodiscordant coupling. However, despite the economic crisis driving more households into extreme poverty, HIV prevalence fell across all socio-economic locations and sub-groups, with limited evidence that increased poverty contributed to the decline in HIV prevalence (PUBMED:26593453).
Initially, HIV prevalence was higher in women from poorer households and lower in men in professional occupations, but in the most recent data (2009 to 2011), the poorest households had lower HIV prevalence, and men in professional occupations had similar prevalence to unemployed men. The complex interplay between poverty and disease is further highlighted by the reconceptualization of the poverty-disease dynamic, which suggests that social forces such as poverty become embodied in the very substance of disease, producing unique ontologies of HIV/AIDS in South Africa (PUBMED:26687174). This entanglement of poverty and disease underscores the importance of considering socioeconomic factors in understanding and addressing the HIV epidemic. In summary, while there is evidence that HIV incidence and mortality are lower in higher socioeconomic groups in Manicaland, Zimbabwe, the overall decline in HIV prevalence across different socio-economic strata suggests that the disease is not solely becoming one of the poor. The situation is nuanced, and the relationship between HIV and poverty is complex and evolving.
Instruction: Does socioeconomic position moderate the impact of child behaviour problems on maternal health in South Asian families with a child with intellectual disabilities? Abstracts: abstract_id: PUBMED:19234974 Does socioeconomic position moderate the impact of child behaviour problems on maternal health in South Asian families with a child with intellectual disabilities? Background: Previous research has reported robust associations between child problem behaviours and parental health in families with a child with intellectual disabilities. A recent study found that socioeconomic position may moderate this relationship. This paper examines moderating effects using more diverse indicators of socioeconomic position and parental health, and in a cross-cultural context. Methods: Structured interviews were conducted with 123 UK South Asian mothers of a child with severe intellectual disabilities. Results: Socioeconomic position did not moderate the association between child problem behaviours and maternal distress. Socioeconomic position did moderate associations between child problem behaviours and maternal anxiety, depression and self-assessed health. Conclusions: Future research should systematically investigate the influence of socioeconomic position on family functioning. abstract_id: PUBMED:23464804 Attributions, criticism and warmth in mothers of children with intellectual disability and challenging behaviour: a pilot study. Background: Associations between parental expressed emotion (EE) or parental attributions and the problem behaviours of children with intellectual disability (ID) have been explored in ID research. However, a more detailed examination of the attributional model of EE has not been reported. In the present study, we partially replicated and extended research focused on mothers of typically developing children with behaviour problems. Methods: Twenty-seven mothers of children with ID and behaviour problems aged 4-9 years were interviewed about the most problematic behaviours exhibited by their child, and completed a Five Minute Speech Sample. Interview transcripts and speech samples were coded for maternal EE and spontaneous causal attributions regarding the child's behaviour problems. Data were also collected on maternal well-being, and the child's behaviour problems. Results: Mothers typically made attributions that were internal to the child, controllable by the child, personal to the child and stable for the child. Maternal attributions of being able to control the child's behaviour were associated with high maternal criticism and low warmth. Maternal depression was more strongly associated with the child's behaviour problems when mothers were coded as high in criticism or low in warmth. Conclusions: Patterns of maternal attributions about their child's behaviour problems and their consequences for maternal well-being and maternal-child relationships require more research attention. Implications for practice are discussed, including the potential for maternal attributions to be incompatible with the focus of positive behaviour supports offered to families. abstract_id: PUBMED:18513339 Predicting maternal parenting stress in middle childhood: the roles of child intellectual status, behaviour problems and social skills. Background: Parents of children with intellectual disabilities (ID) typically report elevated levels of parenting stress, and child behaviour problems are a strong predictor of heightened parenting stress.
Interestingly, few studies have examined child characteristics beyond behaviour problems that may also contribute to parenting stress. The present longitudinal study examined the contribution of child social skills to maternal parenting stress across middle childhood, as well as the direction of the relationship between child social skills and parenting stress. Method: Families of children with ID (n = 74) or typical development (TD) (n = 115) participated over a 2-year period. Maternal parenting stress, child behaviour problems and child social skills were assessed at child ages six and eight. Results: Child social skills accounted for unique variance in maternal parenting stress above and beyond child intellectual status and child behaviour problems. As the children matured, there was a significant interaction between child social skills and behaviour problems in predicting parenting stress. With respect to the direction of these effects, a cross-lagged panel analysis indicated that early parenting stress contributed to later social skills difficulties for children, but the path from children's early social skills to later parenting stress was not supported, once child behaviour problems and intellectual status were accounted for. Conclusion: When examining parenting stress, child social skills are an important variable to consider, especially in the context of child behaviour problems. Early parenting stress predicted child social skills difficulties over time, highlighting parenting stress as a key target for intervention. abstract_id: PUBMED:31746131 Attachment and child behaviour and emotional problems in autism spectrum disorder with intellectual disability. Background: Behaviour and emotional problems are highly prevalent in children with autism spectrum disorder (ASD). In typically developing children, attachment quality acts as a risk/protective factor for behavioural outcomes and adjustment, warranting investigation in children with ASD. Method: We investigated the relationship between attachment and child behaviour and emotional problems in children with ASD and comorbid intellectual disability. Data were collected from parent-child dyads where children were diagnosed with ASD and ID (n = 28) or other developmental disabilities (n = 20). Results: Children with ASD had higher levels of behaviour and emotional problems and more attachment difficulties than children with other developmental disabilities. Poorer attachment quality contributed uniquely to the variance in child behaviour and emotional problems. Conclusions: Interventions targeting behaviour and emotional problems in children with ASD may benefit from an attachment model which addresses the child's difficulty in using caregivers as a coregulatory agent of emotions. abstract_id: PUBMED:24984053 Maternal well-being and child behavior in families with fragile X syndrome. The purpose of this study was to examine the bidirectional relationships between maternal mental health status, maternal stress, family environment and behavioral functioning of children with fragile X syndrome (FXS), the leading cause of inherited intellectual disability. Children with FXS commonly demonstrate challenging behavior related to anxiety, attention, and aggression, whereas mothers of children with FXS have been identified as susceptible to mental health challenges due to their status as genetic carriers of the FXS premutation, as well as the environmental stressors of raising children with special needs.
The longitudinal design of this study builds upon prior work that established a concurrent relationship among these factors in families of children with other intellectual disorders. Findings indicated that maternal mental health status was not significantly related to changes in levels of child challenging behavior, heightened child challenging behavior was related to improvements in maternal depression over time, and heightened levels of child challenging behavior were related to increased feelings of maternal closeness toward the child over time. The unexpected nature of the results regarding maternal depression and closeness provides new and more complex hypotheses about how mothers of special needs children demonstrate adaptation and resilience. The findings have implications for maternal and familial mental health treatment as well as future research. abstract_id: PUBMED:38009976 Child characteristics associated with child quality of life and parenting stress in Angelman syndrome. Background: Angelman syndrome (AS) is a rare neurodevelopmental disorder characterised by severe intellectual disability, movement disorder, epilepsy, sleeping problems, and behavioural issues. Little is known about child health-related quality of life (HRQoL) in AS. AS family studies have reported elevated parenting stress and a high impact of the child's syndrome on the parent. It is unclear which factors influence child HRQoL and parenting stress/impact in AS. Methods: We collected data prospectively through standardised clinical assessments of children with AS at the ENCORE Expertise centre for Angelman Syndrome at the Erasmus MC Sophia Children's Hospital. A linear regression analysis was conducted for the following outcome variables: (1) child HRQoL (Infant and Toddler Quality of Life Questionnaire); (2) the impact of the child's syndrome on the parent (Infant and Toddler Quality of Life Questionnaire); and (3) parenting stress (Parenting Stress Index). Predictor variables were child genotype, epilepsy, sleeping problems (Sleep Disturbance Scale for Children), cognitive developmental level (Bayley Cognition Scale), autistic features (Autism Diagnostic Observation Schedule) and emotional/behavioural problems (Child Behaviour Checklist). Covariates were sex, age and socio-economic status. Results: The study sample consisted of 73 children with AS, mean age = 9.1 years, range = 2-18 years. Emotional/behavioural problems were the strongest significant predictor of lowered child HRQoL. Internalising problems were driving this effect. In addition, having the deletion genotype and higher age were related to lower child HRQoL. Sleeping problems were related to a higher impact of the child's syndrome on the parent. Finally, emotional/behavioural problems were associated with higher parenting stress. Cognitive developmental level, autistic features and epilepsy were not significant predictors of child HRQoL and parenting stress/impact. Conclusions: These results suggest that interventions aimed at increasing child HRQoL and decreasing parenting stress/impact in AS should focus on child emotional/behavioural problems and sleeping problems, using a family-centred approach.
Methods: A total of 120 families with children aged 2-13 years, reporting moderate to severe child behavioural sleep problems, were recruited from a hospital sleep clinic waitlist or the community. Children from non-English-speaking families and those with known intellectual disability (IQ < 70) or severe medical problems were excluded. Tailored behavioural sleep strategies were delivered to primary care givers via a smart phone app and complementary website. Eligible families completed a baseline questionnaire and child 'sleep check', then received the digital sleep intervention for 5 weeks, and then completed a post-intervention questionnaire. Outcomes: care giver report of child sleep as no/mild versus moderate/severe problem over the past month (primary outcome); problem child sleep patterns (Brief Infant Sleep Questionnaire or Child Sleep Habits Questionnaire), child temperament, care giver mental health (Kessler 6), care giver sleep, health service use for their child's sleep and time off work/activities to access services. Results: At follow-up, care givers reported fewer moderate/severe child sleep problems (falling from 84.6% to 40.7%), improved problem child sleep patterns, better temperament and improved care giver mental health. Care giver sleep quality and quantity remained unchanged. Health service use (averaged over a 6-month period pre- and post-intervention) fell from 18.9% pre- to 14.1% post-intervention. Conclusion: A digital sleep intervention appears promising in improving sleep in children with moderate/severe behavioural sleep problems, and care giver mental health. It may be a useful alternative to face-to-face management of behavioural sleep problems.
Results: Parents experienced more stress with regard to their children than towards their own functioning and situation. Parenting stress was less in parents who were not experiencing financial hardship. Child behaviour problems were associated with high child-related parenting stress, not parent-related parenting stress. Large support networks decreased the association between child behaviour problems and child-related parenting stress. Financial resources did not significantly moderate the association. Conclusions: Parenting stress among parents with ID is focused on problems with the child, especially when little social support is available. abstract_id: PUBMED:30230656 Family impact of childhood neurodevelopmental disability: considering adaptive and maladaptive behaviour. Background: The aim of the current study was to identify functional predictors of perceived impact of childhood disability among families of children with neurodevelopmental disorders and disabilities. We first examined the relationship between sub-domains of adaptive and problematic behaviour and perceived family impact. Second, we examined whether the same sub-domains would emerge as significant after controlling for the impact of child diagnosis, including autism spectrum disorder, cerebral palsy and intellectual disability. Method: Caregivers of 216 children and adolescents (M = 8.17 years) with neurodevelopmental disorder and disability completed measures of children's practical, conceptual and social skills (i.e. adaptive behaviour), behaviour problems and positive and negative family impact. Results: Indices of child adaptive and problematic behaviour were only significantly associated with perceived negative family impact. Children's practical and social skills, as well as emotional symptoms, emerged as significant predictors of perceived negative family impact, with emotional symptoms accounting for greatest variance. Including diagnosis in our statistical models did not explain additional variance above and beyond these particular sub-domains of child functioning. Conclusions: The study findings suggest that it is not children's most impaired domains of functioning that are perceived as significantly impactful by the family. The findings highlight the importance of devoting consideration to the ways in which the functional limitations experienced by children with chronic developmental health conditions similarly impact family life and well-being, regardless of disorder designation. Answer: Yes, socioeconomic position does moderate the impact of child behaviour problems on maternal health in South Asian families with a child with intellectual disabilities. According to the study conducted with 123 UK South Asian mothers of a child with severe intellectual disabilities, socioeconomic position did not moderate the association between child problem behaviors and maternal distress. However, it did moderate associations between child problem behaviors and maternal anxiety, depression, and self-assessed health (PUBMED:19234974). This suggests that while socioeconomic position may not influence the relationship between child behavior problems and maternal distress, it does have a moderating effect on other aspects of maternal health, such as anxiety, depression, and the mother's perception of her own health.
Instruction: Pneumoretroperitoneum and pneumomediastinum after Stapled Anopexy: Is conservative treatment possible? Abstracts: abstract_id: PUBMED:27842017 Pneumoretroperitoneum and pneumomediastinum after Stapled Anopexy: Is conservative treatment possible? Introduction: Stapled anopexy is considered the gold standard in treating haemorrhoidal disease associated with mucosal prolapse, but severe complications have been described. Among these, a minimal anastomotic leakage may lead to gas spreading into surrounding soft tissues. Case Report: We report the case of a 61-year-old male who developed pneumoretroperitoneum and pneumomediastinum two days after a Stapled Anopexy. CT scans showed a minimal leakage with no abscess. The patient was successfully treated by bowel rest, antibiotics and total parenteral nutrition, avoiding a surgical approach. Conclusion: A minimal anastomotic leakage following Stapled Anopexy, when leading to air diffusion into soft tissues and not associated with abscess or peritonitis, may be treated conservatively, avoiding ileostomy or colostomy. Key Words: Anastomotic leakage, Pneumoretroperitoneum, Stapled Anopexy. abstract_id: PUBMED:18998372 Conservative treatment of pneumatosis intestinalis and pneumoperitoneum after bone marrow transplantation. Objective: Pneumatosis intestinalis (PI) is a radiological sign that can be accompanied by pneumoperitoneum. It is not exclusive to neonatal necrotizing enterocolitis. It can also appear after bone marrow transplantation. We describe our experience with 6 patients diagnosed with PI after bone marrow transplantation (BMT), all of whom were treated conservatively, without surgery and with good outcomes. Patients And Method: We reviewed the patients diagnosed with PI after BMT in our center from 2000 to 2007. Results: Six patients had 7 episodes of PI, with pneumoperitoneum in 3. All cases had previously developed intestinal graft-versus-host disease. PI was diagnosed from 1 to 4 months after transplantation. At diagnosis, no patient presented peritoneal signs. Computed tomography was used for PI diagnosis, showing colonic predominance (5), pneumomediastinum (1) and retropneumoperitoneum (2). The treatment was conservative, with intestinal rest, antibiotics and total parenteral nutrition. Enteral feeding was initiated progressively between 1 and 2 months after diagnosis, but in one case PI reappeared and conservative treatment had to be restarted. In the other cases, the outcome was very satisfactory, with improvement of the pneumatosis and normal oral feeding, without the need for surgery in any case. Comments: PI with or without pneumoperitoneum is a condition to keep in mind in bone marrow transplantation patients. Pneumoperitoneum with good general condition and no sign of peritonitis is not an indication for surgery in these patients. Conservative treatment with antibiotics and parenteral nutrition allows spontaneous resolution. abstract_id: PUBMED:15654528 Pneumomediastinum after dilatation of anal stricture following stapled hemorrhoidopexy. The stapling procedure is a new technique for the surgical management of prolapsing haemorrhoids. Some articles have reported severe adverse effects of this operation. We describe a case of excessive staple-line stenosis following stapled haemorrhoidopexy. Proctoscopic dilatation resulted in complications of retropneumoperitoneum, pneumomediastinum, subcutaneous emphysema and perianal abscess. Drainage of the abscess was performed, allowing quick recovery.
After discharge from the hospital, the patient continued to perform periodic dilatation. Simple proctoscopic dilatation was conducted in an outpatient setting. abstract_id: PUBMED:28170116 Anorexia nervosa with massive pulmonary air leak and extraordinary propagation. A rare case combining pneumothorax, pneumomediastinum, pneumopericardium, pneumoperitoneum, pneumorrhachis, air in the retroperitoneum and extensive subcutaneous emphysema simultaneously in a severely anorectic male with a BMI of 9.2 (22.8 kg) and repeated vomiting is presented. This unusual condition was treated successfully with a conservative medical approach in a specialized somatic unit for anorexia nervosa. abstract_id: PUBMED:7640458 Conservative management of idiopathic pneumoperitoneum masquerading as peritonitis: report of a case. Pneumoperitoneum is most commonly caused by the perforation of a hollow viscus, in which case an emergency laparotomy is indicated. We report herein the case of a patient who presented with the signs and symptoms of peritonitis, but who was found to have idiopathic pneumoperitoneum which was successfully managed by conservative treatment. A 70-year-old man presented with epigastric pain, nausea, and a severely distended and tympanitic abdomen. Abdominal examination revealed diffuse tenderness with guarding, but no rebound tenderness. He was febrile with leukocytosis and high C-reactive protein. Chest X-ray and abdominal computed tomography demonstrated a massive pneumoperitoneum without pneumothorax, pneumomediastinum, pneumoretroperitoneum, or subcutaneous emphysema, and subsequent examinations failed to demonstrate perforation of a hollow viscus. Thus, a diagnosis of idiopathic pneumoperitoneum was made, and the patient was managed conservatively, which resulted in a successful outcome. This experience and a review of the literature suggest that idiopathic pneumoperitoneum is amenable to conservative management, even when the signs and symptoms of peritonitis are present. abstract_id: PUBMED:2335147 Retroperitoneal, mediastinal and subcutaneous emphysema as a complication of routine upper gastrointestinal endoscopy. Diagnostic endoscopy of the upper gastrointestinal tract is a safe procedure, which, however, is not without complications. We report the rare occurrence of retroperitoneal, mediastinal and subcutaneous emphysema following routine endoscopy, with no obvious evidence of a site of perforation. Possible etiological aspects are discussed. This complication appeared to be a benign clinical condition that was resolved with conservative, nonsurgical treatment. abstract_id: PUBMED:33920850 Pneumomediastinum, Pneumoretroperitoneum, Pneumoperitoneum and Subcutaneous Emphysema Secondary to a Penetrating Anal Injury. Simultaneous occurrence of pneumoperitoneum, pneumoretroperitoneum, pneumomediastinum and subcutaneous emphysema is rare. The most commonly reported mechanisms are barotrauma, thermal injury and instrument puncture caused by colonoscopy. Ectopic air may travel into different body compartments through distinct anatomical fascial planes. Definitive curative treatment involves surgical repair of the bowel wall defect. Conservative treatment is available in selected patients. Here, we present a case of traumatic penetrating rectal injury leading to air in the peritoneum, retroperitoneum, mediastinum, and subcutaneous space, with good recovery under conservative treatment.
abstract_id: PUBMED:30123608 The Close Relationship between Large Bowel and Heart: When a Colonic Perforation Mimics an Acute Myocardial Infarction. Colonoscopic perforation is a serious and potentially life-threatening complication of colonoscopy. Its incidence varies from 0.016% to 0.21% for diagnostic procedures, but may be seen in up to 5% of therapeutic colonoscopies. In cases of extraperitoneal perforation, atypical signs and symptoms may develop. The aim of this report is to raise awareness of the rare clinical features of colonoscopic perforation. A 72-year-old male patient with a past medical history of myocardial infarction presented to the emergency department four hours after a screening colonoscopy with polypectomy, complaining of neck pain, retrosternal oppressive chest pain, dyspnea, and rhinolalia. Right chest wall and cervical subcutaneous emphysema, pneumomediastinum, pneumoretroperitoneum, and bilateral subdiaphragmatic free air were reported on the chest and abdominal X-rays. The patient was treated conservatively, with absolute bowel rest, total parenteral nutrition, and broad-spectrum intravenous antibiotics. Awareness of the potentially unusual clinical manifestations of retroperitoneal perforation following colonoscopy is crucial for the correct diagnosis and prompt management of colonoscopic perforation. Conservative treatment may be appropriate in patients with a properly prepared bowel, hemodynamic stability, and no evidence of peritonitis. Surgical treatment should be considered when abdominal or chest pain worsens, and when a systemic inflammatory response arises during the conservative treatment period. Answer: Yes, conservative treatment is possible for pneumoretroperitoneum and pneumomediastinum after Stapled Anopexy. A case report of a 61-year-old male who developed pneumoretroperitoneum and pneumomediastinum two days after Stapled Anopexy indicates that conservative treatment can be successful. The patient was treated with bowel rest, antibiotics, and total parenteral nutrition, avoiding a surgical approach, and the treatment was successful without the need for ileostomy or colostomy (PUBMED:27842017). This suggests that in cases where a minimal anastomotic leakage leads to air diffusion into soft tissues without associated abscess or peritonitis, conservative management may be a viable option.
Instruction: Do support groups help people with chronic fatigue syndrome and fibromyalgia? Abstracts: abstract_id: PUBMED:16331774 Do support groups help people with chronic fatigue syndrome and fibromyalgia? A comparison of active and inactive members. Objective: To examine the benefits and problems of a chronic fatigue syndrome (CFS) and fibromyalgia (FM) support organization as reported by its participants. Methods: Active members (n = 32) and inactive members or dropouts (n = 135) of a regional support organization for people with CFS and FM completed a 26-item questionnaire by telephone interview or by self-completion and postal return. Results: The most frequently endorsed benefits of membership were illness legitimization (67.8%), finding out helpful new information (66.4%), and feeling understood by others (62.2%). Lower-frequency endorsements were given to: helped to find (35.0%) or deal with (38.5%) doctors, and helped to improve my illness (36.4%). The most frequently reported reasons for dropping out were inconvenient location (37.8%) or time (37.0%), too much negative talk or complaining (33.3%), too sick to attend (28.8%), and illness or coping improvement (29.6% each). The active-member group showed significantly higher (p < 0.04) symptom severity scores and less illness improvement (p < 0.01) in comparison to the inactive/dropout group. Conclusion: This cross-sectional study suggests that support groups for CFS are viewed as helpful by participants on a number of illness-related issues. On the other hand, active members reported greater symptom severity and less illness improvement than inactive members or dropouts. abstract_id: PUBMED:35831253 An online survey of pelvic congestion support group members regarding comorbid symptoms and syndromes. Objectives: Patients with pelvic congestion syndrome (PCS) often report overlapping somatic symptoms and syndromes. The objective of this study was to explore the prevalence of co-existing symptoms and self-reported syndrome diagnoses among women with PCS and to inform future research hypotheses. Methods: A brief online survey was offered to members of a PCS support group website. Responses were assessed for self-reported co-existing symptoms and formal diagnoses, including: chronic fatigue syndrome, fibromyalgia, postural tachycardia syndrome, irritable bowel syndrome, migraines, interstitial cystitis, and temporomandibular joint dysfunction. Results: Of a total of 6000 members, there were 398 respondents; 232 (59%) had not yet been treated for PCS. Among these, the most prevalent co-existing symptoms were as follows: severe fatigue (72%), dizziness (63%), IBS symptoms (61%), brain fog (33%), migraines (49%), polyuria or dysuria (41%), excessive sweating (31%), TMJ pain (31%), and loose skin or lax joints (18%). These are much higher than reported for the general female population. The most commonly self-reported comorbid syndrome diagnoses for the overall group of 398 were: irritable bowel syndrome (29%), fibromyalgia (13%), spinal nerve problems (18%), interstitial cystitis (10%), postural tachycardia syndrome (9%), hypertension (11%), chronic fatigue syndrome (10%), and Ehlers-Danlos syndrome (6%). Other than with hypertension, these rates are variably higher than in the general population. Conclusion: Several self-reported co-existing symptoms and syndromes are more prevalent in members of a PCS support group relative to the reported prevalence in the general population.
More formal investigation is warranted to evaluate this finding and to investigate potential etiologic links. Ehlers-Danlos Syndrome appears to be common in self-identifying PCS women. abstract_id: PUBMED:10782836 Attributions in chronic fatigue syndrome and fibromyalgia syndrome in tertiary care. Objective: To evaluate the attributions of patients with chronic fatigue syndrome (CFS) and fibromyalgia (FM) consulting at a university fatigue and pain clinic. Methods: Consecutive attenders (n = 192) who met the CFS criteria (n = 95) or FM criteria (n = 56) or who had medically unexplained chronic pain and/or fatigue without meeting both criteria (CPF) (n = 41) were evaluated. All subjects completed an extended form of the Cause of Illness Inventory. Descriptive statistics, frequency analyses, chi-square tests, one-way analysis of variance, and sequential Fisher least significant difference tests were performed. Results: In total, 48 patients reported physical causes only and 10 patients psychosocial causes only; the majority (70%) mentioned both types of causes. With regard to the contents, "a chemical imbalance in my body" (61%), "a virus" (51%), "stress" (61%), and "emotional confusion" (40%) were reported most frequently. The diagnostic label did not have a significant influence on the number and type of attributions. Small to moderate effect sizes were registered concerning the association of specific attributions and diagnosis, sex, duration of the symptoms, contact with a self-help group, and premorbid depression. Conclusion: The majority of patients with CFS, FM, and CPF reported a great diversity of attributions open to a preferably personalized cognitive behavioral approach. Special attention should be paid to patients with symptoms existing for more than one year and those who had previous contacts with a self-help group. They particularly show external, stable, and global attributions that may compromise feelings of self-efficacy in dealing with the illness. abstract_id: PUBMED:26998656 Statistically modelling the relationships between Type D personality and social support, health behaviors and symptom severity in chronic illness groups. Objective: The study aimed to develop a predictive model of how Type D personality influences health behaviours, social support and symptom severity and assess its generalisability to a range of chronic illnesses. Design: Participants were classified as either healthy (n = 182) or having a chronic illness (n = 207). Participants completed an online survey measuring Type D and a range of health-related variables. Chronic illness participants were classified as having either a functional somatic syndrome (i.e. chronic fatigue syndrome or fibromyalgia), where the underlying pathological processes were unclear, or illnesses such as type 2 diabetes, osteoarthritis or rheumatoid arthritis, where the causes are well understood. Main Outcome Measures: Outcome measures were health behaviours, social support and both physical and psychological symptoms. Results: The rate of Type D was higher in chronic illness participants (53%) than in healthy controls (39%). Negative affectivity (NA) and social inhibition (SI) both correlated with outcome measures, although NA was generally the stronger predictor. Using NA and SI as independent subscales led to better prediction of health outcomes than using categorical or continuous representations.
Conclusion: Findings suggest that the relationship between Type D and health outcomes may generalise across different chronic illnesses. abstract_id: PUBMED:38129366 Social support in low-income women with Fibromyalgia Syndrome from sub-urban and peri-urban areas of Tenerife (Canary Islands, Spain): a mixed method study. Background: Women with Fibromyalgia Syndrome (FMS) can benefit from adequate social support to fight the consequences of their illness, but the extent to which this is available to those with low incomes who live in depressed areas of Tenerife (Canary Islands, Spain) is currently unknown. The purpose of this study was to explore social support in low-income women with FMS in sub-urban and peri-urban areas of Tenerife. Methodology: A sequential exploratory mixed method study was carried out from January 20, 2023, to June 10, 2023, at the Fibromyalgia and Chronic Fatigue Association of Tenerife (AFITEN) using non-probability convenience sampling. Social support was analyzed quantitatively through MOS-SSS survey and Duke-UNC-11 questionnaire, while qualitative data were obtained through semi-structured interviews to identify social support providers and analyze their satisfaction levels. Results: A total of 49 women, with a mean age of 57.80 years (SD = 13.25), were finally included in this study. MOS-SSS and Duke-UNC-11 both indicated lower social support levels at 68.6 (SD = 16.3) and 38.0 (SD = 9.74), respectively. The qualitative analysis revealed that partners and friends provided the most significant support with the highest satisfaction scores. Conclusions: The socioeconomic status of low-income women with FMS living in sub-urban and peri-urban areas of Tenerife (Canary Islands, Spain) influences their social support, with the affective support and confidentiality being the most affected dimensions. abstract_id: PUBMED:15682879 Death of a lifestyle: the effects of social support and healthcare support on the quality of life of persons with fibromyalgia and/or chronic fatigue syndrome. Purpose: The purpose of this study was to investigate how social support and healthcare support affect the quality of life of persons with fibromyalgia and chronic fatigue syndrome. Method: A constant comparison method was used for the qualitative portion of the research and descriptive correlational methods were used for the quantitative portion. Conclusion: This mixed design research study suggested that social support, unlike healthcare support, is related to Quality of Life (QOL). It was also evident that subjects suffering from CFS and/or FMS do not experience high levels of social support. abstract_id: PUBMED:15169718 On chronic fatigue syndrome. N/A abstract_id: PUBMED:27187093 Effect of Acute Exercise on Fatigue in People with ME/CFS/SEID: A Meta-analysis. Purpose: A prominent symptom of myalgic encephalomyelitis, chronic fatigue syndrome, or systemic exertion intolerance disease (ME/CFS/SEID) is persistent fatigue that is worsened by physical exertion. Here the population effect of a single bout of exercise on fatigue symptoms in people with ME/CFS/SEID was estimated and effect moderators were identified. Methods: Google Scholar was systematically searched for peer-reviewed articles published between February 1991 and May 2015. Studies were included where people diagnosed with ME/CFS/SEID and matched control participants completed a single bout of exercise and fatigue self-reports were obtained before and after exercise.
Fatigue means, standard deviations, and sample sizes were extracted to calculate effect sizes and the 95% confidence interval. Effects were pooled using a random-effects model and corrected for small sample bias to generate mean Δ. Multilevel regression modeling adjusted for nesting of effects within studies. Moderators identified a priori were diagnostic criteria, fibromyalgia comorbidity, exercise factors (intensity, duration, and type), and measurement factors. Results: Seven studies examining 159 people with ME/CFS/SEID met inclusion criteria, and 47 fatigue effects were derived. The mean fatigue effect was Δ = 0.73 (95% confidence interval = 0.24-1.23). Fatigue increases were larger for people with ME/CFS/SEID when fatigue was measured 4 h or more after exercise ended rather than during or immediately after exercise ceased. Conclusions: This preliminary evidence indicates that acute exercise increases fatigue in people with ME/CFS/SEID more than that in control groups, but effects were heterogeneous between studies. Future studies with no-exercise control groups of people with ME/CFS/SEID are needed to obtain a more precise estimate of the effect of exercise on fatigue in this population. abstract_id: PUBMED:34739944 The impact of COVID-19 stress on pain and fatigue in people with and without a central sensitivity syndrome. Objectives: Stress may augment somatic symptoms in central sensitivity syndromes (CSS) such as fibromyalgia, chronic fatigue syndrome, and irritable bowel syndrome. To test this hypothesis, we examined whether the association between COVID-19 stress and somatic symptom severity would be stronger in people with than without CSS and whether psychological flexibility would buffer the impact of this stress on symptom severity. Methods: In a 2-sample, repeated cross-sectional design, we analysed questionnaire data from Dutch people with and without CSS, collected in two independent surveys: before the COVID-19 pandemic (2018; CSS: n = 194, non-CSS: n = 337) and at the peak of the pandemic (2020; CSS: n = 428, non-CSS: n = 1101). Somatic symptom severity, worry and stress due to the pandemic, and psychological flexibility were examined in regression analyses. Two stress operationalisations were analysed: stress levels during the peak of the pandemic, and a comparison of measurements in 2020 and 2018 (assuming higher stress levels in 2020). Results: Higher worry and stress during the pandemic (standardized β = 0.14), the presence of a CSS (β = 0.40), and lower psychological flexibility (β = -0.33) were all (p < .0001) associated with more severe somatic symptoms, but the association of each stress operationalisation with somatic symptoms was not particularly strong in people with CSS (β = -0.026, p = .27; β = -0.037, p = .22), and psychological flexibility (β = -0.025, p = .18; β = 0.076, p = .35) did not buffer this association. Conclusions: Findings do not support the hypotheses that COVID-19 stress augments somatic symptoms, particularly in CSS, or that psychological flexibility buffers this impact. Rather, COVID-19-related stress appears to have an uncertain impact on somatic symptoms. abstract_id: PUBMED:25544022 Finding benefit in stressful uncertain circumstances: relations to social support and stigma among women with unexplained illnesses. Living with a chronic illness can be challenging, but the ability to derive benefits and grow from this experience may enhance well-being.
However, the possibility of obtaining such benefits may be dependent on the levels of stigmatization and lack of social support experienced by an individual as a result of the illness. Chronic fatigue syndrome (CFS) and fibromyalgia are chronic conditions that remain largely unexplained and those with these conditions must often contend with stigma and skepticism from others. Individuals with CFS/fibromyalgia often display stress-related biological alterations and the experience of stressful life events has been associated with illness development. The present study demonstrated that women with CFS/fibromyalgia (n = 40) as well as community participants who were depressed/anxious (n = 37), reported higher stigma levels than healthy women (n = 33). Moreover, women with CFS/fibromyalgia and those with depression/anxiety also reported greater levels of stigma than women with a chronic yet more widely accepted condition (n = 35; rheumatoid arthritis, osteoarthritis and multiple sclerosis). Secrecy related to stigma among those with CFS/fibromyalgia declined with increased social support, but this was not apparent among those with other chronic conditions. In addition, posttraumatic growth was lower among women with CFS/fibromyalgia compared to those with other chronic conditions. Qualitative analysis examining both negative impacts and positive changes stemming from illness experience revealed many similarities between women with CFS/fibromyalgia and those with other chronic conditions, including elevated appreciation for life, personal growth and compassion for others. However, women with CFS/fibromyalgia tended to report less positive change regarding interpersonal relationships compared to women with other chronic conditions. In general, unexplained illnesses were also accompanied by stigmatization which might ultimately contribute to women's lower ability to derive positive growth from their illness experience. Answer: Support groups appear to offer some benefits to people with chronic fatigue syndrome (CFS) and fibromyalgia (FM). Participants of a CFS and FM support organization reported several benefits of membership, including illness legitimization, finding out helpful new information, and feeling understood by others. However, active members of the support group reported greater symptom severity and less illness improvement than inactive members or dropouts, suggesting that while support groups are viewed as helpful on certain issues, they may not necessarily lead to improved illness outcomes (PUBMED:16331774). Additionally, social support is indicated to be related to the quality of life of persons with fibromyalgia and chronic fatigue syndrome. A study suggested that social support, unlike healthcare support, is related to Quality of Life (QOL), but it was also evident that subjects suffering from CFS and/or FMS do not experience high levels of social support (PUBMED:15682879). In the context of chronic illness groups, the relationship between Type D personality and health outcomes, including social support, health behaviors, and symptom severity, may generalize across different chronic illnesses, including CFS and FM (PUBMED:26998656). For low-income women with Fibromyalgia Syndrome in Tenerife, social support levels were found to be lower, with partners and friends providing the most significant support. The socioeconomic status of these women influenced their social support, with the affective support and confidentiality being the most affected dimensions (PUBMED:38129366). 
In summary, support groups do provide certain benefits to individuals with CFS and FM, such as information sharing, mutual understanding, and legitimization of their illness. However, the impact on symptom severity and illness improvement is less clear, and the level of social support experienced by individuals with these conditions can vary based on several factors, including socioeconomic status.
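As a concrete illustration of the random-effects pooling described in the acute-exercise meta-analysis above (PUBMED:27187093), the following is a minimal sketch of a DerSimonian-Laird estimator in Python. The effect sizes and variances are hypothetical inputs for illustration, not the study's actual data or code.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Pool per-study effect sizes with a DerSimonian-Laird random-effects model.

    effects/variances: study effect sizes (e.g., bias-corrected mean
    differences) and their sampling variances. Returns the pooled effect
    and a 95% confidence interval.
    """
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances                              # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)           # Cochran's Q (heterogeneity)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)    # between-study variance
    w_star = 1.0 / (variances + tau2)                # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical inputs: three studies' effect sizes and variances.
print(dersimonian_laird([0.9, 0.6, 0.7], [0.08, 0.05, 0.12]))
```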
Instruction: Interobserver Variability in Injury Severity Scoring After Combat Trauma: Different Perspectives, Different Values? Abstracts: abstract_id: PUBMED:33858510 Interobserver variability of injury severity assessment in polytrauma patients: does the anatomical region play a role? Background: The Abbreviated Injury Scale (AIS) and Injury Severity Score (ISS) are widely used to assess trauma patients. In this study, the interobserver variability of the injury severity assessment for severely injured patients was analyzed based on different injured anatomical regions, and the various demographic backgrounds of the observers. Methods: A standardized questionnaire was presented to surgical experts and participants of clinical polytrauma courses. It contained medical information and initial X-rays/CT-scans of 10 cases of severely injured patients. Participants estimated the severity of each injury based on the AIS. Interobserver variability for the AIS, ISS, and New Injury Severity Score (NISS) was calculated by employing the statistical method of Krippendorff's α coefficient. Results: Overall, 54 participants were included. The major contributing medical specialties were orthopedic trauma surgery (N = 36, 67%) and general surgery (N = 13, 24%). The measured interobserver variability in the assessment of the overall injury severity was high (α_ISS = 0.33 / α_NISS = 0.23). Moreover, there were differences in the interobserver variability of the maximum AIS (MAIS) depending on the anatomical region: α_head/neck = 0.06, α_thorax = 0.45, α_abdomen = 0.27, and α_extremities = 0.55. Conclusions: Interobserver agreement concerning injury severity assessment appears to be low among clinicians. We also noted marked differences in variability according to injury anatomy. The study shows that the assessment of injury severity is also highly variable between experts in the field. This implies the need for appropriate education to improve the accuracy of trauma evaluation in the respective trauma registries. abstract_id: PUBMED:26125170 Interobserver Variability in Injury Severity Scoring After Combat Trauma: Different Perspectives, Different Values? Background: Anatomic measures of injury burden provide key information for studies of prehospital and in-hospital trauma care. The military version of the Abbreviated Injury Scale [AIS(M)] is used to score injuries in deployed military hospitals. Estimates of total trauma burden are derived from this. These scores are used for categorization of patients, assessment of care quality, and research studies. Scoring is normally performed retrospectively from chart review. We compared data recorded in the UK Joint Theatre Trauma Registry (JTTR) and scores calculated independently at the time of surgery by the operating surgeons to assess the concordance between surgeons and trauma nurse coordinators in assigning injury severity scores. Methods: Trauma casualties treated at a deployed Role 3 hospital were assigned AIS(M) scores by surgeons between 24 September 2012 and 16 October 2012. JTTR records from the same period were retrieved. The AIS(M), Injury Severity Score (ISS), and New Injury Severity Score (NISS) were compared between datasets. Results: Among 32 matched casualties, 214 injuries were recorded in the JTTR, whereas surgeons noted 212. Percentage agreement for number of injuries was 19%. Surgeons scored 75 injuries as "serious" or greater compared with 68 in the JTTR.
Percentage agreement for the maximum AIS(M), ISS, and NISS assigned to cases was 66%, 34%, and 28%, respectively, although the distributions of scores were not statistically different (median ISS: surgeons: 20 [interquartile range (IQR), 9-28] versus JTTR: 17.5 [IQR, 9-31.5], p = .7; median NISS: surgeons: 27 [IQR, 12-42] versus JTTR: 25.5 [IQR, 11.5-41], p = .7). Conclusion: There are discrepancies in the recording of AIS(M) between surgeons directly involved in the care of trauma casualties and trauma nurse coordinators working by retrospective chart review. Increased accuracy might be achieved by actively collaborating in this process. abstract_id: PUBMED:9480600 Scoring--criteria for operability For therapeutic recommendations three different kinds of scores are essential: 1. The severity scores for trauma; 2. Severity scores for mangled extremities; 3. Intensive care scores. The severity of polytrauma patients is measurable by the AIS, ISS, RTS, PTS and TRISS which is a combination of RTS, ISS, age, and mechanism of injury. For mangled extremities there are also different scores available: MESI (Mangled Extremity Syndrome Index) and MESS (Mangled Extremity Severity Score). The aim of these scores is to assist in the indication with regard to amputate or to save the extremity. These scoring indices can be used to evaluate the severity of a systemic inflammatory reaction syndrome with respect to multiple organ failure. All scores are dynamic values which are variable with improvement of therapy. abstract_id: PUBMED:28262279 Major influence of interobserver reliability on polytrauma identification with the Injury Severity Score (ISS): Time for a centralised coding in trauma registries? Objective: The Abbreviated Injury Scale (AIS) and the Injury Severity Score (ISS) find increasingly widespread use to assess trauma burden and to perform interhospital benchmarking through trauma registries. Since 2015, public resource allocation in Switzerland shall even be derived from such data. As every trauma centre is responsible for its own coding and data input, this study aims at evaluating interobserver reliability of AIS and ISS coding. Methods: Interobserver reliability of the AIS and ISS is analysed from a cohort of 50 consecutive severely injured patients treated in 2012 at our institution, coded retrospectively by 3 independent and specifically trained observers. Results: Considering a cutoff ISS≥16, only 38/50 patients (76%) were uniformly identified as polytraumatised or not. Increasing the cut off to ≥20, this increased to 41/50 patients (82%). A difference in the AIS of ≥ 1 was present in 261 (16%) of possible codes. Excluding the vast majority of uninjured body regions, uniformly identical AIS severity values were attributed in 67/193 (35%) body regions, or 318/579 (55%) possible observer pairings. Conclusion: Injury severity all too often is neither identified correctly nor consistently when using the AIS. This leads to wrong identification of severely injured patients using the ISS. Improving consistency of coding through centralisation is recommended before scores based on the AIS are to be used for interhospital benchmarking and resource allocation in the treatment of severely injured patients. abstract_id: PUBMED:25284230 Risk stratification in trauma and haemorrhagic shock: scoring systems derived from the TraumaRegister DGU(®). 
Scoring systems commonly attempt to reduce complex clinical situations into one-dimensional values by objectively valuing and combining a variety of clinical aspects. The aim is to allow for a comparison of selected patients or cohorts. To appreciate the true value of scoring systems in patients with multiple injuries it is necessary to understand the different purposes of quantifying the severity of specific injuries and overall trauma load, being: (1) clinical decision making; (2) triage; (3) planning of trauma systems and resources; (4) epidemiological and clinical research; (5) evaluation of outcome and trauma systems, including quality assessment; and (6) estimation of costs and allocation of resources. For the first two, easy-to-estimate scores with immediate availability are necessary, mainly based on initial physiology. More sophisticated scores considering age, gender, injury pattern/severity and more are usually used for research and outcome evaluation, once the diagnostic and therapeutic process has been completed. For score development, large amounts of data are necessary; thus, it is a logical consequence that large registries such as the TraumaRegister DGU(®) of the German Trauma Society (TR-DGU) are used to derive and validate clinical scoring systems. A variety of scoring systems have been derived from this registry, the majority of them with focus on hospital mortality. The most important among these systems is probably the RISC score, which is currently used for quality assessment and outcome adjustment in the annual audit reports. This report summarizes the various scoring systems derived from the TraumaRegister DGU(®) over the recent years. abstract_id: PUBMED:35974196 Inter-rater reliability of the Abbreviated Injury Scale scores in patients with severe head injury shows good inter-rater agreement but variability between countries. An inter-country comparison study. Introduction: Substantial difference in mortality following severe traumatic brain injury (TBI) across international trauma centers has previously been demonstrated. This could be partly attributed to variability in the severity coding of the injuries. This study evaluated the inter-rater and intra-rater reliability of Abbreviated Injury Scale (AIS) scores of patients with severe TBI across three international level I trauma centers. Methods: A total of 150 patients (50 per center) were randomly selected from each respective trauma registry: University Medical Center Utrecht (UMCU), the Netherlands; John Hunter Hospital (JHH), Australia; and Harborview Medical Center (HMC), the United States. Reliability between coders and trauma centers was measured with the intraclass correlation coefficient (ICC). Results: The reliability between the coders and the original trauma registry scores was 0.50, 0.50, and 0.41 in UMCU, JHH, and HMC, respectively. The AIS coders at UMCU scored the most AIS codes of ≥ 4. Reliability within the trauma centers was substantial in UMCU (ICC = 0.62) and HMC (ICC = 0.78) and almost perfect in JHH (ICC = 0.85). Reliability between trauma centers was 0.70 between UMCU and JHH, 0.70 between JHH and HMC, and 0.59 between UMCU and HMC. Conclusion: The results of this study demonstrated a substantial and almost perfect reliability of the AIS coders within the same trauma center, but variability across trauma centers. This indicates a need to improve inter-rater reliability in AIS coders and quality assessments of trauma registry data, specifically for patients with head injuries.
Future research should study the effect of differences in AIS scoring on outcome predictions. abstract_id: PUBMED:29936974 Quantification of rib fractures by different scoring systems. Background: The three known systems for evaluation of patients with rib fractures are rib fracture score (RFS), chest trauma score (CTS), and RibScore (RS). The aim was to establish critical values for these systems in different patient populations. Methods: This retrospective cohort study included 1089 patients with rib fractures from a level-1 trauma center, divided into two groups: the first group included 620 nongeriatric patients, and the second group included 469 geriatric patients (≥65 y.o.). Additional variables included mortality, injury severity score (ISS), hospital and intensive care unit lengths of stay (HLOS, ICULOS), duration of mechanical ventilation, rate of pneumonia (PN), tracheostomy, and epidural analgesia. Results: RFS critical values were 10 for nongeriatric and eight for geriatric patients, CTS were four and six, respectively, and RS were one for both. Nongeriatric patients with RFS ≥ 10 versus RFS < 10 had higher mortality, ISS, HLOS, ICULOS, and tracheostomy (P < 0.03). Geriatric patients with RFS ≥ 8 versus RFS < 8 had higher mortality, ISS, HLOS, ICULOS, and PN (P < 0.03). Nongeriatric patients with CTS ≥ 4 versus CTS < 4 had higher mortality, ISS, HLOS, ICULOS, duration of mechanical ventilation, and PN (P < 0.02). Geriatric patients with CTS ≥ 6 versus CTS < 6 had greater values for all variables (P < 0.01). Both groups with RS ≥ 1 versus RS < 1 had greater values for all variables (P < 0.05). In the geriatric group, prediction of PN was good by CTS (c = 0.8) and fair by RFS and RS (c = 0.7). Conclusions: Physicians should choose a score to match the specific population and collected variables. RFS is simple but sensitive in the elderly population. CTS is recommended for geriatric patients as it predicts PN the best. RS is recommended for assessment of severely injured patients with high ISS. abstract_id: PUBMED:27667119 Validation of international trauma scoring systems in urban trauma centres in India. Introduction: In the Lower-Middle Income Country setting, we validate trauma severity scoring systems, namely Injury Severity Score (ISS), New Injury Severity Scale (NISS) score, the Kampala Trauma Score (KTS), Revised Trauma Score (RTS) score and the TRauma Injury Severity Score (TRISS) using Indian trauma patients. Patients And Methods: From 1 September 2013 to 28 February 2015, we conducted a prospective multi-centre observational cohort study of trauma patients in four Indian university hospitals, in three megacities, Kolkata, Mumbai and Delhi. All adult patients presenting to the casualty department with a history of injury and who were admitted to inpatient care were included. The primary outcome was in-hospital mortality within 30 days of admission. The sensitivity and specificity of each score to predict inpatient mortality within 30 days was assessed by the areas under the receiver operating characteristic curve (AUC). Model fit for the performance of individual scoring systems was accomplished by using the Akaike Information criterion (AIC). Results: In a registry of 8791 adult trauma patients, we had a cohort of 7197 patients eligible for the study. 4091 (56.8%) patients had all five scores available and constituted the sample for a complete case analysis. Over a 30-day period, the scores (AUC) were TRISS (0.82), RTS (0.81), KTS (0.74), NISS (0.65) and ISS (0.62).
RTS was the most parsimonious model with the lowest AIC score. Considering overall mortality, both physiologic scores (RTS, KTS) had better discrimination and goodness-of-fit than ISS or NISS. The ability of all injury scores to predict early mortality (24 h) was better than for late mortality (30-day). Conclusion: On-admission physiological scores outperformed the more expensive anatomy-based ISS and NISS. The retrospective nature of ISS and TRISS score calculations and incomplete imaging in LMICs precludes their use in the casualty departments of LMICs. They will remain useful for outcome comparison across trauma centres. Physiological scores like the RTS and KTS will be the practical scores to use in casualty departments in the urban Indian setting, to predict early trauma mortality and improve triage. abstract_id: PUBMED:37241183 Prehospital Trauma Scoring Systems for Evaluation of Trauma Severity and Prediction of Outcomes. Introduction: Trauma scoring systems in prehospital settings are supposed to ensure the most appropriate in-hospital treatment of the injured. Aim Of The Study: To determine the sensitivity and specificity of the CRAMS scale (circulation, respiration, abdomen, motor and speech), RTS score (revised trauma score), MGAP (mechanism, Glasgow Coma Scale, age, arterial pressure) and GAP (Glasgow Coma Scale, age, arterial pressure) scoring systems in prehospital settings in order to evaluate trauma severity and to predict the outcome. Materials And Methods: A prospective, observational study was conducted. For every trauma patient, a questionnaire was initially filled in by a prehospital doctor and these data were subsequently collected by the hospital. Results: The study included 307 trauma patients with an average age of 51.7 ± 20.9 years. Based on the ISS (injury severity score), severe trauma was diagnosed in 50 (16.3%) patients. MGAP had the best sensitivity/specificity ratio when the obtained values indicated severe trauma. The sensitivity and specificity were 93.4 and 62.0%, respectively, for an MGAP value of 22. MGAP and GAP were strongly correlated with each other and were statistically significant in predicting the outcome of treatment (OR 2.23; 95% CI 1.06-4.70; p = 0.035). With a rise of one in the MGAP score value, the probability of survival increases 2.2 times. Conclusion: MGAP and GAP, in prehospital settings, had higher sensitivity and specificity when identifying patients with a severe trauma and predicting an unfavorable outcome than other scoring systems. abstract_id: PUBMED:34829503 Applicability of Anatomic and Physiologic Scoring Systems for the Prediction of Outcome in Polytraumatized Patients with Blunt Aortic Injuries. Background: Most patients with blunt aortic injuries, who arrive alive in a clinic, suffer from traumatic pseudoaneurysms. Due to modern treatments, the perioperative mortality has significantly decreased. Therefore, it is unclear how exact the prediction of commonly used scoring systems of the outcome is. Methods: We analyzed data on 65 polytraumatized patients with blunt aortic injuries. The following scores were calculated: injury severity score (ISS), new injury severity score (NISS), trauma and injury severity score (TRISS), revised trauma score coded (RTSc) and acute physiology and chronic health evaluation II (APACHE II). Subsequently, their predictive value was evaluated using Spearman's and Kendall's correlation analysis, logistic regression and receiver operating characteristics (ROC) curves.
Results: A proportion of 83% of the patients suffered from a thoracic aortic rupture or rupture with concomitant aortic wall dissection (54/65). The overall mortality was 24.6% (16/65). The sensitivity and specificity were calculated as the area under the receiver operating curves (AUC): NISS 0.812, ISS 0.791, APACHE II 0.884, RTSc 0.679 and TRISS 0.761. Logistic regression showed a slightly higher specificity to anatomical scoring systems (ISS 0.959, NISS 0.980, TRISS 0.957, APACHE II 0.938). The sensitivity was highest in the APACHE II with 0.545. Sensitivity and specificity for the RTSc were not significant. Conclusion: The predictive abilities of all scoring systems were very limited. All scoring systems, except the RTSc, had a high specificity but a low sensitivity. In our study population, the RTSc was not applicable. The APACHE II was the most sensitive score for mortality. Anatomical scoring systems showed a positive correlation with the amount of transfused blood products. Answer: The study described in abstract PUBMED:26125170 investigated the interobserver variability in injury severity scoring after combat trauma. The study compared data recorded in the UK Joint Theatre Trauma Registry (JTTR) with scores calculated independently at the time of surgery by the operating surgeons to assess the concordance between surgeons and trauma nurse coordinators in assigning injury severity scores. The results showed discrepancies in the recording of the military version of the Abbreviated Injury Scale (AIS(M)) between surgeons directly involved in the care of trauma casualties and trauma nurse coordinators working by retrospective chart review. Specifically, among 32 matched casualties, surgeons scored 75 injuries as "serious" or greater compared with 68 in the JTTR, and the percentage agreement for the maximum AIS(M), Injury Severity Score (ISS), and New Injury Severity Score (NISS) assigned to cases was 66%, 34%, and 28%, respectively. The study concluded that increased accuracy in injury severity scoring might be achieved by actively collaborating in the process of recording AIS(M) scores.
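The agreement statistics cited above (Krippendorff's α in PUBMED:33858510, percentage agreement in PUBMED:26125170, and the ICC in PUBMED:35974196) all quantify how consistently raters assign the same severity codes. The sketch below shows a simplified two-rater version, raw percentage agreement plus chance-corrected Cohen's kappa, on hypothetical AIS codes; the cited studies use more general multi-rater statistics, so treat this only as an illustration of the idea.

```python
import numpy as np

def agreement(r1, r2):
    """Raw percentage agreement and chance-corrected Cohen's kappa for two raters.

    A simplified two-rater stand-in for Krippendorff's alpha or the ICC;
    r1/r2 are equal-length sequences of categorical codes (here,
    hypothetical AIS severity values).
    """
    r1, r2 = np.asarray(r1), np.asarray(r2)
    p_obs = np.mean(r1 == r2)                      # observed agreement
    cats = np.union1d(r1, r2)
    p_exp = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in cats)
    return p_obs, (p_obs - p_exp) / (1.0 - p_exp)  # (agreement, kappa)

# Hypothetical AIS codes for ten injuries scored by a surgeon and a registry coder.
surgeon = [3, 4, 2, 5, 3, 3, 4, 2, 5, 4]
registry = [3, 3, 2, 5, 4, 3, 4, 2, 4, 4]
print(agreement(surgeon, registry))   # -> (0.7, kappa)
```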
Instruction: Is upper gastrointestinal endoscopy indicated in asymptomatic patients with a positive fecal occult blood test and negative colonoscopy? Abstracts: abstract_id: PUBMED:10378617 Is upper gastrointestinal endoscopy indicated in asymptomatic patients with a positive fecal occult blood test and negative colonoscopy? Purpose: There are no recommendations as to whether endoscopic evaluation of the upper gastrointestinal tract is indicated in asymptomatic patients who have a positive fecal occult blood test and a negative colonoscopy. Subjects And Methods: All asymptomatic patients with a positive fecal occult blood test who were referred for diagnostic endoscopy were identified. Patient charts, endoscopy records, and pathology reports were reviewed. Results: During the 5-year study period, 498 asymptomatic patients with a positive fecal occult blood test and negative colonoscopy were evaluated. An upper gastrointestinal source of occult bleeding was detected in 67 patients (13%), with peptic ulcer disease being the most common lesion identified (8%). Four patients were diagnosed with gastric cancer and 1 had esophageal carcinoma. In addition, 74 patients (15%) had lesions that were not considered a source of occult bleeding; these findings prompted a change in management in 56 patients (11%). Anemia was the only variable significantly associated with having a clinically important lesion identified (multivariate odds ratio = 5.0; 95% confidence interval 2.9 to 8.5; P < 0.001). Conclusions: Upper gastrointestinal endoscopy yields important findings in asymptomatic patients with a positive fecal occult blood test and negative colonoscopy. Our data suggest that endoscopic evaluation of the upper gastrointestinal tract should be considered, especially in patients with anemia. abstract_id: PUBMED:8249968 Is there a role for upper gastrointestinal endoscopy in the evaluation of patients with occult blood-positive stool and negative colonoscopy? Objective: To determine the role of upper gastrointestinal endoscopy in the clinical investigation of patients with occult blood-positive stool and negative colonoscopy. Methods: This is a retrospective study of 211 consecutive patients who were investigated by upper gastrointestinal endoscopy for occult blood-positive stool and negative colonoscopy. Patients were categorized into four groups: asymptomatic (n = 117), symptomatic (n = 37), severe anemia (hemoglobin < 10 g/dl) (n = 33), and incomplete (CBC not obtained or insufficient documentation regarding the presence or absence of symptoms) (n = 24). Results: Eighty-eight of the 211 patients (42%) had abnormal upper gastrointestinal endoscopic findings: 43% of the "asymptomatic" group, 35% of the "symptomatic" group, 45% of the "severe anemia" group, and 42% of the "incomplete" group. The number of patients with abnormal upper gastrointestinal findings increased with age (p = 0.0002). Furthermore, there was a significant difference in upper gastrointestinal lesions between the patients who were 60 yr of age or older, and the patients less than 60 yr of age: 51% compared to 22% (p = 0.00003). All upper gastrointestinal endoscopic findings were benign, with 53 of the 88 patients (60%) having erosive gastritis. However, only 25 of those 88 patients (28%) had a lesion for which a specific treatment was available.
Conclusions: Upper gastrointestinal endoscopy in patients with occult blood-positive stool and negative colonoscopy has a very low yield for significant lesions, not only from a therapeutic standpoint, but also for detection of malignant lesions. Prospective studies are needed to determine which specific groups of patients deserve further investigation. abstract_id: PUBMED:16522381 Endoscopic findings in the upper gastrointestinal tract of faecal occult blood-positive, colonoscopy-negative patients. Background: Detection of faecal occult blood is recommended for colorectal cancer screening in average risk populations. However, many subjects do not have any cause found in the colon to account for the occult blood loss. Aims: To determine the prevalence of upper gastrointestinal tract disease in faecal occult blood-positive, colonoscopy-negative patients. Patients And Methods: Retrospective audit of 99 patients (56 females; mean age 60 years, range 18-83) who underwent same-day colonoscopy and upper gastrointestinal endoscopy over a 2-year period. Results: Fifty-two of the 99 patients had a normal colonoscopy, 16 had diverticulosis and 2 had hyperplastic polyps; these 70 patients comprised the colonoscopy-negative group. Significant upper gastrointestinal tract disease was noted in 25 (36%) of the colonoscopy-negative group compared with 10 (34%) of the 29 colonoscopy-positive group (p=ns). Most of the upper gastrointestinal tract lesions identified were benign. Within the colonoscopy-negative group, patients with anaemia or upper gastrointestinal tract symptoms had a higher prevalence of positive findings in the upper gastrointestinal tract, but this association was not statistically significant. Conclusions: Endoscopic examination of the upper gastrointestinal tract in faecal occult blood-positive individuals reveals mostly benign disease, with an equal prevalence in colonoscopy-negative and colonoscopy-positive patients. Routine performance of upper gastrointestinal endoscopy in faecal occult blood-positive individuals is not indicated and should be undertaken only for appropriate symptoms. abstract_id: PUBMED:1442675 Yield of upper endoscopy in the evaluation of asymptomatic patients with Hemoccult-positive stool after a negative colonoscopy. The yield of upper endoscopy in asymptomatic patients with positive fecal occult blood test (FOBT) and a negative colonoscopy was evaluated prospectively in 70 consecutive patients. Significant pathology was diagnosed in 19 patients (27%), eight patients with ulcers, five with arteriovenous malformations, three with esophageal or gastric varices, two with multiple erosions, and two with biopsy-proven Barrett's esophagus. Thirteen patients had iron deficiency anemia and demonstrated a 38% prevalence of significant pathology. Fifteen patients on nonsteroidal anti-inflammatory agents had a 30% prevalence of significant pathology. No statistically significant difference was noted between subgroups. In conclusion, asymptomatic patients without a colonic source to explain a positive FOBT often have significant lesions, on upper endoscopy. Iron deficiency anemia did not have an impact on pathology. Because treatment and follow-up plans were altered in many of the cases in which significant pathology was demonstrated, we conclude that upper endoscopy should be seriously considered for all asymptomatic patients with occult gastrointestinal bleeding and a negative colonoscopic examination. 
abstract_id: PUBMED:33714864 Systematic upper endoscopy concomitant with colonoscopy performed within the colorectal cancer screening program: Impact on the patients' management. Background And Aims: The French colorectal cancer screening program is based on a fecal immunochemical test, followed by colonoscopy in case of positivity. The benefit of adding a concomitant upper endoscopy to detect upper digestive lesions (precancerous or others) is still debated. Our aim was to evaluate the frequency of upper digestive lesions detected by upper endoscopy performed concomitantly with colonoscopy following a positive fecal immunochemical test, and their impact on the patients' management (i.e., surveillance, medical treatment, endoscopic or surgical procedure). Methods: Data of all the patients who consulted for a positive test between May 2016 and May 2019 in our center, and for whom concomitant upper endoscopy and colonoscopy were performed, were analyzed retrospectively. Patients with significant history of upper gastrointestinal diseases or with current gastrointestinal symptoms were excluded. Results: One hundred patients were included [median age (min-max): 62 (50-75), men 64%]. Macroscopic and/or microscopic upper digestive lesions were found in 58 of them (58%): Helicobacter pylori infection in 17 patients, gastric precancerous lesions in 9 patients (chronic atrophic gastritis with intestinal metaplasia, n=8, low grade dysplasia, n=1), Barrett's esophagus requiring surveillance in 4 patients, and 1 duodenal adenoma with low-grade dysplasia. In 44 patients (44%), the upper endoscopy findings had an impact on patients' management, with no significant difference between the groups with positive (CRC or advanced adenoma)- or negative (any other lesions or normal) colonoscopy. Conclusion: A systematic upper endoscopy combined with colonoscopy for positive fecal immunochemical test could represent an efficient strategy for upper digestive lesions screening in France. Further studies are necessary to confirm these results and to evaluate cost-effectiveness of this approach. abstract_id: PUBMED:23713803 Is esophagogastroduodenoscopy necessary in patients with positive fecal occult blood tests and negative colonoscopy? Background: False positives of fecal occult blood tests (FOBT) regarding colorectal cancer detection are common. The aim was to investigate the upper gastrointestinal (GI) condition confirmed by endoscopy in positive FOBT patients without advanced colorectal neoplasia, hemorrhoid, or colitis. Methods: The authors reviewed the collective data of 340 positive FOBT patients who underwent both esophagogastroduodenoscopy (EGD) and colonoscopy from January 2005 to October 2011. Demographic data, effects of medications, including antiplatelet agents, nonsteroidal anti-inflammatory drugs, or warfarin on the results of FOBT, presence or absence of GI mucosal lesions confirmed by EGD and colonoscopy, and FOBT titer in 552 positive FOBT patients were analyzed. Results: On colonoscopy, colorectal cancer was detected in 35 patients (10.3%), advanced adenoma in 22 patients (6.4%). "Negative colonoscopy" included no significant lesions and non-advanced adenomas. In 243 patients with "negative colonoscopy", EGD findings included 3 (1.2%) gastric cancers and 39 (16%) peptic ulcer diseases. Gastric cancers were all found in patients who had no experience of EGD within 2 years; however, the incidence of peptic ulcer disease was not different in patients with or without previous EGD within 2 years. 
Two or more antiplatelet agents increased false positive rates of FOBT. Conclusions: Upper GI evaluation is mandatory in patients with positive FOBTs and negative colonoscopy, especially in patients without experience of EGD within 2 years. abstract_id: PUBMED:14571744 Clinicopathologic significance of immunohistochemical fecal occult blood test in subjects receiving bidirectional endoscopy. Background/aims: Fecal occult blood test has been utilized to screen for lower gastrointestinal pathologies, such as colorectal cancer and polyps that bleed. Recent studies have revealed a relatively high frequency of upper gastrointestinal abnormalities in subjects with positive fecal occult blood by guaiac-based method. Although immunohistochemical tests of fecal occult blood were assumed to have greater diagnostic validity, the distribution of gastrointestinal pathology using such examinations is not well established. This study aims to investigate the efficacy of immunohistochemical analysis of fecal occult blood in detecting upper and lower gastrointestinal lesions in asymptomatic individuals. Methodology: Subjects who underwent regular health checkups were enrolled if they received both esophagogastroscopic and colonoscopic examinations. Each subject was tested by an immunohistochemical fecal occult blood test. The fecal occult blood results were evaluated and correlated with lesions identified in endoscopic examinations. Results: In total 655 males and 722 females with age 46.2 ± 12.1 years were enrolled; 287 cases (20.7%) had polypoid lesions of colon, including 6 colon cancers, 37 with polyps ≥ 1 cm, 104 with polyp 5-9 mm, and 140 with polyp < 5 mm. FOB was positive in 31 cases, of which 15 (15/31, 48.4%) were polypoid lesions of colon, 1 was a colonic ulcer, 9 (29.0%) were active gastroduodenal ulcers, but 6 (19.4%) had no significant lesions. The positive and negative predictive value for colon polyps was 48.4% and 80%, respectively. The sensitivity was 50% (3/6) for colon cancer and varied among polyps with different sizes: 16.2% (6/37) for polyps ≥ 1 cm; 5.8% (6/104) for polyps 5-9 mm and 0% (0/140) for polyps < 5 mm. Conclusions: A substantial portion of subjects (29%) with positive fecal occult blood reaction of immunohistochemical analysis but negative colonoscopy still needs esophagogastroscopic examination to disclose upper gastrointestinal lesions. Immunohistochemical determination of fecal occult blood remains imperfect for polypoid lesions of colon in view of its sensitivity and specificity. abstract_id: PUBMED:1728127 A prospective study of bidirectional endoscopy (colonoscopy and upper endoscopy) in the evaluation of patients with occult gastrointestinal bleeding. Unlabelled: One hundred patients with occult gastrointestinal bleeding (OGIB) (i.e., guaiac-positive stools and/or iron deficiency anemia) were prospectively evaluated with bidirectional endoscopy [upper endoscopy (EGD) and colonoscopy] to determine the origin of occult bleeding. Predetermined criteria were used to prospectively define gastrointestinal bleeding sources. Among the 58 males and 42 females, the median age was 65 yr. Thirty-one percent of the group had gastrointestinal symptoms. Sixty-six percent of the study group were inpatients. Bidirectional endoscopy detected the source of OGIB in 53% of patients, with a positive finding on EGD of 36%, and with colonoscopy, of 26%. In only 9% of patients was a source of OGIB detected on both EGD and colonoscopy.
Acid peptic disease accounted for the source of OGIB in 27%, colonic adenomas 14%, angiodysplasia 13%, colorectal carcinoma 6%, and gastric cancer in 1% of patients. The diagnostic yield was significantly higher with EGD than with colonoscopy in patients with anemia and guaiac-positive stools (45% vs. 26%, p less than 0.01). Upper endoscopy directed a change in patient management in 29 patients. In Conclusion: for the patient population described in this study, bidirectional endoscopy determined the source of OGIB in 50%. As expected, colonoscopy resulted in a higher cancer detection rate than EGD--yet EGD detected the origin of OGIB in 68% (36/53) of patients found to have an occult bleeding source, and resulted in a therapeutic initiation or a change in therapy for 30% of all patients. abstract_id: PUBMED:23547576 Evaluation of occult gastrointestinal bleeding. Occult gastrointestinal bleeding is defined as gastrointestinal bleeding that is not visible to the patient or physician, resulting in either a positive fecal occult blood test, or iron deficiency anemia with or without a positive fecal occult blood test. A stepwise evaluation will identify the cause of bleeding in the majority of patients. Esophagogastroduodenoscopy (EGD) and colonoscopy will find the bleeding source in 48 to 71 percent of patients. In patients with recurrent bleeding, repeat EGD and colonoscopy may find missed lesions in 35 percent of those who had negative initial findings. If a cause is not found after EGD and colonoscopy have been performed, capsule endoscopy has a diagnostic yield of 61 to 74 percent. Deep enteroscopy reaches into the mid and distal small bowel to further investigate and treat lesions found during capsule endoscopy or computed tomographic enterography. Evaluation of a patient who has a positive fecal occult blood test without iron deficiency anemia should begin with colonoscopy; asymptomatic patients whose colonoscopic findings are negative do not require further study unless anemia develops. All men and postmenopausal women with iron deficiency anemia, and premenopausal women who have iron deficiency anemia that cannot be explained by heavy menses, should be evaluated for occult gastrointestinal bleeding. Physicians should not attribute a positive fecal occult blood test to low-dose aspirin or anticoagulant medications without further evaluation. abstract_id: PUBMED:20962660 Esophago-gastro-duodenoscopy is not indicated in patients with positive immunochemical test and nonexplanatory colonoscopy. Objectives: Patients with positive fecal occult blood test and unrevealing colonoscopy are often advised to undergo esophago-gastro-duodenoscopy (EGD) to exclude a bleeding source in the upper gastrointestinal tract. In this study, we evaluated EGD findings in patients with positive immunochemical fecal occult blood test (I-FOBT) not explained by colonoscopy. Methods: Out of 1221 consecutive patients having total colonoscopy after preparing I-FOBT (OC-MICRO, with threshold of 75 or 100 ngHb/ml), we included only patients without colorectal cancer or advanced adenomatous polyp on colonoscopy, who also underwent EGD within 4 months of the fecal blood testing. Findings on EGD were classified as those lesions which are likely or unlikely to bleed. Results: EGD was performed in 160 patients after a negative colonoscopy. The procedure was performed 1.6 ± 1.4 months after the I-FOBT. Lesion with a bleeding potential was found in 24 patients (15%). 
In three (12.5%) and two (8.3%) of these patients I-FOBT was positive at the 75 and 100 ngHb/ml threshold, respectively. In 136 patients EGD was normal, and I-FOBT was similarly positive in 16 (11.7%) and 13 patients (9.5%), respectively. The mean fecal hemoglobin was also similar between the groups. Conclusion: Immunological FOBT positivity was not correlated with the finding of lesions, which are likely to bleed on EGD. Thus, EGD is probably not indicated in patients with positive I-FOBT and unrevealing colonoscopy. Answer: The indication for upper gastrointestinal endoscopy in asymptomatic patients with a positive fecal occult blood test (FOBT) and negative colonoscopy is a subject of debate, as evidenced by the findings from several studies. One study found that upper gastrointestinal endoscopy yielded important findings in 13% of asymptomatic patients with a positive FOBT and negative colonoscopy, with peptic ulcer disease being the most common lesion identified. Anemia was significantly associated with having a clinically important lesion identified, suggesting that endoscopic evaluation of the upper gastrointestinal tract should be considered, especially in patients with anemia (PUBMED:10378617). Another study reported that upper gastrointestinal endoscopy in patients with occult blood-positive stool and negative colonoscopy had a low yield for significant lesions, with only 28% of patients having a lesion for which a specific treatment was available. The study concluded that the procedure has a very low yield for significant lesions and that prospective studies are needed to determine which specific groups of patients deserve further investigation (PUBMED:8249968). A retrospective audit found that endoscopic examination of the upper gastrointestinal tract in fecal occult blood-positive individuals reveals mostly benign disease, with an equal prevalence in colonoscopy-negative and colonoscopy-positive patients. The study concluded that routine performance of upper gastrointestinal endoscopy in fecal occult blood-positive individuals is not indicated and should be undertaken only for appropriate symptoms (PUBMED:16522381). Another study concluded that upper endoscopy should be seriously considered for all asymptomatic patients with occult gastrointestinal bleeding and a negative colonoscopic examination, as significant lesions were often found (PUBMED:1442675). A systematic upper endoscopy combined with colonoscopy for positive fecal immunochemical test could represent an efficient strategy for upper digestive lesions screening, as findings had an impact on patients' management in 44% of cases (PUBMED:33714864). In contrast, one study suggested that upper GI evaluation is mandatory in patients with positive FOBTs and negative colonoscopy, especially in patients without experience of EGD within 2 years (PUBMED:23713803). In summary, while some studies suggest that upper gastrointestinal endoscopy can yield important findings in certain asymptomatic patients with a positive FOBT and negative colonoscopy, especially those with anemia or without recent EGD, others argue that the procedure has a low yield for significant lesions and should not be routinely performed without appropriate symptoms. The decision to perform upper gastrointestinal endoscopy in this patient population should be individualized based on factors such as the presence of anemia, age, and previous endoscopic history.
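Several of the endoscopy abstracts above report diagnostic yield as a raw proportion (e.g., 67 of 498 patients in PUBMED:10378617) and predictive values derived from a 2x2 table (PUBMED:14571744). The sketch below shows how such figures are computed, adding a Wilson score interval for the yield; the 2x2 counts are only approximately reconstructed from the reported percentages and are illustrative, not taken verbatim from the study.

```python
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for a proportion such as diagnostic yield."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return p, (centre - half, centre + half)

def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and predictive values from a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Upper-endoscopy yield reported in PUBMED:10378617: 67 lesions in 498 patients.
print(wilson_ci(67, 498))          # ~0.135 with a CI of roughly (0.11, 0.17)

# Approximate 2x2 reconstruction from PUBMED:14571744 (immunochemical FOBT vs.
# colon polypoid lesions; counts are illustrative): PPV ~48%, NPV ~80% as reported.
print(diagnostic_metrics(tp=15, fp=16, fn=272, tn=1074))
```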
Instruction: Does the duration of surgery affect outcomes in patients undergoing coronary artery bypass grafting? Abstracts: abstract_id: PUBMED:18778805 Does the duration of surgery affect outcomes in patients undergoing coronary artery bypass grafting? Background: Numerous predictors of coronary artery bypass grafting (CABG) outcomes have been identified. We aimed to determine whether the duration of surgery independently predicts outcome in patients undergoing CABG. Methods: We retrospectively reviewed data from 337 patients (mean age 62 ± 7 years) who underwent CABG consecutively at our institution between January 2005 and December 2006. Results: Duration of surgery correlated positively with length of both surgical intensive care unit (SICU) stay (r = .147, P = .004) and ventilator support (r = .097, P = .038) in univariate analysis, but only with length of SICU stay (P = .01) in a multivariate logistic regression after confounding factors were controlled for in the model. The regression coefficient was .006; every additional 30 minutes of surgery time was associated with 4.32 more hours of SICU stay. Duration of surgery was not associated with survival (P > .05). Conclusions: Although duration of surgery did not affect short-term survival after CABG, surgical duration independently predicted length of SICU stays. Efforts to reduce the length of operations may promote more efficient use of hospital resources. abstract_id: PUBMED:31001464 Correlation between Gensini Score and Duration of Diabetes in Patients Undergoing Coronary Angiography. Introduction The relationship between the duration of diabetes mellitus and coronary artery disease (CAD) is well established. Moreover, the Gensini score system is a reliable assessment tool for the severity of coronary artery disease (CAD). After an extensive literature search, we found that there is a knowledge gap about the relationship between the Gensini score and the duration of diabetes in our population. Therefore, the aim of this study was to find the relationship between the Gensini score and the duration of diabetes in patients undergoing coronary angiography. Methods A cross-sectional study was conducted among 321 consecutively selected diabetic patients. Clinically diagnosed cases of diabetes on proper anti-diabetic treatment were included in this study. Patients with known severe CAD or history of coronary artery bypass grafting (CABG) surgery or primary coronary intervention (PCI) were excluded from the study. Coronary angiography was performed on all the patients and their Gensini score was calculated using the modified scoring schema. Duration of diabetes and other baseline risk factors were recorded for all patients. The relationship between the Gensini score and the duration of diabetes was assessed by calculating Pearson's correlation coefficient. Results A total of 321 diabetic patients were included in this study, out of which 67.9% (218) were men and mean ± standard deviation age was 56.13 ± 7.67 years, ranging from 40 to 70 years, with a majority of the patients, 63.9% (205), being under 60 years of age. Along with diabetes, the most commonly observed risk factor was hypertension, which was observed in 77.6% (249) of the patients. Smoking and obesity were also observed in 18.7% (60) and 17.4% (56) of the patients, respectively. A significant positive correlation, 0.55 (p < 0.001), was observed between the duration of diabetes and the Gensini scores.
The correlation was found to be stronger in older patients (more than 60 years of age) with a correlation coefficient of 0.52 vs. 0.38, and in male patients with correlation coefficients of 0.66 vs. 0.34. Conclusion A significant positive correlation between the Gensini score and the duration of diabetes mellitus was observed. This correlation is relatively stronger among male and older patients (more than 60 years of age). abstract_id: PUBMED:38246822 The Effects of Daytime Variation on Short-term Outcomes of Patients Undergoing Off-Pump Coronary Artery Bypass Grafting. Objective: To evaluate the effects of time of surgery on the short-term outcomes of patients undergoing off-pump coronary artery bypass grafting (OPCABG). Design: A retrospective cohort study. Setting: A single large-volume cardiovascular center. Patients: Patients undergoing elective OPCABG between September 2019 and July 2022. Interventions: Patients were divided into the following 2 groups according to the start time of surgery: morning (AM group, before 11 AM) and afternoon (PM group, after 11 AM). Propensity-score matching (PSM) with a 1:1 matching ratio was used to create comparable cohorts. Measurements And Main Results: The primary endpoint was the composite incidence of mortality and morbidities during hospitalization. Secondary endpoints included postoperative bleeding and transfusion, mechanical ventilation duration (MVD), and lengths of stay (LOS) in the intensive care unit (ICU) and hospital. From a consecutive series of 1,039 patients, PSM yielded 317 well-matched pairs. There was no difference in the composite incidence of in-hospital mortality and morbidities between the AM and PM groups (16.4% v 17.4%, p = 0.832). However, patients in the PM group were associated with less postoperative blood loss over the first 24 hours (470 v 540 mL, p = 0.002), decreased MVD (14 v 16 hours, p < 0.001), and shorter LOS in ICU (46 v 68 hours, p = 0.002) compared to patients in the AM group. Conclusions: The current study suggested a lack of relevance regarding the time of surgery with in-hospital mortality and morbidities in patients undergoing OPCABG. abstract_id: PUBMED:34462695 Gender-Based Disparities in Rural Versus Urban Patients Undergoing Cardiac Procedures. Introduction Rural populations have higher rates of diabetes and hypertension (HTN) with disparities in outcomes among patients presenting to the emergency room with heart attack and stroke. However, it is unclear whether there are any sex differences among patients presenting for cardiac procedures from rural versus urban areas. Our study aimed to investigate gender-based differences in baseline characteristics and procedural outcomes among rural and urban residents presenting for cardiac catheterization and percutaneous interventional procedures. Methods We assessed baseline conditions and outcomes in 1775 patients who underwent cardiac catheterization and/or percutaneous coronary intervention at the University of Tennessee Medical Center between July 2018 and October 2019 from rural as well as urban areas. Baseline conditions assessed were diabetes, HTN, stroke, peripheral vascular disease, heart failure, and prior bypass surgery. Outcomes assessed were vascular/bleeding complications, duration of the procedure, and mortality. Results There were significant gender-based inter-group differences in outcomes between rural versus urban residents.
In general, both rural and urban males had significantly longer procedure times and higher mortality than rural or urban females (P=0.01). Among females, rural women had longer procedure times than urban women. Bleeding complications were greater among rural residents than urban residents (p≤0.001), with rural females having the highest bleeding complication rate. Mortality was also higher among rural females compared to their urban counterparts (p=0.01). In addition, the incidence of stroke was higher among rural and urban females compared to males, whereas peripheral vascular disease was more common among males, and a history of coronary artery bypass graft (CABG) was more commonly seen among rural males than females. Conclusions We found significant gender-based differences between rural versus urban patients. While rural females had a higher incidence of stroke, peripheral vascular disease and a history of CABG were more commonly seen among rural males. Overall, rural males had higher mortality than females (P=0.01). Among women, rural females had longer procedure times, higher bleeding complications, and greater mortality than urban females. Being aware of such gender-based differences may help physicians take steps to improve outcomes. Information derived from our study may also be useful for policymakers in directing healthcare funding to lower gaps in the care of patients such as those with peripheral vascular disease, ultimately leading to better health outcomes. abstract_id: PUBMED:37719499 Comparison of Postoperative Outcomes Between Intra-aortic Balloon Pump and Levosimendan in Patients Undergoing Coronary Artery Bypass Graft: A Systematic Review and Meta-Analysis. This study was conducted to compare the postoperative outcomes between intra-aortic balloon pump (IABP) and levosimendan in patients undergoing coronary artery bypass graft (CABG) surgery. This meta-analysis was conducted following the recommendations of Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). For this meta-analysis, a literature search was performed on PubMed, Cochrane Central Register of Controlled Trials, and EMBASE from inception to July 15, 2023. Keywords used to search for relevant articles included "intra-aortic balloon," "levosimendan," and "cardiac surgery" along with their key terms and Medical Subject Headings (MeSH) terms. Outcomes assessed in this study included postoperative outcomes like all-cause mortality, postoperative arrhythmias, need for inotropic support, length of intensive care unit (ICU) stay in days, and duration of mechanical ventilation in hours. Other outcomes included two-hour, six-hour, and 24-hour postoperative central venous pressure (CVP), mean arterial pressure (MAP), and heart rate (HR). A total of eight studies were included in the pooled analysis. The pooled results found that the length of ICU stay and the duration of mechanical ventilation were significantly higher in patients receiving IABP.
Additionally, the findings of this meta-analysis showed a higher need for inotropic support in patients receiving IABP compared to patients receiving levosimendan, but the difference was not statistically significant. However, no significant differences were found between the two groups in terms of mortality and arrhythmias. In conclusion, patients treated with levosimendan exhibited significant advantages, as they experienced shorter ICU stays and a reduced duration of mechanical ventilation compared to the IABP group, together with a (statistically non-significant) lower requirement for inotropic support. abstract_id: PUBMED:37611846 Enhanced recovery after surgery program for patients undergoing isolated elective coronary artery bypass surgery improves postoperative outcomes. Objective: To evaluate the effect of a perioperative systematic standardized enhanced recovery after surgery (ERAS) program for patients undergoing isolated elective coronary artery bypass grafting (CABG) in terms of mortality, hospital morbidities, and length of stay. Methods: From January 2015 to September 2020, 1101 patients underwent isolated elective CABG. Our standardized systematic ERAS program was implemented in November 2018. Propensity score matching resulted in well-matched pairs of 362 patients receiving standard perioperative care (control group) and 362 patients on the ERAS program (ERAS group). There were no significant intergroup differences in preoperative and operative data except for the normothermia rate, which was significantly greater in the ERAS group (P < .001). The primary outcome was 3-year mortality. The secondary outcomes were hospital morbidities and length of stay. Results: In-hospital and 3-year mortality did not differ between the 2 groups. The ERAS program was associated with a significant relative risk decrease in mechanical ventilation duration (-53.1%, P = .003), length of intensive care unit stay (-28.0%, P = .015), length of hospital stay (-10.5%, P = .046), bronchopneumonia (-51.5%, P < .001), acute respiratory distress syndrome (-50.8%, P = .050), postoperative delirium (-65.4%, P = .011), moderate-to-severe acute kidney injury (-72.0%, P = .009), 24-hour chest tube output (-26.4%, P < .001), and overall red blood cell transfusion rate (-32.4%, P = .005) compared with the control group. Conclusions: A systematic standardized ERAS program for low-risk patients undergoing isolated elective CABG was associated with a significant improvement in postoperative outcomes, reduction in red blood cell transfusion, shorter lengths of intensive care unit and hospital stays, and comparable long-term mortality. abstract_id: PUBMED:37572411 The Association of Persistent Poverty and Outcomes Among Patients Undergoing Cardiac Surgery. Introduction: We sought to evaluate the association of county-level poverty duration and cardiac surgical outcomes. Methods: Patients who underwent coronary artery bypass graft, surgical aortic valve replacement, and mitral valve repair and replacement between 2016 and 2020 were identified using the Medicare Standard Analytical Files Database. County-level poverty data were acquired from the American Community Survey and US Department of Agriculture (1980-2015). High poverty was defined as ≥19.5% of residents in poverty. Patients were stratified into never-high poverty (NHP), intermittent low poverty, intermittent high poverty, and persistent poverty (PP).
A mixed-effect hierarchical generalized linear model and Cox regression models that adjusted for patient-level covariates were used to evaluate outcomes. Results: Among 237,230 patients, 190,659 lived in NHP counties, while 10,273 resided in PP counties. Compared with NHP patients, PP patients were more likely to present at a younger median age (NHP: 75 y versus PP: 74 y), be non-Hispanic Black (NHP: 5388, 2.9% versus PP: 1030, 10.1%), and live in the south (NHP: 66,012, 34.6% versus PP: 87,815, 76.1%) (all P < 0.001). PP patients also had more nonelective surgical operations (NHP: 58,490, 30.8% versus PP: 3645, 35.6%, P < 0.001). Notably, PP patients had increased odds of 30-d mortality (odds ratio 1.13, 95% confidence interval [CI] 1.02-1.26), 90-d mortality (odds ratio 1.14, 95% CI 1.05-1.24), and risk of long-term mortality (hazard ratio 1.13, 95% CI 1.09-1.19) compared with patients in NHP counties (all P < 0.05). Conclusions: County-level poverty was associated with a greater risk of short- and long-term mortality among cardiac surgical patients. abstract_id: PUBMED:27888052 Nonsteroidal Anti-Inflammatory Drugs and Clinical Outcomes in Patients Undergoing Coronary Artery Bypass Surgery. Background: Nonsteroidal anti-inflammatory drugs (NSAIDs) are commonly used in perioperative pain management of patients undergoing coronary artery bypass graft surgery. However, the association of periprocedural use of NSAIDs and clinical outcomes after coronary artery bypass graft is understudied. Methods: We conducted a retrospective analysis using pooled data from 2 multicenter randomized controlled trials (PREVENT IV [n = 3014] and MEND-CABG II [n = 3023]). Rates of death, death or myocardial infarction, and death, myocardial infarction, or stroke in the 30 days following coronary artery bypass graft surgery were compared in patients using or not using perioperative NSAIDs. Inverse probability of treatment weighting and Cox proportional hazards regression models were used to adjust for confounding. Results: A total of 5887 patients were studied. Median age was 65 years, 78% were male, and 91% were White. NSAIDs were used in 2368 (40.2%) patients. The majority of patients (1822 [30.9%]) received NSAIDs after coronary artery bypass graft surgery; 289 (4.9%) used them prior to and after the surgery; and 257 (4.4%) received NSAIDs prior to the surgery only. Adjusted 30-day outcomes were similar in patients receiving and not receiving NSAIDs (death: hazard ratio [HR] 1.18; 95% confidence interval [CI], 0.48-2.92; death or myocardial infarction: HR 0.87; 95% CI, 0.42-1.79; death, myocardial infarction, or stroke: HR 0.87; 95% CI, 0.46-1.65). Conclusion: In this pooled data analysis, perioperative NSAID use was common among patients undergoing coronary artery bypass graft surgery and was not associated with an increased short-term risk for major adverse clinical outcomes. abstract_id: PUBMED:33061001 Early outcomes in patients undergoing off-pump coronary artery bypass grafting. Background: In spite of a general decline worldwide, off-pump coronary artery bypass grafting (CABG) surgery is performed in more than 60% of patients undergoing CABG in India, mainly because of shorter operative time and reduced procedure cost. However, a paucity of data exists in the literature about early outcomes following off-pump CABG from India. Methodology: We conducted a prospective observational study of 800 consecutive cases that underwent off-pump CABG (OPCAB) from August 2015 to October 2017.
The primary end point of the study was 30-day/in-hospital mortality from any cause. Results: On multivariate analysis, emergency surgery (OR 9.72; CI 1.96-48.21, p value 0.005), severe left ventricular dysfunction (OR 2.28; CI 1.25-4.76, p value 0.026), postoperative atrial fibrillation (OR 9.95; CI 3.12-32.01, p value 0.05), and dialysis-dependent renal failure (OR 29.7, CI 10.02-87.99, p value 0.006) were the factors associated with mortality. The observed mortality was 1.6%, and the expected mortality by EuroSCORE II was 2.6%. The median EuroSCORE II of expired patients was 3.03 and of the entire cohort was 1.54 (p value 0.001). Stroke rate was 0.9%. Deep sternal wound infection occurred in 0.9%, and 3.8% of patients were readmitted to the hospital after discharge. Conclusion: Early outcome of off-pump CABG was excellent in this study. Increased incidence of deep sternal wound infection remains a concern. A multicenter study with a larger sample size is required for a dependable evaluation of the efficacy of off-pump CABG in the Indian population. abstract_id: PUBMED:38342892 The effect of body mass index on short-term outcomes in patients undergoing off-pump coronary artery bypass grafting surgery: a retrospective study from a single cardiovascular center. Objective: This study is designed to investigate the impact of body mass index (BMI) on the short-term outcomes of patients undergoing off-pump coronary artery bypass graft (OPCAB) surgery. Methods: Data were obtained from 1006 Chinese patients who underwent isolated, primary OPCAB at a high-traffic cardiovascular center during 2020. Subjects were categorized, by BMI, into a low & normal weight (LN) group (BMI < 24 kg/m2), an overweight (OVW) group (24 ≤ BMI < 28 kg/m2), and an obese (OBS) group (BMI ≥ 28 kg/m2). Information pertaining to patients' short-term outcomes (including incidence of mortality and morbidities; duration of postoperative mechanical ventilation; length of stay in the ICU and hospital; postoperative bleeding; etc.) was extracted, and the data from each group were compared. Results: The incidences of in-hospital mortality and morbidities were similar for all three groups. The volume of fluid infusion, postoperative bleeding within 24 h and total bleeding in the LN group were higher than those in the OBS group (P < 0.001). The hemoglobin level was lower in the LN group than in the OBS group (P < 0.001). Duration of mechanical ventilation and length of stay in the ICU in the LN group were longer than those in the OBS group (P < 0.001). Conclusions: Our results demonstrate that BMI is not significantly related to short-term outcomes in OPCAB patients. However, we suggest that OPCAB patients with low-normal BMI are more susceptible to post-operative blood loss. Answer: Yes, the duration of surgery does affect outcomes in patients undergoing coronary artery bypass grafting (CABG). A study found that the duration of surgery correlated positively with the length of surgical intensive care unit (SICU) stay and ventilator support in univariate analysis. However, after controlling for confounding factors in a multivariate logistic regression, only the length of SICU stay remained significantly associated with the duration of surgery. Specifically, every additional 30 minutes of surgery time was associated with 4.32 more hours of SICU stay. Despite this, the duration of surgery was not associated with survival (PUBMED:18778805).
Other factors that may influence outcomes include the time of day the surgery is performed. A study evaluating the effects of daytime variation on short-term outcomes of patients undergoing off-pump coronary artery bypass grafting (OPCABG) found no difference in the composite incidence of in-hospital mortality and morbidities between morning and afternoon surgeries. However, patients in the afternoon group experienced less postoperative blood loss, decreased mechanical ventilation duration (MVD), and shorter lengths of stay in the ICU compared to patients in the morning group (PUBMED:38246822). Additionally, the implementation of an enhanced recovery after surgery (ERAS) program for patients undergoing isolated elective CABG was associated with significant improvements in postoperative outcomes, including reductions in mechanical ventilation duration, length of ICU stay, and length of hospital stay, among other benefits (PUBMED:37611846). In summary, while the duration of surgery does not appear to affect short-term survival after CABG, it does independently predict the length of SICU stays, and efforts to reduce the length of operations may promote more efficient use of hospital resources. Other factors, such as the time of day of the surgery and the use of ERAS programs, can also influence postoperative outcomes.
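A note on the arithmetic in the answer above: an adjusted regression coefficient per minute of surgery scales linearly to an effect per 30 minutes. The sketch below illustrates this with synthetic data and hypothetical confounders; it is not the study's model (the abstract describes a multivariate logistic regression) or its dataset.

```python
# Illustrative only: synthetic data and hypothetical confounders, not the
# model or dataset of PUBMED:18778805.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 337  # same order of magnitude as the published cohort

surgery_min = rng.normal(240, 45, n)   # duration of surgery, minutes
age = rng.normal(62, 7, n)             # confounder: age, years
ef = rng.normal(55, 10, n)             # confounder: ejection fraction, %

# Synthetic outcome: SICU stay (hours) loosely increasing with surgery time.
sicu_hours = 20 + 0.15 * surgery_min - 0.2 * ef + rng.normal(0, 8, n)

X = sm.add_constant(np.column_stack([surgery_min, age, ef]))
fit = sm.OLS(sicu_hours, X).fit()

beta_per_min = fit.params[1]  # adjusted coefficient for surgery duration
print(f"extra SICU hours per additional 30 min: {30 * beta_per_min:.2f}")
```

The same conversion applies to any per-minute coefficient: multiplying by 30 gives the per-half-hour effect, which is how a small coefficient can correspond to several additional SICU hours.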
Instruction: Can extracapsular lymph node involvement be a tool to fine-tune pN1 for adenocarcinoma of the oesophagus and gastro-oesophageal junction in the Union Internationale contre le Cancer (UICC) TNM 7th edition? Abstracts: abstract_id: PUBMED:24477743 Can extracapsular lymph node involvement be a tool to fine-tune pN1 for adenocarcinoma of the oesophagus and gastro-oesophageal junction in the Union Internationale contre le Cancer (UICC) TNM 7th edition? Objectives: The current (7th) International Union Against Cancer (UICC) pN staging system is based on the number of positive lymph nodes but does not take into consideration the characteristics of the metastatic lymph nodes themselves. In particular, it has been suggested that tumour penetration beyond the lymph node capsule in metastatic lymph nodes, which is also called extracapsular lymph node involvement, has a prognostic impact. The aim of the current study was to assess the prognostic value of extracapsular (EC) and intracapsular (IC) lymph node involvement (LNI) in adenocarcinoma of the oesophagus and gastro-oesophageal junction (GOJ) and to assess its potential impact on the 7th edition of the UICC TNM manual. Methods: From 2000 to 2010, all consecutive adenocarcinoma patients with primary R0-resection (n = 499) were prospectively included for analysis. The number of resected lymph nodes, number of positive lymph nodes and number of EC-LNI/IC-LNI were determined. Extracapsular spread was defined as infiltration of cancer cells beyond the capsule of the positive lymph node. Results: Two hundred and eighteen (43%) patients had positive lymph nodes. Cancer-specific 5-year survival in lymph node-positive patients was significantly (P < 0.0001) worse compared with lymph node-negative patients, being 28.7 vs 88.3%, respectively. In 128 (58.7%) cases EC-LNI was detected. EC-LNI showed significantly worse cancer-specific 5-year survival compared with IC-LNI, 19.6 vs 44.0% (P < 0.0001). In the pN1 category (1 or 2 positive LNs; UICC stages IIB and IIIA), this was 30.4% vs 58% (P = 0.029). In higher pN categories, this effect was no longer noticed. Integrating these findings into an adapted TNM classification resulted in improved homogeneity, monotonicity of gradients and discriminatory ability indicating an improved performance of the staging system. Conclusions: EC-LNI is associated with worse survival compared with IC-LNI. EC-LNI patients show survival rates that are more closely associated with the current TNM stage IIIB, while IC-LNI patients have a survival more similar to TNM stage IIB. Incorporating the EC-IC factor in the TNM classification results in an increased performance of the TNM model. Further confirmation from other centres is required within the context of future adaptations of the UICC/AJCC (American Joint Committee on Cancer) staging system for oesophageal cancer. abstract_id: PUBMED:26583670 International Multicenter Study on the Impact of Extracapsular Lymph Node Involvement in Primary Surgery Adenocarcinoma of the Esophagus on Overall Survival and Staging Systems. Objective: The current pathological lymph node (pN) staging is based on the number of positive lymph nodes but does not take into consideration characteristics of the involved lymph nodes themselves. The current study aims to examine the prognostic value of extracapsular lymph node involvement (EC-LNI) and intracapsular lymph node involvement (IC-LNI) for esophageal adenocarcinoma treated by primary surgery.
Methods: From the databases of five European high-volume centers, 1639 adenocarcinoma patients with primary R0-resection were retained after excluding 90-day mortality. Oncologic variables, including number of resected lymph nodes, number of resected positive lymph nodes, and EC-LNI/IC-LNI, were examined. The Union Internationale contre le Cancer (UICC) 7th edition prognostic staging was used as the baseline staging system. Statistical analysis was performed by Cox proportional hazards modeling and verified using the Random Survival Forest technique. Results: EC-LNI showed significantly worse overall 5-year survival compared with IC-LNI overall (13.4% vs 37.2%, P < 0.0001), including in each pN-category [16.4% vs 45.6% in pN1 (P < 0.0001), 16.1% vs 23.8% in pN2 (P = 0.047), and 8.7% vs 26.3% in pN3 categories (P = 0.065), respectively]. pN1 IC-LNI patients showed a 5-year overall survival comparable (P = 0.92) with stage IIB (i.e., pT3N0). Reclassifying the UICC prognostic stages according to these findings into an adapted staging model showed a significant (P < 0.0001) increase in homogeneity, discriminatory ability, and monotonicity compared with the original UICC TNM 7th edition prognostic staging. Conclusions: These data suggest that lymph node capsular status is an important prognostic factor and should be considered for the future edition of the TNM staging system for esophageal cancer. abstract_id: PUBMED:14991629 Comparison of old and new TNM systems for nodal staging in adenocarcinoma of the gastro-oesophageal junction. Background: Adenocarcinoma of the gastro-oesophageal junction is considered a distinct clinical entity, although the current pathological tumour node metastasis (pTNM) classification does not consider this tumour specifically. A prospective study was undertaken to determine the prognostic importance of lymph node involvement in adenocarcinoma of the gastro-oesophageal junction, analysing both a number- and site-based classification, in order to develop a clinically useful nodal staging system. Methods: Two classification systems were analysed in 116 patients who underwent resection for adenocarcinoma of the gastro-oesophageal junction from January 1988 to August 2001. The Cox regression model was used to evaluate the prognostic significance of the site and number of positive nodes. Results: The number- and site-based staging systems coincided only in 42 (56 per cent) of 75 patients; in particular, the old pN1 classification was upstaged in 13 of 41 patients and the old pN2 was downstaged in 13 of 34 patients. Lymph node involvement was the most important prognostic factor in both classifications (P < 0.001). The risk of death was significantly influenced by the site of nodal metastasis among patients with a similar number of involved nodes (relative risk with respect to pN0: 2.18 for pN1 with one to six nodes; 6.53 for pN2 with one to six nodes; 7.53 for pN1 with more than six nodes; 39.13 for pN2 with more than six nodes). Conclusion: Adenocarcinoma of the gastro-oesophageal junction requires a specific lymph node classification which should take into account both the number and site of nodal metastases. abstract_id: PUBMED:31029372 The 8th edition of the AJCC-TNM classification: New contributions to the staging of esophagogastric junction cancer. The new 8th edition of the TNM classification system for esophageal and cardia or esophagogastric junction cancer provides important innovations in the TNM stages.
Two stage-based classifications are presented, clinical (cTNM) and pathological (pTNM), together with another pathological classification applicable to cases receiving neoadjuvant treatment (ypTNM). There is a notable increase in complexity compared to previous versions, but it is still early to determine whether the current modifications will result in a clear improvement in the prognostic discrimination of survival among the patient groups (which is their main objective), although the initial expectations are favorable. abstract_id: PUBMED:27644904 Chemoradiotherapy in tumours of the oesophagus and gastro-oesophageal junction. Oesophageal cancer remains a malignancy with a poor prognosis. However, in the past 10-15 years relevant progress has been made by the introduction of chemoradiotherapy (CRT) for tumours of the oesophagus or gastro-oesophageal junction. The addition of neo-adjuvant CRT to surgery has significantly improved survival and locoregional control, for both adenocarcinoma and squamous cell carcinoma. For irresectable or medically inoperable patients, definitive CRT has changed the treatment intent from palliative to curative. Definitive CRT is a good alternative to radical surgery in responding patients with squamous cell carcinoma and those running a high risk of surgical morbidity and mortality. For patients with an out-of-field solitary locoregional recurrence after primary curative treatment, definitive CRT can lead to long-term survival. abstract_id: PUBMED:35635055 Clinicopathologic and survival differences between adenocarcinoma of the distal oesophagus and gastro-oesophageal junction. Background: The incidences of adenocarcinoma of the distal oesophagus (DO) and gastro-oesophageal junction (GOJ) are increasing. They may represent differing disease processes. This study aimed to assess clinicopathological and survival differences between patients with DO and GOJ adenocarcinomas. Methods: Data were extracted from a prospective single-surgeon database of consecutive patients undergoing an open Ivor-Lewis oesophagectomy for oesophageal adenocarcinoma (distal oesophagus, Siewert type I and II). Differences in clinicopathological characteristics and survival were evaluated and prognostic factors examined using univariate and multivariate survival analyses. Results: The data were available for 234 patients who underwent an oesophagectomy between 1992 and 2019. DO tumours had higher rates of Barrett's oesophagus (P < 0.001), presented with lower tumour stage (P = 0.02) and were more likely to be associated with fewer lymph nodes resected (P = 0.003) than GOJ tumours. The median overall survival for distal oesophageal tumours was 29.2 months, while that for gastro-oesophageal junction tumours was 38.6 months. Kaplan-Meier analysis did not show a difference in overall survival between the two groups (P = 0.08). However, when adjusted for potential confounders, GOJ tumours were associated with a reduced adjusted hazard of death (adjusted HR 0.58, 95% CI 0.36-0.92, P = 0.022) compared with DO tumours. Conclusion: This study suggests that GOJ cancers have different clinicopathological characteristics and improved survival compared to DO tumours. abstract_id: PUBMED:17582230 Prognostic nomogram for patients undergoing oesophagectomy for adenocarcinoma of the oesophagus or gastro-oesophageal junction. Background: Tumour node metastasis (TNM) staging predicts survival on the basis of the pathological extent of a tumour.
The aim of this study was to develop a prognostic model with improved survival prediction after oesophagectomy. Methods: Consecutive patients who had potentially curative oesophagectomy for adenocarcinoma of the oesophagus or gastro-oesophageal junction were included. Cox regression analyses were performed to examine the association between risk factors and time to death from oesophageal cancer. The concordance index, calculated after bootstrapping, was used to measure accuracy. A nomogram was designed for use in clinical practice. Results: Oesophageal cancer-specific survival rates for the 364 included patients who underwent oesophagectomy between 1993 and 2003 were 75.8, 54.9 and 39.2 per cent at 1, 2 and 5 years respectively. A prognostic model using all prognostic variables outperformed TNM staging (concordance index 0.79 versus 0.68 respectively; P < 0.001). A reduced model derived after backward elimination, containing only T stage, lymph node ratio and extracapsular lymph node involvement, also outperformed TNM staging (concordance index 0.77; P < 0.001). Conclusion: A prognostic model developed to predict disease-specific survival after oesophagectomy was superior to TNM staging. More reliable prognostic information might lead to different approaches to patient follow-up. abstract_id: PUBMED:33975686 The complexity of cancer origins at the gastro-oesophageal junction. Chronic acid-biliary reflux and Helicobacter pylori infection are instrumental environmental drivers of cancer initiation and progression in the upper gastrointestinal tract. Remarkably, although these environmental carcinogens are quite dissimilar, the tumour progression cascade these carcinogens engender is highly comparable. For this reason, studies of malignant progression occurring at the anatomic borderland between the oesophagus and the stomach have traditionally lumped junctional adenocarcinomas with either oesophageal adenocarcinoma or gastric adenocarcinoma. Whilst studies have revealed remarkable epidemiological and genetic similarities of these cancers and their associated premalignant conditions, these works have also revealed some key differences. This highlights that further scientific effort demands a dedicated focus on the understanding of the cell-cell interaction between the epithelium and the local microenvironment in this anatomic region. We here review available evidence with regard to tumour progression occurring at the gastro-oesophageal junction and contrast it with available data on cancer evolution in the metaplastic oesophagus and distal stomach. abstract_id: PUBMED:24728822 Evaluation of the nodal status in the 7th edition of the UICC-TNM classification for esophageal squamous cell carcinoma: proposed modifications for improved survival stratification: impact of lymph node metastases on overall survival after esophagectomy. Background: The 7th edition of the Union for International Cancer Control-TNM (UICC-TNM) classification for esophageal carcinoma made considerable modifications to the definition of N-staging by the number of involved lymph nodes and the regional node boundary. There were few validations of the regional boundary. We evaluated the nodal status of this classification for esophageal squamous cell carcinoma (ESCC). Methods: We reviewed 665 patients with ESCC who underwent esophagectomy between 1997 and 2012. We evaluated the impact of the location of lymph node metastasis on overall survival.
Results: There were 414 patients (61.7%) who had lymph node metastases. The overall 5-year survival rate was 54.7%. There were no significant differences in survival among N2, N3, and M1 patients. Cox regression analysis revealed that common hepatic or splenic node involvement (P = 0.001), pT stage (P = 0.0002), and pN stage (P < 0.0001) were independent predictors of survival, but supraclavicular node involvement (P = 0.29) was not. We propose a modified nodal status that designates the supraclavicular node as regional: m-N0 (5-year survival = 79%; n = 251); m-N1 (5-year survival = 56%; n = 212); m-N2 (5-year survival = 30%; n = 114); m-N3 (5-year survival = 18%; n = 52); m-M1 (5-year survival = 6.2%; n = 36). This modified nodal staging predicts survival better than the current staging system. Conclusions: The modification of the supraclavicular lymph node from nonregional to regional in the 7th UICC classification of ESCC may allow for better stratification of overall survival. abstract_id: PUBMED:29632424 Impact of postoperative TNM stages after neoadjuvant therapy on prognosis of adenocarcinoma of the gastro-oesophageal junction tumours. Aim: To compare the prognostic relevance of postoperative tumour/node/metastasis (TNM) stages between patients with and without neoadjuvant treatment. Methods: Data from patients with adenocarcinoma of the gastro-oesophageal junction (AEG) who had undergone surgical resection at a single German university centre were retrospectively analysed. Patients with or without neoadjuvant preoperative treatment were selected by exact matching based on preoperative staging. Standard assessment of preoperative (c)TNM stage was based on endoscopic ultrasound and computed tomography of the thorax and abdomen, according to the American Joint Committee on Cancer/Union for International Cancer Control classification system. Patients with cT1cN0cM0 and cT2cN0cM0 stages were excluded from the study, as these patients are generally not recommended for pretreatment. Long-term survival among the various postoperative TNM stages was compared between the groups of patients with or without neoadjuvant treatment. For statistical assessments, a P-value of ≤ 0.05 was considered significant. Results: The study included a total of 174 patients. The group of patients who had received preoperative neoadjuvant treatment included more cases of AEG (Siewert) type 1 carcinoma (P < 0.001), and consequently oesophagectomy was performed more frequently among these patients (P < 0.001). The two groups (with or without preoperative neoadjuvant treatment) had comparable preoperative T stages, but the group of patients with preoperative neoadjuvant treatment presented a higher rate of preoperative N-positive disease (P = 0.020). Overall long-term survival was not different between the two groups of patients according to tumours of different AEG classifications, receipt of oesophagectomy or gastrectomy, nor between patients with similar postoperative TNM stage, resection margin and grading. However, an improvement of long-term survival was found for patients with nodal down-staging after neoadjuvant therapy (P = 0.053). Conclusion: The prognostic relevance of postoperative TNM stages is similar for AEG in patients with or without neoadjuvant preoperative treatment, but treatment-related nodal down-staging prognosticates longer-term survival.
Answer: Yes, extracapsular lymph node involvement (EC-LNI) can be a tool to fine-tune pN1 for adenocarcinoma of the oesophagus and gastro-oesophageal junction in the Union Internationale contre le Cancer (UICC) TNM 7th edition. Studies have shown that EC-LNI is associated with worse survival compared to intracapsular lymph node involvement (IC-LNI). Specifically, in the pN1 category (1 or 2 positive lymph nodes), patients with EC-LNI had significantly worse cancer-specific 5-year survival compared to those with IC-LNI (30.4% vs 58%; P = 0.029) (PUBMED:24477743). Furthermore, integrating the EC-LNI and IC-LNI factor into an adapted TNM classification resulted in improved performance of the staging system, indicating that EC-LNI patients show survival rates more closely associated with the current TNM stage IIIB, while IC-LNI patients have survival more similar to TNM stage IIB (PUBMED:24477743). An international multicenter study also supported the prognostic value of EC-LNI, showing that EC-LNI had significantly worse overall 5-year survival compared with IC-LNI across all pN-categories. Reclassifying the UICC prognostic stages according to these findings into an adapted staging model showed a significant increase in homogeneity, discriminatory ability, and monotonicity compared with the original UICC TNM 7th edition prognostic staging (PUBMED:26583670). Therefore, the evidence suggests that lymph node capsular status is an important prognostic factor and should be considered for future editions of the TNM staging system for esophageal cancer (PUBMED:26583670). Incorporating EC-LNI into the staging system could provide a more accurate prognosis and potentially guide treatment decisions for patients with adenocarcinoma of the oesophagus and gastro-oesophageal junction.
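Several of the abstracts above judge a staging system by its discriminatory ability, typically Harrell's concordance index (C-index), as in PUBMED:17582230. The sketch below shows how two staging schemes can be compared on that metric. It uses synthetic survival data and hypothetical stage variables, not the patient data behind any of the cited studies.

```python
# Illustrative only: hypothetical stages and synthetic survival data, not
# the cohorts of PUBMED:24477743 or PUBMED:26583670.
import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(1)
n = 500
stage_original = rng.integers(1, 5, n)  # ordinal risk stage, higher = worse

# "Adapted" stage: a refinement (e.g. splitting node-positive cases by
# extracapsular vs intracapsular involvement), simulated here as a stage
# that tracks the outcome-generating process more closely.
stage_adapted = np.clip(stage_original + rng.integers(-1, 2, n), 1, 6)

time = rng.exponential(60 / stage_adapted)  # survival time in months
event = rng.random(n) < 0.7                 # True = death observed

# concordance_index expects higher scores for longer survival, so negate
# the risk stages before scoring.
print("C-index, original staging:", concordance_index(time, -stage_original, event))
print("C-index, adapted staging: ", concordance_index(time, -stage_adapted, event))
```

A C-index of 0.5 corresponds to random ordering and 1.0 to perfect ordering of survival times, which is why the cited nomogram's 0.77-0.79 against TNM's 0.68 was read as a meaningful improvement.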
Instruction: Does return to work occur earlier after work-related sick leave episodes than after non-work-related sick leave episodes? Abstracts: abstract_id: PUBMED:18805879 Does return to work occur earlier after work-related sick leave episodes than after non-work-related sick leave episodes? A retrospective cohort study in Spain. Objective: In Spain, sick pay benefits for work-related sick leave episodes are higher than for non-work-related episodes. Our aim is to assess whether time to return to work is longer for higher paid sick leave episodes than for lower paid episodes. Methods: We used data from 62,376 work-related and 76,932 non-work-related sick leave episodes occurring among 338,226 workers from 56,099 companies in Spain in 2002. All episodes were followed for up to 18 months. Episodes were classified by a physician as being work- or non-work-related according to medico-legal judgments. The median episode duration and the 25th and 75th percentiles were calculated. The probability of remaining absent from work was estimated by a non-parametric estimator of the marginal survival function. The time ratio between both types of sick leave was estimated by a log-logistic regression model, using non-work-related episodes as the reference. Results: Median episode duration (25th-75th percentiles) was 11 (6-21) days for work-related episodes and 9 (4-29) days for non-work-related episodes. Time to return to work was longer for work-related episodes than for non-work-related episodes of less than 16 days (time ratio: 1.19 in men and 1.08 in women), while the opposite was observed for episodes of more than 15 days (0.58 in men and 0.40 in women). Conclusions: Sick pay benefits have a limited effect on time to return to work after a sick leave episode. abstract_id: PUBMED:22425457 Predictors of the duration of non-work-related sick leave due to anxiety disorders. Objectives: To analyze the duration of non-work-related sick leave due to anxiety disorders and to identify demographic, occupational and clinical variables that may contribute to its prediction. Methods: We performed a prospective cohort study of 1,161 workers with an episode of non-work-related sick leave due to an anxiety disorder, belonging to the insured population of a mutual insurance company. We assessed the duration of non-work-related sick leave episodes and the main potentially related demographic, occupational and clinical variables. All non-work-related sick leave processes were followed up until discharge. Cox regression analyses were conducted to establish the predictors of non-work-related sick leave duration. Results: The median duration of non-work-related sick leave due to anxiety disorders was 83 days. In a multivariate analysis, the following factors were identified as being significantly associated with increases in the duration of sick leave (p < 0.05): age over 35 years, lower educational level (primary school studies, secondary school studies or high-school diploma vs. university degree), and the existence of comorbidity and unemployment occurring during the sick leave. In contrast, being separated or divorced was associated with an earlier return to work (p < 0.05). Conclusions: Anxiety disorders are associated with long periods of non-work-related sick leave compared with other disorders and with standard durations.
Demographic, occupational and clinical variables collected at the initial assessment of the sick leave episode would help to identify groups with an increased risk of prolonged sick leave, requiring strategies to facilitate return to work. abstract_id: PUBMED:32089014 Work ethics and societal norms influence sick leave and return to work: tales of transformation. Purpose: This study's purpose was to explore how people on sick leave manage societal norms and values related to work, and how these influence their perspectives of themselves throughout the rehabilitation process. Materials And Methods: This was a longitudinal interview study with a narrative approach, comprising 38 interviews with 11 individuals on long-term sick leave. Data collection was conducted in two phases and analysed iteratively through content analysis. Results: The results suggest that work ethics and societal norms influence individuals' views of themselves and the sick leave and rehabilitation process. Conforming one's personal values to the work norm can create internal conflicts and cause feelings of shame for not being able to live up to the established norm. The strong work norm may create unrealistic expectations, which in some cases may result in constraining the return to work process. Conclusion: To transform a sick leave narrative into a positive one, societal norms and their influence on identity need to be recognised. Stakeholders involved in the process can contribute to a positive transformation by not only supporting return to work, but also by acknowledging and helping people manage their self-image as having a disability that limits their ability to work. IMPLICATIONS FOR REHABILITATION: Stakeholders involved in the sick leave and rehabilitation process need to support sick-listed individuals by acknowledging and helping people manage their self-image. Full RTW is not always the best option from a quality of life and wellbeing perspective. Treatment and support from stakeholders should be viewed as meaningful and legitimate, even if it does not lead to RTW. abstract_id: PUBMED:24865407 Sick leave diagnoses and return to work: a Swedish register study. Purpose: To provide a detailed description of return to work (RTW) for different diagnoses. Methods: A sample of 617,611 cases of sick leave with a minimum duration of 14 days, started in 2009 and 2010, was followed for 450 days. Data on sick leave episodes, diagnosis (ICD-10 codes at the 3-digit level) and background factors were retrieved from Swedish National Social Insurance registers. RTW was analysed with Cox proportional hazard regression with separate analyses for women and men. Results: The diseases with the lowest RTW rates were present within malignant neoplasms, severe mental disorders and severe cardiovascular diseases. High RTW rates were present for infectious respiratory diseases, viral infections and less severe cardiovascular diseases. There were distinct differences across diagnoses within all diagnosis chapters. This also holds for mental disorders and musculoskeletal diseases, which are the most common causes of sick leave. The results for women and men across different diseases and disorders were strikingly similar. Conclusions: Systematic use of standard information such as detailed sick-listing diagnosis could, at low additional cost, pinpoint cases at risk for prolonged sick leave. The use of broad diagnosis categories in sick leave research may be questioned since there are distinct differences in RTW within ICD-10 chapters.
abstract_id: PUBMED:37058510 Work as an arena for health - Supervisors' experiences with attending to employees' sick leave and return-to-work process. Background: Supervisors play a pivotal role in the sick leave process. Although responsibility for sick leave and return to work follow-up is increasingly placed on the workplace in Norway, few studies have explored supervisors' experiences. This study aims to explore supervisors' experiences with attending to employees' sick leave and return-to-work process. Methods: This study consists of individual interviews with 11 supervisors from diverse workplaces that were analysed thematically. Results: The supervisors emphasised the value of presence at the workplace, the need for them to obtain information and uphold dialogue, considering individual and environmental influences on return-to-work and allocating responsibility. Investing time and money was crucial to prevent or reduce the negative impact of sick leave. Conclusions: The supervisors' perception of attending to sick leave and return-to-work largely reflects Norwegian legislation. However, they find obtaining information and managing responsibility challenging, suggesting that their responsibilities for return-to-work are perhaps disproportionate to their knowledge on attending this process. Individualised support and guidance on how to develop accommodations based on the employee's workability should be made available. The reciprocal nature of follow-up described also reveals how the return-to-work process is enmeshed with (inter)personal considerations, possibly resulting in unequal treatment. abstract_id: PUBMED:35106629 Predictors of return to work for people on sick leave with common mental disorders: a systematic review and meta-analysis. Purpose: To support return to work following common mental disorders, it is important to know which factors influence it. We aimed to identify factors predicting return to work for people on sick leave with disorders like stress, anxiety, and depression. Methods: A systematic review and meta-analyses were conducted regarding return to work at any time point, < 3 months, 3-12 months, and > 12 months of sick leave, respectively, and diagnostic subgroups. Results: The meta-analyses included 29 studies. Predictors decreasing return to work probability at any time point were higher age, being male, neuroticism and openness, previous sickness absence, and higher symptom scores. Predictors increasing return to work probability were positive return to work expectations, high return-to-work and general self-efficacy, conscientiousness, and high workability index. Return to work within < 3 months of sick leave was associated with positive return to work expectations. Return to work after > 12 months was increased by higher education. Higher age was associated with decreased return to work probability after > 12 months. No significant predictors were found in diagnostic subgroups. Conclusion: Results are overall consistent with earlier reviews. Future studies should focus on specific time points, diagnostic subgroups, and work-related factors. PROSPERO registration ID: CRD42018073396. abstract_id: PUBMED:36578011 Development of the core of an ICF-based instrument for the assessment of work capacity and guidance in return to work of employees on sick leave: a multidisciplinary modified Delphi study.
Background: Several occupational health disciplines are involved in return to work guidance, implying that good interdisciplinary collaboration is important. A shared conceptual framework and a common language for the assessment of work capacity and guidance in return to work are expected to be to the benefit of appropriate and sustainable employability of sick employees. The International Classification of Functioning, Disability and Health (ICF) can be considered a shared conceptual framework and is also promising in terms of a common language. The purpose of the current study is to reach multidisciplinary consensus among occupational health professionals on the content of an ICF-based instrument for the assessment of work capacity and guidance in return to work. Methods: To obtain multidisciplinary consensus we conducted a modified Delphi study among twelve occupational health experts, including four occupational physicians, four insurance physicians and four labour experts. The study included two e-mail rounds and two virtual meetings. In the consecutive rounds the experts assessed ICF items, as well as a list of non-ICF-based work-related environmental factors, on their relevance for the assessment of work capacity and guidance in return to work, together with their interpretability. Results: The four consecutive Delphi rounds resulted in 20 items that are minimally needed for the assessment of the work capacity and return to work possibilities of employees on sick leave. The final list included six items on personal functioning, seven items on social functioning and seven items on physical functioning. Conclusions: This set of items forms the core of an ICF-based instrument, which is expected to facilitate interdisciplinary and intradisciplinary communication because of the use of a shared conceptual framework. As such, it should be of help in the guidance in return to work of employees on sick leave and contribute to appropriate and sustainable employability. abstract_id: PUBMED:36064472 Effectiveness of work-related interventions for return to work in people on sick leave: a systematic review and meta-analysis of randomized controlled trials. Background: Long-term sick leave is a serious concern in developed countries, and the cost of sickness absence and disability benefits causes major challenges for both the individual and society as a whole. Despite an increasing body of research reported by existing systematic reviews, there is uncertainty regarding the effect on return to work of work-related interventions for workers with different diagnoses. The objective of this systematic review was to assess and summarize available research about the effects of work-related interventions for people on long-term sick leave and those at risk of long-term sick leave. Methods: We conducted a systematic review in accordance with international guidelines. Campbell Collaboration (Area: Social Welfare), Cochrane Database of Systematic Reviews, Cochrane Central Register of Controlled Trials, Embase, Epistemonikos, MEDLINE, PsycINFO, Scopus, and Sociological Abstracts were systematically searched in March 2021. Two authors independently screened the studies. We conducted risk of bias assessments and meta-analyses of the available evidence in randomized controlled trials (RCTs). The remaining comparisons were synthesized narratively. The certainty of evidence for each outcome was assessed. Results: We included 20 RCTs comprising 5753 participants at baseline from 4 different countries.
The studies had generally low risk of bias. Our certainty in the effect estimates ranged from very low to moderate. Eight different interventions were identified. Meta-analysis revealed no statistically significant difference between multidisciplinary rehabilitation (MR) and usual care (US) (Risk Ratio [RR] 1.01; 95% Confidence Interval [CI] 0.70-1.48 at 12 months follow-up) and between MR and other active intervention (Risk Ratio [RR] 1.04; 95% Confidence Interval [CI] 0.86-1.25 at 12 months follow-up). The remaining intervention groups revealed marginal or no effects compared to the control group. The results for the secondary outcomes (self-efficacy, symptom reduction, function, cost-effectiveness) showed varied and small effects in the intervention groups. Conclusion: Overall, the present data showed no conclusive evidence of which work-related intervention is most effective for return to work. However, a handful of potential interventions exist that may contribute to a foundation for future research. Our findings support the need for adequately powered and methodologically strong studies. abstract_id: PUBMED:37067701 The Effectiveness of Cognitive-Behavioral Therapy in Helping People on Sick Leave to Return to Work: A Systematic Review and Meta-analysis. Purpose: Previous research has systematically studied the effectiveness of Cognitive Behavioral Therapy (CBT)-based interventions in managing both mental and physical symptoms of chronic disease including depression, stress-related mental disorders (SMD), and chronic pain that are common causes of sick leave. However, a systematic review focusing on the effectiveness of CBT in facilitating RTW is lacking. This study compiles research on utilizing CBT-based interventions for helping employees on sick leave return to work. Methods: Randomized controlled trials (RCT) published between 1 January 1990 and 27 June 2022 were searched in MEDLINE, EMBASE, The Cochrane Library, Scopus, PsycINFO, Web of Science, and PubMed. The primary outcome variables included a return to work (RTW) measure and sickness absences. The secondary outcomes include psychological conditions (mental illness, stress, anxiety, and depression) and physical condition (working ability, fatigue, and physical function). Results: Thirty-four RCTs were included in the analysis. Fifteen RCTs with 1727 participants reported on sick leave. Results showed that participants who completed a CBT intervention had reduced sick leave in days (mean reduction -3.654; 95% CI -5.253, -2.046; p < 0.001) compared to the control group. Sixteen papers with 2298 participants reported that the intervention group returned to work 1.5 days earlier (95% CI 1.019, 1.722; p < 0.05). CBT-based interventions were effective in managing fatigue, mental illness, and depression, and in improving physical function, while showing no effects on stress, anxiety, or working ability. Conclusions: The findings indicate that CBT-based interventions are effective in reducing the length of sick leave and facilitating the RTW of employees in the intervention group. abstract_id: PUBMED:9916829 Shift work and sick leave. Objective: Shift workers working nights are known to have higher morbidity from certain illnesses than day workers. This study examined episodes of certified sick leaves of day workers and shift workers in a large industrial plant to examine whether slowly rotating shift work leads to increased risk of sick leave.
Methods: In a case-base design, more than 11,000 episodes of sick leave, lasting more than 3 days, were obtained from the sick-leave files of a chemical plant in Norway. The diagnoses were grouped into 5 categories. According to information on their work schedules, the workers included in the study were divided into 3 groups: those working slowly rotating 3-shift schedules, 2-shift schedules without night work, and daytime schedules. Results: For all the diagnoses the shift workers and day workers were evenly distributed among the cases and the referents, the odds ratios ranging from 0.8 to 1.2. The risk of sick leave did not change with the number of years in shift work. There was a higher risk of sick leave with musculoskeletal diagnoses among the 2-shift workers. Conclusions: In this study shift workers did not have a higher risk of sick leave for diseases that, in previous studies, have been shown to be related to shift and night work. Although bias may be present in the study, the results are in line with those of previous studies, and they suggest that even certified sick leaves are not a valid proxy for morbidity. Answer: According to the retrospective cohort study conducted in Spain, the time to return to work after a sick leave episode varies depending on the duration of the sick leave. For episodes of less than 16 days, the time to return to work was longer for work-related episodes than for non-work-related episodes (time ratio: 1.19 in men and 1.08 in women). However, for episodes lasting more than 15 days, the opposite was observed, with a quicker return to work for work-related episodes (time ratio: 0.58 in men and 0.40 in women) (PUBMED:18805879). This suggests that while sick pay benefits may have a limited effect on the return to work after a sick leave episode, the duration of the sick leave plays a significant role in determining the time of return to work, with work-related episodes leading to a quicker return to work when the sick leave is longer.
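The time ratios in the answer above come from a log-logistic accelerated failure time (AFT) model, the approach named in PUBMED:18805879. The sketch below shows how such a time ratio can be estimated; the data are synthetic and the column names hypothetical, not the Spanish registry's.

```python
# Illustrative only: synthetic sick-leave episodes with hypothetical column
# names, not the data of PUBMED:18805879.
import numpy as np
import pandas as pd
from lifelines import LogLogisticAFTFitter

rng = np.random.default_rng(2)
n = 2000
work_related = rng.integers(0, 2, n)

# Synthetic durations (days), slightly longer when the episode is work-related.
days = rng.lognormal(mean=2.2 + 0.15 * work_related, sigma=0.8, size=n)

df = pd.DataFrame({
    "days": days,
    "returned": (rng.random(n) < 0.97).astype(int),  # 0 = censored episode
    "work_related": work_related,
})

aft = LogLogisticAFTFitter()
aft.fit(df, duration_col="days", event_col="returned")

# In an AFT model, exp(coef) is the time ratio: values above 1 mean a
# longer time to return to work for work-related episodes.
coef = aft.params_.loc[("alpha_", "work_related")]
print(f"estimated time ratio, work-related vs not: {np.exp(coef):.2f}")
```

Read against the study's results, a time ratio of 1.19 means work-related episodes lasted about 19% longer, while 0.58 means they resolved in roughly 58% of the reference time.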
Instruction: Does an electrocardiogram add predictive value to the Rose angina questionnaire for future coronary heart disease? Abstracts: abstract_id: PUBMED:22544919 Does an electrocardiogram add predictive value to the Rose angina questionnaire for future coronary heart disease? 10-year follow-up in a Middle East population. Background: To evaluate the power of abnormal resting ECG versus Rose Questionnaire angina and its additive value in predicting 10-year coronary heart disease (CHD) risk in an Iranian urban population with a high prevalence of CHD. Methods: There were 5101 subjects ≥30 years (2900 women), free of CHD at baseline; they were categorised into four groups according to their Rose Angina and ECG status for ischaemia as Rose-/ECG-, Rose+/ECG-, Rose-/ECG+ and Rose+/ECG+. HR of CHD was estimated using Cox regression analysis, given Rose-/ECG- as the reference. The authors used Akaike information criterion, C-index and integrated discrimination improvement indices to evaluate the prognostic value of ECG when added to the Rose Questionnaire. Results: During follow-up, 387 CHD events (169 women) were observed. Multivariate analysis showed a HR of 2.59 (95% CI 1.71 to 3.91) and 2.26 (1.48 to 3.44) for the Rose+/ECG- group in men and women, respectively. These figures for Rose-/ECG+ were 1.36 (0.90 to 2.05) in men and 2.09 (1.40 to 3.12) in women. There was no interaction between Rose Questionnaire and gender to predict incident CHD in age-adjusted analysis. Akaike information criterion, C-index and relative integrated discrimination improvement did not show any difference between models including Rose alone and Rose plus ECG in predicting CHD events, especially in men. Conclusions: The Rose Questionnaire as a simple screening tool is equally important to predict incident CHD in both genders. Adding abnormal ECG to angina did not culminate in higher risk for future CHD events. abstract_id: PUBMED:2732767 Comparison of a supplemented Rose Questionnaire to exercise thallium testing in men and women. The Rose Questionnaire has had high specificity and variable sensitivity when compared to physician identification of the symptom complex of angina pectoris. We assessed the accuracy of a supplemented Rose Questionnaire in a series of 198 patients by comparing the Questionnaire to exercise thallium test evidence of coronary artery disease. The Rose diagnosis of angina had 26% sensitivity, 79% specificity, 42% positive predictive value, and 65% negative predictive value. The Rose diagnosis of myocardial infarction had 26% sensitivity and 90% specificity. The Rose diagnosis of angina or infarction yielded a sensitivity of 44%, specificity of 72%, positive predictive value of 67%, and negative predictive value of 50%. Supplemental questions designed to identify atypical ischemic pain led to increased sensitivity of up to 68% that was offset by decreased specificity. While the Questionnaire's sensitivity for coronary disease was greater for women than men (57 vs 40%), the overall accuracy was the same because specificity was lower (63 vs 80%). abstract_id: PUBMED:28848610 Risk of Coronary Heart Events Based on Rose Angina Questionnaire and ECG Besides Diabetes and Other Metabolic Risk Factors: Results of a 10-Year Follow-up in Tehran Lipid and Glucose Study. Background: High-risk individuals for CHD could be diagnosed by some non-invasive and low-priced techniques such as Minnesota ECG coding and the Rose angina questionnaire (RQ).
Objectives: The present study aimed at determining the risk of incident CHD according to ECG and RQ besides diabetes and other metabolic risk factors in our population. Methods: Participants comprised 5431 individuals aged ≥ 30 years within the framework of the Tehran Lipid and Glucose Study. Based on their status on history of CHD, ECG, and RQ at baseline, all participants were classified into the following 5 groups: (1) History-Rose-ECG- (the reference group); (2) History-Rose+ECG-; (3) History-Rose-ECG+; (4) History-Rose+ECG+; and (5) History+. We used a Cox regression model to find the role of ECG and RQ on CHD, independent of other risk factors. Results: Overall, 562 CHD events were detected during a median of 10.3 years of follow-up. CHD incidence rates were 55.9 and 9.09 cases per 1000 person-years for participants with and without a history of CHD, respectively. Hazard ratios (HRs) (95% CIs) were 4.11 (3.27 - 5.11) for History+ and 2.18 (1.63 - 2.90), 1.92 (1.47 - 2.51), and 2.48 (1.46 - 4.20) for History-Rose+ECG-, History-Rose-ECG+, and History-Rose+ECG+, respectively. RQ and ECG had HRs as high as those for hypertension and hypercholesterolemia; however, diabetes showed statistically and clinically greater effects on CVD than RQ and ECG. Conclusions: RQ in general, and ECG especially in asymptomatic patients, were good predictors of CHD events in both Iranian males and females; however, their predictive powers were lower than that of diabetes.
abstract_id: PUBMED:8665372
Resting electrocardiogram and risk of coronary heart disease in middle-aged British men.
Objective: To examine the relation between resting electrocardiographic (ECG) abnormalities and risk of coronary heart disease (CHD). Design And Setting: This was a prospective study of 7735 middle-aged men aged 40-59 years at entry (British Regional Heart Study). At baseline assessment each man completed a modified World Health Organization (WHO) (Rose) chest-pain questionnaire, gave details of his medical history and had a three-lead orthogonal electrocardiogram recorded. 'Symptomatic CHD' refers to a history of anginal chest pain and/or a prolonged episode of central chest pain on the WHO questionnaire and/or recall of a doctor diagnosis of CHD (angina or myocardial infarction). Main Outcome Measures: These were the first major CHD events, i.e. fatal CHD and non-fatal myocardial infarction, occurring during 9.5 years of follow-up. Results: Of 611 first major CHD events during follow-up, 243 (40%) were fatal. After adjustment for age, other ECG abnormalities and symptomatic CHD, the ECG abnormalities most strongly associated with risk of a major CHD event were definite myocardial infarction (relative risk 2.5; 95% confidence interval 1.8-7.5) and definite myocardial ischaemia (1.9; 1.1-2.9). Other ECG abnormalities independently associated with a statistically significant increase in risk were left ventricular hypertrophy (2.2; 1.5-3.3), left axis deviation (1.3; 1.1-1.6) and ectopic beats, particularly if these were ventricular (1.6; 1.1-2.4). Three ECG abnormalities associated with a marked increase in CHD case-fatality rate were pre-existing myocardial infarction (67%), major conduction defect (71%) and arrhythmia (67%); the rate in men with none of these abnormalities was 32%. The relative risks associated with each ECG abnormality were similar in men with and without symptomatic CHD.
The increase in risk in the presence of symptomatic CHD (2.4-fold) and ECG evidence of definite myocardial infarction (2.5-fold) was similar; the presence of both factors increased risk more than six-fold. The most serious ECG abnormalities - definite myocardial infarction and ischaemia - were useful predictors of future major CHD events only in men with symptomatic CHD. Conclusion: The prognostic importance of major ECG abnormalities is strongly influenced by the presence of symptomatic CHD. In men with symptomatic CHD the resting electrocardiogram may help to define a group at high risk who may benefit from intervention. However, it has little or no value as a screening tool in middle-aged men without symptomatic CHD.
abstract_id: PUBMED:12047781
Prognostic value of dobutamine echocardiography in elderly patients with suspected coronary artery disease and an abnormal resting electrocardiogram.
Unlabelled: The prognostic value of dobutamine stress echocardiography (DE) in elderly patients over 70 years of age with suspected coronary artery disease (CAD) and an abnormal electrocardiogram (ECG) at rest has not been well documented. We investigated 158 consecutive outpatients (97 men) aged 76 +/- 4 years presenting with right bundle branch block with or without a left anterior hemiblock (41%), left bundle branch block (23%), or repolarization abnormalities (36%). The basic clinical and echocardiographic data were analyzed and correlated with the DE results. The patients were followed for a mean of 24 +/- 13 months. Twenty-three (14.5%) patients presented with a nonfatal cardiac event (CE). According to univariate analysis, the parameters associated with the onset of a CE included arterial hypertension, angina pectoris, anti-angina therapy, the presence of calcifications on the aortic valve and the ascending walls of the aorta, and the presence of segmental-contraction abnormalities of the left ventricle (LV) at rest during echocardiography. According to a Cox regression model, the only independent predictive parameter for the onset of a CE was the appearance of segmental-contraction abnormalities of the LV and their severity during peak dose administration during DE (odds ratio [OR] = 2.58, P < 0.001). This parameter remains independent when spontaneous CEs (myocardial infarction and unstable angina) were considered as endpoints (OR = 2.31, P < 0.001). Conclusions: DE is a safe and reliable method for investigating CAD in patients over 70 years of age with an abnormal resting ECG. In our study, the most predictive independent parameter for the onset of a CE was the appearance of segmental-contraction abnormalities of the LV and their severity during DE.
abstract_id: PUBMED:14624423
Intracoronary electrocardiogram and angina pectoris during percutaneous coronary interventions as an assessment of myocardial viability: comparison with low-dose dobutamine echocardiography.
The aims of this study were to evaluate the diagnostic value of the intracoronary electrocardiogram (ECG) and the presence of angina pectoris during percutaneous coronary interventions in the prediction of myocardial viability assessed by low-dose dobutamine echocardiography (LDDE). Seventy-one patients (60 men; mean age, 54 +/- 11 years) with recent Q-wave MI and angiographically documented regional wall motion abnormality in the presence of a significant (≥ 70%) nonocclusive stenosis of the infarct-related vessel who were referred for angioplasty were prospectively included in the study.
The intracoronary ECG was recorded using a coronary angioplasty guidewire. Significant ST segment elevation was defined as a new or worsening ST segment elevation of ≥ 0.1 mV at 80 msec after the J-point. Angina pectoris was noted as present or absent during balloon inflation. All patients underwent LDDE for viability assessment. Significant ST segment elevation in the intracoronary ECG and chest pain were observed in 56 (78.9%) and 49 (69%) of the 71 patients, respectively. Viability was present on LDDE in 52 (92.9%) of 56 patients with and 3 (20%) of 15 without ST segment elevation. Viability was detected in 45 (91.8%) of 49 patients with and 10 (45.4%) of 22 without angina pectoris during balloon occlusion. The sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of ST segment elevation for viability were 94.5%, 75%, 92.9%, 80%, and 90.1%, respectively. The sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of angina pectoris for viability were 81.8%, 75%, 91.8%, 54.5%, and 80.3%, respectively. The present study demonstrated that a simple assessment of ST segment elevation in the intracoronary ECG or angina pectoris during coronary angioplasty can be used to assess myocardial viability identified by LDDE in patients with previous MI.
abstract_id: PUBMED:11570113
Exercise electrocardiogram testing and prognosis. Novel markers and predictive instruments.
Ample evidence now exists supporting the use of the exercise test primarily for prognostic, as opposed to diagnostic, purposes. Although limitations must be recognized, the Duke exercise treadmill score, the chronotropic response to exercise, and heart rate recovery appear to function as powerful and independent predictors of risk. With the possible exception of exercise-induced ischemia, as manifested by the ST-segment and angina components of the Duke exercise treadmill score, exercise predictors of risk are not clearly modifiable. Nonetheless, they are clinically quite useful since they may well identify patients who are or are not likely to gain benefit from further testing and aggressive therapies. How so? The "plain old" exercise treadmill test makes it possible to easily, safely, and inexpensively identify a large group of patients who are at low risk for death or major cardiac events. For this reason alone, the predictive instruments described in this article should be routinely incorporated into clinical practice. It makes no sense to perform expensive and potentially risky diagnostic tests, prescribe polypharmacy, or institute invasive therapeutic procedures in patients who are already at low risk. As an example, Weiner and colleagues found that coronary bypass grafting only benefited CASS registry patients who had a high-risk exercise test result. Future research will be needed to further refine risk stratification with the exercise test, and to determine how best to use adjunctive imaging studies and to reduce risk among patients with prognostically important findings.
abstract_id: PUBMED:7775893
The evaluation of an Arabic version of Rose questionnaire.
An Arabic version of the Rose questionnaire was tested to establish its credibility as an epidemiological tool for screening purposes. The study showed that the criteria used in defining CHD determinants were still applicable today and in Arabic. The Arabic version maintained similar, if not better, sensitivity and specificity results than its original counterpart.
The study pointed out, however, several limitations that should be kept in mind during the interpretation of the questionnaire results.
abstract_id: PUBMED:10235174
Rose questionnaire angina in younger men and women: gender differences in the relationship to cardiovascular risk factors and other reported symptoms.
Cross-sectional data from the Whitehall II study baseline were used to identify factors that may lead to the high levels of Rose angina reporting in women. 134 (4.0%) of 3350 women and 164 (2.4%) of 6830 men reported angina (P<0.001). Women with Rose angina had a poorer cardiovascular risk profile (degree of obesity, serum cholesterol and apolipoprotein B, blood pressure) and more electrocardiogram abnormalities (ST and T changes) than women without angina, but the associations were generally weaker than in men. Women who reported many other physical symptoms had a high prevalence of Rose angina (9.7%). Adjustment for symptom reporting reduced the age-adjusted gender difference to odds ratio (OR) = 0.93 (95% confidence interval [CI]: 0.56-1.56) for subjects with no symptoms, and to OR = 1.42 (95% CI = 1.05-1.90) for subjects at the upper quartile of the symptom score. Among women a high level of general symptom reporting was associated with General Health Questionnaire (GHQ) minor psychiatric morbidity (51.9% prevalence), but GHQ caseness does not appear to be a predictor of Rose angina (OR 1.22 [0.67-2.21]) in this group. Coronary artery disease risk is raised in women with Rose angina, and this remains true in groups with high levels of general symptom reporting.
abstract_id: PUBMED:10493847
Extent and severity of test positivity during dobutamine stress echocardiography. Influence on the predictive value for coronary artery disease.
Aims: Recent studies have evaluated the diagnostic accuracy and predictive value of dobutamine echocardiography without considering the additional information implied by the magnitude of induced wall motion abnormalities. We sought to evaluate the positive predictive value of dobutamine echocardiography for coronary artery disease from the extent and severity of the induced wall motion abnormality. In addition, we intended to determine factors associated with false-negative dobutamine echocardiography. Methods And Results: Two hundred and eighty-three consecutive patients with suspected coronary artery disease underwent dobutamine echocardiography (up to 40 microg x kg(-1) x min(-1) + atropine up to 1 mg) and coronary angiography. The number of segments and the degree of deterioration were used to describe the extent and severity of induced wall motion abnormality. Analysis of clinical, procedural and echocardiographic variables was performed to determine factors associated with false-negative results. The positive predictive value of dobutamine echocardiography increased from 85% to 90%, 94% and 94% with deterioration of wall motion by one grade in ≥1, ≥2, ≥3 and ≥4 segments, respectively (P<0.05). Deterioration of wall motion by two grades in one segment had a positive predictive value of 96% as compared to 85% for deterioration by only one grade in one segment (P<0.05). Patients with false-negative test results received atropine more frequently (28% vs 13%, odds ratio [OR]=3.87, 95% confidence interval [CI]=1.54-9.75, P=0.028) than patients with a correct positive result.
However, angina (15% vs 37%, OR=0.26, 95% CI=0.09-0.71, P=0.010), ECG changes during dobutamine stress (15% vs 35%, OR=0.49, 95% CI 0.19-1.25, P=0.014) and high image quality (OR 1.59, 95% CI 1.07-2.37, P=0.015) were less frequent. The sensitivity of dobutamine echocardiography increased from 67% to 71% and 86% (P<0.05) with increasing achieved maximal heart rate (<75%, 75-85% and >85% of maximal heart rate). Conclusion: The positive predictive value of dobutamine echocardiography increases significantly as the extent and severity of the induced wall motion abnormality increase. Thus, the degree of test positivity should be reported in clinical practice. Despite high pharmacological drug doses, the haemodynamic response may still be insufficient in some patients to induce myocardial ischaemia, resulting in false-negative dobutamine echo tests. To maximize the sensitivity of dobutamine echocardiography, the highest haemodynamic stress level, with a heart rate above 85% of the predicted heart rate, should be reached.

Answer: The predictive value of an electrocardiogram (ECG) when added to the Rose Angina Questionnaire for future coronary heart disease (CHD) has been evaluated in various studies. According to a 10-year follow-up study in an Iranian urban population, adding abnormal ECG to the Rose Questionnaire did not result in a higher risk prediction for future CHD events, especially in men (PUBMED:22544919). This suggests that the Rose Questionnaire alone is a significant predictor of incident CHD in both genders and that the addition of an ECG may not provide substantial additional prognostic information. Another study assessing the Rose Questionnaire against exercise thallium testing found that the Rose Questionnaire had high specificity but variable sensitivity for identifying angina pectoris (PUBMED:2732767). Supplemental questions designed to identify atypical ischemic pain increased sensitivity but decreased specificity. In the Tehran Lipid and Glucose Study, it was found that the Rose Questionnaire and ECG were good predictors for CHD events in both Iranian males and females, but their predictive powers were lower than that of diabetes (PUBMED:28848610). This indicates that while the Rose Questionnaire and ECG can be useful in predicting CHD, they may not be as strong as other metabolic risk factors such as diabetes. A study on middle-aged British men showed that the prognostic importance of major ECG abnormalities is strongly influenced by the presence of symptomatic CHD (PUBMED:8665372). In men with symptomatic CHD, the resting ECG may help define a high-risk group that could benefit from intervention, but it has little value as a screening tool in asymptomatic individuals. Overall, the evidence suggests that while the Rose Angina Questionnaire and ECG can be useful tools for predicting future CHD, the addition of an ECG to the Rose Questionnaire does not consistently add significant predictive value, particularly in the absence of symptomatic CHD. The predictive power of these tools may also be less than that of established metabolic risk factors like diabetes.
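Several of the abstracts above (e.g., PUBMED:2732767 and PUBMED:14624423) report test performance as sensitivity, specificity, and positive and negative predictive values. All four quantities follow from a single 2x2 table of test result against reference standard; a minimal Python sketch with hypothetical counts (not the studies' data):

    # Diagnostic accuracy from a 2x2 table; counts are hypothetical.
    tp, fp = 30, 10   # test positive: disease present vs. absent
    fn, tn = 20, 40   # test negative: disease present vs. absent

    sensitivity = tp / (tp + fn)  # proportion of diseased correctly detected
    specificity = tn / (tn + fp)  # proportion of non-diseased correctly excluded
    ppv = tp / (tp + fp)          # probability of disease given a positive result
    npv = tn / (tn + fn)          # probability of no disease given a negative result
    print(f"sens={sensitivity:.2f} spec={specificity:.2f} ppv={ppv:.2f} npv={npv:.2f}")

Unlike sensitivity and specificity, the predictive values shift with disease prevalence, which is one reason the same questionnaire can show different PPV and NPV across populations.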
Instruction: Are abdominal x-rays a reliable way to assess for constipation?
Abstracts:
abstract_id: PUBMED:20728159
Are abdominal x-rays a reliable way to assess for constipation?
Purpose: Currently, to our knowledge, no validated reliable tools are available to evaluate constipation in children. Abdominal x-rays are often done in clinical practice to evaluate patients with lower urinary tract symptoms. Although 3 previously published rating tools exist to score constipation based on x-ray, there is little information on their merits. We assessed these 3 tools for reliability among multiple practitioners. Materials And Methods: We retrospectively analyzed abdominal x-rays in a cohort of 80 patients between ages 4 and 12 years. X-rays were independently assessed by each of us using the previously published Barr, Leech and Blethyn scoring tools. Scores were analyzed for reliability using standard statistical methods. Results: The weighted κ scores, indicating reliability, ranged from 0.0491 to 0.4809 for the Barr, 0.1195 to 0.2730 for the Leech and 0.0454 to 0.4514 for the Blethyn method. Guidelines for κ scores are greater than 0.75 - excellent, 0.4 to 0.75 - good and 0 to 0.4 - marginal reproducibility. The ICC, another reliability measure, was 0.02577 for the Barr, 0.3313 for the Leech and 0.201 for the Blethyn method. ICC interpretations are greater than 0.75 - excellent, 0.4 to 0.75 - good and 0 to 0.4 - poor. There was a trend toward good interrater reliability between more experienced urology practitioners with the Barr and Blethyn tools (0.48 and 0.45, respectively) but not between less experienced raters or with the Leech tool. Conclusions: Currently available scoring tools to evaluate constipation by x-ray do not have good reliability among multiple examiners. Further research is needed to develop an alternate tool to increase the reliability of x-ray to assess constipation between multiple raters.
abstract_id: PUBMED:20453675
Lack of utility of abdominal x-rays in the evaluation of children with constipation: comparison of different scoring methods.
Background And Aim: Abdominal x-rays are used diagnostically in the evaluation of children with constipation. However, their clinical utility has not been established. The aim of the study was to assess the accuracy of different methods in identifying children with functional constipation (FC) or nonretentive fecal incontinence (NRFI). Patients And Methods: Retrospective review of abdominal x-rays in which colonic transit (CT), Barr, Leech, and fecal loading (FL) scores were measured by blinded pediatric gastroenterologists and a radiologist. Children were classified a priori as FC or NRFI. Results: One hundred sixty patients (125 FC, 35 NRFI) were studied. There were significant differences (P < 0.05) when comparing those with FC and those with NRFI: CT: 51 +/- 18 vs 40 +/- 21 hours; Barr: 14 +/- 5 vs 11 +/- 4; Leech: 10 +/- 2 vs 8 +/- 2; FL: 2 +/- 0.5 vs 1.7 +/- 0.4. More than 20% of FC had normal Barr and Leech scores, whereas >50% of NRFI had abnormal scores. CT discriminated better between FC and NRFI. There was a significant correlation (P < 0.05) between CT and Barr (0.45), Leech (0.41) and FL scores (0.36), and between Barr and Leech scores (r = 0.94). There was good intraobserver correlation between Barr, Leech, and FL scores but poor interobserver reproducibility. Conclusions: Although significant differences in overall FC and NRFI scores exist, the discriminative value is low for all scores.
There is poor interobserver reproducibility of the Barr, Leech, and FL scores. These findings confirm the limited value of the plain abdominal x-ray in the evaluation of children with constipation.
abstract_id: PUBMED:34779158
Abdominal X-rays in children: indications, procedure and interpretation.
Abdominal X-rays (AXRs) are often used as an imaging modality in children who present with abdominal pain with a suspected serious underlying pathology. Nausea, vomiting and constipation that are unresponsive to treatment may also prompt a request for an AXR in a child. Nurses play an important role in ensuring that requests for AXRs in children are appropriate and that the procedure is performed safely. This is the second of two articles on the use of AXRs for diagnosing abdominal pathologies in childhood. The first article focused on AXRs in neonates. This one focuses on AXRs in children, discussing their indications, procedure and interpretation and describing abdominal conditions in children for which an AXR is likely to support diagnosis.
abstract_id: PUBMED:32300503
Pediatric Abdominal X-rays in the Acute Care Setting - Are We Overdiagnosing Constipation?
Introduction: Constipation represents 3% of all office visits to pediatricians and 10% - 45% of consultations with pediatric gastroenterologists. It has been reliably established that the role of abdominal x-rays (AXR) in the diagnosis of constipation in pediatrics is limited; yet significant overdiagnosis of constipation exists when plain abdominal x-rays are used in the acute setting for abdominal pain or to screen for other disorders. This results in loss of time, resources, exposure to unnecessary radiation, and potentially missing the primary diagnosis. The purpose of this study is to determine the sensitivity and specificity of AXR in diagnosing constipation in the acute setting. Objectives: To determine (1) the sensitivity and specificity of plain AXR in the diagnosis of constipation and (2) the effect of age, race, gender, comorbid conditions, and practice setting on the diagnosis of constipation. Methods: This study was a historical cohort study of children (two to 18 years of age) who were seen at Ascension St. John Children's Hospital between March 2015 - March 2018 and who had a plain AXR performed during an emergency department (ED) visit or inpatient stay. If AXR results contained keywords such as "constipation," "stool load," "fecal retention," and "fecal load," the ambulatory medical record, Athena® (http://www.athenahealth.com), was searched to determine if the child had an ambulatory visit in the ensuing 45 days. Chart review was conducted to assess if the diagnosis of constipation was later confirmed by history and physical examination by a pediatrician or gastroenterologist at that visit. By comparing data from both encounters, the sensitivity and specificity of plain AXR in diagnosing constipation were assessed. All data were analyzed using the Statistical Package for Social Sciences (SPSS), v. 25.0 (IBM SPSS Statistics, Armonk, NY), and a p-value of 0.05 or less was considered to indicate statistical significance. Results: Over the three-year study period, 1,383 AXRs were performed on 1,116 patients. The sensitivity of AXR in the diagnosis of constipation was 73.8%, specificity 26.8%, positive predictive value 46.4%, and negative predictive value 54.3%. Pediatric gastroenterologists were more likely to diagnose constipation (63.2%) compared to pediatricians (41.4%) and pediatric surgeons (33.3%) (p = 0.04).
Conclusions: AXRs are not a reliable means of diagnosing constipation. Overall, we found similar sensitivity and specificity of AXR in diagnosing constipation compared to previous studies. Yet our study gives new insight into the practices around diagnosing constipation in a single-center community hospital pediatric acute setting and the radiology department. This further emphasizes the need to review current practices and impart more education both in the acute care setting and the radiology department.
abstract_id: PUBMED:28831934
Functional abdominal symptoms in children: the role of the abdominal X-ray and measurement of colon transit time.
Functional constipation and functional non-retentive faecal incontinence are common problems in childhood; these functional defaecation disorders are diagnosed according to the Rome IV criteria. Here we describe the role of an abdominal X-ray in diagnosing functional defaecation disorders, and address the limited additional value of a plain abdominal X-ray. In limited cases it may be helpful to determine colonic transit time using an abdominal X-ray to differentiate between functional constipation and functional non-retentive faecal incontinence. The diagnostic value of abdominal X-rays in children with functional abdominal pain will be discussed. The aims of this article are to explore ways of reducing unnecessary use of abdominal X-rays and reducing unnecessary radiation exposure in children with functional defaecation disorders and functional abdominal pain.
abstract_id: PUBMED:35917842
A Quality Improvement Initiative to Reduce Abdominal X-ray Use in Pediatric Patients Presenting with Constipation.
Objective: To decrease the percentage of patients undergoing an abdominal radiograph for evaluation of constipation within 24 hours of their initial gastroenterology visit. Study Design: In January 2015, we implemented a quality improvement, evidence-based guideline (EBG) aimed at standardizing the initial assessment of patients presenting for a new outpatient gastroenterology visit with a primary complaint of constipation. Over the subsequent 5 years, we followed the clinical impact of this guideline initiation with the goal of decreasing unnecessary abdominal radiograph use by 10% within 1 year of EBG launch. Patients older than 6 months and younger than 19 years were included. Results: In total, 6723 patients completed new patient gastroenterology visits for a primary diagnosis of constipation between 2013 and 2019. Of these, 993 (14.8%) patients had abdominal radiographs taken within 24 hours of their initial visit. Over the 7 years of this project, the mean frequency of abdominal radiograph use decreased from 24% to less than 11%. In addition, a 57% decrease in hospital charges related to decreased radiograph use for constipation was found. No increases in subsequent emergency department visits or hospitalization for constipation within 30 days of patients' initial visits were seen. Conclusions: Through local adoption of an EBG, routine use of abdominal radiographs taken during a patient's initial outpatient gastroenterology visit for constipation decreased by more than 50%. This reduction was maintained over a subsequent 5-year period without any detrimental side effects.
abstract_id: PUBMED:23079375
Definition of reliable, objective criteria by abdominal radiography to identify occult constipation in children with lower urinary tract symptoms.
Purpose: Treating constipation in children with voiding dysfunction may improve or resolve urinary symptoms.
A clinical diagnosis of constipation may not identify all patients. Abdominal radiographs (plain x-ray of the kidneys, ureters and bladder) are often used to assess constipation, but no objective definition of constipation based on abdominal radiographs exists. Most abdominal radiograph rating scales use subjective criteria, and our previous series showed that these scales have poor reliability. We identified reliable, objective parameters on abdominal radiograph to predict constipation. Materials And Methods: The abdominal radiographs of 80 children 4 to 12 years old, including 40 with constipation and 40 successfully treated for constipation, were assessed for several measurable parameters. Logistic regression was used to construct a model to predict constipation status based on these abdominal radiograph parameters. Model accuracy was assessed using AUC analysis of ROC curves. Results: The most predictive model included cecal diameter, total length of stool measured, stool length in the rectum, and patient age and gender. As measured by the area under the ROC curve, accuracy was excellent at 0.87. We calculated cutoffs for individual parameters on abdominal radiograph, including total stool length greater than 33.4 cm, cecal diameter greater than 3.7 cm and stool length in the rectum greater than 5.9 cm. Conclusions: We identified accurate, reliable criteria based on objective measurements on abdominal radiograph to differentiate patients with and without constipation. These criteria may be applied to objectively assess constipation status in children with urinary symptoms without a history of constipation. Further study will determine whether these criteria predict the response to treatment.
abstract_id: PUBMED:1420793
Spontaneous hematoma of the rectal muscles as a cause of abdominal pain with palpable mass.
Objective: To carry out a review of hematoma in the abdominal wall: etiology, triggering factors, diagnosis and treatment. To consider the differential diagnosis with other causes of acute abdominal pain with palpable mass. Design: Based on two clinical observations. Site: Iscar Health Centre, Valladolid. Patients: Two patients who presented at Casualty complaining of acute abdominal pain and with a palpable abdominal mass (lump). Main Measurements And Results: The anamnesis and physical examination provided the tentative diagnosis, and the patients were sent to the relevant hospital centre. The analyses, X-rays and abdominal scans were not conclusive, and an abdominal laparotomy was carried out which confirmed the diagnosis. Conclusions: Spontaneous haematoma of the abdominal wall is a cause of acute abdominal pain encountered most frequently in women receiving anticoagulant treatment or in situations that increase abdominal muscular tension (coughing, pregnancy, constipation, previous laparotomies and intraabdominal injections). Confirmation of the initial diagnosis is normally obtained using ultrasound or abdominal CT, but on occasion findings are not conclusive and the patient is submitted to abdominal laparotomy. Treatment is conservative except with haemodynamic complications.
abstract_id: PUBMED:13026819
Treatment of infantile constipation with x-rays.
N/A
abstract_id: PUBMED:31388854
Use of X-ray to Assess Fecal Loading in Patients with Gastrointestinal Symptoms.
Background/Aims: There is currently no gold standard for evaluating stool burden in the colon for patients with gastrointestinal symptoms.
We aim to examine the relationship between fecal loading on single-view abdominal X-ray imaging and gastrointestinal symptoms such as constipation, diarrhea, bloating, and accidental bowel leakage in adult outpatients. Methods: This retrospective, cross-sectional study examined patients seen at the University of Michigan from 2005 to 2017. Chart review of demographic information, reported gastrointestinal symptoms, past medical history, and abdominal radiographic imaging was performed. Bivariate analysis was performed to assess associations between these characteristics and fecal loading. Factors independently associated with fecal loading were identified using logistic regression. Significance was established at p < 0.05. Results: In total, 319 patients who had an X-ray were included in the final analysis, with 84.0% demonstrating fecal loading on the initial X-ray and most reporting constipation as a symptom (n = 214, 84.3%, p = 0.0334). In logistic regression, a chief complaint of constipation had higher odds of being associated with fecal loading on X-ray compared to diarrhea (adjusted OR 6.41; CI 1.51-27.24, p = 0.0118). Bloating as a reported symptom was statistically significant, with an adjusted OR of 2.56 (CI 1.10-5.96, p = 0.0286). Conclusions: Constipation (as a chief complaint) and bloating (as a symptom) were associated with fecal loading on X-ray imaging, while accidental bowel leakage and diarrhea were not.

Answer: Abdominal x-rays are commonly used in clinical practice to evaluate patients with constipation, but their reliability and clinical utility for this purpose have been questioned. Studies have shown that the reliability of abdominal x-ray scoring tools, such as the Barr, Leech, and Blethyn methods, to evaluate constipation is not good among multiple examiners, with κ scores and ICC (intraclass correlation coefficient) values indicating marginal reproducibility and poor reliability, respectively (PUBMED:20728159). Additionally, significant differences in scoring methods have been observed when comparing children with functional constipation (FC) and nonretentive fecal incontinence (NRFI), but the discriminative value of these scores is low, and there is poor interobserver reproducibility (PUBMED:20453675). Furthermore, a study found that the sensitivity of abdominal x-rays (AXR) in diagnosing constipation was only 73.8%, with a specificity of 26.8%, indicating that AXRs are not a reliable means of diagnosing constipation (PUBMED:32300503). Another study identified reliable, objective parameters on abdominal radiographs to predict constipation, such as cecal diameter and stool length measurements, which may be applied to objectively assess constipation status in children with urinary symptoms without a history of constipation (PUBMED:23079375). Despite these findings, abdominal x-rays continue to be used, and a quality improvement initiative showed that it is possible to reduce their use in pediatric patients presenting with constipation without detrimental side effects (PUBMED:35917842). However, the overall consensus from the literature suggests that the use of abdominal x-rays for assessing constipation has limited value, and there is a need for more education and review of current practices to reduce unnecessary radiation exposure and improve diagnostic accuracy (PUBMED:34779158, PUBMED:28831934).
In summary, abdominal x-rays are not a consistently reliable method to assess for constipation, and their use should be carefully considered in the context of other clinical findings and potentially more accurate diagnostic approaches.
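The reliability statistics cited in this answer (weighted κ and ICC) correct raw inter-rater agreement for agreement expected by chance. A minimal Python sketch of unweighted Cohen's kappa for two raters scoring the same films (the labels are made up; the interpretation bands in the comment are those quoted in PUBMED:20728159, and the studies themselves used a weighted variant that additionally credits near-misses on ordinal scales):

    from collections import Counter

    def cohens_kappa(ratings1, ratings2):
        # Unweighted Cohen's kappa for two raters over the same items.
        n = len(ratings1)
        observed = sum(a == b for a, b in zip(ratings1, ratings2)) / n
        c1, c2 = Counter(ratings1), Counter(ratings2)
        expected = sum(c1[k] * c2[k] for k in c1.keys() | c2.keys()) / n**2
        return (observed - expected) / (1 - expected)

    # Hypothetical constipation grades from two raters of the same 8 x-rays.
    rater1 = ["normal", "mild", "severe", "mild", "normal", "severe", "mild", "normal"]
    rater2 = ["normal", "severe", "severe", "normal", "normal", "mild", "mild", "normal"]
    # Bands quoted in the abstract: >0.75 excellent, 0.4-0.75 good, 0-0.4 marginal.
    print(f"kappa = {cohens_kappa(rater1, rater2):.2f}")  # ~0.43 for these toy data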
Instruction: Does difficulty functioning in the surrogate role equate to vulnerability in critical illness research?
Abstracts:
abstract_id: PUBMED:26304514
Does difficulty functioning in the surrogate role equate to vulnerability in critical illness research? Use of path analysis to examine the relationship between difficulty providing substituted judgment and receptivity to critical illness research participation.
Purpose: Individuals who struggle to provide substituted judgment for the critically ill often find it challenging to engage in decision making for therapeutic interventions. Although essential to the conduct of research, how these individuals respond to requests for clinical trial participation is poorly understood. Methods: Survey data collected to examine surrogate attitudes toward research provided the conceptual framework to explore influences on decision making. Path analysis was used to derive the final model (nonlatent, fully recursive, 1 indicator/variable). Results: Surrogates with list-wise complete records (406) were analyzed. The following variables were not retained in the final model: education, income, religiosity, decision-making experience, discussion of the patient's wishes, number of individuals assisting with decision making, trust in care providers, difficulty making decisions, and responsibility for decision making. Being white and having experience making treatment decisions for the patient during the current intensive care unit encounter positively affected the likelihood that the surrogate would permit participation in research (parameter estimates, 0.281 and 0.06, respectively). No variable reflecting difficulty functioning in the surrogate role was associated with permitting research participation. Conclusions: We were unable to demonstrate a relationship between perceived difficulty in decision making in the surrogate role and receptivity to clinical trial participation.
abstract_id: PUBMED:27849142
The Experience of Surrogate Decision Makers on Being Approached for Consent for Patient Participation in Research. A Multicenter Study.
Rationale: Recruitment in critical care research differs from other contexts in important ways: patients lack decision-making capacity, uncertainty exists regarding patient prognosis, and critical illnesses are often associated with appreciable morbidity and mortality. Objectives: We aimed to describe the experiences of surrogate decision makers (SDMs) in being approached for consent for critically ill patients to participate in research. Methods: A multicenter, qualitative study involving semistructured interviews with 26 SDMs, who provided or declined surrogate consent for research participation, at 5 Canadian centers, nested within a multicenter observational study of research recruitment practices. Transcripts were reviewed by three qualitative researchers, and data were analyzed using grounded theory and a narrative critical analysis. Measurements And Main Results: SDMs were guided by an overarching desire for the patient to live. Surrogate research decision-making involved three sequential stages: (1) being approached; (2) reflecting on participation; and (3) making a decision. In stage 1, SDMs identified factors (their expectations, how they were approached, the attributes of the person approaching, and study risks and benefits) that characterized their consent encounter and affirmed a preference to be approached in person.
If SDMs perceived the risk of participation to be too high or felt patients may not benefit from participation, they did not contemplate further. In stage 2, SDMs who knew the patient's wishes or had a deeper understanding of research prioritized the patient's wishes and the perceived benefits of participation. Without this information, SDMs prioritized obtaining more and better care for the patient, considered what was in their mutual best interests, and valued healthcare professionals' knowledge. Trust in healthcare professionals was essential to proceeding further. In stage 3, SDMs considered six factors in rendering decisions. Conclusions: SDMs engaged in three sequential stages and considered six factors in making surrogate decisions for research participation. Surrogates' assessments of the risks and benefits of participation and their trust in healthcare professionals were critical factors in research decision-making. By conceptualizing surrogate decision-making for research in stages, future research can develop and test procedures to enhance the surrogate research decision-making process.
abstract_id: PUBMED:37704184
The Stresses of Surrogate Decision-Making: Contributing Factors and Clinicians' Role in Mitigation.
Background: Surrogate decision-makers (surrogates) are frequently employed in decision-making for critically ill adults. There are insufficient data considering the surrogate experience, stress, and potential for mitigation. Methods: An anonymous online survey queried (1) the medical situation, (2) total stress, (3) demographics, and (4) potential factors, including sources of information about patient wishes, external sources of support or competing stressors, and interactions with the medical team through the experience. Results: 108 respondents were included; 91 completed all items. Most respondents ranked their experience as a surrogate as one of the most stressful experiences of their lives; this was associated with whether it was an end-of-life decision (P = .003), respondent religion (P = .015), religious or spiritual beliefs (P = .024), and having their own health problems (P = .008). On individual Likert responses, surrogates reported significant stress mitigation when they felt they had been helpful (P < .001), knew the patient's wishes (P = .0011), specifically discussed patient wishes (P < .001), or the patient's wishes were documented (P < .001). Items about surrogate-team interaction also met significance, including the physician being communicative and available (P < .001), respectful (P = .007), honest (P < .001), and validating (P = .001). Conclusions: Surrogate stress is an evolving area for research. Significant factors included the relationship with the medical team, making this an important area for hospice and palliative medicine (HPM) clinicians to play a key role in mitigating surrogate stress.
abstract_id: PUBMED:24330417
Advance directives lessen the decisional burden of surrogate decision-making for the chronically critically ill.
Aims And Objectives: To identify the relationships between advance directive status, demographic characteristics and decisional burden (role stress and depressive symptoms) of surrogate decision-makers (SDMs) of patients with chronic critical illness. Background: Although the prevalence of advance directives among Americans has increased, SDMs are ultimately responsible for complex medical decisions of the chronically critically ill patient. Decisional burden has lasting psychological effects on SDMs.
There is insufficient evidence on the influence of advance directives on the decisional burden of surrogate decision-makers of patients with chronic critical illness. Design: The study was a secondary analysis of cross-sectional data. Data were obtained from 489 surrogate decision-makers of chronically critically ill patients at two academic medical centres in Northeast Ohio, United States, between September 2005 and May 2008. Methods: Data were collected using demographic forms and questionnaires. A single-item measure of role stress and the Center for Epidemiological Studies Depression (CES-D) scale were used to capture the SDM's decisional burden. Descriptive statistics, t-tests, chi-square and path analyses were performed. Results: Surrogate decision-makers who were nonwhite, with low socioeconomic status and low education level, were less likely to have advance directive documentation for their chronically critically ill patient. The presence of an advance directive mitigates the decisional burden by directly reducing the SDM's role stress and indirectly lessening the severity of depressive symptoms. Conclusions: Most SDMs of chronically critically ill patients will not have the benefit of knowing the patient's preferences for life-sustaining therapies and will consequently be at risk of increased decisional burden. Relevance To Clinical Practice: Study results are clinically useful for patient education on the influence of advance directives. Patients may be informed that SDMs without advance directives are at risk of increased decisional burden and will require decisional support to facilitate patient-centred decision-making.
abstract_id: PUBMED:19723862
Surrogate consent for genomics research in intensive care.
Surrogate decision makers may be poorly prepared to give informed consent for genomics research for their loved ones in intensive care. A review of the challenges and strategies associated with obtaining surrogates' consent for genomics research in intensive care patients revealed that few well-controlled studies have been done on this topic. Yet a major theme in the literature is the role of health care professionals in guiding surrogates through the informed consent process rather than simply witnessing a signature. Informed consent requires explicit strategies to approach potential surrogates effectively, educate them, and ensure that informed consent has been attained.
abstract_id: PUBMED:25491659
Is there a role for physician involvement in introducing research to surrogate decision makers in the intensive care unit? (The Approach trial: a pilot mixed methods study).
Purpose: To assess the feasibility of conducting a randomized trial comparing two strategies [physician (MD) vs. non-physician (non-MD)] for approaching substitute decision makers (SDMs) for research and to evaluate SDMs' experiences of being approached for consent. Methods: A pilot mixed methods study of first encounters with SDMs. Results: Of 137 SDMs (162 eligibility events), 67 and 70 were randomized to MD and non-MD introductions, respectively. Eighty SDMs (98 events) provided consent and 21 SDMs (24 events) declined consent for studies, including 2 SDMs who both provided and declined consent. We identified few missed introductions [4/52 (7.7%)] and protocol violations [6/117 (5.1%)], high comfort, satisfaction and acceptance scores, and similar consent rates in both arms. SDMs provided consent significantly more often when a patient update was provided in the MD arm.
Most SDMs (85.7%) felt that physician involvement was inconsequential and preferred physician time to be dedicated to patient care; however, SDM experiences were closely related to their recall of being approached, and recall was poor. SDMs highlighted 7 themes of importance to them in research surrogate decision-making. Conclusion: SDMs prioritized the personal attributes of the person approaching them over professional designation and preferred physician time to be dedicated to patient care. A mixed methods design evaluated intervention fidelity and provided the rationale for not proceeding to a larger trial, despite achieving all feasibility metrics in the pilot trial. Trial Registration Number: NCT01232621.
abstract_id: PUBMED:36442321
Family members' perceptions of surrogate decision-making in the intensive care unit: A systematic review.
Background: A better understanding of the perceptions of family members in making surrogate decisions for loved ones during intensive care is needed to inform the development of targeted supportive interventions. Objective: To examine and synthesize qualitative data on family members' perceptions of surrogate decision-making in the intensive care unit. Design: We conducted a systematic review and qualitative data synthesis. Eligible studies contained family members' quotes about surrogate decision-making experiences and perceptions in adult intensive care units, published in English or Chinese, in a peer-reviewed journal up to February 2022. Data sources included Embase, PubMed, ISI Web of Science, PsycINFO, CINAHL, the Biomedical Literature Service System, China National Knowledge Infrastructure (CNKI), Wanfang Data, and VIP Journal. Methods: The searches yielded 5974 identified articles, of which 23 studies were included. At least two reviewers independently assessed study quality and extracted data into a Microsoft Excel spreadsheet. A thematic synthesis was performed by classifying all text units into broad themes, which were subsequently analyzed to inductively develop first-, second-, and third-order themes. Six family members with experience in intensive care unit surrogate decision-making contributed to the analysis. Results: The qualitative data synthesis resulted in five major themes. The following key new insights into family members' perceptions of surrogate decision-making in the intensive care unit were obtained: in individual systems, family members suffered from emotional distress and psychological stress, different cognitive styles emerged, and some family members reshaped a new order of life in the disruption; in family systems, the family as a whole was closely connected; and in medical systems, families perceived asymmetry in relationships with clinicians, many factors influencing trust, the necessity for role-specific mediators, and issues with operations and environments not being sufficiently humanized. Conclusion: This qualitative synthesis showed that individuals' emotions and cognition underwent complex processes during surrogate decision-making. The family as a whole, with disparate functional states, also faced different processes and outcomes under the crisis situation. At a broader level, the decision-making process reflected society's perceptions of the medical system.
Future studies should use these insights to further explore and optimize surrogate decision support measures for families of critically ill patients and include the measurement of outcomes after interventions at the multiple layers of the individual, family, and medical systems. Registration Number: The protocol was prospectively published on the International Prospective Register of Systematic Reviews (PROSPERO), CRD42022316687. Tweetable Abstract: Families of critically ill patients undergo a complex interactional process within the individual, family, and medical systems during surrogate decision-making.
abstract_id: PUBMED:38387647
Barriers and Facilitators of Surrogates Providing Consent for Critically Ill Patients in Clinical Trials: A Qualitative Study.
Background: Enrollment into critical care clinical trials is often hampered by the need to rely on surrogate decision-makers. To identify potential interventions facilitating enrollment into critical care clinical trials, a better understanding of surrogate decision-making for critical care clinical trial enrollment is needed. Research Question: What are the barriers and facilitators of critical care trial enrollment? What are surrogate decision-makers' perspectives on proposed interventions to facilitate trial enrollment? Study Design And Methods: We conducted semistructured interviews with 20 surrogate decision-makers of critically ill patients receiving mechanical ventilation. The interviews were recorded, transcribed verbatim, and analyzed for themes using an inductive approach. Results: Twenty surrogate decision-makers of critically ill patients participated in the semistructured interviews. Thematic analysis confirmed previous research identifying trust in the system, assessment of the risks and benefits of trial participation, the desire to help others, and building medical knowledge as important motivating factors for trial enrollment. Two previously undescribed concerns among surrogate decision-makers of critically ill patients were identified: the potential to interfere with clinical treatment decisions and negative sentiment about placebos. Surrogates viewed public recognition and charitable donations for participation as favorable potential interventions to encourage trial enrollment. However, participants viewed direct financial incentives and prioritizing research participants during medical rounds negatively. Interpretation: This study confirms and extends previous findings that health system trust, study risks and benefits, altruism, knowledge generation, interference with clinical care, and placebos are key concerns and barriers for surrogate decision-makers to enroll patients in critical care trials. Future studies are needed to evaluate whether charitable giving on the patient's behalf and public recognition are effective strategies to promote enrollment into critical care trials.
abstract_id: PUBMED:17352666
Attitudes of relatives of patients in intensive care and emergency departments to surrogate consent to research on incapacitated participants.
Background: When potential research participants are incapable of providing consent, it is common for clinicians and researchers to approach family members, attempting to ascertain that person's wishes. Where legally recognised, surrogate consent may also then be provided by relatives for therapy or research involvement.
This practice is widely accepted as acknowledging and maintaining patient autonomy, yet there are few data on the acceptability of this to the community, or on the accuracy of surrogate decisions. Methods: We conducted a questionnaire-based survey of 283 people in the waiting rooms of the emergency and intensive care departments of a tertiary hospital in September 2006 to evaluate attitudes to critical care research, willingness to participate if incapacitated, and acceptability of surrogate consent in these circumstances. Results: 283 people were approached with the questionnaire, with 185 people fully completing and returning it: 17% strongly indicated agreement to research participation if they were critically ill, with 25% indicating they would refuse. Only 26% of respondents thought it acceptable that a relative provide consent to research participation on their behalf. Demographic factors did not influence responses, but respondents' views that participation in research was beneficial to participants correlated with an increased willingness to participate themselves. Conclusions: From our questionnaire, it appears that willingness to participate in research is lower than we expected. Surrogate decision-making and the provision of surrogate consent to research were acceptable to only 26% of respondents.
abstract_id: PUBMED:22732283
Surrogate and patient discrepancy regarding consent for critical care research.
Objective: Critically ill patients frequently display impaired decision-making capacity due to their underlying illness and the use of sedating medications. Healthcare providers often rely on surrogates to make decisions for medical care and participation in clinical research. However, the accuracy of surrogate decisions for a variety of critical care research studies is poorly understood. Design: Cross-sectional observational study. Setting: Academic medical center. Patients: Medical intensive care unit patients and their designated surrogates. Intervention: Patients were asked whether they would consent to participate in hypothetical research studies of increasing complexity, and surrogates independently indicated whether they would consent to enroll the patient in the same scenarios. Results: Overall, 69 medical intensive care unit patients were enrolled into the study. The majority of surrogates were either the spouse (58%) or parent (22%) of the patient. The percentage of patients that would agree to participate in a research study and the percentage of surrogates that would agree to have the patient enrolled into a research study both declined as the risk of the study increased (p < .001 for both analyses). In addition, the overall discrepancy, the false-negative rates, and the false-positive rates between patients and surrogates grew as the risk of the study increased (p < .001, p < .001, and p = .049, respectively). κ values for all seven scenarios demonstrated less-than-moderate agreement (range 0.03-0.41). Conclusions: There are significant discrepancies in the willingness to participate in various types of clinical research proposals between critically ill patients and their surrogate decision makers. The results of this study raise concerns about the use of surrogate consent for inclusion of critically ill patients into research protocols.

Answer: The relationship between difficulty functioning in the surrogate role and vulnerability in critical illness research is not straightforward.
According to the study by PUBMED:26304514, no variable reflecting difficulty functioning in the surrogate role was associated with permitting research participation. This suggests that difficulty in the surrogate role does not necessarily equate to vulnerability in critical illness research in terms of receptivity to clinical trial participation. However, the experience of surrogate decision-makers (SDMs) is complex and multifaceted. The study by PUBMED:27849142 found that SDMs' decision-making involved three sequential stages and that trust in healthcare professionals was essential to proceeding further in the decision-making process. This indicates that while difficulty in the surrogate role may not directly affect receptivity to research participation, the overall experience and trust in healthcare professionals play a critical role. Furthermore, the study by PUBMED:37704184 highlighted that the surrogate experience is often one of the most stressful experiences of their lives, and factors such as knowing the patient's wishes and interactions with the medical team can mitigate this stress. This implies that while difficulty in the surrogate role may not equate to vulnerability in terms of research participation, it does contribute to the overall stress and burden experienced by surrogates. The study by PUBMED:24330417 found that the presence of an advance directive can mitigate the decisional burden by reducing role stress and depressive symptoms in SDMs. This suggests that while difficulty in the surrogate role may not directly relate to vulnerability in research participation, it can impact the psychological well-being of SDMs. In summary, while difficulty functioning in the surrogate role does not appear to directly equate to vulnerability in terms of receptivity to critical illness research participation, it is a factor in the overall stress and decisional burden experienced by surrogates. Trust in healthcare professionals and the presence of advance directives can influence the surrogate experience and potentially mitigate some of the challenges faced in the decision-making process.
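PUBMED:22732283 quantifies the patient-surrogate gap with an overall discrepancy rate plus false-negative and false-positive rates (a surrogate declining when the patient would consent, and the reverse). A minimal Python sketch of those three rates over paired responses (the data below are hypothetical, not the study's):

    # Paired consent decisions for one scenario; True = would consent. Hypothetical data.
    patient   = [True, True, False, True, False, True, True, False]
    surrogate = [True, False, False, True, True, True, False, False]

    pairs = list(zip(patient, surrogate))
    discrepancy = sum(p != s for p, s in pairs) / len(pairs)
    # False negative: surrogate declines although the patient would consent.
    fn_rate = sum(p and not s for p, s in pairs) / sum(patient)
    # False positive: surrogate consents although the patient would decline.
    fp_rate = sum(s and not p for p, s in pairs) / (len(patient) - sum(patient))
    print(f"discrepancy={discrepancy:.2f} FN={fn_rate:.2f} FP={fp_rate:.2f}")

In the study these rates grew with study risk, and κ stayed below 0.41 in all seven scenarios, i.e., less-than-moderate agreement.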
Instruction: Skeletal muscle proteins oxidation in chronic right heart failure in rats: can different beta-blockers prevent it to the same degree? Abstracts: abstract_id: PUBMED:19289250 Skeletal muscle proteins oxidation in chronic right heart failure in rats: can different beta-blockers prevent it to the same degree? Background: Skeletal muscle atrophy and decreased expression of slow fibers contribute to exercise capacity limitation in Chronic Heart Failure (CHF). Pro-inflammatory cytokines and free radicals worsen muscle damage. In CHF, sarcomeric proteins are oxidized, with reduction of muscle twitch efficiency and VO2-max. Beta-blockers with anti-oxidative capacity such as carvedilol have been shown to prevent contractile protein oxidation in CHF rats. Recently a new class of beta-blockers with NO donor activity has been introduced and approved for the treatment of CHF. Since clear clinical superiority of one beta-blocker has never been shown, we compared nebivolol, which possesses NO donor activity, with bisoprolol, looking at possible differences in skeletal muscle that may have an impact on muscle function and exercise capacity in humans. We therefore studied skeletal muscle apoptosis and wastage, sarcomeric protein composition and oxidation, and muscle efficiency. Methods And Results: In the monocrotaline rat model of CHF we compared nebivolol, a beta-blocker with vasodilative properties mediated by NO production, with bisoprolol. Nebivolol prevented protein oxidation, while bisoprolol did so only partially, as demonstrated by the oxyblot analysis (Oxy/RP values: 0.90 ± 0.14 controls; 1.7 ± 0.14 CHF; 1.1 ± 0.05 bisoprolol; 0.82 ± 0.17 nebivolol low; 0.62 ± 0.10 nebivolol high). Only nebivolol improved twitch force production and relaxation. Nebivolol prevented the fiber shift towards fast isoforms and atrophy, and decreased apoptosis and sphingosine levels. Conclusions: Nebivolol seems better than bisoprolol in CHF, decreasing apoptosis and cytokine-induced muscle wastage and preventing fiber shift and protein oxidation. Nebivolol, by stimulating NO generation, may have prevented protein oxidation. It could be speculated that ROS release, pro-inflammatory cytokine production and NF-kappa-B activation play a key role. These positive changes could produce a favorable impact on exercise capacity in man. abstract_id: PUBMED:11412142 Apoptosis and changes in contractile protein pattern in the skeletal muscle in heart failure. Chronic heart failure is characterized as a clinical disorder by exercise intolerance. Two factors are independently responsible for the reduced exercise capacity: (a) a shift from myosin heavy chain 1 (MHC1) to MHC2a and MHC2b, and (b) muscle atrophy. We have demonstrated, both in experimental models of heart failure and in man, that the more severe the heart failure, the greater the magnitude of skeletal muscle apoptosis. In the monocrotaline-treated rat, which develops severe right-sided heart failure, the increased number of apoptotic nuclei was paralleled by increasing levels of circulating TNFalpha. In agreement with recent observations showing that sphingolipids can mediate programmed cell death, we found that in animals with heart failure and a high number of apoptotic nuclei, circulating levels of sphingosine were significantly increased. In a study conducted in patients with heart failure we found a correlation between exercise capacity limitation and skeletal myocyte apoptosis.
There was also a correlation between the degree of muscle atrophy and the magnitude of apoptosis. The shift in MHCs, although through a different mechanism, is also responsible for the reduced exercise capacity in these patients. In fact, there is a strong correlation between indices of severity of CHF and MHC composition. Muscle fatigue appears earlier in patients who have a greater skeletal muscle expression of 'fast' MHCs. We have also demonstrated that the MHC shift and apoptosis can be prevented by using angiotensin-converting enzyme inhibitors and angiotensin II receptor blockers. abstract_id: PUBMED:9216979 Decreased [3H]ouabain binding sites in skeletal muscle of rats with chronic heart failure. Abnormalities intrinsic to skeletal muscle are thought to contribute to decrements in exercise capacity found in individuals with chronic heart failure (CHF). Na+-K+-adenosinetriphosphatase (the Na+ pump) is essential for maintaining muscle excitability and contractility. Therefore, we investigated the possibility that the number and affinity of Na+ pumps in locomotor muscles of rats with CHF are decreased. Myocardial infarction (MI) was induced in 8 rats, and a sham operation was performed in 12 rats. The degree of CHF was assessed approximately 180 days after surgery. Soleus and plantaris muscles were harvested, and Na+ pumps were quantified by using a [3H]ouabain binding assay. At the time of muscle harvest, MI and sham-operated rats were similar in age (458 ± 54 vs. 447 ± 34 days old, respectively). Compared with their sham-operated counterparts, MI rats had a significant degree of heart failure: the right ventricular-to-body weight ratio was greater (48%), and an elevated lung-to-body weight ratio (29%) suggested the presence of pulmonary congestion. Left ventricular end-diastolic pressure was significantly increased in the MI rats (11 ± 1 mmHg) compared with the sham-operated controls (1 ± 1 mmHg). In addition, mean arterial blood pressure was lower in the MI rats compared with their control counterparts. [3H]ouabain binding sites were reduced 18% in soleus muscle (136 ± 12 vs. 175 ± 13 pmol/g wet wt, MI vs. sham, respectively) and 22% in plantaris muscle (119 ± 12 vs. 147 ± 8 pmol/g wet wt, MI vs. sham, respectively). The affinity of these [3H]ouabain binding sites was similar for the two groups. The relationship between the reduction in Na+ pump number and the reduced exercise capacity in individuals with CHF remains to be determined. abstract_id: PUBMED:37118350 Skeletal and cardiac muscle have different protein turnover responses in a model of right heart failure. Right heart failure (RHF) is a common and deadly disease in aged populations. Extra-cardiac outcomes of RHF such as skeletal muscle atrophy contribute to morbidity and mortality. Despite the significance of maintaining right ventricular (RV) and muscle function, the mechanisms of RHF and muscle atrophy are unclear. Metformin (MET) improves cardiac and muscle function through the regulation of metabolism and the cellular stress response. However, whether MET is a viable therapeutic for RHF and muscle atrophy is not yet known. We used deuterium oxide labeling to measure individual protein turnover in the RV as well as subcellular skeletal muscle proteostasis in aged male mice subjected to 4 weeks of hypobaric hypoxia (HH)-induced RHF. Mice exposed to HH had elevated RV mass and impaired RV systolic function, neither of which was prevented by MET.
HH resulted in a higher content of glycolytic, cardiac, and antioxidant proteins in the RV, increases that were largely blunted by MET. The synthesis of these key RV proteins was generally unchanged by MET, suggesting MET accelerated protein breakdown. HH resulted in a loss of skeletal muscle mass due to inhibited protein synthesis alongside myofibrillar protein breakdown. MET did not impact HH-induced muscle protein turnover and did not prevent muscle wasting. Together, we show tissue-dependent responses to HH-induced RHF, where the RV undergoes hypertrophic remodeling with higher expression of metabolic and stress response proteins. Skeletal muscle undergoes loss of protein mass and atrophy, primarily due to myofibrillar protein breakdown. MET did not prevent HH-induced RV dysfunction or muscle wasting, suggesting that the identification of other therapies to attenuate RHF and concomitant muscle atrophy is warranted. abstract_id: PUBMED:32985798 Differential effects of right and left heart failure on skeletal muscle in rats. Background: Exercise intolerance is a cardinal symptom in right (RV) and left ventricular (LV) failure. The underlying skeletal muscle contributes to increased morbidity in patients. Here, we compared skeletal muscle sarcopenia in a novel two-stage model of RV failure to an established model of LV failure. Methods: Pulmonary artery banding (PAB) or aortic banding (AOB) was performed in weanling rats, inducing a transition from compensated cardiac hypertrophy (after 7 weeks) to heart failure (after 22-26 weeks). Cardiac function was characterized by echocardiography. Skeletal muscle catabolic/anabolic balance and energy metabolism were analysed by histological and biochemical methods, real-time PCR, and western blot. Results: Two clearly distinguishable stages of left or right heart disease with a comparable severity were reached. However, skeletal muscle impairment was significantly more pronounced in LV failure. While the compensatory stage resulted only in minor changes, soleus and gastrocnemius muscle of AOB rats at the decompensated stage demonstrated reduced weight and fibre diameter, higher proteasome activity and expression of the muscle-specific ubiquitin E3 ligases muscle-specific RING finger 1 and atrogin-1, increased expression of the atrophy marker myostatin, increased autophagy activation, and impaired mitochondrial function and respiratory chain gene expression. Soleus and gastrocnemius muscle of PAB rats did not show significant changes in muscle weight and proteasome or autophagy activation, but mitochondrial function was mildly impaired as well. The diaphragm did not demonstrate differences in any model or disease stage except for myostatin expression, which was altered at the decompensated stage in both models. Plasma interleukin (IL)-6 and angiotensin II were strongly increased at the decompensated stage (AOB >> PAB). The soleus and gastrocnemius muscles themselves demonstrated an increase in IL-6 expression, independent of blood-derived cytokines, only in AOB animals. In vitro experiments in rat skeletal muscle cells suggested a direct impact of IL-6 and angiotensin II on distinctive atrophic changes. Conclusions: Manifold skeletal muscle alterations are more pronounced in LV failure compared with RV failure despite a similar ventricular impairment. Most of the catabolic changes were observed in soleus or gastrocnemius muscle rather than in the constantly active diaphragm.
Mitochondrial dysfunction and up-regulation of myostatin were identified as the earliest signs of skeletal muscle impairment. abstract_id: PUBMED:26902451 Optimal Use of Beta-Blockers for Congestive Heart Failure. Beta-blockers are the cornerstone treatment for congestive heart failure (HF). Current HF guidelines commonly recommend β-blockers for the treatment of HF with reduced left ventricular ejection fraction (LVEF). The effect of β-blockers, however, is less clear for HF patients with preserved LVEF, unstable severe acute HF, or right ventricular failure. This review summarizes the effect of β-blockers in various clinical situations and suggests a strategy for optimal use. (Circ J 2016; 80: 565-571). abstract_id: PUBMED:23453873 The contribution of stem cell therapy to skeletal muscle remodeling in heart failure. Background: The aim of our study was to investigate whether stem cell (SC) therapy with human amniotic fluid stem cells (hAFS, fetal stem cells) and rat adipose tissue stromal vascular fraction cells-GFP positive cells (rSVC-GFP) was able to produce favorable effects on skeletal muscle (SM) remodeling in a well-established rat model of right heart failure (RHF). Methods: RHF was induced by monocrotaline (MCT) in Sprague-Dawley rats. Three weeks later, four million hAFS or rSVC-GFP cells were injected via the tail vein. SM remodeling was assessed by soleus muscle fiber cross-sectional area (CSA), myocyte apoptosis, myosin heavy chain (MHC) composition, satellite cell pattern, and SC immunohistochemistry. Results: hAFS and rSVC-GFP injection produced significant SC homing in the soleus (0.68 ± 1.0 and 0.67 ± 0.75%, respectively), with a 50% differentiation toward smooth muscle and endothelial cells. Pro-inflammatory cytokines were down-regulated to levels similar to those of controls. SC-treated (SCT) rats showed increased CSA (p<0.004 vs MCT), similar to controls, with a reshift toward the slow MHC1 isoform. Apoptosis was significantly decreased (11.12 ± 8.8 cells/mm³ hAFS and 13.1 ± 7.6 rSVC-GFP; p<0.001 vs MCT) and similar to controls (5.38 ± 3.0 cells/mm³). RHF rats showed a dramatic reduction of satellite cells (MCT 0.2 ± 0.06% Pax7 native vs controls 2.60 ± 2.46%, p<0.001), while SCT induced a repopulation of both native and SC-derived satellite cells (p<0.005). Conclusions: SC treatment led to SM remodeling with satellite cell repopulation, decreased atrophy and apoptosis. Modulation of the cytokine milieu might play a crucial pathophysiological role, with a possible scenario for autologous transplantation of SC in patients with CHF myopathy. abstract_id: PUBMED:16186060 Skeletal muscle fibre synthesis in heart failure: role of PGC-1alpha, calcineurin and GH. Background: Patients with congestive heart failure (CHF) have decreased exercise capacity because of muscle fatigability. Symptoms are due to a specific myopathy with increased expression of fast type II fibres, fast MHCs and muscle atrophy. PGC-1alpha, a potent transcriptional coactivator for nuclear receptors, induces mitochondrial biogenesis and the preferential synthesis of slow fibres. IGF1-calcineurin stimulation can lead to increased expression of PGC-1alpha. Methods: We investigated the levels of PGC-1alpha during progression and regression of skeletal myopathy in the soleus muscle of rats with right heart failure secondary to monocrotaline-induced pulmonary hypertension. We used GH to stimulate the IGF1-calcineurin-PGC-1alpha axis.
Results: The slow MHC1 decreased from 90.6 ± 0.5 to 71.7 ± 2.2 in the CHF rats (p<0.00001) and increased to 82.1 ± 1.8 after GH (p<0.00002). Western blot analysis showed that PGC-1alpha is significantly decreased in CHF, while it returned to control values after GH. Cytochrome c was decreased in CHF and returned to control values with GH. Troponin I was expressed solely as the slow isoform in the control soleus, while the fast isoform appeared in CHF. Its expression returned to control values after GH. Conclusions: We conclude that PGC-1alpha plays an important role in regulating slow fibre expression. PGC-1alpha is in turn regulated by the IGF1-calcineurin axis. GH, by increasing the circulating levels of IGF1, enhanced the expression of slow MHC1 and TnI and the synthesis of mitochondria. abstract_id: PUBMED:30833208 Mortality associated with cardiovascular drugs in patients with chronic obstructive pulmonary disease and right-sided heart failure - A Danish nationwide registry-based study. Background: The optimal medical treatment in patients with chronic obstructive pulmonary disease (COPD) and right-sided heart failure (RHF) is unknown. We aimed to estimate the risks of all-cause mortality associated with the current clinical use of various cardiovascular drugs in this patient group. Methods: We followed all patients with registered COPD and RHF (defined as a diagnosis of pulmonary hypertension plus use of loop diuretics) for the risk of all-cause mortality (Jan 1, 1995 to Dec 31, 2015) using the Danish nationwide administrative registries. The association between mortality and claimed prescriptions for cardiovascular drugs was assessed by multivariable Cox regression models. Results: 5991 patients (mean age 74 ± standard deviation 10 years, 51% women) were included. Of these, 1440 (24%) used beta-blockers, 2149 (36%) renin-angiotensin system inhibitors [RASi], 1340 (22%) oral anticoagulants, 1376 (23%) calcium channel blockers, 1194 (20%) statins, 1824 (30%) spironolactone, and 2099 (35%) low-dose aspirin. During an average follow-up of 2.2 years (± standard deviation 2.8, min-max 0-19.6 years), 5071 (85%) died, corresponding to a mortality rate of 38 per 100 person-years (95% confidence interval 37-39). Compared to no use, beta-blockers were associated with an adjusted hazard ratio of 0.90 (95% confidence interval 0.84-0.98), RASi 0.92 (0.86-0.98), calcium channel blockers 0.86 (0.80-0.92), spironolactone 1.17 (1.10-1.24), statins 0.85 (0.78-0.92), oral anticoagulants 0.87 (0.79-0.95), and aspirin 0.99 (0.93-1.05). Propensity-score matched analyses and inverse-probability-weighted models yielded similar results. Conclusion: Several cardiovascular drugs may be associated with lowered mortality in COPD and RHF. Given the grave prognosis, randomized clinical trials are warranted to test this hypothesis.
The vascular smooth muscle cells that proliferate during pulmonary arterial hypertension are characterized by mitochondrial hyperpolarization, activation of the transcription factor NFAT (nuclear factor of activated T cells), and down-regulation of the voltage-gated potassium channel Kv1.5, all of which suppress apoptosis. We found that mice lacking the gene for the metabolic enzyme malonyl-coenzyme A (CoA) decarboxylase (MCD) do not show pulmonary vasoconstriction during exposure to acute hypoxia and do not develop pulmonary arterial hypertension during chronic hypoxia but have an otherwise normal phenotype. The lack of MCD results in an inhibition of fatty acid oxidation, which in turn promotes glucose oxidation and prevents the shift in metabolism toward glycolysis in the vascular media, which drives the development of pulmonary arterial hypertension in wild-type mice. Clinically used metabolic modulators that mimic the lack of MCD and its metabolic effects normalize the mitochondrial-NFAT-Kv1.5 defects and the resistance to apoptosis in the proliferated smooth muscle cells, reversing the pulmonary hypertension induced by hypoxia or monocrotaline in mice and rats, respectively. This study of fatty acid oxidation and MCD identifies a critical role for metabolism in both the normal pulmonary circulation (hypoxic pulmonary vasoconstriction) and pulmonary hypertension, pointing to several potential therapeutic targets for the treatment of this deadly disease. Answer: In a study comparing the effects of different beta-blockers on skeletal muscle protein oxidation in chronic right heart failure (CHF) in rats, it was found that nebivolol, a beta-blocker with vasodilative properties mediated by NO production, prevented protein oxidation, while bisoprolol did so only partially. This was demonstrated by oxyblot analysis, which showed lower Oxy/RP values for nebivolol-treated rats compared to bisoprolol-treated and CHF rats (PUBMED:19289250). Nebivolol also improved twitch force production and relaxation, prevented fibers shift towards fast isoforms, atrophy, decreased apoptosis, and sphingosine levels. These findings suggest that nebivolol may be more effective than bisoprolol in preventing skeletal muscle protein oxidation in CHF, potentially due to its NO donor activity which may have prevented protein oxidation (PUBMED:19289250). Therefore, based on the available evidence from this study, different beta-blockers do not prevent skeletal muscle protein oxidation in chronic right heart failure to the same degree, with nebivolol showing a superior protective effect compared to bisoprolol.
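The oxyblot comparison at the heart of this answer reduces to simple ratio arithmetic. The snippet below merely re-expresses the mean Oxy/RP values reported in PUBMED:19289250 as percentages of the control value; it adds no new data, only a different presentation of the published group means.

    # Mean Oxy/RP densitometry ratios as reported in PUBMED:19289250.
    oxy_rp = {
        "controls": 0.90,
        "CHF, untreated": 1.70,
        "CHF + bisoprolol": 1.10,
        "CHF + nebivolol (low dose)": 0.82,
        "CHF + nebivolol (high dose)": 0.62,
    }
    control = oxy_rp["controls"]
    for group, ratio in oxy_rp.items():
        # Values above 100% mean more carbonylated (oxidized) protein than controls.
        print(f"{group:28s} {ratio:.2f}  ({ratio / control:.0%} of control)")

On this scale, untreated CHF muscle carries roughly 190% of the control oxidation signal, bisoprolol brings it down to about 120%, and both nebivolol doses fall at or below the control level, which is the quantitative core of the superiority claim.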
Instruction: Mass-forming cholangiocarcinoma and adenocarcinoma of unknown primary: can they be distinguished on liver MRI? Abstracts: abstract_id: PUBMED:24929668 Mass-forming cholangiocarcinoma and adenocarcinoma of unknown primary: can they be distinguished on liver MRI? Purpose: To determine MR features suggestive of mass-forming cholangiocarcinoma (CCA) or liver metastases of adenocarcinoma of unknown primary (AUP), and to compare the ability of two experienced radiologists to establish the correct diagnosis. Materials And Methods: 61 patients with CCA or AUP who had MRIs were placed into two groups: population 1, 28 patients with a certain diagnosis of either CCA or AUP; and population 2, 33 patients with an uncertain diagnosis. Using population 1 with known diagnosis, two investigators formulated imaging criteria for CCA or AUP, which represented phase 1 of the study. In phase 2, two independent radiologists categorized the patients in populations 1 and 2 as CCA or AUP using the formulated criteria. This categorization was compared with the patient medical records and pathologist review. Findings were tested for statistical significance. Results: In phase 1, solitary lesion, multifocal lesions with a dominant lesion, capsule retraction, and porta hepatis lymphadenopathy were features of CCA; multifocal lesions of similar size and ring enhancement were features of AUP. The number of lesions, capsule retraction, and early tumor enhancement pattern were observed to be significant features (P < 0.05). In phase 2, agreement between the two radiologists was good (k = 0.663). For population 1, the agreement was good (k = 0.659), and it was fair for population 2 (k = 0.293). Concordance between the two radiologists, the medical record, and the pathologist was found in 41/61 (67%) patients. Conclusion: Distinctive features of CCA and AUP are identifiable on MRI images, which may aid the radiologist in establishing the correct diagnosis. abstract_id: PUBMED:25962708 Intrahepatic mass-forming cholangiocarcinoma: enhancement pattern on Gd-BOPTA-MRI with emphasis on the hepatobiliary phase. Purpose: To describe the Gd-BOPTA MRI findings of intrahepatic mass-forming type cholangiocarcinomas (IMCs), with emphasis on the hepatobiliary phase (HBP). Methods: We retrospectively reviewed 29 IMC patients who underwent Gd-BOPTA-MRI between June 2004 and June 2014. Images were acquired prior to, and after, administration of 15-20 mL of Gd-BOPTA in the dynamic phase (arterial phase, portal venous phase, and 3-5 min phase), the 10-15-min late phase, and the 2-3 h HBP. Results: In the dynamic phase, 27 (93%) lesions showed a peripheral rim-like enhancement in the arterial and portal venous phases, followed by progressive filling-in on the delayed images. In 14 (56%) cases, a hypointense peripheral rim was identified in the 10-15-min late phase, delineating a target pattern. In the HBP, the cholangiocarcinoma showed a diffuse, mainly central and inhomogeneous enhancement (cloud of enhancement) in 28 (96%) patients; in 23 (79%) cases, there was an association between cloud appearance and a hypointense peripheral rim, showing a target pattern. Conclusions: The Gd-BOPTA MRI pattern of IMC on dynamic study is similar to that of conventional extracellular agents, that is, peripheral enhancement with progressive and concentric filling of contrast material on delayed phases. At 10-15 min delayed phases, IMC often shows a peripheral hypointense rim consistent with a target appearance.
In the HBP, due to progressive central enhancement (cloud) and a peripheral hypointense rim, a higher number of tumors show a target appearance; this pattern is not specific and would also be expected to be seen in metastases from adenocarcinoma. abstract_id: PUBMED:24163655 Unusual images of mass-forming intrahepatic cholangiocarcinoma. We experienced a case of mass-forming intrahepatic cholangiocarcinoma which could not be diagnosed accurately without pathologic findings. A 78-year-old Japanese woman with no particular symptoms was admitted for changes in liver function tests. Ultrasonography revealed a solid liver tumor. When there are no typical imaging features, no pathognomonic clinical findings and no obvious risk factors for any specific hepatic tumor, it may be difficult to make an accurate diagnosis before surgical resection. The lesion was resected on the basis of a high degree of suspicion for malignancy and submitted for pathologic evaluation. Microscopically, the neoplasm was a moderately differentiated adenocarcinoma with abundant fibrous stroma, consistent with a mass-forming cholangiocarcinoma. This case exemplifies the importance of considering the various tumorous and non-tumorous diseases in the differential diagnosis of a liver mass with atypical features, especially when malignancy cannot be excluded. abstract_id: PUBMED:24964684 The CT appearances for differentiating peripheral, mass-forming cholangiocarcinoma and liver metastases from colorectal adenocarcinoma. Objective: To evaluate the computed tomographic (CT) appearances for differentiating primary hepatic adenocarcinoma (peripheral, mass-forming cholangiocarcinoma) from secondary hepatic adenocarcinoma (liver metastases from colorectal carcinoma). Material And Method: Between January 2004 and December 2010, 45 patients with peripheral, mass-forming cholangiocarcinoma (Group 1) and 45 patients with liver metastases from colorectal adenocarcinoma (Group 2) who underwent abdominal CT scan at the authors' institution were included in the present retrospective study. Two experienced abdominal radiologists, blinded to the participants' clinical histories and pathological results, separately reviewed the CT findings of each participant (number of liver mass(es), size, margin, internal calcification, hepatic capsule retraction, vascular invasion, peripheral bile duct dilatation, proximal bile duct enhancement, extrahepatic spreading, nearby lymphadenopathy and nearby organ invasion) and gave the presumed diagnosis of each individual case. Any discrepancies were resolved by a consensus review. Finally, the authors conducted a stratified analysis of the patients in both groups based on their CT appearances. Results: Of the ninety participants, 35 (38.9%) were female and 55 (61.1%) male, with ages ranging from 43 to 88 years (mean 63.4 years, SD = 10.7). Females comprised 28.9% vs. 48.9%, with mean ages (SD) of 61.5 (9.4) vs. 65.4 (11.6) years, in Groups 1 and 2, respectively. The mean sizes (SD) were 7.4 (3.7) cm vs. 4.0 (2.1) cm in Groups 1 and 2, respectively (p < 0.001). The presence of hepatic capsule retraction, vascular invasion, peripheral bile duct dilatation, proximal bile duct enhancement, extrahepatic spreading, nearby lymphadenopathy, and nearby organ invasion was significantly higher in Group 1 than Group 2 (p < 0.001). In contrast, the presence of multiple lesions in separate locations and a smooth margin was significantly suggestive of Group 2 (p < 0.001 and p = 0.007, respectively).
By logistic regression analysis, peripheral bile duct dilatation, extrahepatic spreading, and proximal bile duct enhancement were the sole predictors of peripheral, mass-forming cholangiocarcinoma. The interobserver agreement for the presumed diagnosis of liver mass was good (kappa = 0.76). Conclusion: The presence of peripheral bile duct dilatation, extrahepatic spreading, and proximal bile duct enhancement was highly suggestive of peripheral, mass-forming cholangiocarcinoma. abstract_id: PUBMED:38028185 Intrahepatic cholangiocarcinoma initially diagnosed as adenocarcinoma of unknown primary with hepatoduodenal ligament lymph node metastases: A case report. Intrahepatic cholangiocarcinoma (iCCA) with regional lymph node metastases, which lacks a well-delineated liver mass, may be misdiagnosed as a carcinoma of unknown primary (CUP) origin. The present study reports the case of a 69-year-old man initially diagnosed with CUP, who was incidentally found to have abdominal lymphadenopathy during ultrasonography (US). The clinical course from the time of lymphadenectomy and CUP diagnosis to iCCA detection after long-term follow-up is reported. A patient with a history of hypertensive renal disease presented with an incidental finding of enlarged abdominal lymph nodes in the perihepatic region on US. Abdominal contrast-enhanced computed tomography (CT) scan and magnetic resonance imaging (MRI) revealed two enlarged lymph nodes in the hepatoduodenal ligament. Exploratory laparotomy and lymphadenectomy were performed for diagnostic and therapeutic purposes, respectively. Poorly differentiated metastatic adenocarcinoma positive for cytokeratin 7 and negative for cytokeratin 20 was identified in two of the 22 lymph nodes. Postoperatively, a positron emission tomography/CT (PET/CT) scan was performed, which failed to locate the primary site. The diagnosis of CUP was confirmed based on clinical, radiological and histopathological characteristics. A sequential abdominal CT scan 48 months after lymphadenectomy revealed a faintly enhancing, intraductal polypoid mass with localized ductal dilatation in liver segment 3. MRI and PET/CT confirmed a mass in the left lobe of the liver. US-guided percutaneous needle biopsy confirmed the presence of moderately differentiated adenocarcinoma. The patient refused surgical treatment because of general weakness caused by Coronavirus disease 2019 infection. The patient received radical radiotherapy and underwent left hepatectomy after recovery of their performance status. Histopathological examination of the surgical specimen demonstrated prevailing fibrosis and mucin accumulation, with scattered cancer cells observed focally in the resected liver specimen owing to the effect of the radiotherapy. Consequently, a definitive diagnosis of primary adenocarcinoma of the intrahepatic bile duct was confirmed. The present report may improve understanding of the pathophysiology and clinical progression of iCCA, with a specific focus on the intraductal growth subtype. abstract_id: PUBMED:34276849 Klatskin tumor presenting as unresectable perihilar hepatic mass: A case report. Cholangiocarcinoma is a rare liver tumor with three types: intrahepatic, extrahepatic, and perihilar, the last of which alone accounts for about 50% of cases. Diagnosis is typically late, with a poor prognosis. Imaging with biliary MRI and CT plays an essential role in the classification and staging of tumors for therapeutic management.
We report the case of a 62-year-old woman admitted for cholestasis syndrome and weight loss, in whom CT and MRI revealed a hepatic hilar mass. The diagnosis of Klatskin tumor was made, with a histological finding of adenocarcinoma. The patient underwent biliary drainage and palliative treatment, as the tumor was unresectable. abstract_id: PUBMED:22829097 Small intrahepatic mass-forming cholangiocarcinoma: target sign on diffusion-weighted imaging for differentiation from hepatocellular carcinoma. Purpose: To determine the differential MRI features of small mass-forming intrahepatic cholangiocarcinoma (ICC) from hepatocellular carcinoma (HCC). Methods: Sixty-four patients with pathologically proven small ICCs (n = 32) and HCCs (n = 32) (≤ 3.0 cm in diameter) who had undergone preoperative gadoxetic acid-enhanced MRI and DWI were enrolled in this study. Images were analyzed for the shape of the lesions, the presence of biliary dilatation, hyperenhancement (>50% of the tumor volume) or rim enhancement on the arterial phase, capsular enhancement, and the presence of target appearance (a central enhancement with hypointense rim) on the hepatobiliary phase and on DWI (a central hypointense area with a peripheral hyperintense rim). Statistical significance of these findings was determined by the χ² or Fisher's exact test. Multivariate analysis was performed to identify independent imaging findings that allow differentiation of the two diseases. Results: Univariate analysis revealed that the following significant parameters favor ICC over HCC: lobulating shape, rim enhancement on the arterial phase, and target appearance on the hepatobiliary phase and on DWI (P < 0.05). Multivariate logistic regression analysis revealed that only target appearance on DWI was a significant and independent variable predictive of ICC, as 24 ICCs (75.0%) and one HCC (3.1%) showed this feature (P = 0.0003). Conclusion: A target appearance on DWI was the most reliable imaging feature for distinguishing small mass-forming ICC from small HCC. abstract_id: PUBMED:20135355 Clinicopathological prognostic factors after hepatectomy for patients with mass-forming type intrahepatic cholangiocarcinoma: relevance of the lymphatic invasion index. Background: The present study was conducted to clarify the pathological factors in patients who underwent surgery for mass-forming type intrahepatic cholangiocarcinoma (IHC). Methods: From 1982 to July 2004, a total of 60 liver resections for mass-forming type IHC were performed at Kyushu University and its affiliated institutions. Portal venous, lymphatic, hepatic venous, and serosal invasion were examined by univariate and multivariate analyses for their prognostic value. The portal venous (PV) invasion index was defined as follows: PV0, portal venous invasion (-) and intrahepatic metastasis (-); PV1, portal venous invasion (+) or intrahepatic metastasis (+); PV2, portal venous invasion (+) and intrahepatic metastasis (+). The lymphatic invasion (LI) index was defined as follows: LI0, lymphatic duct invasion (-) and lymph node metastasis (-); LI1, intrahepatic lymphatic duct invasion (+) or lymph node metastasis (+); LI2, intrahepatic lymphatic duct invasion (+) and lymph node metastasis (+). Results: In univariate analysis, statistically significant prognostic factors for poor outcome were tumor size (>5 cm), serosal invasion (+), PV1 or PV2, LI1 or LI2, histological grade (moderate and poor), hepatic venous invasion (+) and noncurative resection.
After multivariate analysis, the lymphatic invasion index and histological grade were statistically independent prognostic factors for overall survival and recurrence-free survival. Conclusions: In patients with mass-forming type IHC, lymphatic invasion is the most important invasion pathway, compared with serosal and portal and hepatic venous invasion. Stratification of the lymphatic invasion pathway by lymphatic invasion, including intrahepatic lymphatic duct invasion and lymph node metastasis, is a good predictor for prognosis in patients after hepatectomy for mass-forming type IHC. abstract_id: PUBMED:10084653 Immunohistochemical expression of cytokeratins in intrahepatic cholangiocarcinoma and metastatic adenocarcinoma of the liver. Background And Objectives: This study was designed to identify a difference in immunostaining that might help to distinguish between primary and metastatic liver neoplasms. Methods: We examined immunohistochemical expression of cytokeratins (CKs) 7, 8, 19, and 20 in 12 intrahepatic cholangiocarcinomas (ICCs; 9 of the mass-forming and 3 of the infiltrating type), 25 metastatic colorectal carcinomas (MCCs), and 7 metastatic gastric carcinomas (MGCs) of the liver. Results: CKs 7 and 19 were expressed in all ICCs of infiltrating type, while each was seen in 7/9 (77.8%) of mass-forming type. CK 7-positive/CK 20-negative was seen in 9/12 (75.0%) of ICCs and in none of the 25 MCCs, while CK 7-negative/CK 20-positive was seen in 1/12 (8.3%) of ICCs and 20/25 (80.0%) of MCCs. No differences were observed between MGCs and ICCs. Conclusions: These results suggest that immunohistochemical staining for both CKs 7 and 20 is useful for the differential diagnosis of ICCs and MCCs, whereas phenotypic expression of CKs appears to be different between mass-forming and infiltrating types of ICCs. abstract_id: PUBMED:31100645 Interpretation of core biopsy of liver mass lesions: A comparison study between cytopathologist and gastrointestinal pathologist. Context: Core biopsy (CB) is a main tool for diagnosis of liver mass lesions. When CB is performed with fine needle aspiration (FNA), the CB may be interpreted by a cytopathologist or gastrointestinal pathologist. Objective: This study compares interpretation of liver mass biopsy between cytopathologist and gastrointestinal pathologist in the era of subspecialty practice. Design: 349 liver mass lesions with FNA and CB performed during a 5-year period were retrieved. All cases were initially interpreted by a cytopathologist and retrospectively reviewed by a gastrointestinal pathologist. Results: The overall agreement was 95.1% (332/349 cases). There was agreement on 57/65 non-neoplastic cases (87.7%) with 8 (12.3%) discordant cases including 4 steatosis (steatohepatitis missed in 3 cases, 1 re-interpreted as focal nodular hyperplasia [FNH]); 3 inflammation (1 necrotizing granulomatous inflammation, 1 massive necrosis instead of fibrosing cholestatic hepatitis, and 1 hepatocellular carcinoma [HCC] was missed); and 1 initially deemed normal re-interpreted as FNH. 
There was agreement on 275/284 neoplastic cases (96.8%), with 9 (3.2%) discordant cases including: 2 initially interpreted as HCC (1 metastatic adrenal cortical carcinoma, 1 cholangiocarcinoma); 3 adenocarcinomas (2 further defined as prostatic primary, 1 well-differentiated neuroendocrine tumor [WDNET]); 2 metastatic carcinomas (1 tumor-induced fibrosis instead of cirrhosis, 1 LCNEC re-interpreted as WDNET); 1 poorly differentiated carcinoma (re-interpreted as LCNEC); and 1 sarcomatoid carcinoma (re-interpreted as leiomyosarcoma). Conclusion: Cytopathologist and gastrointestinal pathologist are highly concordant in the interpretation of neoplastic liver mass CB. Consultation may improve accuracy in certain non-neoplastic biopsies and neuroendocrine neoplasms. Answer: Yes, mass-forming cholangiocarcinoma (CCA) and adenocarcinoma of unknown primary (AUP) can be distinguished on liver MRI. A study aimed to determine MR features suggestive of CCA or AUP found that certain features on MRI images could aid radiologists in establishing the correct diagnosis. Solitary lesion, multifocal lesions with a dominant lesion, capsule retraction, and porta hepatis lymphadenopathy were features of CCA, while multifocal lesions with similar size and ring enhancement were features of AUP. The number of lesions, capsule retraction, and early tumor enhancement pattern were significant features (P < 0.05). Two independent radiologists were able to categorize patients as CCA or AUP using the formulated criteria with good agreement (k = 0.663) for population 1 with a known diagnosis and fair agreement (k = 0.293) for population 2 with an uncertain diagnosis. Concordance between the radiologists, medical records, and pathologist was found in 67% of patients (PUBMED:24929668). Additionally, Gd-BOPTA MRI findings of intrahepatic mass-forming type cholangiocarcinomas (IMCs) were described, with emphasis on the hepatobiliary phase (HBP). The pattern of IMC on dynamic study is similar to that of conventional extracellular agents, showing peripheral enhancement with progressive and concentric filling of contrast material on delayed phases. In the HBP, due to progressive central enhancement (cloud) and peripheral hypointense rim, a higher number of tumors show a target appearance; however, this pattern is not specific and could also be seen in metastases from adenocarcinoma (PUBMED:25962708). In conclusion, distinctive features of CCA and AUP are identifiable on MRI images, which may aid the radiologist in establishing the correct diagnosis, although some overlap in imaging features can occur.
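Several of these abstracts screen individual imaging features with the χ² or Fisher's exact test before multivariate modeling. As an illustration, the sketch below applies SciPy's Fisher's exact test to the target-sign counts reported in PUBMED:22829097 (24 of 32 ICCs versus 1 of 32 HCCs); note that this reproduces only the univariate comparison, not the multivariate P value quoted in that abstract.

    from scipy.stats import fisher_exact

    # Rows: tumor type; columns: target sign on DWI present / absent.
    table = [[24, 8],    # ICC: 24/32 lesions showed the target appearance
             [1, 31]]    # HCC: 1/32 lesions showed it
    odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
    print(f"odds ratio = {odds_ratio:.0f}, p = {p_value:.1e}")

The lopsided table yields a very large odds ratio and a vanishingly small p-value, consistent with the conclusion that the target sign on DWI is the single most discriminating feature between the two tumors.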
Instruction: Does hysteroscopy improve upon the sensitivity of dilatation and curettage in the diagnosis of endometrial hyperplasia or carcinoma? Abstracts: abstract_id: PUBMED:9454651 Does hysteroscopy improve upon the sensitivity of dilatation and curettage in the diagnosis of endometrial hyperplasia or carcinoma? Objective: The objective of this study was to determine whether hysteroscopy improved upon the diagnostic sensitivity of dilatation and curettage (D+C) in the detection of endometrial hyperplasia and carcinoma. Methods: A retrospective chart review was conducted of all patients undergoing hysteroscopy/D+C for abnormal uterine bleeding between 1991 and 1995. Hysteroscopic impressions and D+C diagnoses were compared. Results: Three hundred seventy-three patients were included in the study. Of the 61 patients with D+C demonstrating hyperplasia, the hysteroscopic impression was hyperplasia in 32 (52%). Of the 10 patients with D+C demonstrating carcinoma, the hysteroscopic impression was hyperplasia in 8 (80%) and carcinoma in 2 (20%). Two additional cases of carcinoma were diagnosed within 6 months of hysteroscopy/D+C, and both had been missed on both hysteroscopy and D+C. Of 204 patients with a normal hysteroscopic impression, 23 (11%) had hyperplasia on D+C. Conclusions: Hysteroscopy did not improve upon the sensitivity of D+C in the detection of endometrial hyperplasia or carcinoma. abstract_id: PUBMED:17966221 Hysteroscopy with directed biopsy versus dilatation and curettage for the diagnosis of endometrial hyperplasia and cancer in perimenopausal women. Background: The present study was undertaken to compare the effectiveness of dilatation and curettage (D&C) with hysteroscopy and guided biopsy (H+B) for the collection of endometrial samples adequate for histological examination in perimenopausal women at risk of endometrial hyperplasia or cancer. Methods: We performed hysteroscopy and biopsy followed by dilatation and curettage in 734 patients with abnormal perimenopausal bleeding or sonographically revealed endometrial pathology. Two hundred and ninety-two patients in whom lesions were totally removed during hysteroscopy were excluded from further study. Results: Using both methods we disclosed 64 cases of endometrial polyps, 60 cases of endometrial hyperplasia, and 49 cases of endometrial cancer. Hysteroscopy left just four cases of endometrial pathology undiagnosed as opposed to 21 cases using dilatation and curettage. Histology could not be performed on material obtained with hysteroscopy in four cases and with curettage in 23 cases. Conclusions: 1) Hysteroscopy with directed biopsy is more sensitive in disclosing all types of uterine lesions than dilatation and curettage. 2) Curettage done after hysteroscopy and directed biopsy does not improve the detection of endometrial cancer. abstract_id: PUBMED:10094904 Does hysteroscopy improve upon the sensitivity of dilation and curettage in the diagnosis of endometrial hyperplasia or carcinoma? N/A abstract_id: PUBMED:26327874 Hysteroscopic assessment of postmenopausal endometrial thickening. Introduction: Endometrial thickness is measured by transvaginal sonography, and thickening indicates an increased risk of malignancy or other pathology (hyperplasia or polyp) in the postmenopausal period. The main screening methods for the uterine cavity are dilatation and curettage, and hysteroscopy.
We sought to correlate hysteroscopic and pathological findings in asymptomatic postmenopausal women with sonographically thickened endometrium (> 5 mm) in this study. Material And Methods: This retrospective cross-sectional study involved the case records of 197 women who had a thickened (> 5 mm) endometrium in the postmenopausal period. All these women underwent hysteroscopy with diagnostic dilatation and curettage between January 2012 and January 2013 at the Bursa Zübeyde Hanım Maternity Hospital. The sensitivity, specificity, positive and negative predictive values, and p value of hysteroscopy were calculated. Dilatation and curettage was set as the gold standard. Results: For the evaluation of postmenopausal thickened endometrium, hysteroscopy showed a sensitivity, specificity, positive predictive value and negative predictive value of 76.4%, 76.9%, 73.1% and 79.8%, respectively. Conclusions: Hysteroscopy is a fast and accurate technique for the evaluation of intrauterine space-occupying lesions (polyp, fibroid) but only moderate for endometrial hyperplasia. Hysteroscopic view combined with direct biopsy could be a gold standard for endometrial assessment. abstract_id: PUBMED:17436402 Flexible hysteroscopy in irregular uterine bleeding. Purpose: The objective of our study was to analyze the possibilities of diagnostic flexible hysteroscopy in women with irregular uterine bleeding. Patients And Methods: The study involved 661 women aged 26-72 years (mean 46.23) with mild to moderate irregular uterine bleeding who agreed to have invasive investigations. Diagnostic hysteroscopy with the panoramic fibrohysteroscope Pentax LH-150 II (5 mm in diameter) was performed initially in all patients, followed by classical dilatation and curettage (D & C). Results: Different types of normal endometrium were found in 248 (37.52%) women, hyperplasia and polyposis in 300 (45.39%), changes suspicious for atypia and neoplasia in 82 (12.40%), and other findings in 31 (4.69%). Subsequent D & C yielded inadequate material from the uterine cavity in 26 (3.93%) women; in 538 (81.39%) the morphological findings were due to hormonal and dyshormonal causes; 66 (9.98%) had various precancerous and cancerous lesions; and in 31 (4.69%) cases, clinically and morphologically, bleeding was due to the presence of submucosal myomas, intrauterine devices, residua or pathology of the chorion. Conclusion: The comparison of the hysteroscopy results with the clinical and histological findings from D & C showed good diagnostic performance of flexible hysteroscopy for the evaluation of women with irregular bleeding (sensitivity 74.07% and specificity 90.61%). This examination is relatively simple and safe, and a direct biopsy should be performed at the same time in order to improve the accuracy of the results. abstract_id: PUBMED:14674125 Value of saline infusion sonohysterography and hysteroscopy in postmenopausal patients with persistent abnormal ultrasonographic images after endometrial curettage with normal histological results Aim Of The Study: To compare sonohysterography with outpatient diagnostic hysteroscopy in the evaluation of postmenopausal patients with persistent abnormal ultrasonographic images after endometrial curettage with normal histological results. Materials And Methods: The material consisted of 47 postmenopausal women aged 44-91 years, investigated because of abnormal uterine bleeding and/or suspicious endometrial ultrasonographic images obtained a month after curettage.
Patients were evaluated by diagnostic sonohysterography (without analgesia) and outpatient diagnostic hysteroscopy (local analgesia). Outcomes of sonohysterography and hysteroscopy were compared with the results of histological examination of specimens (hysteroscopic biopsy, hysteroscopic resection or hysterectomy). The sensitivity and specificity of the two diagnostic methods were compared. Results: An intrauterine cause of the abnormal ultrasonographic picture was disclosed by sonohysterography in 35 (74.46%) cases. Suspicion of focal endometrial carcinoma on SIS was reported in 1 (2.12%) case. The sensitivity of SIS in relation to all pathology was 0.97, with a specificity of 0.90. Sensitivity and specificity were, respectively, 1.00 and 0.83 for polyps, and 0.84 and 0.95 for hyperplasia. There were no statistically significant differences in sensitivity and specificity between SIS and diagnostic hysteroscopy. Conclusion: Diagnostic hysteroscopy and sonohysterography have the same diagnostic potential in evaluation of the endometrium. Both methods are useful in patients whose curettage specimens yielded normal histological results. Ultrasonographic pictures imitating endometrial carcinoma or endometrial hyperplasia are in most cases related to an endometrial polyp. The advantage of applying a liquid medium in SIS seems to outweigh the risk of carcinoma cells passing into the peritoneal cavity. abstract_id: PUBMED:12731740 Removal of endometrial polyps by use of grasping forceps and curettage after diagnostic hysteroscopy. Purpose Of Investigation: To determine the therapeutic efficacy of the use of gallstone forceps and curettage for endometrial polyp removal after their detection with diagnostic hysteroscopy. Methods: From 1997 to 2001, we conducted a prospective study in 53 patients who presented at our department for menstrual disorders, infertility problems or postmenopausal bleeding and in whom endometrial polyps were detected by hysteroscopy. All patients received general anesthesia and, after hysteroscopic detection of the polyps' location, their removal was attempted by use of Desjardins gallstone forceps and curettage. Immediately after the procedure, a second hysteroscopy was performed in order to detect remnants of the polyps. Results: Fifty patients presented with only one polyp, two with two polyps and one with three polyps. The mean diameter of the polyps ranged from 0.5 to 3 cm. The hysteroscopic appearance of all polyps was not suggestive of malignancy. During the second hysteroscopy we found parts of polyps or whole polyps in five and two cases, respectively, accounting for a therapeutic success rate of 86.8%. The hospitalization time for all patients was 24 hours, and no intraoperative or postoperative complications occurred. Conclusion: Our method seems to be safe, with low cost and a sufficient therapeutic outcome, and could be used in hospitals where only diagnostic hysteroscopy is available. abstract_id: PUBMED:23313791 Concordance between hysteroscopic impression and endometrial histopathological diagnosis. Objective: The aim of this study was to evaluate the accuracy of hysteroscopic impression for diagnosing benign and malignant endometrial pathology. Method: This is a retrospective cross-sectional study involving case records of 412 patients who underwent hysteroscopy with diagnostic dilatation and curettage (D&C) at the University of Malaya Medical Centre from January 2009 to August 2011, and cases with records of previous hysteroscopies (2007-2008).
Sensitivity, specificity, positive (PPV) and negative (NPV) predictive values, likelihood ratios (LR) and post-test probabilities of hysteroscopy were calculated. D&C was set as the 'gold standard'. Results: Hysteroscopy and histology results were concordant in 366 (88.8%) subjects. Sensitivity, specificity, PPV and NPV were high, exceeding 80%. Moderate sensitivity for endometrial hyperplasia (64.4%, 95% CI=49.8%-76.8%) and moderate PPV for malignancy (62.1%, 95% CI=44.0%-77.3%), due to misdiagnosing hyperplasia as malignant, were observed. PPV for leiomyoma was reduced (83.3%, 95% CI=60.8%-94.2%) despite 100% sensitivity, due to D&C false negatives. High positive LRs (>10) and low negative LRs (<0.2) were generally observed, except for endometrial hyperplasia (0.36). Hysteroscopy had a moderate positive post-test probability for malignancy (0.62) but was effective in ruling out malignancy (negative post-test probability=0.00). Conclusion: Hysteroscopy is accurate for diagnosing focal and malignant endometrial pathology but only moderate for hyperplasia. Endometrial sampling is recommended for all cases, especially when suspecting hyperplasia or malignancy. abstract_id: PUBMED:1803713 Hysteroscopy and endometrial cancer diagnosis: a review of 2007 consecutive examinations in self-referred patients. The authors reviewed 2007 consecutive outpatient hysteroscopies performed in self-referred women to assess the detection rate of uterine cancer and the validity of different selection criteria for hysteroscopy. Thirty cases of uterine cancer (29 endometrial, 1 carcinosarcoma) were detected. Abnormal uterine bleeding was the indication most commonly associated with cancer (26 of 30 cases, cancer detection rate = 2.1%), whereas the presence of cervical polyps had no predictive value. Patient age was correlated with the cancer detection rate, and the investigation of uterine cancer under the age of 45 was poorly cost-effective. Hysteroscopy and endometrial biopsy, performed by Permacurette or Novak curette immediately after hysteroscopy, missed 8 and 2 of 30 cancers, respectively. Hysteroscopy should be employed in combination with endometrial biopsy as a standard outpatient investigation whenever endometrial cancer is suspected. These procedures are safe and accurate and rule out more aggressive and costly procedures, such as dilatation and curettage, in most cases. abstract_id: PUBMED:34175583 Correlation between pre-operative diagnosis and final pathological diagnosis of endometrial malignancies; impact on primary surgical treatment. Objectives: To compare the accuracy of preoperative and postoperative endometrial sampling obtained by dilatation and curettage (D&C), Pipelle and hysteroscopy to diagnose endometrial malignancies, and to evaluate the adequacy of surgical treatment for these patients. Study Design: This retrospective record-based study was conducted at the Gynecology Oncology Unit, Ain Shams University Maternity Hospital, Cairo, Egypt. Records of patients with an initial preoperative or a final postoperative pathological diagnosis of endometrial hyperplasia or endometrial malignancies were retrieved over a six-year period from January 2013 until March 2019. The preoperative biopsy methods, results, postoperative pathology diagnosis and treatment received were recorded and analyzed. Results: Records of 395 patients were retrieved.
Rates of agreement between initial and postoperative pathology for endometrial hyperplasia (with and without atypia) were 10% by D&C, 7.8% by Pipelle and 0% by hysteroscopy; for endometrioid adenocarcinoma grade 1, they were 40.7% by D&C, 40.6% by Pipelle and 20% by hysteroscopy; for endometrioid adenocarcinoma grades 2 and 3, they were 83.7% and 84.6% by D&C, 97% and 60% by Pipelle, and 100% by hysteroscopy, respectively. For type II endometrial cancer and endometrial stromal sarcoma, agreement rates were 100% by all biopsy methods. Out of 13 cases diagnosed with atrophic endometrium preoperatively, 11 cases were diagnosed as endometrial stromal sarcoma. Comparing the accuracy of the three biopsy methods, D&C had the highest sensitivity (61.9%); hysteroscopy showed 100% specificity, while Pipelle was the least accurate method. Finally, 61.4% of the patients received adequate surgical treatment. Conclusion: There were excellent concordance rates between initial biopsy and postoperative pathology in the diagnosis of high-grade endometrioid and other endometrial malignancies compared to endometrial hyperplasia and low-grade endometrioid adenocarcinoma. Patients diagnosed initially with atrophic endometrium or hyperplasia warranted more surveillance before their surgical treatment. It is crucial to develop protocols for increasing the accuracy of endometrial sampling, such as performing molecular studies for preoperative pathology. Answer: The studies present mixed findings regarding the improvement of diagnostic sensitivity for endometrial hyperplasia or carcinoma when hysteroscopy is used in conjunction with dilatation and curettage (D&C). One study found that hysteroscopy did not improve upon the sensitivity of D&C in the detection of endometrial hyperplasia or carcinoma. In this study, hysteroscopic impressions were compared with D&C diagnoses, and it was observed that hysteroscopy missed some cases of carcinoma that were detected by D&C, and in cases where hyperplasia was detected by D&C, the hysteroscopic impression was hyperplasia in only 52% of cases (PUBMED:9454651). Conversely, another study concluded that hysteroscopy with directed biopsy is more sensitive in disclosing all types of uterine lesions than D&C. This study found that hysteroscopy left just four cases of endometrial pathology undiagnosed as opposed to 21 cases using D&C. It also suggested that curettage done after hysteroscopy and directed biopsy does not improve the detection of endometrial cancer (PUBMED:17966221). A further study indicated that hysteroscopy is a fast and accurate technique in evaluating intrauterine space-occupying lesions but only moderate for endometrial hyperplasia. The study suggested that hysteroscopic view combined with direct biopsy could be a gold standard for endometrial assessment (PUBMED:26327874). Another study supported the diagnostic possibilities of flexible hysteroscopy for evaluation of women with irregular bleeding, showing good sensitivity and specificity. It recommended the simultaneous performance of a direct biopsy to improve the accuracy of the results (PUBMED:17436402). In summary, while some studies suggest that hysteroscopy with directed biopsy may be more sensitive than D&C alone in detecting endometrial pathology (PUBMED:17966221; PUBMED:26327874; PUBMED:17436402), other studies indicate that hysteroscopy does not necessarily improve the sensitivity of D&C in all cases (PUBMED:9454651).
Therefore, the improvement in sensitivity may depend on the specific circumstances and the way hysteroscopy is combined with other diagnostic methods.
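The accuracy figures running through this answer (sensitivity, specificity, PPV, NPV, likelihood ratios, post-test probabilities) all derive from a single 2x2 table against the D&C gold standard. The Python sketch below shows the standard formulas; the tp/fp/fn/tn counts are hypothetical, chosen only so that the outputs land near the percentages quoted from PUBMED:26327874.

    # Hypothetical 2x2 table: hysteroscopy result vs. gold-standard D&C.
    tp, fp, fn, tn = 55, 20, 17, 65

    sensitivity = tp / (tp + fn)            # P(test positive | disease present)
    specificity = tn / (tn + fp)            # P(test negative | disease absent)
    ppv = tp / (tp + fp)                    # P(disease present | test positive)
    npv = tn / (tn + fn)                    # P(disease absent | test negative)
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity

    # Bayes' rule in odds form: post-test odds = pre-test odds * likelihood ratio.
    prevalence = (tp + fn) / (tp + fp + fn + tn)
    pre_odds = prevalence / (1 - prevalence)
    post_prob_pos = (pre_odds * lr_pos) / (1 + pre_odds * lr_pos)

    print(f"sens {sensitivity:.1%}, spec {specificity:.1%}, PPV {ppv:.1%}, NPV {npv:.1%}")
    print(f"LR+ {lr_pos:.2f}, LR- {lr_neg:.2f}, post-test probability after a positive test {post_prob_pos:.1%}")

When the pre-test probability equals the sample prevalence, the post-test probability after a positive result simply reproduces the PPV; the likelihood-ratio form becomes important when the same test is applied in a population with a different prevalence, which is why PUBMED:23313791 reports LRs alongside the predictive values.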
Instruction: Are flow-mediated vasodilatation and intima-media thickness of the brachial artery associated with restenosis after endovascular treatment of peripheral arterial occlusive disease? Abstracts: abstract_id: PUBMED:20432038 Are flow-mediated vasodilatation and intima-media thickness of the brachial artery associated with restenosis after endovascular treatment of peripheral arterial occlusive disease? Objective: Restenosis after percutaneous angioplasty of peripheral arteries is still an unsolved matter. Previous studies reported an association between flow-mediated dilatation (FMD), a marker of endothelial dysfunction, and restenosis after coronary angioplasty. This study evaluates the influence of FMD and brachial intima-media thickness (B-IMT) on restenosis after angioplasty of peripheral arteries. Methods: One hundred and eighty-four patients (124 male) with claudication related to peripheral arterial disease participated in this trial. FMD and B-IMT were assessed before endovascular revascularisation. In a 12-month follow-up, duplex ultrasound examinations were performed to detect restenosis. Finally, 128 patients (91 male, 37 female) were eligible for statistical analysis. Results: Restenosis was found in 54 patients (42.2%). Mean FMD was 3.53 ± 3.56%, with no difference between the patients with restenosis (3.55 ± 3.64%) and those without (3.52 ± 3.48%; p = 0.716). B-IMT had a mean value of 0.326 ± 0.134 mm. B-IMT significantly differed between the patients with restenosis (0.326 ± 0.134 mm) and those without (0.256 ± 0.133 mm; p = 0.007). We confirmed that a B-IMT over 0.21 mm was an independent risk factor for restenosis [OR 2.9 (1.3-6.3)]. Conclusion: Endothelial dysfunction is not associated with restenosis. Conversely, patients with enlarged B-IMT are at risk of restenosis after angioplasty of peripheral arteries. abstract_id: PUBMED:16061402 Carotid intima-media thickness, carotid wall shear stress and restenosis after femoro-popliteal percutaneous transluminal angioplasty (PTA). Objective: To determine the relationship between carotid intima-media thickness (IMT), carotid wall shear stress (WSS) and restenosis after femoro-popliteal percutaneous transluminal angioplasty (PTA). Patients And Methods: Thirty-one subjects (18 men, 13 women, median age 69 years) treated with femoro-popliteal PTA for symptomatic peripheral arterial occlusive disease were enrolled. On admission, IMT, internal diameter and blood velocity of the common carotid artery (CCA) were assessed by high-resolution ultrasonography. Blood viscosity was measured and carotid WSS was calculated. Patients were followed up for 6 months for the occurrence of significant restenosis (>50%) as documented by duplex ultrasonography. Two patients were lost to follow-up. Results: Fourteen patients (48%) developed restenosis at 6 months. IMT and WSS were not different in patients without and with restenosis (IMT: 0.90 (0.85-0.97) vs. 0.89 (0.84-0.93) mm, p = 0.51; WSS: 14.1 (11.9-19.2) vs. 15.9 (12.8-21.5) dyne/cm2, p = 0.48). The hazard ratio of incident restenosis as estimated by Cox regression analysis was 0.04 for IMT (p = 0.23; 95% CI 0.0001-8.22) and 1.07 for WSS (p = 0.10; 95% CI 0.98-1.17). Conclusions: In this pilot study involving a limited number of patients, carotid IMT and carotid WSS are not significantly related to restenosis at 6 months after femoro-popliteal PTA. This might be the result of different underlying pathophysiology for atherosclerosis and restenosis.
abstract_id: PUBMED:17351203 Advances in endovascular treatment of femoropopliteal arterial occlusive disease. Over the past decade, accumulated experience and considerable advances in percutaneous endovascular techniques have enabled their use with increasing frequency for the management of peripheral arterial occlusive disease. However, despite the established role of endoluminal approaches in many other arterial territories, their application for the treatment of femoropopliteal occlusive disease is still debatable due to unique unfavorable anatomical, functional, and flow characteristics of this area. Conventional balloon angioplasty in the femoral and popliteal arteries is frequently associated with suboptimal results and high restenosis rates. To overcome these limitations, emerging approaches have been introduced or are currently under investigation. These include angioplasty modifications, newer concepts in stent design, adjunctive pharmacotherapy, debulking devices, and the application of gamma radiation and freezing in an attempt to reduce the intimal hyperplastic response and consequently to expand the application of minimally invasive techniques in the hostile femoropopliteal environment. This review article will present the current status and future trends of endovascular therapy of femoropopliteal artery occlusive disease. abstract_id: PUBMED:28826727 Randomized comparison of femoropopliteal artery drug-eluting balloons and drug-eluting stents (FOREST trial): Study protocol for a randomized controlled trial. The optimal endovascular treatment for femoropopliteal arterial occlusive disease has yet to be assessed. Patency rates after uncoated balloon angioplasty are disappointing. Although stents have better outcomes, they also have limitations. Intra-arterial stenting may lead to stent thrombosis and flow pattern disruption, which may result in stent fracture or in-stent restenosis. In the past decade, drug-eluting balloons (DEBs) and drug-eluting stents (DESs) have been introduced, and both have been proven to possess antirestenotic features compared with conventional techniques. The objective of this study is to perform a noninferiority analysis of DEBs with provisional bare-metal stenting and primary stenting with DESs in the treatment of femoropopliteal arterial occlusive disease. If DEB with provisional bare-metal stenting proves to be noninferior to primary stenting with DESs, DEBs may be the favorable technique because the postoperative long-term limitations of stents will be restricted. This is a prospective, randomized, controlled, single-blind, multicenter trial. The study population consists of volunteers aged ≥18 years, with chronic, symptomatic peripheral arterial occlusive disease (Rutherford-Baker classification 2 to 5) caused by de novo stenotic or occlusive atherosclerotic lesions of the superficial femoral artery or of the popliteal artery (only segment P1). Subjects will be treated with a DEB and provisional bare-metal stenting (if a stenosis >30% or a flow-limiting dissection persists after prolonged inflation with an uncoated balloon) or with primary stenting with a DES. The study will include 254 patients (ratio 1:1). The primary end point is 2-year freedom from binary restenosis, defined as a lumen diameter reduction of <50% assessed by duplex ultrasound imaging (peak systolic velocity ratio <2.5).
Secondary end points are technical success, target lesion revascularization, target vessel revascularization, improvement in ankle-brachial index, improvement in Rutherford classification, amputation rate, and mortality rate. abstract_id: PUBMED:15546296 Endovascular interventions in iliac and infrainguinal occlusive artery disease. Percutaneous endovascular procedures are increasingly applied to treat symptomatic peripheral occlusive artery disease. While the primary technical success and recanalization rates in iliac and infrainguinal interventions are high, differences in the long-term patency rates exist with respect to the anatomic localization, separating the iliac, femoropopliteal, and infrapopliteal arterial regions. In iliac arteries, even complex lesions can be recanalized with good long-term patency rates, especially when using self-expanding nitinol stents. In the infrainguinal arteries, the method of choice is still under debate (e.g., balloon angioplasty vs stent implantation). A high restenosis rate represents one of the major limitations in femoropopliteal and infrapopliteal interventions. Therefore, additional methods and treatment strategies for peripheral interventions with the potential for future applications are under investigation and will be discussed, such as drug-eluting stents, brachytherapy, subintimal angioplasty, laser angioplasty, atherectomy/thrombectomy, cutting balloon, polytetrafluoroethylene (PTFE)-covered stent grafts, biodegradable stents, and cryoplasty. The increasing amount of data on successful peripheral interventions supports the necessity to adapt and reevaluate the current consensus guidelines that were put together in 2000. abstract_id: PUBMED:9863361 Doppler sonography diagnosis of restenosis after percutaneous transluminal angioplasty: sensitivity and specificity of the pedal-brachial index in relation to changes in absolute arterial blood pressure Introduction: Percutaneous transluminal angioplasty (PTA) is accepted for the treatment of patients with severe, disabling claudication who fail conservative management and also for patients with limb-threatening ischaemia [1-5, 18, 20]. The development of neointimal hyperplasia (predominantly during the first 6-12 months after PTA) and the progression of the underlying atherosclerotic disease (thereafter) are the reasons for restenosis and reocclusion [1-4, 6]. More than 50% of occurring restenoses are primarily oligo/asymptomatic [1-4, 6-10, 25, 26]. Follow-up visits are aimed at detecting significant restenoses before reocclusion occurs, so that timely reintervention is possible. In asymptomatic patients in whom reintervention is not necessary, repeated angiography is not justifiable. Non-invasive alternatives include Duplex scanning or the assessment of simple peripheral haemodynamic variables such as ankle systolic pressures and the ankle/brachial pressure index (ABI). The aim of this study was: (1) To determine the sensitivity and specificity of post-PTA changes in the ABI, and changes in the absolute ankle pressure, to detect restenoses after femoropopliteal PTA (as the gold standard, Duplex scanning, with its reported high sensitivity, specificity and accuracy for detecting restenosis, was chosen [9, 25]). (2) For both methods, to evaluate the criteria (minimal magnitude of change, i.e., cut-off points) necessary to detect restenosis with reasonable reliability.
(3) To compare the diagnostic value of changes in ABI to changes in absolute ankle pressure, and to determine the method of preference for detection of post-PTA restenosis. Material And Methods: The study included 171 consecutive patients with peripheral arterial occlusive disease, Fontaine stage II or stage III, selected for femoro-popliteal PTA. All patients presented with single or multiple arterial stenoses or occlusions not exceeding 10 cm in length within the femoro-popliteal segment. At least one lower leg artery had to be patent. Only patients in whom PTA was successful (maximum residual lesion within the dilated segment showed ≤30% diameter reduction (DR)) were accepted for a 12-month follow-up. After PTA, all patients were prescribed a platelet aggregation inhibitor. The investigational scheme included the following procedures: 1. Duplex scanning of the entire lower leg vascular tree before PTA, within 7 days after PTA, as well as at 4 weeks and at 12 months. 2. Scoring of Duplex results using the Bollinger angiography score system [11], which distinguishes the pelvic segment, a proximal and a distal superficial femoral segment, the popliteal artery including the popliteal trifurcation, and the proximal 1/3 of the lower leg arteries. The score system allows the differentiation of single and multiple plaques (≤25% DR), single or multiple stenoses (≤50% and >50% DR) involving less or more than 50% of the segment under investigation, as well as short (<50% of segment length) and long (≥50% of segment length) vascular occlusions. 3. The assessment of the resting ankle systolic pressures (dorsalis pedis artery, posterior tibial artery) of the reference leg, pre-PTA, at 4 weeks, and at 3, 6, 9 and 12 months, using an 8 MHz CW Doppler device (Parks 908) with the patient in the supine position and after a resting period of ≥30 minutes. 4. The calculation of the ankle/brachial pressure index (ABI) as the ratio of the higher systolic pressure value from either the posterior tibial or the dorsalis pedis artery divided by the higher value of the two (bilateral) systolic brachial pressures. 5. A full physical examination including pulse status and vascular auscultation, as well as the assessment of the clinical symptomatology pre-PTA, at 4 weeks, and at 6 and 12 months. The following findings were suggestive of restenosis/reocclusion of the dilated segment: 1. Reoccurrence of a stenosis ≥50% DR within the dilated segment and/or its inflow or outflow abstract_id: PUBMED:8160095 Preventive percutaneous radiotherapy for avoiding hyperplasia of the intima following angioplasty together with stent implantation Purpose And Background: Interventional radiologic procedures play an important role in the management of peripheral arterial occlusive disease. Percutaneous transluminal interventions such as angioplasty (PTA) and implantation of metal stents injure the wall of the blood vessels, resulting in hyperplasia of the intima and media. In spite of adequate anticoagulation therapy, restenosis and reocclusion frequently occur. Our study was designed to prevent hyperplasia of the intima by percutaneous radiation therapy. Patients And Methods: A total of 24 patients had a stent implanted in their superficial femoral artery from 1990 to 1992. Eleven patients received percutaneous radiation therapy of the enlarged stent area on five consecutive days with a single dose of 2.5 Gy, thus resulting in a total dose of 12.5 Gy. Both patient groups were compared.
None of the patients in the radiation group suffered from complications. Results: In the seven months following stent implantation and radiation, only two of eleven patients in the radiation group suffered from occlusion of the stent that had to be treated by vascular surgery. In comparison, five of 13 patients treated with stent implantation alone suffered from occlusion of the stent within the first eight to nine months. Conclusion: In spite of a small case number, our study shows a positive influence of percutaneous radiation therapy on the extent of hyperplasia of the intima after stent implantation, leading to lower reocclusion rates. Whether these results can be improved by alternative dose-fractionation schedules has to be addressed by further studies. abstract_id: PUBMED:17928625 Comparison of selective AT1-receptor blockade versus ACE inhibition for restenosis prophylaxis in patients with peripheral occlusive arterial disease after stent angioplasty: a randomized, controlled, proof-of-concept study. Different components of the renin-angiotensin system (RAS) have been demonstrated in atherosclerotic plaques. However, the involvement of the RAS in the complex process of in-stent restenosis is not yet clear. In this prospective, randomized, double-blind, controlled proof-of-concept study, we compared the two different pharmacological approaches, selective AT(1)-receptor blockade with candesartan vs ACE inhibition with quinapril, to reduce in-stent restenosis after stent angioplasty of the superficial femoral artery. Twenty-two hypertensive patients with stage IIb peripheral occlusive arterial disease and severe claudication who had been successfully treated with percutaneous transluminal angioplasty (PTA) and stent implantation were randomly assigned to receive daily doses of either candesartan (32 mg) or quinapril (20 mg). The primary end point was restenosis 6 months after intervention, assessed by angiography. Secondary end points were pain-free walking distance, determined by treadmill ergometry; determination of crurobrachial indices; and intima-media thickness (IMT). At 6 months, the rate of restenosis on angiography was 34% in the candesartan group and 71% in the quinapril group (P = .043). Relevant restenosis was found in 3 patients (27%) in the candesartan group and in 7 patients (64%) in the quinapril group. Patients in the candesartan group were able to walk farther on a treadmill (increase: 135 m ± 20 m) compared with patients in the quinapril group (increase: 83 m ± 21 m). The IMT at the stent edge was not significantly different in the 2 groups (candesartan: 1.9 mm ± 0.5 mm; quinapril: 2.0 mm ± 0.3 mm). This study revealed a significant benefit of a pharmacological restenosis regimen using the AT(1)-receptor antagonist candesartan in patients with severe atherosclerosis after superficial femoral artery stenting compared with treatment with the ACE inhibitor quinapril. Further prospective studies in patients are required to confirm these results. abstract_id: PUBMED:17523758 Candesartan treatment for peripheral occlusive arterial disease after stent angioplasty: a randomised, placebo-controlled trial. Objectives: In this prospective, double-blind, placebo-controlled study we observed the influence of treatment with candesartan 8 mg on restenosis rates after stent implantation into the femoral artery 6 months after percutaneous transluminal angioplasty (PTA).
We hypothesised that angiotensin II type 1 (AT1)-receptor blockade with candesartan would reduce restenosis rates by reducing angiotensin II-mediated intima hyperproliferation within the stented vessel segment in patients with peripheral occlusive disease. Patients And Methods: Eighty-seven patients with peripheral occlusive arterial disease stage IIb who had been successfully treated with PTA and stent implantation were randomised to receive orally either candesartan 8 mg (n = 44) or placebo (n = 43). Follow-up included evaluation of the degree of stenosis and thickness of the intima-media complex (primary endpoint). In addition, thickness of the interventricular septum, crurobrachial pressure ratios, and pain-free walking distance were determined (secondary endpoints). Results: The degree of stenosis after 6 months was not significantly different between the groups studied (35.9 ± 39.6% for candesartan vs 36.0 ± 38.4% for placebo). Relevant restenosis including stent occlusions was found in nine patients (20.5%) in the candesartan group and in ten patients (23.3%) in the placebo group. The thickness of the intima-media complex 6 months after stent implantation was 1.60 ± 0.32 mm in the candesartan group and 1.64 ± 0.32 mm in the placebo group (not significant). There were no differences in secondary endpoints between the treatment groups. Controls after 3 months (20.9 ± 33.6% for candesartan vs 27.6 ± 38.3% for placebo; p = 0.39) and 9 months (44.1 ± 40.8% for candesartan vs 47.7 ± 37.2% for placebo; p = 0.67) of therapy revealed a lower degree of stenosis in patients treated with candesartan. Conclusions: Although not significant, candesartan treatment tended to improve the prognostic benefits after stent implantation, suggesting that an antiproliferative effect after stenting may need higher doses than an antihypertensive effect of the drug. This hypothesis requires confirmation in further prospective studies with higher daily doses of candesartan, which are already in progress. abstract_id: PUBMED:34634414 Predictors of outcome in diabetic patients undergoing infrapopliteal endovascular revascularization for chronic limb-threatening ischemia. Objective: The incidence of chronic limb-threatening ischemia in diabetic patients is increasing. The factors influencing outcome after infrapopliteal revascularization in these patients are largely unknown. Therefore, this study aims to identify the impact of perioperative glucose control on the long-term outcomes in this patient cohort, and furthermore to identify other factors independently associated with outcome. Methods: Consecutive diabetic patients undergoing infrapopliteal endovascular revascularization for chronic limb-threatening ischemia were identified. Patients' demographics, procedural details, daily capillary blood glucose, and hemoglobin A1C levels were collected and analyzed against the study end points using Kaplan-Meier and Cox regression analysis. Results: A total of 437 infrapopliteal target vessels were successfully crossed in 203 patients. Amputation-free survival by Kaplan-Meier (estimate (standard error)%) was 74 (3.3)% and 63 (3.7)%, primary patency was 61 (4.2)% and 50 (4.9)%, assisted primary patency was 69 (5.2)% and 55 (6.1)%, and secondary patency was 71 (3.8)% and 59 (4.1)% at 1 year and 2 years, respectively.
Cox regression analysis showed high perioperative capillary blood glucose levels to be an independent predictor of binary restenosis (hazard ratio [HR], 1.49; 95% confidence interval [CI], 1.31-1.78; P = .015). Postprocedural dual-antiplatelet therapy was found to be an independent predictor of amputation-free survival (HR, 1.69; 95% CI, 1.04-2.75; P = .033) and freedom from major adverse limb events (HR, 1.96; 95% CI, 1.16-3.27; P = .023), and baseline estimated glomerular filtration rate was significantly associated with better amputation-free survival (HR, 0.52; 95% CI, 0.31-0.87; P = .014). Conclusions: Poor perioperative glycemic control is associated with a higher incidence of restenosis after infrapopliteal revascularization in diabetic patients. Dual antiplatelet therapy is associated with better outcomes in this group. Answer: Flow-mediated vasodilation (FMD) does not appear to be associated with restenosis after endovascular treatment of peripheral arterial occlusive disease. A study involving 184 patients with claudication related to peripheral arterial disease found no significant difference in mean FMD between patients with restenosis and those without after a 12-month follow-up (PUBMED:20432038). On the other hand, brachial intima-media thickness (B-IMT) has been shown to be significantly associated with restenosis. The same study reported that patients with an enlarged B-IMT were at a higher risk of restenosis after angioplasty of peripheral arteries. A B-IMT value over 0.21 mm was identified as an independent risk factor for restenosis, with an odds ratio of 2.9 (PUBMED:20432038). These findings suggest that while endothelial dysfunction, as measured by FMD, is not linked to restenosis, structural changes in the artery, such as increased B-IMT, are indicative of a higher risk for restenosis following endovascular treatment in patients with peripheral arterial occlusive disease.
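The Doppler sonography abstract (PUBMED:9863361) spells out how the ankle/brachial pressure index is derived: the higher systolic pressure of the dorsalis pedis and posterior tibial arteries, divided by the higher of the two bilateral brachial pressures. A minimal sketch of that calculation follows; the example pressure readings are illustrative assumptions, not values taken from any of the studies.

```python
# Minimal sketch of the ankle/brachial pressure index (ABI) as defined
# in PUBMED:9863361: the higher systolic pressure of the dorsalis pedis
# and posterior tibial arteries, divided by the higher of the two
# (bilateral) brachial systolic pressures. Example values are made up.

def ankle_brachial_index(dorsalis_pedis: float,
                         posterior_tibial: float,
                         brachial_left: float,
                         brachial_right: float) -> float:
    ankle = max(dorsalis_pedis, posterior_tibial)
    brachial = max(brachial_left, brachial_right)
    return ankle / brachial

# Hypothetical reading (mmHg): yields an ABI of ~0.63, in the range
# typically seen with peripheral arterial occlusive disease
# (a normal ABI is roughly 0.9 or above).
abi = ankle_brachial_index(dorsalis_pedis=85, posterior_tibial=90,
                           brachial_left=140, brachial_right=143)
print(f"ABI = {abi:.2f}")
```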
Instruction: Retropharyngeal and parapharyngeal abscesses or phlegmons in children. Is there an association with adenotonsillectomy? Abstracts: abstract_id: PUBMED:18922587 Retropharyngeal and parapharyngeal abscesses or phlegmons in children. Is there an association with adenotonsillectomy? Background: Retropharyngeal and parapharyngeal abscesses or phlegmons are relatively uncommon. Since the tonsils and adenoids are part of the lymphoid tissues of the nasopharynx and oral cavity, one may hypothesize that their removal would lead to an increased susceptibility to pharyngeal infections. Objective: Determine whether there is an association between a history of adenotonsillectomy and the development of retro- and parapharyngeal abscesses/phlegmons. Methods: A case-control study was performed. The charts of 180 children admitted to a tertiary care pediatric hospital between 1996 and 2006 were reviewed. The patients were age-matched to 180 children having visited the emergency department of the same hospital. The charts of these children were reviewed and used as controls. Results: 13.9% of children admitted with a retro- or parapharyngeal abscess or phlegmon had previously undergone an adenotonsillectomy as compared to only 2.2% of children in the control group. The calculated odds ratio was 7.10 (95% CI [2.52-19.93], p<0.001). Conclusion: In conclusion, the development of a retropharyngeal or parapharyngeal abscess or phlegmon is strongly associated with a history of adenotonsillectomy. abstract_id: PUBMED:38490013 Parapharyngeal and retropharyngeal infections in children: Kawasaki disease needs vigilance. Objective: Kawasaki Disease (KD) may mimic Parapharyngeal (PPI) and Retropharyngeal Infections (RPI), leading to misdiagnosis as Deep Neck Infections (DNIs). The treatment plans for the two diseases are different, and delayed treatment can lead to serious complications. Therefore, prompt diagnosis and management are necessary. This study was performed to evaluate the clinical features of KD mimicking DNIs and explore the treatment options. Methods: Children with cellulitis or abscess in the parapharyngeal or retropharyngeal space on neck CT were included in this study. The medical records of enrolled children were retrospectively reviewed. Results: In total, 56 children were diagnosed with PPI and/or RPI. Twenty-two (39.3%) participants were eventually diagnosed with KD, and 34 (60.7%) were diagnosed with DNIs. Compared with the DNIs group, the KD group had a higher body temperature (p=0.007) and higher levels of AST (p=0.040), ALT (p=0.027), and ESR (p=0.030). Deep cervical cellulitis (p=0.005) was more common in the KD group. However, deep neck abscess often occurred in the DNIs group (p=0.002), with parapharyngeal abscess being the most common type of abscess (p=0.004). The KD cases mimicking DNIs did not respond to antibiotic treatment, but symptoms significantly improved after the use of intravenous immunoglobulin (IVIG) and aspirin. Conclusion: Children with KD may exhibit retropharyngeal or parapharyngeal inflammation in the early stages. KD should be considered a differential diagnosis for children with DNIs, high fever, and no response to antibiotic therapy. Surgery in KD mimicking deep neck abscess requires caution. Level Of Evidence: I.
abstract_id: PUBMED:19586761 Retro- and parapharyngeal infections: standardization of their management Aim: To analyze the changes in the management of retropharyngeal and parapharyngeal infections and propose a decisional algorithm for their diagnosis and treatment. Patients And Methods: A retrospective survey was carried out in a tertiary care pediatric hospital between January 2001 and December 2005. All children aged less than 15 years and affected by a retro- or parapharyngeal infection were included. Clinical, biological, and radiological data, medical and surgical treatment, and complications were extracted from the review of medical charts. The results of the surgical findings were correlated with a cervical computed tomographic scan (CT scan). Results: Thirty-one patients were included, 64.5% during the last 2 years of the study period. All children presented fever and a stiff neck. The pharyngeal examination revealed a retropharyngeal bulge in a quarter of the population, and an upper respiratory tract infection was concomitant in 68% of cases. A CT scan was carried out in 29 of 31 children (93.5%), with the radiological diagnosis of an abscess in 16 children (55.2%), presuppurative adenitis in 8 children (27.6%), and cellulitis in 5 children (17.2%). The CT scan was performed within 0.75 days of admission in 2001 and 2.3 days in 2005. All children were treated with intravenous antibiotic therapy: an association of amoxicillin/clavulanic acid and an aminoglycoside in most cases. The mean duration of intravenous antibiotic therapy was 5.2 days. Seventeen patients (93.5%) underwent surgical drainage and purulent material was found in 82.3% of cases. The accuracy of the CT scan, confirmed by the surgical finding of purulent material, was 71.4% in correctly identifying an abscess. The mean duration of surgical treatment after admission increased from 1.7 days in 2001 to 3.3 days in 2005. The number of patients who underwent surgery was divided by a factor of 3 in the second period of the study. Two groups were compared: group A (n=12) treated with antibiotic therapy and group B (n=17) treated with antibiotics and surgical drainage. No significant difference was found between the two groups considering the duration of parenteral and oral antibiotic therapy, the normalization of cervical mobility, the mean time for apyrexia, and the length of hospitalization. There was one recurrence in group B 1 month later, and one case of sepsis in group A. None of the patients with retropharyngeal infection died. Conclusion: Without clinical evidence of severe sepsis, parenteral antibiotic therapy is recommended as the first-line treatment for children over 6 months of age presenting with retropharyngeal and parapharyngeal infections. If the clinical and/or biological conditions do not improve within 48-72 h, a CT scan is indicated to assess the extent of infection and exclude complications. The decision to initiate surgical drainage depends on the patient's clinical status and the accessibility of the abscess. abstract_id: PUBMED:31599139 Deep Neck Inflammation: Probable Kawasaki Disease in Korean Children. Objectives: Deep neck infections (DNIs) can cause life-threatening complications, and prompt diagnosis and management are necessary. Kawasaki disease (KD) may be accompanied by deep neck inflammation, making it difficult to distinguish from DNIs. This study was performed to evaluate clinical features and outcomes of children with parapharyngeal and retropharyngeal inflammation.
Methods: Medical records of the children diagnosed with parapharyngeal and retropharyngeal cellulitis or abscess using cervical computed tomography (CT) between 2013 and 2017 were retrospectively reviewed. Results: A total of 47 children were diagnosed with parapharyngeal and retropharyngeal inflammation. Eleven (23.4%) of them were eventually diagnosed with KD, and 36 (76.6%) were diagnosed with DNIs. There were no significantly different clinical and laboratory characteristics on admission between children diagnosed with KD and DNIs; however, significantly more children with KD were febrile for ≥3 days after admission compared to those with DNIs (P=0.009). Deep neck abscesses on CT were observed in 16 children with DNIs (44.4%) and in no child with KD (P=0.009). Among the 36 children with DNIs, 30 (83.3%) were cured with antibiotic therapy only. Conclusion: A quarter of children presenting with deep neck inflammation were diagnosed with KD. KD should be considered in children showing deep neck inflammation unresponsive to empirical antibiotic therapy after 3 days, especially in those presenting with deep neck cellulitis rather than deep neck abscess. abstract_id: PUBMED:21189972 A case of Kawasaki disease with coexistence of a parapharyngeal abscess requiring incision and drainage. Kawasaki disease (KD) causes multisystemic vasculitis but infrequently manifests with deep neck infections, such as a peritonsillar abscess, peritonsillar or deep neck cellulitis, suppurative parapharyngeal infection, or retropharyngeal abscess. As its etiology is still unknown, the diagnosis is usually made based on typical symptoms. The differential diagnosis between KD and deep neck infections is important, considering the variable head and neck manifestations of KD. There are several reports on KD patients who were initially diagnosed with retropharyngeal abscess on computed tomography (CT) scans. However, the previously reported cases did not have abscess or fluid collection on retropharyngeal aspiration; thus, false-positive neck CT scans have been reported. In this case, a suspected neck abscess in patients with KD unresponsive to intravenous immunoglobulin could signal the possible coexistence of suppurative cervical lymphadenitis. abstract_id: PUBMED:15711471 Retropharyngeal and lateral pharyngeal abscesses in children Objective: Review of the diagnosis and management of retropharyngeal and lateral pharyngeal abscesses in children. Patients And Methods: Retrospective study of five children hospitalized during 2003. Results: Diagnosis included a CT scan, which is often helpful to differentiate cellulitis from abscesses. First-line medical management consists of intravenous broad-spectrum antibiotics. Surgical drainage may be considered in cases of failure or clinical aggravation. Close observation is mandatory in all cases to prevent major complications. Conclusion: Retropharyngeal and lateral pharyngeal abscesses in children are life-threatening infections. Treatment consists of intravenous broad-spectrum antibiotics combined, if necessary, with surgical drainage in cases of persistent abscess.
Both children were found to have retropharyngeal cellulitis/abscess. The problem of differentiating between the non-suppurative and the suppurative phases of the disease process is discussed. abstract_id: PUBMED:23687369 Retropharyngeal cellulitis in adolescence. A healthy 10-year-old boy presented with fever and progressively worsening sore throat and dysphagia. Physical examination showed pharyngeal erythema with tender left cervical lymphadenopathy. Radiography revealed prevertebral soft tissue 9 mm deep at the C2 level, and contrast-enhanced CT showed fluid collection with no major ring enhancement in the retropharyngeal space. He was diagnosed with retropharyngeal cellulitis and treated with intravenous antibiotics. Retropharyngeal cellulitis or abscess is a relatively rare infection in adolescents but is more frequent in 2-4-year-old children. Retropharyngeal cellulitis may rapidly extend caudally, with fatal consequences. For adolescents, physicians need to be aware of this clinical entity and carefully evaluate imaging findings even when only mild pharyngeal physical findings are noted. abstract_id: PUBMED:10504018 Correlation between computed tomography and surgical findings in retropharyngeal inflammatory processes in children. Retropharyngeal abscess (RPA) in children is a potentially life-threatening process which often requires immediate surgical intervention. Contrast-enhanced computed tomography (CT) is utilized frequently to distinguish abscess from cellulitis/phlegmon and aids in determining cases needing surgical drainage. The purpose of this retrospective study was to determine the accuracy of CT in distinguishing retropharyngeal abscess from cellulitis in children. The medical records of 32 children from 1989 to 1997 suspected of having a retropharyngeal abscess were reviewed. All patients included in the study underwent a CT scan as well as surgical exploration within 48 h of the scan. Two patients required two surgical procedures (n = 34). A comparison between CT results and operative findings was made to determine the accuracy of CT imaging in confirming the presence of RPA versus cellulitis. Suspected diagnosis of abscess or cellulitis/phlegmon on CT was confirmed at surgery in 25 of 34 cases (73.5%). The false-positive rate of CT was 11.8% (4/34), while the false-negative rate was 14.7% (5/34). Based on our results, CT is accurate in differentiating abscess from cellulitis in 73.5% of cases. Clinical findings, as well as radiologic findings, must be considered together prior to surgical drainage of a suspected retropharyngeal abscess in children.
Gadolinium enhancement extended to the middle cranial fossa and visceral space. Multiple stenoses in several intracranial vessels were also identified. Intravenous antibiotic therapy was initiated, the patient's symptoms resolved, and repeat imaging confirmed improvement. Intracranial vasospasm should be considered in patients with retropharyngeal cellulitis. Answer: Yes, there is an association between adenotonsillectomy and the development of retropharyngeal or parapharyngeal abscesses or phlegmons in children. A case-control study reviewed the charts of 180 children admitted to a tertiary care pediatric hospital with retro- or parapharyngeal abscess or phlegmon and compared them to 180 age-matched controls. The study found that 13.9% of children admitted with a retro- or parapharyngeal abscess or phlegmon had previously undergone an adenotonsillectomy, compared to only 2.2% of children in the control group. The calculated odds ratio was 7.10, indicating a strong association between a history of adenotonsillectomy and the development of these infections (PUBMED:18922587).
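The odds ratio cited in this answer can be reconstructed from the reported proportions. The sketch below rebuilds the 2x2 case-control table from the published percentages (13.9% of 180 cases and 2.2% of 180 controls, i.e. roughly 25 and 4 exposed children) and computes the odds ratio with a Woolf-type 95% confidence interval. The cell counts are inferred from the percentages rather than taken from the paper, and the CI method may differ slightly from the one the original authors used.

```python
import math

# Reconstruct the case-control 2x2 table from the reported percentages
# (PUBMED:18922587): 13.9% of 180 cases and 2.2% of 180 controls had a
# prior adenotonsillectomy. Counts are inferred, not taken from the paper.
cases_exposed, cases_unexposed = 25, 155        # ~13.9% of 180
controls_exposed, controls_unexposed = 4, 176   # ~2.2% of 180

# Odds ratio: cross-product (a*d) / (b*c) of the 2x2 table.
odds_ratio = (cases_exposed * controls_unexposed) / (cases_unexposed * controls_exposed)

# Woolf (log) approximation for the 95% confidence interval.
se_log_or = math.sqrt(1 / cases_exposed + 1 / cases_unexposed
                      + 1 / controls_exposed + 1 / controls_unexposed)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f} (95% CI {lo:.2f}-{hi:.2f})")
# -> OR = 7.10, matching the published estimate; the interval differs
#    slightly from the published 2.52-19.93 because the exact CI method
#    used in the study is not reported.
```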
Instruction: Is there a negative interpretation bias in depressed patients? Abstracts: abstract_id: PUBMED:27262590 Magnitude of negative interpretation bias depends on severity of depression. The present study investigated the hypothesis that the magnitude of negative interpretation bias displayed by those with depression is related to the degree of depression they experience. Seventy one depressed participants (scoring 14 and above on the Beck Depression Inventory II) completed tasks spanning three domains of possible negative interpretations: semantic ambiguity; nonverbal ambiguity and situational ambiguity. Regression analyses revealed that just under half of the variance in depressive symptom severity was explained by the combination of negative interpretation bias tasks, with the strongest predictor of depressive symptom severity being negative interpretation of semantic ambiguity when reading ambiguous text descriptions. Subsidiary group analyses confirmed that severely depressed individuals interpreted emotionally ambiguous information in a more negative way than did their mildly or moderately depressed counterparts. These findings indicate that the degree of negative interpretive bias is closely related to depression severity and that bias manifests especially strongly at the most severe levels of depression. Our findings may help us to refine cognitive theories of depression and be helpful in guiding therapy. abstract_id: PUBMED:27206684 Negative interpretation bias as a mechanism of the relationship between rejection sensitivity and depressive symptoms. Rejection sensitivity and negative cognitive biases have been identified as important risk factors for depression. Rejection sensitivity is defined as the tendency to anxiously anticipate, easily perceive, and overreact to rejection. Although prior studies have found an association between one component of rejection sensitivity, the tendency to anxiously anticipate rejection, and depression, little is known about the mechanisms through which anxious anticipation of rejection might confer depression risk. One possibility is that rejection anticipation leads to negatively biased interpretations, a cognitive risk factor for depression. Results from two studies (one cross-sectional, one longitudinal) indicate that negative interpretation biases mediate the association between the anxious anticipation of rejection and depressive symptoms. These findings indicate that interpretation biases represent a mechanism through which anxious anticipation of rejection confers risk for depression, and suggest that interventions designed to challenge negative interpretations may help to decrease depression risk among individuals who anticipate rejection. abstract_id: PUBMED:25009521 Positive interpretation bias predicts well-being in medical interns. Cognitive theories of emotion posit that affective responses may be shaped by how individuals interpret emotion-eliciting situations. This study tested whether individual differences in interpretation bias (i.e., interpreting ambiguous scenarios in a more negative or positive manner) independently predict trait resilience and depression in medical interns. Interpretation bias and trait resilience scores were assessed in 47 interns prior to their first internship. Depressive symptoms were assessed twice during internship. Nearly half of the sample (42%) scored above the cut-off for mild depressive symptoms during internship, a significant rise compared to the initial assessment. 
Those with a more positive interpretation bias had higher trait resilience (β = 0.44, p = 0.004) and a 6-fold decreased depressive symptom risk during internship (OR = 6.41, p = 0.027). The predictive power of a positive interpretation bias for decreased depression symptoms held over and above initial depressive symptoms, demographics and trait reappraisal. Assessing positive interpretation bias may have practical utility for predicting future well-being in at-risk populations. abstract_id: PUBMED:29567546 Body image related negative interpretation bias in anorexia nervosa. A distorted body image and pronounced body dissatisfaction are hallmarks of anorexia nervosa (AN) that typically result in dietary restraint and compensatory behaviours. Cognitive biases such as negative interpretation bias are considered key maintaining factors of these maladaptive cognitions and behaviours. However, little attention has been paid to empirical tests of whether negative interpretation bias exists in AN and to what degree it is associated with symptom severity. Participants in the present study were 40 women with AN and 40 healthy women with no history of an eating disorder. Body-related negative interpretation bias (i.e., a tendency to interpret ambiguous information about one's own body in a negative way) was measured by a Scrambled Sentences Task. Patients with AN showed a stronger body-related negative interpretation bias than healthy controls. Within both groups, negative interpretation bias correlated strongly and positively with AN symptom severity, and these effects were not moderated by levels of depressive symptoms. The findings support the idea that biased interpretation of body-related information is associated with the specific psychopathology of AN. Targeted, computerised interventions (e.g. interpretation bias modification) may help to alter these dysfunctional cognitive schemas that lie at the heart of AN. abstract_id: PUBMED:37383958 Uncovering Negative Interpretation Bias in Remitted/Recovered Depression with Laboratory Task. Objective: Research has demonstrated that individuals with a history of depression engage in complicated strategies (e.g., thought suppression) that may mask the possible existence of major depression. Increasing the mental strain, such as retrieving a six-digit number, may reveal depressive thinking in previously depressed individuals. This study examined the hypothesis that thought suppression could mask a cognitive vulnerability to depression and illustrated how cognitive tasks disrupt mind control. Method: This case-control study recruited 255 participants using convenience sampling at the Razi Educational and Therapeutic Psychiatric Center (Tehran, Iran) in 2021. Participants were divided into five groups and were evaluated with a scrambled sentence test (SST) after random assignment to either mental-load or no-mental-load conditions. The number of negative unscrambled statements was used as an index of negative interpretation bias. After gathering data, analysis of variance (ANOVA) for different group factors and conditions was carried out to test the main hypotheses. Results: The effect of the intervention provided to each group on the score of the Hamilton Depression Rating Scale (HDRS) was significant (F (4, 208) = 511.77, P < 0.001). A significant correlation (r = 0.36, P < 0.01) was found between depression (HDRS) and negative interpretive bias (SST).
ANOVA revealed a significant effect of group (F (4, 412) = 14.94, P < 0.001). The effect of the mental load was not significant (F (4, 412) = 0.09, P = 0.75), but the group × load interaction was significant (F (4, 412) = 5.03, P < 0.001). A post hoc test was used to draw multiple comparisons between the five groups. Conclusion: The results revealed that people who are vulnerable to depressive disorders are predominantly engaged in thought suppression, which can conceal their depressogenic thinking until cognitive demands consume their mind-control efforts. abstract_id: PUBMED:27279528 The association between ruminative thinking and negative interpretation bias in social anxiety. Cognitive models propose that both negative interpretations of ambiguous social situations and ruminative thoughts about social events contribute to the maintenance of social anxiety disorder. It has further been postulated that ruminative thoughts fuel biased negative interpretations; however, evidence is rare. The present study used a multi-method approach to assess ruminative processing following a social interaction (post-event processing by self-report questionnaire and social rumination by experience sampling method) and negative interpretation bias (via two separate tasks) in a student sample (n = 51) screened for high (HSA) and low social anxiety (LSA). Results support the hypothesis that group differences in negative interpretations of ambiguous social situations in HSAs vs. LSAs are mediated by higher levels of post-event processing assessed in the questionnaire. Exploratory analyses highlight the potential role of comorbid depressive symptoms. The current findings help to advance the understanding of the association between two cognitive processes involved in social anxiety and stress the importance of ruminative post-event processing. abstract_id: PUBMED:27267952 Hostile interpretation bias in depression. Background: Research suggests an important relationship between interpretation bias, hostility and Major Depressive Disorder (MDD). Extant literature has yet to examine hostile interpretation bias in clinically depressed samples; the current studies sought to fill this gap. Method: Study 1 participants included undergraduates who met criteria for MDD (n=36) or no anxiety or mood diagnosis (n=35). Each participant completed a structured clinical interview along with measures of depression, hostile interpretation bias, and trait hostility. In Study 2, a sample of treatment-seeking individuals with elevated trait anger completed measures of depression, hostile interpretation bias, and trait anger. Results: Study 1 demonstrated that, relative to the non-depressed group, individuals with depression displayed greater hostile interpretation bias but comparable levels of trait hostility. In Study 2, greater hostile interpretation bias was associated with greater depressive symptoms, and this relationship was independent of co-occurring trait anger. Limitations: The correlational nature of these studies precludes interpretation of causal relationships between constructs. Additionally, replication of these results should be sought in a larger, more diverse sample. Conclusion: Overall, the findings suggest hostile interpretation bias may play a unique role in depression and could be a treatable feature of interpersonal mechanisms maintaining MDD.
abstract_id: PUBMED:34101175 Interpreting ambiguous emotional information: Convergence among interpretation bias measures and unique relations with depression severity. This study aimed to examine the convergence among interpretation bias measures and their associations with depressive symptom severity. Research into interpretation biases employs measures of interpretation bias interchangeably; however, little is known about the relationship between these measures. Participants (N = 82 unselected undergraduate students; 59 female) completed four computer-based interpretation bias tasks in a cross-sectional study. Indirect measures, based on participants' reaction times, were not correlated with each other and had poor split-half reliability. Direct measures were more strongly correlated with depressive symptoms than indirect measures, but only the Scrambled Sentences Task explained a reliable unique portion of the variance in depressive symptoms. Interpretation bias tasks may not measure the same cognitive process and may differ in the extent to which they are a cognitive marker of depression-linked interpretation bias. These findings help to improve the measurement of and theory underlying interpretation bias and depressive symptoms. abstract_id: PUBMED:30496949 Cognitive training for modifying interpretation and attention bias in depression: Relevance to mood improvement and implications for cognitive intervention in depression. Depressed individuals interpret ambiguous information more negatively, and this has been suggested to be a central component of cognitive models of depression. In this study, we investigated the effectiveness of interpretation modification training in modifying interpretation bias in depression and its association with symptom reduction. In a double-blind, randomized controlled design, twenty-two depressed individuals (mean age = 19.86, SD = 1.16) were randomly assigned to the experimental and control groups. They completed 10 sessions of cognitive training with the Ambiguous Hallmark Program (AHP) over 5 weeks. Participants' interpretation bias and their depressive scores were assessed and compared before and after the intervention. Results showed that the AHP significantly decreased negative interpretation in the experimental group. Additionally, a significant decrease in the depressive scores was also observed in the intervention group compared to the control group. We also observed the transfer of learning from the AHP training to another interpretation bias task. The AHP can significantly modify negative interpretations and symptoms in depression, providing preliminary supporting evidence for its clinical application, especially in mild-to-moderate depression. Improved cognitive control over emotional information and unbiased attention to it could explain the effects of the interpretation modification paradigm. Further studies need to examine the efficacy of the AHP as a potential cognitive intervention in depression. abstract_id: PUBMED:30662572 Mindfulness, Interpretation Bias, and Levels of Anxiety and Depression: Two Mediation Studies. In two studies, cognitive interpretation bias was tested as a possible mediator of the relation between dispositional mindfulness and acceptance, respectively, on the one hand, and symptoms of depression and anxiety, on the other.
An undergraduate student sample (N = 133; 86% female, Mage = 19.8) and a convenience community sample (N = 186; 66% female, Mage = 36.5) were examined by means of an online questionnaire measuring dispositional mindfulness (FFMQ-SF; Study 1) and acceptance (AAQ-II; Study 2), anxiety (STAI-trait) and depressive (BDI-II) symptoms, and interpretation bias (with the interpretation bias task, IBT). Across both studies, results consistently showed the expected relations: greater mindfulness skills went together with a smaller cognitive interpretation bias and lower levels of depression and anxiety symptoms. More interestingly, interpretation bias served as a mediator in the relations between dispositional mindfulness and acceptance, respectively, and symptoms of depression and anxiety. These findings offer some more insight into the working mechanisms of mindfulness-based treatments for internalizing psychopathology. Answer: Yes, there is evidence of a negative interpretation bias in depressed patients. Studies have found that the magnitude of negative interpretation bias displayed by those with depression is related to the degree of depression they experience (PUBMED:27262590). Severely depressed individuals interpret emotionally ambiguous information in a more negative way than their mildly or moderately depressed counterparts (PUBMED:27262590). Negative interpretation biases have also been identified as a mechanism through which anxious anticipation of rejection confers risk for depression (PUBMED:27206684). Furthermore, individuals with a history of depression may engage in thought suppression strategies that can mask the existence of major depression, but when cognitive demands increase, such as through mental load tasks, their negative interpretation bias can be revealed (PUBMED:37383958). Negative interpretation bias is also associated with the specific psychopathology of anorexia nervosa (PUBMED:29567546), and hostile interpretation bias may play a unique role in depression, independent of co-occurring trait anger (PUBMED:27267952). Cognitive training has been shown to modify negative interpretation bias and reduce depressive symptoms (PUBMED:30496949), and mindfulness and acceptance have been found to reduce interpretation bias and, in turn, symptoms of depression and anxiety (PUBMED:30662572).
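Several of the abstracts above (PUBMED:27206684; PUBMED:30662572) test mediation: whether interpretation bias carries part of the association between a predictor (e.g., rejection sensitivity or mindfulness) and depressive symptoms. As a rough illustration of the regression logic behind such analyses, the sketch below runs a simple three-variable mediation test with a Sobel z on simulated data. It is a generic textbook procedure under assumed effect sizes, not the specific analysis used in those papers (which may have used bootstrapped indirect effects or structural equation models).

```python
import numpy as np
import statsmodels.api as sm

# Simulated data standing in for: X = mindfulness, M = interpretation
# bias, Y = depressive symptoms. The coefficients are arbitrary
# illustrative assumptions, not estimates from the cited studies.
rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)                       # predictor (e.g., mindfulness)
m = -0.5 * x + rng.normal(size=n)            # mediator (interpretation bias)
y = -0.2 * x + 0.6 * m + rng.normal(size=n)  # outcome (depressive symptoms)

def ols(endog, *exogs):
    """Fit an OLS regression of endog on an intercept plus exogs."""
    exog = sm.add_constant(np.column_stack(exogs))
    return sm.OLS(endog, exog).fit()

path_a = ols(m, x)     # X -> M: coefficient a
path_b = ols(y, x, m)  # X + M -> Y: the M coefficient is b
a, se_a = path_a.params[1], path_a.bse[1]
b, se_b = path_b.params[2], path_b.bse[2]

# Indirect effect and Sobel test for its significance; |z| > 1.96
# corresponds to p < .05 under the normal approximation.
indirect = a * b
sobel_z = indirect / np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
print(f"indirect effect a*b = {indirect:.3f}, Sobel z = {sobel_z:.2f}")
```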