input
stringlengths
6.82k
29k
Instruction: National Institutes of Health Funding in Plastic Surgery: A Crisis? Abstracts: abstract_id: PUBMED:32764178 Creation of a comprehensive training and career development approach to increase the number of neurosurgeons supported by National Institutes of Health funding. Objective: To increase the number of independent National Institutes of Health (NIH)-funded neurosurgeons and to enhance neurosurgery research, the National Institute of Neurological Disorders and Stroke (NINDS) developed two national comprehensive programs (R25 [established 2009] for residents/fellows and K12 [2013] for early-career neurosurgical faculty) in consultation with neurosurgical leaders and academic departments to support in-training and early-career neurosurgeons. The authors assessed the effectiveness of these NINDS-initiated programs to increase the number of independent NIH-funded neurosurgeon-scientists and grow NIH neurosurgery research funding. Methods: NIH funding data for faculty and clinical department funding were derived from the NIH, academic departments, and Blue Ridge Institute of Medical Research databases from 2006 to 2019. Results: Between 2009 and 2019, the NINDS R25 funded 87 neurosurgical residents. Fifty-three (61%) have completed the award and training, and 39 (74%) are in academic practice. Compared to neurosurgeons who did not receive R25 funding, R25 awardees were twice as successful (64% vs 31%) in obtaining K-series awards and received the K-series award in a significantly shorter period of time after training (25.2 ± 10.1 months vs 53.9 ± 23.0 months; p < 0.004). Between 2013 and 2019, the NINDS K12 has supported 19 neurosurgeons. Thirteen (68%) have finished their K12 support and all (100%) have applied for federal funding. Eleven (85%) have obtained major individual NIH grant support. Since the establishment of these two programs, the number of unique neurosurgeons supported by either individual (R01 or DP-series) or collaborative (U- or P-series) NIH grants increased from 36 to 82 (a 2.3-fold increase). Overall, NIH funding to clinical neurological surgery departments between 2006 and 2019 increased from $66.9 million to $157.3 million (a 2.2-fold increase). Conclusions: Targeted research education and career development programs initiated by the NINDS led to a rapid and dramatic increase in the number of NIH-funded neurosurgeon-scientists and total NIH neurosurgery department funding. abstract_id: PUBMED:36260312 Current Status of National Institutes of Health Research Funding for Women Surgeon-Scientists. Importance: Women have made substantial advancements in academic surgery, but research funding disparities continue to hamper their progress, and current literature on the status of National Institutes of Health (NIH) funding awarded to women surgeon-scientists appears to be conflicting. Objective: To examine gender-based differences in NIH funding awarded to surgeon-scientists by comparing total grant amounts awarded and the distribution of grants by gender and research type. Design, Setting, And Participants: This cross-sectional study was performed using a previously created database of NIH-funded surgeons from 2010 to 2020. Active physician data from the Association of American Medical Colleges were used to calculate total surgeon populations. This study was performed at the NIH using the NIH internal data platform, iSearch Grants. A total of 715 men and women surgeon-scientists funded by the NIH in 2010 and 1031 funded in 2020 were included in the analysis. Main Outcomes And Measures: The main outcome was the number of women among the total number of surgeons who received NIH grants and the total grant amounts awarded to them. Bivariate χ2 analyses were performed using population totals and substantiated by z tests of population proportions. Results: This study included 715 physicians (n = 579 men [81.0%]) in 2010 and 1031 physicians (n = 769 men [74.6%]) in 2020. In 2020, women comprised 27.4% of the surgical workforce and 25.4% of surgeons with research funding in the US, but they received only 21.7% of total NIH research funding awarded to all surgeons. The number of funded women surgeon-scientists, however, significantly increased from 2010 to 2020 (262 [25.4%] in 2020 vs 136 [19.0%] in 2010; P < .001) as did their funding ($189.7 million [21.7%] in 2020 vs $75.9 million [12.3%] in 2010; P < .001). Furthermore, the proportion of US women surgeons overall with NIH funding significantly increased in 2020 vs 2010 (0.7% vs 0.5%; P < .001). Basic science, clinical outcomes, and clinical trial R01 grants also increased among women surgeon-scientists. Women and men K grant holders had a similar mean (SD) number of R01 application attempts before success (2.7 [3.01] vs 2.3 [3.15]; P = .60) and similar K-to-R award conversion rates (23.5% vs 26.7%; P = .55). Conclusions And Relevance: This cross-sectional study found an increasing number of women surgeon-scientists receiving NIH funding in 2020 vs 2010 as well as increases in the median grant amounts awarded. Although these results are promising, a discrepancy remains in the proportion of women in the surgical workforce compared with those funded by the NIH and the total grant amounts awarded to them. abstract_id: PUBMED:35835627 NIH Funding Across Surgical Specialties; How Do Women Fare? Background: Much has been written about the under-representation of women in academic medicine. However, no study has comprehensively described the gender-based trends of National Institutes of Health funding across surgical specialties; this study provides such an overview. Methods: We queried a previously created database to identify both male and female National Institutes of Health-funded surgeons. Surgical specialties and subspecialties were determined based upon formal training. Total grant costs and average costs per R01 and K grant were calculated and compared. Bivariate χ2 analyses were performed using population totals. Results: In 2020, the specialties with the highest proportion of National Institutes of Health-funded female surgeon-scientists were obstetrics and gynecology (57%) and vascular surgery (40%). The general surgery subspecialties with the highest proportion of women were breast (85%), endocrine (58%), and colorectal surgery (40%). An analysis of total grant costs in 2020 revealed that in most specialties, the proportion of funding held by women was substantially less than the proportion of women investigators. In obstetrics and gynecology, women comprised 57% of surgeons, but held only 46% of the funding. Similarly, in breast surgery, women comprised 85% of surgeons, but held only 45% of the funding. Women and men had similar changes in the average total cost per R01 and K grant awarded from 2010 to 2020. In 2020, women were awarded less than men per R01 grant in general, otolaryngology, plastic and reconstructive, urology, and vascular surgery. Conclusion: Although female surgeon-scientists have made significant advances in some surgical specialties, they continue to lag in others. An in-depth analysis of the factors contributing to these trends is necessary to achieve gender parity across all academic surgical specialties. abstract_id: PUBMED:25392826 Focus on China: should clinicians engage in research? and lessons from other countries. Following tremendous economic progress, society in China is also undergoing fundamental changes, as is the healthcare system. Currently the training of Chinese young doctors and their future work placement are all undergoing re-structuring. We compiled some thoughts and opinions on the topic of 'should clinicians in China engage in research?', and publish them as a special report in this issue of Quantitative Imaging in Medicine and Surgery (QIMS). The contributors included some editorial members of this journal, and a few personal friends. Besides a few minor linguistic corrections, opinions from the contributors have not been edited, as we want authors' to write their own independent views. However, it is possible there is a selection bias of the contributors of this paper; more likely those who are interested in the medical research are selected and therefore the views of the contributors may not be generalizable. To compare the structure and funding of China with other countries, authors from UK, The Netherlands, France, and USA are also invited. abstract_id: PUBMED:28205233 Priorities and funding for global cancer surgery research. N/A abstract_id: PUBMED:36643765 An In-Depth Examination of Surgeon-Scientists' NIH-Funded Areas of Research. Background: Few studies have examined which National Institutes of Health (NIH) Institutes or Centers (ICs) provide most of the funding to surgeons, nor examined the specifics of their research focus areas. A better understanding of both the goals of ICs and research focus areas for surgeons may facilitate further alignment of the two. Methods: A previously created database of NIH-funded surgeons was queried. To understand trends in funding, total grant cost was calculated for each IC in 2010 and 2020, and distribution of IC funds to each principal investigator (PI) category (surgeons, other physicians, and PhDs without a medical degree) was compared. Finally, total cost for Research Condition and Disease Categorization (RCDC) areas funded to surgeons compared to all of NIH was calculated. Statistical analyses were performed; a two-tailed p value of < 0.05 was considered significant. Results: The National Cancer Institute (NCI) awarded the largest percentage of all 2020 surgeon funding, 34.3% ($298.9M). Compared to the other ICs, surgeons held the largest percentage of the National Eye Institute's (NEI) total funding in 2010 and 2020 at 8.7% and 9.0%, respectively. The RCDC super category comprising the most funding for surgeons was health disparities with 14.5% of all surgeon funding, followed by neurology (13.8%) and cancer (11.4%). Surgeons were awarded 10.8% of NIH's transplant-related research, 7.0% of ophthalmology-related research, and 3.4% of cancer-related research in 2020. Conclusions: Our study shows surgeons have positioned themselves to examine new and myriad research topics while maintaining a focus on health disparities and cancer-related research. abstract_id: PUBMED:33973904 Utilization of the Department of Defense Peer-Reviewed Orthopaedic Research Program (PRORP): Combating Musculoskeletal Disease With PRORP. Introduction: Established in 2009, the Department of Defense (DoD) Peer-Reviewed Orthopaedic Research Program (PRORP) is an annual funding program for orthopaedic research that seeks to develop evidence for new clinical practice guidelines, procedures, technologies, and drugs. The aim was to help reduce the burden of injury for wounded Service members, Veterans, and civilians and to increase return-to-duty and return-to-work rates. Relative to its burden of disease, musculoskeletal injuries (MSKIs) are one of the most disproportionately underfunded conditions. The focus of the PRORP includes a broad spectrum of MSKI in areas related to unique aspect of combat- and some noncombat-related injuries. The PRORP may serve as an important avenue of research for nonmilitary communities by offering areas of shared interests for the advancement of military and civilian patient cohort MSKI care. The purpose of this study was to provide a descriptive analysis of the DoD PRORP, which is an underrecognized but high value source of research funding for a broad spectrum of both combat- and noncombat-related MSKIs. Methods: The complete PRORP Funding Portfolio for FY2009-FY2017 was obtained from the Congressionally Directed Medical Research Programs (CDMRP), which includes 255 awarded grants. Information pulled from the CDMRP included awardee descriptors (sex, education level, affiliated institution type, research specialty, and previous award winner [yes/no]) and grant award descriptors (grant amount, year, primary and secondary awarded topics, research type awarded, and mechanism of award). Distribution statistics were broken down by principal investigator specialty, sex, degree, organization type, research type, mechanism, and research topics. Distribution and statistical analysis was applied using R software version 3.6.3. Results: From FY2009 to 2017, $285 million was allocated for 255 PRORP-funded research studies. The seven major orthopaedic subspecialties (foot and ankle, hand, musculoskeletal oncology, pediatrics, spine, sports medicine, and trauma) were represented. Trauma and hand subspecialists received the largest amount of funding, approximately $28 (9.6%) and $22 million (7.1%), respectively. However, only 22 (8.6%) and 26 (10.2%) of the primary investigators were trauma and hand subspecialists, respectively. The primary research categories were diverse with the top five funded PRORP topics being rehabilitation ($53 million), consortia ($39 million), surgery ($37 million), device development ($30 million), and pharmacology ($10 million). Discussion: The CDMRP funding represents an excellent resource for orthopaedic medical research support that includes trauma and nontrauma orthopaedic conditions. This study serves to promote and communicate the missions of the PRORP both within and beyond the DoD to raise awareness and expand access of available funding for orthopaedic focused research. Significance/clinical Relevance: A likelihood exists that this project will provide sustained and powerful influence on future research by promoting awareness of orthopaedic funding sources. Level Of Evidence: Level III. abstract_id: PUBMED:36266134 Endocrine surgery and the surgeon-scientist: Bridging the gap between a rich history and a bright future. Introduction: We evaluate National Institutes of Health (NIH) data to describe endocrine surgical research performed by surgeons in the United States. Methods: An internal NIH database was queried for endocrine surgery-related grants awarded to surgeons in 2010, 2015, and 2020. The grants were then compared based on cost, grant type, research type, and endocrine topic. Results: Eighteen grants ($6.4 M) focused on endocrine surgery-related research topics were identified in 2020, 17 ($7.3 M) in 2015, and 11 ($3.8 M) in 2010. In 2020, 14 grants were basic science and 4 were clinical outcomes, and pancreatic endocrine disease and thyroid disease each comprised 6 grants. R01 and R21 grants comprised 10 (55.6%) of the grants in 2020, compared to 10 (58.5%) in 2015 and 8 (72.7%) in 2010, while K08 and K23 grants increased to 4 (22.2%) in 2020 from 2 (11.8%) in 2015 and none in 2010. Conclusion: There were more K-awards focused on endocrine surgery-related research in 2020 compared to 2015 and 2010, suggesting the pipeline is growing. abstract_id: PUBMED:37916404 Transforming the Future of Surgeon-Scientists. Objective: To create a blueprint for surgical department leaders, academic institutions, and funding agencies to optimally support surgeon-scientists. Background: Scientific contributions by surgeons have been transformative across many medical disciplines. Surgeon-scientists provide a distinct approach and mindset toward key scientific questions. However, lack of institutional support, pressure for increased clinical productivity, and growing administrative burden are major challenges for the surgeon-scientist, as is the time-consuming nature of surgical training and practice. Methods: An American Surgical Association Research Sustainability Task Force was created to outline a blueprint for sustainable science in surgery. Leaders from top NIH-sponsored departments of surgery engaged in video and in-person meetings between January and April 2023. A strength, weakness, opportunities, threats analysis was performed, and workgroups focused on the roles of surgeons, the department and institutions, and funding agencies. Results: Taskforce recommendations: (1) SURGEONS: Growth mindset : identifying research focus, long-term planning, patience/tenacity, team science, collaborations with disparate experts; Skill set : align skills and research, fill critical skill gaps, develop team leadership skills; DEPARTMENT OF SURGERY (DOS): (2) MENTORSHIP: Chair : mentor-mentee matching/regular meetings/accountability, review of junior faculty progress, mentorship training requirement, recognition of mentorship (eg, relative value unit equivalent, awards; Mentor: dedicated time, relevant scientific expertise, extramural funding, experience and/or trained as mentor, trusted advisor; Mentee : enthusiastic/eager, proactive, open to feedback, clear about goals; (3) FINANCIAL SUSTAINABILITY: diversification of research portfolio, identification of matching funding sources, departmental resource awards (eg, T-/P-grants), leveraging of institutional resources, negotiation of formalized/formulaic funds flow investment from academic medical center toward science, philanthropy; (4) STRUCTURAL/STRATEGIC SUPPORT: Structural: grants administrative support, biostats/bioinformatics support, clinical trial and research support, regulatory support, shared departmental laboratory space/equipment; Strategic: hiring diverse surgeon-scientist/scientists faculty across DOS, strategic faculty retention/ recruitment, philanthropy, career development support, progress tracking, grant writing support, DOS-wide research meetings, regular DOS strategic research planning; (5) COMMUNITY AND CULTURE: Community: right mix of faculty, connection surgeon with broad scientific community; Culture: building research infrastructure, financial support for research, projecting importance of research (awards, grand rounds, shoutouts); (6) THE ROLE OF INSTITUTIONS: Foundation: research space co-location, flexible start-up packages, courses/mock study section, awards, diverse institutional mentorship teams; Nurture: institutional infrastructure, funding (eg, endowed chairs), promotion friendly toward surgeon-scientists, surgeon-scientists in institutional leadership positions; Expectations: RVU target relief, salary gap funding, competitive starting salaries, longitudinal salary strategy; (7) THE ROLE OF FUNDING AGENCIES: change surgeon research training paradigm, offer alternate awards to K-awards, increasing salary cap to reflect market reality, time extension for surgeon early-stage investigator status, surgeon representation on study section, focused award strategies for professional societies/foundations. Conclusions: Authentic recommitment from surgeon leaders with intentional and ambitious actions from institutions, corporations, funders, and society is essential in order to reap the essential benefits of surgeon-scientists toward advancements of science. abstract_id: PUBMED:37119616 Perceptions of General Surgery Program Leadership and Residents on Research Sabbatical. Introduction: The incorporation of a 1-y- or 2-y research period during general surgery residency is increasingly common in many academic programs, yet often, it is heterogeneously structured and ill-defined. This survey-based observational study aimed to characterize the perceptions of general surgery program directors (PDs) and residents regarding an in-training, dedicated research sabbatical. Methods: Two surveys were conducted using Qualtrics software. One survey was sent to general surgery residency PDs, and the other was sent to general surgery residents currently on a research sabbatical. The primary aim of the survey was to assess the PDs' and research residents' perceptions of the research sabbatical. Results: Seven hundred and fifty-two surveys were analyzed, of which 120 were from PDs and 632 from research residents. Among the residents, 44.1% felt that the research time delayed their surgical training. Regarding research funding, 46.7% of the responding residents said that their residency program funded their research, 30.9% said they acquired funding independently, and 19.1% said that it came from a combination of residency program and independent funding. Finally, regarding how residents found their research opportunity, 42.7% said they found it independently and 53.3% said their program provided it. Conclusions: Research sabbaticals during residency may be considered essential to academic development. However, in this survey-based study, perceptions of research time and its structure varied greatly between PDs and residents. An intentional push toward developing guidelines for research sabbaticals may benefit residency program leadership and residents. Answer: The provided abstracts do not directly address the question of whether there is a crisis in National Institutes of Health (NIH) funding in plastic surgery. However, they do offer insights into NIH funding trends, challenges, and opportunities across various surgical specialties, including plastic surgery. From the abstracts, we can infer that NIH funding is a critical component of research and career development for surgeon-scientists, including those in plastic surgery. The creation of targeted research education and career development programs by the NINDS has led to a significant increase in the number of NIH-funded neurosurgeon-scientists and total NIH neurosurgery department funding (PUBMED:32764178). This suggests that similar programs could potentially benefit plastic surgery research funding as well. The abstracts also highlight the disparities in NIH funding based on gender, with women surgeon-scientists receiving a smaller proportion of total NIH research funding compared to their male counterparts, despite an increase in the number of funded women surgeon-scientists and their funding from 2010 to 2020 (PUBMED:36260312). This indicates a need for continued efforts to address funding disparities, which may also impact plastic surgery. In terms of surgical specialties, the abstracts reveal that while female surgeon-scientists have made significant advances in some areas, they continue to lag in others, and an in-depth analysis of the factors contributing to these trends is necessary to achieve gender parity across all academic surgical specialties (PUBMED:35835627). This could suggest that plastic surgery, like other specialties, may face similar challenges in achieving equitable funding for women surgeon-scientists. The abstracts do not provide specific data on the state of NIH funding in plastic surgery, nor do they declare a funding crisis in the field. However, they do emphasize the importance of institutional support, mentorship, financial sustainability, and strategic support for the success of surgeon-scientists (PUBMED:37916404), which are likely relevant to plastic surgery as well. In summary, while the abstracts do not directly answer the question of a funding crisis in plastic surgery, they do shed light on broader trends and challenges in NIH funding for surgical research that could be applicable to the field of plastic surgery.
Instruction: The Healthy People 2010 outcomes for the care of children with special health care needs: an effective national policy for meeting mental health care needs? Abstracts: abstract_id: PUBMED:18256914 The Healthy People 2010 outcomes for the care of children with special health care needs: an effective national policy for meeting mental health care needs? Objectives: To assess the effectiveness of the Maternal and Child Health Bureau's (MCHB) Six Core Outcomes for children with special health care needs (CSHCN) as indicators in measuring the degree to which mental health care needs are met. Methods: This study analyzes data from the 2001 National Survey of Children with Special Health Care Needs for 9,748 CSHCN who needed mental health care. Bivariate and logistic analyses were employed to investigate the impact of the MCHB's Six Core Outcomes on the probability of having an unmet need for mental health services. Results: Of the 2.3 million CSHCN in the U.S. who needed mental health care in 2001, almost one-fifth did not receive all of the mental health services that they needed. Ultimately, eight Outcomes and sub-categories of Outcomes were considered. Sixty-one percent of CSHCN with a need for mental health care had care that fulfills six of the eight considered Outcomes. Logistic analysis indicates that individual fulfillment of each of the Core Outcomes and fulfillment of additional Outcomes have a significant association with reducing the probability of having an unmet mental health care need for CSHCN. Conclusions: This study is the first attempt to apply the Six Core Outcomes to meeting the needs for mental health care among CSHCN. Estimates of unmet need for mental health care suggest that efforts can be made to improve access for CSHCN. The initial estimates generated by this study indicate that the MCHB Outcomes are important in meeting children's mental health needs and are important indicators for informing MCHB policy. abstract_id: PUBMED:35039741 Barriers to Mental Health Services for Parents and Siblings of Children with Special Health Care Needs. Caregivers of children with special health care needs (CSHCNs), especially those whose children have emotional, behavioral, or developmental problems (EBDPs), experience considerable strain and stress related to caring for their child's special needs. The enormous burden of caregiving can decrease a parent's ability to provide care, impacting the health of the child, the parents, and overall family functioning. To manage these challenges, these parents report the need for mental health care for themselves or their children, but many families with need go without care. Comprehensive knowledge about barriers to family mental health care for families of CSHCN is lacking. This study examines data from the National Survey of Children with Special Health Care Needs (2005/2006 and 2009/2010) to estimate time-specific, population-based prevalence of fourteen specific barriers to family mental health services and identifies risk factors for experiencing barriers to care for families of CSHCN. Among all CSHCN, cost barriers (33.5%) and lack of insurance (15.9%) were the most commonly reported obstacles to service access in 2005 and 2009, followed by inconvenient service times (12.3%), and locations (8.7%). Reports of these barriers increased significantly from 2005 to 2009. All types of barriers to family mental health services were reported significantly more frequently by CSHCN with EBDPs than by those without. CSHCN's race, insurance, and parent education and income levels were factors associated with cost barriers to family mental health care. Understanding barriers to mental health care for families of CSHCN is critical to creating policy and practice solutions that increase access to mental health care for these families. abstract_id: PUBMED:28340082 Disparities in Health Care Quality among Asian Children with Special Health Care Needs. There is a dearth of information on the quality of health care for Asian American children and particularly Asian children with special health care needs (CSHCN). The goal of this article was to determine whether there were disparities in quality of health care for Asian CSHCN, whose experiences have not been studied. Data were derived from the 2009-2010 National Survey of Children with Special Health Care Needs (ns = 355 non-Hispanic Asian children and 4,343 non-Hispanic white CSHCN). Bivariate and multivariate logistic regression analyses were conducted to examine the relationship between racial identity (that is, non-Hispanic white and non-Hispanic Asian) and quality of health care. Racial disparities in quality of health care were substantial between Asian and white CSHCN in 2009-2010. Asian parents were significantly less likely than white parents to report that their health care provider provided the specific information they needed, helped them feel like a partner in their child's care, and was sensitive to the family's values and customs. The development and testing of specific, targeted policy and practice interventions to reduce disparities in health care quality for these children are urgently needed. abstract_id: PUBMED:25897270 Health literacy and the Affordable Care Act: a policy analysis for children with special health care needs in the USA. Children with special health care needs (CSHCN) represent populations with chronic health conditions that are often high utilizers of health care. Limited health literacy has emerged as a key indicator of adverse health outcomes, and CSHCN from limited health literacy families are particularly vulnerable. The purpose of this policy analysis is to outline key provisions in the Affordable Care Act (ACA) that incorporate health literacy approaches for implementation and have implications for CSHCN in the USA. Several key provisions are incorporated in the ACA that involve health literacy and have implications for CSHCN. These include: expansion of public insurance coverage and simplifying the enrollment process, provisions assuring equity in health care and communication among all populations, improving access to patient-centered medical homes that can offer care coordination, ensuring enhanced medication safety by changing liquid medication labeling requirements, and provisions to train health care providers on literacy issues. More research is needed to determine how provisions pertaining to health literacy in the ACA are implemented in various states. abstract_id: PUBMED:31292839 Do Children with Special Health Care Needs with Anxiety have Unmet Health Care Needs? An Analysis of a National Survey. Objective: To describe differences in health care needs between Children with Special Health Care Needs (CSHCN) with and without anxiety and examine the association between anxiety and unmet health care needs. Methods: We analyzed data from the 2009/2010 national survey of CSHCN. The independent variable was anxiety. The main outcomes were health care needs and unmet needs. Covariates included demographics, other co-morbid conditions, and the presence and quality of a medical home. We used bivariate analyses and multivariable logistic regression to assess the relationships among anxiety, covariates, and the outcomes. We stratified our analysis by age (6-11 years, 12-17 years). Propensity score matched paired analysis was used as a sensitivity analysis. Results: Our final sample included 14,713 6-11 year-olds and 15,842 12-17-year-olds. Anxiety was present in 16% of 6-11 year-olds and 23% or 12-17 year-olds. In bivariate analyses, CSHCN with anxiety had increased health care needs and unmet needs, compared to CSHCN without anxiety. In multivariable analyses, only children 12-17 years old with anxiety had increased odds of having an unmet health care need compared to those children without anxiety (OR 1.44 [95% CI 1.17-1.78]). This was confirmed in the propensity score matching analysis (OR 1.12, [95% CI 1.02-1.22]). The specific unmet needs for older CSHCN with anxiety were mental health care (OR 1.54 [95% CI 1.09-2.17]) and well child checkups (OR 2.01 [95% CI 1.18-3.44]). Conclusion: Better integration of the care for mental and physical health is needed to ensure CSHCN with anxiety have all of their health care needs met. abstract_id: PUBMED:30272532 Unmet Need for Care Coordination Among Children with Special Health Care Needs. Socioeconomic disparities in access to care coordination have been noted among children with special health care needs (CSHCN). Following recent policy developments and technological innovation, care coordination has become more widespread, possibly leading to reduced disparity in care coordination access. This study investigates whether child and household characteristics remain associated with unmet need for care coordination among CSHCN. CSHCN (aged <18 years) requiring ≥2 types of health services in the past year were identified in the 2016 National Survey of Children's Health (NSCH). Care coordination was defined as help with arranging the child's care among different doctors or services. Children were classified as not needing care coordination, receiving sufficient care coordination (met need), or needing but not receiving care coordination (unmet need). Weighted multinomial logistic regression examined the association of child characteristics with this outcome. The analysis included 5622 children with no need for care coordination, 1466 with a met need, and 980 with unmet needs. Children with mental health conditions were more likely to have unmet rather than met needs for care coordination (odds ratio = 4.1; 95% confidence interval: 2.7, 6.1; P < 0.001). After multivariable adjustment, race/ethnicity, income, family structure, insurance coverage, place of birth, and use of English in the home were not associated with having unmet rather than met needs for care coordination. Among CSHCN, the latest data from NSCH reveal no evidence of previously described socioeconomic disparities in access to care coordination. Nevertheless, unmet needs for care coordination remain prevalent, especially among children with mental health conditions. abstract_id: PUBMED:32239364 A Profile of Care Coordination, Missed School Days, and Unmet Needs Among Oregon Children with Special Health Care Needs with Behavioral and Mental Health Conditions. To inform Oregon's Title V needs assessment activities, we aimed to (1) characterize the state's subpopulation of children with special health care needs (CSHCN) with behavioral and mental health conditions (B/MHC) and (2) determine associations of care coordination with missed school days and unmet needs for this subpopulation. We analyzed 2009-2010 National Survey of Children with Special Health Care Needs data on 736 Oregon CSHCN < 18 years, including 418 CSHCN with B/MHC. Among Oregon CSHCN with B/MHC: 48.9% missed ≥ 4 school days, 25% had ≥ 1 unmet health services need, and 14.8% had ≥ 1 unmet family support services need. Care coordination was associated with lower adjusted odds of ≥ 1 unmet health services need but was not significantly associated with missed school days or unmet family support services need. The approach to identify Oregon CSHCN with B/MHC may be adopted by other states endeavoring to improve health for this vulnerable subpopulation. abstract_id: PUBMED:26844190 Unmet needs for dental care in children with special health care needs. Objective: The unmet need for dental care is one of the greatest public health problems facing U.S. children. This issue is particularly concerning for children with special health care needs (CSHCN), who experience higher prevalence of unmet dental care needs. The primary purpose of this study was to investigate regional differences in unmet dental care needs for CSHCN. Using the Social Ecological Model as a framework, additional variables were analyzed for regional differences. It was hypothesized that (H1) unmet dental care needs would be high in the CSHCN population, (H2) there would be regional differences in unmet dental care needs in CSHCN, and (H3) there would be differences in specific individual, interpersonal (family), community (state), and policy level factors by region. Methods: Data were obtained from the 2009-2010 National Survey of CSHCN. SPSS was used for data management and analysis. Results: Each of the study hypotheses was supported for the sample of 40,242 CSHCN. The West region was more likely to have more unmet needs for preventive and specialized dental care in CSHCN than the reference region (Northeast). The South region followed the West region in unmet dental care needs. Statistically significant differences in individual, interpersonal (family), community (state) and policy factors were found by region. Conclusion: Further research is recommended. Effective strategies that include policy to address unmet dental care needs at multiple levels of intervention are suggested. abstract_id: PUBMED:27469109 Health Care Changes for Children with Special Health Care Needs, 2005-2011. Objective This study compared health care utilization of children with special health care needs in 2005/06 and 2009/10. Methods Using data from the National Survey of Children with Special Health Care Needs, this study compared the health care utilization of children with special health care needs in 2005/06 (n = 40,723) and 2009/10 (n = 40,242). Descriptive statistics characterize the sample during the 2005/06 and 2009/10 surveys. Logistic regression models examined the unmet needs for specific health care and support services, delayed care, coordinated care, and medical home. Results Compared to 2005/06, in 2009/10 children with special health care needs had greater unmet dental and therapy needs and less care coordination of health services as well as access to medical home services. Conclusions These findings indicate that additional measures are needed to improve the health care access of children with special health care needs. abstract_id: PUBMED:23597380 Healthy people 2010 leading health indicators: how children with special health care needs fared. Objectives: We compared estimates for children with and without special health care needs (SHCN) at 2 time periods for national health objectives related to the Healthy People 2010 leading health indicators (LHIs). Methods: Data were from the 2003 and 2007 National Surveys of Children's Health. Seven survey items were relevant to the LHIs and available in both survey years: physical activity, obesity, household tobacco use, current insurance, personal health care provider, past-month depressive symptoms, and past-year emergency department visits. Results: In 2003 and 2007, children with SHCN fared worse than those without SHCN with respect to physical activity, obesity, household tobacco exposure, depressive symptomology, and emergency department visits, but fared better on current insurance and having a personal health care provider. Physical activity and access to a personal health care provider increased for all children, whereas the absolute disparity in personal provider access decreased 4.9%. Conclusions: Significant disparities exist for key population health indicators between children with and without SHCN. Analyses illustrated how population-based initiatives could be used to frame health challenges among vulnerable populations. Answer: The Healthy People 2010 outcomes aimed to improve the care of children with special health care needs (CSHCN), including their mental health care needs. However, the effectiveness of this national policy in meeting the mental health care needs of CSHCN has been a subject of analysis and debate. According to the study analyzed in PUBMED:18256914, almost one-fifth of the 2.3 million CSHCN in the U.S. who needed mental health care in 2001 did not receive all the mental health services they needed. The study found that the Maternal and Child Health Bureau's (MCHB) Six Core Outcomes were significant indicators for reducing the probability of having an unmet mental health care need among CSHCN. Sixty-one percent of CSHCN with a need for mental health care had care that fulfilled six of the eight considered outcomes. This suggests that the MCHB outcomes are important in meeting children's mental health needs and are important indicators for informing MCHB policy. However, other studies have highlighted barriers and disparities that suggest the policy may not be fully effective. For instance, PUBMED:35039741 reported that cost barriers and lack of insurance were the most commonly reported obstacles to service access for CSHCN, with reports of these barriers increasing from 2005 to 2009. Additionally, PUBMED:28340082 found that Asian CSHCN experienced significant disparities in the quality of health care compared to white CSHCN, indicating that the policy may not be adequately addressing the needs of all demographic groups. Furthermore, PUBMED:31292839 indicated that CSHCN with anxiety had increased health care needs and unmet needs compared to those without anxiety, particularly among older children. This suggests that better integration of care for mental and physical health is needed to ensure all health care needs are met for CSHCN with anxiety. In summary, while the Healthy People 2010 outcomes have been associated with some improvements in meeting the mental health care needs of CSHCN, there remain significant barriers, disparities, and unmet needs that indicate the policy has not been fully effective for all CSHCN. Additional measures and targeted interventions may be necessary to improve access and quality of mental health care for this vulnerable population.
Instruction: Is there a role for surgery in the management of isolated secundum atrial septal defect in adults? Abstracts: abstract_id: PUBMED:24691795 Is there a role for surgery in the management of isolated secundum atrial septal defect in adults? Objectives: The aim of this retrospective study was to compare the short-term outcomes of surgical versus transcatheter closure of secundum atrial septal defect (ASD) in adults. Methods: From January 2008 to October 2012, 229 patients aged 18 years and older with significant isolated secundum ASDs were admitted to our hospital. We focused only on objective data obtained from their medical records. We collected and compared a total of 163 patients with isolated secundum ASD, who were treated with device occlusion or surgical closure, and had no missing data. Postoperative outcomes, rhythm disturbances, residual ASD, infection rates and length of hospital stay were compared. Results: Complete follow-up data were available for 42 (46%) patients in the device group and for 121 (87%) in the surgery group. Complete closure was observed in 41 of the 42 patients (97.6%) in the device group (p = 0.258) and in all 121 in the surgery group (100 %) (p > 0.05). There were no mortalities. The mean length of hospital stay in the device group was 1.92 ± 0.43 days and in the surgery group 7.14 ± 0.14 days (p < 0.01). Conclusions: The transcatheter approach for closure of ASDs is an effective and safe treatment option when performed for certain indications. Broadening the spectrum of indications may cause some adverse events. Surgical treatment remains a good alternative for all patients with ASDs and can be performed safely in order not to increase procedure-related complications. abstract_id: PUBMED:33123333 Best management in isolated right ventricular hypoplasia with septal defects in adults. Hypoplastic right ventricle is a rare congenital disease usually associated with pulmonary atresia or tricuspid atresia. Isolated right ventricular hypoplasia is a rare anomaly without important valvular abnormalities. It is associated with inter atrial septal defects leading to the right-to-left shunting of blood. Patients with isolated right ventricular hypoplasia usually have different and variable courses. In some patients, it is recognized in the perinatal period and necessitates prompt intervention; nonetheless, there are some reports of this anomaly in old age with no significant symptoms. In this report, we describe the clinical data and management of 6 adult cases with isolated right ventricular hypoplasia treated medically or surgically based on the severity of the disease and symptoms and then offer an in-depth discussion regarding this rare anomaly. abstract_id: PUBMED:21234294 Comparison of the response of the right ventricle with endovascular occlusion and surgical closure in adults with atrial septal defect one year after intervention. Background: Use of the Amplatzer septal occluder (ASO) for the closure of secundum atrial septal defect (ASD) has recently become the procedure of choice, while earlier the only treatment for ASD was surgical closure. This study compares the right ventricular indices of the ASO group with the surgical closure group one year after intervention in adults. Methods: From January 2008 to February 2010, 38 patients with isolated atrial septal defect of the secundum type one year after surgical (n = 20, age = 27 ± 4 years, 13 females, 7 males) or Amplatzer septal occluder closure (n = 18, age = 25 ± 4 years, 12 females, 6 males) were studied. At the same time, thirty-one age-matched normal subjects (age = 26 ± 6 years, 23 females, 9 males) were included as the control group. Strain and strain rate of the right ventricle were measured. Results: The mean values of strain of the midportion were -26% ± 11.7%, -8.9% ± 4.2%, and 24.5% ± 7.4% (P < 0.001). Strain rates of the midportion were -2.19 ± 0.6 s(-1), -1.2 ± 0.4 s(-1), -1.9 ± 0.6 s(-1) (P < 0.001) in ASO, surgery, and control groups, respectively. Conclusion: This study showed that the right ventricle might show better performance in the ASO than the surgery group in adults with ASD in midterm follow-up. abstract_id: PUBMED:25221697 Congenital heart diseases in adults: a review of echocardiogram records in enugu, South-East Nigeria. Background: Congenital abnormalities of the heart and cardiovascular system are reported in almost 1% of live births, and about half of these children need medical or surgical help in infancy. In the first decade, a further 25% require surgery to maintain or improve their life. Only 10% survive to adolescence without treatment. Of these 10%, however, many live a normal life for years before their abnormality is discovered. Aim: The aim of this study was to find the most common congenital heart diseases in adults presenting for echocardiographic examination in Enugu, and to determine whether there are any gender differences in frequency. Materials And Methods: The consecutive echocardiogram reports of 5058 adults done over a period of 9 years (2003-2012) were retrospectively reviewed. All adults who had congenital anomaly on transthoracic echocardiography were included in the study. Results: Congenital heart diseases were found in 115 adults representing 2.5% of the adult population (115/4539). The most common congenital anomalies were ventricular septal defects (VSD) - 31.3%, (36/115), atrial septal defects - 28.7% (33/115) and tetralogy of fallot - 10.4% (12/115). Conclusion: VSD are the most common congenital heart diseases in adults presenting for echocardiographic examination in Enugu, Nigeria. abstract_id: PUBMED:28616279 Benefits of a right anterolateral minithoracotomy rather than a median sternotomy in isolated tricuspid redo procedures. Background: To compare results of tricuspid valve (TV) reoperation between incision via a right anterolateral minithoracotomy (RAMT) and median sternotomy (MS). Methods: We performed a retrospective analysis of 49 patients who underwent isolated TV surgery as a reoperation at our institution between 2006 and 2015. Previous cardiac operations included mitral, aortic, and TV surgeries, atrial septal defect repair, and pericardiectomy. The mean age of the patients was 51.9±12.8 years, 14 (28%) were male and 35 (72%) were female. Follow-up was 95% (38/40) complete, with a mean duration of 41.3±19.5 months. Results: Perioperative demographic and laboratory tests did not show any significant differences between the RAMT and MS groups. The drainage volume, total red cell unit, total serum volume and platelet were significantly different 1150±803.5/2,270±1,920, 4.8±4.1/8.7±8.9, 478.2±488.9/950.0±857.6, 0.04±0.21/0.38±0.64 (P<0.05), while other perioperative data were similar. There were no significant differences in early postoperative death and complications between the RAMT and MS groups. A multivariate linear regression analysis predicted that serum creatinine (Scr), age, and MS group were independent risk factors for bleeding. The Cox regression demonstrated that the MS group had a longer drainage duration (P<0.05) and had a relative hazardous risk (HR) of 2.691 (1.328, 5.450 CI) compared with the RAMT group. Conclusions: The RAMT approach is an alternative, safe, and feasible procedure for isolated TV reoperation. It has the advantages of less drainage and reduced requirement for blood products. abstract_id: PUBMED:25441004 Current readings: long-term management of patients undergoing successful pediatric cardiac surgery. As of 2000, more adults than children are alive with congenital heart disease. Each year, more of these adults with congenital heart disease undergo surgery. Adults with congenital heart disease require lifelong surveillance, follow-up imaging, and clinical decision making by appropriately trained and familiar physicians and extenders. Three common challenges facing adults with congenital heart disease are the neglected patient, weak programmatic infrastructure, and the planning and management of pregnancy in the adult with congenital heart disease. Many challenges must be overcome in order to establish and maintain a comprehensive medical and surgical program for adults with congenital heart disease. Three common and challenging groups of adults with congenital heart disease who may require surgery are adults requiring tricuspid valve surgery for Ebstein's malformation, those requiring reoperation for failing Fontan circulation, and those requiring pulmonary valve replacement (PVR). Surgery for Ebstein's anomaly is now offered to older patients at low risk and with good late outcome. The operation includes tricuspid valve repair or replacement and frequent concomitant procedures such as atrial septal defect closure, arrhythmia surgery (the Maze procedure), and coronary artery bypass grafting. Patients undergoing previous iterations of the Fontan operation are especially prone to arrhythmia and failing circulation. Fontan conversion with arrhythmia surgery and pacemaker therapy was developed by Constantine Mavroudis, Barbara Deal, and Carl Backer to treat these challenging patients, with the first such operation performed in 1994. Fontan conversion involves: (1) conversion of the previously created atriopulmonary connection to a total cavopulmonary connection (TCPC), extracardiac Fontan; (2) arrhythmia surgery, typically with a modified biatrial Maze procedure along with placement of an antitachycardia, dual-chamber pacemaker with steroid-eluting epicardial leads in all patients; and (3) concomitant procedures to treat associated lesions. The need for pulmonary valve replacement is increasing for many adults with congenital heart disease. In the past, chronic pulmonary regurgitation following repair of tetralogy of Fallot was considered benign. Current evidence demonstrates that chronic pulmonary regurgitation causes significant morbidity by producing right ventricular dilatation and dysfunction, exercise intolerance, arrhythmias, and sudden death. Multiple options exist for pulmonary valve replacement including several recent developments such as pulmonary valve replacement with a hand-sewn polytetrafluoroethylene bicuspid valve and percutaneous pulmonary valve replacement. Reoperative cardiac surgery is common in adults with congenital heart disease. Although a history of previous cardiac surgery does not independently confer a significant incremental risk of operative mortality, patients with the greatest number of previous surgeries appear to be a higher risk group. Multi-institutional data about adults with congenital heart disease from The Society of Thoracic Surgeons Congenital Heart Surgery Database can be used to estimate prognosis and council patients and their families. The six manuscripts reviewed in this article have been selected to give a flavor of the state of the art in the domain of caring for adults with congenital heart disease and to provide important information about the long term management of patients undergoing successful pediatric cardiac surgery. abstract_id: PUBMED:32990806 Neonatal intra-atrial baffle repair for isolated ventricular inversion with left isomerism: a case report. Background: Discordant atrioventricular connection with concordant ventriculoarterial connection, otherwise known as isolated ventricular inversion (IVI), is an extremely rare congenital cardiac malformation. Reports on the corrective surgery for this anomaly in neonates are few, and the procedure is difficult and complicated. Herein, we report our use of atrial septostomy as a palliative procedure followed by corrective surgery for the repair of neonatal IVI with situs ambiguous(inversus) morphology. Case Presentation: A 2-day-old girl weighing 3.5 kg was admitted to our hospital with a low oxygen saturation (SpO2) of 70% She was diagnosed with IVI [situs ambiguous(inversus), D-loop, and D-Spiral], atrial septal defect, patent ductus arteriosus (PDA), interrupted inferior vena cava with azygos continuation to the left superior vena cava (SVC), and polysplenia by transthoracic echocardiography and cardiac computed tomography. We planned to perform corrective surgery and decided to first increase interatrial mixing by performing surgical atrial septostomy and PDA ligation 7 days after birth. However, despite the surgical septostomy, pulmonary venous blood flowed toward the right ventricle via the tricuspid valve rather than toward the left-sided atrium and hypoxemia persisted. We decided to perform the intra-atrial switch procedure at the age of 17 days via a re-median sternotomy. The cardiopulmonary bypass (CPB) circuit was established with ascending aorta and venous drainage through the SVC and hepatic veins. Utilizing a left-sided atrium(l-A) approach, a bovine pericardial patch was used for the intra-atrial baffle, which was trimmed into a trouser-shaped patch. Continuous suture using the patch was lying from the front of the right-sided upper pulmonary vein and rerouted SVC, hepatic vein, and coronary sinus to the tricuspid valve. Overall, CPB weaning proceeded smoothly; however, direct current cardioversion was performed for junctional ectopic tachycardia. The postoperative course was uneventful. Her postoperative SpO2 improved (approximately 99-100%); overall, the patient showed clinical improvement. Discharge echocardiography showed normal biventricular function and an intact atrial baffle with no venoatrial or atrioventricular obstruction. Conclusion: We successfully performed an intra-atrial switch procedure for isolated ventricular inversion in a neonate. Long-term follow-up will be necessary to ensure the maintenance of optimal cardiac function. abstract_id: PUBMED:32660338 Impending Paradoxical Embolism: Perioperative Management and Role of Transesophageal Echocardiography. Impending paradoxical embolism is a rare but potentially life-threatening complication of venous thromboembolism that is usually associated with acute pulmonary embolism and a right to left atrial shunt. Patients may have associated right ventricular pressure or volume overload with subsequent failure. Transesophageal echocardiography is the preferred diagnostic test of choice in this patient group. Definitive management has yet to be clearly defined. However, emergent surgical removal of the entrapped intracardiac blood thrombus may be necessary. In this article, we review a case of impending paradoxical embolism managed surgically and describe the perioperative anesthetic considerations in this patient population, along with the role of intraoperative transesophageal echocardiography. abstract_id: PUBMED:29460874 Robot-assisted atrial septal defect closure in adults: first experience in Russia Aim: To analyze immediate results of minimally invasive robot-assisted atrial septal defect (ASD) closure in adults. Material And Methods: For the period from March 2012 to November 2016 sixty patients with contraindications to endovascular procedure have undergone robot-assisted atrial septal defect closure at Meshalkin Siberian Federal Biomedical Research Center. Mean age was 34.5±11.3 years, body mass index - 24.6±4.0 kg/m2. 48 (80%) patients had NYHA class II before surgery. In 37 (61.7%) patients isolated ASD with deficiency or absence of one edge was diagnosed, isolated ASD with primary septum aneurysm - in 16 (26.7%) cases, 7 (11.6%) patients had reticulate ASD. 5 (8.3%) patients had concomitant tricuspid valve insufficiency required surgical repair (suture annuloplasty). All operations were performed under cardiopulmonary bypass with peripheral cannulation. Right-sided anterolateral mini-thoracotomy was used in the first 43 patients. Following 17 patients underwent completely endoscopic procedure. Depending on the shape, size and anatomical features of the defect we performed suturing (14 patients, 23.3%) or repair with xenopericardial patch (46%, 76.6%). Results: Mean CPB and aortic cross-clamping time was 89.1±28.7 and 24.8±9.5 min, respectively. Postoperative variables: mechanical ventilation 3.3±1.5 hours, ICU-stay - 18.2±3.7 hours, postoperative hospital-stay - 13.4±5.7 days. There were no mortality and any life-threatening intra- and postoperative complications. Cases of conversion to thoraco-/sternotomy and postoperative bleeding followed by redo surgery were also absent. 23 patients were followed-up within 1 year, 6 patients - within 2 years, 3 patients - within 3 years. All patients were in NYHA class I-II with 100% freedom from ASD recanalization and redo surgery. According to echocardiography data there were decreased right heart, pulmonary artery pressure and preserved left ventricular function in early postoperative period and 1 year after surgery. Conclusion: In view of favorable course of postoperative period, no significant specific complications and encouraging immediate results we can talk about endoscopic robot-assisted ASD closure in adults as a safe and effective alternative to surgical treatment. abstract_id: PUBMED:20352137 Canadian Cardiovascular Society 2009 Consensus Conference on the management of adults with congenital heart disease: shunt lesions. With advances in pediatric cardiology and cardiac surgery, the population of adults with congenital heart disease (CHD) has increased. In the current era, there are more adults with CHD than children. This population has many unique issues and needs. Since the 2001 Canadian Cardiovascular Society Consensus Conference report on the management of adults with congenital heart disease, there have been significant advances in the field of adult CHD. Therefore, new clinical guidelines have been written by Canadian adult CHD physicians in collaboration with an international panel of experts in the field. Part I of the guidelines includes recommendations for the care of patients with atrial septal defects, ventricular septal defects, atrioventricular septal defects and patent ductus arteriosus. Topics addressed include genetics, clinical outcomes, recommended diagnostic workup, surgical and interventional options, treatment of arrhythmias, assessment of pregnancy risk, and follow-up requirements. The complete document consists of four manuscripts, which are published online in the present issue of The Canadian Journal of Cardiology. The complete document and references can also be found at www.ccs.ca or www.cachnet.org. Answer: Yes, there is a role for surgery in the management of isolated secundum atrial septal defect (ASD) in adults. A retrospective study comparing the short-term outcomes of surgical versus transcatheter closure of secundum ASD in adults found that both approaches are effective and safe treatment options. The study reported complete closure in 100% of the surgery group and 97.6% in the device group, with no mortalities in either group. However, the mean length of hospital stay was significantly shorter in the device group compared to the surgery group (PUBMED:24691795). Another study comparing the response of the right ventricle with endovascular occlusion and surgical closure in adults with ASD one year after intervention showed that the right ventricle might perform better in the device group (Amplatzer septal occluder) than the surgery group in adults with ASD in midterm follow-up (PUBMED:21234294). The Canadian Cardiovascular Society 2009 Consensus Conference on the management of adults with congenital heart disease also provides guidelines for the care of patients with ASD, including recommendations for surgical and interventional options (PUBMED:20352137). Furthermore, robot-assisted atrial septal defect closure in adults has been reported as a safe and effective alternative to surgical treatment, with favorable postoperative outcomes and no significant specific complications (PUBMED:29460874). In conclusion, surgery remains a good alternative for all patients with ASDs and can be performed safely to avoid increasing procedure-related complications. It is particularly relevant when transcatheter closure is not feasible or contraindicated (PUBMED:24691795).
Instruction: Negative affect, relapse, and Alcoholics Anonymous (AA): does AA work by reducing anger? Abstracts: abstract_id: PUBMED:20409438 Negative affect, relapse, and Alcoholics Anonymous (AA): does AA work by reducing anger? Objective: Anger and other indices of negative affect have been implicated in a stress-induced pathway to relapse. The Alcoholics Anonymous (AA) literature states that reduction of anger is critical to recovery, yet this proposed mechanism has rarely been investigated. Using lagged, controlled hierarchical linear modeling analyses, this study investigated whether AA attendance mobilized changes in anger and whether such changes explained AA-related benefit. Method: Alcohol-dependent adults (N = 1,706) receiving treatment as part of a clinical trial were assessed at intake and at 3, 6, 9, 12, and 15 months. Results: Findings revealed substantially elevated levels of anger compared with the general population (98th percentile) that decreased over 15-month follow-up but remained high (89th percentile). AA attendance was associated with better drinking outcomes, and higher levels of anger were associated with heavier drinking. However, AA attendance was unrelated to changes in anger. Conclusions: Although support was not found for anger as a mediator, there was strong convergence between AA's explicit emphasis on anger and the present findings: Anger appears to be a serious, enduring problem related to relapse and heavy alcohol consumption. Methodological factors may have contributed to the lack of association between AA and anger, but results suggest that AA attendance alone may be insufficient to alleviate the suffering and alcohol-related risks specifically associated with anger. abstract_id: PUBMED:20102345 Mechanisms of behavior change in alcoholics anonymous: does Alcoholics Anonymous lead to better alcohol use outcomes by reducing depression symptoms? Rationale: Indices of negative affect, such as depression, have been implicated in stress-induced pathways to alcohol relapse. Empirically supported continuing care resources, such as Alcoholics Anonymous (AA), emphasize reducing negative affect to reduce relapse risk, but little research has been conducted to examine putative affective mechanisms of AA's effects. Methods: Using lagged, controlled, hierarchical linear modeling and mediational analyses this study investigated whether AA participation mobilized changes in depression symptoms and whether such changes explained subsequent reductions in alcohol use. Alcohol-dependent adults (n = 1706), receiving treatment as part of a clinical trial, were assessed at intake, 3, 6, 9, 12 and 15 months. Results: Findings revealed elevated levels of depression compared to the general population, which decreased during treatment and then remained stable over follow-up. Greater AA attendance was associated with better subsequent alcohol use outcomes and decreased depression. Greater depression was associated with heavier and more frequent drinking. Lagged mediation analyses revealed that the effects of AA on alcohol use was mediated partially by reductions in depression symptoms. However, this salutary effect on depression itself appeared to be explained by AA's proximal effect on reducing concurrent drinking. Conclusions: AA attendance was associated both concurrently and predictively with improved alcohol outcomes. Although AA attendance was associated additionally with subsequent improvements in depression, it did not predict such improvements over and above concurrent alcohol use. AA appears to lead both to improvements in alcohol use and psychological and emotional wellbeing which, in turn, may reinforce further abstinence and recovery-related change. abstract_id: PUBMED:511023 Alcoholics Anonymous, alcoholism counseling, and social work treatment. The authors suggest that the misunderstanding by many social workers of the particular values of Alcoholics Anonymous and the skills of alcoholism counselors can involve alcoholic patients themselves in the stress of reconciling different approaches to treatment. They call upon all who treat alcoholics to build mutual trust and cooperation on two assumptions basic to all forms of treatment-that alcoholism is a disease and abstinence is essential to recovery. abstract_id: PUBMED:3347069 The relationship between relapse and involvement in alcoholics anonymous. Recovering alcoholic members (N = 59) of Alcoholics Anonymous (A.A.) were asked to complete a questionnaire to assess the occurrence of relapse and its correlation to their level of involvement in the A.A. program. The questionnaire was distributed to five different A.A. groups. Subjects were asked to rate themselves, using a Likert scale, on their level of involvement in the A.A. program and were also asked to state if they had relapsed during the course of their involvement with A.A. and if so, to indicate their most recent relapse. Depending on their answers to the relapse question, subjects were placed in either a relapse or a no relapse group. The level of involvement in A.A. was found to be related to whether relapse occurred. Those in the relapse group rated themselves lower in every area of involvement listed on the questionnaire than those in the no relapse group. The most significant area of involvement, where the greatest differences between subjects in each group were found, was in reaching out to other members of A.A. for help and in the use of a sponsor. abstract_id: PUBMED:13161454 Alcoholics Anonymous N/A abstract_id: PUBMED:15943643 Alcoholics Anonymous: cult or cure? Objective: To discuss the mechanism of action, the efficacy and the safety of Alcoholics Anonymous (AA) in the treatment of alcoholism. Method: The published works on effective treatments for alcoholism is briefly reviewed and a prospective multidisciplinary follow-up of recovery from alcoholism in two community cohorts of adolescent males followed from 1940 until the present day is reviewed. Conclusions: The suggested mechanism of action of AA is that it employs four factors widely shown to be effective in relapse prevention in addictions: external supervision, substitute dependency, new caring relationships and increased spirituality. In addition, AA serendipitously follows the principles of cognitive behaviour therapy in relapse prevention. Alcoholics Anonymous appears equal to or superior to conventional treatments for alcoholism, and the skepticism of some professionals regarding AA as a first rank treatment for alcoholism would appear to be unwarranted. Alcoholics Anonymous is probably without serious side-effects. abstract_id: PUBMED:32353394 ALCOHOLICS ANONYMOUS "SPIRITUALITY" AND LONG-TERM SOBRIETY MAINTENANCE AS A TOPIC FOR INTERDISCIPLINARY STUDY. The foundational literature of Alcoholics Anonymous provides a good example of how ideas from different intellectual paradigms can be woven together to enhance understanding of addiction and recovery. A detailed hypothesis about how the 12 steps of Alcoholics Anonymous work as a program of emotion management is presented. The hypothesis assigns a central role to a secular interpretation of spirituality, consistent with naturalism. Clinical research which appears to indicate that spirituality is not the chief mechanism of recovery in Alcoholics Anonymous is critiqued. Interdisciplinary research is required, if the extent and nature of the efficacy of AA's 12 steps are to be understood. abstract_id: PUBMED:10627107 Alcoholics Anonymous and the use of medications to prevent relapse: an anonymous survey of member attitudes. Objective: The purpose of this study was to systematically assess the attitudes of Alcoholics Anonymous (AA) members toward the newer medications used to prevent relapse (e.g., naltrexone) and to assess their experiences with medication use, of any type, in AA. Method: Using media solicitations and snowball sampling techniques, 277 AA members were surveyed anonymously about their attitudes toward use of medication for preventing relapse and their experiences with medication use of any type in AA. Results: Over half the sample believed the use of relapse-preventing medication either was a good idea or might be a good idea. Only 17% believed an individual should not take it and only 12% would tell another member to stop taking it. Members attending relatively more meetings in the past 3 months had less favorable attitudes toward the medication. Almost a third (29%) reported personally experiencing some pressure to stop a medication (of any type). However, 69% of these continued taking the medication. Conclusions: The study did not find strong, widespread negative attitudes toward medication for preventing relapse among AA members. Nevertheless, some discouragement of medication use does occur in AA. Though most AA members apparently resist pressure to stop a medication, when medication is prescribed a need exists to integrate it within the philosophy of 12-step treatment programs. abstract_id: PUBMED:7759170 For whom does Alcoholics Anonymous work? A brief summary of the literature to evaluate for whom Alcoholics Anonymous (A.A.) is or is not likely to be effective is provided. A description of A.A. is presented. Next, support and criticisms of A.A., details of who A.A. seems to help and does not seem to help, and alternatives to A.A. are discussed. Finally, recommendations regarding matching clients to supportive treatments are made. abstract_id: PUBMED:35026830 Spiritual Principles of Alcoholics Anonymous. Alcoholics Anonymous is, according to the Program's basic documents, a spiritual program - especially regarding Steps 2-3, 5-7, and 11-12. This paper reviews the history of the Program with emphasis on its spiritual background, as well as on the roots of the spiritual side of the program, and showing the spiritual accents in the Big Book of Alcoholics Anonymous. The paper also considers the Christian background of the movement and, more broadly, explores the connection of AA founders William (Bill) Wilson and Dr. Robert (Bob) Smith with several Catholic people, such as Edward (Ed) Dowling, SJ, and John C. Ford, SJ, and indicates some points of contact between the Program and Ignatian spirituality. The paper illustrates the pioneering role of the Alcoholics Anonymous program in integrating spirituality into the process of recovery from and treatment of addictions, an integration that just recently has been embraced by modern psychotherapy and psychiatry. The paper shows the theoretical and spiritual underpinnings of the Program in the religious context. The paper illustrates the orientation of the Program in assisting addicts to integrate into their recovery their own spirituality, a clearer image of God, and personal spiritual and religious progress. Answer: The study described in abstract PUBMED:20409438 investigated whether attendance at Alcoholics Anonymous (AA) meetings led to changes in anger levels and whether such changes could explain AA-related benefits. The results showed that while AA attendance was associated with better drinking outcomes, it was not related to changes in anger. The study found that anger is a significant and enduring problem related to relapse and heavy alcohol consumption, but AA attendance alone may not be sufficient to alleviate the risks associated with anger. Therefore, the study did not support the hypothesis that AA works by reducing anger.
Instruction: Long-term outcome after balloon angioplasty of coarctation of the aorta in adolescents and adults: Is aneurysm formation an issue? Abstracts: abstract_id: PUBMED:19235242 Long-term outcome after balloon angioplasty of coarctation of the aorta in adolescents and adults: Is aneurysm formation an issue? Objectives: The purpose was to evaluate long-term outcome following balloon angioplasty for coarctation in adults. Background: Long-term results of balloon angioplasty for native coarctation in adults remain incomplete, especially concerning the occurrence of aneurysm formation. Methods: Long-term follow-up data were collected in consecutive patients retrospectively. Results of balloon angioplasty (29 patients, age 15-71 years, during 1995-2005) for discrete, native coarctation were evaluated, including MRI or CT. Results: Mean follow-up ranged from 2.2 to 13 years (mean 8.5 +/- 3.2). Immediate success was obtained in all patients. Early mortality or complications were not encountered. Peak systolic pressure gradient decreased from 52 +/- 21 to 7.2 +/- 7.6 mm Hg (P < 0.001). Intima tear was detected in eight procedures angiographically, without signs of dissection. Three-month follow-up angiography in these patients showed unchanged (4/8 patients) or diminished abnormalities (4/8 patients). One asymptomatic patient, known with left ventricular dysfunction due to significant aortic valve insufficiency, died suddenly 5 years after balloon angioplasty. Recoarctation occurred in one patient (3%). Late aneurysm formation was excluded by MR in 24/29 and CT in remaining 5/29 patients during follow-up, including those patients in whom intima tear was encountered immediately postangioplasty. In three of seven patients an irregular aortic contour persisted, without indication of progression or aneurysm formation. Hypertension was completely relieved in 67% (14/21 patients) and improved in 33% (7/21 patients). Conclusions: Balloon angioplasty for native coarctation yields low reintervention probability in adult patients. Despite occurrence of angiographically established intimal tearing, aortic dissection and aneurysm formation were not encountered. abstract_id: PUBMED:15028367 Long-term outcome (up to 15 years) of balloon angioplasty of discrete native coarctation of the aorta in adolescents and adults. Objectives: This study evaluated the long-term follow-up results of balloon angioplasty (BA) in adolescent and adult patients with discrete coarctation of the aorta. Background: Although the immediate and intermediate term results of BA for patients with aortic coarctation (AC) have been encouraging, there is a paucity of data on long-term follow-up results. Methods: This basis of this study was follow-up of 49 patients (mean age, 22 +/- 7 years) undergoing BA for discrete AC at median interval of 10.2 years, including cardiac catheterization, magnetic resonance imaging, and Doppler echocardiography. Results: No early or late deaths occurred. Balloon angioplasty produced a reduction in peak AC gradient from 66 +/- 23 mm Hg (95% confidence interval [CI]: 59.5 to 72.7) to 10.8 +/- 7 mm Hg (95% CI: 8.8 to 12.5) (p < 0.0001). Follow-up catheterization 12 months later revealed a residual gradient of 6.2 +/- 6 mm Hg (95% CI: 4.4 to 7.9) (p < 0.001). Four patients (7.5%) with suboptimal initial outcome with peak gradient >20 mm Hg had successful repeat angioplasty. Aneurysm developed at the site of dilation in four patients (7.5%). Magnetic resonance imaging follow-up results revealed no new aneurysm or appreciable changes in the size of pre-existing aneurysms, and no recoarctation was observed. Also, no appreciable changes in the Doppler gradient across the AC site were noted. The blood pressure had normalized without medication in 31 (63%) of the 49 patients. Conclusions: Long-term results of BA for discrete AC are excellent and should be considered as first option for treatment of this disease. abstract_id: PUBMED:1832091 Balloon angioplasty for recurrent coarctation of aorta. Immediate and long-term results. Background: As angioplasty techniques have been refined and larger low-profile balloons developed, a nonsurgical approach to recoarctation has become available. Several reports have documented both the efficacy and safety of this procedure. However, there are little data available on the long-term follow-up of these patients. This report details the initial results and long-term evaluation of both the relief of obstruction and the presence of hypertension after balloon angioplasty for recurrent coarctation. Methods And Results: Balloon angioplasty for recurrent coarctation of the aorta was performed 29 times in 26 patients at a median age of 4 years and 9 months (range, 4 months to 29 years), with eight patients less than 1 year old. Initial surgical techniques were end-to-end anastomosis in 11 patients, subclavian flap aortoplasty in 11 patients, and patch aortoplasty in four patients. Angioplasty was performed at a median interval of 2 years and 7 months (range, 4 months to 23 years) after surgery. Mean peak systolic pressure difference across the coarctation decreased from 40.0 +/- 16.8 to 10.3 +/- 9.5 mm Hg (p less than 0.05) after the initial angioplasty, and mean diameter of the aortic lumen at the coarctation site increased from 5.8 +/- 3.5 to 9.0 +/- 4.3 mm (p less than 0.05). There was no mortality, and only one patient developed an aneurysm (4%). Three patients underwent repeat angioplasty for a pressure difference of more than 20 mm Hg. Long-term follow-up is available on 24 of 26 patients with a mean follow-up of 42 +/- 24 months (range, 12-88 months). Mean peak systolic pressure difference across the area of coarctation decreased from 40.3 +/- 17.4 before angioplasty to 8.5 +/- 8.3 mm Hg after final angioplasty (p less than 0.05) and 7.5 +/- 7.5 mm Hg at follow-up. Mean peak systolic blood pressure in the upper extremities decreased from 133.1 +/- 14.9 before angioplasty to 111.1 +/- 14.1 mm Hg at long-term follow-up (p less than 0.05). Conclusions: Balloon angioplasty should be considered the treatment of choice for relief of recurrent aortic coarctation. abstract_id: PUBMED:11233395 Balloon dilation for aortic recoarctation: morphology at the site of dilation and long-term efficacy. Objectives: We undertook this study to assess the immediate and long-term outcome of balloon angioplasty performed for recurrent or residual coarctation of the aorta, and to assess the changes in the vessel wall caused by this procedure. Methods: Clinical, echocardiographic, angiographic and hemodynamic data from 71 patients who underwent balloon angioplasty for recoarctation between January 1987 and January 1998 were analysed retrospectively. Results: Angioplasty was performed after a median of 82.6 months (range 1.4 mo-20.9 y, mean 88.5 mo) following surgery for coarctation. Mean systolic pressure gradients were reduced from 27 +/- 15 mmHg to 11 +/- 11 mmHg after angioplasty (p < 0.0001). The mean diameter at the site of recoarctation increased from 5.5 +/- 2.5 to 7.5 +/- 2.7 mm (p < 0.0001). Outpouchings of contrast agents, indicating the disruption of the inner layers of the vessel wall, were defined as extravasations. They were observed in one-quarter of the angiograms performed immediately after the intervention. Immediate success of angioplasty was achieved in 71%, and persisted in 69% of patients during long-term follow up. The main determinant for immediate success was the age at the time of the procedure (p < 0.05), while the main determinant for long-term success was the increase achieved in diameter. Extravasations did not progress to aneurysms, neither acutely nor during echocardiographic follow-up studies. For further follow-up, more sensitive imaging techniques will be necessary to delineate the morphology of the site of extravasation observed immediately after angioplasty. abstract_id: PUBMED:7860921 Balloon angioplasty of native coarctation of the aorta: midterm follow-up and prognostic factors. Unlabelled: OBJECTIVES, This retrospective analysis was designed to examine the intermediate and long-term effects of balloon angioplasty for native coarctation of the aorta and to determine whether any factors are predictive of outcome. Background: Balloon angioplasty for native coarctation of the aorta is controversial. Concerns exist over lack of long-term follow-up and possible formation of aneurysms. The role of transverse arch and isthmus hypoplasia after balloon dilation is unknown. Methods: Included in the study were all patients 3 days to 29 years old (mean age 4.6 years) referred for possible balloon dilation to the pediatric cardiac catheterization laboratory with evidence of a discrete coarctation of the aorta. The hemodynamic data, angiograms and clinical records of 102 patients were examined, with follow-up data from 2 to 117 months (median 36.2) available in 92 patients. Results: Immediate success with balloon angioplasty was achieved in 93 (91.2%) of the 102 patients. Seventy-one patients (77.2%) with intermediate follow-up data (range 12 to 117 months) available are asymptomatic and normotensive, with insignificant arm to leg blood pressure gradients (< or = 20 mm Hg). Twenty-one patients (22.8%) with an initial successful result developed an increase in gradient 2 at 86 months after angioplasty, requiring reintervention in 18. Follow-up > 72 months is available in 17 patients, 16 of whom are normotensive and have not required additional intervention. No additional intervention was needed in 88.4% of older children and infants > 7 months old. Ten of the 13 surviving neonates who initially had a successful dilation required reangioplasty or operation 14 days to 10 months (median 4.6 months) after angioplasty. Transverse arch hypoplasia had minimal effect on follow-up blood pressure gradient, whereas isthmic hypoplasia was associated with reintervention in 50%. A small aneurysm was noted in 2 (1.9%) of 102 patients. Conclusions: Balloon angioplasty of native aortic coarctation is effective in infants and older children. In neonates, balloon angioplasty provides effective palliation only. Aneurysm formation is rare; however, lifetime follow-up is warranted. abstract_id: PUBMED:6237808 Balloon dilatation angioplasty: nonsurgical management of coarctation of the aorta. Balloon dilatation angioplasty was successfully performed in five patients (ages 18 months to 17 years) with discrete aortic coarctation. The catheter size was No. 8F or 9F. Selection of balloon diameter was based on angiographic measurements of the aorta determined proximal and distal to the coarctation site. A 10 sec inflation-deflation cycle at 6 to 8 atmospheres (90 to 120 psi) was performed. The systolic pressure gradients across the coarctation before balloon dilatation angioplasty ranged from 35 to 70 mm Hg. Systolic pressure gradients after balloon dilatation angioplasty ranged from 0 to 10 mm Hg. All patients had normalized blood pressure immediately. Abnormal pulsed Doppler echocardiograms were observed in all patients before balloon dilatation angioplasty; four patients had normal echocardiograms after balloon dilatation angioplasty. No serious intraprocedural complications occurred. One patient required femoral artery thrombectomy 36 hr after balloon dilatation angioplasty. One to 6 months after balloon dilatation angioplasty no patients have evidence of restenosis of coarctation. Early results suggest that balloon dilatation angioplasty may offer a safe and effective nonsurgical alternative for the treatment of discrete coarctation in older infants and children. Long-term follow-up for the incidence of restenosis and formation of aneurysms will ultimately determine the efficacy and safety of this procedure. abstract_id: PUBMED:36682763 Stent Implantation and Balloon Angioplasty for Native and Recurrent Coarctation of the Aorta. Evidence on the effectiveness and comparative effectiveness of stent implantation and balloon angioplasty for native coarctation of the aorta (CoA) and recurrent CoA separately is lacking. The present meta-analysis was performed to assess the efficacy and safety of stent implantation and balloon angioplasty in native (NaCo) and recurrent (ReCo) CoA.A systematic computerized literature search was conducted to retrieve all relevant studies of stent implantation and balloon angioplasty for CoA. Both single-arm and comparative studies were included. Data on NaCo and ReCo were pooled separately.A post-procedure gradient of ≤ 20 mmHg was achieved in 97% and 92% of patients undergoing stent implantation and balloon angioplasty for NaCo, and in 98% and 90% for ReCo, respectively. A post-procedure gradient of ≤ 10 mmHg was achieved in 97% and 83% of patients undergoing stent implantation and balloon angioplasty for NaCo, and in 86% and 78% for ReCo, respectively. Comparative results confirmed that stent implantation provided a significantly higher success rate compared with balloon angioplasty (odds ratio [OR] = 2.09; 95% confidence interval [CI] = 1.13-3.86; P = 0.019) in treating NaCo. Incidences of all-cause complications, mortality, reintervention, and aneurysm formation were similar between the groups. Patients undergoing stent implantation had a significantly lower incidence of dissection (OR = 0.15; 95% CI = 0.02-0.92; P = 0.040).Current evidence indicates that stent implantation might have superior efficacy compared with balloon angioplasty for the treatment of NaCo with higher success rates and similar complication rates. However, whether this superior effect is also present in ReCo patients needs further evaluation. abstract_id: PUBMED:10732899 Balloon angioplasty of native coarctation: clinical outcomes and predictors of success. Objectives: We sought to investigate the clinical impact of balloon angioplasty for native coarctation of the aorta (CoA) and determine predictors of outcome. Background: Balloon dilation of native CoA remains controversial and more information on its long-term impact is required. Methods: Hemodynamic, angiographic and follow-up data on 69 children who underwent balloon angioplasty of native CoA between 1988 and 1996 were reviewed. Stretch, recoil and gain of CoA circumference and area were calculated and related to outcomes. Results: Initial systolic gradients (mean +/- SD, 31+/-12 mm Hg) fell by -74+/-27% (p < 0.001), with an increase in mean CoA diameters of 128+/-128% in the left anterior oblique and 124+/-87% in the lateral views (p < 0.001). Two deaths occurred, one at the time of the procedure and one 23 months later, both as a result of an associated cardiomyopathy. Seven patients had residual gradients of >20 mm Hg. One patient developed an aneurysm, stable in follow-up, and four patients had mild dilation at the site of the angioplasty. Freedom from reintervention was 90% at one year and 87% at five years with follow-up ranging to 8.5 years. Factors significantly associated with decreased time to reintervention included: a higher gradient before dilation, a smaller percentage change in gradient after dilation, a small transverse arch and a greater stretch and gain, but not recoil. Conclusion: Balloon dilation is a safe and efficient treatment of native CoA in children. Greater stretch and gain are factors significantly associated with reintervention, possibly related to altered elastic properties and vessel scarring. abstract_id: PUBMED:8542629 Late development of dissecting aneurysm following balloon angioplasty of native aortic coarctation. Acute aortic dissection during balloon angioplasty for coarctation of the aorta or aneurysm formation during follow-up are well-known complications of this procedure. Dissecting aneurysm development during long-term follow-up after balloon angioplasty of a native coarctation has not been previously reported. We report a case in which a huge dissecting aneurysm developed 3 years after the native coarctation angioplasty procedure. The aneurysm required surgical repair. abstract_id: PUBMED:19061706 Twenty-two years of follow-up results of balloon angioplasty for discreet native coarctation of the aorta in adolescents and adults. Background And Aims: Although the immediate and intermediate-term results of balloon angioplasty (BA) for patients with aortic coarctation (AC) have been encouraging, there is paucity of data on long-term follow-up results. This study evaluated the long-term (up to 22 years) follow-up results of BA in adolescent and adult patients with discrete (shelf-like) coarctation of the aorta. Methods: Follow-up data of 58 patients (mean age 24+/-9 years) undergoing BA for discrete AC at median interval of 13.4 years including cardiac catheterization, magnetic resonance imaging, and Doppler echocardiography form the basis of this study. Results: No early deaths occurred. Balloon angioplasty produced immediate reduction in peak AC gradient from 60+/-22 mm Hg to 8.5+/-8 mm Hg (P<.0001). Follow-up catheterization 12 months later revealed a residual gradient of 5+/-6.4 mm Hg (P=.01). Five patients (8%) with suboptimal initial outcome (peak gradient>20 mm Hg) developed restenosis, and 4 of these had successful repeat angioplasty. Aneurysm developed at the site of dilatation in 4 patients (7%). Magnetic resonance imaging follow-up results revealed no new aneurysm. In one patient, the aneurysm increased in size, but no recoarctation or appreciable changes in the Doppler gradient across the AC site was noted. The blood pressure had normalized without medical treatment in 29 (50%) of the 58 patients. Conclusion: Long-term results of BA for discrete AC are excellent and should be considered as first option for treatment of this disease. Answer: The long-term outcomes after balloon angioplasty for coarctation of the aorta in adolescents and adults show that aneurysm formation can be a concern, although the incidence varies across studies. In one study with a mean follow-up of 8.5 years, late aneurysm formation was excluded by MRI or CT in all patients, including those with immediate postangioplasty intimal tearing, suggesting that aneurysm formation was not a significant issue (PUBMED:19235242). However, another study with up to 15 years of follow-up reported that aneurysms developed at the dilation site in 7.5% of patients, although no new aneurysms or appreciable changes in the size of pre-existing aneurysms were observed during the MRI follow-up (PUBMED:15028367). In a study of balloon angioplasty for recurrent coarctation, only one patient developed an aneurysm (4%), indicating a relatively low incidence of this complication (PUBMED:1832091). Similarly, another study found that extravasations observed immediately after angioplasty did not progress to aneurysms acutely or during echocardiographic follow-up (PUBMED:11233395). A retrospective analysis also reported that aneurysm formation was rare, with only 1.9% of patients developing a small aneurysm (PUBMED:7860921). However, there have been cases of late development of dissecting aneurysms following balloon angioplasty of native aortic coarctation, although such occurrences are not commonly reported (PUBMED:8542629). A long-term study of up to 22 years found that aneurysms developed in 7% of patients, with one patient experiencing an increase in aneurysm size, but no new aneurysms were detected (PUBMED:19061706). In summary, while aneurysm formation is a recognized complication of balloon angioplasty for coarctation of the aorta, the frequency and severity of this issue vary. Some studies report low incidences and stable aneurysms over time, while others note that aneurysm formation can occur and may require monitoring or intervention. It is important for patients to have long-term follow-up to monitor for this potential complication.
Instruction: Nuchal translucency measurements in consecutive normal pregnancies. Is there a predisposition to increased levels? Abstracts: abstract_id: PUBMED:24332983 Increased nuchal translucency with normal karyotype and anomaly scan: what next? Over the years, it has become clear that increased nuchal translucency is a marker for chromosomal abnormalities, and it is also associated with a wide spectrum of structural anomalies, genetic syndromes, a higher risk of miscarriage, and intrauterine fetal death. These risks are all proportionally related to the degree of nuchal translucency enlargement. After the initial assessment of increased nuchal translucency, parents should be counselled by the fetal medicine specialist about the possible outcomes and the value of additional karyotyping and array comparative genomic hybridisation. A detailed late first-trimester and subsequent 20-week scan should aim at identifying structural anomalies, with special focus on the fetal heart and subtle dysmorphic features. In the absence of structural anomalies or markers, the chance of a favourable outcome is high. abstract_id: PUBMED:26673632 Ultrasound and echocardiographic findings obtained in the second and third trimesters of gestation in fetuses with normal karyotype and increased nuchal translucency. Introduction: Numerous papers have proven that an increased nuchal translucency is connected with a raised risk of chromosomal aberrations, but few analyses are related to the further state of fetuses with a normal karyotype. The Aim Of The Study: The aim of the study was to estimate the risk of cardiac defects and other developmental disorders in fetuses with increased nuchal translucency and normal findings of a standard cytogenetic examination. Methods: The authors carried out a retrospective analysis of 5183 examinations of 3376 patients who reported to the Department of Diagnosis and Prophylaxis of Congenital Malformations in the Polish Mother's Memorial Hospital in Łódź in the period from January 2008 to March 2011 for prenatal ultrasound and echocardiographic examinations. The authors analyzed the results of the examinations performed in the second and third trimesters of gestation in fetuses with an increased nuchal translucency of ≥3 mm in the first trimester and with a normal karyotype. Results: Fifty-seven patients (1.7% of the examined group) fulfilled the criteria necessary to be included in the study. In 31 pregnant women (54%) structural defects or anomalies of the fetus were found. Cardiac anomalies were detected in 17 fetuses (29.8%). The authors detected various types of cardiac defects such as tetralogy of Fallot, ventricular septal defect, atrioventricular septal defect, transposition of the great arteries and hypoplastic left heart syndrome. Conclusions: In more than half of the fetuses with an increased nuchal translucency (NT ≥ 3 mm) and a normal karyotype, developmental defects of various organs appeared in the further course of pregnancy: mainly heart defects that were either isolated, or accompanied other anomalies. abstract_id: PUBMED:17521948 Increased nuchal translucency with normal karyotype Nuchal translucency (NT) measurement in first trimester screening between 11 and 14 weeks' gestation has now been clearly identified as a marker for aneuploidies and in particular for trisomy 21. Even in the absence of aneuploidy increased fetal nuchal translucency has been shown to be a marker for fetal heart malformations and numerous other fetal defects and genetic syndromes when the measure is&gt;or=95th centile which is around 3,5 mm for each gestational age. Fetuses with NT thickness above the 99th centile and normal karyotype have a high risk of adverse prenatal outcome and this increase in risk is exponential as the nuchal translucency measurement increases. However, among children born alive with normal ultrasound at 22 weeks of gestation, there was no significant association between NT thickness and development at 2 years as assessed by clinical examination and ASQ scores, when with a control population. Counselling should emphasize that when the karyotype is normal and no fetal structural malformation was missed prenatally following resolution of nuchal thickening, the prognosis is not impaired at the age of 2. abstract_id: PUBMED:36210052 Follow-up of increased nuchal translucency: Results of a study of 398 cases. Introduction: Increased fetal nuchal translucency is associated with chromosomal as well as morphological abnormalities. The psychomotor development of children from these pregnancies is still unclear. The main objective of our study was to evaluate pregnancy outcomes and the post-natal progress of fetuses with increased nuchal translucency. We also compared the features of patients and fetuses according to their nuchal translucency measurement (above 3.5 mm or not). Methods: Retrospective single-center study in 398 patients in a level 3 maternity unit in France. Mothers whose fetus had a nuchal translucency higher than the 95 th percentile between 2009 and 2018 were included. All patients who had a child with a normal karyotype were prospectively given a questionnaire to evaluate their child's psychomotor development. Results: 37.4% (130/348) of fetuses had a chromosomal abnormality and 2.3% (5/218) had a normal karyotype but a pathogenic copy number variant diagnosed by array- CGH. 28.7% (77/268) of fetus without diagnosed chromosomal abnormalities, presented a morphological abnormality with predominant cardiac malformations. Fetuses with a nuchal translucency ≥ 3.5 mm, had more chromosomal abnormalities (p&lt;0.0001) and were at higher risk of hypotrophy (p=0.005) and birth by cesarean (p=0.04). Among the liveborn children, 70% (166/238) were healthy without morphological or chromosomal abnormalities. Lastly, 17% (17/102) of these children had psychomotor disorder. Conclusion: According to our results, parents should be warned of the increased risk of hypotrophy and delivery by cesarean section for fetuses with a nuchal translucency ≥ 3.5 mm. We recommend prolonged specialized pediatric follow-up for children who have been carriers of increased nuchal translucency. abstract_id: PUBMED:38275068 Nuchal Translucency and Congenital Heart Defects. Nuchal translucency comprises a temporary accumulation of fluid in the subcutaneous tissue on the back of a fetus's neck, which accompanies the crown-rump length and is observed through an ultrasound performed between 11 and 13 weeks + 6 days gestation. Nuchal translucency is considered to be above normal when values are higher than the 95th/99th percentile or equal to or higher than 2.5/3.5 mm. The first connection between increased nuchal translucency and the presence of congenital heart defects is described in the study of Hyett et al., who observed that they are directly proportional. Since that time, several studies have been conducted to understand if nuchal translucency measurements can be used for congenital heart defect screening in euploid fetuses. However, there is great variability in the estimated nuchal translucency cutoff values for congenital heart defect detection. The purpose of this review was to understand how increased nuchal translucency values and congenital heart defects are related and to identify which of these defects are more frequently associated with an increase in these values. abstract_id: PUBMED:30122131 What can be inferred if the fetal increased nuchal translucency thickness changes? Objectives: To study the perinatal outcomes in the women who's the different changes of an increased fetal nuchal translucency thickness.Methods: Since 2013-2016, the pregnant women whose initial measurement of fetal nuchal translucency (NT) ≥ 3.5 mm would be invited to scan the NT again after a week. Between the two measurements over 1 mm, defined as "change". According to different results between two measurements, they were divided into three groups: unchanged group, thinned group and thicken group. To follow-up and compare the perinatal outcomes among the three groups.Results: One hundred seventy-five cases met our criteria and were retrieved in the study. There are 90 cases of the unchanged group, 60 cases of the thinned group and 25 cases of thicken group. Among three groups, the chromosomal abnormalities rate is 11.11, 21.67, and 60.0% respectively, the structure abnormalities rate is 10.0, 8.51, and 20.0%, the normal delivery rate is 76.67, 70.0, and 28.0%. There all have the significantly differences in statistics except the structure abnormalities rate.Conclusion: The chromosomal abnormalities rate and adverse perinatal outcome rate were worse if NT continues to thicken. However, the perinatal outcome of the NT thinned group is no better than the NT unchanged case. abstract_id: PUBMED:35188046 The 16-week sonographic findings in fetuses with increased nuchal translucency and a normal array. Objective: The aim of this prospective study is to evaluate the performance of the intermediate 16-week ultrasound in fetuses with increased nuchal translucency (NT) and a normal chromosomal microarray analysis (CMA). Methods: During a one-year period, a detailed ultrasound was performed at 16 week' gestation for patients with an increased NT (≥3.5 mm) and normal CMA. Pregnancy work-up included a traditional 22-week ultrasound scan, an echocardiography, and the option of a 10-gene Rasopathy panel after a normal 16-week scan. Abnormal findings and pregnancy outcomes were collected and analyzed. Results: In 52 fetuses with an isolated increased NT and normal CMA, 14 (26.5%) were noted to have structural defects on the 16-week ultrasound. Intrauterine fetal death occurred in one (1.9%) case identified by the 16-week scan. Of the remaining 37 cases, six opted for a RASopathy panel. In this group, one case of Noonan syndrome was detected. One case of unilateral duplex kidney had not been found until the 22-week scan. One case of fetal growth restriction was identified in the third trimester. The remaining 34 cases proceeded with normal ultrasound to term. Conclusion: The 16-week ultrasound scan performed on fetuses with increased NT and normal CMA could detect the majority of structural abnormalities that are expected to be identified traditionally at 20-24 weeks. abstract_id: PUBMED:27595799 Increased nuchal translucency and adverse pregnancy outcomes. Objective: To assess the adverse outcome in pregnancies with raised Nuchal translucency measurement. Methods: This retrospective study was conducted at the Aga Khan University Hospital, Karachi, Pakistan between January 2001 and December 2015. Women attending the Fetal Medicine unit for first trimester screening by Nuchal translucency were included in the study. All pregnancies were followed up till delivery. Those with incomplete information about the delivery were excluded from the analysis. Pregnancy outcome was observed in normal Nuchal translucency measurements and in raised Nuchal translucency measurements. Results: Out of the 1941 fetuses, there were 54 (2.8%) with raised Nuchal translucency measurement. Adverse pregnancy outcome was found in 32 (59.3%) of fetuses with raised Nuchal translucency measurement. Invasive testing was done in 15 (27.8%) of pregnancies with raised Nuchal translucency measurement. Among them there were nine (16.7%) cases with abnormal karyotype. It included six (11.1%) cases of trisomy 21, three (5.6%) cases of trisomy 13 and 18. Conclusion: Raised Nuchal translucency measurement is associated with adverse pregnancy outcome. One in three fetuses are affected by it. Live birth in this group where there is no aneuploidy is around 3.7%. abstract_id: PUBMED:22277886 Increased nuchal translucency in the presence of normal chromosomes: what's next? Purpose Of Review: First trimester screening is presently offered to all pregnant women as a means of prenatal screening for Down syndrome, trisomy 18, and trisomy 13. Nuchal translucency measurement is a fundamental component of the screening protocol. A woman whose fetus' nuchal translucency is greater than the 95th percentile is also at increased risk for a multiplicity of other adverse pregnancy and pediatric outcomes, and as a consequence, counseling of patients about their testing options and range of pregnancy outcomes has become complex and difficult. Recent Findings: The increased risk for chromosome abnormalities, congenital heart malformations, and pregnancy loss in the presence of an increased nuchal translucency is well documented. What has not been clearly defined is the incidence of other genetic syndromes, congenital defects, and adverse pregnancy and pediatric outcomes in the presence of increased nuchal translucency. Currently, Noonan syndrome is the only molecular genetic condition that has been shown to have a clear association with the finding of increased nuchal translucency in the first trimester. Summary: This article reviews the current literature on outcomes in pregnancies with an increased nuchal translucency and a normal karyotype. We summarize the range of outcomes detected in the first trimester with recommendations for further prenatal testing and counseling of patients. abstract_id: PUBMED:12224066 Nuchal translucency measurements in consecutive normal pregnancies. Is there a predisposition to increased levels? Objective: The current study aims to assess the relative risk of isolated increased nuchal translucency (NT) in consecutive normal pregnancies. Methods: A retrospective search of the fetal medicine unit database in which NT assessment was conducted. Only singleton fetuses with an unremarkable pregnancy outcome were included in this study. The repeated NT either at a threshold of &gt;2.5 mm or &gt; or =95th centile of the normal range for various CRLs (2 mm at a CRL &lt; or =44 mm to 2.85 mm at CRLs 83-84 mm) was assessed. Pearson's correlation was applied to investigate any correlation between a log(10) NT obtained in two consecutive pregnancies. The chance of an increased NT in the first pregnancy followed by similar results in the subsequent one was assessed using Fisher's exact test. Results: Ninety-two pregnant women with two consecutive normal pregnancies were identified. There was a highly significant correlation between the log(10) NT values measured in the consecutive pregnancies (r = 0.38; p &lt; 0.001). The relative risk of an isolated NT &gt; or =2.5 mm in the first pregnancy to be followed by a similar finding in the subsequent one was 18.21 (95% confidence interval [CI]: 3.62-91.55). The relative risk of an isolated NT &gt; or =95th centile in the first pregnancy to be followed by a similar result in the subsequent one was 21 (95% CI: 4.53-97.44). Conclusion: The current results indicate a significant correlation of NT values in consecutive unaffected pregnancies. There also was a predisposition for a repeated isolated increased NT in subsequent pregnancies. Additional series and more information are needed before firm conclusions can be made. Answer: Yes, there appears to be a predisposition to increased nuchal translucency (NT) levels in consecutive normal pregnancies. A study aimed at assessing the relative risk of isolated increased NT in consecutive normal pregnancies found a highly significant correlation between the NT values measured in consecutive pregnancies (r = 0.38; p < 0.001). The relative risk of an isolated NT > or =2.5 mm in the first pregnancy to be followed by a similar finding in the subsequent one was 18.21 (95% confidence interval [CI]: 3.62-91.55). Similarly, the relative risk of an isolated NT > or =95th centile in the first pregnancy to be followed by a similar result in the subsequent one was 21 (95% CI: 4.53-97.44). These results indicate a significant correlation of NT values in consecutive unaffected pregnancies and suggest a predisposition for a repeated isolated increased NT in subsequent pregnancies (PUBMED:12224066).
Instruction: Is Self-Reported Sleep Duration Associated with Osteoporosis? Abstracts: abstract_id: PUBMED:36050909 Short report: relationship between self-reported sleep characteristics and falls-associated fractures in elderly individuals: a population-based study. Currently, the data for effect of sleep on falls-associated fractures in elderly individuals are still limited. This current study was aimed to assess the link between self-reported sleep characteristics and falls-associated fractures in elderly individuals. This study included a total of 20,497 participants from National Health and Nutritional Examination Survey (NHANES) 2005-2008, and 6,174 participants aged 45 years and older were identified. Self-reported sleep characteristics and conditions of falls-associated fractures of individuals were obtained via the method of personal questionnaires. In a total of 610 participants with exact history of fractures, 168 individuals with falls-associated fractures were identified, and the prevalence was 27.5%. The mean age of falls-associated fractures group was (72.1 ± 8.8) years, and the female (P &lt; 0.001) occupied a higher proportion. Factors of living alone (P = 0.003), combined with hypertension (P = 0.003) and osteoporosis (P &lt; 0.001), sleeping less or more (P = 0.009), and frequent snoring (P = 0.007) were linked to falls-associated fractures. Compared with sleep duration of 6 to 8 h/night, sleep duration of ≤4 h/night (odds ratio [OR] 1.858, 95% confidence interval [CI] 1.115-3.094) and of ≥9 h/night (OR 1.932, 95% CI 1.195-3.123) were related to an increased risk of falls-associated fractures. Collectively, our nationwide data noted that sleep characteristics were closely related to falls-associated fractures in elderly individuals, and a longer sleep duration may exhibit a protective effect against the falls-associated fractures, but it should be limited within 9 h/night. abstract_id: PUBMED:26096586 Is Self-Reported Sleep Duration Associated with Osteoporosis? Data from a 4-Year Aggregated Analysis from the National Health and Nutrition Examination Survey. Objectives: To examine the association between self-reported sleep duration and osteoporosis in a national sample of the U.S. elderly population. Design: Population-based cross-sectional study. Setting: Four-year National Health and Nutrition Examination Survey data from two survey cycles between 2005-06 and 2007-08. Participants: Men and women aged 50 and older with data in the surveys and medical examination data files (N = 5,288). Measurements: Osteoporosis diagnosis was based on bone mineral density measurement and self-report. Sleep duration was ascertained from the sleep survey. Simple and multiple logistic regression models were used to evaluate the association between these two factors. Results: Shorter sleep duration (&lt;6 hours) was found to be significantly associated with osteoporosis after adjusting for potential covariates (odds ratio (OR) = 1.59, 95% confidence interval (CI) = 1.07-2.37). After stratifying according to age group, this association remained significant (OR = 1.80, 95% CI = 1.20-2.68) only in the older age group (&gt;65). Conclusion: These results underscore the possible linkage between sleep and osteoporosis and the need for additional research on this topic. abstract_id: PUBMED:33070238 The associations between different sleep patterns and osteoporosis based on the Osteoporosis Self-Assessment Tool for Asians. Based on the use of Osteoporosis Self-Assessment Tool for Asians (OSTA) to assess osteoporosis risk, we found that short sleep duration and taking a daytime nap had an increased risk of osteoporosis. Purpose: To explore the associations between different sleep patterns with osteoporosis. Methods: 3659 postmenopausal women (average age of 60 years) were divided into low, middle, and high osteoporosis risk categories based on the Osteoporosis Self-Assessment Tool for Asians (OSTA). After having collected by a standard questionnaire, total and nocturnal sleep duration was collapsed to form categories of ≤ 6 h, &gt; 6 h and ≤ 7 h, &gt; 7 h and ≤ 8 h, &gt; 8 h and ≤ 9 h, &gt; 9 h, and daytime nap duration of 0 h and &gt; 0 h. Results: As a categorical variable, the total sleep duration of ≤ 6 h per day (OR = 1.34, 95% CI 1.04-1.72), nocturnal sleep duration of ≤ 6 h per night (OR = 1.65, 95% CI 1.24-2.18), and taking a daytime nap (OR = 1.33, 95% CI 1.09-1.64) had higher osteoporosis risk after adjustment for covariates. As a continuous variable, after the adjustment for covariates, both longer total (OR = 0.86, 95% CI 0.78-0.94) and nocturnal sleep duration (OR = 0.83, 95% CI 0.76-0.91) had lower risk of osteoporosis risk while taking longer daytime nap (OR = 1.10, 95% CI 1.02-1.19) had higher osteoporosis risk. Conclusions: Postmenopausal women with both short total and nocturnal sleep duration (6 h or less) and taking a daytime nap had increased osteoporosis risk as assessed by OSTA. abstract_id: PUBMED:34435311 Sleep duration and health outcomes: an umbrella review. Purpose: To collect existing evidence on the relationship between sleep duration and health outcomes. Methods: A thorough search was conducted in PubMed, Web of Science, Embase, and the Cochrane Database of Systematic Reviews from inception to January, 2021. Meta-analyses of observational and interventional studies were eligible if they examined the associations between sleep duration and human health. Results: In total, this umbrella review identified 69 meta-analyses with 11 outcomes for cancers and 30 outcomes for non-cancer conditions. Inappropriate sleep durations may significantly elevate the risk for cardiovascular disease (CVD), cognitive decline, coronary heart disease (CHD), depression, falls, frailty, lung cancer, metabolic syndrome (MS), and stroke. Dose-response analysis revealed that a 1-h reduction per 24 hours is associated with an increased risk by 3-11% of all-cause mortality, CHD, osteoporosis, stroke, and T2DM among short sleepers. Conversely, a 1-h increment in long sleepers is associated with a 7-17% higher risk of stroke mortality, CHD, stroke, and T2DM in adults. Conclusion: Inappropriate sleep duration is a risk factor for developing non-cancer conditions. Decreasing and increasing sleep hours towards extreme sleep durations are associated with poor health outcomes. abstract_id: PUBMED:29516221 Diet quality, dietary patterns and short sleep duration: a cross-sectional population-based study. Objective: To examine the cross-sectional association between habitual diet quality, dietary patterns and sleep duration. Methods: A cross-sectional study of 838 men and 1065 women participating in a large, population-based cohort study, the Geelong Osteoporosis Study. A diet quality score (DQS) was derived from answers to a food-frequency questionnaire, and a factor analysis identified habitual dietary patterns. Self-reported sleep duration was dichotomized into 'short sleep duration' (&lt; 7 h/night) and 'adequate sleep duration' (≥ 7 h/night). Sleep duration was also grouped into predetermined cut-off ranges (&lt; 5, 5.01-6, 6.01-7, 7.01-8, 8.01-9, 9.01-10 and ≥ 10.01 h /night) to explore the relationship between sleep duration, DQS and dietary patterns. Results: 363 (34.0%) women and 339 (36.1%) men were identified with short sleep duration. After adjustments for age, socioeconomic status, education, physical activity and body mass index, each standard deviation increase in the factor score for traditional dietary pattern was associated with a 13% reduced odds for short sleep duration in men. In women, each standard deviation increase in DQS score was associated with a 21% reduced odds for short sleep duration. In women, adequate sleep was related to higher modern dietary pattern scores when compared both with excessive sleep duration (&gt; 9 h) and shorter sleep duration (&lt; 7 h). Conclusion: Diet quality score was associated with adequate sleep duration (≥ 7 h) and reduced odds for short sleep duration (&lt; 7 h) in Australian women. abstract_id: PUBMED:24724004 Association between Bone Mineral Density and Sleep Duration in the Korean Elderly Population. Background: An association between sleep duration and a wide spectrum of diseases has been reported, but little is known about its relationship with bone mineral density (BMD). Previously conducted studies in Korea and abroad have reported results that are controversial. The present study sought to assess whether sleep duration can be considered an independent risk factor of osteoporosis. Methods: We included participants over the age of 60 years with data on self-reported habitual sleep duration and BMD measured with dual X-ray absorptiometry. Comprehensive data on the study sample was obtained from the Korea National Health and Nutritional Survey performed from 2008 to 2010. Sex-stratified multiple regression analyses were conducted with adjustments for possible confounding factors. Results: There was a significant inverse dose-dependent association between sleep duration and BMD measured at total hip, femur neck, and lumbar spine for women and total hip and femur neck for men. Sex-stratified regression analyses adjusted for age and body mass index revealed that sleep duration had a negative correlation with BMD at total hip and femoral neck for both women (β = -0.0048; P = 0.0172 for total hip, β = -0.0037; P = 0.0303 for femur neck) and men (β = -0.0057; P = 0.0218 for total hip, β = -0.0057; P = 0.0143 for femur neck). For women, the significance remained after further adjustment of confounding variables. Conclusion: Prolonged sleep duration appears to have a significant association with lower total hip and femur neck BMD in elderly women but not in elderly men. abstract_id: PUBMED:34615958 Association of short sleep duration and trabecular bone score. Short sleep duration has been found to be associated with bone health deterioration by using bone mineral density (BMD). Only a few attempts have been made to assess the association of sleep duration and bone by utilizing the trabecular bone score (TBS). The aim of this study was to examine the association between sleep duration and TBS from a national database. A total of 4480 eligible participants older than 20 years who attended the United States National Health and Nutrition Examination Survey (NHANES) from 2005 to 2006 with TBS data and self-reported sleep duration. The association between sleep duration and TBS was investigated using a multivariate regression model with covariate adjustment. TBS was lowest in individuals with a short sleep duration (≤ 5 h) and it was increased in those with longer self-reported total sleep times. After a full adjustment for covariates, those sleeping less than 5 h had a significantly lower TBS than the reference group (sleep duration of 7 h). In subgroup analyses, an association between short sleep duration (≤ 5 h) and lower TBS persisted in older ages (≥ 60 years old), women, obese adults (BMI ≥ 30 kg/m2), and non-Hispanic Whites. Short sleep duration is associated with low TBS in women, obese adults (BMI ≥ 30 kg/m2), and non-Hispanic whites. Strict self-monitoring of body weight, well-tailored controls of underlying disease(s), and adequate sleep may help prevent osteoporosis. abstract_id: PUBMED:28613955 Sleep duration and midlife women's health. Sleep is mandatory for life and therefore any sleep deprivation or disruption might carry ill-health outcomes. The optimal night's sleep duration seems to be around 7 h, as too little sleep and too much sleep have been found to be associated with a variety of changes in well-being, physical and psychological parameters. This mini-review enlightens the potential health consequences of sleep disturbance relevant to menopause and midlife women's health. These include cardiovascular and metabolic derangements, cognitive impairment, osteoporosis, and cancer-related perspectives. Still, the current knowledge suggests that, even if statistically significant associations are detected, the absolute excessive risks are relatively small or modest. abstract_id: PUBMED:37009967 Association of nocturnal sleep duration and sleep midpoint with osteoporosis risk in rural adults: a large-scale cross-sectional study. Background: Evidence has suggested the linkage between sleep habits and several metabolic diseases, but the association of sleep factors with bone health remains unclear, especially in regions with low economic levels. Thus, this study aimed to investigate the relationship of nocturnal sleep duration and sleep midpoint with the osteoporosis risk in a rural population. Method: Eligible subjects were derived from the Henan Rural Cohort Study. The Pittsburgh Sleep Quality Index was applied to collect sleep information including sleep initiating time and wake-up time. The bone mineral density of the calcaneus was measured by the ultrasonic bone density apparatus. Multivariable logistic regression models and restricted cubic splines were utilized to evaluate the odds ratio (OR) and 95% confidence intervals (95% CI). Results: For 8033 participants, 1636 subjects suffered from osteoporosis. Compared with the reference group (7 ~ h group), the ORs and 95% CI of osteoporosis associated with duration of nocturnal sleep were 1.32 (1.10, 1.56), 1.59 (1.25, 2.01), and 1.82 (1.25, 2.65) in the 8 ~ h, 9 ~ h, and ≥ 10 h group, respectively. Additionally, the adjusted ORs and 95% CI were 1.20 (1.01, 1.44) in the early sleep midpoint group and 1.09 (0.92, 1.29) in the intermediate sleep midpoint, compared with the late. Furthermore, there was a joint effect of long duration of nocturnal sleep and the early sleep midpoint on osteoporosis. Conclusion: Long duration of nocturnal sleep and early sleep midpoint were independently and jointly associated with higher risk of osteoporosis in rural areas. Trial Registration: The Henan Rural Cohort Study has been registered at Chinese Clinical Trial Register (Registration number: ChiCTR-OOC-15006699). Date of registration: 06 July 2015. http://www.chictr.org.cn/showproj.aspx?proj=11375. abstract_id: PUBMED:37786334 The Association Between Sleep Duration and Sleep-Related Gene Methylation with Osteoporosis in Chinese Postmenopausal Women. This study aims to investigate the association between sleep duration and osteoporosis. In addition, sleep-related gene methylation was also detected in this study and we explored its relationship with osteoporosis. The epidemiological investigation section of this study was designed as a retrospective cross-sectional study. We gathered 148 postmenopausal women from two communities and used questionnaires to collect data of sleep duration and other sleep patterns. Biochemical variables were tested, and bone mineral density was measured by dual-energy X-ray absorptiometry. In addition, sleep-related gene (PER2 and PER3) methylation was tested, and the association with osteoporosis was further studied. Twenty-nine of the 148 participants (aged from 65 to 86 years) who suffered from osteoporosis were tested for osteopenia. A significant difference was observed in the association between sleep duration and osteoporosis; the p-value was 0.013. In addition, in our study, we found that short sleep duration (&lt;7 hours) may increase the risk of osteoporosis compared with longer sleep duration. Moreover, sleep-related genes such as PER2 and PER3 and their CpG island methylation were tested, and there was no significant difference between PER2 and PER3 CpG island methylation and osteoporosis. Short sleep duration may increase the risk of osteoporosis. However, the association between sleep-related gene methylation and osteoporosis was not found. Answer: Yes, self-reported sleep duration is associated with osteoporosis. Several studies have found a significant relationship between sleep duration and the risk of developing osteoporosis, particularly in elderly populations. One study found that shorter sleep duration (less than 6 hours) was significantly associated with osteoporosis after adjusting for potential covariates in a national sample of the U.S. elderly population (PUBMED:26096586). Another study reported that both short total and nocturnal sleep duration (6 hours or less) and taking a daytime nap were associated with increased osteoporosis risk in postmenopausal women, as assessed by the Osteoporosis Self-Assessment Tool for Asians (OSTA) (PUBMED:33070238). Furthermore, an umbrella review identified that inappropriate sleep durations may significantly elevate the risk for various health conditions, including osteoporosis (PUBMED:34435311). In the Korean elderly population, prolonged sleep duration was found to have a significant association with lower bone mineral density (BMD) in elderly women (PUBMED:24724004). Additionally, short sleep duration was associated with low trabecular bone score (TBS) in women, obese adults, and non-Hispanic whites (PUBMED:34615958). Moreover, a large-scale cross-sectional study in a rural population indicated that long duration of nocturnal sleep and early sleep midpoint were independently and jointly associated with a higher risk of osteoporosis (PUBMED:37009967). Lastly, a study on Chinese postmenopausal women found that short sleep duration (less than 7 hours) may increase the risk of osteoporosis (PUBMED:37786334). In summary, self-reported sleep duration is associated with osteoporosis, with both short and excessively long sleep durations being linked to an increased risk of the condition.
Instruction: Do women with pain and bleeding in early pregnancy require a vaginal speculum examination as part of their assessment? Abstracts: abstract_id: PUBMED:24217155 Do women with pain and bleeding in early pregnancy require a vaginal speculum examination as part of their assessment? Aims: To investigate whether a vaginal speculum examination (VSE) prior to a transvaginal scan (TVS) alters the diagnosis or management of women who complain of bleeding in early pregnancy. Methods: A prospective study. Women were asked to describe their bleeding as heavy, moderate or light and to consent to VSE prior to TVS. TVS was performed to obtain a final diagnosis of pregnancy outcome. Results: 221 consecutive women were included in the study. In 14.5% (n = 32) complaining of heavy bleeding, blood was seen in all but two VSE and 84% (n = 27) had a miscarriage diagnosed by TVS. Products of conception were removed in 18.8% (n = 6), but this did not alter the subsequent immediate management of any cases. 65.2% (n = 144) of women complained of light bleeding, blood was seen on VSE in 53% (n = 77). Of these women, 25% (n = 19) of those where blood was seen had a miscarriage, compared to 6% (n = 4) of women where blood was not seen. A cervical ectropion was visualised in 11.7% (n = 26) and 2.3% (n = 5) had a cervical polyp. No other clinically significant pathology was detected. Conclusion: The amount of bleeding reported by women in early pregnancy relates well with VSE findings. Performing a VSE did not alter the subsequent management of these patients. This study demonstrates that routine objective assessment of blood by a clinician performing VSE prior to a TVS is unnecessary. abstract_id: PUBMED:33589138 Endometriosis presenting as vaginal polyps: outpatient vaginoscopic treatment. Objective: To demonstrate an outpatient vaginoscopic technique for treating multiple vaginal polyps. Design: Demonstration of surgical technique using slides, pictures, and video. Setting: Private hospital. Patient(s): Thirty-two-year-old nulligravid woman presenting to the gynecology clinic with one episode of intermenstrual bleeding, regular menstrual cycles with normal flow, and no history of dysmenorrhoea or dyspareunia. The genital local examination was normal, and speculum examination showed multiple vaginal lesions like polyps in the proximal posterior two-thirds and right lateral vaginal walls. Her transvaginal ultrasound read a normal uterus with a right ovarian simple cyst. Intervention(s): The surgeon performed an outpatient operative vaginoscopy using a 5-mm continuous flow office hysteroscope with a 2.9-mm rod lens optical system and a 5F working channel. Distension of the vagina was achieved with a normal saline solution, and an intrauterine pressure of 50 to 60 mm Hg was maintained by an irrigation and aspiration electronic pump. An inspection of the vaginal walls, fornices, and the external cervical os (Fig. 1) revealed 10 vaginal lesions like polyps in the proximal two-thirds of the posterior and right lateral vaginal wall. The vaginal lesions (Fig. 2) varied in size from 0.5 cm to 4 cm. An excisional biopsy was performed and the sample sent for histopathologic evaluation. The vaginal lesions &lt;2 cm in length were excised by cutting the base with scissors or using a bipolar vaporization electrode, which was connected to an electrocautery unit by a bipolar high-frequency cord. Vaginal lesions &gt;2 cm were excised with the TruClear 5C Hysteroscopic Tissue Removal System (HTRS) with a zero-degree scope using the 2.9-mm incisor with a 5-mm cutting window at one end attached to a reusable handpiece with two connectors-one to the motor unit and second to the suction bottle with a collection bag. The overall diameter of TruClear 5C is 5.7 mm, and the optic size is 0.8 mm. The same irrigation pump is compatible with HTRS, and the pressure was increased to 150 mm Hg to maintain vaginal distension. Three factors influenced our decision to use the HTRS intraoperatively: the number and size of the vaginal lesions and the surgical time in the outpatient setting. A mechanical system that works on the principle of excising and aspirating tissue, the HTRS incisor has a rotatory action with the excising window placed against the most distal part of the vaginal lesions. The cutting action is controlled via a foot pedal attached to a motor control with 800 rotations per minute. The handpiece remains stationary while the polyp is excised and aspirated through the window into the collection bag. Minimal bleeding occurred and stopped spontaneously. The institutional ethics committee exempted this case report from review, and we obtained informed written consent from the patient. Main Outcome Measure(s): All vaginal lesions excised in an outpatient setting via vaginoscopy technique without anesthesia. Result(s): The operative time with the Bettocchi hysteroscope was 14 minutes, and HTRS was 6 minutes. The patient did not complain of pain but did describe minimal discomfort, rated on the visual analog scale as 2 (where ≥5 is severe pain). She was discharged 1 hour later. The histopathology was reported as vaginal endometriosis (ectopic presentation of endometriosis is rare, accounting for 0.02% of cases). After surgery, she was started on cyclical oral contraceptive pills (OCP) in the combination of 30 mg of ethinyl estradiol + 2 mg of dienogest because she desired to delay pregnancy by 1 year. She remained asymptomatic for 6 months. These contraceptive hormones are available in the form of oral pills, vaginal rings, and transdermal patches, and a physician can provide OCP continuously or cyclically. Continuous OCP is more efficacious for control of dysmenorrhoea, but cyclical OCP is preferred because it is affordable, tolerable, effective, produces no unpredictable bleeding, and slows the progression of the disease. (A cohort study found the contraceptive vaginal ring to be more effective for symptom-control in rectovaginal endometriosis with higher patient satisfaction than the transdermal patch; vaginal rings or transdermal patches are not available in some countries.) CONCLUSION(S): Vaginoscopy allows a more in-depth visualization of the vagina with complete inspection and removal of all polyps. Vaginoscopy is feasible in the outpatient setting and allows a comfortable, ergonomic position for the surgeon. Vaginoscopy or no-touch technique avoids the use of a speculum or tenaculum and results in minimal pain during the outpatient procedure. abstract_id: PUBMED:37635592 The feasibility of introducing early pregnancy assessment unit in Nigerian hospitals: A pilot study from a private hospital in Enugu, Southeast Nigeria. Background: Early pregnancy assessment unit (EPAU) is a dedicated unit that provides specialist care such as definitive ultrasonographic and biochemical assessment to women who have or have had vaginal bleeding or pain before 16 weeks of pregnancy. Such patients usually have a positive pregnancy test with a scan showing pregnancy of an unknown location, previous ectopic pregnancy, recurrent miscarriages, or previous molar pregnancy. Such a holistic and prompt approach leads to better outcome in the management of women with early pregnancy and other complications. Aim: This study aimed to describe the diagnosis and outcome of early pregnancy pain and bleeding, managed by rapid ultrasonographic and biochemical assessments by a dedicated gynecological team in a private hospital to determine the feasibility of establishing early pregnancy assessment units (EPAUs) in Nigeria. Materials And Methods: This is a cross-sectional study of eligible cases of early pregnancy bleeding and pain in a private gynecological hospital in Enugu, Nigeria. Results: During the 9-year study period, 160 women with early pregnancy pain or bleeding were analyzed. The mean age of the participants was 32.3 ± 7.2 years, and they were mostly multipara (68.7%) and Christians and of the Igbo tribe (91.3%). The final diagnosis after clinical and radiological evaluations showed that the majority had miscarriages (82.5%), while 17.5% had ectopic gestations. All the participants had their definitive diagnosis and treatment within 24 hours of presentation. There was no recorded case of maternal death. Conclusion: EPAUs are feasible in a Nigerian setting provided that trained, dedicated staff and equipment are provided. abstract_id: PUBMED:1902383 Management of women referred to early pregnancy assessment unit: care and cost effectiveness. Objective: To assess the efficiency of an early pregnancy assessment unit in the care of women with bleeding or pain in early pregnancy. Design: Analysis of women attending in the first year of the unit's operation and in the six months immediately before its introduction. Setting: Early pregnancy assessment unit in a district general hospital serving a population of 310,000. Patients: 1141 women referred with bleeding or pain in early pregnancy. Main Outcome Measures: Length of stay in hospital required for diagnosis and treatment. Results: Before the unit was established the mean admission time was one and a half (range half to three) days for women who required no treatment and three (one and a half to five) days in women requiring evacuation of uterus. These times were reduced to two hours as an outpatient and one day respectively for most women after the unit was established. Between 318 and 505 women were estimated to have been saved from unnecessary admission, and 233 had their stay reduced; the associated saving was between pounds 95,000 and pounds 120,000 in one year. Conclusions: The early pregnancy assessment unit improved the quality of care and also produced considerable savings in financial and staff resources. abstract_id: PUBMED:32638095 Predictors of complete miscarriage after expectant management or misoprostol treatment of non-viable early pregnancy in women with vaginal bleeding. Purpose: To identify predictors of complete miscarriage after expectant management or misoprostol treatment of non-viable early pregnancy in women with vaginal bleeding. Methods: This was a planned secondary analysis of data from a published randomized controlled trial comparing expectant management with vaginal single dose of 800 µg misoprostol treatment of women with embryonic or anembryonic miscarriage. Predefined variables-serum-progesterone, serum-β-human chorionic gonadotropin, parity, previous vaginal deliveries, gestational age, clinical symptoms (bleeding and pain), mean diameter and shape of the gestational sac, crown-rump-length, type of miscarriage, and presence of blood flow in the intervillous space-were tested as predictors of treatment success (no gestational sac in the uterine cavity and maximum anterior-posterior intracavitary diameter was ≤ 15 mm as measured with transvaginal ultrasound on a sagittal view) in univariable and multivariable logistic regression. Results: Variables from 174 women (83 expectant management versus 91 misoprostol) were analyzed for prediction of complete miscarriage at ≤ 17 days. In patients managed expectantly, the rate of complete miscarriage was 62.7% (32/51) in embryonic miscarriages versus 37.5% (12/32) in anembryonic miscarriages (P = 0.02). In multivariable logistic regression, the likelihood of success increased with increasing gestational age, increasing crown-rump-length and decreasing gestational sac diameter. Misoprostol treatment was successful in 80.0% (73/91). No variable predicted success of misoprostol treatment. Conclusions: Complete miscarriage after expectant management is significantly more likely in embryonic miscarriage than in anembryonic miscarriage. Gestational age, crown-rump-length, and gestational sac diameter are independent predictors of success of expectant management. Predictors of treatment success may help counselling women with early miscarriage. abstract_id: PUBMED:26336017 Management of vaginal penetration phobia in Arab women: a retrospective study. Background And Objectives: Vaginal penetration phobia is a common and distressing problem world.wide. It interferes with vaginal penetrative sexual relations, and leads to unconsummated marriage (UCM). This problem may be heightened in Arab women, due to cultural taboos about pain and bleeding, that may be associated with the first coital experience after marriage. Data about this problem is scarce in Arab societies. The aim of this study was to evaluate the response of these women and their husbands to an individualized, psychotherapeutic assessment and treatment to resolve this problem. Design And Settings: Retrospective descriptive in a general gynecology community setting over a 6-year period. Methods: The study involved a retrospective sequential cohort of 100 Arab couples with UCM due to the woman's VPP. They were evaluated by a female gynecologist in out patient clinics. Data was collected through chart review, and telephone interviews. Final analysis was performed on 100 Arab couples, who satisfied the inclusion criteria. They were followed up to assess their response to an individualized, structured treatment protocol. The treatment combined sex education with systematic desensitization, targeting fear and anxiety as.sociated with vaginal penetration. Results: A total of 96% of the studied group had a successful outcome after an average of 4 sessions. Penetrative intercourse was reported by the tolerance of these women; further pregnancy was achieved in 77.8 % of the infertile couples. Conclusion: Insufficient knowledge of sexual intercourse is a major contributor to the development of VPP in the sampled population. It appears that they respond well to an individualized, structured treatment protocol as described by Hawten 1985 (regardless of other risk factors associated with vaginismus). abstract_id: PUBMED:36638667 A novel obstetrical surgical intervention - New episiotomy: Case series study. Objectives: To establish anatomical structures responsible for creating a resistance force on a fetal head during vaginal delivery at term; to develop a new episiotomy to widen the vaginal outlet. Design: A prospective observational case series study. Setting: International centers. Population: Eighteen consecutive women at term during vaginal delivery. Methods: V-shape excision made on the hymeneal membrane, hymeneal ring, and hymeneal plate without extending an incision to the posterior perineal structures. Outlectorrhaphy (repairing the vaginal outlet to repair the surgical defect. Excisional specimens are subjected to histological examinations. Main Outcome Measures: The primary maternal outcome measures were postpartum posterior perineum pain associated with vaginal outlectomy. The secondary outcomes measured a) complication of vaginal outlectomy; b) occurrence of dyspareunia after vaginal outlectomy; c) applicability of vaginal outlectomy; d) neonatal outcome measured by an APGAR score to show how the neonate tolerated the vaginal delivery process with vaginal outlectomy implementation. Results: The V-shape excision widened the vaginal outlet sufficiently for a vaginal delivery without a perineal incision. Outlectorrhaphy requires two or three simple interrupted sutures to close surgical defects and eliminate the vaginal gapping appearance. Eleven out of eighteen women were primigravida (61.1 %), and four multiparas (38.9 %) delivered a live newborn vaginally with a newly developed vaginal outlectomy. A median newborn's weight was 350 g ± 250 g, and APGAR scores at 5 min were median of 9 ± 1. One out of eighteen patients experienced extension of the skin, perineal fascia, and bulbospongiosus muscle. Bleeding from the extension tearing of vaginal outlectomy was heavier than from uncomplicated vaginal outlectomy but negligible. Immediately postpartum, patients reported no moderate or severe perineal pain. At a 3-month postpartum, none of the subjects reported superficial or deep dyspareunia. Histology from the vaginal outlectomy specimens described the direct connection of the longitudinal vaginal smooth muscles to the hymeneal plate and compact connective tissues (like cartilage) of the hymeneal ring. Additionally, the histological examination showed the absence of the perineal skeletal muscle within the vaginal outlectomy specimens. Conclusions: The vaginal outlet is responsible for the most vital resistance force on a fetal head during delivery. Vaginal outlectomy widens the vaginal outlet sufficiently for a fetal vaginal birth with minimal bleeding and is easy to repair. No moderate-severe perineal pain or dyspareunia occurred in this study group. Tweetable Abstract: Vaginal outlectomy eliminates postpartum moderate-to-severe perineal pain and superficial dyspareunia. The posterior perineum creates no resistance force on the fetal head during vaginal delivery to warrant incision. abstract_id: PUBMED:6594157 The use of 16,16-dimethyl-trans delta 2 prostaglandin E1 methyl ester (gemeprost) vaginal pessaries for the termination of pregnancy in the early second trimester. A comparison with extra-amniotic prostaglandin E2. The use of gemeprost pessaries has been compared in an open randomized trial with the extra-amniotic infusion of prostaglandin E2 (PGE2) for the termination of pregnancy between 12 and 16 weeks gestation. The success rates were 77% and 79% for the pessary and infusion group respectively, and these rates were unaffected by parity. There was no significant difference in the cumulative abortion rate between the two groups, nor were there differences in the induction-abortion interval, nor in the time taken to the onset of pain or bleeding. However, women in the pessary group required significantly less analgesia than those in the infusion group. Side-effects, experienced both during treatment and during the 6 weeks after abortion, were similar in both groups. Gemeprost vaginal pessaries are an effective alternative to the extra-amniotic infusion of PGE2 for the termination of pregnancy in the early second trimester. abstract_id: PUBMED:7426533 The use of 16-16 dimethyl trans delta 2 PGE1 methyl ester (ONO 802) vaginal suppositories for the termination of early pregnancy. A comparative study. Vaginal suppositories containing the 16-16 dimethyl trans delta 2 PGE1 methyl ester (ONO 802) were used for the induction of abortion in early pregnancy, and this non-surgical technique was compared with suction termination performed under local anaesthesia or general anaesthesia. Ninety patients were recruited to the study and divided equally between the three groups. Complete abortion was induced in 87 per cent of the patients treated with ONO 802 suppositories, which compared favourably with 87 and 100 per cent for the patients who had suction terminations under local anaesthesia and general anaesthesia respectively. The vaginal suppositories induced uterine bleeding in all patients and the mean time of onset of lower abdominal pain was 2 hours 55 minutes. The average numbers of episodes of vomiting and diarrhoea for patients given ONO 802 suppositories were 0.9 and 0.7 respectively. Menstrual blood loss was measured objectively in all groups and no significant differences between the three methods could be found. In selected cases, ONO 802 vaginal suppositories would seem to be safe and reasonably effective for the termination of early pregnancy. abstract_id: PUBMED:29119588 An Early Pregnancy Unit in the United States: An Effective Method for Evaluating First-Trimester Pregnancy Complications. In the United Kingdom, early pregnancy assessment clinics have existed since the early 1990s and have become the reference standard for evaluating and treating women with first-trimester pregnancy complications. These units have now been established in many countries and have been found to be effective and efficient, saving money and unnecessary emergency department visits and hospital admissions. To our knowledge, no such model has been described in the United States. A PubMed search using a combination of "early pregnancy unit," "early pregnancy assessment clinic," and "United States," "U.S.," and "America" on May 14, 2017, yielded no results. Denver Health, a safety net hospital in Denver, Colorado, has established the first known early pregnancy unit in the United States. Patients with positive urine pregnancy test results who are in their first trimester by best estimation, have not had a prior ultrasound examination, and present to their primary care providers with pain, bleeding, history of an ectopic pregnancy, history of a tubal ligation, or conception with an intrauterine device in place are eligible to be evaluated in the early pregnancy unit. This article describes our clinical setup, methods, and findings in the first 2 years of the unit's inception with the intention of serving as a model for the establishment of more early pregnancy units throughout the United States. Answer: Based on the provided abstracts, it appears that a vaginal speculum examination (VSE) may not be necessary for all women with pain and bleeding in early pregnancy as part of their assessment. A study investigating whether a VSE prior to a transvaginal scan (TVS) alters the diagnosis or management of women with bleeding in early pregnancy found that performing a VSE did not alter the subsequent management of these patients. The study concluded that routine objective assessment of blood by a clinician performing VSE prior to a TVS is unnecessary (PUBMED:24217155). Furthermore, the establishment of early pregnancy assessment units (EPAUs), which provide specialist care such as definitive ultrasonographic and biochemical assessment to women with early pregnancy complications, has been shown to lead to better outcomes. These units are feasible and can provide definitive diagnosis and treatment within 24 hours of presentation, without necessarily requiring a VSE as part of the initial assessment (PUBMED:37635592). Additionally, the management of women referred to an early pregnancy assessment unit showed that the introduction of such units improved the quality of care and produced considerable savings in financial and staff resources, suggesting that a streamlined approach to assessment, which may not include VSE, can be effective (PUBMED:1902383). In conclusion, while a VSE can provide information about the amount of bleeding and may identify conditions such as cervical ectropion or polyps, it may not be essential for the management of all women with pain and bleeding in early pregnancy, especially when other assessment methods such as TVS and EPAUs are available and can provide rapid and definitive diagnosis and management (PUBMED:24217155; PUBMED:37635592; PUBMED:1902383).
Instruction: Are auditory steady-state responses a good tool prior to pediatric cochlear implantation? Abstracts: abstract_id: PUBMED:26092547 Are auditory steady-state responses a good tool prior to pediatric cochlear implantation? Introduction: ASSR allow frequency-specific evaluation in intensities up to 120dB HL and detection of residual hearing in patients with severe-to-profound hearing loss. Aim: to compare ASSR thresholds and behavioral test results in children with suspected severe-to-profound hearing loss. Methods: Cross sectional study to compare ASSR and behavioral responses (VRA or audiometry) in 63 pediatric cochlear implant candidates (126 ears) aged between 6 and 72 months. We included children with normal otomicroscopy, absent responses to click-ABR and otoaccoustic emissions. We excluded children with inner ear malformations, auditory neuropathy spectrum disorder or who did not complete VRA or achieve EEG noise&lt;30nV during the ASSR test. Air-conduction ASSR stimuli were continuous sinusoidal tones presented at 0.5, 1, 2 and 4kHz starting at 110dB HL. Behavioral thresholds were acquired with warble tones presented at 0.5, 1, 2 and 4kHz in each ear through insert or head phones at maximum presentation level of 120dB HL. Results: Behavioral thresholds were obtained in 36.7% (185/504) of all frequencies in all subjects, 9% in intensities &gt;110dB HL. Among 504 ASSR measurements, 53 thresholds were obtained (10.5%). Overall 89.5% of the tested frequencies did not show any response at 110dB HL. Most responses were at 500Hz. Mean differences between behavioral and ASSR thresholds varied from 0.09 to 8.94dB. Twenty-seven comparisons of behavioral and ASSR thresholds were obtained: 12 at 0.5kHz, 9 at 1kHz, 5 at 2kHz and 1 at 4kHz. Absent responses were observed in both tests in 38.1% at 0.5kHz, 52.4% at 1kHz, 74.6% at 2kHz and 81.0% at 4kHz. Specificity was&gt;90% at 1, 2 and 4kHz. In ears with no behavioral response at 120dB HL all ASSR thresholds were in the profound hearing loss range, 90% of them were ≥110dB HL. Conclusion: Among 63 pediatric CI candidates, absent responses to high-intensity ASSR was the major finding (specificity&gt;90%) predicting behavioral thresholds in the profound hearing loss range. These findings can be helpful to confirm the decision for cochlear implantation. abstract_id: PUBMED:29672320 Correlation between Preoperative Auditory Steady-State Response and Postoperative Electrically Evoked Auditory Brainstem Response and T Level in Cochlear Implantation for Child Patients with Inner-Ear Malformations. Objective: This study aims to investigate the correlation between thresholds of preoperative multiple auditory steady-state response (ASSR) and electrically evoked auditory brainstem response (EABR) and the behavioral threshold. Methods: A total of 72 patients were elected to receive a multichannel cochlear implant. According to the residual hearing determined in a preoperative test using high-, moderate-, and low-frequency ASSR, these patients were divided into the following 2 groups: residual hearing and hearing loss. The EABR and behavioral thresholds 1 year after implantation were assayed, and differences between these 2 parameters were compared. Results: Among the high-, moderate-, and low-frequency residual hearing groups, the EABR and behavioral thresholds of patients 1 year after implantation were significantly lower than those in the hearing loss group, and the differences were statistically significant (p &lt; 0.01). Conclusion: Before the operation, ASSR results can be used to predict the efficacy of cochlear implantation in patients, and they serve as one of the reference conditions for choosing the ear for implantation. However, the threshold of ASSR is not equivalent to the actual auditory threshold of patients after implantation, and the deviation between these 2 thresholds is more significant at low frequencies. abstract_id: PUBMED:29566882 Auditory steady-state response in cochlear implant patients. Introduction And Objective: Auditory steady state responses to continuous amplitude modulated tones at rates between 70 and 110Hz, have been proposed as a feasible alternative to objective frequency specific audiometry in cochlear implant subjects. The aim of the present study is to obtain physiological thresholds by means of auditory steady-state response in cochlear implant patients (Clarion HiRes 90K), with acoustic stimulation, on free field conditions and to verify its biological origin. Methods: 11 subjects comprised the sample. Four amplitude modulated tones of 500, 1000, 2000 and 4000Hz were used as stimuli, using the multiple frequency technique. The recording of auditory steady-state response was also recorded at 0dB HL of intensity, non-specific stimulus and using a masking technique. Results: The study enabled the electrophysiological thresholds to be obtained for each subject of the explored sample. There were no auditory steady-state responses at either 0dB or non-specific stimulus recordings. It was possible to obtain the masking thresholds. A difference was identified between behavioral and electrophysiological thresholds of -6±16, -2±13, 0±22 and -8±18dB at frequencies of 500, 1000, 2000 and 4000Hz respectively. Conclusions: The auditory steady state response seems to be a suitable technique to evaluate the hearing threshold in cochlear implant subjects. abstract_id: PUBMED:33866399 Cochlear implantation versus auditory brainstem implantation in children with auditory nerve deficiencies. Background: Cochlear nerve deficiency is one of the known causes of congenital sensorineural hearing loss. Management of hearing loss in children with cochlear nerve deficiency poses a multidimensional challenge. The absent or hypoplastic cochlear nerve may prevent electrical stimulation from reaching the brainstem and the auditory cortex. A deficient cochlear nerve can be associated with other inner ear malformations, which may diminish the success of cochlear implantation in those children. Promising results in adults after auditory brainstem implantation led to the expansion of candidacy to include the pediatric populations who were contraindicated for CIs. Objective: To review the outcomes of cochlear implantation versus that of auditory brainstem implantation in children with various conditions of the auditory nerve. Methods: This retrospective chart review study comprised two pediatric groups. The first group consisted of seven ABI recipients with cochlear nerve aplasia and the second group consisted of another seven children with cochlear nerve deficiencies who underwent CI surgery. The participants' auditory skills and speech outcomes were assessed using different tests selected from the Evaluation of Auditory Responses to Speech (EARS) test battery. Results: There were some individual variations in outcomes depending on the status of the auditory nerve. The mean CAP score of the ABI group was 2.87, while the mean SIR score was 0.62. On the other hand, the mean CAP score of the CI group was 1.29, while the mean SIR score was 0.42. Conclusion: Our results are in good agreement with the reported auditory perception and speech and language development outcomes of pediatric auditory brainstem implantation. We added to the growing body of literature on the importance of verifying and identifying the status of the cochlear nerve in the decision-making process of the surgical management of those pediatric groups. abstract_id: PUBMED:29274947 Electrically-evoked auditory steady-state responses as neural correlates of loudness growth in cochlear implant users. Loudness growth functions characterize how the loudness percept changes with current level between the threshold and most comfortable loudness level in cochlear implant users. Even though loudness growth functions are highly listener-dependent, currently default settings are used in clinical devices. This study investigated whether electrically-evoked auditory steady-state response amplitude growth functions correspond to behaviorally measured loudness growth functions. Seven cochlear implant listeners participated in two behavioral loudness growth tasks and an EEG recording session. The 40-Hz sinusoidally amplitude-modulated pulse trains were presented to CI channels stimulating at a more apical and basal region of the cochlea, and were presented at different current levels encompassing the listeners' dynamic ranges. Behaviorally, loudness growth was measured using an Absolute Magnitude Estimation and a Graphic Rating Scale with loudness categories. A good correspondence was found between the response amplitude functions and the behavioral loudness growth functions. The results are encouraging for future advances in individual, more automatic, and objective fitting of cochlear implants. abstract_id: PUBMED:37688847 Effects of analysis window on 40-Hz auditory steady-state responses in cochlear implant users. Auditory steady-state responses (ASSRs) are phase-locked responses of the auditory system to the envelope of a stimulus. These responses can be used as an objective proxy to assess temporal envelope processing and its related functional outcomes such as hearing thresholds and speech perception, in normal-hearing listeners, in persons with hearing impairment, as well as in cochlear-implant (CI) users. While ASSRs are traditionally measured using a continuous stimulation paradigm, an alternative is the intermittent stimulation paradigm, whereby stimuli are presented with silence intervals in between. This paradigm could be more useful in a clinical setting as it allows for other neural responses to be analysed concurrently. One clinical use case of the intermittent paradigm is to objectively program CIs during an automatic fitting session whereby electrically evoked ASSRs (eASSRs) as well as other evoked potentials are used to predict behavioural thresholds. However, there is no consensus yet about the optimal analysis parameters for an intermittent paradigm in order to detect and measure eASSRs reliably. In this study, we used the intermittent paradigm to evoke eASSRs in adult CI users and investigated whether the early response buildup affects the response measurement outcomes. To this end, we varied the starting timepoint and length of the analysis window within which the responses were analysed. We used the amplitude, signal-to-noise ratio (SNR), phase, and pairwise phase consistency (PPC) to characterize the responses. Moreover, we set out to find the optimal stimulus duration for efficient and reliable eASSR measurements. These analyses were performed at two stimulation levels, i.e., 100% and 50% of the dynamic range of each participant. Results revealed that inclusion of the first 300 ms in the analysis window leads to overestimation of response amplitude and underestimation of response phase. Additionally, the response SNR and PPC were not affected by the inclusion of the first 300 ms in the analysis window. However, the latter two metrics were highly dependent on the stimulus duration which complicates comparisons across studies. Finally, the optimal stimulus duration for quick and reliable characterization of eASSRs was found to be around 800 ms for the stimulation level of 100% DR. These findings suggest that inclusion of the early onset period of eASSR recordings negatively influences the response measurement outcomes and that efficient and reliable eASSR measurements are possible using stimuli of around 800 ms long. This will pave the path for the development of a clinically feasible eASSR measurement in CI users. abstract_id: PUBMED:26906677 Can place-specific cochlear dispersion be represented by auditory steady-state responses? The present study investigated to what extent properties of local cochlear dispersion can be objectively assessed through auditory steady-state responses (ASSR). The hypothesis was that stimuli compensating for the phase response at a particular cochlear location generate a maximally modulated basilar membrane (BM) response at that BM position, due to the large "within-channel" synchrony of activity. This would lead, in turn, to a larger ASSR amplitude than other stimuli of corresponding intensity and bandwidth. Two stimulus types were chosen: 1] Harmonic tone complexes consisting of equal-amplitude tones with a starting phase following an algorithm developed by Schroeder [IEEE Trans. Inf. Theory 16, 85-89 (1970)] that have earlier been considered in behavioral studies to estimate human auditory filter phase responses; and 2] simulations of auditory-filter impulse responses (IR). In both cases, also the temporally reversed versions of the stimuli were considered. The ASSRs obtained with the Schroeder tone complexes were found to be dominated by "across-channel" synchrony and, thus, do not reflect local place-specific information. In the case of the more frequency-specific stimuli, no significant differences were found between the responses to the IR and its temporally reversed counterpart. Thus, whereas ASSRs to narrowband stimuli have been used as an objective indicator of frequency-specific hearing sensitivity, the method does not seem to be sensitive enough to reflect local cochlear dispersion. abstract_id: PUBMED:11981390 Outcome of cochlear implantation in pediatric auditory neuropathy. Objective: Auditory neuropathy, characterized by absence or abnormality of auditory brainstem responses and normal otoacoustic emissions, is often associated with particularly poor response to amplification. Outcome data from four such pediatric patients who received cochlear implants are discussed. Study Design: Four patients from the Carolina Children's Communicative Disorders Program were identified as having received a diagnosis of auditory neuropathy before implantation with a Clarion cochlear implant. Speech data collected after implantation were compared with data from control pediatric implant patients, matched for age at implantation and duration of implant use. Electrically evoked auditory brainstem response data and electrically evoked acoustic reflex data were also obtained. Methods: Routine clinical procedures were used to obtain speech outcome data. A 75-micros biphasic 21.1-Hz pulse train served as the eliciting stimulus for both evoked auditory brainstem responses and reflex measures, which were obtained contralateral to the implant. Results: Speech data were comparable with those obtained from the general population of pediatric patients receiving cochlear implants at this center. Identifiable evoked auditory brainstem response data were obtained in all subjects on at least two of the three electrodes tested, and variability was comparable with that observed in other children with implants. A robust electrically evoked acoustic reflex with no decay was observed at estimated M-level in all children. Conclusion: The data gathered to date suggest that the outcome of cochlear implantation in these four patients is not significantly different from that in other pediatric implant patients. Physiologic data suggest that the implant was able to overcome the desynchronization hypothesized to underlie auditory neuropathy. abstract_id: PUBMED:29871199 Electrically evoked auditory brainstem responses in patients with absence of cochlear modiolus receiving cochlear implantation Objective:The purpose of this study was analyzing the characteristic of the electrically evoked auditory brainstem responses (EABR) of cochlear implantation (CI) patients with absence of cochlear modiolus (ACM) and comparing the EABR results in deaf patients with normal anatomic structures. Also we assessed the auditory rehabilitation of ACM patients after cochlear implantation.Method:Twenty-eight profoundly deaf patients were contained, 14 of them were ACM (group A) and the rest were normal anatomic structures (group B). They all received Austria cochlear implants from 2004 to 2015. Postoperative EABR were measured and recorded with No. 2, 6, 11 electrodes in patients receving cochlear implantation. The lead-out rate and threshold of EABR between groups were compared and analyzed. Result:The lead-out rates of EABR in group A was 71.4% and group B was 100%. The thresholds of EABR in group A were higher than group B (P&lt; 0.05). In group A, there were no statistical difference in the thresholds of EABR among electrodes No.2,6 and 11. However, in group B, there were statistical difference in the thresholds of EABR between electrodes No.2 and 11 (P&lt; 0.05), and the thresholds of No.2 was lower than No.11Conclusion:EABR could be used in assessing the objective auditory rehabilitation in ACM patients after cochlear implantation. abstract_id: PUBMED:36580503 Speech perception and the state of sound perception thresholds in patients with cochlear implants Objective: To assess the state of the thresholds of sound perception at speech frequencies in users of cochlear implantation (CI) systems with developed speech skills. Material And Methods: The study involved 30 patients - users of CI systems, of whom 17 girls, 13 boys aged 6 to 14 years. All patients underwent speech audiometry in a free sound field twice (at the beginning of the study and after adjusting the settings of the speech processor). The ASSR (auditory steady-state responses) thresholds were recorded as the second stage to assess the perception of non-speech stimuli after adjusting the settings of the speech processor. Results: Correlation analysis was carried out between the percentage of speech intelligibility and the average ASSR thresholds. There was no statistically significant relationship between lowering the ASSR thresholds and speech intelligibility. The absence of a relationship between speech intelligibility and user satisfaction with the speech processor settings was revealed. Improving the intelligibility of the user's speech by the CI system did not affect the improvement on the response to the addressed speech. Answer: Auditory steady-state responses (ASSRs) have been studied as a tool for evaluating hearing thresholds in pediatric patients prior to cochlear implantation. The research suggests that ASSRs can be a useful tool in certain contexts. One study found that ASSRs allowed for frequency-specific evaluation in intensities up to 120dB HL and were able to detect residual hearing in patients with severe-to-profound hearing loss. The study concluded that absent responses to high-intensity ASSR were a major finding, with specificity greater than 90%, predicting behavioral thresholds in the profound hearing loss range. This suggests that ASSRs can help confirm the decision for cochlear implantation in pediatric candidates (PUBMED:26092547). Another study indicated that preoperative ASSR results could predict the efficacy of cochlear implantation in patients and serve as one of the reference conditions for choosing the ear for implantation. However, it also noted that the ASSR threshold is not equivalent to the actual auditory threshold of patients after implantation, with more significant deviations at low frequencies (PUBMED:29672320). Additionally, ASSRs have been proposed as an alternative to objective frequency-specific audiometry in cochlear implant subjects, with one study obtaining physiological thresholds through ASSR in cochlear implant patients and verifying its biological origin (PUBMED:29566882). However, there are limitations to the use of ASSRs. For instance, a study on cochlear implantation versus auditory brainstem implantation in children with auditory nerve deficiencies highlighted the importance of verifying and identifying the status of the cochlear nerve in the decision-making process, which may not be fully addressed by ASSRs alone (PUBMED:33866399). In summary, ASSRs can be a valuable tool prior to pediatric cochlear implantation, particularly for confirming severe-to-profound hearing loss and aiding in the decision-making process for cochlear implant candidacy. However, they may not fully represent the actual auditory thresholds post-implantation, and other factors such as the status of the cochlear nerve need to be considered. Therefore, while ASSRs are a good tool, they should be used in conjunction with other assessments for a comprehensive evaluation prior to cochlear implantation.
Instruction: Renin-angiotensin system phenotyping as a guidance toward personalized medicine for ACE inhibitors: can the response to ACE inhibition be predicted on the basis of plasma renin or ACE? Abstracts: abstract_id: PUBMED:24958603 Renin-angiotensin system phenotyping as a guidance toward personalized medicine for ACE inhibitors: can the response to ACE inhibition be predicted on the basis of plasma renin or ACE? Purpose & Methods: Not all hypertensive patients respond well to ACE inhibition. Here we determined whether renin-angiotensin system (RAS) phenotyping, i.e., the measurement of renin or ACE, can predict the individual response to RAS blockade, either chronically (enalapril vs. enalapril + candesartan) or acutely (enalapril ± hydrochlorothiazide, HCT). Results: Chronic enalapril + candesartan induced larger renin rises, but did not lower blood pressure (BP) more than enalapril. Similar observations were made for enalapril + HCT vs. enalapril when given acutely. Baseline renin predicted the peak changes in BP chronically, but not acutely. Baseline ACE levels had no predictive value. Yet, after acute drug intake, the degree of ACE inhibition, like Δrenin, did correlate with ΔBP. Only the relationship with Δrenin remained significant after chronic RAS blockade. Thus, a high degree of ACE inhibition and a steep renin rise associate with larger acute responses to enalapril. However, variation was large, ranging &gt;50 mm Hg for a given degree of ACE inhibition or Δrenin. The same was true for the relationships between Δrenin and ΔBP, and between baseline renin and the maximum reduction in BP in the chronic study. Conclusions: Our data do not support that RAS phenotyping will help to predict the individual BP response to RAS blockade. Notably, these conclusions were reached in a carefully characterized, homogenous population, and when taking into account the known fluctuations in renin that relate to gender, age, ethnicity, salt intake and diuretic treatment, it seems unlikely that a cut-off renin level can be defined that has predictive value. abstract_id: PUBMED:2537557 Need for plasma angiotensin measurements to investigate converting-enzyme inhibition in humans. Since only a minute proportion of total angiotensin-converting enzyme (ACE) is present in plasma, the reliability of conventional in vitro measurements of ACE activity has been questioned. Data presented here demonstrate that the definition of ACE inhibition depends on the methodology used, with different results obtained with different substrates. We have developed a method that provides accurate and precise determinations of "true" angiotensin levels and in vivo ACE activity was estimated by measuring the plasma angiotensin II/angiotensin I ratio. Since the initial interruption of angiotensin II production by an ACE inhibitor stimulates renal renin release, the response can be quantitated by measuring changes in plasma levels of angiotensin I. The actual state of the renin-angiotensin system during ACE inhibition is represented by the plasma angiotensin II level. When ACE inhibition is no longer complete, increased angiotensin I levels bring the system back toward initial angiotensin II concentrations. abstract_id: PUBMED:2690609 Angiotensin-converting enzyme inhibition versus blockade of the renin-angiotensin system. Angiotensin-converting enzyme (ACE) inhibitors have been designed to reduce the generation of angiotensin II, i.e., to block the renin-angiotensin system. Interestingly, there exists a considerable dissociation between the time course of ACE inhibition and that of blockade of the renin-angiotensin system. Due to the greatly improved methodology used to estimate the degree of ACE inhibition in vitro and in vivo, it has become evident that the compensatory increase in renin levels in response to ACE inhibition is the key factor determining the degree and duration of blockade of the renin-angiotensin system resulting from ACE inhibition. A better understanding of these relationships would seem to be useful in determining duration of action and particularly the optimal dose of any ACE inhibitor. abstract_id: PUBMED:2650711 Plasma angiotensin II and the antihypertensive action of angiotensin-converting enzyme inhibition. The measurement of immunoreactive "angiotensin II" in plasma cannot provide an accurate reflection of the efficacy of angiotensin-converting enzyme (ACE) inhibition because different angiotensin fragments interfere in all radioimmunoassays available so far. More complex methods are necessary in order to measure specifically angiotensin-(1-8)octapeptide. With such methodology it can be shown that no tolerance develops to the angiotensin II-reducing effect of ACE inhibitors after prolonged administration. Marked reduction of angiotensin II levels can be shown even in patients with primary aldosteronism. At peak blockade, the level of plasma angiotensin II is still related to circulating active renin and angiotensin I. Accordingly, because ACE inhibitors raise circulating angiotensin I in a dose-dependent fashion, this should be taken into account when dosing ACE inhibitors. The hypothesis that tissue renin-angiotensin systems play an important independent role in determining vasomotor tone is very interesting. However, any discussion on whether tissue or plasma renin determines the pharmacological effect of ACE inhibitors should be based on the simultaneous measurement of true angiotensin II in tissue and plasma under steady-state conditions. abstract_id: PUBMED:3054274 The renin-angiotensin system in diabetic patients We review available data on the activity of the renin-angiotensin system (RAS), responsiveness to angiotensin II (ANG II), ANG II receptor number, and effects of inhibition of the RAS by angiotensin I converting enzyme (ACE) inhibitors in patients with diabetes mellitus. Most authors, including ourselves, observed a normal or enhanced activity of the RAS in metabolically stable diabetics. Increased but also reduced activity of the RAS was described in nephropathic diabetes. This is in contrast to the common suggestion that the RAS of diabetics is generally suppressed and functionally inactive. The last assumption was mainly based on the finding of reduced ANG II receptor numbers in anorectic, severely hyperglycemic rats. These findings could not be reproduced in man, and a higher ANG II receptor concentration on platelets of diabetics goes in parallel with the frequent finding of an enhanced pressor response to infused ANG II in diabetes. This increased responsiveness is most probably of functional importance since the RAS is not suppressed - as one would expect - in the face of a supranormal body sodium content. A number of data also indicate that renal resistance vessels display increased responsiveness to ANG II in diabetics. This may be a reason for hyperfiltration. This notion is further supported by the reduction of albuminuria which is usually observed following inhibition of the RAS with ACE inhibitors, and which may be an index of reduction of glomerular capillary pressure in human diabetes. abstract_id: PUBMED:9711055 The renin-angiotensin system in cardiovascular diseases Background: The renin-angiotensin system is mainly involved in several cardiovascular diseases and in the pathophysiology of heart failure. It exists as a circulating and a local system which can be differently regulated. Interventions in this system by angiotensin-converting enzyme (ACE) antagonists or angiotensin-receptor antagonists slow the progression of heart failure and result in prolongation of life expectancy and improvement of hemodynamics. Mechanisms Of Action: The main underlying mechanisms are: 1. Heart failure results in activation of the renin-angiotensin system as a compensatory mechanism with elevation of circulating angiotensin II, norepinephrine and vasopressin. Antagonists of this compensatory mechanisms acutely result in improvement of the hemodynamic situation. 2. Elevated circulating and local renin-angiotensin systems cause chronic structural myocardial and vascular effects. Angiotensin-converting enzyme antagonists and angiotensin-receptor blockers modulate and partly antagonize these structural changes such as myocardial hypertrophy, myocardial fibrosis and vascular proliferative responses. Gene and receptor regulation of the system are currently not fully understood and are subject of intensive research. 3. The renin-angiotensin system is closely related to the bradykinin system and thus indirectly to nitric oxide and endothelial function. Bradykinin has multiple other effects on the hemostatic system as a well as on the myocardium and vascular system. Conclusion: These complex interactions require further evaluation. Research with specific bradykinin antagonists will give new insights into this system. abstract_id: PUBMED:29288933 Targeting the renin-angiotensin system as novel therapeutic strategy for pulmonary diseases. The renin-angiotensin system (RAS) plays a major role in regulating electrolyte balance and blood pressure. RAS has also been implicated in the regulation of inflammation, proliferation and fibrosis in pulmonary diseases such as asthma, acute lung injury (ALI), chronic obstructive pulmonary disease (COPD), idiopathic pulmonary fibrosis (IPF) and pulmonary arterial hypertension (PAH). Current therapeutics suffer from some drawbacks like steroid resistance, limited efficacies and side effects. Novel intervention is definitely needed to offer optimal therapeutic strategy and clinical outcome. This review compiles and analyses recent investigations targeting RAS for the treatment of inflammatory lung diseases. Inhibition of the upstream angiotensin (Ang) I/Ang II/angiotensin receptor type 1 (AT1R) pathway and activation of the downstream angiotensin-converting enzyme 2 (ACE2)/Ang (1-7)/Mas receptor pathway are two feasible strategies demonstrating efficacies in various pulmonary disease models. More recent studies favor the development of targeting the downstream ACE2/Ang (1-7)/Mas receptor pathway, in which diminazene aceturate, an ACE2 activator, GSK2586881, a recombinant ACE2, and AV0991, a Mas receptor agonist, showed much potential for further development. As the pathogenesis of pulmonary diseases is so complex that RAS modulation may be used alone or in combination with existing drugs like corticosteroids, pirfenidone/nintedanib or endothelin receptor antagonists for different pulmonary diseases. Personalized medicine through genetic screening and phenotyping for angiotensinogen or ACE would aid treatment especially for non-responsive patients. This review serves to provide an update on the latest development in the field of RAS targeting for pulmonary diseases, and offer some insights into future direction. abstract_id: PUBMED:32693766 Precision Medicine in the Renin-Angiotensin System: Therapeutic Targets and Biological Variability. Pathologies linked to the renin-angiotensin system are frequent, and the drugs used in them are numerous and show great variability in therapeutic effects and adverse reactions. Genetic variants have been detected in the angiotensinogen gene (6), angiotensin-converting enzyme (9), angiotensinconverting enzyme 2 (1), and angiotensin receptor Type 1 (4) among others. However, the large number of studies that have analyzed each of them makes it complex and almost impossible to consider all the existing information. This manuscript aims to review the effects of the different known variants on the expected response of different drugs as a basis for the future development of therapeutic guidelines that seek to implement therapeutic individualization strategies on the renin-angiotensin system. abstract_id: PUBMED:8315514 Blockade of the renin-angiotensin system at different sites: effect on renin, angiotensin and aldosterone. INHIBITION OF RENIN-ANGIOTENSIN SYSTEM: It is now possible to effectively block the renin-angiotensin system at several steps by orally active inhibitors. Blockade of renin, the angiotensin coverting enzyme (ACE) or the angiotensin II (Ang II) type 1 (AT1) receptor leads to different hormonal changes. All three modes of blockade reduce aldosterone and all cause a reactive rise in plasma renin due to removal of angiotensin II tonic inhibition on renal renin release. ASSOCIATED CHANGES IN ANG I AND ANG II: AT1 blockade by losartan is followed by rises in plasma Ang I and Ang II; ACE inhibitors are associated with an increase in plasma Ang I but a fall in Ang II, whereas both plasma Ang I and Ang II fall with renin inhibition. DIFFERENCES IN MODE OF INHIBITION: Potential differences between the mode of blockade of the renin-angiotensin system include differences in the bioavailability of compounds, the effect of ACE on other peptide substrates, particularly bradykinin, the possibility of other actions or unknown substrates for renin and the presence of unblocked AT2 angiotensin receptors in the presence of high levels of Ang II. Whether these will result in clinically important differences remains to be determined. abstract_id: PUBMED:9892171 Pharmacologic interruption of the renin-angiotensin system and the kidney: differential responses to angiotensin-converting enzyme and renin inhibition. Pharmacologic interruption of the renin-angiotensin system has played a crucial role in determining its contribution to physiology, pathophysiology, and has made an enormous contribution to therapeutics. Despite this record of success, no pharmacologist would have chosen the angiotensin-converting enzyme (ACE) step for pharmacologic blockade. As renin is rate-limiting in the cascade and has remarkable specificity for its substrate, renin inhibition would have made a far more attractive target. Indeed, evidence reviewed in this article supports this case. A renin inhibitor developed to block rat renin produced a larger reduction in plasma angiotensin II concentration than did two ACE inhibitors at the top of their dose-response relationship. In humans, renin inhibition led to a larger increase in renal plasma flow in healthy human volunteers studied on a low salt diet to activate the renin system than did ACE inhibitors. Again, the studies used doses in each case at the top of the dose-response relationships for renal hemodynamics. Because the response to AT1 receptor blockade was very similar to the response induced by renin inhibitors, the results suggest that both classes of agent acted via their influence on the renin system. Moreover, the data suggest that 30 to 40% of angiotensin II formation influencing the kidney in the healthy human during renin system activation is formed via renin-dependent, but ACE-independent, pathways. These findings have potentially important therapeutic implications, and certainly help to justify the major therapeutic trials now ongoing and planned. Answer: The response to ACE inhibition cannot be reliably predicted on the basis of plasma renin or ACE levels for personalized medicine. A study examining whether renin-angiotensin system (RAS) phenotyping could predict individual responses to RAS blockade found that baseline renin predicted peak changes in blood pressure (BP) chronically but not acutely, and baseline ACE levels had no predictive value. After acute drug intake, the degree of ACE inhibition correlated with ΔBP, but this relationship did not hold after chronic RAS blockade. The study concluded that RAS phenotyping does not support the prediction of individual BP responses to RAS blockade, and it is unlikely that a cut-off renin level with predictive value can be defined due to known fluctuations in renin related to various factors (PUBMED:24958603). Additionally, the complexity of the RAS and the variability in therapeutic effects and adverse reactions to drugs targeting this system are influenced by genetic variants in genes related to the RAS. This suggests that therapeutic individualization strategies based on genetic screening and phenotyping could be considered for the future development of therapeutic guidelines (PUBMED:32693766). However, the current understanding and available data do not support the use of plasma renin or ACE levels alone as reliable predictors for the response to ACE inhibitors in personalized medicine.
Instruction: Do dietary supplements improve micronutrient sufficiency in children and adolescents? Abstracts: abstract_id: PUBMED:22717218 Do dietary supplements improve micronutrient sufficiency in children and adolescents? Objective: To examine if children use supplements to fill gaps in nutritionally inadequate diets or whether supplements contribute to already adequate or excessive micronutrient intakes from foods. Study Design: Data were analyzed for children (2-18 years) from the National Health and Nutrition Examination Survey 2003-2006, a nationally representative, cross-sectional survey (n = 7250). Diet was assessed using two 24-hour recalls, and dietary supplement use was assessed with a 30-day questionnaire. Results: Prevalence of supplements use was 21% (&lt;2 years) and 42% (2-8 years). Supplement users had higher micronutrient intakes than nonusers. Calcium and vitamin D intakes were low for all children. Inadequate intakes of phosphorus, copper, selenium, folate, and vitamins B-6 and B-12 were minimal from foods alone among 2-8 year olds. However, among 9-18 year olds, a higher prevalence of inadequate intakes of magnesium, phosphorus, and vitamins A, C, and E were observed. Supplement use increased the likelihood of intakes above the upper tolerable intake level for iron, zinc, copper, selenium, folic acid, and vitamins A and C. Conclusions: Even with the use of supplements, more than a one-third of children failed to meet calcium and vitamin D recommendations. Children 2-8 years old had nutritionally adequate diets regardless of supplement use. However, in children older than 8 years, dietary supplements added micronutrients to diets that would have otherwise been inadequate for magnesium, phosphorus, vitamins A, C, and E. Supplement use contributed to the potential for excess intakes of some nutrients. These findings may have implications for reformulating dietary supplements for children. abstract_id: PUBMED:27439655 Micronutrient Intake among Children in Puerto Rico: Dietary and Multivitamin-Multimineral Supplement Sources. Background: Micronutrients are critical for healthy growth and development of children. Micronutrient intake from dietary sources is inadequate among some children and may be improved by use of multivitamin and multimineral (MVMM) supplements. Objective: To assess micronutrient intake from dietary and MVMM supplement sources among 12-year-old children in Puerto Rico. Methods: A representative sample of 732 children enrolled in an oral health study in Puerto Rico, who completed dietary and MVMM assessments through one 24-h recall, were included in this analysis. Micronutrient intake sources were described and compared to the Dietary Reference Intakes (DRIs) using the Estimated Average Requirement when available (used Adequate Intake for vitamin K and pantothenic acid). Micronutrient profiles of MVMM users and non-users were compared using t-tests. Results: Mean intakes of vitamins A, D, E, and K, pantothenic acid, calcium, and magnesium from food and beverage sources were below the DRIs. From food and beverage sources, MVMM users had higher intakes of riboflavin and folate compared to non-users (p &lt; 0.05). When MVMM supplements were taken into account, users had higher intakes of all nutrients except vitamin K. With the help of MVMM, users increased intake of vitamins E, A, D, and pantothenic acid to IOM-recommended levels but calcium, magnesium, and vitamin K remained below guidelines. Conclusion: Micronutrient intake from diet was below the IOM-recommended levels in the total sample. MVMM use improved intake of selected micronutrients and facilitated meeting recommendations for some nutrients. Public health measures to improve micronutrient intake among children in Puerto Rico are needed. abstract_id: PUBMED:28604131 Micronutrient Intake and the Contribution of Dietary Supplements in Hispanic Infants. To calculate micronutrient adequacy among infants and toddlers and to determine the contribution of dietary supplements to this adequacy, micronutrient intake was assessed using two nonconsecutive 24-hour recalls in a sample of 296 infants aged 0-24 months. Micronutrient intake was calculated from foods and beverages and from supplements and compared between nonusers and users of supplements. Percentages of children below the Dietary Reference Intake (DRI) and above the tolerable upper limit intake (UL) were also compared between groups. A total of 241 participants had complete data. The prevalence of dietary supplement use among the sample was 15%. Mean intake of all micronutrients from foods and beverages was similar between nonusers and users of supplements (p &gt; .05) but significantly higher for the following vitamins when supplements were included: D, E, B1, B2, B3, and B6 (p &lt; .05). From foods only, the nutrients with highest percentage of children below the DRI were vitamins D and E and potassium. When supplements were included, this percentage significantly decreased among users compared to nonusers (p &lt; .05) for vitamins D and E. From foods, the UL was exceeded for magnesium, zinc, and vitamin B3. These were similar when supplements were added. Supplements significantly increased the intake of some vitamins. Vitamins D and E had the highest percentage of children below the DRI, which was partly corrected with the use of supplements. The UL was exceeded for magnesium, zinc, and vitamin B3 in many children. It is important to understand these patterns as they may be indicative of future nutritional deficiencies and excesses. abstract_id: PUBMED:32154496 Association of Added Sugars Intake with Micronutrient Adequacy in US Children and Adolescents: NHANES 2009-2014. Background: A concern about the excessive consumption of added sugars is the potential for micronutrient dilution, particularly in children and adolescents; however, the evidence is inconsistent. Objective: We examined the associations between added sugars intake and micronutrient adequacy in US children and adolescents using data from NHANES 2009-2014. Methods: Children and adolescents aged 2-18 (n = 7754), 2-8 (n = 3423), and 9-18 y (n = 4331) were assigned to deciles of added sugars intake based on the average of 2 d of dietary recall. Usual intake of micronutrients was determined using 2 dietary recalls and the National Cancer Institute method. Within each age group, regression analyses were used to assess the relationship between added sugars intake decile and percentage of the population below the estimated average requirements (EARs) for 17 micronutrients. Results: Deciles of added sugars intake (percentage of calories) ranged from &lt;6.4 to &gt;22.8 among children and adolescents aged 2-18 y, with a median intake of 13.3% of calories. Significant positive associations (P &lt; 0.01) between added sugars intake and percentage of the population (aged 2-18 y) below the EAR were found only for calcium, magnesium, and vitamin D. These associations virtually disappeared after dropping the 2 highest and lowest deciles of intake, suggesting a threshold effect; intakes below approximately 19% of calories from added sugars were generally not associated with micronutrient inadequacy. Conclusions: As added sugars intake increased, there was a threshold above which an increase in the prevalence of inadequate intakes for calcium, magnesium, and vitamin D among US children and adolescents was observed. However, even at the lower deciles of added sugars, large percentages of the population were below the EAR for these nutrients, suggesting that adequate intakes of these nutrients are difficult to achieve independent of added sugars intake. abstract_id: PUBMED:37127694 Dietary supplements increase the risk of excessive micronutrient intakes in Danish children. Purpose: Dietary supplement use is common in Northern Europe. Many dietary supplements contain 100% of nutrient reference values (NRV) of micronutrients. This study investigates the contribution of dietary supplements to micronutrient intake, the prevalence of excess intake of micronutrients, and parental characteristics of dietary supplement use in Danish children. Methods: Data on 499 4-10-year-old children from the Danish National Survey of Diet and Physical Activity 2011-2013 were analysed using non-parametric statistics to compare micronutrient intake from the diet and dietary supplements to the reference intake (RI), and to the tolerable Upper Intake Level (UL) for users and non-users of dietary supplements. Furthermore, characteristics of the parents of users and non-users of dietary supplements were examined by logistic regression analysis. Results: Sixty-four percent of the children were dietary supplement users. Multivitamin-mineral supplements were the most frequently used type of supplement (60%). Children of never-smokers were more likely to use supplements than children of smokers. Users had significantly higher total intakes of 15 micronutrients compared to non-users. Intakes of vitamin A, zinc, and iodine from the diet alone exceeded ULs in 12-30% of the children. Use of dietary supplements gave rise to 21-73% of children in exceedance of the aforementioned three ULs as well as the UL for iron (6-45%). Conclusion: Dietary supplement use was common among 4-10-year-old Danish children and resulted in a considerable proportion of users exceeding the ULs for vitamin A, zinc, iodine, and iron. The long-term health consequences of exceeding these ULs for children are unknown. abstract_id: PUBMED:38255378 Exploring Differences in Dietary Diversity and Micronutrient Adequacy between Costa Rican and Mexican Adolescents. Diet diversity becomes especially relevant during adolescence to satisfy the adequate micronutrient intake. Diet diversity (DD) and micronutrient probability of adequacy (PA) were studied in 818 Costa Rican (CR) and 1202 Mexican (MX) adolescents aged 13-18 years. DD was compared using the Minimum Dietary Diversity (MDD) score. Receiver-operating characteristic (ROC) curves were employed to identify the optimal MDD for each sample from the respective countries. The mean MDD for the overall CR sample was 4.17 ± 1.43 points, and for the MX sample, the mean MDD was 4.68 ± 1.48 points. The proportion of adolescents with a DD was significantly higher in Costa Rica than in Mexico (66.5% vs. 55.6%; p &lt; 0.0001). Also, DD was higher in rural Costa Rican adolescents, while no difference was found in the MX adolescents by area of residence. CR adolescents reported significantly higher PA than MX participants for 6 of the 11 micronutrients assessed. The calcium PA in MX adolescents was significantly higher than in the CR sample (MX: 0.84 vs. CR: 0.03; p &lt; 0.0001), while low PA was obtained for iron in both countries (CR: 0.01 vs. MX: 0.07; p &lt; 0.0001). In Costa Rica and Mexico, nutritional interventions and assessing the compliance of food-fortifying programs are needed to improve the PA of diverse micronutrients. abstract_id: PUBMED:25070694 Determinants of dietary supplements use among adolescents in Malaysia. Dietary supplements use is relatively widespread in some countries but knowledge of supplements consumption in Malaysia is limited, more so among adolescents. This study aimed to investigate the determinants of dietary supplements use among Malaysian adolescents using multiple logistic regressions analysis. Data from the Malaysia School-based Nutrition Survey 2012 based on a 2-stage stratified sampling was used. The prevalence of vitamin/mineral supplements and food supplements intake among adolescents was 54.1% and 40.2%, respectively. Usage was significantly higher among younger adolescents and among boys. Dietary supplements were also taken mostly by those who thought they were underweight. The most common vitamin/mineral supplements and food supplements consumed were vitamin C and bee products. The main reason for taking supplements was parents' instruction. These findings can be useful for developing health communications on supplement use targeted to adolescents and their parents. abstract_id: PUBMED:24424459 Micronutrient intakes from food and supplements in Australian adolescents. Objective: Low micronutrient intakes in adolescents are frequently reported. We assessed micronutrient intakes in adolescents to determine whether supplement use optimises intakes. Methods: Dietary intake was assessed using a food frequency questionnaire in 17 year old participating in the Western Australian Pregnancy Cohort (Raine) Study (n=991). We calculated median daily micronutrient intakes in supplement users and non-users (from food sources only and from food and supplements), along with the percentage of adolescents meeting the Estimated Average Requirements (EAR) or Adequate Intake (AI) where appropriate. Results: Intakes of calcium, magnesium, folate and vitamins D and E from food only were low. Although supplements significantly increased micronutrient intakes in supplement users, more than half of supplement users failed to meet the EAR or AI for some key micronutrients. Compared with non-users, supplement users had higher micronutrient intakes from food sources with the exception of vitamins D and B12 and were more likely to achieve the EAR or AI for many micronutrients from food only. Conclusions: Intakes of some key micronutrients were low in this population, even among supplement users. Those facing the greatest risk of micronutrient deficiencies were less likely to use supplements. abstract_id: PUBMED:31371184 Micronutrient intake and biochemistry in adolescents adherent or nonadherent to supplements 5 years after Roux-en-Y gastric bypass surgery. Background: Roux-en-Y gastric bypass (RYGB) is an effective obesity treatment in adults and has become established in adolescents. Lower adherence to supplementation in adolescents confers a risk for long-term nutritional deficiencies. Objectives: To assess adherence to supplementation, micronutrient intake, and biochemistry in adolescents through 5 years after RYGB. Setting: University hospitals, multicenter study, Sweden. Methods: Micronutrient intake and adherence to supplementation were assessed by diet history interviews and biochemistry preoperatively, 1, 2, and 5 years after RYGB in 85 adolescents (67% females), aged 16.5 years (± 1.2) with a body mass index of 45.5 kg/m2 (± 6.0). Adherence was defined as taking prescribed supplements ≥3 times a week. Micronutrient intake and biochemistry were compared with matched controls at 5 years. Results: Over 75% completed the dietary assessments across 5 years after RYGB. Adherence ranged between 44-61% through 5 years. At 5 years, ferritin and hemoglobin decreased (P &lt; .04) and 61% had iron deficiency (P ≤ .001). Among females with iron deficiency, most did not adhere to supplementation (P = .005), and 59% of these had anemia (P &lt; .001). Vitamin D insufficiency continued after surgery and 80% of participants who did not adhere to supplementation had insufficiency (P = .002). Adolescents not adhering had lower levels of vitamin D, B12, and ferritin (females) compared with both adhering adolescents and the control group (all P &lt; .04). Conclusions: Half of adolescents after RYGB reported sufficient long-term adherence to supplementation. Adhering to supplements and reporting a higher micronutrient intake were associated with more favorable biochemistry. Results support the recommendations for monitoring micronutrient intake and biochemistry in all patients who have undergone RYGB surgery, and the recommendation of higher preventive supplementation of vitamin D and iron in both sexes. As hypothesized, adolescents not adhering had a higher prevalence of long-term micronutrient deficiencies. abstract_id: PUBMED:11688080 Micronutrient dietary supplements--a new fourth approach. Currently the three main widely used strategies to control micronutrient deficiencies are food diversification, fortification, and consumption of medicinal supplements. In Tanzania a fourth strategy has been evaluated in school children, and is to be studied in pregnant and lactating women. The dietary supplement comes in the form of a powder used to prepare a fruit flavored drink. Children consumed for six months 25 grams per school day attended, the powder being added to 200 ml of water. The dietary supplement provides between 40 and 100 percent of the RDA of 10 micronutrients, which includes iron, vitamin A and iodine. Unlike medicinal supplements it provides the multiple vitamins and minerals in physiologic, not megadoses. In a well conducted randomized double blind placebo controlled trial, a dietary supplement in the form of a fortified powder fruit drink produced statistically significant differences not only in vitamin A and iron status, but also in the growth of young school age children. Answer: Dietary supplements appear to improve micronutrient sufficiency in children and adolescents to varying degrees, depending on the population and the specific nutrients in question. Several studies have shown that supplement use can lead to higher micronutrient intakes and help some children and adolescents meet recommended dietary intakes for certain nutrients. For instance, in a study of children aged 2-18 years from the National Health and Nutrition Examination Survey 2003-2006, supplement use was associated with higher micronutrient intakes and helped older children (9-18 years) meet dietary recommendations for magnesium, phosphorus, vitamins A, C, and E that would have otherwise been inadequate from food alone. However, even with supplement use, more than one-third of children failed to meet recommendations for calcium and vitamin D (PUBMED:22717218). Similarly, in Puerto Rico, 12-year-old children who used multivitamin and multimineral (MVMM) supplements had higher intakes of several micronutrients compared to non-users, and supplement use helped increase intakes of vitamins E, A, D, and pantothenic acid to recommended levels, although calcium, magnesium, and vitamin K remained below guidelines (PUBMED:27439655). In Hispanic infants, supplement use significantly increased the intake of certain vitamins such as D, E, B1, B2, B3, and B6, and helped decrease the percentage of children below the Dietary Reference Intake (DRI) for vitamins D and E (PUBMED:28604131). However, there are concerns about the potential for excessive intakes of some nutrients due to supplement use. For example, in Danish children, dietary supplement use resulted in a considerable proportion of users exceeding the tolerable Upper Intake Levels (ULs) for vitamin A, zinc, iodine, and iron (PUBMED:37127694). In summary, while dietary supplements can improve micronutrient intake and help some children and adolescents meet dietary recommendations, they can also contribute to excessive intakes of certain nutrients. It is important to balance the use of supplements to address deficiencies without exceeding safe levels of intake.
Instruction: Choosing Wisely: Opportunities for Improving Value in Cancer Care Delivery? Abstracts: abstract_id: PUBMED:35279182 De-implementing low-value care in cancer care delivery: a systematic review. Background: Accumulating evidence suggests that interventions to de-implement low-value services are urgently needed. While medical societies and educational campaigns such as Choosing Wisely have developed several guidelines and recommendations pertaining to low-value care, little is known about interventions that exist to de-implement low-value care in oncology settings. We conducted this review to summarize the literature on interventions to de-implement low-value care in oncology settings. Methods: We systematically reviewed the published literature in PubMed, Embase, CINAHL Plus, and Scopus from 1 January 1990 to 4 March 2021. We screened the retrieved abstracts for eligibility against inclusion criteria and conducted a full-text review of all eligible studies on de-implementation interventions in cancer care delivery. We used the framework analysis approach to summarize included studies' key characteristics including design, type of cancer, outcome(s), objective(s), de-implementation interventions description, and determinants of the de-implementation interventions. To extract the data, pairs of authors placed text from included articles into the appropriate cells within our framework. We analyzed extracted data from each cell to describe the studies and findings of de-implementation interventions aiming to reduce low-value cancer care. Results: Out of 2794 studies, 12 met our inclusion criteria. The studies covered several cancer types, including prostate cancer (n = 5), gastrointestinal cancer (n = 3), lung cancer (n = 2), breast cancer (n = 2), and hematologic cancers (n = 1). Most of the interventions (n = 10) were multifaceted. Auditing and providing feedback, having a clinical champion, educating clinicians through developing and disseminating new guidelines, and developing a decision support tool are the common components of the de-implementation interventions. Six of the de-implementation interventions were effective in reducing low-value care, five studies reported mixed results, and one study showed no difference across intervention arms. Eleven studies aimed to de-implement low-value care by changing providers' behavior, and 1 de-implementation intervention focused on changing the patients' behavior. Three studies had little risk of bias, five had moderate, and four had a high risk of bias. Conclusions: This review demonstrated a paucity of evidence in many areas of the de-implementation of low-value care including lack of studies in active de-implementation (i.e., healthcare organizations initiating de-implementation interventions purposefully aimed at reducing low-value care). abstract_id: PUBMED:27845867 Choosing Wisely: Opportunities for Improving Value in Cancer Care Delivery? Introduction: Patients, providers, and payers are striving to identify where value in cancer care can be increased. As part of the Choosing Wisely (CW) campaign, ASCO and the American Society for Therapeutic Radiology and Oncology have recommended against specific, yet commonly performed, treatments and procedures. Methods: We conducted a retrospective analysis of Medicare claims data to examine concordance with CW recommendations across 12 cancer centers in the southeastern United States. Variability for each measure was evaluated on the basis of patient characteristics and site of care. Hierarchical linear modeling was used to examine differences in average costs per patient by concordance status. Potential cost savings were estimated on the basis of a potential 95% adherence rate and average cost difference. Results: The analysis included 37,686 patients with cancer with Fee-for-Service Medicare insurance. Concordance varied by CW recommendation from 39% to 94%. Patient characteristics were similar for patients receiving concordant and nonconcordant care. Significant variability was noted across centers for all recommendations, with as much as an 89% difference. Nonconcordance was associated with higher costs for every measure. If concordance were to increase to 95% for all measures, we would estimate a $19 million difference in total cost of care per quarter. Conclusion: These results demonstrate ample room for reduction of low-value care and corresponding costs associated with the CW recommendations. Because variability in concordance was driven primarily by site of care, rather than by patient factors, continued education about these low-value services is needed to improve the value of cancer care. abstract_id: PUBMED:36158983 Choosing Wisely Philippines: ten low-value or harmful practices that should be avoided in cancer care. The Choosing Wisely Philippines campaign is an initiative that identifies low-value or potentially harmful practices that are relevant to patients with cancer in the Philippines. The main purpose of these initiatives is to facilitate quality improvement systems and maximise patient outcomes. Of the ten practices identified, four are new recommendations, and six are modified adaptations from previous Choosing Wisely initiatives in the USA and Africa. Recommendations in the final list include interventions involving diagnosis (two practices), treatment (five practices), palliative and supportive care (two practices) and surveillance (1 practice). abstract_id: PUBMED:35877263 Choosing Wisely-Barriers and Solutions to Implementation in Low and Middle-Income Countries. Globally, there is increasing emphasis on value-based cancer care. Rising healthcare costs and reduced health care spending and budgets, especially in low- and middle-income countries (LMICs), call for patients, providers, and healthcare systems to apply the Choose Wisely (CW) approach. This approach seeks to advance a dialogue on avoiding unnecessary medical tests, treatments, and procedures. Several factors have been described as barriers and facilitators to the implementation of the Choosing Wisely recommendations in high-income countries but none for LMICs. In this review, we attempt to classify potential barriers to the Choose Wisely implementation relative to the sources of behavior and potential intervention functions that can be implemented in order to reduce these barriers. abstract_id: PUBMED:37377685 Highlights from the Second Choosing Wisely Africa conference: a roadmap to value-based cancer care in East Africa (9-10 February 2023, Dar es Salaam, Tanzania). The ecancer Choosing Wisely conference was held for the second time in Africa in Dar es Salaam, Tanzania, from the 9th to 10th of February 2023. ecancer in collaboration with the Tanzania Oncology Society organised this conference which was attended by more than 150 local and international delegates. During the 2 days of the conference, more than ten speakers from different specialties in the field of oncology gave insights into Choosing Wisely in oncology. Topics from all fields linked to cancer care such as radiation oncology, medical oncology, prevention, oncological surgery, palliative care, patient advocacy, pathology, radiology, clinical trials, research and training were presented to share and bring awareness to professionals in oncology, on how to choose wisely in their approach to their daily practice, based on the available resources, while trying to offer the maximum benefit to the patient. This report, therefore, shares the highlights of this conference. abstract_id: PUBMED:36819800 Highlights from Choosing Wisely 2022 for Resource Limited Settings: Reducing Low Value Cancer Care for Sustainability conference, 17th-18th September, Mumbai, India. The 'Choosing Wisely 2022' conference, organised by the ecancer foundation, was held at the Tata Memorial Hospital, Mumbai, India, on 17 and 18 September. It was a successful event with 159 delegates attending it in person and around 328 delegates attending online. Thirty oncology experts from across the world shared their thoughts during this meeting. The theme of the conference was to focus on cancer care, in low- and middle-income countries (LMICs). The emphasis of discussion was on ways to select more cost-effective and high value treatments and interventions and minimise financial toxicity. In addition, cancer research from LMICs needs to be improved substantially. Collaboration and networking amongst cancer institutions in LMICs is essential. abstract_id: PUBMED:37543851 Use of high- and low-value care among US adults by education levels. Background: Healthcare reform in the United States has focused on improving the value of health care, but there are some concerns about the inequitable delivery of value-based care. Objective: We examine whether the receipt of high- and low-value care differs by education levels. Methods: We employed a repeated cross-sectional study design using data from the 2010-2019 Medical Expenditure Panel Survey. Our outcomes included 8 high-value services across 3 categories and 9 low-value services across 3 categories. Our primary independent variable was education level: (i) no degree, (ii) high school diploma, and (iii) college graduate. We conducted a linear probability model while adjusting for individual-level characteristics and estimated the adjusted values of the outcomes for each education group. Results: In almost all services, the use of high-value care was greater among more educated adults than less educated adults. Compared to those with no degree, those with a college degree were significantly more likely to receive all high-value services except for HbA1c measurement, ranging from blood pressure measurement (4.5 percentage points [95% CI: 3.9-5.1]) to colorectal cancer screening (15.6 percentage points [95% CI: 13.9-17.3]). However, there were no consistent patterns of the use of low-value care by education levels. Conclusion: Our findings suggest that more educated adults were more likely to receive high-value cancer screening, high-value diagnostic and preventive testing, and high-value diabetes care than less educated adults. These findings highlight the importance of implementing tailored policies to address education-based inequities in the delivery of high-value services in the United States. abstract_id: PUBMED:30141837 Alternative payment and care-delivery models in oncology: A systematic review. Rising US health care costs have led to the creation of alternative payment and care-delivery models designed to maximize outcomes and/or minimize costs through changes in reimbursement and care delivery. The impact of these interventions in cancer care is unclear. This review was undertaken to describe the landscape of new alternative payment and care-delivery models in cancer care. In this systematic review, 22 alternative payment and/or care-delivery models in cancer care were identified. These included 6 bundled payments, 4 accountable care organizations, 9 patient-centered medical homes, and 3 other interventions. Only 12 interventions reported outcomes; the majority (n = 7; 58%) improved value, 4 had no impact, and 1 reduced value, but only initially. Heterogeneity of outcomes precluded a meta-analysis. Despite the growth in alternative payment and delivery models in cancer, there is limited evidence to evaluate their efficacy. Cancer 2018. © 2018 American Cancer Society. abstract_id: PUBMED:36005194 Value-Based Care in Systemic Therapy: The Way Forward. The rising cost of cancer care has shed light on an important aspect of healthcare delivery. Financial toxicity of therapy must be considered in clinical practice and policy-making. One way to mitigate the impact of financial toxicity of cancer care is by focusing on an approach of healthcare delivery that aims to deliver value to the patient. Should value of therapy be one of the most important determinants of cancer care? If so, how do we measure it? How can we implement it in routine clinical practice? In this viewpoint, we discuss value-based care in systemic therapy in oncology. Strategies to improve the quality of care by incorporating value-based approaches are discussed: use of composite tools to assess the value of drugs, alternative dosing strategies, and the use of Health Technology Assessment in regulatory procedures. We propose that there must be a greater emphasis on value of therapy in determining its use and its cost. abstract_id: PUBMED:30857957 Choosing Wisely India: ten low-value or harmful practices that should be avoided in cancer care. The Choosing Wisely India campaign was an initiative that was established to identify low-value or potentially harmful practices that are relevant to the Indian cancer health-care system. We undertook a multidisciplinary framework-driven consensus process to identify a list of low-value or harmful cancer practices that are frequently undertaken in India. A task force convened by the National Cancer Grid of India included Indian representatives from surgical, medical, and radiation oncology. Each specialty had representation from the private and public sectors. The task force included two representatives from national patient and patient advocacy groups. Of the ten practices that were identified, four are completely new recommendations, and six are revisions or adaptations from previous Choosing Wisely USA and Canada lists. Recommendations in the final list pertain to diagnosis and treatment (five practices), palliative care (two practices), imaging (two practices), and system-level delivery of care (two practices). Implementation of this list and reporting of concordance with its recommendations will facilitate the delivery of high-quality, value-based cancer care in India. Answer: The Choosing Wisely campaign, initiated by various medical societies including ASCO and the American Society for Therapeutic Radiology and Oncology, aims to identify and reduce the use of low-value and potentially harmful practices in cancer care, thereby improving the value of care delivered to patients (PUBMED:27845867). The campaign has been adopted in different regions, including the Philippines and India, where specific practices have been identified that should be avoided to enhance the quality of care and maximize patient outcomes (PUBMED:36158983; PUBMED:30857957). A retrospective analysis of Medicare claims data across 12 cancer centers in the southeastern United States revealed significant variability in concordance with Choosing Wisely recommendations, with nonconcordance associated with higher costs. The study estimated that increasing concordance to 95% for all measures could result in substantial cost savings, highlighting the potential for reducing low-value care and associated costs (PUBMED:27845867). In low- and middle-income countries (LMICs), the implementation of Choosing Wisely recommendations faces unique barriers, but addressing these can help advance value-based cancer care in these settings. Conferences such as the Choosing Wisely Africa conference and the Choosing Wisely 2022 conference in Mumbai have been organized to share insights and strategies for implementing value-based care in resource-limited settings (PUBMED:37377685; PUBMED:36819800). The systematic review of de-implementing low-value care in cancer care delivery found a paucity of evidence in many areas, indicating the need for more studies on active de-implementation interventions aimed at reducing low-value care in oncology settings (PUBMED:35279182). Additionally, a study on the use of high- and low-value care among US adults by education levels found that more educated adults were more likely to receive high-value cancer screening and care, suggesting the need for policies to address education-based inequities in the delivery of high-value services (PUBMED:37543851). Overall, the Choosing Wisely campaign presents opportunities for improving value in cancer care delivery by identifying and reducing the use of low-value practices, which can lead to cost savings and better patient outcomes. However, successful implementation requires continued education, tailored policies, and a focus on value-based care approaches, particularly in LMICs and among populations with varying education levels (PUBMED:27845867; PUBMED:36158983; PUBMED:35877263; PUBMED:37377685; PUBMED:36819800; PUBMED:37543851; PUBMED:30141837; PUBMED:36005194; PUBMED:30857957).
Instruction: Bridging the gap: does a specialist eating disorder service, aimed at developing a continuum of community care, make a difference? Abstracts: abstract_id: PUBMED:23671227 Bridging the gap: does a specialist eating disorder service, aimed at developing a continuum of community care, make a difference? Objective: In 2010, the authors identified in a separate publication, Mind the evidence gap, the sparse evidence-base for the treatment of adult anorexia nervosa and barriers to accessing care. We report on the ensuing development, implementation and first 18-month results of a novel eating disorder service bridging the primary and specialist continuum of care in Victoria, Australia. Method: Using literature review, stakeholder, and consumer and carer consultation, a model for a community eating disorder service was developed and then implemented. All patients entering the service were then assessed at intake and, if they gave consent, at 12 month follow-up. Results: From December 2010 to July 2012, 208 patients accessed The Body Image Eating Disorders Treatment and Recovery Service (BETRS). Fifty-three per cent had a diagnosis of anorexia nervosa and the mean number of co-morbid psychiatric diagnoses was two. Twenty-three per cent attended a day patient programme and showed a significant improvement in their body mass index. Measures of depression, anxiety and eating disorder symptomatology showed a concomitant, significant decrease. Conclusions: The development of BETRS has led to markedly improved access and effectiveness of specialist services in the region. abstract_id: PUBMED:36217209 Implementing service transformation for children and adolescents with eating disorders across England: the theory, politics, and pragmatics of large-scale service reform. Background: Eating disorders are among the most serious mental health problems affecting children and young people and without appropriate treatment often have a protracted course with high levels of morbidity and mortality. While considerable progress has been made in recent years in developing effective evidence-based outpatient treatments, these are not always readily available. In England, until recently, the usual care pathway for young people with an eating disorder was referral from primary care to local generic Child and Adolescent Mental Health Services with varying levels of expertise in eating disorders and a mix of outpatient treatments available. Poor treatment progress or physical deterioration would usually result in inpatient admission. Admission rates were high, with children and young people with an eating disorder accounting for nearly a quarter of all child and adolescent psychiatric hospital admissions. Inpatient treatment is costly and has high relapse rates with some evidence that it may contribute to poorer long-term outcomes in eating disorders. Accumulating clinical and research evidence that early expert outpatient treatment can significantly reduce the need for inpatient care indicates,+ that investing in dedicated community-based eating disorders services is likely to be both clinically and economically beneficial. Overview Of Paper: This paper describes a large-scale transformation programme following a major government investment (initially £30 million/year, since then increased to over £50 million/year) aimed at service level change in the provision of eating disorder services for children and adolescents in England. We describe the history, background, political context, and clinical and research evidence that contributed to the government's decision to invest in eating disorders. We also provide a brief account of the implementation of an England-wide whole team training to support the creation of a network of over 70 dedicated community-based eating disorders services for children and young people. abstract_id: PUBMED:30450220 Reconfiguring in-patient services for adults with mental health problems: changing the balance of care. Background: Research suggests that a significant minority of hospital in-patients could be more appropriately supported in the community if enhanced services were available. However, little is known about these individuals or the services they require. Aims: To identify which individuals require what services, at what cost. Method: A 'balance of care' (BoC) study was undertaken in northern England. Drawing on routine electronic data about 315 admissions categorised into patient groups, frontline practitioners identified patients whose needs could be met in alternative settings and specified the services they required, using a modified nominal group approach. Costing employed a public-sector approach. Results: Community care was deemed appropriate for approximately a quarter of admissions including people with mild-moderate depression, an eating disorder or personality disorder, and some people with schizophrenia. Proposed community alternatives drew heavily on carer support services, community mental health teams and consultants, and there was widespread consensus on the need to increase out-of-hours community services. The costs of the proposed community care were relatively modest compared with hospital admission. On average social care costs increased by approximately £60 per week, but total costs fell by £1626 per week. Conclusions: The findings raise strategic issues for both national policymakers and local service planners. Patients who could be managed at home can be characterised by diagnosis. Although potential financial savings were identified, the reported cost differences do not directly equate to cost savings. It is not clear whether in-patient beds could be reduced. However, existing beds could be more efficiently used. Declaration Of Interest: None. abstract_id: PUBMED:28884837 Cognitive-behavioral therapy for eating disorders in primary care settings: Does it work, and does a greater dose make it more effective? Objective: This study aimed to determine whether cognitive-behavioral therapy (CBT) for eating disorders can be effective in a routine, primary care clinical setting, and to assess dose response. Method: The participants were 47 patients who commenced treatment with a publicly-funded primary care eating disorder service. They attended 7-33 sessions of individual CBT (mean = 17), using an evidence-based approach. Routine measures were collected pre- and post-therapy. Results: Three-quarters of the patients completed treatment. Using intention to treat analysis (multiple imputation), the patients showed substantial improvements in eating attitudes, bulimic behaviors, and depression. However, there was no association between the level of improvement and the length of therapy past the 8th to 12th session. Discussion: The level of effectiveness shown here is comparable to that previously demonstrated by more specialist services in secondary and tertiary care. The nonlinear association between number of sessions and recovery highlights the importance of early change, across the eating disorders. abstract_id: PUBMED:35927343 Continuum beliefs of mental illness: a systematic review of measures. Purpose: The continuum of mental health/illness has been subject to scientific debate for decades. While current research indicates that continuum belief interventions can reduce mental health stigma and improve treatment seeking in affected populations, no study has yet systematically examined measures of continuum beliefs. Methods: This preregistered systematic review summarizes measures of continuum beliefs. Following the PRISMA statement, three scientific databases (PubMed, PsycInfo and PsycArticles via EBSCOhost, Web of Science) are searched, instruments are described and discussed regarding their scope, and methodological quality. Results: Overall, 7351 records were identified, with 35 studies reporting relevant findings on 11 measures. Most studies examined general population samples and used vignette-based measures. Schizophrenia and depression were most commonly examined, few studies focused on dementia, ADHD, OCD, eating disorders, and problematic alcohol use, or compared continuum beliefs across disorders. Validity was very good for most measures, but reliability was rarely tested. Measures mostly assessed beliefs in the normality of mental health symptoms or the normality of persons with such symptoms but rarely nosological aspects (i.e., categorical v continuous conceptualization of mental disorders). Conclusions: Current research provides psychometrically sound instruments to examine continuum beliefs for a variety of mental disorders. While studies suggest utility for general population samples and mental health professionals, more research is necessary to corroborate findings, for instance, regarding age (e.g., in adolescents), gender, or type of mental disorder. Future research should also compare self-report ratings, and vignette-based measures, include measures of nosological concepts to fully grasp the continuum concept of mental illness. Preregistration: PROSPERO: CRD42019123606. abstract_id: PUBMED:29948625 A Tertiary-Care/Primary-Care Partnership Aimed at Improving Care for People with Eating Disorders. We describe the implementation and impact of a province-wide program of knowledge exchange (KE), aimed at developing capacity for the treatment of people with eating disorders (EDs). The program is designed to equip clinicians working in nonspecialized health-care installations with skills to evaluate and treat people with EDs. Trainings were conducted at 21 institutions. The majority of clinicians reported satisfaction with the KE program and indicated that the trainings enhanced their confidence and ability to treat patients with EDs. A subset of clinicians received case supervision with a specialist ED therapist and followed patients with EDs (n = 119). Treated patients showed significant improvements on eating and depressive symptoms, and reported satisfaction with the treatments they received. abstract_id: PUBMED:24079091 Managing anorexia nervosa in the community. This article describes the development and evaluation of a new specialist intensive community-based service for adults (those aged 18 years and above) with severe and enduring eating disorders (SEEDs). The service was developed by the Yorkshire Centre for Eating Disorders based in Leeds. We developed and evaluated a nurse-delivered community-based service that aimed to manage the complex needs of people with SEEDs without hospital admission. The service was shown to improve quality of life, reduce the number of hospital admissions and improve communication with other health professionals involved in service users' care. abstract_id: PUBMED:30638349 Analysis of 15 years of emergency service interaction with schools. En: For 15 years without interruption, the emergency department of the Hospital Clínic de Barcelona has been holding annual educational sessions for students in their last year of obligatory secondary school. The sessions are organized in collaboration with 3 other entities: the primary care center affiliated with the hospital and serving the area and adjacent to it (THE CAPSBE), the city's emergency ambulance service, and the University of Barcelona. Each year the session lasts 4 hours and covers such content as the organization of the city's emergency medical care (at primary care centers, in prehospital settings, and at the hospital) and the main reasons adolescents require emergency care (related to drugs, traffic accidents, psychiatric disorders, eating disorders, and gynecologic problems). The project seeks to develop the students' sense of social responsibility and to transfer knowledge to the community where the organizers carry out their usual work. The sessions are educational, providing key information intended to encourage individual reflection as well as debate in the student community outside of school hours. A total of 56 secondary schools and 12 260 students between the ages of 15 and 18 years old have participated thus far. This article presents a basic outline of the sessions, reports feedback from the students and their teachers, and analyzes whether there have been significant changes in their opinions over time. Our purpose is to provide information useful to other emergency health services that might consider starting similar programs in which they can interact with their community schools. abstract_id: PUBMED:12467185 Community-based adolescent health services in Israel: from theory to practice. Despite their engagement in health-risk behaviors and their health-related concerns, adolescents have the lowest rate of health service utilization of any age group. Time constraints during routine medical encounters generally leave little opportunity for professional screening for health-risk behaviors or for discussing psychosocial problems. In addition, providers express low levels of perceived competency in areas such as sexuality, eating disorders or drug abuse. To address these needs, a walk-in Adolescent Health Service was established by the Sheba Medical Center to provide diagnosis and short-term treatment for individual adolescents, as well as counseling and support for local care providers. A three-way model of cooperation and partnership was developed and implemented. A professional and financial partnership with local authorities were established to help define the particular needs of the community's youth and to improve the ability to reach youth with special health needs. The partnership along with the main medical provider (Kupat Holim Clalit) helped define local health needs, served as a referral source of patients with unmet health needs, and improved the continuity of care. The regional medical center (Sheba Medical Center) provided supervision and consultation for the medical staff of the service, as well as a referral center for patients. It was emphasized that the service staff was intended as a professional source for the primary physician and should not be considered a rival. The core staff included a specialist in adolescent medicine, gynecologist, mental health specialist and social worker. A structured intake procedure was developed for assessing health concerns and problems of adolescents in the context of a community clinic. Findings from the first years of services showed that the first 547 female adolescents demonstrated that a majority of adolescents presented with primary complaints of a somatic nature, while one third were diagnosed with psychosocial problems and one-fifth with a sexuality-related problem. A considerable percentage of those diagnosed with psychosocial or sexuality-related problems had not stated these issues as their "reason for encounter". This additional increment probably represents the contribution of the Health Concern Checklist (HCC), in which the adolescent was asked to mark each item for which she had concerns or would like to receive further information. The HCC can help primary care physicians as well as adolescent medical specialists approach the teenage patient and initiate productive communication. A practical approach to confidential health care for adolescents: The issue of confidentiality has not been sufficiently clarified by Israeli law or by the medical community. The need for confidentiality was strongly felt in the adolescent health service. A policy which provides all adolescents with the opportunity to meet with a physician and receive health guidance or advice at least once, even without parental knowledge or consent, was formulated and implemented. If parental consent was not feasible, the minor was allowed to give informed consent for medical and psychosocial care for himself/herself, with certain limitations. abstract_id: PUBMED:36038898 Creating a care pathway for patients with longstanding, complex eating disorders. Background: Recovery rates for people with eating disorders are low; fewer than half recover and approximately 20% develop a longstanding eating disorder. Patients with longstanding eating disorders are often referred to as "SEED" (severe and enduing eating disorders) although this remains controversial and is not acknowledged in the British treatment guidance. This project aimed to generate recommendations for a longstanding eating disorder care pathway by identifying what proportion of patients have longstanding eating disorders and how to best identify and support them. Methods: Initially, a literature review was completed, followed by interviews with service-users who consider themselves to have longstanding eating disorders, and focus groups with staff members. The results were combined to create a definition of a longstanding eating disorder which was used to establish how many service-users could benefit from the pathway. The qualitative data was used to produce recommendations for a tailored pathway for those with longstanding eating disorders. Results: The results highlighted that, although "SEED" is often used, participants preferred to be referred to as "longstanding" or having no label. Qualitative analysis identified four themes in relation to supporting this population group which described how to structure the service and individualise care, as well as patients' relationship to the service, and how to build a life after eating disorder services. Conclusions: Recommendations included promoting a hopeful message, focusing on quality of life and introducing peer support. Crucially, accessing the pathway should not result in being labelled "SEED", nor should it prevent access to recovery focused interventions including weight restoration. The full list of recommendations are included as well as the implications of the project and limitations. Answer: Yes, a specialist eating disorder service aimed at developing a continuum of community care does make a difference. The Body Image Eating Disorders Treatment and Recovery Service (BETRS) in Victoria, Australia, is an example of such a service that has led to markedly improved access and effectiveness of specialist services in the region. From December 2010 to July 2012, 208 patients accessed BETRS, with 53% having a diagnosis of anorexia nervosa. Those who attended a day patient programme showed significant improvement in their body mass index, as well as a significant decrease in measures of depression, anxiety, and eating disorder symptomatology (PUBMED:23671227). Similarly, in England, a large-scale transformation programme following a major government investment aimed at service level change in the provision of eating disorder services for children and adolescents resulted in the creation of over 70 dedicated community-based eating disorders services. This investment was based on evidence that early expert outpatient treatment can significantly reduce the need for inpatient care, which is both clinically and economically beneficial (PUBMED:36217209). Moreover, a 'balance of care' study in northern England found that community care was appropriate for approximately a quarter of admissions, including people with mild-moderate depression, an eating disorder, or personality disorder. The proposed community alternatives were more cost-effective compared with hospital admission, suggesting that patients who could be managed at home can be characterised by diagnosis and that existing in-patient beds could be more efficiently used (PUBMED:30450220). In primary care settings, cognitive-behavioral therapy (CBT) for eating disorders has been shown to be effective, with substantial improvements in eating attitudes, bulimic behaviors, and depression. The level of effectiveness was comparable to that previously demonstrated by more specialist services in secondary and tertiary care (PUBMED:28884837). Additionally, a province-wide program of knowledge exchange in a tertiary-care/primary-care partnership aimed at improving care for people with eating disorders in nonspecialized health-care installations reported that clinicians felt more confident and able to treat patients with eating disorders after the trainings. Patients treated showed significant improvements on eating and depressive symptoms and reported satisfaction with the treatments received (PUBMED:29948625).
Instruction: Do defense mechanisms vary according to the psychiatric disorder? Abstracts: abstract_id: PUBMED:17063216 Do defense mechanisms vary according to the psychiatric disorder? Objective: The aim of this study was to evaluate the defense mechanisms used by depressive and anxious patients without comorbidities compared to those used by controls and to determine whether these patterns differ between diagnoses. Method: The sample was composed of 167 psychiatric patients and 36 controls that were evaluated using the Defense Style Questionnaire 40. All subjects were evaluated through a clinical interview, and each evaluation was confirmed through the application of the Mini International Neuropsychiatric Interview, a structured psychiatric interview. We used ANOVA and discriminant analysis to assess differences between groups. Results: Neurotic defense mechanisms discriminated controls from all patients except those with social anxiety. Immature defense mechanisms differentiated controls from all patients, as well as distinguished depressive patients from panic disorder and obsessive disorder patients. The discriminant analysis indicated that depressive patients are characterized by projection, panic disorder patients by sublimation and obsessive-compulsive patients by acting out. Conclusions: Depressive and anxious patients differ from other individuals in their use of defense mechanisms, and each diagnosis has a particular pattern. These findings could lead to the development of specific psychotherapeutic interventions. abstract_id: PUBMED:26491330 Psychogenic non-epileptic seizures: so-called psychiatric comorbidity and underlying defense mechanisms. In Diagnostic and Statistical Manual of Mental Disorders, fifth edition, psychogenic non-epileptic seizures (PNES) do not have a unique classification as they can be found within different categories: conversion, dissociative, and somatization disorders. The ICD-10, instead, considers PNES within dissociative disorders, merging the dissociative disorders and conversion disorders, although the underlying defense mechanisms are different. The literature data show that PNES are associated with cluster B (mainly borderline) personality disorders and/or to people with depressive or anxiety disorders. Defense mechanisms in patients with PNES with a prevalence of anxious/depressive symptoms are of "neurotic" type; their goal is to lead to a "split", either vertical (dissociation) or horizontal (repression). The majority of patients with this type of PNES have alexithymia traits, meaning that they had difficulties in feeling or perceiving emotions. In subjects where PNES are associated with a borderline personality, in which the symbolic function is lost, the defense mechanisms are of a more archaic nature (denial). PNES with different underlying defense mechanisms have different prognoses (despite similar severity of PNES) and need usually a different treatment (pharmacological or psychological). Thus, it appears superfluous to talk about psychiatric comorbidity, since PNES are a different symptomatic expression of specific psychiatric disorders. abstract_id: PUBMED:28266192 Gender differences in defense mechanisms, ways of coping with stress and sense of identity in adolescent suicide attempts. The main aim of this study was to investigate the gender differences in defense mechanisms, ways of coping with stress and identity formation in relation to adolescent suicidal behavior. This study involved 64 adolescents between 12-17 years of age, who were admitted to the emergency service with a suicide attempt. They were evaluated with a semi-structured clinical interview (K-SADS), Ways of Coping Inventory (WCI), Defense Mechanisms Inventory (DMI) and Sense of Identity Assessment Form (SIAF). 60.9% (n = 39) of the adolescents were female, and 39.1% (n=25) were male. There were no statistically significant differences between the girls and the boys with respect to the clinical characteristics of the suicide attempt and the rate of psychiatric disorders. Of the 64 adolescents with suicide attempt, 47 (73.4%) had at least one, and 26 (40.6%) had more than one psychiatric disorder according to K-SADS. Disruptive behavior disorders were more frequent in males, whereas depression was more frequent in girls. The data indicated the importance of identity confusion, major depression and ADHD in adolescents with suicide attempt in both genders. 43.6% (n=17) of the girls and 36% (n=9) of the boys obtained scores higher than the cut-off point of SIAF indicating identity confusion. Professional help seeking and NSSI behaviors before the suicide attempt were more common in adolescents with identity confusion. While there were differences between genders with respect to the defense mechanisms used, no significant difference was found in terms of ways of coping. Evaluation of DMI scores revealed that the turning against object subscale score was significantly higher in boys compared to girls. While evaluating the adolescents at risk, their defense mechanisms, way of coping and sense of identity, as well as their psychiatric diagnosis should be assessed in detail in order to identify the suicidal thoughts and prevent possible suicide attempts. abstract_id: PUBMED:8543536 Defense mechanisms in severe adolescent anorexia nervosa. Objective: To compare ego defense mechanisms in adolescents with anorexia nervosa and other major psychiatric disorders, to defenses in healthy adolescents. Method: Thirty-seven patients with anorexia nervosa, 30 with major depressive disorder, 20 with obsessive-compulsive disorder, 53 with borderline personality disorder, 60 with schizophrenia, and 81 healthy controls were assessed with Pfeffer's Ego Defense Scale. Results: Regression, denial, projection, repression, introjection, and total defenses were common to all psychiatric patients and distinguished them from normal adolescents. In addition to these defenses, anorectic patients also used intellectualization more frequently than normal adolescents and psychiatric patients. They used sublimation more than other psychiatric patients. Patients with disorders, apart from obsessive-compulsive disorder, that are considered to be often comorbid with anorexia did not have different defense than schizophrenic patients. Conclusions: Anorectic adolescents overutilize relatively more mature defenses than do psychiatrically ill adolescents, and they overutilize immature defenses compared with normal adolescents. This combination of mature and immature defenses may be related to the uniquely heterogeneous ego functioning seen in anorectic patients, and it may provide insight into the nature of the psychopathology of anorexia nervosa. It also could have important psychotherapeutic and prognostic value. abstract_id: PUBMED:27086223 Defense styles in Intermittent Explosive Disorder. The overreliance on immature and/or neurotic defense mechanisms, as opposed to more mature defensive functioning has been linked to several psychiatric disorders. However, to date, the role of defense styles among individuals with Intermittent Explosive Disorder (IED) has not been examined. Given that individuals with IED display difficulties controlling their anger and aggression, one might expect these individuals to exhibit more immature and less mature defense styles. The current study compared participants with IED to a personality disorder (PD) comparison group, as well as to healthy volunteers (HV) on the Defense Style Questionnaire, a self-report measure that assesses the extent to which individuals endorse using mature, immature, and neurotic defense styles. Subjects with IED had significantly higher scores than both comparison groups on immature defense styles and exhibited lower scores on mature defense mechanisms. Hierarchical regression of significant defense style subscales showed that higher levels of acting out and lower levels of sublimation uniquely discriminated participants with IED from the PD and HV comparison groups. abstract_id: PUBMED:16389709 Study on psychiatric disorders and defensive process assessed by the "defense style questionnaire" in sterile males SAMPLE consulting in andrology Background: The literature about artificial insemination and the associated psychological, psychiatric and sexual disorders is relatively rich. But the majority of these studies is made in gynaecology, with a feminine approach of the disorder. There are very few works led in andrology. This justified the investigation of new trails in order to understand better the clinical context of the sterile man. We undertake a study about the psychiatric disorders among sterile men and about the defense styles. These are a clinical entity recently introduced in the quantitative psychopathology research. The defense style questionnaire (DSQ) is a psychometric scale used in common practice in order to measure the defense styles. Objectives: We made this study in order to examine the psychiatric state of a sterile males sample consulting in andrology; to assess the defense style by means of the Bond and al DSQ-88 ; to look into a difference between the defensive process according to their clinical situation of azoospermic males or as the oligoazoospermic males and finally, to reveal a correlation between the psychiatric disorders developed in this sample of sterile males and the defensive process they used. Method: There were 42 people (22 azoospermic males and 20 oligoazoospermic males) aged between 23 and 49 years old in the analysed sample. These have been selected at the surgery of andrology at the RUHC of Lille, depending on their arrival order for 6 months. There was no significant difference between the two groups as far as the age and the education standard are concerned. The selection criteria were medical and somatic. Our sample population were divided into two groups: azoospermia (no spermatozoon found in the semen analysis) and oligoasthenospermia (decrease of the number and the mobility of the spermatozoa and an increase of the percentage of atypical forms). The method first consisted in the DSQ, followed by the analysis of the psychiatric state according to the DSM IV, a hetero questionnaire to collect some general information about infertility and a self questionnaire about the sexual, conjugal and social effects of infertility. The DSQ and the interviews took place in the andrology department with the same investigator trained for this job. Results: We found in our sample 26.2% of psychiatric disorders according to the DSM IV with a significant over-representation of generalized anxious disorder and somatization disorder. The comparison between azoospermic males and oligoazoospermic males patients showed the absence of significative difference as far as psychiatric morbidity rate and the use of defense styles are concerned. Discussion: Our sample defended himself in accordance with modalities similar to the general population and used defense mechanisms preferentially belonging to the mature defense style, such as humor, repression and anticipation. The psychiatric pathology was significantly correlated to the preferential use of withdrawal, consumption, reaction formation and lack of humor use. We also confirm in our study the fact that the subjects using especially neurotic defense styles are more likely to develop a psychiatric disorder than the others. Our male sample is a waiting population and threatened by failure. The situation of wait creates anxiety. We also know that infertility is one of the most stressful situations a couple might face. However, our study did not enable us to know the precise relations between generalized anxious disorder and infertility, especially whether the generalized anxious disorder preceded this pathology or not. The over-representation of a somatization disorder only allows us to acknowledge its existence. We can also deduce from that a possible link between infertility and psychic disorder, even if no research permitted to affirm to date the existence of interrelations linking infertility and psychic life. On the whole, this population was suffering despite 73.8% of the patients had no confirmed psychiatric disorder. It is the reason why a liaison psychiatry more inserted into highly specialized teams is interesting, especially because it includes a medical and psychological approach of such disorders. The defense mechanisms preferentially used by this population were humor, repression and anticipation. Humor can only be considered as a defense mechanism when it is applied to oneself. The population who has no psychiatric disorder more uses humor. Does humor protect against the development of a psychiatric pathology, as certain authors proved it ? On the other hand, is repression really protective? It didn't interfere in our study about the development of a psychiatric pathology. So we can suppose that repression was protective for our whole sample, but we can not prove it. However, we wonder if this mechanism works after the failure of an artificial insemination is announced. In which measure such a stress can be repressed out of the conscience field? As for anticipation, it is used by our population who is for the most part in good health. But the question is to know if our sample really envisaged all the different possible solutions or only the success of artificial insemination. As some other works, we confirm that the, psychiatric, people significantly use the neurotic style. Our psychiatric patients used less humor and more consumption, withdrawal and reaction formation than the sane people. Consumption is rarely considered as a defense mechanism by some other authors. And yet, consumption and the existence of psychiatric disorders were very closely linked. This association is found again with anxiety in other studies. The correlation between psychiatric disorder and withdrawal was veryimportant too. The DSM lV defines withdrawal as an apathetic withdrawal. It is not an apathetic withdrawal in our population because the average scores for the ,, activity &gt;, defense mechanism remained high. In our sample, the use of this defense mechanism would encourage the expression of psychiatric troubles. The reaction formation quoted by Freud and Bergeret are both valorised in our society. What kinds of reaction formations use these men ? Are they pathological ? Our study can not answer to these questions. However, the DSQ items examining the reaction formation present its "socially promoted" aspect and forget the pathological one. It has been showed that the evaluation of the defense modalities in a certain type of population can allow the emergence of specific defense mechanisms. This can be considered as predictive factors of development of a mental pathology. The evaluation of specific mental defenses could permit to define vulnerability and affinity for given affections instead of simple personality traits or profiles. Most part of the works shows results in favour of the capacity of DSQ to assess the different defense mechanisms according to the diagnosis groups. But the insufficient numbers of studies moderate on the whole the hypothesis of the existence of specific defense mechanisms--protective factors and factors of vulnerability--linked to a given psychiatric disorder. Conclusion: There is not a difference of psychological effect in terms of degree of sterility. On the other hand, the existence of over-represented psychiatric disorders with sterile males compared with a control group force Consultation-Liaison psychiatrists and andrologists would be able to understand the pain beyond the need of acting by the artificial insemination. In our opinion, this justifies the fact that the patients should have the opportunity of expressing, in the department where they are treated, all the feelings inherent to their personal and conjugal drama as part of a specialized treatment. Our study confirms the difficulty to know whether some defense mechanisms are vulnerability factors for a certain psychiatric disorder or whether the defense mechanisms are an epiphenomenon of a particular psychiatric disorder. This is the reason why a lot of authors having worked with DSQ agree to conclude that additional prospective studies, which would permit to make a link between the defense mechanisms anda certain psychiatric pathology, are necessary. In the case we study, it is important to explore the defensive modalities before the infertility diagnosis and after the birth of a child, with a more important sample population. A better knowledge of the defensive modalities of such a population, used in a psychotherapeutic context could help to prevent the appearance of psychiatric disorders or, if not, to anticipate them. abstract_id: PUBMED:9048700 Defense style in depressed and anxious psychiatric outpatients: an explorative study. The aim of the present study was threefold: (a) to investigate whether anxiety or depressive disorders are related to the use of specific defenses; (b) to evaluate which defenses are associated with self-report measures for anxiety and depression; and (c) to assess whether the association between defenses and anxiety or depression is gender specific. Subjects were 483 consecutive psychiatric outpatients with the following DSM-III-R diagnoses: (a) V-code or no psychiatric diagnosis (N = 71) (controls); (b) one or two anxiety diagnoses (N = 116), among which 47 patients with a single diagnosis of panic disorder; (c) one or two depressive disorder diagnoses (N = 93), among which 35 patients with a single diagnosis of dysthymia and 51 with a single diagnosis of major depression; and (d) 203 otherwise. Defense use was measured with the Defense Style Questionnaire-36 and level of anxiety and depression with the Symptom Checklist-90. Compared to controls, anxiety and depressive disorder patients scored higher for the immature defense style. Moreover, anxiety disorder patients obtained significantly higher scores for the neurotic defense style than both depressive disorder patients and controls. Panic patients made more use of the defense mechanisms of somatization, devaluation, and idealization, and dysthymic patients of somatization, devaluation, and isolation. Both anxiety and depression scores were positively related to the immature and neurotic, and negatively to the mature defense style. Anxiety was predominantly related to somatization and depression to projection. No evidence for a gender specific association between defense and anxiety or depression was found. It is concluded that the observed differences in defense between groups may be a consequence of the nature of the psychiatric disorder irrespective of gender. Prospective studies of persons at risk are needed to evaluate whether specific defenses are predictive of anxiety or depressive disorders and/or symptoms. abstract_id: PUBMED:1492547 Respiratory stridor and repressive defense style in adolescent somatoform disorders. Three cases of adolescents with respiratory stridor of psychiatric etiology are described. All three fulfilled DSM-III-R criteria for diagnoses of somatoform disorders and showed characteristics of repressive defense style. Such patients are likely to undergo extensive medical investigation and treatment if the psychiatric nature of their disorder is not recognized, but few data describing their psychological characteristics or treatment exist. Repressors are typically unaware of emotional arousal and do not recognize the negative affects which lead to their somatic symptoms. They therefore respond poorly to confrontational psychotherapy and are at risk of discontinuing treatment and repeating their maladaptive symptom cycle. Identification of repressive defense style in patients with stridor which has no obvious organic cause may be useful both as a possible "marker" of psychiatric disorder and as a guide to treatment. abstract_id: PUBMED:15692786 Defense mechanisms in inflammatory bowel disease. Background: Although psychiatric disturbances and personality features are possibly involved in inflammatory bowel disease, little attention has been given to the potential role of defense mechanisms in the formation, course, or prognosis of the disease. The purpose of the present study was to determine whether certain defense styles and ego mechanisms of defense are associated with inflammatory bowel disease, including both ulcerative colitis and Crohn's disease. Methods: Seventy-six consecutive unselected outpatients participated in the study. The sample consisted of 39 patients with Crohn's disease, 33 with ulcerative colitis, and 4 with an intermediate form of inflammatory bowel disease. The Symptom Distress Checklist (SCL-90-R), Defense Style Questionnaire (DSQ), and Hostility and Direction of Hostility Questionnaire (HDHQ) were used. Results: Crohn's disease patients presented a more immature defensive profile than ulcerative colitis patients, using a "maladaptive action" style, as well as "consumption" and "pseudo-altruism" defense mechanisms more often. These differences were statistically significant and they were independent of age, sex, educational level, disease extension, and pharmacologic treatment. A significant positive correlation was also found between disease activity and defensive profiles in both ulcerative colitis and Crohn's disease patients. In contrast, there was no statistically significant difference between ulcerative colitis and Crohn's disease patients in terms of psychiatric symptoms. Conclusions: Crohn's disease patients presented a different and more immature defensive profile than patients with ulcerative colitis. In addition, the more psychologically mature inflammatory bowel disease patients had lower rates of relapses and surgical operations, providing evidence that these aspects of personality are likely to influence the patients' adaptation to the disease. abstract_id: PUBMED:21971196 The Life Style Index: correlations with psychological distress and hostility The Life Style Index (LSI) was designed to assess defense mechanisms, assuming that their use is related to specific emotional states and diagnostic concepts. Aiming to further investigate the psychometric properties of the Greek version of the LSI, the aim of the present study was to investigate the relationship of specific defense mechanisms with dimensions of psychological distress and hostility features in three different populations. The sample comprised 1261 adults (410 healthy participants, 723 medical patients and 128 psychiatric patients). Along with defense mechanisms (LSI), Psychological Distress (General Health Questionnaire, GHQ-28) and Hostility features (Hostility and Direction of Hostility Questionnaire, HDHQ) were also assessed. The results showed that increased psychological distress is related with increased use of all defenses except Denial, with which psychological distress is negatively associated. Regression is constantly related with psychological distress and differentiates psychiatric patients from the other groups of participants, while Compensation and Reaction Formation are related to depressive symptomatology. In medical patients, Repression was found to increase the physical dimension of psychological distress and the social dysfunction. On the contrary,Denial was negatively associated with these dimensions of psychological distress. In the psychiatric patient and healthy participant samples, Projection plays the most detrimental role. Regarding hostility and direction of hostility, those who were found to introvert their hostility presented with higher scores in Denial, indicating that they possibly 'deny' their hostility, and the degree of the Denial was found to be negatively associated with the degree of Introverted Hostility. Those who directed their hostility towards the others, presented with higher rates of Projection, while neither Denial nor Reaction Formation seemed sufficient enough to temper the degree of Extroverted Hostility. In conclusion,the present results strengthen further the validity of the Greek version of the LSI and provide additional evidence about the relation of defence mechanisms with dimensions of psychological distress and the direction of hostility in different populations, indicating that the empirical assessment of defense mechanisms can contribute significantly in the study of the factors that mediate or moderate the course or the outcome of medical or psychiatric disorders. Answer: Yes, defense mechanisms do vary according to the psychiatric disorder. Studies have found that different psychiatric conditions are characterized by distinct patterns of defense mechanisms. For instance, depressive and anxious patients differ from controls in their use of defense mechanisms, with depressive patients characterized by projection, panic disorder patients by sublimation, and obsessive-compulsive patients by acting out (PUBMED:17063216). In the case of psychogenic non-epileptic seizures (PNES), patients with a prevalence of anxious/depressive symptoms use "neurotic" type defense mechanisms, leading to a "split," either vertical (dissociation) or horizontal (repression), while those associated with borderline personality disorder, where the symbolic function is lost, use more archaic defense mechanisms such as denial (PUBMED:26491330). Gender differences also exist in defense mechanisms among adolescents with suicidal behavior, with boys and girls using different defense mechanisms (PUBMED:28266192). Adolescents with anorexia nervosa overutilize both mature defenses, such as intellectualization and sublimation, and immature defenses compared to other psychiatrically ill adolescents and normal adolescents (PUBMED:8543536). Individuals with Intermittent Explosive Disorder (IED) exhibit higher scores on immature defense styles and lower scores on mature defense mechanisms compared to other groups (PUBMED:27086223). In sterile males consulting in andrology, the defense mechanisms used were similar to the general population, with a preference for mature defense styles such as humor, repression, and anticipation. However, the psychiatric pathology was significantly correlated with the preferential use of withdrawal, consumption, reaction formation, and lack of humor use (PUBMED:16389709). Depressed and anxious psychiatric outpatients also showed differences in defense mechanisms, with anxiety disorder patients using the neurotic defense style more than depressive disorder patients and controls (PUBMED:9048700). Furthermore, adolescents with somatoform disorders and respiratory stridor showed characteristics of a repressive defense style (PUBMED:1492547), and patients with inflammatory bowel disease, particularly those with Crohn's disease, presented a more immature defensive profile than those with ulcerative colitis (PUBMED:15692786). Lastly, the use of specific defense mechanisms has been correlated with psychological distress and hostility features across different populations (PUBMED:21971196). In summary, defense mechanisms do indeed vary according to psychiatric disorders, and In summary, defense mechanisms do indeed vary according to psychiatric disorders, and understanding these variations can be crucial for developing specific psychotherapeutic interventions tailored to each disorder.
Instruction: New subcategories of class IV lupus nephritis: are there clinical, histologic, and outcome differences? Abstracts: abstract_id: PUBMED:15558526 New subcategories of class IV lupus nephritis: are there clinical, histologic, and outcome differences? Background: The International Society of Nephrologists and Renal Pathology Society (ISN/RPS) classification of lupus nephritis proposes a controversial subclassification of class IV lupus nephritis into IV-segmental (IV-S) and IV-global (IV-G). Methods: A retrospective analysis of a biopsy-proven cohort of patients with lupus nephritis using the ISN/RPS classification was performed. Results: The prevalence of class IV was 47% in the cohort of 70 patients with lupus nephritis. Of 33 patients with class IV lupus nephritis, 11 patients had class IV-S and 22 patients had class IV-G. There were no significant differences in age, sex, and ethnicity in the 2 groups. Greater serological activity (lower C4 level) was observed in the IV-S group, whereas serum creatinine levels and diastolic blood pressures were significantly greater in the IV-G group. Hematocrit levels, significant proteinuria (urine protein &gt; or =3 + ), duration of systemic lupus erythematosus, and the Systemic Lupus Erythematosus Disease Activity Index were similar in the 2 groups. Histologically, combined lesions with segmental endocapillary proliferation and fibrinoid necrosis were significantly more frequent in the IV-S group. The percentage of glomeruli with cellular crescents also was greater in the IV-S group, but the difference was not significant. Wire loops were more common in the IV-G group. Transformation to IV-G was observed in 2 of 3 specimens from repeated biopsies available in the IV-S group; greater than 50% of the IV-G group had both segmental and global glomerular involvement. No significant difference was detected in outcomes in the 2 groups after average follow-ups of 38 and 55 months in the IV-S and IV-G groups, respectively. Conclusion: A clinical and prognostic distinction between IV-S and IV-G remains to be proven. abstract_id: PUBMED:25702048 Significance of global versus segmental subclassification of class III and IV lupus nephritis: a single center experience. Introduction: Class III and IV are the most ominous among the classes of lupus nephritis (LN) and there are contradictory reports on whether LN class IV-G (global) differs from LN class IV-S (segmental) as envisaged by the International Society of Nephrology and the Renal Pathology Society (ISN/RPS) 2003 classification. These subcategories are not validated for LN class III. This study was designed to assess the differences between global and segmental subclasses in classes III and IV of LN. Patients And Methods: In a retrospective analysis, the kidney biopsies of 84 patients with new-onset LN were analyzed. The Student's t-test and Mann-Whitney test were employed to compare differences between the means of continuous variables among the two groups. Fisher's exact test was used to compare the categorical variables. A p-value &lt;0.05 was considered statitistically significant. Results: Of 84 patients, 69 (82.1%) were females and 15 (17.9%) males, with the female to male ratio of 4.6:1. The mean age of all patients was 32.7±12.6 years. The mean serum creatinine at the time of biopsy was 1.5±0.94 mg/dl and the mean urinary protein excretion was 1.6±1.9 g/day. Among 84 biopsies, 26(30.95%) belonged to class III and 37 (44.05%) to class IV LN. In class IV LN, serum creatinine was significantly higher in global vs. segmental subcategory (2.4±1 vs. 1.1±0.5 mg/dl; p=0.034), while. 24-h proteinuria was not significantly different between the subcategories (2.7±1.2 vs. 3.1±1.0 g/d, p=0.56). In LN class III, the mean age, serum creatinine and 24-hour proteinuria did not show significant difference between the global and segmental subcategories (37±17 vs. 30±15 years, p=0.58; 1.2±0.2 vs. 1.25±0.6 mg/dl, p=0.66; 2.03±0.5 vs. 3.1±3.5 g/day, p=0.45, respectively). The proportion of glomeruli showing endocapillary proliferation was significantly higher in global than in segmental subclasses (94.25% vs. 5.72; p=0.026) in class IV LN. The activity and chronicity percent also revealed higher values in global subclass vs. segmental subclass of class IV LN (p=0.038 and p=0.045, respectively). These parameters were not significantly different among the global and segmental subcategories of class III LN (p&gt;0.5 for all parameters). Conclusion: In conclusion, our study showed significant differences in renal function and some pathological features on renal biopsies among the global and segmental subclasses of class IV LN. There were no significant differences among these subclasses of class III LN. Further, and larger studies are needed on this subject to substantiate the above results. abstract_id: PUBMED:24914093 Proteomic analysis of Class IV lupus nephritis. Background: There have been several attempts to standardize the definition and increase reproducibility in classifying lupus nephritis (LN). The last was made by the International Society of Nephrology and Renal Pathology Society in 2003 where the introduction of Class IV subcategories (global and segmental) was introduced. Methods: We investigated whether this subdivision is important using a proteomics approach. All patients with renal biopsies along with their clinical outcome of LN were identified and regrouped according to the above 2003 classifications. Fresh-frozen renal biopsies of Class IV LN (global and segmental), antineutrophil cytoplasmic antibody-associated vasculitis and normal tissue were analyzed using two-dimensional gel electrophoresis (2-DE) and mass spectrometry. Differentially expressed proteins were identified and subjected to principal component analysis (PCA), and post hoc analysis for the four sample groups. Results: PCA of 72 differentially expressed spots separated Class IV global and Class IV segmental from both normal and antineutrophil cytoplasmic antibody-associated vasculitis (ANCA). The 28 identified proteins were used in a post hoc analysis, and showed that IV-global and IV-segmental differ in several protein expression when compared with normal and ANCA. To confirm the proteomic results, a total of 78 patients (50 Class IV-Global and 28 Class IV-Segmental) were re-classified according to 2003 classification. There was no difference in therapy between the groups. The renal survival and patient survivals were similar in both groups. Conclusions: There is no strong evidence to support a different outcome between the two subcategories of Class-IV LN and, they should thus be treated the same until further studies indicate otherwise. abstract_id: PUBMED:17167111 Severe lupus nephritis: racial differences in presentation and outcome. This study assessed whether certain clinicopathologic variables could explain the impact of race on outcome in 86 patients who had severe lupus nephritis and were available for long-term follow-up after participating in a prospective, controlled, clinical trial. Fifty-four (63%) patients were white, 21 (24%) were black, and 11 (13%) were categorized as other. The proportion of patients with anti-Ro, anti-nRNP, and anti-Sm was significantly greater among black patients. Biopsies with segmental active proliferative and necrotizing lesions that involved &gt;or=50% of glomeruli +/- membranous glomerulonephritis (class III &gt;or=50%+/-V) were significantly more common (white 44%, black 76%, other 36%; P &lt; 0.05) and diffuse proliferative glomerulonephritis +/- membranous glomerulonephritis (class IV+/-V) was less common (white 54%, black 24%, other 64%) among black patients. Attainment of a remission was greatest among white patients (white 52%, black 29%, other 27%; P = 0.09). Features that were predictive of a remission were white race, baseline serum creatinine, and class IV+/-V lesions. Patient survival at 10 yr (white 81%, black 59%, other 73%; P = 0.029) and renal survival at 10 yr (white 68%, black 38%, other 61%; P = 0.015) were significantly poorer in black patients. Predictors of ESRD were serum creatinine, the presence of anti-Ro antibodies, class III &gt;or=50%+/-V lesions, and failure to achieve a remission. In conclusion, racial differences were observed in the serologic and histologic features at presentation, response to treatment, and outcome of patients with severe lupus nephritis. In a population of patients with severe lupus nephritis, black patients were significantly more likely to have a serologic profile and renal lesions that were associated with more aggressive renal disease and resulted in worse outcomes than white patients. abstract_id: PUBMED:21442173 Outcome of reclassification of World Health Organization (WHO) class III under International Society of Nephrology-Renal Pathology Society (ISN-RPS) classification: retrospective observational study. The outcome of systemic lupus erythematosus (SLE) is largely influenced by the existence of lupus nephritis (LN), and its histologic classification guides the treatment and prognosis of SLE. International Society of Nephrology-Renal Pathology Society (ISN-RPS) announced a revised classification of LN in 2004. The present study investigated the differential outcome of World Health Organization (WHO) class III LN when reclassified according to ISN-RPS classification. Forty-three patients with biopsy-proven WHO class III LN at a single tertiary hospital were included in the study. Baseline characteristics at the time of renal biopsy and clinical data during follow-up were obtained from medical records. Renal response to treatment at one-year follow-up was analyzed in three ways; complete response (CR), partial response (PR), and no response (NR). Of 43 patients with previous WHO class III LN, 12 cases were reclassified into ISN-RPS class IV (9 cases of class IV-S and 3 cases of IV-G). Baseline characteristics at the time of renal biopsy were not different between the reclassified class IV and remaining class III LN group except activity index on renal histology, which was significantly elevated in the reclassified class IV group (4.90 vs. 6.75; P = 0.02). Significantly higher number of patients with remaining class III LN achieved CR to treatment than those with reclassified class IV LN at one-year follow-up since initial biopsy (CR: PR: NR; 16:7:7 vs. 3:1:8; P = 0.032). Our study suggests that the ISN-RPS classification is more advantageous in predicting renal outcome and guiding treatment when evaluating previously classified WHO class III LN. abstract_id: PUBMED:15569330 The outcome and a new ISN/RPS 2003 classification of lupus nephritis in Japanese. Background: A considerable diversity in prognosis is seen with lupus glomerulonephritis (LGN). Hence, the clinical usefulness of a recent International Society of Nephrology/Renal Pathology Society (ISN/RPS) 2003 classification to judge the long-term outcome of human LGN has been investigated. Methods: We studied retrospectively 60 subjects with LGN (7 males, 53 females, mean age of 33 years old) who underwent renal biopsies and were followed from 1 to 366 months, with a mean of 187 months. We diagnosed renal pathology as classes, active and sclerosing lesions, according to the new and WHO1995 classification of LGN, and analyzed the clinicopathologic factors affecting to the prognosis of LGN. Results: New classification got much higher consensus in the judgment of classes (98% vs. 83%, P = 0.0084). The group of Class IV-S (N = 6) or IV-G (N = 17) at initial biopsies showed higher rate of end-stage renal failure (ESRF) compared with that of Class I, II, III or V (40.9% vs. 2.6%, P &lt; 0.001). The mean 50% renal survival time of Class IV was 189 +/- 29 months, and patients with Class IV-S tended to have a poorer prognosis (95 +/- 22 months for IV-S vs. 214 +/- 35 months for IV-G, P = 0.1495). Class IV was also selected as the most significant risk factor for ESRF by stepwise model (P = 0.002). In subanalysis for ESRF in Class IV (-S or -G), treatment including methylprednisolone pulse therapy was only selected as a significant improving factor for primary outcome (P = 0.034). In addition, activity index was the significant risk factor of death and/or ESRF after initial renal biopsies (P = 0.043). As for actuarial patient death during all follow-up periods, complications with anti-phospholipid syndrome or nephrotic syndrome were significant risk factors (P = 0.013, P = 0.041, respectively). Conclusion: New ISN/RPS 2003 classification provided beneficial pathologic information relevant to the long-term renal outcome and the optimal therapy preventing ESRF and/or death in patients with LGN. abstract_id: PUBMED:36168149 The long-term outcome of New Zealand Maori and Pacific Island children diagnosed with childhood onset lupus nephritis. Objective: To determine the long-term outcome of Maori and Pacific Island children diagnosed with childhood onset lupus nephritis. Method: A chart review was conducted of children diagnosed with biopsy proven lupus nephritis seen by the Starship Hospital and Kidz First paediatric rheumatology and/or Starship renal services between January 1992 and January 2018. Baseline and follow-up kidney histology, adherence and response to therapy including partial or full renal remission, refractory disease, end-stage kidney disease (ESKD) and mortality were determined. Results: In a New Zealand cohort of 42 with childhood onset lupus nephritis, Maori and Pacific Island children were significantly more likely to develop class 4 lupus nephritis (RR (95% CI), 11.3 (3.84-49.9), p &lt; 0.0001), demonstrate medication nonadherence (RR (95% CI) 12.4 (3.48-85.7), p &lt; 0.0001) and experience end stage kidney disease (RR (95% CI) 15.7 (2.97-389.3), p = 0.0003) and mortality (RR (95% CI) 11.1 (1.91-280.1), p = 0.005) compared to non-Maori and Pacific Island children. In addition, Maori children with childhood onset lupus nephritis developed chronic histological changes significantly more rapidly than Pacific or Asian children (p = 0.038). Conclusion: Lupus nephritis is more common among Maori and Pacific Island children in New Zealand associated with a significant incidence of end stage kidney disease and mortality, with some Maori children developing rapid histologic disease progression. abstract_id: PUBMED:22232126 Long-term outcome of lupus nephritis in Asian Indians. Objective: There are sparse data on outcome of lupus nephritis from developing countries. This study looks at outcome in Asian Indians. Methods: This retrospective study included patients at a single center over 20 years. Patients were treated as per standard protocols. The primary outcome measure was chronic renal failure or death; the secondary outcome was end-stage renal disease or death. The worst-case scenario was also calculated, considering those lost to followup in the first year as events. Kaplan-Meier survival curves and the log rank test were used for survival analysis. Data are shown as the mean ± SD. Results: We included 188 patients with lupus nephritis, with a female:male ratio of 11:1, a mean ± SD age at onset of 23.6 ± 10.5 years, and a median followup time of 6 years (interquartile range 3-9 years). Of 136 patients with a biopsy sample, the distribution was as follows: class II in 22, class III in 36, class IV in 61, class V in 16, and class VI in 1. Survival with normal renal function was 84%, 69%, and 57% at 5, 10, and 15 years, respectively; in the worst-case scenario, survival was 77%, 63%, and 51%, respectively. There was no difference in survival by histologic class; however, nonbiopsied patients had lower survival. Renal survival was 91%, 81%, and 76% at 5, 10, and 15 years, respectively; in the worst-case scenario, survival was 79%, 70%, and 66%, respectively. Risk factors for poor outcome were low C3, hematuria, hypertension, creatinine, lack of remission, and occurrence of major infection. There was a high rate of major infections of 42.3%, with tuberculosis at 11.5%. Infections caused one-half of all deaths. Conclusion: The outcome of lupus nephritis in Asian Indians with standard immunosuppressive regimens is reasonable, but immunosuppression is associated with a high rate of infection. abstract_id: PUBMED:10072504 Analysis of MHC class II genes in the susceptibility to lupus in New Zealand mice. Hybrids of New Zealand Black (NZB) and New Zealand White (NZW) mice spontaneously develop a disease similar to human systemic lupus erythematosus. MHC and non-MHC genes contribute to disease susceptibility in this murine model. Multiple studies have shown that the NZW H2z locus is strongly associated with the development of lupus-like disease in these mice. The susceptibility gene(s) within H2z is not known, but different lines of evidence have pointed to class II MHC genes, either H2-E or H2-A (Ez or Az in NZW). Recent studies from our laboratory showed that Ez does not supplant H2z in the contribution to lupus-like disease. In the present work we generated C57BL/10 (B10) mice transgenic for Aaz and Abz genes (designated B10.Az mice) and used a (B10.Az x NZB)F1 x NZB backcross to assess the contributions of Az genes to disease. A subset of backcross mice produced high levels of IgG autoantibodies and developed severe nephritis. However, no autoimmune phenotype was linked to the Az transgenes. Surprisingly, in the same backcross mice, inheritance of H2b from the nonautoimmune B10 strain was strongly linked with both autoantibody production and nephritis. Taken together with our previous Ez studies, the present work calls into question the importance of class II MHC genes for lupus susceptibility in this model and provides new insight into the role of MHC in lupus-like autoimmunity. abstract_id: PUBMED:8870114 Outcome of silent lupus nephritis. To analyze the long-term outcome in patients with silent lupus nephritis, we retrospectively studied 20 patients with systemic lupus erythematosus without clinical renal involvement who had renal biopsies in our unit between 1978 and 1986 and reviewed 193 cases reported between 1957 and 1995. Two patients of the current series were lost to follow-up. Mean follow-up in the other 18 was 13 +/- 3 years (range, 2 to 17). On kidney biopsy, nine had class I, six class II, one class IV, and two class V disease (WHO classification). Three patients with prior normal renal function died of nonrenal causes. During the study, the remaining 15 patients had normal renal function and urinalysis. Most patients from the literature had "mild" histologic lesions, but 30 had diffuse proliferative glomerulonephritis. Over an average of 46 months of follow-up from biopsy, renal survival rate and patient survival rate were 98% and 91%, respectively. Three patients died of end-stage renal failure. In conclusion, end-stage renal failure in patients with silent lupus nephritis is rare regardless of the histopathological renal lesions. Patients survival depends on nonrenal causes. Answer: The introduction of new subcategories of class IV lupus nephritis, specifically IV-segmental (IV-S) and IV-global (IV-G), has been a subject of investigation to determine if there are significant clinical, histologic, and outcome differences between the two. A retrospective analysis of a cohort of patients with lupus nephritis using the International Society of Nephrologists and Renal Pathology Society (ISN/RPS) classification found no significant differences in outcomes between the IV-S and IV-G groups after average follow-ups of 38 and 55 months, respectively. However, there were some differences in serological activity, serum creatinine levels, and diastolic blood pressures, with greater serological activity observed in the IV-S group and higher serum creatinine levels and diastolic blood pressures in the IV-G group. Histologically, combined lesions with segmental endocapillary proliferation and fibrinoid necrosis were more frequent in the IV-S group, while wire loops were more common in the IV-G group (PUBMED:15558526). Another study assessing the significance of global versus segmental subclassification in classes III and IV lupus nephritis found significant differences in renal function and some pathological features on renal biopsies among the global and segmental subclasses of class IV LN. In class IV LN, serum creatinine was significantly higher in the global versus segmental subcategory, and the proportion of glomeruli showing endocapillary proliferation was significantly higher in the global subclass. However, these differences were not observed in class III LN (PUBMED:25702048). Proteomic analysis of Class IV lupus nephritis also investigated the importance of the subdivision between IV-global and IV-segmental. The study found that while there were differences in several protein expressions between the two subcategories, there was no strong evidence to support a different outcome between them, suggesting that they should be treated the same until further studies indicate otherwise (PUBMED:24914093). In summary, while some studies have found clinical and histological differences between the IV-S and IV-G subcategories of class IV lupus nephritis, the evidence for a difference in outcomes is not strong. Further research is needed to substantiate whether these subcategories should be treated differently or if they have distinct prognostic implications.
Instruction: Do physicians have lower risk of severe sepsis and associated mortality? Abstracts: abstract_id: PUBMED:24231761 Do physicians have lower risk of severe sepsis and associated mortality? A matched cohort study*. Objectives: Physicians generally have higher disease awareness and easier access to medical care, which may help them reduce risk of developing severe sepsis and associated mortality when they suffer from acute infection. However, the opposite situation may occur due to the presence of potential barriers to healthcare in physicians. We aim to examine the risk of severe sepsis and associated mortality in physicians. Design: A matched cohort study. Setting: Registry of medical professionals and inpatient and outpatient claims data from Taiwan's National Health Insurance Research Database. Subjects: Physicians (n = 29,697) in Taiwan and a group of persons who were demographically and socioeconomically matched (1:1 ratio) and without any medical education and background. All subjects were followed from the index date (January 1, 2000) to the occurrence of endpoint, withdrawal, or December 31, 2008, whichever date came first. Interventions: None. Measurements And Main Results: Primary outcome was development of severe sepsis. Secondary outcome was 90-day mortality following severe sepsis. The overall incidence density of severe sepsis was lower in physicians than in controls (3.25 vs 3.90 per 1,000 person-years, p &lt; 0.001). According to the Cox regression model, severe sepsis develops in physicians 24% less likely than controls after baseline covariates were adjusted (adjusted hazard ratio, 0.76; 95% CI, 0.68-0.85). The 90-day mortality rates were similar between physicians and controls with severe sepsis (46.5% vs 45.7%, p = 0.72). However, after controlling for the baseline and additional covariates, the risk of death was significantly lower in physicians than in controls (adjusted hazard ratio, 0.82; 95% CI, 0.71-0.95). Conclusions: These findings support the hypothesis that physicians are less likely than controls to develop or die of severe sepsis, implying that medical knowledge, higher disease awareness, and easier healthcare access in physicians may help reduce their risk of severe sepsis and associated mortality. abstract_id: PUBMED:36203605 Immunological risk factors for sepsis-associated delirium and mortality in ICU patients. Background: A major challenge in intervention of critical patients, especially sepsis-associated delirium (SAD) intervention, is the lack of predictive risk factors. As sepsis and SAD are heavily entangled with inflammatory and immunological processes, to identify the risk factors of SAD and mortality in the intensive care unit (ICU) and determine the underlying molecular mechanisms, the peripheral immune profiles of patients in the ICU were characterized. Methods: This study contains a cohort of 52 critical patients who were admitted to the ICU of the First Affiliated Hospital of Jinan University. Comorbidity, including sepsis and SAD, of this cohort was diagnosed and recorded. Furthermore, peripheral blood samples were collected on days 1, 3, and 5 of admission for peripheral immune profiling with blood routine examination, flow cytometry, ELISA, RNA-seq, and qPCR. Results: The patients with SAD had higher mortality during ICU admission and within 28 days of discharge. Compared with survivors, nonsurvivors had higher neutrophilic granulocyte percentage, higher CRP concentration, lower monocyte count, lower monocyte percentage, lower C3 complement level, higher CD14loCD16+ monocytes percentage, and higher levels of IL-6 and TNFα. The CD14hiCD16- monocyte percentage manifested favorable prediction values for the occurrence of SAD. Differentially expressed genes between the nonsurvival and survival groups were mainly associated with immune response and metabolism process. The longitudinal expression pattern of SLC2A1 and STIMATE were different between nonsurvivors and survivors, which were validated by qPCR. Conclusions: Nonsurvival critical patients have a distinct immune profile when compared with survival patients. CD14hiCD16- monocyte prevalence and expression levels of SLC2A1 and STIMATE may be predictors of SAD and 28-day mortality in ICU patients. abstract_id: PUBMED:25140031 Improved outcomes of high-risk emergency medical admissions cared for by experienced physicians. Background: Physician experience has been shown to improve clinical outcomes. Limited numbers of experienced clinicians make it unfeasible that they would care for all patients. We hypothesized that physician experience would impact outcomes for patients with high, but not low, risk of mortality. Methods: All emergency admissions from 2002 to 2013 were studied. Two groups at low and high risk for mortality (1.3% (95% CI: 1.13, 1.62) vs. 19.4% (95% CI: 18.5, 20.3)) were defined by a risk prediction method comprising acute Illness Severity Score, Charlson Comorbidity Index and sepsis status. Length of stay (LOS) and 30-day in-hospital mortality were the primary end points. Consultant experience was studied univariately and also entered into a multiple logistic regression model to predict 30-day in-hospital survival. A zero-truncated Poisson regression model assessed LOS and incidence rate ratios (IRR) were calculated. Results: Over the 12-year study period 66 933 admissions were recorded in 36 271 patients. High-risk patients cared for by more experienced (≥20 years qualified) physicians had a lower 30-day in-hospital mortality (19.4 vs. 17.1% P &lt; 0.001), relative risk reduction 11.9%. The multivariable odds ratio of a 30-day in-hospital death for an experienced physician was 0.89 (95% CI: 0.82, 0.97; P = 0.009). LOS was shorter for high-risk patients allocated to experienced physicians with a multivariable IRR of 0.77 (95% CI: 0.94, 0.99; P = 0.02). No difference in outcomes was found for low-risk patients. Conclusion: High-risk but not low-risk patients have improved outcomes when cared for by physicians with greater experience. abstract_id: PUBMED:36870801 Definitions, rates and associated mortality of ICU-acquired pneumonia: A multicenter cohort study. Introduction: We aimed to analyze intensive care unit (ICU)-acquired pneumonia according to 7 definitions, estimating associated hospital mortality. Methods: This cohort study was nested within an international randomized trial, evaluating the effect of probiotics on ICU-acquired pneumonia in 2650 mechanically ventilated adults. Each clinically suspected pneumonia was adjudicated by two physicians blinded to allocation and center. The primary outcome was ventilator-associated pneumonia (VAP) informed by ventilation for ≥2 days, new, progressive or persistent infiltrate plus 2 of: temperature &gt; 38 °C or &lt; 36 °C; leukopenia (&lt;3 × 10(Fernando et al., 20206)/L) or leukocytosis (&gt;10 × 10(Fernando et al., 20206)/L); and purulent sputum. We also used 6 other definitions estimating the risk of hospital mortality. Results: The frequency of ICU-acquired pneumonia varied by definition: the trial primary outcome VAP (21.6%), Clinical Pulmonary Infection Score (CPIS) (24.9%), American College Chest Physicians (ACCP) (25.0%), International Sepsis Forum (ISF) (24.4%), Reducing Oxidative Stress Study (REDOXS) (17.6%), Centers for Disease Control (CDC) (7.8%), and invasively microbiologically confirmed (1.9%). The trial primary outcome VAP (HR 1.31 [1.08, 1.60]), ISF (HR 1.32 [1.09,1.60]), CPIS (HR 1.30 [1.08,1.58]) and ACCP definitions (HR 1.22 [1.00,1.47]) were associated with hospital mortality. Conclusions: Rates of ICU-acquired pneumonia vary by definition and are associated with differential increased risk of death. abstract_id: PUBMED:33684477 Frailty Index associated with postoperative complications and mortality after lower extremity amputation in a national veteran cohort. Objective: Surgical frailty and its assessment have become essential considerations in perioperative management for the modern aging surgical population. The risk analysis index is a validated frailty score that has been proven to predict short-term outcomes and long-term mortality in several surgical subspecialties and high-risk procedures. We examined the association of risk analysis index scores with postoperative outcomes in a retrospective nationwide database of patients who underwent lower extremity amputation in the Veterans Health Administration Health Care System. Methods: The Veteran Affairs Surgical Quality Improvement Program data was queried across the Veteran Affairs Health Care System with institutional review board approval for lower extremity amputations. Records of above and below knee amputation, Current Procedural Terminology codes 27590, 27591, 27592, 27594, 27596 and 27880, 27881, 27882, 27884, and 27886, respectively, from 1999 to 2018 were obtained. Incomplete and traumatic entries were removed. Risk Analysis Index score was calculated from preoperative variables and patients were separated into five score cohorts (≤15, 16-25, 26-35, 36-45, ≥46). The χ2 test and analysis of variance were used to compare the cohorts. Forward binary logistic regression modeling was used to determine covariate-adjusted odds ratios for outcomes in each cohort (SPSS software; version 25, IBM Corp). Results: A total of 47,197 patients (98.9% male) with an average age of 66.4 ± 10.6 years underwent nontraumatic lower extremity amputation, including 27,098 below knee and 20,099 above knee amputations, during the study period. Frailty was associated with increased rates of deep vein thrombosis, sepsis, cardiac arrest, myocardial infarction, pneumonia, intubation for more than 48 hours, pulmonary embolism, reintubation, acute kidney injury, renal failure, increased length of stay, overall complications, and death. Increases in the frailty score were associated with up to three times the likelihood for the occurrence of a postoperative complication and up to 32 times likelihood to perish within 30 days than those with low frailty scores. Conclusions: Risk analysis index assessment of frailty was found to be associated with several postoperative outcomes in a dose-dependent manner in patients undergoing lower extremity amputation in the Veterans Health Care System, with higher scores associated with higher rates of death and major cardiac (myocardial infarction, cardiac arrest), pulmonary (pneumonia, failure to wean vent, reintubation), and renal (renal insufficiency, renal failure) complications. We recommend the use of risk analysis index score as a frailty screening tool for patients undergoing lower extremity amputation to enable providers to adequately inform and counsel patients regarding potential significant risks. abstract_id: PUBMED:26286481 Is raloxifene associated with lower risk of mortality in postmenopausal women with vertebral fractures after vertebroplasty?: a hospital-based analysis. Background: Osteoporotic fractures are associated with mortality in postmenopausal woman. Whether raloxifen treatment after vertebroplasty can reduce mortality is unclear in this group. To compare the effect of raloxifene and no osteoporosis treatment on the risk of mortality after vertebroplasty, we designed this study. Methods: This was a retrospective study (January 2001 to December 2007). Follow-up for each participant was calculated as the time from inclusion in the study to the time of death, or to December 31(st), 2013, whichever occurred first. All of the patients underwent baseline bone density studies, and age and body mass index (kg/m(2)) were recorded. All associated medical diseases such as diabetes, hypertension, and liver and renal disease were recorded. Results: One hundred and forty-nine patients with vertebral fractures were enrolled, of whom 51 used raloxifene and 98 patients did not receive any anti-osteoporotic therapy. At the end of the follow-up period, 62 patients had died and 87 were still alive. The treated patients had a lower mortality rate than those who did not receive treatment (P = 0.001, HR = 3.845, 95% CI 1.884-7.845). The most common cause of mortality was sepsis, and those who received raloxifene had a lower rate of sepsis compared to those who did not receive treatment (P &lt; 0.001). Conclusions: Effective treatment with raloxifene may had a lower mortality rate in patients with postmenopausal osteoporosis-related vertebral fractures after vertebroplasty. abstract_id: PUBMED:30528872 Mortality risk factors for listeriosis - A 10 year review of non-pregnancy associated cases in England 2006-2015. LISTERIOSIS: is a foodborne illness that can result in septicaemia, Central Nervous System (CNS) disease, foetal loss and death in high risk patients. Objectives: To analyse the demographic trends, clinical features and treatment of non-perinatal listeriosis cases over a ten year period and identify mortality-associated risk factors. Methods: Reported laboratory-confirmed non-pregnancy associated cases of listeriosis between 2006 and 2015 in England were included and retrospectively analysed. Multivariate logistic regression analysis was performed to determine independent risk factors for mortality. Results: 1357/1683 reported cases met the inclusion criteria. Overall all-cause mortality was 28.7%; however, mortality rates declined from 42.1% to 20.2%. Septicaemia was the most common presentation 69.5%, followed by CNS involvement 22.4%. CNS presentations were significantly associated with age &lt; 50 years, and septicaemia with older age. Age &gt; 80 years (OR 3.32 95% CI 1.92-5.74), solid-organ malignancy (OR 3.42 95% CI 2.29-5.11), cardiovascular disease (OR 3.30 95% CI 1.64-6.63), liver disease (OR 4.61 95% CI 2.47-8.61), immunosuppression (OR 2.12 95% CI 1.40-3.21) and septicaemia (OR 1.60 95% CI 1.17-2.20) were identified as independent mortality risk factors. Conclusions: High risk groups identified in this study should be the priority focus of future public health strategies aimed at reducing listeriosis incidence and mortality. abstract_id: PUBMED:37456486 Infection-Associated Mortality During Induction Chemotherapy in Group B Intermediate-Risk Pediatric Burkitt's Lymphoma. Background Burkitt's lymphoma (BL) in the pediatric population has significant burden in developing countries. Infection-related complications during the induction chemotherapy phase pose a major challenge and contribute to high mortality rates due to a severely immunocompromised state. However, there is scarce data on the etiologies and optimal management strategies for infection-related mortality in pediatric BL patients, especially in developing countries like Pakistan. Methods This is a cross-sectional study that included a total of 116 pediatric patients with intermediate-risk BL. All patients were treated based on the Children's Cancer and Leukaemia Group (CCLG) 2020 guidelines. Data on patient demographics, presenting symptoms, diagnosis, infectious etiologies, and outcomes were collected. Infection-related complications and mortality were monitored during the induction chemotherapy period. The results of relevant culture reports were tabulated and data were analyzed. Results Among the 116 included patients, 61.1% were males with a mean age of 4.83 ± 2.12 years. Abdominal BL was the most common anatomical location. During the induction period, 66 patients (56.9%) had culture-proven infections, resulting in 33 deaths (28.4%). Fever was the predominant presenting symptom in all patients, followed by vomiting (57.6%), loose stools (42.4%), and cough (18.2%). Neutropenic colitis, sepsis, pneumonia, and meningitis were among the diagnosed infections. Hospital-acquired bacterial infections, including multi-drug resistant gram-negative and gram-positive organisms, were the main cause of mortality, with fungal infections and cytomegalovirus viremia also identified in a few patients. Conclusions This study highlights the urgent need for improved management strategies in pediatric BL patients in Pakistan to reduce infection-related complications and mortality rates, emphasizing the importance of context-specific approaches for infection prevention and management. abstract_id: PUBMED:20938271 Ventilator-associated pneumonia in trauma patients is associated with lower mortality: results from EU-VAP study. Background: Differences in trauma patients developing ventilator-associated pneumonia (VAP) are described regarding etiology and risk factors associated. We aim to describe the differences in outcomes in trauma and nontrauma patients with VAP. Methods: A prospective, observational study conducted in 27 intensive care units from nine European countries. We included patients requiring invasive mechanical ventilation for &gt;48 hours who developed VAP. Logistic regression model was used to assess the factors independently associated with mortality in trauma patients with VAP. Results: A total of 2,436 patients were evaluated; 465 developed VAP and of these 128 (27.5%) were trauma patients. Trauma patients were younger than nontrauma (45.3 ± 19.4 vs. 61.1 ± 16.7, p &lt; 0.0001). Nontrauma had higher simplified acute physiology score II compared with trauma patients (45.5 ± 16.3 vs. 41.1 ± 15.2, p = 0.009). Most prevalent pathogens in trauma patients with early VAP were Enterobacteriaceae spp. (46.9% vs. 27.8%, p = 0.06) followed by methicillin-susceptible Staphylococcus aureus (30.6% vs. 13%, p = 0.03) and then Haemophilus influenzae (14.3% vs. 1.9%, p = 0.02), and the most prevalent pathogen in late VAP was Acinetobacter baumannii (12.2% vs. 44.4%, p &lt; 0.0001). Mortality was higher in nontrauma patients than in trauma patients (42.6% vs. 17.2%, p &lt; 0.001, odds ratio [OR] = 3.55, 95%CI = 2.14-5.88). A logistic regression model adjusted for sex, age, severity of illness at intensive care unit admission, and sepsis-related organ failure assessment score at the day of VAP diagnosis confirmed that trauma was associated with a lower mortality compared with nontrauma patients (odds ratio [OR] = 0.37, 95%CI = 0.21-0.65). Conclusions: Trauma patients developing VAP had different demographic characteristics and episodes of etiology. After adjustment for potential confounders, VAP episodes in trauma patients are associated with lower mortality when compared with nontrauma patients. abstract_id: PUBMED:23062226 Mild hypoglycemia is independently associated with increased risk of mortality in patients with sepsis: a 3-year retrospective observational study. Introduction: Mild hypoglycemia is associated with increased mortality in critically ill patients. However, data regarding the association between mild hypoglycemia and patient outcomes among patients with sepsis are limited. Methods: Patients admitted to a medical ICU for sepsis, as defined by the Surviving Sepsis Campaign guidelines, during a 3-year period were enrolled retrospectively. Data on blood glucose (BG) control parameters and patient outcomes were collected. The primary outcome was the relationship of mild hypoglycemia (defined as minimum BG of 40 to 69 mg/dl during ICU stay) to hospital mortality, and the secondary outcomes were ICU-acquired complication rates, ICU and 1-year mortality rates. A relationship between glucose variability and hypoglycemic events was also investigated. Results: Three-hundred and thirteen consecutive patients with sepsis were enrolled (mean age, 71.8 ± 11.3 years; male, n = 166; diabetics, n = 102). A total of 14,249 (5.6/day/patient) BG tests were performed, and 175 hypoglycemic events (spontaneous, n = 71; iatrogenic, n = 104) occurred in 80 (25.6%) patients during the ICU stay; severe hypoglycemia (minimum BG level &lt; 40 mg/dl) occurred in 24 (7.7%) patients, and mild hypoglycemia (minimum BG level 40 to 69 mg/dl) was found in 56 (17.9%) patients. The frequency of hypoglycemic events increased with higher glucose variability, and patients with mild hypoglycemia had higher rates of ICU-acquired complications than did those with no hypoglycemia (renal, 36.2% vs. 15.6%, P = 0.003; cardiac, 31.9% vs. 14.3%, P = 0.008; hepatic, 34.0% vs. 18.2%, P = 0.024; bacteremia, 14.9% vs. 4.5%, P = 0.021). Multivariate analysis revealed that mild hypoglycemia was independently associated with increased hospital mortality (odds ratio, 3.43; 95% confidence interval, 1.51 to 7.82), and even a single event was an independent risk factor (odds ratio, 2.98; 95% confidence interval, 1.10 to 8.09). Kaplan-Meier analysis demonstrated that mild hypoglycemia was significantly associated with a lower 1-year cumulative survival rate among patients with sepsis (P &lt; 0.001). Conclusion: Mild hypoglycemia was associated with increased risk of hospital and 1-year mortality, as well as the occurrence of ICU-acquired complications. Physicians thus need to recognize the importance of mild hypoglycemia in patients with sepsis. Answer: Yes, physicians do have a lower risk of severe sepsis and associated mortality. A matched cohort study using data from Taiwan's National Health Insurance Research Database found that the overall incidence density of severe sepsis was lower in physicians than in controls (3.25 vs 3.90 per 1,000 person-years, p < 0.001). After adjusting for baseline covariates, physicians were found to be 24% less likely to develop severe sepsis than controls (adjusted hazard ratio, 0.76; 95% CI, 0.68-0.85). Additionally, while the 90-day mortality rates were similar between physicians and controls with severe sepsis (46.5% vs 45.7%, p = 0.72), the risk of death was significantly lower in physicians after controlling for baseline and additional covariates (adjusted hazard ratio, 0.82; 95% CI, 0.71-0.95). These findings suggest that medical knowledge, higher disease awareness, and easier healthcare access in physicians may help reduce their risk of severe sepsis and associated mortality (PUBMED:24231761).
Instruction: Is prosthetic renal artery reconstruction a durable procedure? Abstracts: abstract_id: PUBMED:11436085 Is prosthetic renal artery reconstruction a durable procedure? An analysis of 489 bypass grafts. Objective: Renal artery stenosis has been classically addressed with saphenous vein bypass graft or transaortic endarterectomy performed either primarily or in combination with an aortic procedure. In this series, we report the outcome of our 12-year experience with renal artery reconstruction using prosthetic conduit. Methods: Patients undergoing renal artery bypass grafting from 1987 to 1999 were identified. Demographics, indications, concurrent operations, complications, and patency were analyzed. Patients underwent postoperative duplex scan with subsequent ultrasound scans at 6-month intervals. Results: There were 489 procedures performed in 414 patients with indications: high-grade renal artery stenosis in combination with abdominal aortic aneurysm repair or symptomatic aortoiliac occlusive disease (309 [63%]), renovascular hypertension (118 [24%]), and renal salvage (20 [4%]). Indications for the remainder included trauma, renal artery aneurysm, or an infected aortic graft. Inflow was aorta or aortic graft in 95% of patients with the remainder taken from the iliac or visceral vessels. The retroperitoneal approach was used in 97.8%. Nonfatal complications occurred in 11.4% with a 1.4% early and 4.8% late occlusion rate. Renal function worsened in 3.1% of all patients. Secondary patency at 1 and 5 years was 98% and 96%, respectively. Conclusion: Renal artery reconstruction with prosthetic conduit has an acceptable and durable result whether used for primary renal artery reconstruction or concomitant reconstruction with aortic procedures. abstract_id: PUBMED:12947286 Radial artery as conduit for distal renal artery reconstruction. Unlabelled: Reconstruction of the renal artery with both saphenous vein and prosthetic material as bypass graft is durable in atherosclerotic disease. Extensive experience with saphenous vein grafts in pediatric patients and patients without atherosclerosis reveals a disturbing incidence of vein graft aneurysm degeneration. Distal renal artery reconstruction involving small branch vessels is generally not amenable to prosthetic reconstruction. We report a new approach to distal renal artery bypass grafting to avert these limitations. Case: A 43-year-old man with previously normal blood pressure had malignant hypertension, which proved difficult to control despite use of a beta-blocker and an angiotensin II inhibitor. At renal angiography a fusiform aneurysm was revealed in a posterior branch of the right renal artery. The renal artery aneurysm was resected, and the left radial artery was harvested and used as a sequential aortorenal bypass graft to the two branch renal arteries. The postoperative course was uneventful, and the patient now has normal blood pressure with a calcium channel blocker for maintenance of the radial artery graft. Pathologic analysis revealed a pseudoaneurysm with dissection between the media and external lamella, consistent with fibromuscular dysplasia. Conclusion: Autologous artery is the preferred conduit for renal reconstruction in the pediatric population. On the basis of cardiac surgery experience, we used the radial artery and found it to be a technically satisfactory conduit for distal renal reconstruction in a patient without atherosclerosis. abstract_id: PUBMED:21079753 Usefulness of cardiac computed tomography in the diagnosis of prosthetic coronary artery graft with interposition procedure. An 80-year-old Japanese man was admitted with orthopnea and pitting edema of both lower legs. We diagnosed congestive heart failure (CHF) on the basis of a chest X-ray and an echocardiogram. An electrocardiogram showed a heart rate of 120 beats/min with atrial fibrillation rhythm (Af). The patient developed aortic valve failure and destruction of the base of right coronary artery (RCA) due to infectious endocarditis at 71 years of age. The patient underwent aortic valve replacement and coronary artery bypass grafting with an interposed graft with polyester vascular graft to RCA. The patient recovered from CHF after the 6 days of treatment with diuretics and verapamil. We confirmed the patency of coronary arteries and bypass grafts using a 64-slice cardiac computed tomography scan (CT) and diagnosed CHF due to Af. Here we describe the estimation of the prosthetic coronary artery graft patency with the interposition procedure using 64-slice cardiac CT. abstract_id: PUBMED:1987381 Extracorporeal renal artery reconstruction for renovascular hypertension. Extracorporeal reconstruction can be applied to the successful repair of stenoses in the distal renal artery and its hilar branches. This study evaluates the short- and long-term results of extracorporeal renal artery reconstruction in 65 patients, including 5 children, with renovascular hypertension who were treated from 1974 through 1989. The mean age of the patients was 37 years (range, 7 to 67 years). The cause of the stenoses was arteriosclerosis in 8 patients, fibrodysplasia in 54 patients, and miscellaneous in 3. Hypertension was severe before treatment with a mean blood pressure of 187/147 mm Hg that was reduced to a mean of 159/102 mm Hg after medical therapy. Ten patients had renal dysfunction. Results were evaluated both at short-term intervals (mean, 7.9 months; 64 patients) and long-term intervals (mean, 5.9 years; 60 patients), after surgery. Blood pressure responses were classified as beneficial (cured/improved) or failures. Anatomic results were evaluated by angiography in 98% of the patients at the short-term interval and in 77% of the patients at the long-term interval. Extracorporeal renal artery surgery was performed on 78 kidneys among 65 patients (unilateral, 45 patients; bilateral, 13 patients; unilateral extracorporeal and contralateral in situ, 7 patients). In most of the cases autologous arterial graft was used for reconstruction. Early in the series one patient died as a result of the operative procedure (1/65, 1.5%). A beneficial blood pressure response occurred in 53 patients (53/65; 82%) at the short-term interval and in 49 patients (49/61; 80%) at the long-term interval, with the average blood pressure at the short-term interval being 138/85 mm Hg and at the long-term interval being 139/85 mm Hg. Renal function improved in all patients with preoperative renal dysfunction. Graft stenosis or occlusion of the main renal artery was neither observed at the short-term interval nor at the long-term interval. However, residual stenoses were observed in 9 of the 163 reconstructed distal branches (5.5%). Extracorporeal renal artery reconstruction with autologous arterial grafts can be effectively applied to lesions of the distal main renal artery and its hilar branches with durable functional results. abstract_id: PUBMED:12239641 Carotid artery reconstruction combined with myocutaneous flap coverage: a complex and durable rescue operation. Carotid reconstruction combined with myocutaneous flap coverage is a rare and challenging operation. We reviewed our experience to define the indications and outcome of this complex procedure. Clinical data and neurologic, wound, and vascular complications of all patients who underwent carotid artery reconstruction combined with myocutaneous flap coverage over a 17-year period were retrospectively reviewed. Results from this procedure indicate that pectoralis major myocutaneous flap is a safe and durable option for wound coverage in patients who undergo carotid endarterectomy, resection, and/or reconstruction for neck malignancy, previous irradiation or graft infection. Late outcome is favorable with low neurologic, vascular, and wound complication rates. abstract_id: PUBMED:33548401 Use of Heparin-Bonded Expanded Polytetrafluoroethylene Grafts for Renal Artery Reconstruction. Background: The purpose of this study was to analyze our initial experience in renal artery reconstruction with heparin-bonded expanded polytetrafluoroethylene (ePTFE) grafts. Methods: The authors retrospectively reviewed the data of consecutive patients who received open renal reconstruction with a heparin-bonded ePTFE graft at our institution between January 2014 and December 2019. Results: A total of 22 renal reconstructions with a heparin-bonded ePTFE graft were performed in 17 consecutive patients. In all cases, renal reconstruction was a concomitant procedure during surgical or endovascular aortic procedures. Postoperative complications within 30 days were observed in 9 (53%) patients, including acute kidney injury (n = 6), pneumonia (n = 1), retrograde type B aortic dissection (n = 1), and lower limb ischemia (n = 1). The 30-day mortality rate was 0%. In a median follow-up period of 32 (19-39) months, all grafts were patent without re-intervention. Six patients with preoperative stage 2 chronic kidney disease progressed to stage 3 during follow-up. No patient required temporary or permanent hemodialysis. One patient died from intestinal ischemia at 23 months after surgery. Conclusions: This study showed that the patency after open renal reconstruction with a heparin-bonded ePTFE graft was excellent, with acceptable renal outcomes, and demonstrates its safety as a concomitant procedure during an aortic procedure. Heparin-bonded ePTFE grafts are a feasible and effective choice for open renal reconstruction in contemporary practice. abstract_id: PUBMED:11413597 Successful ex vivo renal artery reconstruction and renal autotransplantation. Background: The increasing experience with renal allotransplantation has led to continuing development in vascular surgical techniques. These improvements have enabled complex ex vivo renal artery surgery and renal autotransplantation to be performed. The aims of the present study were to describe the results achieved with renal autotransplantation and ex vivo renal artery reconstruction (RAR) at the Newcastle Transplant Unit, John Hunter Hospital, and to review the current indications for such surgery. Methods: A retrospective review was performed of patients who required renal autotransplantation with or without RAR at John Hunter Hospital, between 1991 and 1999. Data were obtained from the Newcastle Transplant Unit and the Medical Record Department of John Hunter Hospital. Results: Two patients required ex vivo RAR and renal autotransplantation for severe fibromuscular dysplasia (FMD) complicated by stenoses and renal artery branch aneurysms. The third patient required autotransplantation for bilateral retroperitoneal fibrosis. There was one postoperative complication of pelviureteric junction obstruction that was treated successfully with a temporary ureteric stent. All patients demonstrated normal graft function and were normotensive on follow up, which ranged from 2.5 to 5 years. Conclusion: The present review confirms the long-term benefits of ex vivo RAR and renal autotransplantation that have been demonstrated by previous studies. In transplant units experienced with this surgery it has been shown to be a successful and durable technique for the treatment of a variety of vascular, urologic and other diseases. abstract_id: PUBMED:1180583 Ex vivo renal artery reconstruction. Ex vivo renal artery reconstruction and autotransplantation is a relatively recent addition to the surgical armamentarium for renal vascular hypertension. Ten consecutive patients were considered for this surgical therapy and form the basis of this communication. The patients were treated by a combination of methods including bilateral ex vivo reconstruction, unilateral in situ and contralateral ex vivo reconstruction, and unilateral ex vivo reconstruction and contralateral nephrectomy. Replacement of the diseased segment of the renal artery in all ex vivo reconstruction consisted of arterial autografts including hypogastric artery, splenic artery, common iliac, and external iliac artery. In the ex vivo reconstruction, the ureter was either left intact or was transected and reconstructed by standard ureterovesicle implantation. After surgery all patients became normotensive without antihypertensive medication. Although this is a relatively small series, the uniform good results in these patients with extensive disease suggest that ex vivo renal artery reconstruction is a safe and effective method of treatment. Thus, it should be more widely applicable, expecially in those patients with renal vascular disease who were previously thought to be inoperable or eligible for nephrectomy only. abstract_id: PUBMED:10522722 Outcome of renal artery reconstruction: analysis of 687 procedures. Objective: To evaluate the short- and long-term results of surgical reconstruction of the renal arteries, the authors review their experience with more than 600 reconstructions performed over a 12-year period. Summary Background Data: Reconstruction of the renal arteries, whether for primary renal indications or concomitantly with aortic reconstruction, has evolved over the past 40 years. There is concern that renal artery reconstructions carry significant rates of mortality and morbidity and may fare poorly compared with less-invasive procedures. Methods: From 1986 to 1998, 687 renal artery reconstructions were performed in 568 patients. Of these, 105 patients had simultaneous bilateral renal artery reconstructions. Fifty-six percent of the patients were male; 11% had diabetes; 35% admitted to smoking at the time of surgery. Mean age was 67 (range, 1 to 92). One hundred fifty-six (23%) were primary procedures and the remainder were adjunctive procedures with aortic reconstructions; 406 were abdominal aortic aneurysms and 125 were aortoiliac occlusive disease. Five hundred procedures were bypasses, 108 were endarterectomies, 72 were reimplantation, and 7 were patch angioplasties. There were 31 surgical deaths (elective and emergent) in the entire group for a mortality rate of 5.5%. Predictors of increased risk of death were patients with aortoiliac occlusive disease and patients undergoing bilateral simultaneous renal artery revascularization. Cause of death was primarily cardiac. Other nonfatal complications included bleeding (nine patients) and wound infection (three patients). There were 9 immediate occlusions (1.3%) and 10 late occlusions (1.5%). Thirty-three patients (4.8%) had temporary worsening of their renal function after surgery. Conclusion: Renal artery revascularization is a safe and durable procedure. It can be performed in selected patients for primary renovascular pathology. It can also be an adjunct to aortic reconstruction with acceptable mortality and morbidity rates. abstract_id: PUBMED:8074403 Simultaneous reconstruction of the abdominal aorta and renal arteries Twenty-six patients underwent simultaneous reconstruction of the abdominal aorta and renal artery on one or both sides, for a total of 31 renal artery repairs. There was no postoperative mortality in the current series. Blood pressure control (with or without antihypertensive treatment) was obtained in 88% of cases. In patients with renal failure, improvement of renal function occurred in 60% of cases. Results obtained in this series show that surgical repair of aorta and renal artery (ies) can be undertaken with low operative risk. It appears to be a safe procedure and gives satisfactory results. Answer: Yes, prosthetic renal artery reconstruction is considered a durable procedure. The analysis of 489 bypass grafts in one study showed that renal artery reconstruction with prosthetic conduit has an acceptable and durable result whether used for primary renal artery reconstruction or concomitant reconstruction with aortic procedures. The secondary patency at 1 and 5 years was reported to be 98% and 96%, respectively (PUBMED:11436085). Another study reported excellent patency after open renal reconstruction with a heparin-bonded expanded polytetrafluoroethylene (ePTFE) graft, with acceptable renal outcomes, demonstrating its safety as a concomitant procedure during an aortic procedure (PUBMED:33548401). Additionally, a review of more than 600 renal artery reconstructions over a 12-year period concluded that renal artery revascularization is a safe and durable procedure, with low rates of immediate and late occlusions (PUBMED:10522722). Furthermore, simultaneous reconstruction of the abdominal aorta and renal arteries has been shown to be a safe procedure with satisfactory results and low operative risk (PUBMED:8074403).
Instruction: Staphylococcal enterotoxin induced IL-5 stimulation as a cofactor in the pathogenesis of atopic disease: the hygiene hypothesis in reverse? Abstracts: abstract_id: PUBMED:12653801 Staphylococcal enterotoxin induced IL-5 stimulation as a cofactor in the pathogenesis of atopic disease: the hygiene hypothesis in reverse? Background: The incidence of Staphylococcus aureus (S. aureus) colonization on the skin of patients with atopic eczema/dermatitis syndrome (AEDS) is approximately 90% and a variety of evidence implicates epidermal staphylococcal infection as a pathogenic factor in atopic dermatitis. However, the mechanism(s) underlying the effects of this organism in the disease process are unclear. The cellular responses of AEDS suffers and asymptomatic atopic individuals to bacterial superantigens (SAg) were investigated in an attempt to elucidate the role of staphylococcal enterotoxin B (SEB) in atopic disease. Methods: Peripheral blood mononuclear cells (PBMC) were isolated from normal nonatopic adults, asymptomatic atopic individuals, patients with active AEDS and patients with active allergic asthma. The cells were cultured for 24 or 96 h with house dust mite (HDM), SEB and phytohaemagluttinin (PHA), and the supernatants were assayed for cytokine levels. Results: Staphylococcal enterotoxin B selectively stimulates the production of interleukin (IL)-5 in AEDS sufferers but not in asymptomatic atopics or nonatopics. Additionally, we observed comparable susceptibility to the IL-5-stimulatory effects of SEB in allergic asthmatics. Conclusions: Given the central role of IL-5-driven eosinophilia in progression from mild atopy to severe disease, these findings provide a plausible mechanism for the AEDS-promoting effects of staphylococcal SAg. Staphylococcal enterotoxin B may also have a similar role in atopic respiratory disease. abstract_id: PUBMED:15564771 Staphylococcal enterotoxin-B-mediated stimulation of interleukin-13 production as a potential aetiologic factor in eczema in infants. Background: Staphylococcus aureus colonization has been found in 80-100% of lesional skin from patients with atopic eczema dermatitis syndrome (AEDS) and is thought to have a role in the pathogenesis of the disease. Furthermore, up to 65% of S. aureus from lesional skin has been shown to produce toxigenic superantigens. Methods: Using a cohort of 11 children under 2 years of age diagnosed with AEDS, we isolated peripheral blood mononuclear cells, cultured them with staphylococcal enterotoxin B (SEB) and phytohaemagglutinin, and assessed the cytokine response profiles. Plasma was also collected for immunoglobulin E analysis. In addition, skin and nasal swabs were taken and cultured to determine the presence of SEB-producing S. aureus by polymerase chain reaction (PCR) and reverse passive latex agglutination. Results: We found a significant increase in the production of the SEB-induced cytokines interleukin (IL)-5 and IL-13 in the patient group when compared with non-atopic, healthy controls. For IL-13, there was almost no overlap in the levels between the groups. However, there was no correlation between SEB-induced IL-13 and disease severity. This difference was not seen when heat-inactivated S. aureus was used to stimulate the cells. Conclusions: IL-13 is an important factor in AEDS development in early childhood, and prophylactic anti-staphylococcal treatment may provide protection from AEDS in atopic individuals. abstract_id: PUBMED:23717008 Potential Immunoinflammatory Role of Staphylococcal Enterotoxin A in Atopic Dermatitis: Immunohistopathological Analysis and in vitro Assay. Background: The underlying mechanism of atopic dermatitis (AD) exacerbated by Staphylococcus aureus has not been established. However, we demonstrated recently that the majority of S. aureus strains colonized in the skin of Korean AD patients carried genes encoding staphylococcal enterotoxin A (SEA) and/or toxic shock syndrome toxin-1 (TSST-1). Objective: To clarify the role of staphylococcal superantigen, SEA in AD. Methods: With the lesional skin of 9 AD patients and normal looking skin of one healthy adult, we examined first the expression of SEA, staphylococcal enterotoxin B (SEB), and TSST-1 using immunohistochemical analysis. In addition, we investigated the effects of SEA on the expression of inflammation-related adhesion molecules and cytokines in human HaCaT keratinocytes and Human Umbilical Vein Endothelial Cells (HUVECs) by reverse transcriptase-polymerase chain reaction (RT-PCR) analysis and enzyme-linked immunosorbent assay. Results: Staphylococcal protein A (SPA) and SEA were detected with increased immunoreactivity in AD patients. However, TSST-1 showed mild-to-moderate immunoreactivity in AD patients, whereas SEB was minimally detected. In the double immunofluorescence investigation, SEA and SPA were well co-localized. SEA induced upregulation of adhesion molecules and elicited inflammatory responses in HaCaT keratinocytes and HUVECs. Conclusion: This study demonstrates the importance of SEA as an immunoinflammatory triggering factor of AD in Koreans. abstract_id: PUBMED:8546480 Staphylococcal enterotoxin B applied on intact normal and intact atopic skin induces dermatitis. Background And Design: Colonization of inflammatory skin diseases with Staphylococcus aureus is a frequent phenomenon and may cause exacerbation of the skin disease. Staphylococcus aureus strains present on atopic dermatitis are capable of releasing staphylococcal enterotoxins, a group of superantigens that are very potent T-cell activators. To determine whether the superantigen staphylococcal enterotoxin B can induce inflammation when applied on the skin, staphylococcal enterotoxin B was applied with and without occlusion on the volar aspect of the skin on the forearm of 10 subjects without skin disease and six subjects with atopic dermatitis of minimal activity and no eczema on the volar aspect of the skin on their forearm. The main outcome measures were clinical rating; determination of the increase of the thickness of the skin-fold; and determination of skin blood flow. Results: Clinically, staphylococcal enterotoxin B induced skin changes of erythema and induration in 10 of 10 healthy volunteer subjects and six of six subjects suffering from atopic dermatitis, while the vehicle induced clinically evident skin changes in only one of 10 healthy subjects and none of six subjects with atopic dermatitis. On day 3 after the application of an occluded patch containing 10 micrograms/cm2 of staphylococcal enterotoxin B in the healthy subjects, the thickness of the skinfold increased 0.47 +/- 0.49 mm (mean +/- SD) (n = 9; P &lt; .02) relative to the increase in the thickness of the skinfold following application of the vehicle. The Doppler laser-measured skin blood flow index had increased from 1.0 +/- 0.4 to 5.3 +/- 3.7 (mean +/- SD) (n = 10; P &lt; .002). On day 3 after the application of occluded patchs containing 10 micrograms/cm2 of staphylococcal enterotoxin B in the subjects suffering from atopic dermatitis, the increase in the thickness of the skinfold increased 0.20 +/- 0.24 mm (n = 6; P, not significant) relative to the increased thickness in the skinfold following application of the vehicle. The Doppler laser-measured skin blood flow index had increased from 1.1 +/- 0.4 to 3.7 +/- 2.2 (n = 6, P, not significant). Three of six subjects suffering from atopic dermatitis experienced a flare of their disease in the elbow flexure ipsilaterally to where the staphylococcal enterotoxin B patch was applied. Conclusions: The superantigen staphylococcal enterotoxin B applied on intact skin from both normal subjects and patients with atopic dermatitis induces an inflammatory reaction. This finding suggests that superantigens released from S aureus present on the skin in inflammatory skin diseases may exacerbate and sustain the inflammation. abstract_id: PUBMED:12542799 Staphylococcal enterotoxin B upregulates fas-mediated apoptosis of peripheral blood mononuclear cells in childhood atopic dermatitis. Staphylococcal infection-producing superantigens, such as staphylococcal enterotoxin B (SEB), are presumed to play an important role of inflammatory processes in atopic dermatitis (AD). The aim of this study was to elucidate the apoptotic response of peripheral blood mononuclear cells (PBMCs) from children with AD. PBMCs from AD children were sampled and cultured with SEB stimulation. Levels of apoptosis and Fas expression were measured using flow cytometry; the soluble Fas ligand (sFasL) was also measured using the enzyme-linked immunosorbent assay method, and the expression of FasL in PBMCs was observed using reverse transcriptase-polymerase chain reaction. There was no difference in the initial levels of apoptosis and Fas expression in precultured PBMCs of AD patients and healthy donors. After culturing for 48 h under SEB stimulation, the apoptosis level and Fas expression were significantly upregulated in the PBMCs from AD children compared with that from the normal controls. In patients, the sFasL was significantly increased, and the expression of FasL was observed in messenger RNA of peripheral monocytes. These results suggest that the Fas/FasL system is involved in the apoptosis induced by SEB in AD, with simultaneous increases in sFasL and expression of FasL. abstract_id: PUBMED:16522472 Staphylococcal enterotoxin B inhibits regulatory T cells by inducing glucocorticoid-induced TNF receptor-related protein ligand on monocytes. Background: Superantigens inhibit naturally occurring CD4+CD25+ regulatory T-cell (nTreg) activity, yet the mechanism for this is unknown. Objective: We tested the hypothesis that staphylococcal enterotoxin B (SEB), a prototypic superantigen, inhibits the ability of nTregs to suppress T-effector cell (Teff) proliferation by an antigen-presenting cell-dependent cell contact mechanism and the induction of glucocorticoid-induced TNF receptor-related protein ligand (GITR-L) on monocytes. Methods: Cell proliferation assays were performed on immunomagnetic-bead separated CD4+CD25+, CD4+CD25- T cells and CD14+ monocytes using anti-CD3, SEB, or a neutralizing anti-GITR-L antibody for stimulation. A cytokine permeable membrane insert separating cells in culture was used to evaluate the role of cell contact. Proliferation was measured by [3H]-thymidine incorporation or a carboxyfluorescein diacetate succinimidyl ester assay. GITR-L expression was determined by RT-PCR and immunostaining. Results: Antigen-presenting cell contact was required for proliferation of nTregs and loss of their ability to suppress Teff proliferation as confirmed by carboxyfluorescein diacetate succinimidyl ester. At baseline, monocytes expressed no GITR-L, but after SEB stimulation, GITR-L gene and protein expression increased significantly over a period of 24 hours (P&lt;.05). When GITR-L activity was neutralized with blocking antibody, as opposed to isotype control, nTregs suppressed Teff proliferation despite the presence of SEB (P&lt;.05). Likewise, loss of GITR-L at 48 hours was associated with restoration of nTreg activity (P&lt;or=.05). Conclusion: Staphylococcal enterotoxin B upregulates GITR-L on monocytes and inhibits nTreg ability to suppress Teff proliferation via a cell contact interaction. Clinical Implications: Prevention of GITR-L induction on monocytes by SEB may be a novel target for control of inflammation in superantigen-driven diseases such as atopic dermatitis. abstract_id: PUBMED:8902655 Role of staphylococcal enterotoxins in pathogenesis of atopic dermatitis: growth and expression of T cell receptor V beta of peripheral blood mononuclear cells stimulated by enterotoxins A and B. In order to investigate the role of staphylococcal enterotoxins in the pathogenesis of dermatitis in atopic patients, the growth and expression of T cell receptor V beta in peripheral blood mononuclear cells (PBMCs) from atopic dermatitis patients induced by stimulation with staphylococcal enterotoxin A (SEA) or staphylococcal enterotoxin B (SEB) were examined. Lymphocyte stimulation tests (LST) using SEA or SEB were performed in atopic dermatitis (AD) patients (n = 10) and normal controls (n = 5). PBMCs from AD patients displayed significantly stronger responses to SEA or SEB than those from the controls. To ascertain further whether SEA acts as a superantigen in atopic dermatitis, the expression of 22 genes in the variable region of the beta chain (V beta) of T cell receptors (TcR) was examined before and after stimulation with SEA by a reverse transcriptase-polymerase chain reaction (RT-PCR). Before stimulation, only weak expression of V beta was observed, and the expression of the various V beta segments was uniform in the normal controls (n = 3). In the AD patients (n = 3), the expression of V beta was enhanced, but was not uniform in 2 out of 3 patients and the pattern of expression was characteristic in each individual. This suggests that V beta expression varies in individual AD patients and displays restricted heterogeneity, reflecting the diversity of the etiology of the disease. After culture of the SEA-stimulated cells, no difference was observed in the expression of TcR V beta segments in the 3 normal controls as compared with that prior to stimulation, but particular V beta segments were intensely expressed in 3 AD patients, displaying distinct patterns (case I: V beta 9, V beta 10, V beta 18; case 2: V beta 6.1-3; case 3: V beta 6.1-3, V beta 18). Many of these V beta segments corresponded with those known to be induced by SEA. These results suggest oligoclonal proliferation of T cells in the peripheral blood of AD patients and high responsiveness in each clone, and since the expression of V beta segment after SEA stimulation was restricted, the actions of staphylococcal enterotoxins as superantigens were suggested. abstract_id: PUBMED:29984392 Rapid and Sensitive Detection of Staphylococcal Enterotoxin B by Recombinant Nanobody Using Phage Display Technology. Staphylococcal enterotoxin B, from Staphylococcus aureus (S. aureus), is one of the most potent bacterial superantigens with profound toxic effects on the immune system. It is associated with food poisoning, toxic shock, atopic dermatitis, asthma, and nasal polyps in humans. The current diagnostic methods for staphylococcal enterotoxin are mainly based on traditional monoclonal antibodies which hardly meet the requirements for clinical applications, and hybridoma clones lose their ability to secrete antibodies during time. The present study investigates the development of a novel, highly specific, low-cost, and sensitive nanobody capable of being used in immunoassays for Staphylococcal enterotoxin B (SEB) detection in suspicious foods. For this purpose, Camelus dromedarius was immunized against SEB toxin. After obtaining acceptable titration, a high-quality phage display nanobody library (4 × 1010 PFU/ml) was constructed. High-affinity SEB-specific nanobodies were retrieved from constructed libraries. After phage rescue and five round of biopanning, clone screening was performed by phage ELISA. Recombinant nanobodies which were expressed from C7 and C21 clone showed the highest affinity for SEB. The presence of high quality and pure nanobody band at ~ 15 kDa was confirmed by SDS-PAGE and western blotting. The affinity constant which was measured by ELISA was calculated to be around 10-9 M. The results suggest that the proposed detection method by nanobodies is an alternative diagnostic tool enabling a rapid, inexpensive, and specific detection of the SEB. abstract_id: PUBMED:10678719 Infants colonized with enterotoxin-producing staphylococci at 3 months display a decreased frequency of interferon-gamma-producing CD45RO lymphocytes upon stimulation with staphylococcal enterotoxin A at birth but not at 6 months of age. The aim of the study was to elucidate the relationship between the cytokine response to staphylococcal enterotoxin A (SEA) at birth and subsequent staphylococcal colonization in the first months of life. In a cohort of 45 newborns, cord blood lymphocytes were stimulated with SEA (10 ng/ml) in vitro, re-stimulated with PMA (phorbol myristate acetate) and ionomycin at day 3 and assessed for CD45RO expression and cytokine generation by flow cytometry. The infants were classified into three groups according to nasal staphylococcal colonization and enterotoxin generation at 3 months: There were 16 infants with either no colonization or non-enterotoxin-producing staphylococci, 16 infants with enterotoxins B, C, D and E, and 13 infants colonized with SEA-producing staphylococci. At birth, the group without subsequent colonization displayed a significantly higher frequency of CD45RO-positive interferon-gamma-producing cells (1.7%; range 0.0-9.3%) in comparison to the SEA-positive group (0.1%; range 0.0-0.4%) and also to the group positive for other enterotoxins (0.50%; range 0.0-2.5%). Comparable but less pronounced results were found for interleukin-5 but not for interleukins 2 and 4. At 6 months, no differences in cytokine generation were detected between the three groups. The results provide evidence that a non-specific immunologic immaturity at birth is a risk factor for early bacterial colonization. Furthermore, it is remarkable that this immaturity is similar to that seen in infants destined to be atopic with respect to disequilibrium of interferon-gamma to interleukin-4 generation. Thus the link between early staphylococcal colonization and subsequent atopy requires further investigation. abstract_id: PUBMED:20864149 Staphylococcal exotoxins are strong inducers of IL-22: A potential role in atopic dermatitis. Background: Patients with atopic dermatitis (AD) and psoriasis are frequently colonized with Staphylococcus aureus that produces staphylococcal enterotoxin B (SEB) and α-toxin. In patients with AD, S aureus colonization is positively correlated with the severity of their eczema. Moreover, IL-22-producing cells have been shown to accumulate in AD skin and to correlate with disease severity. Objective: To assess IL-22 production in response to SEB and sublytic α-toxin stimulation in patients with AD and psoriasis compared with healthy controls. Methods: IL-22 induction was investigated in PBMCs, T cells, and autologous cocultures of keratinocytes and T cells on SEB and α-toxin stimulation in a time-dependent and dose-dependent manner at the mRNA and protein (ELISA and flow cytometry) level. Anti-IL-1 receptor or anti-IL-6 antibodies were used in blocking experiments. Results: Staphylococcal enterotoxin B and sublytic α-toxin concentrations induced IL-22 production in PBMCs and isolated CD4(+) T cells. IL-22 secretion was enhanced by α-toxin stimulation in autologous cocultures of keratinocytes and T cells. In T cells and PBMCs from patients with AD, IL-22 secretion was significantly enhanced on α-toxin stimulation compared with patients with psoriasis and healthy controls. Conclusion: Increased IL-22 secretion induced by staphylococcal exotoxins in the skin partially explains how skin colonization and infection with S aureus can contribute to chronic skin inflammation in AD. Answer: The hygiene hypothesis suggests that a lack of early childhood exposure to infectious agents, symbiotic microorganisms, and parasites increases susceptibility to allergic diseases by suppressing the natural development of the immune system. In contrast, the findings related to Staphylococcal enterotoxin B (SEB) and its role in atopic diseases might be considered as a reverse scenario to the hygiene hypothesis. Staphylococcal enterotoxin B has been shown to selectively stimulate the production of interleukin (IL)-5 in sufferers of atopic eczema/dermatitis syndrome (AEDS) but not in asymptomatic atopics or nonatopics. This effect was also observed in patients with allergic asthma, suggesting a potential role for SEB in promoting atopic diseases (PUBMED:12653801). IL-5 is known for its central role in driving eosinophilia, which is a characteristic feature of atopic diseases, indicating that SEB may contribute to the progression from mild atopy to more severe disease. Furthermore, in infants with AEDS, a significant increase in the production of SEB-induced cytokines IL-5 and IL-13 was observed when compared with non-atopic, healthy controls. IL-13 is an important factor in the development of AEDS in early childhood, and the study suggests that prophylactic anti-staphylococcal treatment may provide protection from AEDS in atopic individuals (PUBMED:15564771). In addition, staphylococcal enterotoxin A (SEA) has been implicated as an immunoinflammatory triggering factor in atopic dermatitis (AD), particularly in Korean patients. SEA was found to induce upregulation of adhesion molecules and elicit inflammatory responses in human keratinocytes and endothelial cells, further supporting the role of staphylococcal superantigens in the exacerbation of AD (PUBMED:23717008). These findings collectively suggest that rather than a lack of microbial exposure, the presence of certain microbial factors, such as staphylococcal enterotoxins, may act as cofactors in the pathogenesis of atopic diseases. This could be seen as a "reverse" of the hygiene hypothesis, where microbial exposure leads to an increased risk of atopic disease through the stimulation of specific immune responses, such as IL-5 and IL-13 production, that are associated with allergic inflammation and disease progression , rather than preventing it. This is in contrast to the original hygiene hypothesis, which posits that reduced exposure to microbes in early life leads to a higher incidence of allergic diseases. Additionally, the application of SEB on intact skin has been shown to induce an inflammatory reaction in both healthy subjects and patients with atopic dermatitis, suggesting that superantigens released from S. aureus present on the skin may exacerbate and sustain inflammation in atopic conditions (PUBMED:8546480). Moreover, SEB has been found to upregulate Fas-mediated apoptosis of peripheral blood mononuclear cells in children with AD, indicating that SEB may influence the inflammatory process in AD through the modulation of cell death pathways (PUBMED:12542799). SEB also inhibits regulatory T cells by inducing glucocorticoid-induced TNF receptor-related protein ligand on monocytes, which could further contribute to the inflammatory response in atopic diseases (PUBMED:16522472). The role of staphylococcal enterotoxins in the pathogenesis of atopic dermatitis is further supported by the observation that PBMCs from AD patients show stronger responses to staphylococcal enterotoxins than those from controls, and that the expression of T cell receptor V beta segments after stimulation with SEA is restricted, suggesting the actions of staphylococcal enterotoxins as superantigens (PUBMED:8902655). In summary, the evidence suggests that staphylococcal enterotoxins, particularly SEB and SEA, may act as cofactors in the pathogenesis of atopic diseases by inducing specific immune responses that promote inflammation and disease progression. This mechanism stands in contrast to the hygiene hypothesis, which associates a lack of microbial exposure with an increased risk of atopic diseases.
Instruction: Graves' disease and radioiodine therapy. Is success of ablation dependent on the achieved dose above 200 Gy? Abstracts: abstract_id: PUBMED:18278207 Graves' disease and radioiodine therapy. Is success of ablation dependent on the achieved dose above 200 Gy? Aim: This study was performed to determine the results of ablative radioiodine therapy (RIT) when the achieved dose in the thyroid was above 200 Gy and to characterize predictive factors for treatment outcome. Patients, Methods: A total of 571 consecutive patients were observed for 12 months between July 2001 and June 2004. Inclusion criteria were a confirmed diagnosis Graves' disease, compensation of hyperthyroidism and withdrawal of antithyroid drugs two days before preliminary radioiodine-testing and RIT. The intended dose was 250 Gy and the therapeutically achieved dose was calculated from serial uptake measurements. The end-point measure was thyroid function 12 months after RIT; success was defined as elimination of hyperthyroidism. The relation between success rate and the achieved dose, thyroid volume, age and sex of patients, TSH- and TRAb-values and presence of ophthalmopathy was analysed. Results: Relief from hyperthyroidism was achieved in 96% of patients who received more than 200 Gy, even for thyroid volumes &gt;40 ml. The success of ablative RIT was not influenced by age or sex of patients, or by TSH- or TRAb values or concomitant ophthalmopathy. The mean achieved dose in the thyroid was 298 Gy with a standard deviation of 74.6 Gy. Conclusion: To achieve a dose of over 200 Gy with the above standard deviation, we recommend calculating an intended dose of 250 Gy and using a dosimetric approach with early and late uptake values in the radioiodine test, to allow early therapeutic intervention should the posttherapeutic thyroid dose fall unexpectedly below 200 Gy. abstract_id: PUBMED:18690374 Graves' disease and radioiodine therapy. Is success of ablation dependent on the choice of thyreostatic medication? Aim: This study was performed to analyse the impact of the choice of antithyroid drugs (ATD) on the outcome of ablative radioiodine therapy (RIT) in patients with Graves' disease. Patients, Material, Methods: A total of 571 consecutive patients were observed for 12 months after RIT between July 2001 and June 2004. Inclusion criteria were the confirmed diagnosis of Graves' disease, compensation of hyperthyroidism and withdrawal of ATD two days before preliminary radioiodine-testing and RIT. The intended dose of 250 Gy was calculated from the results of the radioiodine test and the therapeutically achieved dose was measured by serial uptake measurements. The end-point measure was thyroid function 12 months after RIT; success was defined as elimination of hyperthyroidism. The pretreatment ATD was retrospectively correlated with the results achieved. Results: Relief from hyperthyroidism was achieved in 96% of patients. 472 patients were treated with carbimazole or methimazole (CMI) and 61 with propylthiouracil (PTU). 38 patients had no thyrostatic drugs (ND) prior to RIT. The success rate was equal in all groups (CMI 451/472; PTU 61/61; ND 37/38; p = 0.22). Conclusion: Thyrostatic treatment with PTU achieves excellent results in ablative RIT, using an accurate dosimetric approach with an achieved post-therapeutic dose of more than 200 Gy. abstract_id: PUBMED:25285287 Radioiodine therapy in patients with Graves' disease and the effects of prior carbimazole therapy. Unlabelled: The use of radioiodine as the first line of treatment in Graves' disease is restricted in India because of its limited availability and an unrealistic risk perception associated with it. Additionally, the effectiveness of radioiodine ablation in Graves' disease is influenced by many factors. Prior medical antithyroid therapy is one such important factor. Aims: To analyze the efficacy of low dose radioiodine therapy (5 mCi) in treatment of naive patients of Graves' disease in comparison to that in which it was already primed with an antithyroid drug, carbimazole. Settings And Design: A non-randomized, interventional study conducted in the Department of Medicine and Endocrinology of a tertiary care institute in South India. Materials And Methods: The study had two groups; Group A (36 treatment naive, uncomplicated Graves' disease patients) and B (34 Graves' disease patients on carbimazole prior to radioiodine therapy). Both groups had baseline clinical, biochemical evaluation and were reassessed at 3 and 6 months for evaluating the clinical status for possible documentation of cure. Results: The cure rate was 61.1% in drug naive group and 58.8% in pretreated group at 6 months following radioiodine (P = 0.845). Higher baseline 999m technicium (99m Tc) uptake, male gender, BMI and higher baseline free thyroxine (fT4) level predicted treatment failure following radioiodine therapy. Conclusions: Administration of carbimazole prior to low dose radioiodine therapy does not alter the efficacy of radioiodine. Low fixed dose (5 mCi) of radioactive iodine may be a safe and effective primary therapeutic option in Graves' disease patients pretreated with antithyroid drugs. abstract_id: PUBMED:26772754 The effect of prior antithyroid drug use on delaying remission in high uptake Graves' disease following radioiodine ablation. Antithyroid drugs (ATDs) have been shown to attenuate the effectiveness of radioiodine (radioiodine ablation, RIA) therapy in Graves' disease. We undertook a study to look at the impact of iodine uptakes on the outcome of (131)I therapy. To determine the effect of prior ATD use on the duration of time to achieve cure in patients with high vs intermediate uptake Graves' disease who received a fixed dose (15 mCi) of (131)I radioiodine. In a retrospective study of patients with Graves' disease, 475 patients who underwent RIA were followed-up on a two-monthly basis with thyroid function tests. Of the 123 patients with a documented preablation RAIU and consistent follow-up it was observed that 40 patients had an intermediate RAIU (10-30%) and 83 subjects had a distinctly increased uptake (&gt;30%). Successful cure was defined as the elimination of thyrotoxicosis in the form of low free thyroxin and rising TSH levels. When a standard dose of 15 mCi (131)I was administered, a cure rate of 93% was achieved. The median duration of time to cure (TC) was 129 days. Surprisingly, a direct proportional linear relationship (R(2)=0.92) was established between time to cure and radioiodine uptake (TC&gt; 3 0%=172days, TC10 - 3 0%=105 days, P&lt;0.001). Patients who used ATD medications took a proportionately longer duration to achieve remission (TCNO ATD=102days, TCATD=253days, P&lt;0.001). The effect of prior ATD therapy in delaying remission was amplified in the subset of patients with higher uptakes (TC&gt; 3 0% + ATD=310days, TC&gt; 3 0% + NO ATD=102days, P&lt;0.001) compared to those with the intermediate uptakes (TC10 - 3 0% + ATD=126 days, TC10 - 3 0% + NO ATD=99 days, P&lt;0.001). RIA, using a dose of 15 mCi achieved a high cure rate. Higher uptakes predicted longer time to achieve remission, with prior ATD use amplifying this effect. abstract_id: PUBMED:30539323 Impact of different approaches to calculation of treatment activities on achieved doses in radioiodine therapy of benign thyroid diseases. Purpose: Radioiodine has been used for the treatment of benign thyroid diseases for over 70 years. However, internationally, there is no common standard for pretherapeutic dosimetry to optimally define the individual therapy activity. Here, we analyze how absorbed tissue doses are influenced by different approaches to pretherapeutic activity calculation of varying complexity. Methods: Pretherapeutic determination of treatment activity was retrospectively recalculated in 666 patients who had undergone radioiodine therapy for benign thyroid diseases (Graves' disease, non-toxic goiter, and uni- and multinodular goiter). Approaches considering none, some, or all of a set of individual factors, including target volume, maximum radioiodine uptake, and effective half-life, were applied. Assuming individually stable radioiodine kinetics, which had been monitored twice a day under therapy, hypothetically achieved tissue doses based on hypothetically administered activities resulting from the different methods of activity calculation were compared to intended target doses. Results: The Marinelli formula yields the smallest deviations of hypothetically achieved doses from intended target doses. Approaches taking individual target volume into consideration perform better than fixed therapy activities, which lead to high variances in achieved doses and high deviations of hypothetically achieved doses from intended target doses. Conclusion: Elaborate pretherapeutic dose planning, taking individual radioiodine uptake, half-life, and target volume into consideration, should be used whenever possible. The use of disease-specific fixed activities cannot be recommended. Deviations of achieved tissue doses from target doses can already be significantly lowered by application of volume-adapted treatment activities if more elaborate means are not available. abstract_id: PUBMED:11373933 Characterization of therapy failures in radioiodine therapy of Graves' disease without simultaneous antithyroid agents Unlabelled: Aim of this study was a characterization of radioiodine therapy (RIT) failures in Graves' disease without simultaneous carbimazole. Method: 226 patients with a confirmed diagnosis of Graves' disease received 686.8 +/- 376.4 MBq of iodine-131 orally for thyroid ablation. Target dose was 250 Gy. All patients were followed up for 6 months. Therapy failures were compared with successes regarding possible influencing variables initial thyroid volume, thyroid function, immune activity (TRAb), I-131 uptake, effective half-life, absorbed energy dose, age and gender. Results: 212 of 226 patients (93.8%) were treated successfully, 14 (6.2%) showed a hyperthyroidism relapse within 6 months which required a second radioiodine therapy. A success rate of 92.5% (62/67) could also be achieved with 67 patients who were hyperthyroid at the time of RIT. Compared to the therapy successes, the 14 failures achieved significantly lower absorbed doses (223.8 +/- 76.6 Gy vs. 285.2 +/- 82.1 Gy, p &lt; 0.005), but with no significant differences regarding age, thyroid volume, function or TRAb (all p &gt; 0.2). Of the 14 failures, n = 8 reached an absorbed dose &lt; 200 Gy and n = 1 a dose &lt; 250 Gy, although 5 of the failures reached an absorbed dose of &gt; 250 Gy. Stepwise logistic regression revealed only absorbed energy dose as a variable significantly influencing therapy success (p &lt; 0.005), but no influence of initial thyroid volume, function, TRAb value, age (all p &gt; 0.2) or gender (p = 0.13). Two-tailed Fisher's exact test showed no significant influence of gender on success rates (failures/successes: male 1/36, female 13/176, p = 0.48). Conclusions: Except for the absorbed energy dose, no other significant variable influencing the outcome of radioiodine therapy in Graves' disease without simultaneous carbimazole could be found. It should be noted, though, that 5 therapy failures (2.2%) reached an absorbed energy dose of &gt; 250 Gy. abstract_id: PUBMED:32403138 Improving treatment outcomes for Graves' disease patients with inactive forms of Graves' orbitopathy through an increased dose of radioiodine therapy. Introduction: While surgical treatment is preferred for Graves' disease with active forms of GO, there are various concepts for treating inactive forms of GO. The goal of radioiodine therapy is to resolve immunogenic hyperthyroidism by damaging the thyroid cells.The effects of the radioiodine dose on an associated inactive GO remain unclear, however. Methodology: We conducted a retrospective analysis of 536 patients who received first-time radioiodine therapy to treat Graves' hyperthyroidism. Patients without GO always received 200 Gy of iodine-131. Before the introduction of a differentiated treatment concept, patients with GO also received 200 Gy, while afterwards they received 300 Gy. For further analysis, we formed three patient groups based on GO diagnosis and administered radiation dose and compared their results. The main research question focused on the effect of an increased dose on Graves' orbitopathy. The sub-questions addressed the resolution rate achieved with the higher dose as well as the development of GO in patients who received radioiodine therapy. Results: The results show that GO symptoms were improved after radioiodine treatment in 68.5 % of patients treated with 300 Gy but only in 47.5 % of the patients treated with 200 Gy (p = 0.003). While in the 300 Gy group, hyperthyroidism was resolved in 93.2 % of patients, this was achieved in only 68.8 % of patients in the 200 Gy group (p &lt;/= 0.001). Discussion: Especially with an inactive form of GO profit from their hyperthyroidism being quickly and sufficiently resolved. This is achieved significantly better by administering 300 Gy instead of 200 Gy. For this reason, data analysis supports a differentiated dose concept that provides 300 Gy for patients with GO and 200 Gy for patients without GO. abstract_id: PUBMED:28856610 Use of low-dose radioiodine ablation for Graves' orbitopathy: results of a pilot, perspective study in a small series of patients. Objective: Elimination of thyroid antigens by total thyroid ablation (TTA), namely, thyroidectomy followed by radioiodine, may be beneficial for Graves' Orbitopathy (GO). TTA is usually performed with a 131I dose of 30 mCi. In Italy, this dose must be followed by a 24-h protected hospitalization, with increase in the waiting lists. In contrast, a 15 mCi dose can be given without hospitalization and with lower costs. Here, we investigated whether a lower dose of radioiodine can be used to ablate thyroid remnants in patients with GO, after thyroidectomy. Methods: The study was performed in two small groups of consecutive thyroidectomized patients (six patients per group) with Graves' hyperthyroidism and GO. Patients underwent ablation with either 15 or 30 mCi of 131I following treatment with recombinant human TSH (rhTSH). The primary outcome was rhTSH-stimulated serum thyroglobulin (Tg) at 6 months. The secondary outcome was baseline Tg at 6 months. Results: Baseline Tg and rhTSH-stimulated Tg after at 6 months did not differ between two groups, suggesting a similar extent of ablation. rhTSH-stimulated Tg was reduced significantly compared with rhTSH-stimulated Tg at ablation in both groups. GO outcome following treatment with intravenous glucocorticoids did not differ between the two groups. Conclusions: Our findings may provide a preliminary basis for the use of a 15 mCi dose of radioiodine upon rhTSH stimulation in thyroidectomized patients with Graves' hyperthyroidism and GO. abstract_id: PUBMED:22942775 Radioiodine thyroid ablation in graves' hyperthyroidism: merits and pitfalls. Ablative approaches using radioiodine are increasingly proposed for the treatment of Graves' disease (GD) but their ophthalmologic and biological autoimmune responses remain controversial and data concerning clinical and biochemical outcomes are limited. The aim of this study was to evaluate thyroid function, TSH-receptor antibodies (TRAb) and Graves' ophthalmopathy (GO) occurrence after radioiodine thyroid ablation in GD. We reviewed 162 patients treated for GD by iodine-131 ((131)I) with doses ranging from 370 to 740 MBq, adjusted to thyroid uptake and sex, over a 6-year period in a tertiary referral center. Collected data were compared for outcomes, including effectiveness of radioiodine therapy (RIT) as primary endpoint, evolution of TRAb, and occurrence of GO as secondary endpoints. The success rate was 88.3% within the first 6 months after the treatment. The RIT failure was increased in the presence of goiter (adjusted odds ratio = 4.1, 95% confidence interval 1.4-12.0, P = 0.010). The TRAb values regressed with time (r = -0.147; P = 0.042) and patients with a favorable outcome had a lower TRAb value (6.5 ± 16.4 U/L) than those with treatment failure (23.7 ± 24.2 U/L, P &lt; 0.001). At the final status, 48.1% of patients achieved normalization of serum TRAb. GO occurred for the first time in 5 patients (3.7%) who were successfully cured for hyperthyroidism but developed early and prolonged period of hypothyroidism in the context of antithyroid drugs (ATD) intolerance (P = 0.003) and high TRAb level (P = 0.012). On the basis the results of this study we conclude that ablative RIT is effective in eradicating Graves' hyperthyroidism but may be accompanied by GO occurrence, particularly in patients with early hypothyroidism and high pretreatment TRAb and/or ATD intolerance. In these patients, we recommend an early introduction of LT4 to reduce the duration and the degree of the radioiodine-induced hypothyroidism. abstract_id: PUBMED:25586901 Combination of ultrasound guided percutaneous microwave ablation and radioiodine therapy in benign thyroid diseases. A suitable method to reduce the 131I activity and hospitalization time? Aim: Goiters and thyroid nodules are an ongoing problem in healthcare. There has not been any treatment of goiters and thyroid nodules based on the combined therapy of microwave ablation (MWA) and radioiodine therapy (RIT) until now. In this study the potential benefit of a combined therapy versus single RIT is evaluated in order to achieve improvements concerning ¹³¹I-dose and hospitalization time. Patients, Material, Methods: Ten patients with goiter and benign thyroid nodules or Graves' disease were included. Pre-ablation assessments included sonographical imaging, functional imaging with 99mTc and FNAB to collect data of nodules and total thyroid volume and to exclude malignancy. Prior to treatment, radioiodine uptake test was performed. MWA was operated under local anesthesia with a system working in a wavelength field 902-928 MHz. Post-MWA, thyroid volume was recalculated ultrasonically. Due to reduced vital volume, changes of ¹³¹I-dose and hospitalization time could be monitored. Results: Mean absolute thyroid volume reduction by MWA before applying RIT was 22 ± 11 ml, meaning a relative reduction of 24 ± 6% (p &lt; 0.05). Thereby, administered activity could be reduced by 393 ± 188 MBq using the combined therapy, reflecting a relative reduction of 24 ± 6% (p &lt; 0.05). Additionally, mean hospitalization time was decreased by 2.1 ± 0.8 days using MWA prior to RIT, implying a relative reduction of 28 ± 6% (p &lt; 0.05). Conclusion: Depending on ablated volume by MWA, RIT-monotherapy requires on average 31.2% more ¹³¹I-activity than the combined therapy. The combined therapy remarkably decreases ¹³¹I-dose and hospitalization time. The combined MWA and RIT therapy is a considerable, effective and safer alternative to surgery for the treatment of very large benign nodular goiters. Answer: The success of ablative radioiodine therapy (RIT) for Graves' disease appears to be dependent on achieving a dose above 200 Gy. A study found that relief from hyperthyroidism was achieved in 96% of patients who received more than 200 Gy, even for thyroid volumes greater than 40 ml. The success of ablative RIT was not influenced by age, sex, TSH- or TRAb values, or concomitant ophthalmopathy. The mean achieved dose in the thyroid was 298 Gy with a standard deviation of 74.6 Gy. To ensure a dose of over 200 Gy, the study recommended calculating an intended dose of 250 Gy and using a dosimetric approach with early and late uptake values in the radioiodine test (PUBMED:18278207). Another study characterized therapy failures in RIT of Graves' disease without simultaneous antithyroid agents and found that the 14 failures achieved significantly lower absorbed doses compared to the therapy successes. The study concluded that except for the absorbed energy dose, no other significant variable influencing the outcome of RIT in Graves' disease without simultaneous carbimazole could be found (PUBMED:11373933). Furthermore, a retrospective analysis of patients who received first-time radioiodine therapy to treat Graves' hyperthyroidism showed that GO symptoms were improved in a higher percentage of patients treated with 300 Gy compared to those treated with 200 Gy. The study suggested that especially patients with an inactive form of Graves' orbitopathy (GO) benefit from their hyperthyroidism being quickly and sufficiently resolved with a higher dose of 300 Gy (PUBMED:32403138). In conclusion, the success of ablative RIT for Graves' disease is dependent on achieving a dose above 200 Gy, with higher doses potentially leading to better outcomes in terms of relief from hyperthyroidism and improvement of GO symptoms.
Instruction: Are a Healthy Diet and Physical Activity Synergistically Associated with Cognitive Functioning in Older Adults? Abstracts: abstract_id: PUBMED:27102791 Are a Healthy Diet and Physical Activity Synergistically Associated with Cognitive Functioning in Older Adults? Objectives: Previous research has demonstrated that being both physically active and adhering a healthy diet is associated with improved cognitive functioning; however, it remains unclear whether these factors act synergistically. We investigated the synergistic association of a healthy diet and being physically active with cognitive functioning. Design: Cross-sectional study. Setting And Participants: Data from the Longitudinal Aging Study Amsterdam (LASA) were used. We analyzed data from 2,165 community dwelling adults who were aged 55-85 years, 56% of whom were female. Cognitive functioning was assessed by the Mini-Mental State Examination (MMSE), an MMSE score of &gt;26 indicates good cognitive functioning. Physical activity was assessed by the LASA Physical Activity Questionnaire and was considered sufficient if the person engaged in moderately intense physical activity ≥ 20 min/day. A healthy diet score was based on the intake of fruit, vegetables and fish. Each of the food groups was assigned a score that ranged from 1 (well below the Dutch guideline for a healthy diet) to 4 (well above the Dutch guideline for a healthy diet), and the scores were aggregated to determine a healthy diet (healthy ≥ 9 points). Multiple logistic and linear regression analyses were used to examine the (synergistic) association among physical activity, a healthy diet and cognitive functioning. All analyses were adjusted for potential chronic diseases and lifestyle confounders. Results: Of all of the participants, 25% were diagnosed with a cognitive impairment (MMSE ≤26), 80% were physically active and 41% had a healthy diet. Sixty three percent of the participants both adhered to a healthy diet and were physically active. Sufficient daily physical activity (OR=2.545 p&lt;.001) and adherence to a healthy diet (OR=1.766 p=.002) were associated with good cognitive functioning. After adjusting for confounding factors, sufficient physical activity was not significantly related to cognitive functioning (p=.163); however adherence to a healthy diet remained significantly associated with good cognitive functioning (p=.017). No interaction among sufficient physical activity, healthy diet adherence and good cognitive functioning was observed (crude: p=.401, adjusted: p=.216). Conclusion: The results of this cross-sectional study indicate that adherence to a healthy diet is inde-pendently related to cognitive functioning. Being physically active does not modify this association. Furthermore, these two lifestyle factors do not synergistically relate to cognitive functioning. abstract_id: PUBMED:28710144 The Impact of the Mediterranean Diet on the Cognitive Functioning of Healthy Older Adults: A Systematic Review and Meta-Analysis. Evidence from epidemiologic studies suggests a relation between the Mediterranean diet (MeDi) and cognitive function, but results are inconsistent. Prior reviews have not provided pooled data from meta-analysis of longitudinal studies and randomized controlled trials (RCTs), or they included younger adult participants. This systematic review and meta-analysis examines the impact of the MeDi on the cognitive functioning of healthy older adults. Fifteen cohort studies with 41,492 participants and 2 RCTs with 309 and 162 participants in intervention and control groups, respectively, were included. The primary outcome of interest was cognitive function, divided into domains of memory and executive function. Meta-analysis of cohort studies revealed a significant association between MeDi and older adults' episodic memory (n = 25,369, r = 0.01, P = 0.03) and global cognition (n = 41,492, r = 0.05, P ≤ 0.001), but not working memory (n = 1487, r = 0.007, P = 0.93) or semantic memory (n = 1487, r = 0.08, P = 0.28). Meta-analysis of RCTs revealed that compared with controls, the MeDi improved delayed recall (n = 429, P = 0.01), working memory (n = 566, P = 0.03), and global cognition (n = 429, P = 0.047), but not episodic memory (n = 566, P = 0.15), immediate recall (n = 566, P = 0.17), paired associates (n = 429, P = 0.20), attention (n = 566, P = 0.69), processing speed (n = 566, P = 0.35), or verbal fluency (n = 566, P = 0.12). The strongest evidence suggests a beneficial effect of the MeDi on older adults' global cognition. This article discusses the influence of study design and components of the MeDi on cognitive function and considers possible mechanisms. abstract_id: PUBMED:29682228 Physical activity and depression in older adults with and without cognitive impairment. Low physical activity and depression may be related to cognitive impairment in the elderly. Objetive: To determine depression and physical activity (PA) among older adults with and without cognitive impairment. Methods: 156 older adults, both males and females, aged ≥60 years, were asked to complete the Thai Mini-Mental State Examination (Thai-MMSE), a global cognitive impairment screening tool. Seventy-eight older adults with cognitive impairment and 78 older adults without cognitive impairment were then separately administered two questionnaires (i.e., the Thai Geriatric Depression Scale; TGDS and Global Physical Activity Questionnaire; GPAQ). Logistic regression analysis was used to determine the risk of developing cognitive impairment in the groups of older individuals with and without cognitive impairment. Results: A cross-sectional study of elderly with a mean age of 74.47 ± 8.14 years was conducted. There were significant differences on the depression scale and in PA between older adults with and without cognitive impairment. Further, participants with low PA and high level of depressive symptoms had an increased risk of cognitive impairment (Odds ratio = 4.808 and 3.298, respectively). Conclusion: Significant differences were noted in PA and on depression scales between older adults with and without cognitive impairment. Therefore, increased PA and decreased depressive symptoms (i.e., having psychological support) are suggested to reduce the risks of cognitive impairment in older adults. abstract_id: PUBMED:36816402 Frailty is associated with worse cognitive functioning in older adults. Introduction: Frailty and impaired cognitive functioning often co-occur in older adults and are associated with adverse health outcomes. However, their relationship is unclear. This study sought to examine the association of frailty status with cognitive functioning in older adults. Method: The study population consisted of 2,296 older adults aged ≥60 from the National Health and Nutrition Examination Survey 2011-2014. Frailty status was measured based on the Fried Phenotype and the participants were categorized into three groups- robust, pre-frailty, and frailty. Cognitive functioning was measured using the Consortium to Establish a Registry for Alzheimer's Disease Word Learning subtest (CERAD-WL) immediate and delayed recall tests, the Animal Fluency test (AFT), and the Digit Symbol Substitution Test (DSST). Test-specific and global cognition z-scores were calculated. Multinomial linear regression models were constructed to examine the association between frailty status (reference: robust) and test-specific and global cognition z-scores. Multiple linear regression models were used to examine the relationship between the number of frailty dimensions and test-specific and global cognition z-scores. All models controlled for age, race/ethnicity, education, total cholesterol level, and systolic blood pressure. Results: About half of the participants (median age 68 years) were female (49.9%) and non-Hispanic White (48.7%). A quarter (23.3%) of the participants completed some college and above. Multinominal linear regression showed that compared with participants who were robust, those with frailty had worse DSST [β = -0.234, 95% confidence interval (CI): -0.391, -0.078, P = 0.003] and global cognition z scores (β = -0.129, 95% CI -0.233, -0.025, P = 0.02). Multiple linear regression model showed that the number of frailty dimensions was significantly associated with decreased the DSST (β = -0.065, 95% CI -0.103, -0.026, P = 0.001) and global cognition z-scores (β= -0.034, 95% CI -0.06, -0.009, P = 0.009). Conclusion: Frailty is associated with worse processing speed, sustained attention, working memory, and global cognition in older adults. Prevention and treatment of frailty in older adults may help protect their cognitive functioning. Further, clinicians should consider assessing cognitive functioning, especially processing speed, sustained attention, and working memory, among frail older patients, which may allow early identification and interventions of cognitive impairment. abstract_id: PUBMED:37924210 A comparative study of the 2D- and 3D-based skeleton avatar technology for assessing physical activity and functioning among healthy older adults. Background: Maintaining physical activity (PA) and functioning (mobility, balance) is essential for older adults' well-being and quality of life. However, current methods (functional tests, self-reports) and available techniques (accelerometers, sensors, advanced movement analysis systems) for assessing physical activity and functioning have shown to be less reliable, time- and resource-consuming with limited routine usage in clinical practice. There is a need to simplify the assessment of physical activity and functioning among older adults both in health care and clinical studies. This work presents a study on using Skeleton Avatar Technology (SAT) for this assessment. SAT analyzes human movement videos using artificial intelligence (AI). The study compares handy SAT based on 2D camera technology (2D SAT) with previously studied 3D SAT for assessing physical activity and functioning in older adults. Objective: To explore whether 2D SAT yields accurate results in physical activity and functioning assessment in healthy older adults, statistically compared to the accuracy of 3D SAT. Method: The mobile pose estimation model provided by Tensorflow was used to extract 2D skeletons from the video recordings of functional test movements. Deep neural networks were used to predict the outcomes of functional tests (FT), expert-based movement quality assessment (EA), accelerometer-based assessments (AC), and self-assessments of PA (SA). To compare the accuracy with 3D SAT models, statistical analysis was used to test whether the difference in the predictions between 2D and 3D models is significant or not. Results: Overall, the accuracy of 2D SAT is lower than 3D SAT in predicting FTs and EA. 2D SAT was able to predict AC with 7% Mean Absolute Error (MAE), and self-assessed PA (SA) with 16% MAE. On average MAE was 4% higher for 2D than for 3D SAT. There was no significant difference found between the 2D and the 3D model for AC and for two FTs (30 seconds chair stand test, 30sCST and Timed up and go, TUG). A significant difference was found for the 2D- and 3D-model of another FT (4-stage balance test, 4SBT). Conclusion: Altogether, the results show that handy 2D SAT might be used for assessing physical activity in older adults without a significant loss of accuracy compared to time-consuming standard tests and to bulky 3D SAT-based assessments. However, the accuracy of 2D SAT in assessing physical functioning should be improved. Taken together, this study shows promising results to use 2D SAT for assessing physical activity in healthy older adults in future clinical studies and clinical practice. abstract_id: PUBMED:29876828 Are changes in pain, cognitive appraisals and coping strategies associated with changes in physical functioning in older adults with joint pain and chronic diseases? Background: As the population ages, the number of people with chronic diseases increases. Frequently, older people suffer from joint pain together with other chronic diseases, which can lead to decreased physical functioning. Aims: To investigate the associations of the changes in cognitive appraisals, coping strategies and pain with the change in physical functioning in older people, who have chronic pain and chronic diseases. Methods: Elderly persons (n = 407, mean age 77 years, and 62% female), with self-reported joint pain and at least two chronic diseases, filled in questionnaires about cognitive appraisals, coping strategies, pain intensity and physical functioning at baseline, at 6- and 18-month follow-ups. The associations of change in physical functioning with changes in cognitive appraisals, coping strategies and pain were modelled using generalized estimating equations (GEE). Results: Increase in pain, in negative thinking about the consequences of pain, and in activity avoidance and decrease in self-efficacy beliefs were associated with a decline in physical functioning. Discussion: Observed mean changes were small but large inter-individual variability was seen. This shows that cognitive appraisals and coping strategies are malleable. Statistical model of change clarifies the direction of longitudinal associations. Conclusions: The longitudinal findings suggest that joint pain, cognitive appraisals and coping strategies may determine physical functioning in older people who have chronic pain and comorbidity. abstract_id: PUBMED:32616014 Cognitive function of older adults engaging in physical activity. Background: Physical activity can be classified as open-skilled or closed-skilled. Open-skilled physical activity, such as tennis, require participants to perform within a dynamic setting and respond to unpredictable and frequent environmental changes throughout the activity. Closed-skilled types of physical activity, such as swimming, are predictable and self-directed. However, the benefits of cognitive function in these two types of physical activities to older adults are unknown. This study examined the effects of participation in open- and closed-skilled physical activity on the cognitive function of older adults. Methods: The study recruited a total of 61 participants aged 65 years and over. Participant recruitment was achieved by distributing flyers asking for volunteers in various sports venues. Participants self-reported to be without medical conditions affecting their physical and cognitive function. All participants underwent a two-hour assessment session involving the completion of seven standardised cognitive function assessments, which were used to assess a range of cognitive function. Results: Overall mean scores across all of the assessments showed superior performance for the open- or closed-skilled participants when compared with the no-physical-activity group. The results of 61 adults who participated in this study showed that closed-skilled physical activity was associated with better selective attention and visuospatial function while open-skilled physical activity was associated with better inhibition and cognitive flexibility function. No significant difference in self-regulation ability was found between the open- or closed-skilled groups. Conclusions: Open-skilled physical activity was associated with better inhibition, visual tracking, and cognitive flexibility while closed-skilled physical activity was associated with better selective attention and visuospatial perception. The findings have important practical implications for the health and quality of life of ageing populations, knowing which particular types of physical activity might affect the cognitive function. abstract_id: PUBMED:28587957 Role of physical exercise on cognitive function in healthy older adults: A systematic review of randomized clinical trials. Cognitive impairment has a harmful effect on quality of life, is associated with functional limitations and disability in older adults. Physical activity (PA) has shown to have beneficial effects on cognition but the results and conclusions of randomized controlled trials (RCTs) are less consistent. Update of knowledge was necessary to examine the effects on cognitive function of new training modalities developed in recent years, such as multicomponent exercise training. Therefore, the purpose of this review was to examine the role of multicomponent training versus aerobic or resistance training alone on cognition in healthy older adults (&gt;65 years) without known cognitive impairment. The mean differences (MD) of the parameters from pre-intervention to post-intervention between groups were pooled using a random-effects model. Twenty-one RCTs published between 2002 and 2016 were included. Multicomponent exercise training may have the most positive effects on cognitive function in older adults. The small number of included studies and the large variability in study populations, study design, exercise protocols, adherence rates and outcome measures complicate the interpretation of the results and contribute to discrepancies within the exercise research literature. abstract_id: PUBMED:32768594 Effects of exergames and cognitive-motor dual-task training on cognitive, physical and dual-task functions in cognitively healthy older adults: An overview. This overview aims to summarize the effectiveness of cognitive-motor dual-task and exergame interventions on cognitive, physical and dual-task functions in healthy older adults, as well as the feasibility, safety, adherence, transfer and retention of benefits of these interventions. We searched for systematic reviews or meta-analyses assessing the effects of cognitive-motor dual-task and exergame interventions on cognitive, physical and dual-task functions in cognitively healthy older adults through eight databases (CDSR (Cochrane), MEDLINE (PubMed), Scopus, EMBASE, CINAHL, PsycINFO, ProQuest and SportDiscus). Two reviewers performed the selection, data extraction and risk of bias evaluation independently (PROSPERO ID: CRD42019143185). Eighteen reviews were included in this overview. Overall, positive effects of cognitive-motor dual-task interventions on cognitive, physical and dual-task functions, as well as exergames on cognitive functions only, were observed in cognitively healthy older adults. In contrast, the effects of exergames on physical functions are more controversial, and their effects on dual-task functions have not been studied. The feasibility, safety, adherence, transfer and retention of benefits for both intervention types are still unclear. Future studies should follow more rigorous methodological standards in order to improve the quality of evidence and provide guidelines for the use of cognitive-motor dual-task and exergame interventions in older adults. abstract_id: PUBMED:30038951 Loneliness, Depressive Symptoms, and Cognitive Functioning Among U.S. Chinese Older Adults. Objective: Loneliness has been associated with cognitive functioning in the general older adult population. Previous studies further indicate that loneliness has a strong association with depressive symptoms and the two constructs can reinforce each other to diminish well-being. However, such relationships have not been examined in U.S. Chinese older adults. This study attempts to bridge this knowledge gap. Method: Data were drawn from a population-based study of 3,159 U.S. Chinese older adults in the Greater Chicago area. Stepwise multivariate regression analyses were conducted to examine the relationship between loneliness, depressive symptoms, and global cognitive functioning. Results: Loneliness was associated with poor global cognitive functioning in U.S. Chinese older adults, though the relationship became nonsignificant after adjustment for depressive symptoms. The interaction term between loneliness and cognitive functioning was statistically significant (p &lt; .01). The findings further highlight the importance of age, education, number of children, number of people in household, and length of residence in the U.S. in cognitive functioning among U.S. Chinese older adults. Discussion: The study findings indicate that loneliness and depressive symptoms act together to influence cognitive functioning in U.S. Chinese older adults. Research and clinical implications of the findings are discussed. Answer: Based on the provided abstracts, it appears that while both a healthy diet and physical activity are associated with cognitive functioning in older adults, they do not act synergistically. A cross-sectional study from the Longitudinal Aging Study Amsterdam (LASA) found that adherence to a healthy diet was independently related to good cognitive functioning, but being physically active did not modify this association, and the two lifestyle factors did not synergistically relate to cognitive functioning (PUBMED:27102791). In contrast, a systematic review and meta-analysis examining the impact of the Mediterranean diet (MeDi) on cognitive functioning in healthy older adults found a significant association between MeDi and improved cognitive functioning, particularly in global cognition and episodic memory (PUBMED:28710144). However, this review did not specifically address the synergistic effects of diet and physical activity. Another study highlighted that low physical activity and depression might be related to cognitive impairment in the elderly, suggesting that increased physical activity and decreased depressive symptoms could reduce the risks of cognitive impairment in older adults (PUBMED:29682228). This study did not explore the combined effect of diet and physical activity on cognitive functioning. Furthermore, research on frailty in older adults showed that frailty is associated with worse cognitive functioning, indicating that prevention and treatment of frailty may help protect cognitive functioning (PUBMED:36816402). This study did not directly address the synergistic relationship between diet and physical activity. A study on the use of Skeleton Avatar Technology (SAT) for assessing physical activity and functioning in older adults did not provide information on the relationship between diet, physical activity, and cognitive functioning (PUBMED:37924210). Lastly, studies on the cognitive function of older adults engaging in physical activity (PUBMED:32616014), the role of physical exercise on cognitive function (PUBMED:28587957), and the effects of exergames and cognitive-motor dual-task training (PUBMED:32768594) focused on the impact of physical activity on cognitive function without examining the potential synergistic effect with diet. In summary, while both a healthy diet and physical activity are important for cognitive functioning in older adults, the abstracts provided do not support the notion that they have a synergistic effect on cognitive functioning.
Instruction: Is Helicobacter pylori eradication indicated in cirrhotic patients with peptic ulcer disease? Abstracts: abstract_id: PUBMED:17542294 Is Helicobacter pylori eradication indicated in cirrhotic patients with peptic ulcer disease? Introduction: The association between H. pylori infection and peptic ulcer disease (PUD) and the efficacy of eradication of H. pylori in treating ulcer disease in cirrhotic patients remains controversial. This study was carried out to ascertain the prevalence and significance of H. pylori in cirrhotic patients with PUD and to assess the need for anti H. pylori thrapy Methods: Three groups of patients were studied . These were patients with (A) cirrhosis and PUD, (B) uncomplicated PUD and (C) cirrhosis without PUD. H. pylori status was determined by endoscopic urease test . Eradication therapy was given with a four drug regimen and repeat endoscopy was done three months later to detect ulcer healing as well as H. pylori status with PUD in groups A and B. Results: Cirrhotic patients with PUD had a significantly lesser prevalence of H. pylori compared to uncomplicated ulcer patients (46.9 % vs 80 %; p = 0.04). While H. pylori eradication rates were similar between cirrhotic and non cirrhotic patients, ulcer healing rate was significantly lesser in cirrhotic patients ( 48 % vs 80.9 %) . Majority of residual ulcers in cirrhotic patients were negative for H. pylori. Conclusion: Eradication of H. pylori does not reduce the residual ulcer rate indicating that H. pylori infection might not be a significant risk factor for PUD in cirrhotic patients. Hence, routine H. pylori eradication might not be warranted in patients with cirrhosis and peptic ulcer disease. abstract_id: PUBMED:7762174 Helicobacter pylori eradication in patients with diseases of the stomach The authors present a review of contemporary findings concerning treatment of gastroduodenal ulcers with a positive finding of Helicobacter pylori. Only combined anti-Helicobacter treatment leads to the eradication of Helicobacter pylori in 80-96% of the patients. Moreover it leads to gradual recovery from gastritis and bulbitis and thus prevents ulcer recurrence. abstract_id: PUBMED:34370686 Helicobacter pylori - 2021. Összefoglaló. A Helicobacter pylori továbbra is a világ legelterjedtebb fertőzése: prevalenciája a fejlődő országokban 70-80%, a fejlett országokban csökkenő tendenciát mutat. A dél-magyarországi véradókban a prevalencia 32%-ra csökkent. A migráció a befogadó ország számára a fertőzés fokozott kockázatával jár. A szövettani diagnózisban az immunhisztokémiai vizsgálat pontosabb a hagyományos Giemsa-festésnél. A mesterséges intelligencia érzékenysége a hagyományos endoszkópiáéval összehasonlítva 87%, pontossága 86%. Az újgenerációs szekvenálással lehetséges egy biopsziás mintából több antibiotikumérzékenység meghatározása. A Helicobacter pylori kezelésének európai regisztere kimutatta, hogy 2013 és 2018 között a bizmutalapú négyes vagy a 14 napos egyidejű négyes kezelések hatásosabbak, mint a hagyományos hármas kezelés, de elterjedésük igen lassú folyamat, jelentős földrajzi különbségekkel. Az új típusú koronavírus (SARS-CoV-2) felléphet Helicobacter pylori fertőzésben is, egymás kóros hatását felerősítve. A diagnosztikai módszerek korlátozottak. Protonpumpagátlók szedése növeli a COVID-19-fertőzés kockázatát és annak súlyos kimenetelét. Előzetesen ismert peptikus fekély, vérzés, illetve antikoguláns kezelés előtt az eradikáció a vírusos fertőzés lezajlása után indokolt. A probiotikumoknak az eradikációra gyakorolt hatásáról 20, közepes minőségű metaanalízis született, így a konszenzusokban foglalt álláspontok sem egyértelműek: a jövőben ezt tisztázni kell. Orv Hetil. 2021; 162(32): 1275-1282. Summary. Helicobacter pylori is still the most widespread infection in the world: its overall prevalence is 70-80% in developing regions, but fortunately it is decreasing in the Western world. The prevalence in blood donors from South-Eastern Hungary decreased from 63% in the 1990's to 32% in 2019. Migration constitutes an increased risk of infection for the destination countries. Immunohistochemistry has proven to be more accurate in histological diagnosis than the conventional Giemsa stain. The sensitivity and accuracy of artificial intelligence as compared to videoendoscopy were 87% and 86%, respectively. The European Register on the management of Helicobacter pylori infection revealed that concomitant quadruple and 14-day bismuth-based therapies are more efficient than triple combinations, although their incorporation in practice is a long-lasting process, with large geographical variations. The novel type of coronavirus (SARS-CoV-2) can also occur in Helicobacter pylori-infected patients, mutually enhancing their pathogenetic effects. Diagnostic possibilities are limited in this setting. The use of proton pump inhibitors increases the risk of viral infection and the severity of the disease. Eradication treatment seems justified in patients with previously known peptic ulcers or gastrointestinal bleeding, or before starting anticoagulant treatment, but must be postponed after resolution of viral infection. The effect of probiotics on eradication was addressed by 20, medium-to-low quality meta-analyses and so, the recommendations of the guidelines are equivocal, which must be clarified in the future with higher quality studies. Orv Hetil. 2021; 162(32): 1275-1282. abstract_id: PUBMED:31617972 First line eradication treatment of Helicobacter pylori in 2019 Helicobacter pylori infection is associated with chronic gastric inflammation, peptic ulcer and an increased risk of gastric cancer. Helicobacter eradication traditionally consists of an empirical therapy combining clarithromycine, amoxicillin and proton pump inhibitors. However, this classic therapy needs to be reassessed because of the raising prevalence of clarithromycine resistance. Various alternative eradication treatments have been studied. This article aims to review the recommended alternatives and the different factors to guide the most appropriate first line eradication therapy. abstract_id: PUBMED:19388513 Genetic type of Helicobacter pylori and the efficacy of eradication therapy Unlabelled: Helicobacter pylori is one of the most popular bacteria in the world. The H. pylori infection is an etiological factor of permanent changes in inflammatory of stomach mucous membrane, peptic ulcer of the stomach and duodenum disease and stomach cancer or mucosa associated lymphoid tissue from lymphoid tissue of mucous membrane. The strain bacteria which produce the protein CagA and cytotoxin VacA belong to the more pathogenic strains. The most successful method of treatment for H. pylori infection is an eradication of the bacteria. The Aim Of The Study: Was an evaluation of the influence which H. pylori genetic type (type I: CagA-positive, CagA-negative, VacA-positive, VacA-negative) has on efficacy of eradication therapy. Material And Methods: 214 of patients over the third year of life with symptoms of dyspepsia, of the upper part of the gastrointestinal tract was performed and H. pylori infection was proved in histopathological or (and) urease test and urea breath test. H. pylori identification was performed using PCR method for biopsy specimens of the gastric mucosa, estimating genetic type of the bacteria (CagA-positive, CagA-negative, VacA-positive, VacA-negative). Triple-drug eradication therapy was introduced. The efficiency of this treatment was checked after 6 weeks with the breath test. Results: The H. pylori infection was found in 101 patients (47.2%), 33 patients were infected with the strain type I (32.7%) and 68 patients (67.3%) with the strain type II. After the treatment the eradication of the infection was found at 71 patients (70.3%), lack of efficacy in H. pylori infection treatment was found at 30 patients (29.7%). Considerably higher percentage of eradicative infection was shown in the group of patients infected with the type II of H. pylori (76.5% vs. 58.8%, p &lt; 0.04). Conclusions: The effectiveness of eradication can be influenced by the genetic type of H. pylori. The better effects of eradicative treatment can be expected if one is infected with the strains of smaller virulence. abstract_id: PUBMED:19025303 Effect of Helicobacter pylori eradication on patients with functional dyspepsia Objective: This study evaluated Helicobacter pylori eradication therapy in terms of symptomatic response in patients with functional dyspepsia. On the other hand, we analyzed the importance of histologic findings as a predictor of treatment response. In particular, we studied whether antral gastritis (which is associated with peptic ulcer) may predict a greater symptomatic response to Helicobacter pylori eradication in functional dyspepsia. Patients And Methods: This prospective, randomized, single-center trial included 48 patients with functional dyspepsia and Helicobacter pylori infection (27 women and 21 men, mean age 37 +/- 13.5 years). Twenty-seven patients received a 10-day course of rabeprazole, amoxicillin, and clarithromycin (eradication group), followed by 20 mg of rabeprazole for 3 months. Twenty-one patients received 20 mg of rabeprazole for 3 months (control group). Patients were followed up over a 1-year period. All patients completed the Dyspepsia-Related Health Scale Questionnaire, which studies four dimensions: pain intensity, pain disability, non-pain symptoms, and satisfaction with dyspepsia-related health. Results: There was significant symptomatic improvement (p &lt; 0.002) after 6 and 12 months, which was similar with both treatments. In the multivariate analyses, eradication therapy and less severe symptoms before treatment were the only independent factors. The symptomatic response to Helicobacter pylori eradication after 6 months was significantly greater as compared to control therapy (p = 0.01) in patients with antral gastritis and in the non-pain symptoms dimension of the questionnaire. Conclusions: Both treatments proved to be clinically beneficial in patients with functional dyspepsia. We observed a tendency to greater symptomatic benefit with Helicobacter pylori eradication therapy when compared to control treatment in patients with functional dyspepsia and in a population with a high prevalence of this infection. There is a tendency to symptomatic benefit with Helicobacter pylori eradication therapy in patients with antral gastritis. abstract_id: PUBMED:23758027 Helicobacter pylori - Update 2013 Helicobacter pylori has an important role in the pathogenesis of peptic ulcer, adenocarcinoma of the stomach, lymphoma of the stomach and autoimmune gastritis. Furthermore Helicobacter pylori is involved in the development of symptoms in patients with dyspepsia. Guidelines of the German Society of Digestive Diseases (DGVS) and recommendations of the European Helicobacter Study Group (Maastricht-Consensus) exist for the diagnosis and treatment of Helicobacter pylori and were recently published in updated versions. The German approval and introduction of a new quadrupel eradication therapy for Helicobacter pylori infections is a good occasion to outline and discuss the current state of the art of diagnosis and treatment of Helicobacter pylori in Germany. abstract_id: PUBMED:9567401 Eradication of Helicobacter pylori in developing countries A NIH Consensus Conference recommended Helicobacter pylori eradication to all ulcer patients, based mainly on information coming from countries with a low prevalence of infection in general population. The epidemiological situation is different in developing countries, where a pandemic of H. pylori goes unchecked, and most people become infected at young age. It is possible that response to eradication therapies and reinfection rate were to be included among the differences between developed and developing countries, raising doubts about the worldwide applicability of NIH recommendations. Limited published evidence and our experience suggest that eradication therapies have a lower efficacy and reinfection rate is significantly higher in developing compared to developed countries. In spite of this, the risk of ulcer recurrence after H. pylori eradication is substantially reduced compared to antisecretory therapy. Model analysis to evaluate the cost-effectiveness of H. pylori eradication, using figures that probably include the clinical and costs situation of developing countries, suggests that also from an economic perspective H. pylori eradication should be the standard treatment for peptic ulcer disease in developing countries. Local studies must determine the best eradication therapy for a particular geographical location, and longer follow-up of eradicated patients is needed to determine the true reinfection rate. abstract_id: PUBMED:9214051 Helicobacter pylori in 1997 In this review Helicobacter pylori (H. pylori) infection and its relation to different diseases is presented. H. pylori doesn't cause inconvenience to most infected people, though all infected persons have chronic active gastritis. The 10 year risk of peptic ulcer for people infected with H. pylori is about 10%. Randomized double-blinded trials have shown that eradication of H. pylori can cure most patients with peptic ulcer disease. Some people infected with H. pylori develop atrophic gastritis which is a risk factor for development of gastric cancer. It is not known if H. pylori screening and eradication would have a prophylactic effect against gastric cancer. It is also unknown if persons with non-organic dyspepsia and persons in long-term treatment with proton-pump-inhibitors would benefit from H. pylori eradication. abstract_id: PUBMED:20209758 Prevalence and role of Helicobacter pylori infection in some gastroduodenal and hepatic complications in cirrhotic patients It has been clearly established that Helicobacter pylori (H. pylori) infection plays a pivotal role in the pathogenesis of chronic gastritis, peptic ulcer, gastric adenocarcinoma, and gastric lymphoma MALT (mucosa-associated lymphoid tissue) in the general population, but data regarding the prevalence and the role of H. pylori infection in liver cirrhosis are conflicting. Most serological studies estimated a high prevalence of H. pylori infection in patients with liver cirrhosis; however, when other methods (urea breath test, histology, culture, rapid urease test) were used, the overall H. pylori prevalence was similar to that in controls. Although the prevalence of both gastric ulcer (GU) and duodenal ulcer (DU) is higher in cirrhotic patients than in general population, the relationship between H. pylori infection and peptic ulcer in cirrhosis remains controversial. Our data regarding peptic ulcer prevalence in cirrhotic patients are in agreement with previous studies that suggest an increased prevalence of both GU and DU. The incidence of bleeding peptic ulcer is high in cirrhotic patients and carries an increased risk of complications or death in these patients and therefore eradication of H. pylori infection might be as effective in preventing ulcer relapse and bleeding as it is in noncirrhotic ulcer patients. Hepatic encephalopathy is a frecquent complication of liver cirrhosis, and it is widely accepted that ammonia plays a major role in its pathogenesis. The ammonia production by H. pylori urease does not increase blood ammonia levels during cirrhosis, and eradication of H. pylori infection does not affect hepatic encephalopathy status. Answer: The indication for Helicobacter pylori eradication in cirrhotic patients with peptic ulcer disease is not straightforward. A study found that cirrhotic patients with peptic ulcer disease had a significantly lower prevalence of H. pylori compared to patients with uncomplicated peptic ulcer disease (46.9% vs. 80%; p = 0.04). Additionally, while H. pylori eradication rates were similar between cirrhotic and non-cirrhotic patients, the ulcer healing rate was significantly lower in cirrhotic patients (48% vs. 80.9%). The majority of residual ulcers in cirrhotic patients were negative for H. pylori. This led to the conclusion that eradication of H. pylori does not reduce the residual ulcer rate, indicating that H. pylori infection might not be a significant risk factor for peptic ulcer disease in cirrhotic patients. Therefore, routine H. pylori eradication might not be warranted in patients with cirrhosis and peptic ulcer disease (PUBMED:17542294). However, it is important to note that H. pylori infection is a well-established risk factor for peptic ulcer disease in the general population, and eradication therapy is known to lead to recovery from gastritis and prevent ulcer recurrence (PUBMED:7762174). Moreover, the prevalence of both gastric and duodenal ulcers is higher in cirrhotic patients than in the general population, and the incidence of bleeding peptic ulcer carries an increased risk of complications or death in cirrhotic patients. Therefore, eradication of H. pylori infection might be effective in preventing ulcer relapse and bleeding as it is in non-cirrhotic ulcer patients (PUBMED:20209758). In conclusion, while the study suggests that routine H. pylori eradication may not be necessary in cirrhotic patients with peptic ulcer disease due to the lower prevalence and impact on ulcer healing (PUBMED:17542294), the potential benefits of preventing ulcer relapse and bleeding complications should be considered on an individual patient basis, especially given the established role of H. pylori in peptic ulcer disease in the general population and the higher prevalence of ulcers in cirrhotic patients (PUBMED:20209758).
Instruction: Is routine sedation or topical pharyngeal anesthesia beneficial during upper endoscopy? Abstracts: abstract_id: PUBMED:15557943 Is routine sedation or topical pharyngeal anesthesia beneficial during upper endoscopy? Background: Upper endoscopy is an invasive procedure. However, the benefits of routinely administered sedative medication or topical pharyngeal anesthesic are controversial. The aim of this study was to clarify their effects on patient tolerance and difficulty of upper endoscopy. Methods: A total of 252 patients scheduled for diagnostic upper endoscopy were randomly assigned to 4 groups: (1) sedation with midazolam and placebo pharyngeal spray (midazolam group), (2) placebo sedation and lidocaine pharyngeal spray (lidocaine group), (3) placebo sedation and placebo pharyngeal spray (placebo group), and (4) no intravenous cannula/pharyngeal spray (control group). The endoscopist and the patient assessed the procedure immediately after the examination. Another questionnaire was sent to the patients 2 weeks later. Results: Patients in the midazolam group rated the examination easier and less uncomfortable compared with those in the other groups. The differences were especially evident in the questionnaires completed 2 weeks after the examination ( p &lt; 0.001). Lidocaine did not significantly improve patient tolerance. However, endoscopists found the procedure easier in patients in the lidocaine group compared with the midazolam ( p &lt; 0.01) and control groups ( p &lt; 0.01) but not the placebo group. Conclusions: Routine administration of midazolam for sedation increased patient tolerance for upper endoscopy. However, endoscopists found intubation to be more difficult in sedated vs. non-sedated patients. Topical pharyngeal anesthesia did not enhance patient tolerance, but it did make upper endoscopy technically easier compared with endoscopy in patients sedated with midazolam without topical pharyngeal anesthesia, and in patients who had no sedation or pharyngeal anesthesia, but not in patients who received placebo sedation and placebo pharyngeal anesthesia. abstract_id: PUBMED:9167371 Topical pharyngeal anesthesia without intravenous sedation during upper gastrointestinal endoscopy. Background: There have been conflicting reports about the usefulness of topical pharyngeal anesthesia as a pre-endoscopic medication. Aims: To assess the effect of topical pharyngeal anesthesia without intravenous sedatives on easing upper gastrointestinal endoscopy. Methods: 153 consecutive consenting patients were randomized to receive either placebo (normal saline) spray (78 patients) or 80 mg of 4% lidocaine spray (75 patients). Difficulty of intubation was judged both by the patient and the physician on a linear analogue scale. Results: There was no significant difference in the scores in the two groups (p &gt; 0.05). The patient and physician assessments correlated well. Conclusions: Topical lignocaine spray does not facilitate upper gastrointestinal endoscopy in the absence of intravenous sedation. abstract_id: PUBMED:10406247 Topical pharyngeal anesthesia does not improve upper gastrointestinal endoscopy in conscious sedated patients. Objective: We undertook this study to determine whether topical pharyngeal anesthesia with conscious sedation is superior to conscious sedation alone, with respect to procedure performance or tolerance in patients undergoing diagnostic upper gastrointestinal endoscopy. Methods: Ninety-five patients undergoing diagnostic upper endoscopy with conscious sedation were randomized to receive either topical pharyngeal anesthesia with 2% tetracaine/14% benzocaine spray or no pharyngeal anesthesia. Conscious sedation was achieved in all patients using intravenous midazolam and meperidine. Patients were asked to rate their pretest anxiety, comfort during endoscopy, recollection of the procedure, and willingness to undergo subsequent examinations using a 100-mm visual analog scale. Additionally, they were asked to estimate procedure duration and rate their tolerance for topical pharyngeal anesthesia. All examinations were performed by two endoscopists who were blinded to whether or not patients had received pharyngeal anesthesia. Endoscopists were asked to determine whether they believed that patients had received topical pharyngeal anesthesia and to estimate ease of esophageal intubation and procedure performance using a 100-mm visual analog scale. Procedure duration and doses of midazolam and meperidine were measured. Results: The two groups did not differ with respect to age, gender, and previous endoscopic history. There were no significant differences between the two groups with respect to pretest anxiety, procedural comfort, and willingness to undergo subsequent examinations. Patients receiving topical pharyngeal anesthesia rated it as moderately unpleasant. Endoscopists were able to discriminate patients who received pharyngeal anesthesia from those who did not with a sensitivity of 0.73 and a specificity of 0.59. There were no significant differences between the two groups with respect to ease of intubation, procedure performance, procedure duration, and dosing of midazolam or meperidine. Conclusions: In patients undergoing diagnostic upper endoscopy using intravenous midazolam and meperidine, the use of topical pharyngeal anesthesia does not improve patient tolerance or procedure performance. Elimination of this agent in the performance of diagnostic upper endoscopy will save time and money without adversely affecting patient care or outcomes. abstract_id: PUBMED:38441100 Topical pharyngeal anesthesia with articaine for gastroscopy: a double-blinded, randomized cross-over study in healthy volunteers. Objectives: The benefits of topical pharyngeal anesthesia for gastroscopy remain under debate. Articaine, a local anesthetic with fast onset and offset of action as well as low systemic toxicity, could be a promising choice for topical anesthesia. The objective of this study was to assess whether topical pharyngeal anesthesia with articaine is beneficial in sedated gastroscopy. Materials And Methods: This randomized double-blinded cross-over study included nine volunteers who underwent two gastroscopies under conscious sedation. One was performed with topical pharyngeal anesthesia with articaine and the other with placebo. Hemodynamic parameters including autonomic nervous system state were recorded prior to and during the endoscopic procedure. The endoscopist and the volunteer assessed the endoscopy after the examination. Results: Topical pharyngeal anesthesia with articaine resulted in less discomfort during esophageal intubation and higher patient satisfaction with the procedure. Topical pharyngeal anesthesia with articaine did not increase satisfaction or facilitate the procedure as rated by the endoscopist. There were no clinically relevant differences in hemodynamic parameters. Conclusion: The use of articaine for topical pharyngeal anesthesia results in less intubation-related discomfort and better satisfaction. abstract_id: PUBMED:16650534 Pharyngeal anesthesia during sedated EGDs: is "the spray" beneficial? A meta-analysis and systematic review. Background: Pharyngeal anesthesia is widely used as an adjunct to sedation during upper endoscopy. Methemoglobinemia and anaphylactic reactions are rare but serious complications of topical anesthetic agents. Individual studies produce variable results about the effectiveness of pharyngeal anesthesia in improving patient tolerance. Objective: A systematic review was performed to evaluate the effectiveness of pharyngeal anesthesia in improving patient tolerance and ease of endoscopy during sedated upper endoscopy. Design: A MEDLINE search, an EMBASE search, and manual searches were performed to identify pertinent English language articles. Randomized controlled trials (RCT) comparing the efficacy of pharyngeal anesthesia to placebo or no treatment were identified. Duplicate data extraction about patient tolerance of the procedure and endoscopist assessment regarding the ease of endoscopy was performed. Results: From a pool of 53 studies, 5 RCTs evaluated a total of 491 patients and provided interpretable data. Patients who rated their discomfort during the sedated procedure as none/minimal were more likely to have received pharyngeal anesthesia (odds ratio [OR] 1.88, 95% confidence interval [CI] 1.13-3.12). Endoscopists were more likely to rate the procedure as "not difficult" for patients who received pharyngeal anesthesia (OR 2.60, 95% CI 1.63-4.17). Limitations: Lack of standardized outcome measurements and standardized sedation strategies led to heterogeneity in the patient-tolerance portion of the meta-analysis. Conclusions: Pharyngeal anesthesia before upper endoscopy improves ease of endoscopy and also improves patient tolerance. abstract_id: PUBMED:16907901 Evaluation of topical pharyngeal anaesthesia by benzocaine lozenge for upper endoscopy. Background: Among the randomized controlled trials evaluating the effect of pharyngeal anaesthesia only some suggest benefit. Spray is irritating for some people and leaves bitter taste in the throat. We hypothesized that delivering the local anaesthetic as a sucking lozenge would benefit the patients in terms of decreasing anxiety and will improve procedure performance and patient tolerance. Aim: To determine whether benzocaine/tyrothricin sucking lozenges with conscious sedation is superior to conscious sedation alone, with respect to procedure performance and tolerance in patients undergoing upper endoscopy. Methods: One hundred and seventy-four adult patients undergoing upper endoscopy with conscious sedation completed the study. They were randomized to receive sucking lozenge containing benzocaine or placebo before the procedure. Patients were asked to rate prestudy anxiety, tolerance for topical pharyngeal anaesthesia, comfort during endoscopy, degree of difficulty of intubation, postprocedure throat discomfort and willingness to undergo subsequent examinations using a 10-cm visual analogue scale. Endoscopists were asked to estimate the ease of oesophageal intubation and procedure performance. Results: No significant statistical differences regarding all the points studied were found between the groups. Conclusions: Topical pharyngeal anaesthesia with benzocaine/tyrothricin lozenges with conscious sedation has no advantages over conscious sedation alone in patients undergoing upper endoscopy. abstract_id: PUBMED:22000794 Propofol sedation alone or in combination with pharyngeal lidocaine anesthesia for routine upper GI endoscopy: a randomized, double-blind, placebo-controlled, non-inferiority trial. Background: In patients undergoing routine upper EGD, propofol is increasingly used without pharyngeal anesthesia because of its excellent sedative properties. It is unclear whether this practice is non-inferior in regard to ease of endoscopic intubation and patient comfort. Objective: To assess the relevance of local pharyngeal anesthesia regarding the ease of EGD performance in patients sedated with propofol as monotherapy. Design: Randomized, double-blind, placebo-controlled, non-inferiority trial. Setting: One community hospital and one university hospital in Switzerland. Patients: We enrolled 300 consecutive adult patients undergoing elective EGD. Intervention: Pharyngeal anesthesia with 4 squirts of lidocaine spray versus placebo spray immediately before propofol sedation. Main Outcome Measurements: Number of gag reflexes (primary endpoint), number of intubation attempts, and degree of salivation during intubation (secondary endpoints) assessed by the endoscopists and staff. Results: In the lidocaine group, 122 patients (82%) had no gag events, and 25 patients had a total of 39 gag events, whereas in the placebo group 104 patients (71%) had no gag events, and 43 patients had a total of 111 gag events. The rate ratio of gagging with quasi-likelihood estimation of placebo compared with lidocaine was 2.85 (95% confidence interval [CI], 1.42-6.19; P = .005). In adjusted logistic regression analysis, the odds ratio for gagging for placebo pharyngeal anesthesia compared with lidocaine was 1.9 (95% CI, 1.03-3.54). The number of intubation attempts and the degree of salivation were similar in both groups. Two patients in the placebo group experienced oxygen desaturation and needed short-term mask ventilation. Limitations: The level of sedation and possible long-term side effects of pharyngeal anesthesia were not assessed. Conclusion: Topical pharyngeal anesthesia reduces the gag reflex in patients sedated with propofol even though it does not seem to have an influence on the ease of the procedure and on patient or endoscopist satisfaction in adequately sedated patients. abstract_id: PUBMED:30595710 Viscous lidocaine solution versus lidocaine spray for pharyngeal local anesthesia in upper gastroesophageal endoscopy. Background: Advantage of using local sedation during upper gastrointestinal endoscopy (UGE) is still challenging. In the current study, the effect of lidocaine spray versus lidocaine viscous solution for pharyngeal local anesthesia during UGE has been compared. Materials And Methods: This is a randomized clinical trial conducted on 130 patients conducting UGE in 2013. Patients were randomly divided into two groups of viscous lidocaine solution (Group V) and lidocaine spray (Group S). Patients' tolerance, satisfaction, pain/discomfort, and anxiety (based on 11-point numerical score scale) and ease of endoscopy were compared. Results: Ease of procedure, patients' tolerance, and patients' satisfaction were not statistically different between two groups (P &gt; 0.05). Patients' pain/discomfort and anxiety during endoscopy were significantly different between groups (P &lt; 0.05). Conclusion: As a conclusion, there was not any difference between two groups except for pain, discomfort, and anxiety that was higher in those who administered spray that might be due to the method of usage. abstract_id: PUBMED:8903978 Topical pharyngeal anesthesia improves tolerance of upper gastrointestinal endoscopy: a randomized double-blind study. Background And Study Aims: The usefulness of topical pharyngeal anesthesia is not well established. The aim of the present study was to determine its benefits in relation to patient tolerance and facilitation of the procedure in unsedated patients undergoing upper gastrointestinal endoscopy. Patients And Methods: A randomized double-blind study comparing Topicaina spray - a mixture of benzocaine butyl aminobenzoate (butoforme), amethocaine, and butacaine - and a placebo was carried out on 256 outpatients referred for diagnostic endoscopy. No additional premedication was used. After the examination, both the tolerance to and difficulty of the intubation and examination were evaluated by patients and endoscopists respectively, using visual analogue scales and a questionnaire. Results: Three patients (1.2%) did not tolerate the endoscopy. One patient was excluded for unexpected therapeutic endoscopy. One hundred twenty-five patients received the active spray and 127 received the placebo. The two groups were similar with respect to patient characteristics. Both intubation and examination, assessed with visual analogue scales, were better tolerated (both p = 0.0001) and more easily performed (p = 0.02 and p = 0.0001 respectively) in the active treatment group. Patients receiving the active spray had a better tolerance for the procedure, according to questionnaire replies. Conclusions: Topical pharyngeal anesthesia in unsedated patients undergoing diagnostic upper gastrointestinal endoscopy improves tolerance and makes examination easier. abstract_id: PUBMED:11154482 Evaluation of topical pharyngeal anesthesia for upper endoscopy including factors associated with patient tolerance. Background: Topical pharyngeal anesthesia is generally used as a pretreatment for upper endoscopy. However its efficacy has not been established. Methods: A randomized double-blind placebo-controlled study was undertaken. Subjects were 201 patients who underwent upper endoscopy and gave informed consent. Relative risks (RR) of patient discomfort were calculated for pharyngeal anesthesia, anxiety, and other potential confounding factors by using logistic regression analyses. Results: The RR of patient discomfort on intubation was 0.56 for the anesthesia versus the placebo group (95% CI, 0.31-1.01). RR was higher in patients aged 39 or younger than in those 40 or over (RR = 2.22, 95% CI, 1.04-4.74). With subgroup analysis in those examinees less than 40 years old, the RR of patient discomfort was 0.21 for the topical anesthesia (95% CI, 0.04-0.99) versus the placebo group and 4.93 for patients undergoing upper endoscopy for the first time (95% CI, 1.13-21.60). In the first-time patients, the RR was lower in the topical anesthesia than in the placebo group (RR = 0.20; 95% CI, 0.04-0.93); it was higher in patients with a trait-anxiety score higher by 10 points than in those with a lower score (RR = 3.35, 95% CI, 1.01-11.15). With the Bonferroni correction for multiple testing of data, statistical significance is indicated by a CI of 97.5% in the subgroup analyses. Conclusions: Topical pharyngeal anesthesia appears to be effective in patients less than 40 years old and in those undergoing the procedure for the first time. A high trait-anxiety score could be a predictor of discomfort in first-time examinees. Answer: Routine sedation and topical pharyngeal anesthesia have been studied to determine their benefits during upper endoscopy, with mixed results. Sedation with midazolam has been shown to increase patient tolerance for upper endoscopy. Patients in the midazolam group rated the examination as easier and less uncomfortable compared to other groups, especially in follow-up questionnaires completed two weeks after the examination. However, endoscopists found the procedure more difficult in sedated patients compared to non-sedated patients (PUBMED:15557943). Topical pharyngeal anesthesia with lidocaine, on the other hand, did not significantly improve patient tolerance in some studies. For instance, one study found that lidocaine spray did not facilitate upper gastrointestinal endoscopy in the absence of intravenous sedation (PUBMED:9167371). Another study concluded that topical pharyngeal anesthesia does not improve patient tolerance or procedure performance when used with conscious sedation (PUBMED:10406247). However, a meta-analysis and systematic review found that patients who received pharyngeal anesthesia were more likely to rate their discomfort during sedated procedures as none/minimal, and endoscopists were more likely to rate the procedure as "not difficult" (PUBMED:16650534). Similarly, another study found that topical pharyngeal anesthesia with articaine resulted in less discomfort during esophageal intubation and higher patient satisfaction (PUBMED:38441100). In contrast, a study evaluating benzocaine lozenges found no significant differences in procedure performance or patient tolerance when compared to conscious sedation alone (PUBMED:16907901). Another study found that while topical pharyngeal anesthesia with lidocaine reduced the gag reflex in patients sedated with propofol, it did not influence the ease of the procedure or satisfaction (PUBMED:22000794). In conclusion, routine sedation with midazolam appears to be beneficial for patient tolerance during upper endoscopy, although it may make the procedure technically more difficult for endoscopists. The benefits of topical pharyngeal anesthesia are less clear, with some studies showing improvement in patient tolerance and ease of procedure, while others show no significant benefit. The effectiveness of topical pharyngeal anesthesia may also depend on the specific anesthetic agent used and the patient population.
Instruction: Melanoma metastatic to cervical lymph nodes: Can radiotherapy replace formal dissection after local excision of nodal disease? Abstracts: abstract_id: PUBMED:15952196 Melanoma metastatic to cervical lymph nodes: Can radiotherapy replace formal dissection after local excision of nodal disease? Background: Completion cervical lymphadenectomy is usually performed after excisional biopsy of nodal metastases from melanoma. Radiation (XRT) might be effective for some patients in lieu of formal lymph node dissection. Methods: Thirty-six patients with parotid or cervical node metastases from melanoma were treated with excision of nodal disease and postoperative XRT without formal lymph node dissection. Radiation was delivered to the primary site (if known), the site of nodal excision, and the undissected ipsilateral neck. Results: With a median follow-up of 5.3 years, the disease recurred within the regional basin in two patients and at distant sites in 14 patients. The actuarial 5-year regional control and distant metastasis-free survival rates were 93% and 59%, respectively. Two patients had a clinically significant side effect develop. Conclusions: The results of this study suggest that selected patients may receive regional XRT after local excision of nodal disease from melanoma in lieu of formal lymph node dissection. abstract_id: PUBMED:30116948 Treatment of Clinically Positive Cervical Lymph Nodes by Limited Local Node Excision and Adjuvant Radiotherapy in Melanoma Patients with Major Comorbidities. Introduction: When cervical lymph nodes are clinically positive for metastatic melanoma, surgeons may be hesitant to recommend a therapeutic complete lymph node dissection if the patient is elderly or has major comorbidities. A limited local node excision of the clinically positive nodes only, followed by adjuvant radiotherapy to the entire node field, may be an effective alternative in such patients. Methods: All patients who had presented with a primary head and neck melanoma or an unknown primary site and had subsequently undergone limited local node excision and adjuvant radiotherapy for macroscopically involved cervical nodes between 1993 and 2010 at a tertiary referral center were selected for study. Results: Twenty-eight patients were identified, with a median age of 78 years and a median of 2 major comorbidities. The 5-year regional control, disease-free survival, and overall survival rates were 69%, 44%, and 50%, respectively. At the time of data analysis, seven patients were alive without evidence of disease. Twenty-one patients had died: 11 of melanoma (4 with neck recurrence) and 10 of other causes (2 with neck recurrence). Conclusions: Excision of clinically positive metastatic cervical lymph nodes followed by radiotherapy provides satisfactory regional disease control without risking serious morbidity or mortality in melanoma patients whose general condition is considered a contraindication for therapeutic complete lymph node dissection. abstract_id: PUBMED:8377499 Local control following therapeutic nodal dissection for melanoma. Regional recurrence of melanoma is usually treated with surgical resection alone. Unfortunately sites of failure following surgical resection are poorly documented. Little information exists regarding local failure following surgery. In order to define local control, a retrospective analysis was performed of all patients undergoing a potentially curative lymph node dissection for metastatic melanoma. From 1978 to 1988, 48 patients underwent lymph node dissection with removal of all known disease (15 axillary, 25 groin and 8 radical neck dissections). Seven patients had stage II disease with simultaneous resection of the primary lesion and nodal dissection. The remaining 41 patients had stage I disease with dissection delayed until nodal metastasis became apparent. Of these 48 patients, 25 experienced local failure for an overall local control rate of 48%. Univariate and multivariate analysis showed only age to be a statistically significant prognostic indicator of local failure with a rate of 31% for patients &lt; 50 years of age vs. 66% for patients &gt; 50 years of age (P = 0.02). Nodal size, number of nodes involved, extracapsular extension, initial stage, location, or sex did not influence prognosis. Although not statistically significant, time to recurrence was much shorter in patients with extracapsular extension, 5 months vs. 16 months. With an overall local failure rate of 52% following a potentially curable therapeutic nodal dissection further local treatment should be considered. abstract_id: PUBMED:19914463 Complete lymph node dissection for regional nodal metastasis. The primary management of lymph nodes involved with metastatic melanoma is regional lymphadenectomy. Axillary or inguinal node complete lymph node dissection (CLND) is performed after an occult metastasis is found by sentinel lymph node biopsy, or after a clinically apparent regional lymph node metastasis. CLND completely removes all lymph-node-bearing tissue in a nodal basin. This procedure continues to be controversial. No randomized prospective studies have yet determined the survival advantage of CLND. The National Comprehensive Cancer Network recommends that all patients with stage III melanoma have a CLND. abstract_id: PUBMED:10661355 Nodal basin recurrence following lymph node dissection for melanoma: implications for adjuvant radiotherapy. Purpose: To analyze patterns of failure in malignant melanoma patients with lymph node involvement who underwent complete lymph node dissection (LND) of the nodal basin. To determine prognostic factors predictive of local recurrence in the lymph node basin in order to select patients who may benefit from adjuvant radiotherapy. Methods And Materials: A retrospective analysis of 338 patients undergoing complete LND for melanoma between 1970 and 1996 who had pathologically involved lymph nodes was performed. Mean follow-up from the time of LND was 54 months (range: 12-306 months). Lymph node basins dissected included the neck (56 patients), axilla (160 patients), and groin (122 patients). Two hundred fifty-three patients (75%) underwent therapeutic LND for clinically involved nodes, while 85 patients (25%) had elective dissections. Forty-four percent of patients received adjuvant systemic therapy. No patients received adjuvant radiotherapy to the lymph node basin. Results: Overall and disease-specific survival for all patients at 10 years was 30% and 36%, respectively. Overall nodal basin recurrence was 30% at 10 years. Mean time to nodal basin recurrence was 12 months (range: 2-78 months). Site of nodal involvement was prognostic with 43%, 28%, and 23% nodal basin recurrence at 10 years with cervical, axillary, and inguinal involvement, respectively (p = 0.008). Extracapsular extension (ECE) led to a 10-year nodal basin failure rate of 63% vs. 23% without ECE (p &lt; 0.0001). Patients undergoing a therapeutic dissection for clinically involved nodes had a 36% failure rate in the nodal basin at 10 years, compared to 16% for patients found to have involved nodes after elective dissection (p = 0.002). Lymph nodes larger than 6 cm led to a failure rate of 80% compared to 42% for nodes 3-6 cm and 24% for nodes less than 3 cm (p &lt; 0.001). The number of lymph nodes involved also predicted for nodal basin failure with 25%, 46%, and 63% failure rates at 10 years for 1-3, 4-10, and &gt; 10 nodes involved (p = 0.0001). There was no significant difference in nodal basin control in patients with synchronous or metachronous lymph node metastases, nor in patients receiving or not receiving adjuvant systemic therapy. Nodal basin failure was predictive of distant metastasis with 87% of patients with nodal basin recurrence developing distant disease compared to 54% of patients without nodal failure (p &lt; 0.0001). On multivariate analysis, number of positive nodes and type of dissection (elective vs. therapeutic) were significant predictors of overall and disease-specific survival. Size of the largest lymph node was also predictive of disease-specific survival. Site of nodal involvement and ECE were significant predictors of nodal basin failure. Conclusions: Malignant melanoma patients with nodal involvement have a significant risk of nodal basin failure after LND if they have cervical involvement, ECE, &gt;3 positive lymph nodes, clinically involved nodes, or any node larger than 3 cm. Patients with these risk factors should be considered for adjuvant radiotherapy to the lymph node basin to reduce the incidence of nodal basin recurrence. Patients with nodal basin failure are at higher risk of developing distant metastases. abstract_id: PUBMED:31548823 Primary excision margins, sentinel lymph node biopsy, and completion lymph node dissection in cutaneous melanoma: a clinical practice guideline. Background: For patients who are diagnosed with early-stage cutaneous melanoma, the principal therapy is wide surgical excision of the primary tumour and assessment of lymph nodes. The purpose of the present guideline was to update the 2010 Cancer Care Ontario guideline on wide local excision margins and sentinel lymph node biopsy (slnb), including treatment of the positive sentinel node, for melanomas of the trunk, extremities, and head and neck. Methods: Using Ovid, the medline and embase electronic databases were systematically searched for systematic reviews and primary literature evaluating narrow compared with wide excision margins and the use of slnb for melanoma of the truck and extremities and of the head and neck. Search timelines ran from 2010 through week 25 of 2017. Results: Four systematic reviews were chosen for inclusion in the evidence base. Where systematic reviews were available, the search of the primary literature was conducted starting from the end date of the search in the reviews. Where systematic reviews were absent, the search for primary literature ran from 2010 forward. Of 1213 primary studies identified, 8 met the inclusion criteria. Two randomized controlled trials were used to inform the recommendation on completion lymph node dissection.Key updated recommendations include:■ Wide local excision margins should be 2 cm for melanomas of the trunk, extremities, and head and neck that exceed 2 mm in depth.■ slnb should be offered to patients with melanomas of the trunk, extremities, and head and neck that exceed 0.8 mm in depth.■ Patients with sentinel node metastasis should be considered for nodal observation with ultrasonography rather than for completion lymph node dissection. Conclusions: Recommendations for primary excision margins, sentinel lymph node biopsy, and completion lymph node dissection in patients with cutaneous melanoma have been updated based on the current literature. abstract_id: PUBMED:29620000 Lymph node dissection for melanoma using tumescence local anaesthesia: an observational study. The possibility that tumescence local anaesthesia (TLA) may lead to dissemination of tumour cells in lymph nodes is presently unclear. To evaluate whether infiltration by TLA influences metastatic spread and survival probability, compared to general anaesthesia (GA), based on lymph node dissection in melanoma patients. In total, 281 patients (GA: 162; TLA: 119) with cutaneous melanoma and clinically or histologically-confirmed metastases in regional lymph nodes were included. All patients underwent complete lymph node dissection. Median follow-up was 70 months. The rate of lymph node recurrence at the dissection site was 25.3% in the GA group and 17.6% in the TLA group (p = 0.082). No significant difference was found concerning 10-year melanoma-specific survival (GA: 56.2%, TLA: 67.4%; p = 0.09), disease-free survival (GA: 72.8 %, TLA: 81.1%; p = 0.095), or lymph node-free survival (GA: 72.8%, TLA: 81.1%; p = 0.095). Distant metastases-free survival appeared to be slightly reduced in the TLA group (GA: 49.9%, TLA: 64.0%; p = 0.025). No differences were identified between the GA and TLA groups regarding prognostic outcome for overall survival or disease-free survival. abstract_id: PUBMED:36115028 Regional control after precision lymph node dissection for clinically evident melanoma metastasis. Introduction: Completion lymph node dissection (CLND) for microscopic lymph node metastases has been replaced by observation; however, CLND is standard for clinically detectable nodal metastases (cLN). CLND has high morbidity, which may be reduced by excision of only the cLN (precision lymph node dissection [PLND]). We hypothesized that same-basin recurrence risk would be low after PLND. Methods: Retrospective review at four tertiary care hospitals identified patients who underwent PLND. The primary outcome was 3-year cumulative incidence of isolated same-basin recurrence. Results: Twenty-one patients underwent PLND for cLN without synchronous distant metastases. Reasons for forgoing CLND included patient preference (n = 11), comorbidities (n = 5), imaging indeterminate for distant metastases (n = 2), partial response to checkpoint blockade (n = 1), or not reported (n = 2). A median of 2 nodes (range: 1-6) were resected at PLND, and 68% contained melanoma. Recurrence was observed in 33% overall. Only 1 patient (5%) developed an isolated same-basin recurrence. Cumulative incidences at 3 years were 5.0%, 17.3%, and 49.7% for isolated same-basin recurrence, any same-basin recurrence, and any recurrence, respectively. Complications from PLND were reported in 1 patient (5%). Conclusions: These pilot data suggest that PLND may provide adequate regional disease control with less morbidity than CLND. These data justify prospective evaluation of PLND in select patients. abstract_id: PUBMED:24752717 Outcomes of routine ilioinguinal lymph node dissection for palpable inguinal melanoma nodal metastasis. Background: Patients who present with palpable inguinal melanoma nodal metastasis have two surgical options: inguinal or ilioinguinal lymph node dissection. Indications for either operation remain controversial. This study examined survival and recurrence outcomes following ilioinguinal dissection for patients with palpable inguinal nodal metastasis, and assessed the incidence and preoperative predictors of pelvic nodal metastasis. Methods: This was a retrospective clinicopathological analysis of consecutive surgical patients with stage III malignant melanoma. All patients underwent a standardized ilioinguinal dissection at a specialist tertiary oncology hospital over a 12-year period (1998-2010). Results: Some 38.9 per cent of 113 patients had metastatic pelvic nodes. Over a median follow-up of 31 months, the 5-year overall survival rate was 28 per cent for patients with metastatic inguinal and pelvic nodes, and 51 per cent for those with inguinal nodal metastasis only (P = 0.002). The nodal basin control rate was 88.5 per cent. Despite no evidence of pelvic node involvement on preoperative computed tomography (CT), six patients (5.3 per cent) with a single metastatic inguinal lymph node had metastatic pelvic lymph nodes. Logistic regression analysis showed that the number of metastatic inguinal nodes (odds ratio 1.56; P = 0.021) and suspicious CT findings (odds ratio 9.89; P = 0.001) were both significantly associated with metastatic pelvic nodes. The specificity of CT was good (89.2 per cent) in detecting metastatic pelvic nodes, but the sensitivity was limited (57.9 per cent). Conclusion: Metastatic pelvic nodes are common when palpable metastatic inguinal nodes are present. Long-term survival can be achieved following their resection by ilioinguinal dissection. As metastatic pelvic nodes cannot be diagnosed reliably by preoperative CT, patients presenting with palpable inguinal nodal metastasis should be considered for ilioinguinal dissection. abstract_id: PUBMED:30054111 Pelvic lymph node dissection in metastatic melanoma to the groin should not be abandoned yet. Background: In recent years there has been a plea to abandon the pelvic lymph node dissection in the treatment of patients with metastatic melanoma to the groin. A trend towards a conservative surgical treatment is already evolving in several European countries. The purpose of this study is to identify factors associated with pelvic nodal involvement, in order to improve selection of patients whom might benefit from a pelvic nodal dissection. Methods: A retrospective analysis was performed on prospectively collected data concerning patients who underwent an inguinal lymph node dissection (ILND) with pelvic lymph node dissection for metastatic melanoma at the University Medical Center Groningen. Multivariable logistic regression analysis was performed to determine factors associated with pelvic nodal involvement. Diagnostic accuracy was calculated for 18F-FDG PET + contrast enhanced CT-scan and 18F-FDG PET + low dose CT-scan. Results: Two-hundred-and-twenty-six ILND's were performed in 223 patients. The most common histologic subtype was superficial spreading melanoma (42.6%). In patients with micrometastatic disease, 15.7% had pelvic nodal involvement vs 28.2% in patients with macrometastatic disease (p: 0.030). None of the characteristics known prior to the ILND, were associated with pelvic nodal involvement. Imaging methods were unable to accurately predict pelvic nodal involvement. Negative predictive value was 78% for 18F-FDG PET + low dose CT-scan and 86% for an 18F-FDG PET + contrast enhanced CT-scan. Conclusion: There are no patient- or tumor characteristics available that can predict pelvic nodal involvement in patients with melanoma metastasis to the groin. As no imaging technique is able to predict pelvic nodal involvement it seems unjust to abandon the pelvic lymph node dissection. Answer: The question of whether radiotherapy can replace formal dissection after local excision of nodal disease in melanoma metastatic to cervical lymph nodes has been explored in several studies. According to the results of a study with 36 patients who had parotid or cervical node metastases from melanoma and were treated with excision of nodal disease and postoperative radiotherapy without formal lymph node dissection, the 5-year regional control rate was 93%, suggesting that selected patients may receive regional radiotherapy after local excision of nodal disease from melanoma instead of formal lymph node dissection (PUBMED:15952196). Another study involving 28 patients with a median age of 78 years and a median of 2 major comorbidities who underwent limited local node excision and adjuvant radiotherapy for macroscopically involved cervical nodes reported a 5-year regional control rate of 69%. This suggests that excision of clinically positive metastatic cervical lymph nodes followed by radiotherapy provides satisfactory regional disease control without risking serious morbidity or mortality in melanoma patients who are not suitable candidates for therapeutic complete lymph node dissection (PUBMED:30116948). However, it is important to note that the standard primary management of lymph nodes involved with metastatic melanoma is regional lymphadenectomy, and no randomized prospective studies have yet determined the survival advantage of complete lymph node dissection (CLND) over other treatments (PUBMED:19914463). Furthermore, a study analyzing patterns of failure in melanoma patients with lymph node involvement who underwent complete lymph node dissection found a significant risk of nodal basin failure after LND in patients with certain risk factors, suggesting that these patients should be considered for adjuvant radiotherapy to reduce the incidence of nodal basin recurrence (PUBMED:10661355). In conclusion, while radiotherapy after local excision of nodal disease has shown promising regional control rates in selected patients, especially those with comorbidities or advanced age, it has not been established as a standard replacement for formal lymph node dissection in all patients with melanoma metastatic to cervical lymph nodes. The decision to use radiotherapy in lieu of formal dissection should be made on a case-by-case basis, considering the patient's overall health, the extent of nodal disease, and other individual factors.
Instruction: Does the left atrial appendage morphology correlate with the risk of stroke in patients with atrial fibrillation? Abstracts: abstract_id: PUBMED:36223248 Association Between Ischemic Stroke and Left Atrial Appendage Morphology in Patients With Atrial Fibrillation and Low CHA2DS2-VASc Scores. Background: Patients with atrial fibrillation are at risk for ischemic stroke, even with low CHA2DS2-VASc scores. The left atrial appendage is a known site of thrombus formation in individuals with atrial fibrillation. Methods: We conducted a prospective study, enrolling patients with nonvalvular atrial fibrillation and CHA2DS2-VASc scores of 0 or 1. Patients were divided into groups based on left atrial appendage morphology (determined by computed tomography): the "chicken wing" group and the non-chicken wing group. We followed patients for more than 1 year to observe the incidence of stroke. Results: Of 509 patients with a mean (SD) age of 48.9 (11.6) years; 332 (65.2%) were men. The chicken wing group had fewer left atrial appendage lobes, a lower left atrial appendage depth, and a smaller left atrial appendage orifice area (all P &amp;lt; .001). During the follow-up period, 5 of the 133 patients (3.8%) in the chicken wing group and 56 of the 376 patients (14.9%) in the non-chicken wing group experienced ischemic stroke (P &amp;lt; .001). The following findings were significantly associated with the incidence of stroke: left atrial appendage depth (hazard ratio [HR], 1.98; 95% CI, 1.67-3.12; P = .03), left atrial appendage orifice area (HR, 2.16; 95% CI, 1.59-3.13; P &amp;lt; .001), and non-chicken wing left atrial appendage morphology (HR, 1.16; 95% CI, 1.10-1.23; P &amp;lt; .001). Conclusion: For patients with atrial fibrillation and a low CHA2DS2-VASc score, the non-chicken wing left atrial appendage morphology type is independently associated with ischemic stroke. abstract_id: PUBMED:36803426 Impact of left atrial appendage morphology on thrombus formation in TAVI patients with atrial fibrillation. Purpose: We aimed to correlate left atrial appendage (LAA) morphology with thrombus formation in patients with severe aortic valve stenosis and atrial fibrillation. Methods: We analyzed LAA morphology and the prevalence of a thrombus in 231 patients with atrial fibrillation and severe aortic valve stenosis that were referred for pre-interventional CT scan before trans-catheter aortic valve implantation (TAVI) between 2016 and 2018. In addition, we documented neuro-embolic events depending on the presence of LAA thrombus within a follow-up of 18 months. Results: The overall distribution of different LAA morphologies was chicken-wing 25.5%, windsock 51.5%, cactus 15.6% and cauliflower 7.4%. Compared to chicken-wing morphology, patients with non-chicken-wing morphology showed a significantly higher thrombus rate (OR: 2.48, 95%; CI 1.05 to 5.86, p = 0.043). Within the 50 patients with a LAA thrombus, we observed chicken-wing (14.0%), windsock (62.0%), cactus (16.0%) and cauliflower (8.0%) configuration. In patients with LAA thrombus those with chicken-wing configuration have a higher risk (42.9%) to develop neuro-embolic events compared to non-chicken-wing configuration (20.9%). Conclusion: We found a lower LAA thrombus rate in patients with chicken-wing morphology compared to patients with non-chicken-wing configuration. However, in the presence of thrombus, those patients with chicken-wing morphology showed a doubled risk for neuro-embolic events compared to patients with non-chicken-wing morphology. These results must be confirmed in larger trials but underline the importance of LAA evaluation in thoracic CT scans and could have an impact on the anticoagulation management. abstract_id: PUBMED:33512055 Left atrial appendage morphology predicts the formation of left atrial appendage thrombus. Background: Nonchicken wing left atrial appendage (LAA) morphology is associated with higher risk for stroke in patients with atrial fibrillation (AF) than chicken wing (CW) morphology. Objective: Assess whether LAA morphology predicts the formation of LAA thrombus independent of age, sex, presenting rhythm, left ventricular ejection fraction (LVEF), or anticoagulant use. Methods: A cross-sectional analysis was performed on patients prospectively enrolled in the Vanderbilt LAA Registry or presenting for transesophageal echocardiogram (TEE) between January 1, 2015, and November 1, 2017 (n = 306). Two physicians independently reviewed TEEs interpreted as having LAA thrombus. Determination of LAA morphology, ejection velocity, and presence of thrombus (n = 102) were based on 0°, 45°, 90°, and 135° TEE views. The control cohort (n = 204) included consecutive AF patients undergoing TEE without LAA thrombus. Results: LAA morphology in patients with LAA thrombus was: 35% windsock, 47% broccoli, and 12% CW. Windsock (odds ratio [OR], 4.0; 95% confidence interval [CI]: 1.7-9.3, p = .001) and broccoli (OR, 6.6; 95% CI: 2.6-16.6; p &lt; .001) morphology were higher risk for thrombus compared to CW. Female sex predicted higher-odds for LAA thrombus (OR, 2.6; 95% CI: 1.4-4.8; p = .002) as did LAA-EV &lt; 20 cm/s (OR, 11.12; 95% CI: 5.6-22.1). Anticoagulation use (OR, 0.5; 95% CI: 0.3-0.9; p = .03) and higher LVEF (OR, 0.95; 95% CI: 0.93-0.98; p &lt; .001) were associated with lower risk. In patients with a CW morphology who had LAA thrombus, 4 of the 7 had an LAA-EV &lt; 20 cm/s and acute systolic heart failure with LVEF &lt; 30% or active malignancy. In multivariable linear regression analysis controlling for presenting rhythm, anticoagulant use, age, sex, and LVEF, CW morphology appears relatively protective from LAA thrombus (p = .001). Conclusion: CW LAA morphology appears relatively protective against the formation of LAA thrombus. abstract_id: PUBMED:27016474 Left atrial appendage morphology assessment for risk stratification of embolic stroke in patients with atrial fibrillation: A meta-analysis. Background: Thromboembolic event (TE) risk stratification is performed by using CHA2DS2VASc score. It has been suggested that left atrial appendage (LAA) morphology independently influences TE risk in patients with nonvalvular atrial fibrillation. LAA morphology has been classified into 4 types: chicken wing, cauliflower, windsock, and cactus. Objective: The purpose of this study was to determine TE risk for each LAA morphology in patients with atrial fibrillation with low to intermediate TE risk. Methods: A systematic review of MEDLINE, Cochrane Library, and Embase for studies that used computed tomography, tridimensional transesophageal echocardiography, and cardiac magnetic resonance imaging to categorize the LAA morphologies with assessment of TE prevalence. Odds ratio (OR) and 95% confidence intervals (CIs) were measured using the Mantel-Haenszel method. The fixed effects model was used, and if heterogeneity (I2) was &gt;25%, effects were analyzed using a random model. Results: Eight studies with 2596 patients were included. Eighty-four percent (n=1872) of patients had a CHADS2 score of &lt;2. TE risk was lower in chicken wing morphology than in non-chicken wing morphology (OR 0.46; 95% CI 0.36-0.58). Likewise, chicken wing morphology had lower TE risk than did other morphologies (chicken wing vs cauliflower: OR 0.38; 95% CI 0.26-0.56; chicken wing vs windsock: OR 0.48; 95% CI 0.31-0.73; chicken wing vs cactus: OR 0.49; 95% CI 0.36-0.66). Conclusion: Patients with chicken wing LAA morphology are less likely to develop TE than patients with non-chicken wing morphology. LAA morphology may be a valuable criterion in predicting TE and could affect the stratification and anticoagulation management of patients with low to intermediate TE risk. abstract_id: PUBMED:34128199 Relations between left atrial appendage contrast retention and thromboembolic risk in patients with atrial fibrillation. Left atrial appendage (LAA), a blind pouch, accounts for more than 90% of the source of cardiac thrombus formation. Contrast retention (CR) in the LAA has been frequently observed during left atrial appendage occlusion (LAAO) procedures, especially in patients with stroke history. This study was designed to assess the relations between LAA contrast retention and thrombogenesis risk of the LAA in patients with non-valvular atrial fibrillation. A total of 132 consecutive patients who underwent LAAO were enrolled. The data collected from computed tomography (CT), transthoracic echocardiography (TTE), transesophageal echocardiography (TEE) and blood samples were analyzed. Univariate and multivariate logistic regression models were constructed to assess the association between CR, left atrial appendage thrombus (LAAT) and other factors. Contrast retention was observed in 33 patients, accounting for 25% of the population. Compared to the non-CR group, patients in the CR group had a larger left atrium anteroposterior diameter (49.64 ± 11.57 vs. 42.42 ± 7.04, P = 0.002), higher CHADS2 (3.88 ± 0.99 vs. 2.97 ± 1.35, P = 0.001) and CHA2DS2-VASc scores (5.79 ± 1.14 vs. 4.89 ± 1.56, P = 0.003), a higher rate of prior stroke (90.9% vs. 66.7%, P = 0.007), more LAA lobes (3.13 ± 1.18 vs. 2.64 ± 1.12, P = 0.038), and a higher prevalence of LAAT (63.6% vs. 13.1%, P &lt; 0.001). After having adjusted the logistic model, only contrast retention, LAA cauliflower morphology and left ventricular ejection fraction (LVEF) were independently associated with LAAT. Patients with LAA contrast retention have a higher risk of left atrial appendage thrombosis. Contrast retention may be a cardiac factor strongly associated with cardiogenic stroke. abstract_id: PUBMED:33470330 Value of left atrial diameter with CHA2DS2-VASc score in predicting left atrial/left atrial appendage thrombosis in non-valvular atrial fibrillation. Background: Atrial fibrillation is the most common persistent arrhythmia, and is the main factor that leads to thromboembolism. Objective: To investigate the value of left atrial diameter combined with CHA2DS2-VASc score in predicting left atrial/left atrial appendage thrombosis in non-valvular atrial fibrillation. Methods: This is a retrospective study. 238 patients with non-valvular atrial fibrillation were selected and divided into two groups: thrombosis and non-thrombosis. CHA2DS2-VASc score was determined. P&lt;0.05 was considered statistically significant. Results: Multivariate logistic regression analysis revealed that the history of stroke/transient ischemic attack, vascular disease, CHA2DS2-VASc score, left atrial diameter (LAD), left ventricular end-diastolic dimension (LVEDD) and left ventricular ejection fraction (LVEF) were independent risk factors for left atrial/left atrial appendage thrombosis (p&lt;0.05). Receiver operating characteristic curve analysis revealed that the area under the curve for the CHA2DS2-VASc score in predicting left atrial/left atrial appendage thrombosis was 0.593 when the CHA2DS2-VASc score was ≥3 points, and sensitivity and specificity were 86.5% and 32.6%, respectively, while the area under the curve for LAD in predicting left atrial/left atrial appendage thrombosis was 0.786 when LAD was ≥44.17 mm, and sensitivity and specificity were 89.6% and 60.9%, respectively. Among the different CHA2DS2-VASc groups, the incidence rate of left atrial/left atrial appendage thrombosis in patients with LAD ≥44.17 mm was higher than patients with LAD &lt;44.17 mm (p&lt;0.05). Conclusion: CHA2DS2-VASc score and LAD are correlated with left atrial/left atrial appendage thrombosis in non-valvular atrial fibrillation. For patients with a CHA2DS2-VASc score of 0 or 1, when LAD is ≥44.17 mm, the risk for left atrial/left atrial appendage thrombosis remained high. (Arq Bras Cardiol. 2020; [online].ahead print, PP.0-0). abstract_id: PUBMED:30460239 Complex Left Atrial Appendage Morphology Is an Independent Risk Factor for Cryptogenic Ischemic Stroke. Importance: Ischemic strokes pose a significant health burden. However, the etiology of between 20 and 40% of these events remains unknown. Left atrial appendage morphology may influence the occurrence of thromboembolic events. Design: A retrospective cross-sectional study was conducted to investigate the role of LAA morphology in patients with atrial fibrillation (AF) and cardioembolic-associated stroke and patients with cryptogenic stroke without atrial fibrillation. LAA morphology is classified into two groups: (1) simple (chicken-wing) vs. (2) complex (non-chicken wing) based on transesophageal echocardiography (TEE) findings. In addition to the LAA morphology, left atrial parameters, including orifice diameter, depth, emptying velocity, and filling velocity, were collected for both groups. Mathematical, computational models were constructed to investigate flow velocities in chicken-wing and non-chicken wing morphological patterns to assess LAA function further. Findings: TEE values for volume, size, emptying, and filling velocities were similar between simple and complex LAA morphology groups. Patients with cryptogenic stroke without coexisting AF were noted to have significantly higher rates of complex LAA morphology. Chicken-wing LAA morphology was associated with four-fold higher flow rate (kg/s) in computational simulations. Conclusions: Complex LAA morphology may be an independent contributing factor for cryptogenic strokes. Further studies are warranted to investigate the mechanism involved in LAA morphology and thromboembolic events. abstract_id: PUBMED:24076444 Relationship between left atrial appendage morphology and stroke in patients with atrial fibrillation. Background: Atrial fibrillation (AF) is an important cause of stroke. Given the morbidity and mortality associated with stroke, the risk stratification of patients based on left atrial appendage (LAA) characteristics is of great interest. Objective: To explore the association between LAA morphology and LAA characteristics including the extent of trabeculations, orifice diameter, and length with prevalent stroke in a large cohort of patients with drug refractory AF who underwent AF ablation to develop mechanistic insight regarding the risk of stroke. Methods: An institutional cohort of 1063 patients referred for AF ablation from 2003 to 2012 was reviewed to identify patients that underwent preprocedural cardiac computed tomography (CT). LAA morphology was characterized as chicken wing, cactus, windsock, or cauliflower by using previously reported methodology. Left atrial size and LAA trabeculations, morphology, orifice diameter, and length were compared between patients with prevalent stroke and patients without prevalent stroke. Results: Of 678 patients with CT images, 65 (10%) had prior stroke or transient ischemic attack. In univariate analyses, prevalent heart failure (7.7% in cases vs 2.8% in controls; P = .033), smaller LAA orifice (2.26 ± 0.52 cm vs 2.78 ± 0.71 cm ; P &lt; .001), shorter LAA length (5.06 ± 1.17 cm vs 5.61 ± 1.17 cm; P &lt; .001), and extensive LAA trabeculations (27.7% vs 14.4%; P = .019) were associated with stroke. LAA morphologies were unassociated with stroke risk. In multivariable analysis, smaller LAA orifice diameter and extensive LAA trabeculations remained independently associated with thromboembolic events. Conclusions: The extent of LAA trabeculations and smaller LAA orifice diameter are associated with prevalent stroke and may mediate the previously described association of cauliflower LAA morphology with stroke. abstract_id: PUBMED:28553875 AssesSment of the left atrial appendage morphoLogy in patients aAfter ischaeMic Stroke - the ASSAM study protocol. Stroke remains the most dangerous and frightening complication of atrial fibrillation (AF). A causal relationship between ischaemic stroke and atrial arrhythmias such as AF or atrial flutter has been well established. Numerous factors predisposing to peripheral embolism in patients with AF have been well established and included in the CHA₂DS₂-VASc score. Although proper anticoagulation minimises the risk attributable to "known" risk factors, stroke may still occur. Thus, "unknown" risk factors may play an important role in stroke risk stratification in patients with AF. We assume that one of the important "unknown" risk factor is the left atrial appendage morphology. The ASSAM study is planned to include 100 patients after ischaemic stroke or transient ischaemic attack (TIA) and known status of anticoagulation at the time of stroke. The control group will consist of 100 patients scheduled for AF ablation without a history of stroke or TIA. abstract_id: PUBMED:33950066 Analysis of risk factors for thrombosis of the left atrium/left atrial appendage in patients with non-valvular atrial fibrillation. Objectives: Left atrial appendage (LAA) morphology is a powerful predictor of thrombogenesis of the left atrium (LA) in patients with non-valvular atrial fibrillation (NVAF). However, it remains unknown whether LAA morphology is useful for stroke risk stratification in patients with NVAF. Methods: A total of 555 atrial fibrillation patients were divided into thrombus and non-thrombus groups according to transoesophageal echocardiography. We analysed the correlation between LAA morphology and the CHADS2 score. We determined the L2CHADS2 score and compared the ability to predict LA/LAA thrombosis of the CHADS2, L2CHADS2 and CHA2DS2-VASc scores from the area under the curve (AUC). Results: The odds ratio of non-chicken wing LAA morphology was 11.48. Non-chicken wing LAA morphology was significantly correlated with LA/LAA thrombosis. We incorporated LAA morphology into the CHADS2 score and named it the L2CHADS2 score. The AUC of the L2CHADS2 score (0.767) in predicting LA/LAA thrombosis was significantly higher than that of the CHADS2 (0.558) or CHA2DS2-VASc scores (0.557). The positive and negative predictive values of the L2CHADS2 score (13.1 and 98.7%) were higher than those of the CHADS2 (8.7 and 94.2%) and CHA2DS2-VASc scores (6.9 and 6.9%). Conclusions: Non-chicken wing LAA morphology was a powerful predictor of LA/LAA thrombosis in NVAF patients. The AUC, sensitivity and specificity of the L2 CHADS2 score were higher than those of the CHADS2 and CHA2 DS2 -VASc scores. Answer: Yes, the left atrial appendage (LAA) morphology does correlate with the risk of stroke in patients with atrial fibrillation (AF). Several studies have found that non-chicken wing LAA morphologies, such as windsock, cactus, and cauliflower, are associated with a higher risk of stroke and thrombus formation compared to the chicken wing morphology. For instance, a study found that patients with non-chicken wing LAA morphology had a significantly higher rate of ischemic stroke compared to those with chicken wing morphology (PUBMED:36223248). Another study reported that non-chicken wing morphologies were associated with a higher risk of thrombus formation, and in the presence of a thrombus, patients with chicken wing morphology had a doubled risk for neuro-embolic events compared to those with non-chicken wing morphology (PUBMED:36803426). Further supporting this correlation, a cross-sectional analysis indicated that windsock and broccoli morphologies were at higher risk for thrombus formation compared to chicken wing morphology (PUBMED:33512055). A meta-analysis also concluded that patients with chicken wing LAA morphology are less likely to develop thromboembolic events (TE) than patients with non-chicken wing morphology, suggesting that LAA morphology could be a valuable criterion in predicting TE and could affect the stratification and anticoagulation management of patients with low to intermediate TE risk (PUBMED:27016474). Moreover, complex LAA morphology has been identified as an independent risk factor for cryptogenic ischemic stroke (PUBMED:30460239), and the extent of LAA trabeculations and smaller LAA orifice diameter have been associated with prevalent stroke, potentially mediating the association of cauliflower LAA morphology with stroke (PUBMED:24076444). The L2CHADS2 score, which incorporates LAA morphology into the CHADS2 score, was found to have a higher area under the curve (AUC) in predicting LA/LAA thrombosis than the CHADS2 or CHA2DS2-VASc scores, indicating that non-chicken wing LAA morphology is a powerful predictor of LA/LAA thrombosis in patients with non-valvular AF (PUBMED:33950066). In summary, the morphology of the LAA is an important factor in assessing the risk of stroke in patients with AF, with non-chicken wing morphologies being associated with a higher risk.
Instruction: Complications and outcome after acute stroke. Does dysphagia matter? Abstracts: abstract_id: PUBMED:8685928 Complications and outcome after acute stroke. Does dysphagia matter? Background And Purpose: The published data on the relationship between dysphagia and both outcome and complications after acute stroke have been inconclusive. We examined the relationship between these, using bedside assessment and videofluoroscopic examination. Methods: We prospectively studied 121 consecutive patients admitted with acute stroke. A standardized bedside assessment was performed by a physician. We performed videofluoroscopy blinded to this assessment within 3 days of stroke onset and within a median time of 24 hours of the bedside evaluations. The presence of aspiration was recorded. Mortality, functional outcome, lengthy of stay, place of discharge, occurrence of chest infection, nutritional status, and hydration were the main outcome measures. Results: Patients with an abnormal swallow (dysphagia) on bedside assessment had a higher risk of chest infection (P=.05) and a poor nutritional state (P=.001). The presence of dysphagia was associated with an increased risk of death (P=.001), disability (P=.02), length of hospital stay (P&lt;.001), and institutional care (P&lt;.05). When other factors were taken into account, dysphagia remained as an independent predictor of outcome only with regard to mortality. The use of videofluoroscopy in detecting aspiration did not add to the value of bedside assessment. Conclusions: Bedside assessment of swallowing is of use in identifying patients at risk of developing complications. The value of routine screening with videofluoroscopy to detect aspiration is questioned. abstract_id: PUBMED:35038706 Complications during Fiberoptic Endoscopic Evaluation of Swallowing in 5,680 Examinations. Objective: To evaluate retrospectively the incidence of complications during fiberoptic endoscopic evaluation of swallowing (FEES) in 5,680 examinations. Patients And Methods: 5,680 patients were evaluated at the Department of Otorhinolaryngology, Audiology and Phoniatrics of Pisa University Hospital between January 2014 and December 2018, involving both inpatients and outpatients. Most common comorbidities included neurological pathologies such as stroke (11.8%), neurodegenerative diseases (28.9%) and a history of previous head and neck surgery (24.6%). The evaluation was conducted by clinicians with experience in swallowing for a minimum of 10 years with the assistance of one or more speech-language pathologists. Results: In all patients studied the endoscope insertion was tolerated, and it was possible to visualize the pharyngolaryngeal structures. Three subjects refused to undergo the procedure after being informed regarding the protocol and were therefore not included in this study. Most patients reported discomfort (70.1%) and gagging (20.8%). In a minority of patients complications were recorded, such as anterior epistaxis (0.1%), posterior epistaxis (0.02%), vasovagal crises (0.08%) and laryngospasm (0.04%). Especially laryngospasm was recorded in patients affected by amyotrophic lateral sclerosis. Multivariate binary logistic regression showed that discomfort (OR 9.944; CI 7.643-12.937), chronic gastrointestinal diseases (OR 2.003; CI 1.518-2.644), neurodegenerative diseases (OR 1.550; CI 1.302-1.846) and brain tumors (OR 1.577; CI 1.179-2.111) were risk factors associated with minor complications. Conclusions: FEES proved to be easy to perform, well tolerated by the patients and cost-effective. It can be performed at the patient's bedside, and it is characterized by a low rate of complications. As a matter of fact, normally only discomfort, gagging and/or vomiting are reported. Complications occurred only rarely, such as anterior or posterior epistaxis episodes or vasovagal crises, but these are still easily managed. Exceptionally, more severe complications are reported: adverse drug reactions to substances such as blue dye (methylene blue) and local anesthetics (not used in our protocol), and laryngospasm. abstract_id: PUBMED:9422626 The risk and complications of aspiration following cricopharyngeal myotomy. This article reviews a series of patients undergoing cricopharyngeal myotomy and determines whether clinically dangerous aspiration is seen postoperatively. A total of 23 patients underwent myotomy. Indications included Zenker's diverticulum (14), anticipated or real dysphagia from skull base lesions (5), dysphagia from stroke (2), and dysphagia from glossectomy and radiation therapy (2). Surgical procedures, complications, and effectiveness were reviewed. Of patients with Zenker's diverticula, 13 of 14 had clinically useful improvement in dysphagia. Of patients with skull base lesions, all 5 had improvement (4 of these also had thyroplasites and cervical plexus-to-superior laryngeal nerve anastomoses). Of the patients with strokes, neither had significant improvement. Of the patients with glossectomy and radiation, 1 had useful improvement. Complications were seen in 5 patients: 2 had self-limiting pharyngeal leaks, and 3 had pneumonia 1-4 months postoperatively. One patient also had a postoperative ipsilateral recurrent laryngeal nerve injury. There were no postoperative deaths. In conclusion, cricopharyngeal myotomy has definite utility in the management of cervical dysphagia, even though the etiology of the dysphagia can be multifactorial. Risks directly attributable to the procedure are usually self-limiting; serious complications are usually associated with the underlying disease. The addition of adjunctive procedures, such as thyroplasty and superior laryngeal nerve reinnervation, may be of additional benefit to patients with high extracranial vagal injuries. abstract_id: PUBMED:34262441 The Role of White Matter in the Neural Control of Swallowing: A Systematic Review. Background: Swallowing disorders (dysphagia) can negatively impact quality of life and health. For clinicians and researchers seeking to improve outcomes for patients with dysphagia, understanding the neural control of swallowing is critical. The role of gray matter in swallowing control has been extensively documented, but knowledge is limited regarding the contributions of white matter. Our aim was to identify, evaluate, and summarize the populations, methods, and results of published articles describing the role of white matter in neural control of swallowing. Methods: We completed a systematic review with a multi-engine search following PRISMA-P 2015 standards. Two authors screened articles and completed blind full-text review and quality assessments using an adapted U.S. National Institute of Health's Quality Assessment. The senior author resolved any disagreements. Qualitative synthesis of evidence was completed. Results: The search yielded 105 non-duplicate articles, twenty-two of which met inclusion criteria. Twenty were rated as Good (5/22; 23%) or Fair (15/22; 68%) quality. Stroke was the most represented diagnosis (n = 20; 91%). All studies were observational, and half were retrospective cohort design. The majority of studies (13/22; 59%) quantified white matter damage with lesion-based methods, whereas 7/22 (32%) described intrinsic characteristics of white matter using methods like fractional anisotropy. Fifteen studies (68%) used instrumental methods for swallowing evaluations. White matter areas commonly implicated in swallowing control included the pyramidal tract, internal capsule, corona radiata, superior longitudinal fasciculus, external capsule, and corpus callosum. Additional noteworthy themes included: severity of white matter damage is related to dysphagia severity; bilateral white matter lesions appear particularly disruptive to swallowing; and white matter adaptation can facilitate dysphagia recovery. Gaps in the literature included limited sample size and populations, lack of in-depth evaluations, and issues with research design. Conclusion: Although traditionally understudied, there is sufficient evidence to conclude that white matter is critical in the neural control of swallowing. The reviewed studies indicated that white matter damage can be directly tied to swallowing deficits, and several white matter structures were implicated across studies. Further well-designed interdisciplinary research is needed to understand white matter's role in neural control of normal swallowing and in dysphagia recovery and rehabilitation. abstract_id: PUBMED:35832937 Cerebrovascular Complications of Vestibular Schwannoma Surgery. Objective Cerebrovascular complications (CVC) are rare consequences of vestibular schwannoma (VS) surgery. Our objective was to assess incidences of findings suggestive of postoperative CVC in a large single surgeon cohort, as well as potential risk factors, and implications. Study Design A cohort of 591 patients was retrospectively reviewed. Postoperative magnetic resonance images were screened for findings suggestive of stroke, T2 hyperintensity in the cerebellopontine angle structures or new encephalomalacia. Clinical records were queried for findings consistent with postoperative CVC. Results In total, 61 patients had radiographic findings consistent with possible postoperative CVC (10%); of them, eight had documented intraoperative vascular injury (1.4%), and four had postoperative clinical exam changes indicative of CVC (0.7%). Clinically manifest intraoperative vascular injuries occurred in four patients and involved the petrosal venous complex ( n = 3, 5%) or anterior inferior cerebellar artery ( n = 1, 2%); clinical deficits included hemiparesis ( n = 1, 2%), facial anesthesia ( n = 2, 4%), dysphagia ( n = 2, 2%), and unfavorable facial nerve function in two (50%). Three out of four patients in this group required out-of-home placement (75%). Clinical CVCs ( n = 4) were not significantly associated with tumor size, tumor cyst, gross total resection, or length of stay. Patients with clinical CVC were significantly more likely to require posthospitalization rehabilitation (19 vs. 75%, p = 0.02; 14 vs. 100%, p = 0.0002). Conclusion Although radiographic findings suggestive of CVC were unexpectedly common in this cohort, intraoperative vascular injury and postoperative clinical CVC were exceedingly rare. The association between unfavorable facial nerve outcome and clinical CVC is likely a marker for more difficult operations, predisposing to higher risk of complications. abstract_id: PUBMED:17172601 Long-term outcome after stroke: does dysphagia matter? Background: swallowing problems (dysphagia) are common following acute stroke and are independent predictors of short-term outcome. It is uncertain as to whether these swallowing problems are associated with outcome in the longer-term. Aim: insert to determine whether dysphagia present in the first week of acute stroke associated with long-term outcome. Methods: a population-based long-term follow-up of people with first in a life-time stroke. Dysphagia was assessed within 1 week of stroke and patients were followed up at 3 months and yearly for 5 years by face-to-face interview. Outcome was defined by survival and place of residence, using multinomial logistic regression. Barthel Scores were divided into the two groups 15-20 and 0-14, and modelled using multiple logistic regression. Results: there were 567 patients with dysphagia (mean age 74.3 years) and 621 with a safe swallow (mean age 69.6 years). Following multinomial logistic regression, residence in a nursing home was more likely to occur in those who failed the swallow test during the first week of their stroke; however, this only reached statistical significance at 3 months (relative risk ratio (RRR)=1.73; 95% confidence interval (CI) 1.02 to 2.95), and years 4 (RRR 3.35, 1.37-8.19) and 5 (RRR 3.06, 1.06-8.83). There was also a significant association with increased mortality only during the first three months (RRR 2.03, 1.12 to 3.67). Conclusion: this study confirms that the presence of dysphagia during the acute phase of stroke is associated with poor outcome during the subsequent year, particularly at 3 months, and is associated with increased institutionalisation rate in the long term. abstract_id: PUBMED:18679534 Complications of and controversies associated with percutaneous endoscopic gastrostomy: report of a case and literature review. Context: Percutaneous endoscopic gastrostomy (PEG) is one of the most commonly performed gastrointestinal procedures, despite absence of benefit in many patients and risks associated with the procedure. Increased education of primary care physicians about the shortcomings of PEG may allow for better selection of patients to be referred for PEG placement. Evidence Acquisition: We performed a comprehensive literature review by searching PUBMED using the search headings percutaneous enteral gastrostomy, PEG, complications, dementia, stroke, dysphagia, malnutrition, and complications. We identified English language articles from 1980 onward. The highest quality data were considered to be randomized controlled trials although given the paucity of trials in this area, we used all of the various types of literature. Evidence Synthesis: We based the major conclusions of this review, where possible, on the most robust literature, namely, controlled trials. However, the majority of the available literature in this field is based on case series. We attempted to maximize the use of larger case series with longer term follow-up. Case reports were used only to report on rare complications where no other literature was available. Conclusions: Despite more than 30 years of experience with PEG, numerous questions remain regarding the utility of nutrition support in many of the clinical scenarios in which PEG placement is contemplated. There is a multitude of evidence that artificial nutrition does not improve outcome or quality of life in patients with dementia who have decreased oral intake. It is likely that ethical, moral, religious, and legal considerations of family members and caregivers play a role in the decision to place a PEG in a patient with dementia despite the medical evidence demonstrating lack of benefit. abstract_id: PUBMED:35493949 Transcutaneous Auricular Vagus Nerve Stimulation Promotes White Matter Repair and Improves Dysphagia Symptoms in Cerebral Ischemia Model Rats. Background: Clinical and animal studies have shown that transcutaneous auricular vagus nerve stimulation (ta-VNS) exerts neuroprotection following cerebral ischemia. Studies have revealed that white matter damage after ischemia is related to swallowing defects, and the degree of white matter damage is related to the severity of dysphagia. However, the effect of ta-VNS on dysphagia symptoms and white matter damage in dysphagic animals after an ischemic stroke has not been investigated. Methods: Middle cerebral artery occlusion (MCAO) rats were randomly divided into the sham, control and vagus nerve stimulation (VNS) group, which subsequently received ta-VNS for 3 weeks. The swallowing reflex was measured once weekly by electromyography (EMG). White matter remyelination, volume, angiogenesis and the inflammatory response in the white matter were assessed by electron microscopy, immunohistochemistry, stereology, enzyme-linked immunosorbent assay (ELISA) and Western blotting. Results: ta-VNS significantly increased the number of swallows within 20 s and reduced the onset latency to the first swallow. ta-VNS significantly improved remyelination but did not alleviate white matter shrinkage after MCAO. Stereology revealed that ta-VNS significantly increased the density of capillaries and increased vascular endothelial growth factor (VEGF) and basic fibroblast growth factor (FGF2) expression in the white matter. ta-VNS significantly alleviated the increase inTLR4, MyD88, phosphorylated MAPK and NF-κB protein levels and suppressed the expression of the proinflammatory factors IL-1β and TNF-α. Conclusion: These results indicated ta-VNS slightly improved dysphagia symptoms after ischemic stroke, possibly by increasing remyelination, inducing angiogenesis, and inhibiting the inflammatory response in the white matter of cerebral ischaemia model rats, implying that ta-VNS may be an effective therapeutic strategy for the treatment of dysphagia after ischemic stroke. abstract_id: PUBMED:25214444 Gastrointestinal complications after ischemic stroke. Ischemic stroke is an important cause of morbidity and mortality, and currently the leading cause of adult disability in developed countries. Stroke is associated with various non-neurological medical complications, including infections and thrombosis. Gastrointestinal complications after stroke are also common, with over half of all stroke patients presenting with dysphagia, constipation, fecal incontinence or gastrointestinal bleeding. These complications are associated with increased hospital length of stay, the development of further complications and even increased mortality. In this article we review the epidemiology, pathophysiology, diagnosis, management and prevention of the most common gastrointestinal complications associated with ischemic stroke. abstract_id: PUBMED:24142070 Risk factors for complications of percutaneous endoscopic gastrostomy. Background: Percutaneous endoscopic gastrostomy (PEG) is a commonly performed procedure for patients with severe dysphagia leading to malnutrition. Improved knowledge of risk factors for PEG-related complications might decrease patient discomfort and healthcare costs. Aim: The aim of the present study was to investigate factors associated with complications after PEG. Methods: A retrospective review was performed for all patients referred for PEG placement from December 2002 to December 2012 in single-tertiary care center. PEG-related complications and risk factors were evaluated through chart reviews, endoscopic reports, and endoscopic and radiologic images. Results: Among a total of 245 consecutive individuals (146 male, mean age 59.2 ± 12.6 years) enrolled, 43 major complications had developed. Multivariate analysis revealed that patients with an internal bolster of a PEG tube in the upper body of stomach were at significant risk for early [OR 6.127 (95 % CI 1.447-26.046)] and late complications [OR 6.710 (95 % CI 1.692-26.603)]. Abnormal leukocyte counts [OR 3.198 (95 % CI 1.174-8.716)], stroke as an indication for PEG [OR 3.047 (95 % CI 1.174-8.882)], and PEG tube placement by an inexperienced endoscopist [OR 3.401 (95 % CI 1.073-10.779)] were significantly associated with early complications. Conclusions: A PEG tube should not be inserted into the upper body of stomach to reduce complication risk, and PEG procedures should be performed by skilled endoscopists to prevent early complications. An abnormal leukocyte count can be a predictor of early complication, and care is needed when PEG is performed for patients with stroke. Answer: Yes, dysphagia does matter in the context of complications and outcomes after acute stroke. Dysphagia, or difficulty swallowing, has been associated with a higher risk of complications and poorer outcomes in stroke patients. Patients with dysphagia after an acute stroke have been found to have a higher risk of chest infection and poor nutritional state (PUBMED:8685928). Dysphagia is also associated with an increased risk of death, disability, longer hospital stays, and a greater likelihood of requiring institutional care. Even when other factors are taken into account, dysphagia remains an independent predictor of mortality (PUBMED:8685928). Furthermore, dysphagia present in the first week of acute stroke is associated with poor outcomes in the subsequent year, particularly at 3 months, and is associated with increased institutionalization rate in the long term (PUBMED:17172601). This suggests that the presence of dysphagia during the acute phase of stroke has long-term implications for patient outcomes. The management of dysphagia in stroke patients is also important to consider. For instance, fiberoptic endoscopic evaluation of swallowing (FEES) is a procedure used to assess swallowing function and has been shown to be well-tolerated by patients, cost-effective, and characterized by a low rate of complications (PUBMED:35038706). However, complications such as discomfort, gagging, and rare instances of epistaxis or laryngospasm can occur. In terms of interventions, cricopharyngeal myotomy, a surgical procedure to improve swallowing, has shown utility in managing cervical dysphagia, although serious complications are usually associated with the underlying disease rather than the procedure itself (PUBMED:9422626). Additionally, research has indicated that white matter damage can be directly tied to swallowing deficits, and several white matter structures have been implicated in the control of swallowing (PUBMED:34262441). This highlights the importance of understanding the neural control of swallowing for improving outcomes in patients with dysphagia following a stroke. In summary, dysphagia is a significant factor in the complications and outcomes of stroke patients, affecting mortality, morbidity, and the need for long-term care. Proper assessment and management of dysphagia are crucial for improving patient outcomes after an acute stroke.
Instruction: Nailfold capillaroscopy and laser Doppler fluxmetry for evaluation of Raynaud's phenomenon: how valid is the local cooling test? Abstracts: abstract_id: PUBMED:9286153 Nailfold capillaroscopy and laser Doppler fluxmetry for evaluation of Raynaud's phenomenon: how valid is the local cooling test? Background: Since there is a need for an objective parameter of microcirculation in follow-up of patients with Raynaud's syndrome we evaluated the reproducibility of the reaction to a cold exposure test with nailfold capillaroscopy and laser Doppler fluxmetry during a winter period. Patients And Methods: 10 healthy subjects and 16 patients with primary Raynaud's syndrome were evaluated with nailfold capillaroscopy and laser Doppler fluxmetry of the finger tips during and after a standardized cold exposure stress test. The measurements were repeated in the controls within 2 weeks, in the patient population twice within 24 weeks. Results: In the healthy controls the percentage of capillaries with flow stop was stable (r = 0.785) and the duration of flow stop was reproducible (r = 0.993). Both parameters were significantly lower in controls than in the patients. The flow stop durations in patients varied intra-individually to a great extent between the different weeks without any significant correlation. For all calculated laser Doppler perfusion and time parameters we did not find any significant differences between controls and patients nor any consistent correlations in the intra-individual comparison within the different weeks for both, controls and patients. Conclusion: Nailfold capillaroscopy during cold exposure is able to discriminate between healthy persons and patients with primary Raynaud's syndrome, but seems to be of minor value for follow-up evaluation of patients because of intraindividual variations. Laser Doppler fluxmetry is invalid for both purposes when using the applied cold exposure test. abstract_id: PUBMED:35585956 Nailfold capillaroscopy for the early diagnosis of the scleroderma spectrum of diseases in patients without Raynaud's phenomenon. Background: The utility of nailfold capillaroscopy in the evaluation of patients without Raynaud's phenomenon is unclear. Objective: This study aims to compare the utility of nailfold capillaroscopy for the early diagnosis of the scleroderma-spectrum of diseases in patients who present with Raynaud's phenomenon, undifferentiated non-Raynaud's phenomenon features and positive systemic sclerosis-associated antibodies without scleroderma-spectrum of disease features. Methods: Eligible patients were divided into three referral criteria groups: (I) Raynaud's phenomenon; (II) Undifferentiated non-Raynaud's phenomenon features and (III) Positive systemic sclerosis-associated autoantibodies without features to suggest scleroderma-spectrum of diseases. This includes systemic sclerosis, mixed connective tissue disease and dermatomyositis. The association between baseline scleroderma pattern on nailfold capillaroscopy (systemic sclerosis-nailfold capillaroscopy) and final diagnosis at follow-up was determined using logistic regression analysis. Test characteristics of nailfold capillaroscopy were compared and stratified by referral groups. Results: Of 95 patients followed-up for a mean of 1.6 years, 28 (29.5%) patients developed scleroderma-spectrum of diseases, 36 (37.9%) patients had suspected/other connective tissue disease and 27 (28.4%) patients had no connective tissue disease. Baseline systemic sclerosis-nailfold capillaroscopy was significantly associated with the development of scleroderma-spectrum of diseases in patients from Group I (odds ratio, 7.1, p = 0.01) and Group II (odds ratio 7.3, p = 0.005). In Group II patients, nailfold capillaroscopy had a sensitivity, specificity, positive and negative predictive values of 71.4%, 76.5%, 55.6% and 86.7%, respectively. Specificity (81.8%) and PPV (69.2%) were the highest in Group I patients. Nailfold capillaroscopy had the highest negative predictive value in Group III (100%), followed by Group II (86.7%) and Group I (78.3%) patients. Conclusion: In addition to evaluating patients with Raynaud's phenomenon, nailfold capillaroscopy was useful in the evaluation and exclusion of scleroderma-spectrum of diseases in patients with undifferentiated non-Raynaud phenomenon features and those with systemic sclerosis-associated antibodies without features to suggest scleroderma-spectrum of diseases. abstract_id: PUBMED:33600677 Nailfold capillaroscopy in systemic diseases: short overview for internal medicine. Nailfold capillaroscopy (NFC) is now one of the main imaging tools in systemic sclerosis and imposed over time as an easy, non-invasive method for the nailfold microvascular bed assessment. In qualitative NFC normal pattern is characterized by homogeneous, parallel fashion arrangement of the last capillaries row as well as by capillaries with hairpin or non-specific variations like tortuous and/ or crossing shape. Nailfold capillaroscopy is strongly recommended for evaluation of all patients with Raynaud phenomenon. Appearance of giant capillaries is chronologically the first relevant finding for scleroderma spectrum disorders development (systemic sclerosis, dermatomyositis, undifferentiated and mixed connective tissue disease). Collapses of the giant loops generate microhemorrhages and further capillary loss with subsequent hypoxia, and neoangiogenesis seen as ramified/ bushy capillaries. Nailfold capillaroscopy is indicated especially in systemic sclerosis, being also included in the classification criteria. Based on these major NFC pathologic findings (giant capillaries, microhemorrhages, avascularity and neoangiogenesis), three evolutive stages were described in systemic sclerosis, namely the early, active, and late scleroderma pattern. In other connective tissue diseases than those scleroderma-related, like systemic lupus erythematosus, psoriatic arthritis, or antiphospholipid syndrome, the interest for capillaroscopy is growing, but the attempts of defining specific characteristics failed until now. Besides qualitative NFC, semiquantitative and quantitative capillaroscopic assessments were proposed for more accurate evaluation. Lately, automated systems are under development. There is still need of more studies to sustain the nailfold capillaroscopy validity as diagnostic and prognostic test. abstract_id: PUBMED:36040673 Nailfold capillaroscopy: tips and challenges. Although nailfold capillaroscopy (NFC) appears to have a bright future in clinical practice, the lack of familiarity with the technique and how to interpret its outcomes is major barriers which have made nailfold capillaroscopy an underutilized method in standard clinical practice. Traditional methods for assessment and measurement of capillary patterns, density, and blood flow are falling behind and face some challenges. In fact, there have been calls for improvement, hence the recent publication of the standardization of NFC by the EULAR Study Group on Microcirculation in Rheumatic Diseases. Nailfold capillaroscopy has the advantage of being a non-invasive technique that provides a window into the digital microcirculation. This paved the way for a rapidly growing interest in using capillaroscopy parameters as outcome measures in research. In standard clinical practice, whilst its main application is in the identification of an underlying systemic sclerosis spectrum disorder in patients presenting with Raynaud's phenomenon, its use has expanded to include other clinical features possibly suggestive of an underlying connective tissue disease. This article presents the challenges, provides tips, and highlights the exciting potential of nailfold capillaroscopy in standard practice. abstract_id: PUBMED:37419757 Nailfold capillaroscopy. Nailfold capillaroscopy is a safe and well-established method for the assessment of structural alterations of the microcirculation. It is a crucial tool in the investigation and monitoring of patients presenting with Raynaud's phenomenon. Detection of the characteristic "scleroderma pattern" on capillaroscopy may indicate an underlying rheumatic disease, particularly systemic sclerosis (SSc). Herein, we highlight the practical aspects of videocapillaroscopy, including image acquisition and analysis, with mention of dermoscopy. Special emphasis is placed on standardized use of terminology to describe capillary characteristics. Systematic evaluation of images in discerning the normal from the abnormal using the validated European Alliance of Associations for Rheumatology (EULAR) Study Group consensus reporting framework is paramount. In addition to the relevance of capillaroscopy in the (very) early diagnosis of SSc, its emerging predictive value (especially capillary loss) for new organ involvement and disease progression is underscored. We further provide capillaroscopic findings in selected other rheumatic diseases. abstract_id: PUBMED:36353844 Clinical utility of nailfold capillaroscopy. Nailfold capillaroscopy (NFC) is a simple noninvasive microscopic technique used to identify characteristic morphological abnormalities in the nailfold capillaries. The presence of this microvasculopathy appears to be of fundamental importance in the pathological processes that underlie the scleroderma spectrum disorders (including dermatomyositis and antisynthetase myositis). This review discusses the different methodologies and techniques in performing NFC and stresses the diagnostic utility achieved with simple 'bedside' techniques utilising the ophthalmoscope, dermatoscope or smart phone. Recent advances in reporting abnormal microvascular patterns and vascular metrics (e.g. capillary density and dropout) are discussed. The aetiopathogenesis of the microvasculopathy is currently unknown but its close association with Raynaud Phenomena and specific autoantibodies together with recent observations from sequential NFC allows speculations on its possible mechanism. Finally, future developments in the use of NFC as a possible biomarker in the management of the scleroderma spectrum disorders are discussed, with a recommendation that NFC becomes more widely available, particularly in rheumatological, immunological and dermatological practice. NFC provides a clinically accessible window on the pathologic process fundamental to scleroderma-related disease. abstract_id: PUBMED:33493310 Quantitative nailfold capillaroscopy-update and possible next steps. We review the exciting potential (and challenges) of quantitative nailfold capillaroscopy, focusing on its role in systemic sclerosis. Quantifying abnormality, including automated analysis of nailfold images, overcomes the subjectivity of qualitative/descriptive image interpretation. First we consider the rationale for quantitative analysis, including the potential for precise discrimination between normal and abnormal capillaries and for reliable measurement of disease progression and treatment response. We discuss nailfold image acquisition and interpretation, and describe how early work on semi-quantitative and quantitative analysis paved the way for semi-automated and automated analysis. Measurement of red blood cell velocity is described briefly. Finally we give a personal view on 'next steps'. From a clinical perspective, increased uptake of nailfold capillaroscopy by general rheumatologists could be achieved via low-cost hand-held devices with cloud-based automated analysis. From a research perspective, automated analysis could facilitate large-scale prospective studies using capillaroscopic parameters as possible biomarkers of systemic sclerosis-spectrum disorders. abstract_id: PUBMED:8121320 Influence of local finger cooling on laser Doppler flux and nailfold capillary blood flow velocity in normal subjects and in patients with Raynaud's phenomenon. We examined the influence of local cooling (-15 degrees C, 60 sec) on blood flow velocity in nailfold capillaries and on laser Doppler flux (LDF) at the fingertip in 12 patients with primary Raynaud's phenomenon (PRP) and 13 patients with secondary Raynaud's (SRP) in comparison with 10 health controls. Skin temperature at the pulp of the finger decreased significantly during local cooling in RP but not in controls. LDF before cooling was comparable between patients and controls and dropped significantly under cooling. Flux motions at the fingertip were present in all individuals with 7.9-9.1 cycles/min. Flux motions persisted throughout the cooling test and faded away in 5 patients (1 PRP, 4 SRP) with cooling. Frequency of LDF motions was little influenced by a change in skin temperature. The amplitudes were dampened in relation to the drop in LDF level during local cooling in patients with RP. We conclude that LDF does respond to local cooling of the fingertip by significant decrease of perfusion but not with regard to flux motions. The wide overlap of the single values does not allow as good a separation of patients with RP from controls as capillary microscopy. abstract_id: PUBMED:18189191 Functional and morphological evaluation of hand microcirculation with nailfold capillaroscopy and laser Doppler imaging in Raynaud's and Sjögren's syndrome and poly/dermatomyositis. Objectives: Nailfold capillaroscopy is widely used in autoimmune patients to determine capillary morphology. Laser Doppler imaging (LDI) is a relatively new method for measuring the microcirculation of cutaneous perfusion. The aim of this study was to investigate the capillary morphology and microcirculation among patients with Sjögren's syndrome (SS) and poly/dermatomyositis (PM/DM) with these two non-invasive methods and to detect secondary Raynaud's syndrome (SRS) in these autoimmune diseases. Methods: Thirty patients with primary SS, 30 patients with PM/DM, 30 patients with primary Raynaud's syndrome (PRS), and 30 healthy volunteers were included in the study. Nailfold capillaroscopy and LDI were performed on each patient. Results: A comprehensive analysis was performed among the patients and healthy individuals. Among SS patients avascularity and among PM/DM patients avascularity and capillary morphology changes were most often detected by capillaroscopy. With LDI the mean steady-state cutaneous perfusion was 1.25 perfusion units (PU) in region of interest 1 (ROI1), 1.22 in ROI2, and 1.49 at the fingertips in PRS patients; the corresponding values were 1.2, 1.03, and 1.48 PU in SS, 0.91, 0.76, and 1.19 PU in PM/DM, and 1.79, 1.62, and 2.2 PU in the controls. The differences were significant between each autoimmune group compared to the control group (p&lt;0.02, p&lt;0.001, and p&lt;0.001, respectively). Conclusions: By using nailfold capillaroscopy, abnormalities in capillary morphology can be detected, and by using LDI, the reduced blood flow in the capillaries can be detected. These investigations can be useful in the detection of SRS, or in distinguishing whether the reduced blood flow is due to primary/systemic autoimmune diseases. abstract_id: PUBMED:35640131 Nailfold capillaroscopy: a survey of current UK practice and 'next steps' to increase uptake among rheumatologists. Objectives: To identify barriers to the use of nailfold capillaroscopy as a diagnostic tool for patients presenting with Raynaud's phenomenon in UK rheumatology centres and to obtain rheumatologists' views on a proposed internet-based standardized system for clinical reporting of nailfold capillaroscopy images. Methods: An online survey was developed using expert opinion from clinicians, scientists and health service researchers. The survey was piloted and sent to UK-based rheumatologists using established electronic mailing lists between October 2020 and March 2021. Survey data were analysed using descriptive statistics. Results: A total of 104 rheumatologists representing rheumatology centres across the UK responded to the survey. Wide variations in terms of workloads and practices were described. Thirty-four (33%) respondents reported using nailfold capillaroscopy only at their own centre, 33 (32%) referred to other centres, 9 (9%) did both and 28 (27%) did not use capillaroscopy at all. Of the 43 respondents using capillaroscopy on site, 25 (58%) used either a dermatoscope or universal serial bus microscope and 9 (21%) used videocapillaroscopy. Among the 61 respondents not undertaking capillaroscopy on site, barriers included lack of equipment (85%), lack of experience in acquiring images (69%) and lack of expertise in interpreting images (67%). Sixty-six respondents (63%) expressed interest in an internet-based, standardized automated system for reporting images. Conclusion: Most UK rheumatologists currently do not perform nailfold capillaroscopy on site. An internet-based nailfold capillaroscopy system for use with low-cost microscopes as well as with videocapillaroscopy could help increase uptake of capillaroscopy and thereby facilitate early diagnosis of SSc across the UK. Answer: The local cooling test, when used in conjunction with nailfold capillaroscopy and laser Doppler fluxmetry for the evaluation of Raynaud's phenomenon, has shown mixed validity. According to a study (PUBMED:9286153), nailfold capillaroscopy during cold exposure can discriminate between healthy individuals and patients with primary Raynaud's syndrome. The study found that the percentage of capillaries with flow stop and the duration of flow stop were stable and reproducible in healthy controls and significantly lower than in patients. However, the flow stop durations in patients varied greatly intra-individually over different weeks, indicating that nailfold capillaroscopy may have limited value for follow-up evaluation of patients due to these variations. On the other hand, laser Doppler fluxmetry did not show significant differences between controls and patients nor consistent correlations in the intra-individual comparison, rendering it invalid for both diagnostic and follow-up purposes when using the applied cold exposure test. Another study (PUBMED:8121320) examined the influence of local finger cooling on laser Doppler flux and nailfold capillary blood flow velocity in normal subjects and patients with Raynaud's phenomenon. The study concluded that while laser Doppler fluxmetry does respond to local cooling of the fingertip by a significant decrease in perfusion, the wide overlap of individual values did not allow for a clear separation of patients with Raynaud's phenomenon from controls as effectively as capillary microscopy. Overall, while nailfold capillaroscopy has shown some validity in distinguishing between healthy individuals and those with primary Raynaud's syndrome during a local cooling test, its variability in patients and the limited effectiveness of laser Doppler fluxmetry suggest that the local cooling test may not be entirely reliable for all evaluation purposes of Raynaud's phenomenon.
Instruction: Continuing stability of center differences in pediatric diabetes care: do advances in diabetes treatment improve outcome? Abstracts: abstract_id: PUBMED:17540955 Continuing stability of center differences in pediatric diabetes care: do advances in diabetes treatment improve outcome? The Hvidoere Study Group on Childhood Diabetes. Objective: To reevaluate the persistence and stability of previously observed differences between pediatric diabetes centers and to investigate the influence of demography, language communication problems, and changes in insulin regimens on metabolic outcome, hypoglycemia, and ketoacidosis. Research Design And Methods: This was an observational cross-sectional international study in 21 centers, with clinical data obtained from all participants and A1C levels assayed in one central laboratory. All individuals with diabetes aged 11-18 years (49.4% female), with duration of diabetes of at least 1 year, were invited to participate. Fourteen of the centers participated in previous Hvidoere Studies, allowing direct comparison of glycemic control across centers between 1998 and 2005. Results: Mean A1C was 8.2 +/- 1.4%, with substantial variation between centers (mean A1C range 7.4-9.2%; P &lt; 0.001). There were no significant differences between centers in rates of severe hypoglycemia or diabetic ketoacidosis. Language difficulties had a significant negative impact on metabolic outcome (A1C 8.5 +/- 2.0% vs. 8.2 +/- 1.4% for those with language difficulties vs. those without, respectively; P &lt; 0.05). After adjustement for significant confounders of age, sex, duration of diabetes, insulin regimen, insulin dose, BMI, and language difficulties, the center differences persisted, and the effect size for center was not reduced. Relative center ranking since 1998 has remained stable, with no significant change in A1C. Conclusions: Despite many changes in diabetes management, major differences in metabolic outcome between 21 international pediatric diabetes centers persist. Different application between centers in the implementation of insulin treatment appears to be of more importance and needs further exploration. abstract_id: PUBMED:34766429 Center differences in diabetes treatment outcomes among children with type 1 diabetes: A nationwide study of 3866 Danish children. Objective: Mean differences in HbA1c across centers are well established, but less well understood. The aim was to assess whether differences in patient case-mix can explain the variation in mean HbA1c between pediatric diabetes centers in Denmark. The association between HbA1c , frequency of blood glucose monitoring (BGM), treatment modality, and center visits was investigated. Research Design And Methods: This longitudinal nationwide study included 3866 Danish children with type 1 diabetes from 2013 to 2017 (n = 12,708 child-year observations) from 16 different pediatric diabetes centers. Mean HbA1c , proportion of children reaching HbA1c treatment target (HbA1c ≤ 58 mmol/mol [7.5%]) were compared across centers using linear regression models. This was done with and without adjustment for socioeconomic characteristics (patient case-mix). Results: The mean difference in HbA1c during follow-up was 11.6 mmol/mol (95% CI 7.9, 15.3) (1.1% [95% CI 0.7, 1.4]) when comparing the centers with the lowest versus highest mean HbA1c . The difference was attenuated and remained significant after adjustment for the patient case-mix (difference: 10.5 mmol/mol [95% CI 6.8, 14.2] (1.0% [95% CI 0.6, 1.3])). Overall, 6.8% of the differences in mean HbA1c across centers were explained by differences in the patient case-mix. Across centers, more frequent BGM was associated with lower HbA1c . The proportion of insulin pump users and number of visits was not associated with HbA1c . Conclusion: In a setting of universal health care, large differences in HbA1c across centers were found, and could not be explained by patient background, number of visits or use of technology. Only BGM was associated with center HbA1c . abstract_id: PUBMED:33070877 Ethnic differences in characteristics and outcome of acute ischemic stroke in China: A single center experience. Background: This study aimed to explore ethnic differences of acute ischemic stroke (AIS) in patients' characteristics, management and outcome in Chinese population in an institutional retrospective study. Methods: Data of consecutive patients with AIS treated at our center from January 2018 to December 2019 were retrospectively reviewed. Ethnic differences in demographics, risk factors, treatment, and outcomes were evaluated. Results: A total of 400 patients were included in the present analysis. Compared with Han patients, patients in ethnic group showed shorter interval between symptom onset and admission, lower baseline Glasgow coma scale (GCS) score, lower prevalence of diabetes, higher prevalence of medical history of anticoagulation or antiplatelet treatment, lower rates of partial anterior circulation infarct (PACI), lacunar infarct (LACI) and posterior circulation infarct (POCI). They were less likely to receive antiplatelet therapy and more likely to present higher risks of pulmonary infection. Furthermore, multivariable analysis revealed that the ethnic group was associated with unfavorable outcomes in AIS patients at discharge. In subgroup analysis after adjustment, only pulmonary infection remained an independent predictor of poor outcomes at discharge in ethnic group, while in Han group, baseline GCS score, total anterior circulation infarct (TACI), LACI and pulmonary infection were determined to be independent predictors of poor outcomes at discharge. Conclusions: Patients from ethnic minority groups presented distinctive characteristics in their baseline measurements, background cerebrovascular risk factors, treatment after admission, risk of in-hospital complications compared with Han patients. The ethnic difference of the AIS appeared to affect the neurological outcome. abstract_id: PUBMED:8565543 Prediction of three outcome states from pediatric intensive care. Objective: To develop a method based on admission day data for predicting patient outcome status as independently functional, compromised functional, or dead. Design: Prospectively acquired development and validation samples. Setting: A pediatric intensive care unit located in a tertiary care center. Patients: Consecutive admissions (n = 1,663) for predictor development, and consecutive admissions (n = 1,153) for predictor validation. Methods: Pediatric Risk of Mortality score, baseline Pediatric Overall Performance Category score, age, operative status, and primary diagnosis classified into ten organ systems and nine etiologies were recorded at the time of intensive care unit admission. Predictor was developed by stepwise polychotomous logistic regression analysis for the outcome functional, compromised, and dead. Model fit was evaluated by chi-square statistics; prediction performance was measured by the area under the receiver operating characteristic curve, and classification table analysis of observed vs. predicted outcomes. Measurements And Main Results: The resulting predictor included Pediatric Risk of Mortality, baseline Pediatric Overall Performance Category, operative status, age, and diagnostic factors from four systems (cardiovascular, respiratory, neurologic, gastrointestinal), and six etiologies (infection, trauma, drug overdose, allergy/immunology, diabetes, miscellaneous/undetermined). Its application to the validation sample yielded good agreement between the total number expected and the observed outcomes for each state (chi-square = 3.16, 2 degrees of freedom, p = .206), with area indices of 0.96 +/- 0.01 for discrimination of fully functional vs. the combination of the two poor outcome states (compromised or death), and 0.94 +/- 0.02 for discrimination of fully or compromised functional vs. death. The 3 x 3 classification resulted in correct classification rates of 83.2%, 74.4%, and 81.3%, for the outcomes functional, compromised, and death, respectively. Conclusions: Prediction of three outcome states using physiologic status, baseline functional level, and broad-based diagnostic groupings at admission is feasible and may improve the relevance of quality of care assessment. abstract_id: PUBMED:27819155 Drugs for the treatment of pediatric type 2 diabetes mellitus and related co-morbidities. Introduction: The continuing global epidemic of obesity in adolescents has raised the prevalence of type 2 diabetes mellitus (T2DM). Despite the wealth of information concerning T2DM in adults, rare data are available targeting treatment of T2DM in pediatric. Areas covered: This article has reviewed clinical practice guidelines, particularly the American Diabetes Association and the Pediatric Endocrine Society consensus, jointly with clinical trial data available in databases with respect to the use of available pharmacological options to treat T2DM and its complications in youth. Expert opinion: The use of other pharmacological treatments of T2DM in addition to metformin and insulin entails several problems. Since rare studies have been conducted on the medications available to manage T2DM in children, treating them may be more difficult than that of adults. It needs longer and larger size clinical trials along with better pharmacological agents to affect various pathophysiological mechanisms of diabetes. Meanwhile, the efficacy and safety of combinations therapies should be completed in preclinical and clinical phases. abstract_id: PUBMED:18375428 Continuing stability of center differences in pediatric diabetes care: do advances in diabetes treatment improve outcome? The Hvidoere Study Group on Childhood Diabetes: response to de Beaufort et al. N/A abstract_id: PUBMED:18570331 Prevalence and clinical outcome of inpatient hyperglycemia in a community pediatric hospital. Background: Inpatient hyperglycemia in adult patients with and without a history of diabetes is a predictor of poor clinical outcome. No previous studies, however, have examined the association of hyperglycemia and clinical outcome in children admitted to a community pediatric hospital. Methods: The study was a retrospective observational cohort of pediatric patients admitted to a community children's hospital from January 2004 to August 2004. Medical records of 903 consecutive children admitted to critical and non-critical care areas were reviewed. Of them, 342 patients (38%) had no blood glucose measurements during their hospital stay. In the remaining patients, we determined the prevalence of hyperglycemia and examined the association of hyperglycemia with clinical outcome. Results: A total of 406 patients (75%) had an admission blood glucose &lt; or =120 mg/dL (mean +/- SEM 98 +/- 1 mg/dL), 103 children (19%) had an admission blood glucose level of 121-179 mg/dL (mean 143 +/- 2 mg/dL), and 32 patients (5.9%) had a blood glucose level &gt; or =180 mg/dL (mean 260 +/- 18 mg/dL). Seventeen patients (13%) had a known history of diabetes prior to admission. Children with hyperglycemia were more likely to be admitted to the ICU (P &lt; .001) and had a longer length of ICU stay (P &lt; .001), but admission hyperglycemia was not associated with longer hospital stay or higher hospital mortality. Conclusions: Hyperglycemia is present in one-fourth of children admitted to the hospital, most of them without a history of diabetes prior to admission. Hyperglycemia was associated with a greater need for ICU care and longer ICU stay but not with increased in-hospital mortality. abstract_id: PUBMED:30662046 LONG-TERM OUTCOME OF PEDIATRIC KIDNEY TRANSPLANTATION: A SINGLE-CENTER EXPERIENCES (Background) Long-term care is necessary for normal growth and development of pediatric recipients of kidney transplants. We report on our experience with pediatric kidney transplantation (KTx) during the past 19 years. (Methods) We retrospectively analyzed the data from 26 recipients who received KTx between 1996 and 2014 at Niigata University Hospital (one patient underwent two consecutive KTx during the designated period). All recipients were 16 years old or younger at the time of KTx. (Results) The graft survival rates at 1, 5, and 10 years after transplantation were 96%, 96%, and 88%, respectively. Three recipients lost the renal graft function due to graft thrombosis, antibody mediated rejection and steroid resistant rejection. Drug non-adherence was associated with rejection episodes, which led to the increasing of estimated glomerular filtration rate (eGFR) level. In addition, renal graft function was related to the growth after KTx. Eighteen recipients graduated from high school during follow-up periods and 17 recipients obtained employment. (Conclusion) Interventions promoting adherence should be implemented among pediatric recipients and parents to optimize graft survival and growth after KTx. Successful KTx contributed the high rate of social participation and employment after pediatric KTx. abstract_id: PUBMED:20642071 A proposal of continuing health education to update health team professionals in diabetes mellitus The objective of the study was to report the experience of Continuing Health Education to update the health team of a Health Center about comprehensive and humanized care to individuals with type 2 diabetes. The methodology used to train the health teams was Continuing Health Education by means of educational workshops in diabetes focused on the problem-posing of the professional and educational process. Hence, it was sought to design a program for education in diabetes with a view to changing and reorienting the practice of the health team. Continuing Education helped to integrate the individuals, strengthened the professional commitment and developed the group awareness. The experience of including education in the everyday activities of the health teams helped to improve the integration among university, service and community, thus favoring the process of planning and organizing the educational program, valuing interdisciplinarity. abstract_id: PUBMED:29159931 Targets and teamwork: Understanding differences in pediatric diabetes centers treatment outcomes. Objective: The reason for center differences in metabolic control of childhood diabetes is still unknown. We sought to determine to what extent the targets, expectations, and goals that diabetes care professionals have for their patients is a determinant of center differences in metabolic outcomes. Research Design And Methods: Children, under the age of 11 with type 1 diabetes and their parents treated at the study centers participated. Clinical, medical, and demographic data were obtained, along with blood sample for centralized assay. Parents and all members of the diabetes care team completed questionnaires on treatment targets for hemoglobin A1c (HbA1c) and recommended frequency of blood glucose monitoring. Results: Totally 1113 (53% male) children (mean age 8.0 ± 2.1 years) from 18 centers in 17 countries, along with parents and 113 health-care professionals, participated. There were substantial differences in mean HbA1c between centers ranging from 7.3 ± 0.8% (53 mmol/mol ± 8.7) to 8.9 ± 1.1% (74 mmol/mol ± 12.0). Centers with lower mean HbA1c had (1) parents who reported lower targets for their children, (2) health-care professionals that reported lower targets and more frequent testing, and (3) teams with less disagreement about recommended targets. Multiple regression analysis indicated that teams reporting higher HbA1c targets and more target disagreement had parents reporting higher treatment targets. This seemed to partially account for center differences in Hb1Ac. Conclusions: The diabetes care teams' cohesiveness and perspectives on treatment targets, expectations, and recommendations have an influence on parental targets, contributing to the differences in pediatric diabetes center outcomes. Answer: The abstracts provided suggest that despite advances in diabetes treatment, major differences in metabolic outcomes between pediatric diabetes centers persist. The Hvidoere Study Group on Childhood Diabetes observed that mean A1C levels varied substantially between centers, with no significant change in A1C or relative center ranking since 1998, indicating that advances in diabetes treatment have not uniformly improved outcomes across centers (PUBMED:17540955). This study also highlighted that language difficulties negatively impacted metabolic outcomes and that different applications of insulin treatment between centers might be of more importance (PUBMED:17540955). Similarly, a nationwide study of Danish children with type 1 diabetes found large differences in HbA1c across centers that could not be explained by patient background, number of visits, or use of technology. Only blood glucose monitoring frequency was associated with center HbA1c, suggesting that factors other than advances in treatment modalities might influence outcomes (PUBMED:34766429). Furthermore, the study titled "Targets and teamwork: Understanding differences in pediatric diabetes centers treatment outcomes" indicated that the cohesiveness of diabetes care teams and their perspectives on treatment targets and recommendations influenced parental targets, which in turn contributed to differences in pediatric diabetes center outcomes (PUBMED:29159931). These findings collectively suggest that while there have been advances in diabetes treatment, the improvement in outcomes is not solely dependent on these advancements. Other factors, such as the implementation of insulin treatment, communication issues, frequency of blood glucose monitoring, and the cohesiveness and perspectives of diabetes care teams, play a significant role in the variability of outcomes across different pediatric diabetes care centers.
Instruction: Are surgeons effective counsellors for women with a family history of breast cancer? Abstracts: abstract_id: PUBMED:12217301 Are surgeons effective counsellors for women with a family history of breast cancer? Aim: In the TRACE (Trial of Genetic Assessment in Breast Cancer) study, the data on the time spent by the surgeon with the women with a family history of breast cancer was analysed with respect to the changes in the anxiety scores and breast cancer worry scores. We wanted to see whether longer consultation times with surgeons improved these scores. Methods: In the TRACE study, women with a family history of breast cancer attended a standard breast clinic to be seen by a surgeon and nurse specialist. Anxiety scores and breast cancer worry scores of these women were recorded prior to attending the clinic and immediately after their appointment. Results: In the 274 evaluable women, there was a significant improvement in the anxiety and breast cancer worries after surgical consultation but it did not correlate with the length of surgical consultation. However, women in the high-risk group showed a significant reduction of breast cancer worry score with longer consultation (P=0.032). Conclusions: Spending more time with the women with a family history of breast cancer does not help to reduce anxiety scores and breast cancer worry scores except in the subset of women who were told that their risk of breast cancer was high. abstract_id: PUBMED:38391399 Association between Family History of Breast Cancer and Breast Density in Saudi Premenopausal Women Participating in Mammography Screening. Background: Mammographic density and family history of breast cancer (FHBC) are well-established independent factors affecting breast cancer risk; however, the association between these two risk factors in premenopausal-screened women remains unclear. The aim of this study is to investigate the relationship between mammographic density and FHBC among Saudi premenopausal-screened women. Methods: A total of 446 eligible participants were included in the study. Mammographic density was assessed qualitatively using the Breast Imaging Reporting and Data System (BIRADS 4th edition). Logistic regression models were built to investigate the relationship between mammographic density and FHBC. Results: Women with a family history of breast cancer demonstrated an 87% greater chance of having dense tissue than women without a family history of breast cancer (95% CI: 1.14-3.08; p = 0.01). Having a positive family history for breast cancer in mothers was significantly associated with dense tissue (adjusted odds ratio (OR): 5.6; 95% CI: 1.3-24.1; p = 0.02). Conclusion: Dense breast tissue in Saudi premenopausal women undergoing screening may be linked to FHBC. If this conclusion is replicated in larger studies, then breast cancer risk prediction models must carefully consider these breast cancer risk factors. abstract_id: PUBMED:35251509 A Preliminary Survey of Women Orthopaedic Surgeons on Awareness of Radiation Safety Practice and Breast Cancer Risk in India. Objectives: To assess women surgeons' awareness of radiation protection protocols, cumulative dose, safety measures, radiation exposure, and breast cancer risk in India. Methods: The data were acquired through a survey monkey questionnaire circulated to women orthopaedic surgeons across the country through the WOICE group, e-mails and phone. The questionnaire assessed the respondents' awareness of radiation protection measures and level of exposure. Medical history on breast cancer during their professional life, time of diagnosis, and radiation exposure were collected. Data were analysed and expressed as percentages. Results: Fifty-one women orthopaedic surgeons responded. These were sub-grouped into early and late-career group based on whether they had worked in orthopaedics for less than 10 years (n = 33) or 10 or more years (n = 18). The mean number of years of practice of the respondents was 8.1 years, and for the group with 10 or more years was 18.7 years. 90.2% reported compliance to the lead shield, while 9.8% (n = 5) were non-compliant because of the shield's weight. Of the included respondents, 88.8% are currently working. Among all the respondents, only 13.7% (n = 7) use a radiation dosimeter. Not surprisingly, 88.2% (n = 45) of all respondents were unaware of the total radiation exposure received. Two surgeons in the more than 10-year practice category reported a history of breast carcinoma. Conclusions: This preliminary report suggested a need for more awareness of radiation protection measures among women orthopaedic surgeons. Hence, along with the practice of wearing a lead gown, use of lightweight shield, awareness on exposure measure and use of radiation dosimeter should be encouraged. abstract_id: PUBMED:25640708 Putting it off: family breast cancer history and women's retirement planning. Objective: Cancer diagnoses have significant consequences that extend beyond the individual to family members. Our research builds on prior research by examining how a family history of breast cancer is related to women's retirement preparations. Methods: Taking guidance from the stress process model, we generate and test hypotheses using multivariate logistic regression and unique data on retirement planning and familial cancer histories for 467 women. We supplement this analysis with the qualitative findings from two focus groups. Results: We find consistent evidence that women with a mother and/or sister who had a breast cancer diagnosis are significantly less likely to engage in retirement preparation activities than otherwise similar women with no family history. The same effect is not observed when other first-degree relatives have different cancer diagnoses. The face validity of these quantitative findings is confirmed by the focus group analysis. Conclusions: Our research suggests that the stressors experienced by close female relatives of women who have had breast cancer may lead to behaviors and attitudes that have consequences for their post-retirement quality of life. abstract_id: PUBMED:30789301 Differences in Family Health History Knowledge Among Bisexual and Lesbian Women. Purpose: We aimed to determine whether there are differences between sexual minority women and heterosexual women in family health history knowledge. Methods: We used data from Dr. Susan Love Research Foundation's The Health of Women Study®. We included women who completed two of six online surveys between 2012 and 2015 (n = 22,410). Results: Compared with heterosexual women, bisexual and lesbian women had consistently greater odds of not knowing their family health history (e.g., odds ratios of 2.59 and 1.56 for breast cancer, respectively). Conclusion: To avoid exacerbating existing health disparities, in the era of precision medicine, we must address gaps in knowledge of family health history. abstract_id: PUBMED:33555545 Knowledge and perceptions of BRCA1/2 genetic testing and needs of diverse women with a personal or family history of breast cancer in South Florida. The vast majority of (BRCA1/2) genetic testing has been conducted in White women, in particular Ashkenazi Jewish women, with limited information available for Black and Hispanic women. Understanding perspectives of those who are underserved is critical to developing interventions to support inclusive approaches to genetic testing. This qualitative study explored knowledge and perceptions of BRCA1/2 genetic testing among diverse women in South Florida. We also explored participants' information needs. Convenience sampling was used to recruit a diverse group of 15 women with a personal or family history of breast cancer. We conducted semi-structured interviews and used grounded theory method to analyze the data. Five themes were identified: (1) lacking awareness and knowledge of BRCA1/2 genetic testing and results among Black women, (2) perceiving BRCA1/2 genetic testing as beneficial to themselves and a way to be proactive about cancer risk, (3) perceiving BRCA1/2 genetic testing as beneficial to family members, (4) interactions with healthcare providers and the healthcare system that shape genetic testing experiences, and (5) information needs for reducing cancer risk and promoting health. Our findings suggest that diverse underserved women perceived genetic testing as beneficial to themselves and family members. Women needed more information about the BRCA genes and genetic testing, prevention strategies, and the latest breast cancer research. Healthcare providers, particularly nurse practitioners, need to engage diverse high-risk women in discussions about their cancer risk, address unmet information needs, and, in particular, educate Black women about the benefits of pursuing genetic testing. abstract_id: PUBMED:28520842 Alcohol Consumption and Breast Cancer Risk in Younger Women According to Family History of Breast Cancer and Folate Intake. To evaluate the association between alcohol consumption and breast cancer risk in younger women, overall and by family history of breast cancer and folate intake, we prospectively followed 93,835 US women aged 27-44 years in Nurses' Health Study II who had alcohol consumption data in 1991. Alcohol consumption and folate intake were measured by food frequency questionnaire every 4 years. We documented 2,866 incident cases of invasive breast cancer between 1991 and 2011. Alcohol consumption was not associated with breast cancer risk overall (for intake of ≥10 g/day vs. nondrinking, multivariate hazard ratio = 1.07, 95% confidence interval: 0.94, 1.22). When the association was stratified by family history and folate intake, a positive association between alcohol consumption and breast cancer was found among women with a family history and folate intake less than 400 μg/day (multivariate hazard ratio = 1.82, 95% confidence interval: 1.06, 3.12; P-trend = 0.08). Alcohol consumption was not associated with breast cancer in other categories of family history and folate intake (P-interaction = 0.55). In conclusion, in this population of younger women, higher alcohol consumption was associated with increased risk of breast cancer among those with both a family history of breast cancer and lower folate intake. abstract_id: PUBMED:38020349 Unawareness of breast cancer family history among African women. Introduction: comprehensive cancer risk assessment services are lacking in most sub-Saharan African countries and the use of accurate family history (FH) information could serve as a cheap strategy for risk evaluation. The aim of this study is to determine the proportion of women unaware of family history of cancer among female relatives and associated socio-demographic characteristics. Methods: using case-control data on breast cancer among 4294 women in Nigeria, Uganda and Cameroon, we investigated the proportion of women unaware of family history of cancer among their female relatives. The association between participants' response to their awareness of female relatives' cancer history and socio-demographic characteristics was analysed according to case-control status, family side and distance of relation. Results: the proportion of women unaware if any relative had cancer was 33%, and was significantly higher among controls (43.2%) compared to 23.9% among cases (p&lt;0.001) (Adjusted Odds Ratio (OR) = 2.51, 95% CI = 2.14 - 2.95). Age, education and marital status remained significantly associated with being unaware of FH among controls on multiple regression. Conclusion: about a third of women interviewed did not know about cancer history in at least one of their female relatives. Efforts aimed at improving cancer awareness in sub-Saharan Africa (SSA) are needed. Our findings could be useful for future studies of cancer risk assessment in SSA. abstract_id: PUBMED:25713754 Family history of breast and ovarian cancer and triple negative subtype in hispanic/latina women. Familial breast and ovarian cancer prevalence was assessed among 1150 women of Mexican descent enrolled in a case-only, binational breast cancer study. Logistic regression was conducted to compare odds of triple negative breast cancer (TNBC) to non-TNBC according to family history of breast and breast or ovarian cancer among 914 of these women. Prevalence of breast cancer family history in a first- and first- or second-degree relative was 13.1% and 24.1%, respectively; that for breast or ovarian cancer in a first-degree relative was 14.9%. After adjustment for age and country of residence, women with a first-degree relative with breast cancer were more likely to be diagnosed with TNBC than non-TNBC (OR=1.98; 95% CI, 1.26-3.11). The odds of TNBC compared to non-TNBC were 1.93 (95% CI, 1.26-2.97) for women with a first-degree relative with breast or ovarian cancer. There were non-significant stronger associations between family history and TNBC among women diagnosed at age &lt;50 compared to ≥50 years for breast cancer in a first-degree relative (P-interaction = 0.14) and a first- or second-degree relative (P-interaction = 0.07). Findings suggest that familial breast cancers are associated with triple negative subtype, possibly related to BRCA mutations in Hispanic/Latina women, which are strongly associated with TNBC. Family history is an important tool to identify Hispanic/Latina women who may be at increased risk of TNBC, and could benefit from prevention and early detection strategies. abstract_id: PUBMED:31890475 Awareness of breast cancer risk related to a positive family history and alcohol consumption among women aged 15-44 years in United States. Awareness of the link between breast cancer and risk factors such as family history of breast cancer and alcohol consumption may help modify health behaviors. To reduce risk factors for breast cancer among young women, it is important to understand overall levels of risk awareness and socioeconomic differences in awareness. Data from the National Survey of Family Growth 2011-2015 were used to examine awareness of two risk factors for breast cancer, positive family history and alcohol consumption, among women aged 15-44 years (n = 10,940) in the United States by presence of risk factors and by socioeconomic characteristics. Prevalence of positive family history, non-binge, and binge drinking was 30%, 29%, and 31%, respectively among women aged 15-44. Awareness of positive family history of breast cancer as a risk factor for breast cancer was 88%, whereas for alcohol consumption it was 25%. Awareness of family history as a risk factor was higher among women with positive family history of breast cancer compared to those without. Current drinkers were more likely to believe that alcohol was not a risk factor for breast cancer compared to those who did not drink. Racial/ethnic minority women and those with lower education and income had lower awareness of family history as a risk factor. Awareness of alcohol consumption as a risk factor for breast cancer was low across all socioeconomic groups. Evidence-based interventions to increase risk awareness and decrease excessive alcohol use among young women are needed to reduce the risk of developing breast cancer. Answer: Based on the findings from the TRACE study, surgeons appear to have a limited role as effective counsellors for women with a family history of breast cancer in terms of reducing anxiety and breast cancer worry scores. The study found that while there was a significant improvement in anxiety and breast cancer worries after surgical consultation, this improvement did not correlate with the length of the surgical consultation. However, it is important to note that a subset of women who were told they had a high risk of breast cancer did show a significant reduction in breast cancer worry scores with longer consultation times (PUBMED:12217301). This suggests that surgeons may be more effective counsellors for women who are identified as high-risk, potentially due to more targeted and relevant information being provided during the consultation. In general, the effectiveness of surgeons as counsellors may also depend on other factors such as the surgeon's communication skills, the quality of information provided, and the individual needs of the patient. It is also possible that other healthcare professionals, such as genetic counsellors or nurse specialists, may be better equipped to provide comprehensive counselling to women with a family history of breast cancer, especially considering the complexity of genetic risk factors and the need for personalized risk assessment and management strategies.
Instruction: Surgical therapy of traumatic shoulder dislocation. Are there evidence-based indications for arthroscopic Bankart operation? Abstracts: abstract_id: PUBMED:11572133 Surgical therapy of traumatic shoulder dislocation. Are there evidence-based indications for arthroscopic Bankart operation? Introduction: For young athletic patients with a primary traumatic shoulder dislocation a surgical treatment is recommended. The operation of choice is the Bankart-Repair. Question: Are there evidence-based indications for an arthroscopic Bankart repair. Methods: Based on the criteria of the "Cochrane Collaboration" a systematic literature search was performed using medline (1966 to 9/2000). 172 publications were found with the key words "shoulder dislocation" and "Bankart". All relevant articles were ranked and analysed by the criteria of "evidence-based medicine". Results: There are 12 prospective studies (evidence grade Ib/IIa) and another 28 retrospective studies (evidence grade III). For open Bankart-Repair a recurrence rate of 0 to 8% is reported (prospective/retrospective studies). For arthroscopic Bankart-Repair, 19 of 40 studies and 8 of 12 prospective studies, show a recurrence rate of &lt; 10%; however in other studies (prospective/retrospective) an atraumatic recurrence rate of up to 38% is reported. The reasons for these differences in the recurrence rate are not obvious from the given data. In particular, there seems to be no correlation between the type of arthroscopic fixation technique and the recurrence rate. Concerning the postoperative range of shoulder motion, the reported data suggest that external rotation is less limited after arthroscopic than after open Bankart-Repair (arthroscopic: 5-12 degrees, open: 5-25 degrees). However, there is no evidence that patients are more likely to return to their previous level of sporting activities when operated on in an arthroscopic technique than in an open technique (arthroscopic: 42-100%, open: 72-94%). Conclusion: In the surgical treatment of a traumatic shoulder dislocation, the open Bankart-Repair remains the "gold standard". In reviewing the literature, arthroscopic Bankart-Repair has not been shown to be equal or superior to the open technique. abstract_id: PUBMED:28246765 The arthroscopic extraarticular Bankart repair. The arthroscopic extraarticular Bankart procedure tries to imitate the open Bankart procedure. An anterior-inferior transmuscular approach through the subscapular muscle permits to implant self-locking tacks into the anterior inferior third of the glenoid rim. The extraarticular location of the implants makes a superomedial capsular shift possible, if required. A total of 257 arthroscopic repairs following traumatic recurrent anterior shoulder dislocation have been carried out between 1992 and 1996. 177 patients were treated only with bioabsorbable Suretac device. Clinical and radiological follow up was possible in 165 patients. According to the Rowe score 69,7 % were classified as excellent, 10,9 % as good, 9,7 % fair and 9,7 % poor. Postoperative complications: the recurrence rate was 9,7 %, allergic reactions representing a foreign body reaction to the synthetic material were seen in 5 cases (3 %) and a frozen shoulder in 6 cases 3,6 %). 61 % of the patients involved in overhead or contact sports returned to their preoperative sport activities. abstract_id: PUBMED:34377348 Comparison of efficacy between montgomery and Jobe technique and arthroscopic bankart repair in treating traumatic recurrent anterior shoulder dislocation. Objective: To compare the clinical efficacy of Montgomery and Jobe technique versus arthroscopic Bankart repair in treating traumatic recurrent anterior shoulder dislocation (ASD). Methods: A total of 113 patients with traumatic recurrent ASD admitted to our hospital from June 2016 to January 2019 were selected as study subjects, and were divided into Group A and B in accordance with surgical options. The clinical data of the subjects were collected retrospectively. Group A was treated by the Montgomery and Jobe technique, while Group B was treated with arthroscopic Bankart repair. The arthroscopic manifestations were analyzed before and after arthroscopic Bankart repair. Scores of visual analogue scale (VAS) for shoulder joint and American Shoulder and Elbow Surgeons (ASES), Constant-Murley Score (CMS), Rowe Score, and complications were compared between the two groups before and after surgery. Results: Compared with Group A, Group B had a lower score of VAS for the shoulder joint, and higher scores of the range of motion (ROM), functional activities, myodynamia, pain, CMS, vital functions, ASES, and shoulder joint function, and a higher Rowe score after surgery (P &lt; 0.05). The incidence rate (1.75%) of complications in Group B was lower than that (14.29%) in Group A (P &lt; 0.05). Conclusion: Arthroscopic Bankart repair is superior to the Montgomery and Jobe technique in treating traumatic recurrent ASD. Arthroscopic Bankart repair, exhibiting a high safety profile, is conducive to improving shoulder joint function and pain. abstract_id: PUBMED:32490416 Bankart versus Latarjet operation as a revision procedure after a failed arthroscopic Bankart repair. Background: An arthroscopic Bankart operation is the most common operative procedure to treat shoulder instability. In case of recurrence, both Bankart and Latarjet procedures are used as revision procedures. The purpose of this study was to compare the re-recurrence rate of instability and clinical results after arthroscopic revision Bankart and open revision Latarjet procedures following failed primary arthroscopic Bankart operations. Methods: Consecutive patients operatively treated for shoulder instability at Turku University Hospital between 2002 and 2013 were analyzed. Patients who underwent a primary arthroscopic Bankart operation followed by a recurrence of instability and underwent a subsequent arthroscopic Bankart or open Latarjet revision operation with a minimum of 1 year of follow-up were called in for a follow-up evaluation. The re-recurrence of instability, Subjective Shoulder Value, and Western Ontario Shoulder Instability index were assessed. Results: Of 69 patients, 48 (dropout rate, 30%) were available for follow-up. Recurrent instability symptoms occurred in 13 patients (43%) after the revision Bankart procedure and none after the revision Latarjet procedure. A statistically and clinically significant difference in the Western Ontario Shoulder Instability index was found between the patients after the revision Bankart and revision Latarjet operations (68% and 88%, respectively; P = .0166). Conclusions: The redislocation rate after an arthroscopic revision Bankart operation is high. Furthermore, patient-reported outcomes remain poor after a revision Bankart procedure compared with a revision Latarjet operation. We propose that in cases of recurring instability after a failed primary Bankart operation, an open Latarjet revision should be considered. abstract_id: PUBMED:26403552 Early outcome of arthroscopic Bankart's repair for recurrent traumatic anterior shoulder instability. Background/objectives: Despite the improvements in the methods of arthroscopic stabilization of anterior shoulder instability, a recurrence rate of as high as 30% is reported in the literature. In this context, we report the outcome of arthroscopic Bankart repair in anterior shoulder instability, with the use of bio-absorbable suture anchors for patients that were followed up for at least two years from the date of surgery. The arthroscopic method offers a less invasive technique of Bankart repair for traumatic anterior shoulder instability. We would like to report the 2-year clinical outcomes of bio-absorbable suture anchors used in traumatic anterior dislocations of the shoulder. Methods: Data from 79 shoulders in 74 patients were collected over 4 years (2005-2009). Each patient was followed up over a period of 2 years. The patients underwent arthroscopic Bankart repair using bio-absorbable suture anchors for their anterior shoulder instability. These surgeries were performed at a single institution by a single surgeon over the time period. The patients were assessed with two different outcome measurement tools. The University of California at Los Angeles (UCLA) shoulder rating scale and the Simple Shoulder Test (SST) score. The scores were calculated before surgery and at the 2-year follow-up. The recurrence rates, range of motion as well post-operative function and return to sporting activities were evaluated. Results: SST results from the 12 domains showed a significant improvement from a mean of 6.1 ± 3.1 to 11.1 ± 1.8 taken at the 2-year follow-up (p &lt; 0.0001). Data from the UCLA scale showed a pre and post-operative mean of 20.2 ± 5.0 and 32.4 ± 4.6 respectively (p &lt; 0.0001). 34 had excellent post-operative scores, 35 had good scores, 1 had fair score and 3 had poor scores. 75% of the patients returned to sports while 7.6% developed a recurrence of shoulder dislocation or subluxation. Conclusion: Arthroscopic Bankart repair with the use of suture anchors is a reliable treatment method, with good clinical outcomes, excellent post-operative shoulder motion and low recurrence rates. abstract_id: PUBMED:29290865 Clinical Outcome of Arthroscopic Remplissage as Augmentation During Arthroscopic Bankart Repair for Recurrent Anterior Shoulder Instability. Purpose: We aimed to assess functional outcomes and postoperative recurrence rate associated with the remplissage procedure used for bone augmentation with Bankart repair in patients with Hill-Sachs lesions after shoulder dislocation. Methods: Preoperative computed tomography was performed to check for bony Bankart lesions,calculate the bone defect rate, and estimate the risk for re-dislocation. Functional and clinical scores were assessed preoperatively and at three months, six months, and one year postoperatively. Results: Between 2011 and 2014, 18 patients (17 male; age at surgery, 29.0±10.4 years; 18 affected shoulders) underwent arthroscopic Bankart repair with arthroscopic remplissage (remplissage group), and 18 sex- and age-matched controls underwent arthroscopic Bankart repair alone (control group). The incidence of bony Bankart lesion and glenoid bone defect was significantly higher in the remplissage group. No complications, re-dislocation, or re-subluxation was noted during or after the operation. Postoperatively, the range of motion and muscular weakness alleviated with time, and the clinical scores improved significantly from the preoperative values. However, the remplissage group showed significantly restricted shoulder flexion, abduction, internal rotation and external rotation even at one year postoperatively. Compared to the control group, the remplissage group showed significantly lower Rowe scores preoperatively, and both Rowe scores and University of California-Los Angeles scale scores remained significantly lower throughout the one-year follow-up. Conclusion: Despite some restriction of external rotation, remplissage leads to better clinical scores and no recurrence, providing a valid means of augmentation for Bankart repair in high-risk patients with engaged Hill-Sachs lesion. abstract_id: PUBMED:34689242 Does the immediate repair of Bankart lesion following first dislocation reduce the chances of recurrence after arthroscopic repair in soldiers? Purpose: The study attempted to compare the effects of immediate and delayed arthroscopic Bankart repair on the shoulder dislocation recurrence. Methods: In total, 465 soldiers with first-time anterior shoulder dislocation were included in the study. Of the total, 285 soldiers underwent a quick arthroscopic Bankart repair procedure, whether 180 soldiers underwent a delayed repair procedure. The initial dislocation was traumatic in both groups and operated using the standard arthroscopic suture anchor repair technique. Patient's age, repair time since the first dislocation, number of dislocations before surgery, number of suture anchors used during the repair, duration of surgical procedure, duration from surgery to return to work, and recurrence of dislocation after surgery were recorded. Rowe score, Constant score, and American Shoulder and Elbow Surgeons score were used for clinical assessment. Results: The operating time and recurrence rate were higher in the group subjected to delayed repair procedure than in the group subjected to immediate repair procedure. Repair timing was found to be crucial for a successful outcome. Conclusions: The findings suggest that Bankart repair must be performed immediately to minimize recurrence and other degenerative changes, which may require an additional surgical procedure for satisfactory repair. More studies are required to reach a definitive conclusion. Levels Of Evidence: Level III. abstract_id: PUBMED:37501089 Arthroscopic Bankart repair versus conservative treatment for first-time traumatic anterior shoulder dislocation: a systematic review and meta-analysis. Background: Shoulder is vulnerable to dislocation owing to its anatomical structure and the increasing popularity of contact sports in young population. The management of first-time anterior shoulder dislocation in this group is still controversial and the prognosis are varied. This review aimed to compare the results of arthroscopic Bankart repair and conservative management for first-time traumatic anterior shoulder dislocation in young active patients. Methods: Databases were searched till November 2021, and comparative studies between arthroscopic Bankart repair and conservative management for first-time traumatic anterior shoulder dislocation in young population were selected. Methodological quality of the studies was assessed according to the Cochrane Back Review Group 12-item scale. Outcome measures included recurrence of instability, return to play, subsequent instability surgery, and shoulder functional scores. Results: The search returned 12 eligible trials with 786 participants. All the trials were of prospective design. After arthroscopic Bankart repair, patients experienced significantly less re-dislocation (7.5% vs. 53.0%, p &lt; 0.00001, I2 = 0%), subluxation (3.1% vs. 24.2%, p &lt; 0.0001, I2 = 0%), positive apprehension test (7.3% vs. 25.8%, p = 0.002, I2 = 11%), and subsequent surgical treatment for instability (5.6% vs. 37.8%, p &lt; 0.00001, I2 = 0%) when compared with those underwent conservative management. And more patients returned to play (83.5% vs. 66.0%, p = 0.03, I2 = 81%) after arthroscopic Bankart repair. Outcomes regarding the functional scores did not reach a significant difference between the two cohorts. Conclusions: Arthroscopic Bankart repair showed superiority over conservative management in terms of recurrence, return to play, and subsequent instability surgery during the follow-up in young active patients that encountered first episode of dislocation. As long-term prognosis is comparable, an immediate surgical stabilization might not be suitable for everyone. abstract_id: PUBMED:23325970 Analysis of the functional results of arthroscopic Bankart repair in posttraumatic recurrent anterior dislocations of shoulder. Background: The Bankart lesion represents the most common form of labro-ligamentous injury in patients with traumatic dislocations of the shoulder leading to shoulder instability. We report the clinical outcome of arthroscopic repair of Bankart lesion in 50 patients. Materials And Methods: Sixty five patients with posttraumatic anterior dislocation of shoulder were treated by arthroscopic repair from Jan 2005 to Nov 2008. Fifty patients, with an average age of 26.83 years (range 18-45 years), were reviewed in the study. The average followup period was 27 months (range 24-36 months). University of California Los Angeles shoulder rating scale was used to determine the outcome after surgery. The recurrence rates, range of motion, as well as postoperative function and return to sporting activities were evaluated. Results: Thirty six patients (72.0%) had excellent results, whereas seven patients (14.0%) had good results. The mean pre- and postoperative range of external rotation was 80.38° and 75.18°, respectively. Eighty-six percent patients had stability compared with the normal sided shoulder and were able to return to sports. There were no cases of redislocation observed in this study; however, three cases had mild laxity of the joint. Conclusion: Arthroscopic Bankart repair with the use of suture anchors is a reliable treatment method, with good clinical outcomes, excellent postoperative shoulder motion and low recurrence rates. abstract_id: PUBMED:26170186 Anterior shoulder instability with engaging Hill-Sachs defects: a comparison of arthroscopic Bankart repair with and without posterior capsulodesis. Purpose: The purpose of this study was to compare the clinical results of isolated arthroscopic Bankart repair and those of arthroscopic Bankart repair with posterior capsulodesis for anterior shoulder instability with engaging Hill-Sachs lesions. Methods: Thirty-five shoulders that underwent isolated arthroscopic Bankart repair (Bankart group) and 37 shoulders that underwent arthroscopic Bankart repair with posterior capsulodesis (remplissage group) for anterior shoulder instability with engaging Hill-Sachs lesions were evaluated retrospectively. The mean age at the time of the surgery was 26.1 ± 7.0 years in the Bankart group and 24.8 ± 9.0 years in the remplissage group. Results: At the final follow-up, the Rowe and UCLA scores significantly improved in both the Bankart and remplissage groups (P &lt; 0.001, in both groups). The post-operative mean deficit in external rotation at the side was 3° ± 10° in the Bankart group and 8° ± 23° in the remplissage group (P = n.s. and P = 0.044, respectively). There was no decrease in muscle strength in either group. The recurrence rate was 25.7 % in the Bankart group and 5.4 % in the remplissage group (P = 0.022). Conclusions: Arthroscopic Bankart repair with posterior capsulodesis demonstrated good clinical outcomes with a low recurrence rate in the treatment for anterior shoulder instability with an engaging Hill-Sachs lesion. Although a limitation in external rotation was observed, there was no significant limitation of any other motion and no decrease in muscle strength after the remplissage procedure. Posterior capsulodesis alone for remplissage should be considered as a surgical technique that can replace the conventional method. Level Of Evidence: Case-control study, Level III. Answer: Based on the provided abstracts, there are evidence-based indications for arthroscopic Bankart repair in the surgical treatment of traumatic shoulder dislocation, although the literature presents varying outcomes and opinions on its efficacy compared to other methods. The systematic literature search performed in PUBMED:11572133 found that while there are prospective and retrospective studies supporting the use of arthroscopic Bankart repair, with some reporting a recurrence rate of less than 10%, others reported a recurrence rate of up to 38%. The study concluded that open Bankart repair remains the "gold standard" and that arthroscopic Bankart repair has not been shown to be equal or superior to the open technique. However, other studies suggest that arthroscopic Bankart repair can be effective. PUBMED:34377348 reported that arthroscopic Bankart repair was superior to the Montgomery and Jobe technique in treating traumatic recurrent anterior shoulder dislocation, with better outcomes in terms of pain, shoulder joint function, and lower complication rates. PUBMED:26403552 also reported good clinical outcomes, excellent post-operative shoulder motion, and low recurrence rates with arthroscopic Bankart repair using bio-absorbable suture anchors. PUBMED:34689242 suggested that immediate repair of a Bankart lesion following the first dislocation could reduce the chances of recurrence after arthroscopic repair in soldiers, indicating that timing of the repair is crucial for successful outcomes. PUBMED:37501089 conducted a systematic review and meta-analysis comparing arthroscopic Bankart repair with conservative treatment for first-time traumatic anterior shoulder dislocation in young active patients. The review found that arthroscopic Bankart repair showed superiority over conservative management in terms of recurrence, return to play, and subsequent instability surgery. In contrast, PUBMED:32490416 reported a high redislocation rate after arthroscopic revision Bankart operation and suggested considering an open Latarjet revision in cases of recurring instability after a failed primary Bankart operation. In summary, while arthroscopic Bankart repair is a commonly used procedure for traumatic shoulder dislocation with evidence supporting its effectiveness, the decision to use this technique should be based on individual patient factors, and it may not be the best option in all cases. Open Bankart repair and other surgical techniques, such as the Latarjet procedure, may be more appropriate in certain scenarios, particularly after a failed primary arthroscopic Bankart repair.
Instruction: Is off-pump superior to conventional coronary artery bypass grafting in diabetic patients with multivessel disease? Abstracts: abstract_id: PUBMED:21167727 Is off-pump superior to conventional coronary artery bypass grafting in diabetic patients with multivessel disease? Objective: Diabetic patients often present with diffuse coronary disease than nondiabetic patients posing a greater surgical challenge during off-pump revascularization. In this study, the safety, feasibility, and completeness of revascularization for this subset of patients was assessed. Methods: From 2002 to 2008, 1015 diabetic patients underwent myocardial revascularization. Patients received either off-pump coronary artery bypass (OPCAB; n = 540; 53%) or coronary artery bypass grafting (CABG; n = 475; 47%). Data collection was performed prospectively and data analysis was done by propensity-score (PS)-adjusted regression analysis. Primary endpoints were mortality, major adverse cardiac and cerebrovascular events (MACCEs), and a composite endpoint including major noncardiac adverse events (MNCAEs) such as respiratory failure, renal failure, and rethoracotomy for bleeding was applied. An index of complete revascularization (ICOR) was defined to assess complete revascularization by dividing the total number of distal anastomoses by the number of diseased vessels. Complete revascularization was assumed when ICOR was &gt;1. Results: OPCAB patients had a significantly lower mortality-rate (1.1% vs 3.8%; propensity-adjusted odds ratio (PAOR) = 0.11; p = 0.018) and displayed less frequent MACCE (8.3% vs 17.9%; PAOR = 0.66; p = 0.07) including myocardial infarction (1.3% vs 3.2%; PAOR = 0.33; p = 0.06) and stroke (0.7% vs 2.3%; PAOR = 0.28; p = 0.13). Similarly, a significantly lower occurrence of the noncardiac composite endpoint (MNCAE) (PAOR = 0.46; confidence interval (CI) 95% 0.35-0.91; p &lt; 0.001) was detected. In particular, lesser respiratory failure (0.9% vs 4.3%; PAOR = 0.24; p = 0.63) and pleural effusions (3.3% vs 7.5%; PAOR = 0.45; p = 0.04) occurred, so that fast extubation (≤ 12 h postoperative) was more frequently possible (58.3% vs 34.2%; PAOR = 1.64; p = 0.007). The number of arterial grafts was significantly higher among OPCAB patients (1.54 ± 0.89 vs 1.33 ± 0.81; p = 0.006) due to a more frequent use of the right-internal mammary artery (35.6% vs 22.9%; p &lt; 0.001). ICOR was significantly higher among CABG patients (1.24 ± 0.34 vs 1.30 ± 0.28; p = 0.001). However, for similar proportions in both groups, an ICOR &gt; 1 was achieved clearly indicating complete revascularization (94.3% vs 93.7%; p = 0.24). Conclusions: OPCAB offers a lower mortality and superior postoperative outcomes in diabetic patients with multivessel disease. Arterial grafts are used more frequently that may contribute to better long-term outcomes and the OPCAB approach does not come at the cost of less complete revascularization. abstract_id: PUBMED:23791161 Coronary revascularization in diabetic patients: off-pump versus on-pump surgery. Background: Coronary artery bypass grafting (CABG) is a well-established procedure for treating diabetic patients with multivessel disease, but extracorporeal circulation and cardioplegia-induced cardiac arrest introduce a severe burden to these patients. The present study investigated if off-pump CABG decreases 30-day mortality and mid-term mortality in diabetic patients in comparison with conventional CABG. Methods: From February 2009 through October 2011, data from 355 consecutive adult diabetic patients undergoing off-pump CABG and 502 patients undergoing on-pump CABG were prospectively recorded. Data analysis was performed by propensity score (PS)-adjusted logistic regression analysis and PS-adjusted Cox regression analysis. The primary endpoint was 30-day mortality. Secondary endpoints were major complications and mortality on follow-up. Results: Off-pump CABG was associated with a significantly lower 30-day mortality rate (0.3% vs 4.2%; adjusted odds ratio [OR] = 0.09 [95% confidence interval (CI):0.01 to 0.70] p = 0.021) than on-pump CABG. Results coincided with a lower rate of postoperative neurologic complications in patients undergoing off-pump CABG (1.7% vs 5.4%; adjusted OR = 0.31 [95% CI: 0.12 to 0.77] p = 0.012) and a less frequent need for hemofiltration in these patients (3.4% vs 10.4%; adjusted OR = 0.30 [95% CI: 0.14 to 0.64] p = 0.002). The off-pump technique decreased the 6-month mortality rate (2.3% vs 8.8%; adjusted hazard ratio = 0.27 [95% CI: 0.12 to 0.61] p = 0.002) and also the 1-year mortality rate (4.0% vs 10.6%; adjusted hazard ratio = 0.40 [95% CI: 0.22 to 0.75] p = 0.004) significantly. Conclusions: Our data indicate that in terms of postoperative complications and early and mid-term survival, off-pump CABG is superior to the on-pump technique in diabetic patients. abstract_id: PUBMED:15346694 Off-pump myocardial revascularization in patients with multivessel coronary disease Background: Some criticisms have been addressed to off-pump coronary surgery technique concerning the possibility of its systematic use with the respect of the completeness of revascularization. We report our experience with off-pump revascularization in patients with multivessel coronary disease. Methods: Between September 1997 and April 2003, 868 patients with multivessel coronary disease were scheduled for off-pump surgical revascularization. From September 2000, the percentage of patients operated on without cardiopulmonary bypass has been stably &gt; 90%. Fifteen patients (1.7%) had a conversion to cardiopulmonary bypass for anatomical reasons (n = 6) or clinical instability (n = 9). Results: An average of 2.5 +/- 0.8 (range 1-5) anastomoses per patient were completed. Bilateral mammary artery was used in 573 patients (66%); totally arterial revascularization was accomplished in 479 patients (55.2%). In-hospital mortality rate was 0.6% (5 patients). Total incidence of non-fatal postoperative complications (bleeding requiring re-exploration, perioperative myocardial infarction, stroke, new onset of acute renal failure) was 3.5%. Mean postoperative hospital stay was 4.8 +/- 3.8 days. At a mean follow-up of 21.6 +/- 15.6 months (range 1-65 months), the postoperative actuarial survival rates were 97.3, 93.7 and 86.7% at 1, 3 and 5 years postoperatively. Actuarial freedom rates from new revascularization were 98.7, 96.6 and 96.6% at 1, 3 and 5 years postoperatively. Conclusions: Early- and intermediate-term results of this study demonstrate the feasibility of off-pump revascularization in all patients with multivessel coronary disease, respecting the criterion of complete myocardial revascularization. abstract_id: PUBMED:35946398 An overlap-weighted analysis on 10-year survival of off-pump versus on-pump coronary artery grafting in multivessel coronary artery disease. Background And Objective: The introduction of off-pump coronary artery bypass surgery intended to overcome some of the conventional on-pump procedure limitations by avoiding potentially harmful adverse effects of extracorporeal circulation and aortic cross-clamping. However, the doubt remains on whether it is associated with worse long-term outcomes. To compare long-term survival in patients with multivessel ischemic heart disease undergoing off-pump versus on-pump coronary artery bypass grafting. Methods: Retrospective analysis of 4788 consecutive patients undergoing primary isolated multivessel coronary artery bypass grafting surgery, performed from 2000 to 2015, in Northern Portugal. Among the study population, we identified 1616 and 3172 patients that underwent off-pump and on-pump coronary artery grafting, respectively. We employed a propensity-score-based overlap weighting (OW) algorithm to restrict confounding by indication. The primary endpoint was all-cause mortality at 10 years. Results: The mean age of the study population was 63.9 (±9.8) years, and 951 (19.9%) were females. OW was effective in eliminating differences in all major baseline characteristics. Follow-up was 100% complete. The median follow-up time was 12.80 (9.62, 16.62) years. The primary endpoint of all-cause mortality at 10 years occurred in 431 patients (26.7%) in the off-pump group, as compared with 863 (27.2%) in the on-pump group (hazard ratio, 0.93; 95% confidence interval, 0.83-1.04; p = .196). Conclusions: In this longitudinal, population-level comparison of off-pump versus on-pump coronary artery bypass surgery for treating multivessel coronary artery disease, the primary outcome of long-term mortality was identical among both patients' groups. abstract_id: PUBMED:22316549 Comparing outcomes after off-pump coronary artery bypass versus drug-eluting stent in diabetic patients. Background: Off-pump coronary artery bypass surgery and sirolimus-eluting stent placement have been widely used for the treatment of coronary artery disease. The goal of this study was to compare long-term outcomes after off-pump coronary artery bypass surgery or sirolimus-eluting stent placement in diabetic patients with multivessel disease. Methods: This observational study enrolled 350 off-pump coronary artery bypass patients and 143 sirolimus-eluting stent patients receiving care at our institution between 2000 and 2007. All patients had diabetes and multivessel disease including proximal left anterior descending or left main coronary artery. The choice of revascularization (percutaneous coronary intervention versus coronary artery bypass surgery) was left to the physician's discretion rather than randomization. Cox proportional-hazard analyses, adjusting baseline risk factors and propensity score, which predicted the probability of receiving off-pump coronary artery bypass, were conducted to evaluate outcomes, including all-cause mortality, cardiac death, target vessel revascularization, and major adverse cardiac and cerebrovascular events. Results: During the follow-up (2.6±1.6 years) period, there was no difference between off-pump coronary artery bypass and sirolimus-eluting stent placement in all-cause mortality or cardiac death. However, the incidences of acute coronary syndrome, target vessel revascularization, and major adverse cardiac and cerebrovascular events were markedly lower in the patients undergoing off-pump coronary artery bypass than in those receiving sirolimus-eluting stent placement. Conclusion: Off-pump coronary artery bypass is superior to sirolimus-eluting stent placement in terms of acute coronary syndrome, target vessel revascularization, and major adverse cardiac and cerebrovascular events in diabetic patients with multivessel coronary artery disease. abstract_id: PUBMED:31623766 Individualizing Revascularization Strategy for Diabetic Patients With Multivessel Coronary Disease. Background: In patients with diabetes and multivessel coronary artery disease (CAD), the FREEDOM (Future Revascularization Evaluation in Patients with Diabetes Mellitus: Optimal Management of Multivessel Disease) trial demonstrated that, on average, coronary artery bypass grafting (CABG) was superior to percutaneous coronary intervention (PCI) for major acute cardiovascular events (MACE) and angina reduction. Nonetheless, multivessel PCI remains a common revascularization strategy in the real world. Objectives: To translate the results of FREEDOM to individual patients in clinical practice, risk models of the heterogeneity of treatment benefit were built. Methods: Using patient-level data from 1,900 FREEDOM patients, the authors developed models to predict 5-year MACE (all-cause mortality, nonfatal myocardial infarction, and nonfatal stroke) and 1-year angina after CABG and PCI using baseline covariates and treatment interactions. Parsimonious models were created to support clinical use. The models were internally validated using bootstrap resampling, and the MACE model was externally validated in a large real-world registry. Results: The 5-year MACE occurred in 346 (18.2%) patients, and 310 (16.3%) had angina at 1 year. The MACE model included 8 variables and treatment interactions with smoking status (c = 0.67). External validation in stable CAD (c = 0.65) and ACS (c = 0.68) demonstrated comparable performance. The 6-variable angina model included a treatment interaction with SYNTAX score (c = 0.67). PCI was never superior to CABG, and CABG was superior to PCI for MACE in 54.5% of patients and in 100% of patients with history of smoking. Conclusions: To help disseminate the results of FREEDOM, the authors created a personalized risk prediction tool for patients with diabetes and multivessel CAD that could be used in shared decision-making for CABG versus PCI by estimating each patient's personal outcomes with both treatments. abstract_id: PUBMED:27496617 Off-pump versus on-pump coronary artery bypass surgery in patients with actively treated diabetes and multivessel coronary disease. Objectives: We conducted a single-center analysis on short-term outcomes and long-term survival in actively treated diabetic patients undergoing off-pump coronary artery bypass versus on-pump coronary artery bypass surgery. Methods: The final population consisted of 2450 patients with actively treated diabetes (mean age, 66 ± 9 years; female/male 545/1905, 22%). Of those, 1493 subjects were orally treated and 1011 subjects were taking insulin. Off-pump coronary artery bypass and on-pump coronary artery bypass were performed in 1253 and 1197 patients, respectively. Propensity score matching was used to compare the 2 matched groups. Results: When compared with on-pump coronary artery bypass, off-pump coronary artery bypass was associated with a significant risk reduction for postoperative cerebrovascular accident (odds ratio, 0.49; 95% confidence interval [CI], 0.25-0.99; P = .04), need for postoperative intra-aortic balloon pump (odds ratio, 0.48; 95% CI, 0.30-0.77; P = .002), and reexploration for bleeding (odds ratio, 0.55; 95% CI, 0.33-0.94; P = .02). Off-pump coronary artery bypass did not significantly affect early (hazard ratio [HR], 1.32; 95% CI, 0.73-2.40; P = .36) and late (HR, 1.08; 95% CI, 0.92-1.28; P = .32) mortality. However, off-pump coronary artery bypass with incomplete revascularization was associated with reduced survival when compared with off-pump coronary artery bypass with complete revascularization (HR, 1.82; 95% CI, 1.34-2.46; P = .0002) and on-pump coronary artery bypass with complete revascularization (HR, 1.83; 95% CI, 1.36-2.47; P &lt; .0001). Conclusions: Off-pump coronary artery bypass is a safe and feasible option for diabetic patients with multivessel disease, reduces the incidence of early complications including postoperative cerebrovascular events, and provides excellent long-term survival similar to on-pump coronary artery bypass surgery in case of complete revascularization. abstract_id: PUBMED:33060948 Off-pump versus on-pump coronary artery bypass grafting-a surreal controversy? As per the findings of the SYNTAX Trial, it has been established that coronary artery bypass grafting (CABG) affords the best long-term results in patients with multivessel disease. However, it is still a point of contention whether 'off-pump' CABG (OPCAB) (on cardiopulmonary bypass) is better. Many studies appear to establish the superiority of one or the other. But there is still no well-planned multi-centre randomised controlled trial looking into all aspects of the issue that can, once and for all, put the controversy to rest. But as long as one is able to give the patient a good result, does it really matter whether the procedure is done off or on pump? This article looks at the available evidence with regard to various parameters comparing OPCAB and conventional CABG in an attempt to answer the question, if at all, it is a question! abstract_id: PUBMED:21256284 Simultaneous hybrid revascularization versus off-pump coronary artery bypass for multivessel coronary artery disease. Background: This study sought to compare early and midterm clinical outcomes of a simultaneous hybrid coronary revascularization procedure with those in a propensity-matched subset of patients undergoing conventional off-pump coronary artery bypass grafting. Methods: From June 2007 through December 2009, 104 consecutive patients (mean age 61.8 ± 10.2 years) with multivessel coronary artery disease underwent elective simultaneous coronary revascularization at Fuwai Hospital. Using propensity score methodology, these patients were matched with 104 patients who had undergone off-pump coronary artery bypass grafting through median sternotomy during the same period. We compared these groups' in-hospital clinical outcomes and freedom from major adverse cardiac or cerebrovascular events at a mean follow-up of 18 ± 7.9 months. Results: The hybrid procedure required longer operative time and incurred higher in-hospital costs, but had shorter median intubation time (11.6 ± 6.3 vs 13.8 ± 6.8 hours, p = 0.02), intensive care unit length of stay (34.5 ± 35.6 vs 55.3 ± 46.4 hours, p &lt; 0.001), and postoperative in-hospital length of stay (8.2 ± 2.6 vs 9.5 ± 4.5 days, p = 0.01). The hybrid group had significantly less chest tube drainage (789 ± 389 vs 834 ± 285 mL, p = 0.005) and need for blood transfusion (28.8% vs 51.9%, p &gt; 0.001). At a mean follow-up of 18 months, the freedom from major adverse cardiac or cerebrovascular events is in favor of the hybrid group (99.0% vs 90.4%; p = 0.03). Conclusions: Compared with conventional off-pump coronary artery bypass grafting, simultaneous hybrid coronary revascularization shortens recovery time and has superior outcomes at a mean follow-up of 18 months. Simultaneous hybrid coronary revascularization provides a safe and reproducible alternative for selected patients with multivessel coronary artery diseases. abstract_id: PUBMED:22437231 Impact of Previous PCI on Hospital Mortality After Off-Pump Coronary Artery Bypass Grafting in Diabetic Patients With Multivessel Disease. Objective: : Because percutaneous coronary intervention (PCI) has been performed excessively, many patients initially managed with PCI are being referred for coronary artery bypass grafting (CABG). The purpose of this study was to assess the impact of previous PCI on surgical mortality after off-pump CABG in diabetic patients with multivessel disease. Methods: : Between January 2002 and April 2008, 595 consecutive patients (99.8% off-pump) had isolated CABG by one single surgeon. Of these, 274 patients with diabetes mellitus and multivessel disease were retrospectively analyzed. Patients with previous PCI (n = 79) were compared with patients with no previous PCI (n = 196), and risk-adjusted impact of previous PCI on surgical mortality after CABG was determined using multivariate and propensity score analyses. Results: : All patients underwent off-pump CABG without conversion to cardiopulmonary bypass during operation. Patients with previous PCI had a significantly higher prevalence of history of myocardial infarction, renal dysfunction, and hemodialysis. Rates of surgical mortality were higher in patients with previous PCI (7.6% versus 1.0%, P = 0.008). After multivariate logistic regression analysis including all potential univariate predictors, previous PCI remained a strong predictor of surgical mortality [odds ratio (OR), 6.9; 95% confidence interval (CI), 1.2 to 42.1; P = 0.035]. After matching and regression adjustment by propensity score, the impact of previous PCI on surgical mortality was similar in direction (matching OR, 6.5; 95% CI, 0.8 to 55.0; P = 0.088; regression adjustment OR, 6.3; 95% CI, 1.2 to 33.6; P = 0.031). Conclusions: : Previous PCI increases the risk of surgical mortality after off-pump CABG in diabetic patients with multivessel disease. Answer: The evidence from the provided abstracts suggests that off-pump coronary artery bypass grafting (OPCAB) may be superior to conventional on-pump coronary artery bypass grafting (CABG) in diabetic patients with multivessel disease in several aspects. Several studies have shown that OPCAB is associated with lower mortality rates, both in the short-term and mid-term, compared to on-pump CABG. For instance, one study reported a significantly lower 30-day mortality rate for OPCAB (0.3% vs 4.2%) and a lower 6-month mortality rate (2.3% vs 8.8%) compared to on-pump CABG (PUBMED:23791161). Another study found that OPCAB patients had a significantly lower mortality rate (1.1% vs 3.8%) and displayed less frequent major adverse cardiac and cerebrovascular events (MACCE) (8.3% vs 17.9%) (PUBMED:21167727). OPCAB also seems to reduce the incidence of non-fatal postoperative complications, such as cerebrovascular accidents, the need for postoperative intra-aortic balloon pump, and reexploration for bleeding (PUBMED:27496617). Additionally, OPCAB was associated with less frequent neurologic complications and a less frequent need for hemofiltration (PUBMED:23791161). In terms of long-term outcomes, one study found no significant difference in all-cause mortality at 10 years between off-pump and on-pump groups, suggesting that long-term mortality outcomes may be similar for both procedures (PUBMED:35946398). However, another study indicated that off-pump CABG is superior to sirolimus-eluting stent placement in terms of acute coronary syndrome, target vessel revascularization, and major adverse cardiac and cerebrovascular events in diabetic patients with multivessel coronary artery disease (PUBMED:22316549). It is important to note that the completeness of revascularization is a critical factor for long-term survival, and off-pump CABG with incomplete revascularization was associated with reduced survival compared to complete revascularization, whether off-pump or on-pump (PUBMED:27496617). In summary, the evidence from these studies suggests that OPCAB may offer better short-term and mid-term outcomes compared to on-pump CABG in diabetic patients with multivessel disease, without compromising long-term survival, provided that complete revascularization is achieved.
Instruction: Does primary care referral to an exercise programme increase physical activity one year later? Abstracts: abstract_id: PUBMED:15564275 Does primary care referral to an exercise programme increase physical activity one year later? A randomized controlled trial. Objective: To assess the effectiveness of a primary care referral scheme on increasing physical activity at 1 year from referral. Design Two-group randomized controlled trial recruiting primary care referrals to a borough-based exercise scheme. Setting A local authority borough in the north-west of England. Participants 545 patients defined as sedentary by a primary care practitioner. Intervention Referral to a local-authority exercise referral scheme and written information compared with written information only. Main outcome measures Meeting physical activity target at 12 months following referral, with a secondary outcome measured at 6 months from referral. Results: At 12 months, a non-significant increase of 5 per cent was observed in the intervention compared with control group, for participation in at least 90 minutes of moderate/vigorous activity per week (25.8 versus 20.4 per cent, OR 1.45, 0.84 to 2.50, p = 0.18). At 6 months, a 10 per cent treatment effect was observed which was significant (22.6 versus 13.6 per cent, OR 1.67, 1.08 to 2.60, p = 0.05). The intervention increased satisfaction with information but this did not influence adherence with physical activity. Conclusion: Community-based physical activity referral schemes have some impact on reducing sedentary behaviour in the short-term, but which is unlikely to be sustained and lead to benefits in terms of health. abstract_id: PUBMED:35581538 Linking a peer coach physical activity intervention for older adults to a primary care referral scheme. Background: Physical inactivity has contributed to the current prevalence of many age-related diseases, including type 2 diabetes and cardiovascular disease. Peer coach physical activity intervention are effective in increasing long term physical activity in community dwelling older adults. Linking peer coach physical activity interventions to formal care could therefore be a promising novel method to improve health in inactive older adults to a successful long-term physical activity intervention. Methods: We evaluated the effects of linking a peer coach physical activity intervention in Leiden, The Netherlands to primary care through an exercise referral scheme from July 2018 to April 2020. Primary care practices in the neighborhoods of three existing peer coach physical activity groups were invited to refer patients to the exercise groups. Referrals were registered at the primary care practice and participation in the peer coach groups was registered by the peer coaches of the exercise groups. Results: During the study, a total of 106 patients were referred to the peer coach groups. 5.7% of patients participated at the peer coach groups and 66.7% remained participating during the 1 year follow up. The number needed to refer for 1 long term participant was 26.5. The mean frequency of participation of the referred participants was 1.2 times a week. Conclusion: Linking a peer coach physical activity intervention for older adults to a primary care referral scheme reached only a small fraction of the estimated target population. However, of the people that came to the peer coach intervention a large portion continued to participate during the entire study period. The number needed to refer to engage one older person in long term physical activity was similar to other referral schemes for lifestyle interventions. The potential benefits could be regarded proportional to the small effort needed to refer. abstract_id: PUBMED:22058134 Effect of exercise referral schemes in primary care on physical activity and improving health outcomes: systematic review and meta-analysis. Objective: To assess the impact of exercise referral schemes on physical activity and health outcomes. Design Systematic review and meta-analysis. Data Sources: Medline, Embase, PsycINFO, Cochrane Library, ISI Web of Science, SPORTDiscus, and ongoing trial registries up to October 2009. We also checked study references. Study selection Design: randomised controlled trials or non-randomised controlled (cluster or individual) studies published in peer review journals. Population: sedentary individuals with or without medical diagnosis. Exercise referral schemes defined as: clear referrals by primary care professionals to third party service providers to increase physical activity or exercise, physical activity or exercise programmes tailored to individuals, and initial assessment and monitoring throughout programmes. Comparators: usual care, no intervention, or alternative exercise referral schemes. Results: Eight randomised controlled trials met the inclusion criteria, comparing exercise referral schemes with usual care (six trials), alternative physical activity intervention (two), and an exercise referral scheme plus a self determination theory intervention (one). Compared with usual care, follow-up data for exercise referral schemes showed an increased number of participants who achieved 90-150 minutes of physical activity of at least moderate intensity per week (pooled relative risk 1.16, 95% confidence intervals 1.03 to 1.30) and a reduced level of depression (pooled standardised mean difference -0.82, -1.28 to -0.35). Evidence of a between group difference in physical activity of moderate or vigorous intensity or in other health outcomes was inconsistent at follow-up. We did not find any difference in outcomes between exercise referral schemes and the other two comparator groups. None of the included trials separately reported outcomes in individuals with specific medical diagnoses. Substantial heterogeneity in the quality and nature of the exercise referral schemes across studies might have contributed to the inconsistency in outcome findings. Conclusions Considerable uncertainty remains as to the effectiveness of exercise referral schemes for increasing physical activity, fitness, or health indicators, or whether they are an efficient use of resources for sedentary people with or without a medical diagnosis. abstract_id: PUBMED:26527835 Health professionals' perspectives on exercise referral and physical activity promotion in primary care: Findings from a process evaluation of the National Exercise Referral Scheme in Wales. Background And Objectives: Referring clinicians' experiences of exercise referral schemes (ERS) can provide valuable insights into their uptake. However, most qualitative studies focus on patient views only. This paper explores health professionals' perceptions of their role in promoting physical activity and experiences of a National Exercise Referral Scheme (NERS) in Wales. Design: Qualitative semi-structured group interviews. Setting: General practice premises. Methods: Nine semi-structured group interviews involving 46 health professionals were conducted on general practice premises in six local health board areas. Purposive sampling taking into account area deprivation, practice size and referral rates was employed. Interviews were transcribed verbatim and analysed using the Framework method of thematic analysis. Results: Health professionals described physical activity promotion as important, although many thought it was outside of their expertise and remit, and less important than other health promotion activities such as smoking cessation. Professionals linked decisions on whether to advise physical activity to patients to their own physical activity levels and to subjective judgements of patient motivation. While some described ERS as a holistic alternative to medication, with potential social benefits, others expressed concerns regarding their limited reach and potential to exacerbate inequalities. Barriers to referral included geographic isolation and uncertainties about patient selection criteria, medico-legal responsibilities and a lack of feedback about patient progress. Conclusion: Clinicians' concerns about expertise, priority setting and time constraints should be addressed to enhance physical activity promotion in primary care. Further research is needed to fully understand decision making relating to provision of physical activity advice and use of ERS. abstract_id: PUBMED:31806028 Effectiveness of physical activity promotion and exercise referral in primary care: protocol for a systematic review and meta-analysis of randomised controlled trials. Background: Physical inactivity is the fourth leading risk factor for global mortality. Reducing sedentary behaviour and increasing physical activity are efficacious for improving many physical and mental health conditions including cardiovascular disease, type 2 diabetes and depression. Reducing sedentary behaviour and increasing physical activity can also be effective at reducing obesity; however, sedentary behaviour and reduced physical activity are also associated with mortality independently. Despite this, most adults in the UK do not currently meet the UK Chief Medical Officers' guidelines for weekly physical activity. As most adults visit their general practitioner at least once a year, the primary care consultation provides a unique opportunity to deliver exercise referral or physical activity promotion interventions. This is a protocol for a systematic review of randomised controlled trials for the effectiveness of physical activity promotion and referral in primary care. Methods: A comprehensive literature search of Embase, MEDLINE (Ovid), Web of Science (Core Collection), Scopus, CINAHL, PsycINFO, and The Cochrane Library (CENTRAL) will be conducted for studies with a minimum follow-up of 12 months that report physical activity as an outcome measure (by either self-report or objective measures) including an intention to treat analysis. The authors will screen papers, first by title and abstract and then by full text, independently assess studies for inclusion, appraise risk of bias and extract data. The quality of the evidence will be assessed using the GRADE (Grading of Recommendations Assessment, Development and Evaluations) approach. The primary outcome will be participation in physical activity at 12 months. Pooled effects will be calculated using random effects models. Results will be submitted for publication in a peer-reviewed journal and for presentation at UK national primary care conferences. Discussion: This systematic review and meta-analyses will summarise the evidence for the effectiveness of physical activity promotion and referral as interventions for improving physical activity, as well as whether studies using objective measures of physical activity have similar effects to those studies using self-report measures. This knowledge has importance for primary care clinicians, patients and, given the focus of the recent NHS long-term plan on preventive medicine, those making policy decisions. Systematic Review Registration: The protocol is registered with PROSPERO the international prospective register of systematic reviews, ID CRD42019130831. abstract_id: PUBMED:32221668 Trekstock RENEW: evaluation of a 12-week exercise referral programme for young adult cancer survivors delivered by a cancer charity. Purpose: To evaluate the uptake and effect of RENEW, a 12-week exercise referral programme for young adult cancer survivors delivered by Trekstock, a UK-based cancer charity. Methods: The RENEW programme provides one-to-one individually tailored support from a level-4 cancer-rehabilitation-qualified gym instructor, free gym membership and access to information resources online. Objective and self-report data on cardiorespiratory function, strength, body composition, fatigue, sleep quality and general health-related quality of life (HRQoL) was collected from participants before the programme (week 0), immediately after (week 12) and 1 month later (week 16). Results: Forty-eight young adults (83% female; mean age, 29 years) with a history of cancer took part within the 12-week programme and completed the evaluation measures. Physical activity (PA) levels significantly increased following the programme and remained raised at follow-up. Improvements in physical function were significant: peak expiratory flow (mean change, 30.96, p = 0.003), sit-and-reach test (mean change, 6.55 ± 4.54, p &lt; 0.0001), and 6-mine-walk test (mean change, 0.12 ± 0.04, p &lt; 0.0001). No significant changes in BMI, weight or muscle mass were observed. Improvements in fatigue, sleep and HRQoL were observed across the programme and at follow-up (mean change, weeks 0-16; 8.04 ± 1.49 p &lt; 0.01; 1.05 ± 0.49 p &lt; 0.05; and - 0.9 ± 0.46 p = 0.051, respectively). Changes in self-efficacy to exercise and motivations to exercise were not observed at 12 weeks or at follow-up. Conclusions: Results suggest that the RENEW exercise referral programme has a positive impact upon some domains of physical function and well-being among young adult cancer survivors. Implication For Cancer Survivors: Exercise referral programmes delivered by charity organisations are one means by which PA behaviour change support may be widely disseminated to young adult cancer survivors. Health professionals and charitable bodies specialising in the care of young adults with cancer should look to address factors which prevent engagement and uptake of 'real-world' PA interventions such as the RENEW programme. abstract_id: PUBMED:32802359 An evidence-based structured one-year programme to sustain physical activity in patients with heart failure in primary care: A non-randomized longitudinal feasibility study. Aim: The primary objective of this non-randomized feasibility study was to test a 1-year model programme for sustaining/increasing patients' motivation to perform daily physical activity. Design: Non-randomized longitudinal feasibility study with a one-group repeated measures design. Methods: The study took place at a primary care centre in mid-Sweden in 2017-2018. The model programme included individual and group-based support, individualized physical activity prescriptions, a wrist-worn activity tracker and an activity diary. The main outcomes were the participants' perceptions of programme feasibility and scores on the Exercise Self-Efficacy Scale. Results: Seven patients were recruited. Six patients completed the programme that was perceived to imply learning, motivation and support. Compared with baseline, the median score of the Exercise Self-Efficacy Scale improved 3 months after participants completed the programme. abstract_id: PUBMED:9778097 Promoting physical activity: issues in primary health care. In the past few years, alliances between primary care and leisure services have become a popular strategy for exercise promotion in the England. 'GP Referral Schemes' typically involve referral to a local leisure centre by a General Practitioner (GP), whereupon patients are inducted into a 10-12 week exercise programme at a reduced fee. Referred patients are usually white, middle aged and apparently healthy women, with the main reason for referral being overweight. The schemes are characterised by their lack of formal evaluation, making conclusions about effectiveness impossible. In the US, physicians advice to exercise has been the focus of interventions to date. Two controlled trials, one randomised, have provided some evidence that exercise behaviour can be changed, at least in the short term. The ability to recruit sufficient numbers of patients, who have potentially the most to gain from increased physical activity, is the biggest barrier to primary care based interventions. One contributing factor to this problem, may be doctors and nurses' knowledge about the benefits of physical activity. abstract_id: PUBMED:32429011 Overview of Physical Activity Counseling in Primary Care. Physical inactivity is a global health burden. Promoting physical activity in primary care through physical activity counseling is an effective intervention. This article provides an overview of and perspectives on physical activity counseling in primary care. The identification of physical inactivity as a health problem may increase the awareness of physical inactivity among patients and primary care providers, which will lead to an action plan. The contents of physical activity counseling should be based on evidence-based recommendations. Safety issues should be evaluated appropriately to optimize the utilization of primary care services. Physical activity counseling methods should be "tailored" for an individual using appropriate counseling methods and setting-specific resources. Multilevel barriers to physical activity counseling in primary care (i.e., healthcare providers, patients, and systems) must be addressed. Setting-specific strategies to overcome these barriers should be implemented to maximize the effectiveness of physical activity counseling in primary care. abstract_id: PUBMED:15780323 Approaches to increase physical activity: reviewing the evidence for exercise-referral schemes. Objective: To review current evidence of effectiveness for exercise-referral schemes. Methods: Studies were identified from MEDLINE 1966-2002, EMBASE 1980-2002 and CINHAL 1982-2002 and bibliographies of relevant papers. Inclusion Criteria: Interventions providing access to exercise activities and/or facilities, experimental or quasi-experimental studies, studies with a control group, interventions based in a primary care setting, and interventions including an exercise component with measures of physical activity levels. Conclusions: Exercise-referral schemes appear to increase physical activity levels in certain populations, namely individuals who are not sedentary but already slightly active, older adults and those who are overweight (but not obese). However, increases in the level of physical activity may not be sustained over time. Further studies are required to assess effectiveness in a range of populations and for different activities, and to find strategies to increase long-term adherence. Answer: The evidence on whether primary care referral to an exercise programme increases physical activity one year later is mixed. A randomized controlled trial found that at 12 months, there was a non-significant increase of 5% in the intervention group compared to the control group for participation in at least 90 minutes of moderate/vigorous activity per week, suggesting that community-based physical activity referral schemes may have some impact on reducing sedentary behaviour in the short-term but are unlikely to be sustained and lead to long-term health benefits (PUBMED:15564275). Another study evaluating the effects of linking a peer coach physical activity intervention to primary care through an exercise referral scheme found that only a small fraction of the estimated target population was reached, but of those who participated, a large portion continued to participate during the entire study period (PUBMED:35581538). A systematic review and meta-analysis assessing the impact of exercise referral schemes on physical activity and health outcomes indicated that compared with usual care, exercise referral schemes showed an increased number of participants achieving 90-150 minutes of physical activity of at least moderate intensity per week, but the evidence of a between-group difference in physical activity of moderate or vigorous intensity or in other health outcomes was inconsistent at follow-up (PUBMED:22058134). Health professionals' perspectives highlighted that while some saw exercise referral schemes as a holistic alternative to medication with potential social benefits, others expressed concerns regarding their limited reach and potential to exacerbate inequalities (PUBMED:26527835). In summary, while there is some evidence that exercise referral schemes can increase physical activity levels in the short term, the long-term effectiveness of these schemes in sustaining increased physical activity levels is uncertain, and their impact may vary depending on the population and the nature of the scheme (PUBMED:15564275, PUBMED:35581538, PUBMED:22058134, PUBMED:26527835).
Instruction: Are we doing our best to educate travelers about the risks of acute mountain sickness? Abstracts: abstract_id: PUBMED:34687872 Morbidity among Israeli backpack travelers to tropical areas. Background: Travelers to tropical areas may be susceptible to illness or injuries. This study aims to assess morbidity among travelers during their travel and compare those who became ill or were injured with those who did not. Methods: This prospective study included 400 travelers who were counselled by a physician in pre-travel clinics in central Israel between 2017 and 2018. Participants were interviewed within a month after their return regarding morbidity during travel, including health problems that started one week following their return. Results: Most travelers (N = 320, 80%) reported at least one illness or injury. Illnesses/injuries were more common among females than males (84.9% vs. 75.1%, p = 0.01), travel periods longer than 30 days (87.7% vs. 77.2%, p = 0.03), and travelers accompanied by their friends or solo travelers compared with those who traveled with their family/partner (83.8% and 70.0%, respectively, p = 0.002). The most common complaint was diarrhea (N = 159, 49.6%), followed by high-altitude sickness (N = 118, 36.9%) and fever (N = 100, 31.2%). Altitude sickness symptoms were more common in females than in males (58.9% vs. 41.0%, p = 0.006) and in those who ascended rapidly in comparison to those who ascended gradually (58.7% vs. 44.6%, p = 0.04). Animal injury was reported by 30 (7.5%) participants yet only eight (27.0%) received medical care, seven of whom (23.3%) were vaccinated against rabies. Conclusions: Being a female, traveling with friends or alone and longer travel periods were associated with illness/injury. Practitioners at pre-travel clinics should inform travelers of the possible risks including the potential severe consequences of rabies and altitude sickness. abstract_id: PUBMED:17437472 Are we doing our best to educate travelers about the risks of acute mountain sickness? An on-site prospective study in the Himalayas. Background: Acute mountain sickness (AMS) affects almost half of those ascending to heights over 3,000 m. The knowledge and practices concerning AMS have not been adequately studied. Methods: We conducted a prospective on-site study in the Himalayas. Between July and October 2004, we approached a convenience sample of trekkers during or after their ascent to high altitudes and provided them with detailed questionnaires regarding knowledge of and personal experience with AMS. Results: One hundred ninety-two travelers (of whom 106 were Israelis and 86 Western Europeans) participated in the study. One hundred thirty travelers (68%) had received any written/oral information about AMS prior to their departure. The most common source of information among Israeli travelers was the travel clinic (65%), compared to only 9% among Europeans. Overall, 89% were found to have good "basic knowledge" about symptoms of AMS (defined as headache plus &gt; or =2 of the following: fatigue, dizziness, nausea, insomnia). Travelers who had received information about AMS prior to their trip were significantly more knowledgeable about symptoms of AMS (p= 0.0001), and treatment options, such as oxygen, medications, and rest (p= 0.023, p= 0.024, p= 0.011, respectively). Only 2% of the Israelis versus 16% of the Europeans (p= 0.0004) knew that O(2) was a treatment option. Forty-seven percent suffered from AMS, which in 50% began at 3,000 m. About 33 of 90 (37%) of those who suffered AMS symptoms had acetazolamide with them, but only 14 actually used it as treatment. Conclusions: Knowledge of AMS among climbers to high altitudes should be fostered, with specific emphasis on practical advice. abstract_id: PUBMED:15996449 Pretravel health advice among international travelers visiting Cuzco, Peru. Background: Cuzco, a Peruvian city of historical interest located 3,326 m above sea level, is a frequent destination for tourists. We conducted a descriptive study to assess the extent and sources of pretravel health advice received by international travelers before their arrival to Cuzco. Methods: Data were collected as part of a health survey among travelers. Between August and November 2002, travelers between 15 and 65 years old were invited to fill out a questionnaire in the departing area of Cuzco's international airport. Results: A total of 5,988 travelers participated. The mean age was 35.4 years (SD 11.4 yr); 50.6% were female and 50.8% were single. Tourism was the reason for traveling in 90.2% of the participants, and 89.3% of them were traveling with companions. Pretravel health information was received by 93.6%. The median number of information sources was two, with books (41.5%), travel medicine clinics (38.8%), the Internet (23.3%), and general practitioners (22.7%) as the main sources. Most frequently received recommendations were about safe food and water consumption (85%), use of insect repellents (66.0%), sunburn protection (64.4%), and condom use (22%). Only 16.5% took medication to prevent altitude sickness, and 14.2% took medication to prevent traveler's diarrhea. Variables independently associated with receiving pretravel health information from a health care professional were female gender, country of residence other than the United States, length of stay in Cuzco &gt; 7 days, length of stay in other Peruvian cities &gt; 7 days, tourism as the main reason for visiting Cuzco, traveling with companions, and consulting of more than one source of information. Conclusions: Most travelers arriving to Cuzco had received pretravel health information, and the majority obtained it from more than one source. Recommendations addressed for specific health risks, such as altitude sickness prophylaxis, were received by few travelers. abstract_id: PUBMED:9815315 Inadequacies in Health Recommendations Provided for International Travelers by North American Travel Health Advisors. The rise of international travel has increased the need for more, improved travel advice from physicians and public health facilities. The quality of the health information given has not been examined on a large-scale basis by many studies, however. Surveys in Canada, Switzerland, and the United States, for example, report that only 20% to 50% of practitioners could give accurate information regarding immunization and prophylaxis about travel-related disease. Anonymous surveys were sent to 1165 American and 96 Canadian public health units and travel clinics. Using five scenarios on travel to developing countries, each source was asked to complete a standardized form giving their recommendations for immunization, antimalarials, travelers' diarrhea, and other travel issues. Of the American respondents, 60% were physicians equally distributed among private practice, university, and corporate clinics; nurses comprised 75% of the Canadian respondents, primarily from public health clinics. The number of travelers counseled per year ranged from 3 to 40,000 (American mean, 448; Canadian mean, 2180). Depending on the scenario, 20 to 75% of the immunization groups recommended were inadequate or inappropriate: most frequently, lack of tetanus/polio boosters; indiscriminant use of yellow fever/cholera vaccines; haphazard advice about meningococcal, rabies, and typhoid vaccines; and a lack of consideration of measles in young adults. Of the antimalarial recommendations given, 20 to 60% were incorrect, including prescribing medication for nonrisk areas, failure to recognize chloroquine-resistant areas, and failure to understand the use of, or contraindications to, mefloquine. Frequently, acclimatization, altitude sickness, sunscreens, and safe-sex issues were omitted. The prevention and treatment of travelers' diarrhea were adequately covered, however. Pre-travel advice given by North American health advisors shows a considerable variability in the accuracy and extent necessary for effective travel disease prevention and treatment. Despite the growing efforts to further educate those responsible, higher quality of health advice needs to become a priority. abstract_id: PUBMED:22776382 Acute mountain sickness impact among travelers to Cusco, Peru. Background: Increasing numbers of travelers are visiting high altitude locations in the Andes. The epidemiology of acute mountain sickness (AMS) among tourists to high altitude in South America is not well understood. Methods: A cross-sectional study to evaluate the epidemiology, pre-travel preparation, and impact of AMS among travelers to Cusco, Peru (3,400 m) was performed at Cusco's International Airport during June 2010. Foreign travelers, 18 years or older, staying 15 days or less, departing Cusco were invited to participate. Demographic, itinerary, and behavioral data were collected. The Lake Louise Clinical score (LLCS) was used to assess AMS symptoms. Results: In total, 991 travelers participated, median age 32 years (interquartile range 25-49), 55.5% female, 86.7% tourists, mostly from the United States (48.2%) and England (8.1%). Most (76.7%) flew from sea level to Cusco and 30.5% visited high altitude in the previous 2 months. Only 29.1% received AMS advice from a physician, 19% recalled advice on acetazolamide. Coca leaf products (62.8%) were used more often than acetazolamide (16.6%) for prevention. AMS was reported by 48.5% and 17.1% had severe AMS. One in five travelers with AMS altered their travel plans. Travelers older than 60 years, with recent high altitude exposure, who visited lower cities in their itinerary, or used acetazolamide were less likely to have AMS. Using coca leaf products was associated with increased AMS frequency. Conclusions: AMS was common and adversely impacted plans of one in five travelers. Acetazolamide was associated with decreased AMS but was prescribed infrequently. Other preventive measures were not associated with a decrease in AMS in this population. Pre-travel preparation was suboptimal. abstract_id: PUBMED:14733848 Prevention and Treatment of High-altitude Illness in Travelers. High-altitude illness is the collective term for acute mountain sickness (AMS), high-altitude cerebral edema (HACE), and high-altitude pulmonary edema (HAPE). These syndromes can affect unacclimatized travelers shortly after ascent to high altitude (especially higher than 2500 m). AMS is relatively common and usually is mild and self-limiting; HACE and HAPE are uncommon but life-threatening. Gradual ascent is the best strategy for preventing or minimizing high-altitude illness, although chemoprophylaxis may be useful in some situations. Acetazolamide remains the chemoprophylactic agent of choice, although other drugs, such as gingko biloba, are being investigated. Immediate descent remains the cornerstone of treatment for HACE and HAPE, although pharmacologic and hyperbaric therapies may facilitate this process. abstract_id: PUBMED:17995530 Environmental hazards in Nepal: altitude illness, environmental exposures, injuries, and bites in travelers and expatriates. Background: Adventure travel necessarily places travelers at risk of environmental hazards. We assessed the burden of "environmental" hazards among a cohort of travelers and expatriates presenting to a large travel clinic in Nepal. Methods: Data on travelers and expatriates seen at the Canadian International Water and Energy Consultants (CIWEC) clinic in Kathmandu were prospectively collected and entered into the GeoSentinel Surveillance Network database. Data on individuals receiving predefined diagnoses related to environmental hazards were extracted and analyzed. Results: Of 10,499 travelers and 4,854 expatriates in the database, 2,160 were diagnosed with 2,533 environment-related illnesses. Injuries were common among both travelers and expatriates [N= 788 (6.1%) and 328 (4.9%), respectively], while altitude illness was seen almost exclusively in travelers [N= 611 (4.7%) vs N= 8 (0.1%)]. Factors independently associated with environmental diagnoses include male gender (p &lt; 0.001), traveling for tourism (p &lt; 0.001), and lack of pre-travel advice (p= 0.043). Three percent of travelers and 2% of expatriates presenting to CIWEC sustained a bite wound or required rabies postexposure prophylaxis. Injured travelers were less likely than others to have obtained pre-travel advice (p= 0.003), while those who sustained bite wounds were more likely to have received pre-travel advice (p &lt; 0.001). Conclusions: Environmental hazards are important causes of morbidity and potential mortality among adventure travelers and expatriates. Current pre-travel interventions are missing certain risk groups entirely and failing to have the desired educational impact in others. abstract_id: PUBMED:19278350 Should travelers with hypertension adjust their medications when traveling to high altitude? Luks, Andrew M. Should travelers with hypertension adjust their medications when traveling to high altitude? High Alt. Med. Biol. 10:11-15, 2009.-When advising travelers to high altitude, clinicians should consider how their patient's underlying medical conditions may be affected by the high altitude and whether changes are necessary in their medical regimen. Given the high prevalence of hypertension in the general population, it is a common medical condition that is often considered in this regard. Little information exists, however, to guide the management of this condition at high altitude. This review considers the available data regarding systemic blood pressure responses and the risks of worsening blood pressure control at high altitude and which antihypertensive medications are more effective for blood pressure control in this environment. The available evidence suggests blood pressure rises to a modest extent in patients with mild to moderate hypertension upon acute ascent to high altitude, but there is no clear evidence of an increased risk of complications due to these increased pressures. There is, however, marked interindividual variability in blood pressure responses at altitude; as a result, patients with poorly controlled or very labile hypertension should monitor their blood pressure upon ascent to high altitude and travel with a plan for altering their medical regimen in the event marked or symptomatic blood pressure elevations occur. abstract_id: PUBMED:17447710 Mountain sickness knowledge among foreign travelers in Cuzco, Peru. Objective: Cuzco, Peru, is host to a dangerous mix of high altitude and rapid access from low elevation, which results in a high prevalence of acute mountain sickness (AMS). Thus, it is important that travelers entering Cuzco understand the basics of AMS. To this end, we assessed travelers' awareness of AMS, especially the resources used to obtain this knowledge. With this information we make recommendations with regard to better avenues for distribution of AMS information. Methods: One hundred foreign travelers, representing an international population, completed a 45-item written questionnaire that was subsequently analyzed. Results: Over half of the respondents (51%) rated their knowledge of AMS as "low" or "none." Furthermore, very few respondents knew about acetazolamide (Diamox) as prophylaxis for (9%) or treatment of AMS (5%). People who consulted physicians for AMS information were more likely to know about the utility of acetazolamide than those who depended on a guidebook (P = .0266) but were less likely to correctly identify the symptomatology of AMS (P = .047). While AMS knowledge was poor, the majority of travelers (90%) indicated compliance with recommended pretravel vaccinations. Conclusions: This survey adds to the body of knowledge that indicates a lack of AMS knowledge among travelers. In addition, this survey identifies 2 potential avenues for improved dispersal of information: 1) guidebooks for countries of concern and 2) national health agency Web sites linked to information on pretravel vaccinations. Recommendations are made to utilize these commonly accessed resources to increase AMS information distribution to the general populous. abstract_id: PUBMED:26821507 HIGH ALTITUDE EXPOSURE IN TRAVELERS WITH PREEXISTING MEDICAL CONDITIONS The number of travelers visiting high altitude regions is increasing. High altitude areas have become more accessible in recent years, and reaching areas at altitudes over 3000 meters above sea level has become more common than before. In many circumstances older travelers, who are more likely to have pre-existing chronic diseases and for whom altitude and hypoxic condition might be a risk, reach high altitudes in a fast and tight schedule, therefore having a shorter time for adaptation and acclimatization. Pre-travel consultation, including the discussion of chronic illnesses and medication use, is therefore crucial for the reduction of the risk of acute mountain sickness and preventing the deterioration of their pre-existing medical conditions. Answer: Based on the provided abstracts, it appears that there are significant gaps in the education of travelers regarding the risks of acute mountain sickness (AMS). A study conducted in the Himalayas found that while 89% of travelers had good "basic knowledge" about the symptoms of AMS, those who had received information about AMS prior to their trip were significantly more knowledgeable about symptoms and treatment options (PUBMED:17437472). However, only a small percentage of travelers knew that oxygen was a treatment option, and less than half of those who suffered from AMS symptoms had acetazolamide with them, with even fewer actually using it as treatment. In Cuzco, Peru, a popular high-altitude destination, most travelers had received pretravel health information, but recommendations for specific health risks such as altitude sickness prophylaxis were received by few travelers (PUBMED:15996449). Another study in Cuzco reported that only 29.1% of travelers received AMS advice from a physician, and coca leaf products were used more often than acetazolamide for prevention, despite the latter being associated with decreased AMS (PUBMED:22776382). A study on health recommendations provided for international travelers by North American travel health advisors highlighted considerable variability in the accuracy and extent of advice necessary for effective travel disease prevention and treatment (PUBMED:9815315). This suggests that pre-travel advice given by health advisors may not always be adequate. Furthermore, a survey among travelers in Cuzco indicated a lack of AMS knowledge, with over half rating their knowledge as "low" or "none" and very few knowing about acetazolamide for prophylaxis or treatment of AMS (PUBMED:17447710). In summary, while some travelers receive adequate information about AMS, many do not, and there is a need for improved education and dissemination of information regarding the risks and prevention of AMS. Pre-travel consultations should emphasize the importance of gradual ascent, the use of prophylactic medications like acetazolamide, and the need for travelers with pre-existing medical conditions to take special precautions (PUBMED:14733848, PUBMED:26821507). The evidence suggests that we are not doing our best to educate all travelers about the risks of acute mountain sickness, and there is room for improvement in pre-travel health advisories and education (PUBMED:17995530).
Instruction: Does response to hormonal therapy predict presence or absence of endometriosis? Abstracts: abstract_id: PUBMED:18262150 Does response to hormonal therapy predict presence or absence of endometriosis? Study Objective: We sought to determine whether relief of chronic pelvic pain symptoms with hormonal treatment is predictive of presence or absence of endometriosis or location of endometriosis within the pelvis. Design: A retrospective, chart review (Canadian Task Force classification II-3). Setting: A private practice endometriosis referral center. Patients: Patients undergoing laparoscopy for pelvic pain with at least 3 months of previous hormonal treatment. Interventions: Laparoscopy. Measurements And Main Results: Response to hormonal treatment was ascertained by patient report in medical records. Presence and location of endometriosis was recorded based on operative findings and pathology reports. In all, 486 patients with chronic pelvic pain were identified, of whom 104 met our inclusion criteria. Endometriosis was diagnosed at laparoscopy in 88 (85%) patients. Endometriosis was identified at laparoscopy in 46 (81%) of 57 patients who failed to respond to hormonal therapy compared with 41 (87%) of 47 patients who previously responded to hormonal therapy (p = .37). Using final pathology as basis of diagnosis, 31 (67%) of 46 responders, and 39 (68%) of 57 nonresponders had endometriosis (p = .91). When data were analyzed by anatomic site of endometriosis, no significant difference was noted in response to preoperative hormonal therapy. Conclusion: Relief of chronic pelvic pain symptoms, or lack of response, with preoperative hormonal therapy is not an accurate predictor of presence or absence of histologically confirmed endometriosis at laparoscopy. Patients with chronic pelvic pain may benefit from laparoscopy and no judgment regarding diagnosis should be made without this evaluation. abstract_id: PUBMED:17545798 Hormonal therapy of endometrial stromal sarcoma. Purpose Of Review: Low-grade endometrial stromal sarcomas are steroid receptor positive tumors with slow tumor progression and high recurrence rates, which lack established treatment protocols. We present an update on hormonal therapy options. Recent Findings: In the past, hormonal therapy consisted of progestins for advanced/recurrent/metastatic low-grade endometrial stromal sarcomas. Aromatase inhibitors and gonadotropin-releasing hormone analogues have become new effective alternatives for first and second line treatment. The high recurrence rates after short disease free intervals in low-grade endometrial stromal sarcoma patients were partly due to inadvertent growth stimulation during estrogen-containing hormone replacement therapy and tamoxifen treatment, which - according to current knowledge - are contraindicated. Recently, hormonal therapy has been introduced for the prevention of recurrences. Aromatase inhibitors are becoming the treatment of choice, since progestins are poorly tolerated due to side effects. The effective duration of preventive hormonal therapy is still undetermined. Summary: Hormonal therapy with progestins, aromatase inhibitors and gonadotropin-releasing hormone analogues has become an effective treatment alternative to radiation and chemotherapy for low-grade endometrial stromal sarcoma patients. Preventive hormonal therapy is of particular interest in the setting of concomitant endometriosis. abstract_id: PUBMED:38300227 Bacterial infection in endometriosis: a silver-lining for the development of new non-hormonal therapy? The pathogenesis of endometriosis is a hotly debated topic, yet still cloaked in multiple layers of hypothetical theories. A recent report raises the possibility that bacterial infection, especially those of the genus Fusobacterium, may be the cause of endometriosis, at least in certain women. More importantly, the demonstration that treatment with broad-spectrum antibiotics significantly reduced the size of lesions in a mouse endometriosis model rekindles the hope for new non-hormonal treatments. The development of new therapies has been plagued by strings of unsuccessful clinical trials over the last two decades. Is this antibiotic therapy, a silver lining for the research and development of non-hormonal drugs for endometriosis? abstract_id: PUBMED:27067871 Postoperative hormonal therapy after surgical excision of deep endometriosis. The clinical management of women with deep peritoneal endometriosis remains controversial. The debate focuses mainly on the precise role of hormonal medical treatment and surgery and on the most suitable surgical technique to be used. In particular, considering the risks of second-line surgery, prevention of recurrences after first-line surgery is a priority in this context. Post-surgical medical therapy has been advocated to improve the effectiveness of surgery and prevent recurrences. However, adjuvant therapy, i.e. a short course of 3-6 months of hormonal therapy after surgery, has been proven to be of limited or no benefit for endometriosis in general and for deep peritoneal endometriosis in particular. On the other hand, two cohort studies suggest a beneficial effect of prolonged hormonal therapy after surgery for deep endometriosis. Even if this evidence is too weak to confidently advocate systematic administration of prolonged medical therapy after surgery, we argue in favour of this approach because of the strong association of deep endometriosis with other disease forms. In fact, women operated on for deep endometriosis may also face recurrences of endometriomas, superficial peritoneal lesions and pelvic pain in general. The demonstrated high effectiveness of prolonged postoperative therapy for the prevention of endometriomas' formation and dysmenorrhea recurrence should thus receive utmost consideration in the decision-making process. abstract_id: PUBMED:31508191 HORMONAL THERAPY IN WOMEN OF REPRODUCTIVE AGE WITH ENDOMETRIOSIS: AN UPDATE. Medical therapy of endometriosis is under continuous reevaluation. Hereby we updated the drugs currently available or under investigation for the hormonal treatment of endometriosis. abstract_id: PUBMED:125952 On the structural changes induced in adenomyosis uteri and endometriosis externa by hormonal therapy (author's transl) This study was undertaken to determine whether the hormonal sensitivity of the endometrium might be a measure of the effectiveness of hormonal therapy for adenomyosis and endometriosis. Accordingly, the effects of endogenous and exogenous hormones on the endometrium, adenomyosis, and endometriosis were correlated. The results revealed that, depending on where the ectopic endometrial tissue was located and on the type (duration and intensity) of hormonal treatment, the functional response of the endometriosis varied from that of the endometrium. The variations, however, can be predicted. The response of adenomyosis to hormonal stimulation was most like that of the endometrium. Endometriosis of the ovary often revealed an excessive response to stimulation, especially after gestagens. The other types of extrauterine foci of endometriosis, however, reacted only weakly to hormonal therapy. Important prognostic consequence for therapy can be drawn from the results. abstract_id: PUBMED:31416164 Hormonal Replacement Therapy in Menopausal Women with History of Endometriosis: A Review of Literature. Hormonal replacement therapy (HRT) is effective in treating the symptoms of menopause. Endometriosis is defined as the presence of functional endometrial tissue outside the uterine cavity with a tendency towards invasion and infiltration. Being an estrogen-dependent disease, it tends to regress after menopause. Nevertheless, it affects up to 2.2% of postmenopausal women. Conclusive data are not available in the literature on the appropriateness of HRT in women with endometriosis or a past history of the disease. The hypothesis that exogenous estrogen stimulation could reactivate endometriotic foci has been proposed. The aim of this state-of-the-art review was to revise the current literature about endometriosis in perimenopause and menopause and to investigate the possible role of HRT in this setting of patients. An electronic databases search (MEDLINE, Scopus, ClinicalTrials.gov, EMBASE, Sciencedirect, the Cochrane Library at the CENTRAL Register of Controlled Trials, Scielo) was performed, with the date range of from each database's inception until May 2019. All of the studies evaluating the impact of different HRT regimens in patients with a history of endometriosis were selected. 45 articles were found: one Cochrane systematic review, one systematic review, five narrative reviews, two clinical trials, two retrospective cohort studies, 34 case reports and case series. Some authors reported an increased risk of malignant transformation of endometriomas after menopause in patients assuming HRT with unopposed estrogen. Low-quality evidence suggests that HRT can be prescribed to symptomatic women with a history of endometriosis, especially in young patients with premature menopause. Continuous or cyclic combined preparations or tibolone are the best choices. HRT improves quality of life in symptomatic post-menopausal women, who should not be denied the replacement therapy only due to their history of endometriosis. Based on low-grade literature evidence, we recommend to prescribe combined HRT schemes; tibolone could be considered. abstract_id: PUBMED:27079920 Hormonal therapy after the operation for catamenial pneumothorax - is it always necessary? Background: Our recent clinical observations put into question the routine hormonal therapy for pneumothorax recurrence prevention, in patients operated for catamenial pneumothorax (CP). Methods: Retrospective review of the treatment of four women operated for CP in a recent 32-months period. Results: The four presented patients with CP represent 4.8 % of the overall number of patients operated for spontaneous pneumothorax and 19 % of women operated for pneumothorax in the same period. In all patients, typical multiple diaphragm holes existed. The involved part of the diaphragm was removed with diaphragm suture in three patients, whilst in one patient, a diaphragm placation was done. Endometriosis was histologically confirmed in two patients. During the follow-up period of 6-43 months, none of the patients underwent a postoperative hormonal therapy for different reasons, and in none of them the pneumothorax recurrence occurred. Conclusion: The clinical course of these patients, with the absence of the pneumothorax recurrence despite the omission of the hormonal treatment, suggests that the appropriateness of the routine hormonal treatment with gonadotrophin-releasing hormone analogues for 6-12 months, should be reconsidered and re-evaluated in further studies. abstract_id: PUBMED:10757339 Hormonal pathology of the endometrium. The endometrial tissue is a sensitive target for steroid sex hormones and is able to modify its structural characteristics with promptness and versatility. This article discusses briefly endogenous hormonal effects (cyclic changes, luteal phase defect, unopposed estrogen effect) and describes the histologic patterns encountered in the most commonly used hormone therapies: oral contraceptives, ovulation stimulation, hormone replacement therapy, and antitumoral hormone therapy. Oral contraceptives exert a predominant progestational effect on the endometrium, inducing an arrest of glandular proliferation, pseudosecretion, and stromal edema followed by decidualized stroma with granulocytes and thin sinusoidal blood vessels. Prolonged use results in progressive endometrial atrophy. Ovulation induction therapy accelerates the maturation of the stroma and is often associated with a discrepancy between early secretory glands and an edematous or decidualized stroma with spiral arterioles. Hormone replacement therapy with estrogen alone may result in continuous endometrial proliferation, hyperplasia, and neoplasia. The use of both estrogen and progesterone elicits a wide range of histologic patterns, seen in various combinations: proliferative and secretory changes, often mixed in the same tissue sample; glandular hyperplasia (in polyps or diffuse) ranging from simple to complex atypical; stromal hyperplasia and/or decidual transformation; epithelial metaplasia (eosinophilic, ciliated, mucinous); and inactive and atrophic endometrium. Progesterone therapy for endometrial hyperplasia and neoplasia induces glandular secretory changes, decidual reaction, and spiral arterioles. Glandular proliferation is usually arrested, but neoplastic changes may persist and coexist with secretory changes. Lupron therapy produces a shrinking of uterine leiomyomas by accelerating their hyaline degeneration, similar to that in postmenopausal involution. It generally produces endometrial atrophy. Tamoxifen for breast carcinoma has an estrogen agonist effect on the uterus in approximately 20% of patients, who develop endometrial polyps, glandular hyperplasia, adenomyosis, and/or leiomyomata. Both endometrioid and nonendometrioid carcinomas are seen, often in polyps. Their causal relationship to tamoxifen therapy is debatable. abstract_id: PUBMED:36959070 Decrease of dysmenorrhoea with hormonal treatment is a marker of endometriosis severity. Research Question: Is a decrease in dysmenorrhoea after suppressive hormonal therapy a marker of the endometriosis phenotype and of greater disease severity? Design: Retrospective observational cohort study conducted in a French university hospital, between January 2004 and December 2019. Non-pregnant women aged younger than 42 years, who tested for dysmenorrhoea relief after suppressive hormonal therapy before surgery, and who had histological confirmation of endometriosis, were included. The comparisons were carried out according to the results of the suppressive hormonal test. Results: Of the 578 histologically proven endometriosis patients with preoperative pain symptoms, the rate of dysmenorrhoea decrease after suppressive hormonal therapy was 88.2% (n = 510). These patients had a higher incidence of deep infiltrating endometriosis (DIE) intestinal lesions (45.7% [233/510] versus 30.8% [21/68], P = 0.01) and an increased rate of multiple DIE lesions (two or more) (72.8% [287/394] versus 56.4% [22/39], P = 0.02). After multivariate analysis, decrease of dysmenorrhoea after suppressive hormonal therapy remained significantly associated with the severe DIE phenotype (adjusted OR 3.9, 95% CI 2.0 to 7.6, P &lt; 0.001). Conclusion: In women with endometriosis, a decrease of dysmenorrhoea after suppressive hormonal therapy is associated with the DIE phenotype and is a marker of greater severity. Answer: The response to hormonal therapy does not accurately predict the presence or absence of endometriosis. A study involving 486 patients with chronic pelvic pain, of whom 104 met the inclusion criteria for the study, found that endometriosis was diagnosed at laparoscopy in 85% of patients. Among these, 81% of patients who failed to respond to hormonal therapy and 87% of patients who previously responded to hormonal therapy were found to have endometriosis during laparoscopy (p = .37). Additionally, when final pathology was used as the basis of diagnosis, 67% of responders and 68% of nonresponders had endometriosis (p = .91). The study concluded that relief of chronic pelvic pain symptoms, or lack thereof, with preoperative hormonal therapy is not an accurate predictor of the presence or absence of histologically confirmed endometriosis at laparoscopy (PUBMED:18262150). Therefore, patients with chronic pelvic pain may benefit from laparoscopy, and no judgment regarding the diagnosis of endometriosis should be made without this evaluation, regardless of their response to hormonal therapy.
Instruction: Can the presence of endoscopic high-risk stigmata be predicted before endoscopy? Abstracts: abstract_id: PUBMED:24953205 A simplified clinical risk score predicts the need for early endoscopy in non-variceal upper gastrointestinal bleeding. Background: Pre-endoscopic triage of patients who require an early upper endoscopy can improve management of patients with non-variceal upper gastrointestinal bleeding. Aims: To validate a new simplified clinical score (T-score) to assess the need of an early upper endoscopy in non variceal bleeding patients. Secondary outcomes were re-bleeding rate, 30-day bleeding-related mortality. Methods: In this prospective, multicentre study patients with bleeding who underwent upper endoscopy were enrolled. The accuracy for high risk endoscopic stigmata of the T-score was compared with that of the Glasgow Blatchford risk score. Results: Overall, 602 patients underwent early upper endoscopy, and 472 presented with non-variceal bleeding. High risk endoscopic stigmata were detected in 145 (30.7%) cases. T-score sensitivity and specificity for high risk endoscopic stigmata and bleeding-related mortality was 96% and 30%, and 80% and 71%, respectively. No statistically difference in predicting high risk endoscopic stigmata between T-score and Glasgow Blatchford risk score was observed (ROC curve: 0.72 vs. 0.69, p=0.11). The two scores were also similar in predicting re-bleeding (ROC curve: 0.64 vs. 0.63, p=0.4) and 30-day bleeding-related mortality (ROC curve: 0.78 vs. 0.76, p=0.3). Conclusions: The T-score appeared to predict high risk endoscopic stigmata, re-bleeding and mortality with similar accuracy to Glasgow Blatchford risk score. Such a score may be helpful for the prediction of high-risk patients who need a very early therapeutic endoscopy. abstract_id: PUBMED:30083330 Absence of high-risk stigmata predicts good prognosis even in severely anemic patients with suspected acute upper gastrointestinal bleeding. Background: The need for a blood transfusion, defined as hemoglobin &lt; 70 g/L, is the measure for assessing the risk levels in patients with upper gastrointestinal bleeding (UGIB). However, not all patients with a low hemoglobin level have a poor prognosis. Objective: We assessed the clinical predictive factors associated with poor short-term prognosis in patients with a low hemoglobin level. Methods: In this prospective cohort study, all consecutive patients with suspected acute UGIB at Tokyo Metropolitan Tama Medical Center were enrolled between 2008 and 2015. Then, we extracted those who needed a blood transfusion (hemoglobin &lt; 70 g/L) and explored the variables associated with all-cause mortality within 28 days after presentation. Results: Among 1307 patients, 311 needed a blood transfusion and 13 (4.2%) died from all causes. The presence of high-risk stigmata requiring endoscopic treatment (peptic ulcers; Forrest scores Ia, Ib and IIa; varices with current bleeding or signs of recent bleeding; and spurting or gushing bleeding or visible vessel in other diseases), diagnosed by emergency endoscopy, was a unique factor affecting mortality (odds ratio: 8.47, 95% confidence interval: 1.45-160, P = 0.01). Patients without high-risk stigmata neither died from UGIB nor had rebleeding, irrespective of the hemoglobin levels. Conclusions: Patients without high-risk stigmata showed a good prognosis even if they needed a blood transfusion. This result could facilitate triage of patients with suspected acute UGIB who only need a blood transfusion. abstract_id: PUBMED:24945183 Can the presence of endoscopic high-risk stigmata be predicted before endoscopy? A multivariable analysis using the RUGBE database. Background: Many aspects in the management of acute upper gastrointestinal bleeding rely on pre-esophagogastroduodenoscopy (EGD) stratification of patients likely to exhibit high-risk stigmata (HRS); however, data predicting the presence of HRS are lacking. Objective: To determine clinical and laboratory predictors of HRS at the index EGD in patients presenting with acute upper gastrointestinal bleeding using retrospective data from a validated national database - the Canadian Registry in Upper Gastrointestinal Bleeding and Endoscopy registry. methods: Relevant clinical and laboratory parameters were evaluated. HRS was defined as spurting, oozing, nonbleeding visible vessel or adherent clot after vigorous irrigation. Multivariable modelling was used to identify predictors of HRS including age, sex, hematemesis, use of antiplatelet agents, American Society of Anesthesiologists (ASA) classification, nasogastric tube aspirate, hemoglobin level and elapsed time from the onset of bleeding to EGD. Results: Of the 1677 patients (mean [± SD] age 66.2 ± 16.8 years; 38.3% female), 28.7% had hematemesis, 57.8% had an ASA score of 3 to 5, and the mean hemoglobin level was 96.8 ± 27.3 g⁄L. The mean time from presentation to endoscopy was 22.2 ± 37.5 h. The best fitting multivariable model included the following significant predictors: ASA score 3 to 5 (OR 2.16 [95% CI 1.71 to 2.74]), a shorter time to endoscopy (OR 0.99 [95% CI 0.98 to 0.99]) and a lower initial hemoglobin level (OR 0.99 [95% CI 0.99 to 0.99]). Conclusion: A higher ASA score, a shorter time to endoscopy and lower initial hemoglobin level all significantly predicted the presence of endoscopic HRS. These criteria could be used to improve the optimal selection of patients requiring more urgent endoscopy. abstract_id: PUBMED:18763288 Endoscopic findings in patients with upper gastrointestinal bleeding clinically classified into three risk groups prior to endoscopy. Aim: To investigate in a prospective study whether a simplified clinical score prior to endoscopy in upper gastrointestinal bleeding (UGIB) patients was able to predict endoscopic findings at urgent endoscopy. Methods: All consecutive UGIB patients referred to a single endoscopic center during a 16 mo period were enrolled. Before endoscopy patients were stratified according to a simple clinical score (T-score), including T1 (high-risk), T2 (intermediate-risk) and T3 (low-risk). Endoscopy was performed in all cases within 2 h, and high-risk stigmata were considered for further analysis. Results: Out of the 436 patients included into the study, 126 (29%) resulted to be T1, 135 (31%) T2, and 175 (40%) T3. Overall, stigmata of recent haemorrhage (SRH) were detected in 118 cases (27%). SRH occurred more frequently in T1 patients than in T2/T3 cases (85% vs 3.2%; c2 = 304.5309, P &lt; 0.001). Older age (t = 3.311; P &lt; 0.01) and presence of comorbidities (c2 = 14.7458; P &lt; 0.01) were more frequently detected in T1 than in T2/T3 patients. Conclusion: Our simplified clinical score appeared to be associated with the detection of endoscopic findings which may deserve urgent endoscopy. A further, randomised study is needed to assess its accuracy in safely scheduling endoscopy in UGIB patients. abstract_id: PUBMED:26142030 Nonvariceal Upper Gastrointestinal Bleeding: Timing of Endoscopy and Ways to Improve Endoscopic Visualization. Upper gastrointestinal (UGI) endoscopy is the cornerstone of diagnosis and management of patients presenting with acute UGI bleeding. Once hemodynamically resuscitated, early endoscopy (performed within 24 hours of patient presentation) ensures accurate identification of the bleeding source, facilitates risk stratification based on endoscopic stigmata, and allows endotherapy to be delivered where indicated. Moreover, the preendoscopy use of a prokinetic agent (eg, i.v. erythromycin), especially in patients with a suspected high probability of having blood or clots in the stomach before undergoing endoscopy, may result in improved endoscopic visualization, a higher diagnostic yield, and less need for repeat endoscopy. abstract_id: PUBMED:16292361 Intravenous proton pump inhibitors before endoscopy in bleeding peptic ulcer with high-risk stigmata: a multicentre comparative study. Background: It is not clear if starting intravenous proton pump inhibitors (IV PPI) before endoscopic therapy provides additional benefit over starting it afterward in patients with high-risk ulcer stigmata of peptic ulcer disease. Methods: All patients who received IV pantoprazole bolus and infusion and underwent endoscopy in six Canadian hospitals over 20 months were reviewed. Only patients with high-risk ulcer stigmata (arterial bleeding, oozing, nonbleeding visible vessel or adherent clot) were included. Patients receiving IV PPI before endoscopy (before group) were compared with those who received it after endoscopy (after group) with respect to endoscopic findings and, secondarily, to patient demographics and clinical outcomes. Results: The demographics and baseline characteristics of the before group (n=57) and the after group (n=109) were similar. The before group was more likely to have had IV PPI started outside of daytime hours, and median time to endoscopy in patients admitted with upper gastrointestinal bleeding was 24 h (interquartile range 9.5 to 35) in the before group and 11.3 h (interquartile range 3.7 to 17.2) in the after group (P&lt;0.0001). At the time of endoscopy, 33% of patients in the before group had actively bleeding lesions (Forrest 1a or 1b) compared with 54% in the after group (P=0.01), but there were no significant differences in rebleeding, surgical rates, intensive care unit admission or death between the groups. Conclusion: IV PPI infusions before endoscopy may lower the proportion of actively bleeding peptic ulcer lesions at endoscopy, but this finding does not appear to affect rates of rebleeding, surgery or death. abstract_id: PUBMED:16810597 Predictors of peptic ulcer rebleeding after scheduled second endoscopy: clinical or endoscopic factors? Background And Study Aims: In a previous study we demonstrated the efficacy of second-look endoscopy with therapy within 16 - 24 hours after index endoscopy in reducing major recurrent peptic ulcer bleeding. In this study, we sought to identify factors that might predict further rebleeding after this scheduled second-look endoscopy. Patients And Methods: We studied 249 patients (181 men, 68 women) with acute bleeding peptic ulcers who were treated at the United Christian Hospital, Hong Kong from 1999 to 2002 and who underwent a scheduled second endoscopy. Those patients who developed rebleeding after the second endoscopy were evaluated, and possible predictive factors for rebleeding were analyzed using a logistic regression model. Results: Of the 249 patients who underwent scheduled second-look endoscopy, 17 patients (6.8 %) developed rebleeding: seven of these patients were treated by another endoscopic therapy; ten patients required surgery. The overall mortality rate was 3.1 %. A logistic regression analysis performed on the possible predictive factors for rebleeding found that the following factors were associated with a significant risk of further rebleeding after scheduled second endoscopy: American Society of Anesthesiologists (ASA) grade III or grade IV status (odds ratio 3.81, 95 % CI 1.27 - 11.44), ulcer size greater than 1.0 cm (odds ratio 4.69, 95 % CI 1.60 - 13.80), and a finding of persistent stigmata of recent hemorrhage at the scheduled second endoscopy (odds ratio 6.65, 95 % CI 2.11 - 20.98). Conclusions: Endoscopic factors, including large ulcer size and the persistence of endoscopic stigmata of recent hemorrhage are important predictors for recurrent bleeding after scheduled second endoscopy. abstract_id: PUBMED:9746158 The evolution of stigmata of hemorrhage in bleeding peptic ulcers: a sequential endoscopic study. Background And Study Aims: Stigmata of hemorrhage in bleeding peptic ulcers have prognostic characteristics. In the present study, the evolution of these stigmata was studied prospectively using daily endoscopic examinations. Patients And Methods: From January 1989 to October 1989, 778 consecutive patients with bleeding peptic ulcers underwent endoscopy within 24 hours of admission. The bleeding peptic ulcers were assigned by three endoscopists to five categories, those with: a) active bleeding, b) a nonbleeding visible vessel, c) adherent clot, d) dot, or e) a clean base. Actively bleeding ulcers were treated by epinephrine injection. Ulcers with nonbleeding visible vessels, adherent clots, or dots were left untreated. Daily endoscopic examinations were carried out for three subsequent days, or until the ulcer base became clean. Results: On day 0, there were 56 actively bleeding ulcers (7%), 62 ulcers with visible vessels (8%), 104 with adherent clots (13%), 182 with flat dots (23%), and 374 with a white base (48%). On the subsequent three days, 24 of 62 ulcers with visible vessels (39%), 30 of 104 with adherent clots (29%), 24 of 182 with dots (13%), and 19 of 374 with a clean base (5%) on day 0 re-bled endoscopically or clinically, or both. The overall rebleeding risk was 9.9%, 4.9%, and 2.7% on days 1, 2, and 3, respectively. Conclusions: Stigmata of hemorrhage in bleeding peptic ulcers are predictive of rebleeding. They represent intermediate phases in the evolution of bleeding vessels into clean-based ulcers. The associated rebleeding risk diminishes as the vessel disappears from the ulcer base. abstract_id: PUBMED:34925851 Superiority of urgent vs early endoscopic hemostasis in patients with upper gastrointestinal bleeding with high-risk stigmata. Background: Guidelines recommend that all patients with upper gastrointestinal bleeding (UGIB) undergo endoscopy within 24 h. It is unclear whether a subgroup may benefit from an urgent intervention. We aimed to evaluate the influence of endoscopic hemostasis and urgent endoscopy on mortality in UGIB patients with high-risk stigmata (HRS). Methods: Consecutive patients with suspected UGIB were enrolled in three Japanese hospitals with a policy to perform endoscopy within 24 h. The primary outcome was 30-day mortality. Endoscopic hemostasis and endoscopy timing (urgent, ≤6 h; early, &gt;6 h) were evaluated in a regression model adjusting for age, systolic pressure, heart rate, hemoglobin, creatinine, and variceal bleeding in multivariate analysis. A propensity score of 1:1 matched sensitivity analysis was also performed. Results: HRS were present in 886 of 1966 patients, and 35 of 886 (3.95%) patients perished. Median urgent-endoscopy time (n = 769) was 3.0 h (interquartile range [IQR], 2.0-4.0 h) and early endoscopy (n = 117) was 12.0 h (IQR, 8.5-19.0 h). Successful endoscopic hemostasis and urgent endoscopy were significantly associated with reduced mortality in multivariable analysis (odds ratio [OR], 0.22; 95% confidence interval [CI], 0.09-0.52; P = 0.0006, and OR, 0.37; 95% CI, 0.16-0.87; P = 0.023, respectively). In a propensity-score-matched analysis of 115 pairs, adjusted comparisons showed significantly lower mortality of urgent vs early endoscopy (2.61% vs 7.83%, P &lt; 0.001). Conclusions: A subgroup of UGIB patients, namely those harboring HRS, may benefit from endoscopic hemostasis and urgent endoscopy rather than early endoscopy in reducing mortality. Implementing triage scores that predict the presence of such lesions is important. abstract_id: PUBMED:35020056 Scheduled second look endoscopy after endoscopic hemostasis to patients with high risk bleeding peptic ulcers: a Randomized Controlled Trial. Background: The recommendation of second look endoscopy (SLOGD) in selected patients at high risk for rebleeding has been inconclusive. This study aimed to evaluate the benefit of SLOGD in selected patients predicted at high risk of recurrent bleeding. Methods: From a cohort of 939 patients with bleeding peptic ulcers who underwent endoscopic hemostasis, we derived a 9-point risk score (age &gt; 60, Male, ulcer ≥ 2 cm in size, posterior bulbar or lesser curve gastric ulcer, Forrest I bleeding, haemoglobin &lt; 8 g/dl) to predict recurrent bleeding. We then validated the score in another cohort of 1334 patients (AUROC 0.77). To test the hypothesis that SLOGD in high-risk patients would improve outcomes, we did a randomized controlled trial to compare scheduled SLOGD with observation alone in those predicted at high risk of rebleeding (a score of ≥ 5). The primary outcome was clinical bleeding within 30 days of the index bleed. Results: Of 314 required, we enrolled 157 (50%) patients (SLOGD n = 78, observation n = 79). Nine (11.8%) in SLOGD group and 14 (18.2%) in observation group reached primary outcome (absolute difference 6.4%, 95% CI - 5.0% to 17.8%). Twenty-one of 69 (30.4%) patients who underwent SLOGD needed further endoscopic treatment. No surgery for bleeding control was needed. There were 6 vs. 3 of 30-day deaths in either group (p = 0.285, log rank). No difference was observed regarding blood transfusion and hospitalization. Conclusions: In this aborted trial that enrolled patients with bleeding peptic ulcers at high-risk of recurrent bleeding, scheduled SLOGD did not significantly improve outcomes. Clinicaltrials: gov:NCT02352155. Answer: Yes, the presence of endoscopic high-risk stigmata (HRS) can be predicted before endoscopy. A study using the Canadian Registry in Upper Gastrointestinal Bleeding and Endoscopy (RUGBE) database identified clinical and laboratory predictors of HRS at the index esophagogastroduodenoscopy (EGD) in patients presenting with acute upper gastrointestinal bleeding. The multivariable analysis found that a higher American Society of Anesthesiologists (ASA) score, a shorter time to endoscopy, and a lower initial hemoglobin level were significant predictors of the presence of endoscopic HRS (PUBMED:24945183). Additionally, a simplified clinical score (T-score) was validated to assess the need for an early endoscopy in non-variceal bleeding patients. The T-score's sensitivity and specificity for high-risk endoscopic stigmata and bleeding-related mortality were found to be 96% and 30%, and 80% and 71%, respectively, indicating its potential usefulness in predicting high-risk patients who may require very early therapeutic endoscopy (PUBMED:24953205). Furthermore, a study aimed to investigate whether a simplified clinical score prior to endoscopy in upper gastrointestinal bleeding (UGIB) patients could predict endoscopic findings at urgent endoscopy. The study found that the clinical score was associated with the detection of endoscopic findings that may warrant urgent endoscopy (PUBMED:18763288). These findings suggest that certain clinical and laboratory parameters can be used to predict the presence of HRS before endoscopy, which can improve the optimal selection of patients requiring more urgent endoscopic evaluation and management.
Instruction: Management of multinodular goiter in Germany (Papillon 2005): do the approaches of thyroid specialists and primary care practitioners differ? Abstracts: abstract_id: PUBMED:17549317 Management of multinodular goiter in Germany (Papillon 2005): do the approaches of thyroid specialists and primary care practitioners differ? Aim: Large-scale survey to focus on management of multinodular goiter and to compare the approaches of practitioners in primary care and thyroid specialists in Germany. Methods: Replies to a questionnaire were received from 2,191 practitioners and 297 thyroid specialists between June 1 and September 30, 2005. The hypothetical cases and their modifications described multinodular goiters of different sizes with and without toxic nodules. Results: In the workup, TSH determination and thyroid sonography were found to be standard procedures. Scintigraphy was selected by 80.2% of practitioners and 92.9% of specialists (p &lt; 0.001), in preference to fine needle aspiration cytology (17.9% of practitioners and 34.5% of the specialists, p &lt; 0.001). Only 6.1% of practitioners and 24.4% of specialists (p &lt; 0.001) advocated calcitonin screening. Euthyroid multinodular goiter (50-80 ml) was treated medically by 67.1% of practitioners and 65.6% of specialists, the combination of levothyroxine with iodine being clearly preferred (54.5% of practitioners, 52.3% of specialists). For toxic nodular goiter the preference for radioiodine therapy was significantly higher (p &lt; 0.001) among specialists (67.7%) than among practitioners (47.5%). Referral to surgery was recommended for cold nodules with negative cytology by 64.9% of practitioners and 73.5% of specialists (p = 0.004). Conclusions: Treatment and diagnostic procedures are used to nearly the same extent in primary care and specialist institutions, but the opinions diverge over the issues of calcitonin screening and referral for radioiodine therapy. abstract_id: PUBMED:20075403 Management of thyroid disorders in primary care: challenges and controversies. Thyroid diseases are common, and most can be safely and effectively managed in primary care. Two of the most common reasons for thyroid function testing are fatigue and obesity, but the vast majority of affected patients do not have hypothyroidism. There is no plausible basis for the assertion that hypothyroidism commonly occurs despite normal thyroid function tests. In primary hypothyroidism all patients, except the elderly and those with ischaemic heart disease, can safely be started on a full replacement dose of thyroxine; the aim is to restore thyroid stimulating hormone (TSH) to normal. Triiodothyronine (T3) has no role in the treatment of primary hypothyroidism. Subclinical thyroid disease should not be treated except in certain well defined situations. Its main importance lies in the increased risk of progression to overt thyroid disease. The development of hyperthyroidism is easily overlooked, and it is important to maintain a high index of suspicion, especially in the elderly. The most common causes are Graves' disease and thyroiditis (especially postpartum), and in the elderly toxic nodular goitre and amiodarone. Patients taking amiodarone should have their thyroid function checked every 6 months. Patients with overt hyperthyroidism should be referred for specialist management; beta-blockers and sometimes anti-thyroid drugs may be initiated in primary care. Most thyroid nodules, especially those detected incidentally on ultrasound scanning, are benign. Indications for referral include newly occurring nodules &gt;1cm in diameter, painful nodules, and nodules that are increasing in size. abstract_id: PUBMED:12914398 Thyroid gland ultrasound screening (Papillon Initiative). Report of 15 incidentally detected thyroid cancers In the framework of the Thyroid initiative Papillon 2001 - 2002 thyroid nodules and sonographically detectable focal lesions with a diameter more than 5 mm were verified in 23.1% of 64.123 persons without a history of thyroid disease. The frequency of such findings was with 33.0% in 46-65 years old persons three times higher then in persons aged 18-30 years (9.6%). In the diagnostic workup thyroid cancer was detected in 15 patients operated on up to now (6 women and 9 men; papillary cancers in 12 cases, follicular cancers in 3 cases) The smallest tumor had a diameter of 1.3 cm, the mean diameter was 2.3 +/- 91 cm. In 3 patients the tumor had invaded the tissue beyond the thyroid capsule, in 4 cases multicentric growth was detected. Lymph node metastases were found in 7/15 cases and distant metastases in 2/15 cases. These result reveal, that thyroid cancer detected by chance may be relatively aggressive. In daily routine, sonographically detectable lesions with poor echogenicity, irregular margins or internal calcifications with a diameter of more than 1 cm should be clarified by fine needle aspiration biopsy. abstract_id: PUBMED:16220439 Thyroid surgery: generalist or specialist? Background: Thyroid surgery is done in Germany in a considerable numbers of operations (about 110,000 per year). To perform thyroid operations by so called "generalists" or "specialists" have been discussed intensively, however, this issue have not been analyzed in detail. Methods: Study material comprised 16,500 consecutive thyroid operations with 30,000 operated sites that have been prospectively documented in the German Thyroid Multicenter Study performed 1998 through 2001. Quality of surgery were analyzed by calculating the inverse relationship between volume and outcome (complication rate). Results: To achieve complication rates (permanent unilateral recurrent laryngeal nerve paralysis and hypocalcemia) of &lt; 1 % (primary surgery) or &lt; 3 % (redo surgery) the minimum number of thyroid operations of lower level of experience (e. g. benign nodular goiter) per year was n=30. The minimum number of operations with higher level of experience was significantly lower (n=3-12) due to the higher level of experience of operating surgeons. In contrast to the rates of postoperative hypocalcemia the rates of postoperative recurrent laryngeal nerve paralysis was clearly related to the number of thyroid operations performed. Conclusions: The high number of thyroid operations in Germany with about 20 % of operations of high level experience are requiring surgical curricula and hospital structures that offer as well generalists as specialists to treat the broad spectrum of thyroid diseases accordingly. To lower the complication rate especially of difficult thyroid operations the level of specialization in Germany have to be increased. abstract_id: PUBMED:30865530 MANAGEMENT OF THYROTOXICOSIS AMONG GENERAL PRACTITIONERS IN TRINIDAD COMPARED WITH 2016 AMERICAN THYROID ASSOCIATION GUIDELINES FOR HYPERTHYROIDISM. Objective: A previous Trinidadian survey highlighted the investigative and therapeutic approaches selected by general practitioners (GPs) in managing thyrotoxicosis. The main objective of this study was to compare practice with existing guidelines. Methods: In this cross-sectional study a pretested de novo questionnaire was self-administered to GPs throughout Trinidad. The survey evaluated GPs' choices in management of thyrotoxicosis cases and compared their responses to the 2016 American Thyroid Association guidelines as well as with those previously reported locally. Results: A total of 159 completed questionnaires were analyzed (59% response rate). Thyroid stimulating hormone was the preferred (94%) biochemical test to confirm thyrotoxicosis etiology. A combination of ultra-sound and thyroid scintigraphy, thyroid ultrasound alone, and scintigraphy only were the testing options selected by 41%, 38%, and 12%, respectively. Generally medical therapy with antithyroid drugs was the preferred treatment option with 86% of respondents selecting this option for the index case of newly diagnosed female Graves disease. The greatest proportion of respondents that selected radioactive iodine (RAI) was 35% for both the index case as well as the male equivalent. Surgery was the most popular option at 25% for patients with a toxic multinodular goiter. Having access to RAI and scintigraphy was reported by 32% and 28%, respectively. Conclusion: GPs appear to be constrained to making rational choices based upon availability rather than what the guidelines recommend. In the absence of formal continuing medical education for GPs on thyrotoxicosis, dissemination of guidelines at the primary care level may reduce this gap. Abbreviations: ATA = American Thyroid Association; ATD = antithyroid drugs; CME = continued medical education; GP = general practitioner; RAI = radioactive iodine; SURG = surgery; T4 = thyroxine; TSH = thyroid-stimulating hormone. abstract_id: PUBMED:23553532 The increasing role of otolaryngology in the management of surgical thyroid disorders. Objectives/hypothesis: To determine trends in office visits and medical specialty seen for surgical diagnoses of the thyroid gland. Study Design: A cross-sectional analysis of a national healthcare database. Methods: From the National Ambulatory Medical Care Survey (NAMCS), all cases of surgical thyroid disorders (e.g., benign neoplasm of thyroid gland, malignant neoplasm of thyroid gland, multinodular goiter) were extracted for 2 calendar year cohorts: 1995 to 1999 and 2005 to 2009. In addition to demographic information, the medical specialty of the health care provider seen was extracted. Comparisons were conducted for the proportion of surgical thyroid cases seen between general surgeons and otolaryngologists for the respective cohorts. Results: In the 1995 to 1999 cohort, there were a total of 107 ± 13 thousand outpatient visits annually to either general surgery or otolaryngology for surgical thyroid conditions. Among these, 62.7 ± 8.4 thousand visits (58.3 ± 5.6%) were seen by general surgery versus 44.8 ± 9.1 thousand (41.7 ± 5.6%) seen by otolaryngology. In comparison, in the 2005 to 2009 cohort, there were 218 ± 29 thousand visits annually for surgical thyroid conditions. Among these, 88.4 ± 17 thousand (40.5 ± 5.4%) were seen by general surgery versus 130 ± 21 thousand (59.5 ± 5.4%) seen by otolaryngology. The increase in proportion of surgical thyroid patients seen by otolaryngology in the second 5 year cohort was statistically significant (P = 0.032, chi-square). Conclusions: There is a national trend in the United States toward otolaryngologists seeing an increasing majority proportion of increasingly prevalent surgical thyroid conditions. These data objectively confirm the perceived increasing role of otolaryngologists in the management of surgical thyroid disorders on a national level. abstract_id: PUBMED:32571507 Surgical management of benign thyroid disease in children. Benign thyroid disease in children represents a wide spectrum of disease. While most benign thyroid disorders may be either monitored or managed medically, surgery is an important treatment, particularly for Graves' disease or large multinodular goiters. Multidisciplinary teams including specialists in pediatric endocrinology, genetics, radiology, pathology and surgery at high volume centers offer the safest and most effective management. abstract_id: PUBMED:15789274 Questionnaire on management of nodular thyroid disease (Annual Meeting of the Thyroid Section of the German Society of Endocrinology 2003). Nodular thyroid disease is highly prevalent in iodine deficient areas. In Germany it affects approximately 30 % of the adult population. Differential diagnosis of thyroid nodules is directed at exclusion of rare thyroid malignancy and assessment of the nodules' functional characteristics in order to determine the best treatment approach. In 2003 the annual meeting of the Sektion Schilddrüse, a thyroid specialist group of the German Society of Endocrinology addressed the topic of "management of benign nodular thyroid disease". To assess the current diagnostic and therapeutic approach to nodular thyroid disease by German thyroid experts we designed a questionnaire, which was sent to all members of the Thyroid Section before the meeting and was answered by 70 % of the participants. We here report the results of the questionnaire, which show a considerable national heterogeneity in the management of thyroid nodules as well as diagnostic and treatment preferences for different thyroid specialist subgroups, e.g. endocrinologists and nuclear medicine specialist. This is in agreement with results of previous questionnaires assessing state-of-the-art practise among members of the European and American Thyroid Associations and underlines the definite need for a consensus process and for carefully planned randomized trials to answer the many unresolved questions in diagnosis and therapy of nodular thyroid disease. abstract_id: PUBMED:17726743 Preoperative localization and minimally invasive management of primary hyperparathyroidism concomitant with thyroid disease. The coexistence of thyroid diseases with primary hyperparathyroidism (PHPT) can present a challenge in the clinical diagnosis and management for these patients. This study aims to determine the frequency of coexisting thyroid gland lesions in a consecutive series patients with PHPT, and to analyze the clinical features, diagnosis and treatment of these patients. Twenty-two cases of a total of 52 PHPT patients who had synchronous thyroid and parathyroid pathology were surgically managed in this study. Thirteen patients had ipsilateral thyroid nodules, and 9 patients had thyroid nodules in contralateral or bilateral side. Seven patients underwent direct parathyroidectomy and hemithyroidectomy via a mini-incision (about 3 cm), while other 15 procedures were converted to Kocher incision. Seventeen nodular goiter (32.7%), 2 thyroiditis (3.8%), 2 thyroid adenoma (3.8%) and 1 thyroid carcinoma (1.9%) coexisting with parathyroid adenoma were pathologically diagnosed. The sensitivity of preoperative ultrasonography (US) and methoxy-isobutyl-isonitrile (MIBI) scintigraphy for parathyroid lesions was 63.6% and 85.7%; and the overall positive predictive values for MIBI and US were 100% and 95.5% respectively. A high incidence of thyroid diseases that coexisted with PHPT in literatures was briefly reviewed. Our study illustrated the need for clinical awareness of concomitant PHPT and thyroid disease. A combination of US, computed tomography (CT) and MIBI scintigraphy would be recommended for preoperative localization of enlarged parathyroid adenoma and for evaluation of thyroid lesions. Synchronous treatment of associated thyroid abnormalities is desirable, and open minimally invasive surgical approach with additional resection of isolated ipsilateral thyroid nodules is possible in some of these patients. abstract_id: PUBMED:28124062 Prevalence of thyroid disorders in elderly people in Germany. A screening study in a country with endemic goitre. Aim: In a screening study on people of working age the prevalence of morphological thyroid disorders was 33.1 %. It is not known precisely whether this further increases after retirement. The present study aimed to establish whether the increase in the prevalence of morphological and functional thyroid disorders continues beyond working age in elderly citizens. Methods: 223 participants were recruited in an institute for geriatric rehabilitative medicine. All patients were screened by thyroid ultrasound, measurement of serum levels of thyroid hormones and thyroid related autoantibodies as well as measurement of urinary iodine excretion. Results: The median thyroid volume was 17 ml (range 4-41 ml) in men and 16 ml (range 0.4-73 ml) in women. 27/59 (46 %) of the men (median age [range]: 77 [58-87] years) and 85/139 (61 %) of the women (median age [range]: 77 [52-97] years), or a total of 112/198 (57 %) residents, showed morphological thyroid abnormalities. There was no clear increase with age. 85 % of the participants without a history of thyroid treatment were euthyroid. 12 % showed a subclinical hyperthyroidism whereas 1 % showed a biochemically manifest hyperthyroidism. 2 % were biochemically hypothyroid. Tg-antibodies was elevated in 10 (5 %) and TPO-antibodies in 18 (9 %) of the patients. Median iodine excretion was 129 µg/l. 41.5 % of the patients showed iodine excretion values below 100 µg/l whereas only 7 % of the patients showed a severe iodine deficiency with an excretion below 50 µg/l. Conclusion: Compared to a previously described population of working age, the prevalence of morphological thyroid disorders does not further increase above the age of 65; thyroid function abnormalities in this iodine sufficient screening population were relatively rare. Answer: According to the study "Management of multinodular goiter in Germany (Papillon 2005)" (PUBMED:17549317), there are differences in the approaches of thyroid specialists and primary care practitioners in Germany regarding the management of multinodular goiter. The study, which involved a large-scale survey with responses from 2,191 practitioners and 297 thyroid specialists, found that while TSH determination and thyroid sonography were standard procedures in the workup of multinodular goiter, there were significant differences in the preference for scintigraphy and fine needle aspiration cytology. Scintigraphy was selected by a higher percentage of specialists (92.9%) compared to practitioners (80.2%), and fine needle aspiration cytology was also more favored by specialists (34.5%) than by practitioners (17.9%). Additionally, there was a divergence in the management of euthyroid multinodular goiter (50-80 ml), with a similar percentage of practitioners (67.1%) and specialists (65.6%) treating it medically, but a clear preference for the combination of levothyroxine with iodine was observed among both groups (54.5% of practitioners, 52.3% of specialists). For toxic nodular goiter, specialists showed a significantly higher preference for radioiodine therapy (67.7%) compared to practitioners (47.5%). When it came to referral to surgery for cold nodules with negative cytology, more specialists (73.5%) recommended it compared to practitioners (64.9%). The study concluded that while treatment and diagnostic procedures are used to nearly the same extent in primary care and specialist institutions, opinions diverge over issues such as calcitonin screening and referral for radioiodine therapy.
Instruction: Can revascularization restore sinus rhythm in patients with acute onset atrioventricular block? Abstracts: abstract_id: PUBMED:15725137 Can revascularization restore sinus rhythm in patients with acute onset atrioventricular block? Background: We studied patients with coronary artery disease (CAD) and complete atrioventricular (AV) block of acute onset that were treated with coronary artery bypass grafting (CABG) to see whether revascularization can restore the sinus rhythm. Methods: CABG was performed on eight patients with newly developed complete AV block and severe CAD. The distribution of coronary artery lesions showed a type IV pattern in six patients and a type II pattern in two patients. Complete revascularization was performed in six patients. Left anterior descending artery was revascularized in all eight patients. The patients were followed-up after operation for approximately 10 days before the implantation of a permanent pacemaker to see if they recover from AV block. Results: The mean interval from development of complete AV block to operation was 3.63 +/- 1.3 days. There was no operative and/or early mortality. None of the patients recovered from complete AV block after coronary revascularization. Early morbidity was not detected. The mean hospital stay (12.75 +/- 1.49 days) and intensive care unit stay (30.25 +/- 19.39 hours) were relatively long because of the delay in permanent pacemaker implantation. All patients were asymptomatic at the end of their follow-up period (23.38 +/- 18.41 months). Conclusions: Preoperatively developed complete AV block did not adversely affect the operative and early postoperative outcome of CABG operations. Recovery from complete AV block cannot be achieved by coronary revascularization performed 3.63 +/- 1.3 days after the onset of complete AV block. abstract_id: PUBMED:19233762 The effect of coronary revascularization on new-onset complete atrioventricular block due to acute coronary syndrome. Background: Coronary artery disease is one of the most common causes of complete atrioventricular block (AVB) in adults. In this study, we evaluated whether prompt revascularization of the coronary artery occlusion can ameliorate new-onset complete AVB due to acute coronary syndrome (ACS). Methods: Five patients (4 men and 1 woman) with a mean age of 69.8+/-7.1 years with diagnosed new-onset complete AVB and proven coronary artery disease were enrolled in the study. At the time of AVB diagnosis, 3 of the patients had acute myocardial infarction, and the other 2 patients had unstable angina pectoris. All patients underwent complete coronary bypass surgery after the diagnosis of complete AVB. A patient who underwent 2 coronary bypasses also underwent aortic valve replacement. Results: No mortality was observed in the study group. All but one of the patients converted back to sinus rhythm after a mean interval of 30+/-13.6 hours following revascularization procedures. Complete AVB persisted in 1 patient, and a permanent pacemaker was implanted. All patients were discharged uneventfully. The mean hospital stay was 11.4+/-4.5 days. All patients are still being followed up after surgery; at a mean follow-up of 27.4+/-0.9 months, there have been no further problems. Conclusion: Coronary revascularization may ameliorate ACS-related new-onset complete AVB with an acceptable rate of successful reversion to sinus rhythm. An especially appropriate time for surgery, complete coronary revascularization, and management of myocardial protection during surgery might improve the results of coronary bypass procedures in these patients. abstract_id: PUBMED:36537118 Acupuncture Treatment of a Patient with Bradycardia and Idioventricular Rhythm. A patient with bradycardia and an idioventricular rhythm was observed. According to cardiologists, there is no reliable drug treatment for bradycardia with an idioventricular rhythm; instead, the sole treatment is a pacemaker. In the course of this case, it was shown that acupuncture can restore the heart rhythm from bradycardia to normocardia, and from idioventricular with third-degree atrioventricular node block and an average heart rate of 34 BPM, to normal sinus rhythm with a heart rate of 71 BPM. Additionally, at the end of the treatment, the patient's number of episodes of ventricular extrasystole decreased 36 times (3289 versus 91 episodes). These results show that research on this technique should be continued. abstract_id: PUBMED:1157275 Use of atropine in patients with acute myocardial infarction and sinus bradycardia. Fifty-six patients with acute myocardial infarction complicated by sinus bradycardia (SB) were treated with intravenous atropine and monitored in a coronary care unit. Atropine decreased or completely abolished premature ventricular contractions (PVCs) and/or bouts of accelerated idioventricular rhythm in 27 of 31 patients (87%) and brought systemic blood pressure up to normal in 15 of 17 patients (88%) with hypotension. In addition, atropine administration was associated with improved atrioventricular conduction in 11 of 13 patients (85%) with acute inferior myocardial infarction associated with 2 degrees or 3 degrees atrioventricular block. Seven patients developed ten significant adverse effects: ventricular tachycardia or fibrillation in three, sustainedsinus tachycardia in three, increased PVCs in three, and toxic psychosis in one. These major adverse effects correlated with either a higher initial dose of atropine (i.e., 1.0 mg aa compared with the usual 0.5 or 0.6 mg) or a total cumulative dose exceeding 2.5 mg over 21/2 hours. Atropine is the drug of choice for management of patients with SB and hypotension and is effective in the treatment of ventricular arrhythmias as well as conduction disturbances in patients with inferior myocardial infarction. Serious adverse effects, however, preclude use of atropine without careful medical supervision. abstract_id: PUBMED:1146617 Registration of sinus node recovery time in patients with sinus rhythm and in patients with dysrhythmias. Sinus node recovery time (SRT) after rapid atrial pacing has been recorded in 66 patients, 28 with coronary heart disease, 11 with advanced AV block, 10 with sick sinus syndrome and 17 with paroxysmal tachyarrhythmias. In patients with a normal functioning sinus node SRT was related to the basal heart rate. On an average SRT was 130% of the basal P-P interval with an upper limit of 160%. In patients with a presumed normal atrial function the mean SRT was found to be 1 080 msec, with an upper limit of 1 500 msec. This corresponds with previously published observations. In all 5 patients examined, beta-receptor blockade (propranolol 5 mg i.v.) prolonged SRT. The prolonged SRT was related to sinus bradycardia. Verapamil (Isoptin 5 mg i.v.) had no effect on SRT in the 7 patients examined. The observation of an SRT of more than 1 500 msec indicates a poor sinus node function. Recording of a normal SRT, however, cannot exclude a sinus node dysfunction, as normal SRT is occasionally found even in patients with a clinically proved dysfunction. abstract_id: PUBMED:17449500 Use of intravenous magnesium to treat acute onset atrial fibrillation: a meta-analysis. Objectives: To assess the effects of intravenous magnesium on converting acute onset atrial fibrillation to sinus rhythm, reducing ventricular response and risk of bradycardia. Design And Data Sources: Randomised controlled trials evaluating intravenous magnesium to treat acute onset atrial fibrillation from MEDLINE (1966 to 2006), EMBASE (1990 to 2006) and Cochrane Controlled Trials Register without language restrictions. Review Methods: Two researchers independently performed the literature search and data extraction. Results: 10 randomised controlled trials, including a total of 515 patients with acute onset atrial fibrillation, were considered. Intravenous magnesium was not effective in converting acute onset atrial fibrillation to sinus rhythm when compared to placebo or an alternative antiarrhythmic drug. When compared to placebo, adding intravenous magnesium to digoxin increased the proportion of patients with a ventricular response &lt;100 beats/min (58.8% vs 32.6%; OR 3.2, 95% CI 1.93 to 5.42; p&lt;0.001). When compared to calcium antagonists or amiodarone, intravenous magnesium was less effective in reducing the ventricular response (21.4% vs 58.5%; OR 0.19, 95% CI 0.09 to 0.44; p&lt;0.001) but also less likely to induce significant bradycardia or atrioventricular block (0% vs 9.2%; OR 0.13, 95% CI 0.02 to 0.76; p = 0.02). The use of intravenous magnesium was associated with transient minor symptoms of flushing, tingling and dizziness in about 17% of the patients (OR 14.5, 95% CI 3.7 to 56.7; p&lt;0.001). Conclusions: Adding intravenous magnesium to digoxin reduces fast ventricular response in acute onset atrial fibrillation. The effect of intravenous magnesium on the ventricular rate and its cardiovascular side effects are less significant than other calcium antagonists or amiodarone. Intravenous magnesium can be considered as a safe adjunct to digoxin in controlling the ventricular response in atrial fibrillation. abstract_id: PUBMED:18855876 Should we revascularize before implanting a pacemaker? Background: The effect of coronary revascularization on disappearance of the severe conduction disturbances is still unclear. Hypothesis: We sought to determine whether revascularization may induce recovery of sinus rhythm in patients with significant coronary artery disease and complete atrioventricular block (AVB). Methods: Fifty-three patients who had third-degree AVB and significant coronary artery disease were enrolled. Patients with acute coronary syndromes were excluded. Thirty-three (62%) patients were men and the mean age was 65 +/- 10 y. All patients received a permanent dual-mode, dual-pacing, dual-sensing (DDD) pacemaker. Coronary disease was treated medically in 16 (30%) patients due to patient preference or ineligibility. Thirty-seven (70%) patients underwent a revascularization procedure (coronary artery bypass grafting [CABG]: 16, percutaneous coronary intervention [PCI]: 21 pts). Mean follow-up was 36 +/- 6 mo and patients were evaluated every 3 mo according to their resting electrocardiograms (ECGs) at each visit. Results: In the medically treated group, 13 (81%) patients still had third-degree AVBs at the end of the follow-up period, while 3 (19%) patients returned to normal sinus rhythm. On the other hand, 27 out of 37 patients (73%) who were revascularized were still in complete AVB, and 10 patients from this group (27%) had returned to normal sinus rhythm. There was no statistically significant difference between the revascularized and medically treated groups in terms of need for a pacemaker. Conclusions: Patients who have concomitant severe conduction disturbances and significant coronary disease may well receive a pacemaker before a revascularization procedure. Our data shows that coronary revascularization has little, if any, impact on returning to normal AV conduction. abstract_id: PUBMED:27557623 Frailty syndrome in patients with heart rhythm disorders. Aim: To assess the prevalence of frailty syndrome in patients with heart rhythm disorders that qualified for pacemaker implantation. Methods: The study included 171 patients (83 women, aged 73.9 ± 6.7 years) who qualified for pacemaker implantation as a result of sinus node dysfunction (81 patients) or atrio-ventricular blocks (AVB; 90 patients). A total of 60 patients (25 women, aged 72.40 ± 7.09 years) without heart rhythm disorders were included in the control group. Frailty syndrome was diagnosed using the Canadian Study of Health and Aging Clinical Frailty Scale test. Results: Frailty syndrome was diagnosed in 25.15% of the patients, and pre-frailty in 36.84% of the patients. Frailty syndrome was diagnosed in 10% of the control group, and the average value of frailty was 3.35 ± 0.92. Frailty occurred significantly more often among patients with AVB (33.34%) compared with patients who were diagnosed with sinus node dysfunction (16.05%); P = 0.0081. The average score of frailty for sinus node dysfunction was 3.71 ± 0.89, and for AVB it was 4.14 ± 0.93; P = 0.0152. In the case of AVB, the women had a statistically more intense level of frailty of 4.54 ± 0.90 as compared with the men 3.87 ± 0.85; P = 0.0294. In the multiple logistic analysis, the presence of any arrhythmia was strongly associated with frailty syndrome (OR 2.1286, 95% CI 1.4594 - 3.1049; P = 0.0001). Conclusions: Frailty syndrome was diagnosed in one-quarter of patients with cardiac arrhythmias, whereas a further 40% were at a higher risk of frailty syndrome, and its occurrence was significantly higher if compared with the control group. Frailty occurred significantly more often among patients with atrio-ventricular blocks, especially in women. The results of the present research showed that there is a statistical association between frailty and arrhythmias. Geriatr Gerontol Int 2017; 17: 1313-1318. abstract_id: PUBMED:20480171 Resumption of sinus rhythm from fetal complete atrioventricular block after maternally administered ritodrine. Reports have indicated that maternal administration of ritodrine increased the ventricular rate and thus ameliorated signs of heart failure in a fetus with complete atrioventricular block (CAVB). A fetus from a mother without the anti-SS-A/SS-B antibody had CAVB, with atrial rate 148-154 bpm and ventricular rate 53-57 bpm. After maternal administration of ritodrine, the ventricular rate increased to 60-65 bpm, and then sinus rhythm resumed. Ritodrine may not only increase the ventricular rate but also induce sinus rhythm in a fetus with CAVB. abstract_id: PUBMED:1929713 Retrograde cardioplegia through the coronary sinus in the surgery of the aortic valve. A clinical study of 500 patients Over the last few years there has been a renewal of interest in retrograde administration of cardioplegic solutions but this method has not been validated by a large scale clinical trial. From 1980 to 1989 our group used retrograde coronary sinus perfusion as the only means of administering cardioplegia in 500 consecutive patients undergoing isolated aortic valve replacement (AVR) (359 cases) or associated with another valvular or myocardial revascularization procedure (141 cases). Using this method, there were 31 hospital deaths (6.2%) of which 20 were of cardiac origin (4%). During the last 5 years, hospital mortality for isolated AVR decreased to 1.5%. With an incidence of low cardiac output of 12%, of arrhythmias and atrioventricular block of 7.4% and 1.2% respectively, and of myocardial infarction of 1.4%, this method of cardioplegia compares favourably with anterograde perfusion. There were only 3 traumatic lesions of the coronary sinus, all occurring at the beginning of the series and all of which were successfully repaired. Retrograde coronary sinus perfusion is therefore a safe and effective method of cardioplegia during aortic valve surgery: the cardioplegic solution is distributed uniformly in patients with coronary artery disease and therefore, this could become the technique of choice for myocardial protection in coronary artery or mixed coronary and valvular surgery. Answer: The effect of revascularization on restoring sinus rhythm in patients with acute onset atrioventricular (AV) block appears to be variable based on the provided abstracts. In one study, none of the eight patients with newly developed complete AV block and severe coronary artery disease (CAD) who underwent coronary artery bypass grafting (CABG) recovered from complete AV block after the revascularization procedure (PUBMED:15725137). This suggests that revascularization performed an average of 3.63 days after the onset of complete AV block may not be effective in restoring sinus rhythm in such cases. In contrast, another study reported that all but one of the patients with new-onset complete AV block due to acute coronary syndrome (ACS) converted back to sinus rhythm after revascularization procedures, with an average interval of 30 hours following the procedure (PUBMED:19233762). This indicates that prompt revascularization may ameliorate ACS-related new-onset complete AV block and restore sinus rhythm in a majority of cases. However, another study concluded that coronary revascularization has little, if any, impact on returning to normal AV conduction, as 73% of revascularized patients remained in complete AV block, and only 27% returned to normal sinus rhythm (PUBMED:18855876). This suggests that the decision to revascularize should not be solely based on the expectation of restoring sinus rhythm in patients with significant coronary artery disease and complete AV block. Overall, the evidence suggests that while revascularization may have the potential to restore sinus rhythm in some patients with acute onset AV block, particularly when performed promptly in the context of ACS, it is not universally effective and may not be a reliable treatment for AV block in all cases. The decision to pursue revascularization should be based on a comprehensive assessment of the individual patient's condition and not solely on the expectation of restoring sinus rhythm.
Instruction: Are activated clotting times helpful in the management of anticoagulation with subcutaneous low-molecular-weight heparin? Abstracts: abstract_id: PUBMED:11579347 Are activated clotting times helpful in the management of anticoagulation with subcutaneous low-molecular-weight heparin? Background: Enoxaparin has recently been shown to be superior to unfractionated heparin in patients with unstable angina/non-ST-elevation myocardial infarction. Theoretical advantages of low-molecular-weight heparin versus unfractionated heparin include a higher ratio of anti-Xa to anti-IIa activity (3:1 for enoxaparin), a more predictable dose response that precludes the need for frequent monitoring, and the convenience of subcutaneous administration. Both activated partial thromboplastin time and activated clotting time (ACT) are used to monitor anticoagulation with heparin, and ACTs are now standard during percutaneous coronary intervention (PCI) with heparin. At doses of up to 90 mg, subcutaneous enoxaparin leads to a modest dose-related increase in activated partial thromboplastin time, but the effect on ACT is unknown. Methods: Thrombolysis In Myocardial Infarction (TIMI) 11A was a multicenter, dose-ranging trial to evaluate the safety and tolerability of subcutaneous enoxaparin in patients with unstable angina/non-ST-elevation myocardial infarction. We obtained peak (mean 4.3 hours after enoxaparin) and trough (mean 11.5 hours after enoxaparin) anti-Xa levels and ACTs for 26 patients in the TIMI 11A trial. Results: Despite doses of enoxaparin in the range of 89 +/- 19 mg every 12 hours and significant increases in anti-Xa levels even at trough, there was no change in the ACT measured by HemoTec and only a small increase with Hemachron. The correlation of peak Hemachron ACT with peak anti-Xa levels was poor (R = 0.5, P =.08). Conclusions: In contrast to heparin, ACTs are not useful for assessment of anticoagulation with subcutaneous enoxaparin and should not be relied on in patients receiving enoxaparin who require acute PCI. Studies to determine the optimal dose, safety, and efficacy of enoxaparin in patients undergoing PCI are underway. abstract_id: PUBMED:27650345 Anticoagulation management during cross-clamping and bypass. Anticoagulation is required for successful implementation of cardiopulmonary bypass (CPB), as well as for surgeries requiring temporary aortic occlusion. It is well established that both coagulation and fibrinolysis are activated during CPB (Teufelsbauer et al., 1992) [1]. Appropriate dosing, monitoring, and maintenance of anticoagulation are essential to prevent devastating thrombosis of the CPB circuit or the occluded aorta and to minimize the activation of the hemostatic system. Although numerous novel anticoagulants have been developed over the past decade, unfractionated heparin remains the primary anticoagulant utilized during these types of procedures, with monitoring systems primarily based upon the activated clotting time and/or heparin concentration. This article will review the current state of anticoagulation management during cross-clamp and CPB. abstract_id: PUBMED:10341797 Anticoagulation control during neuroradiological interventions with ACT (activated clotting time) Purpose: To evaluate the effect of anticoagulation control by bedside ACT during interventional neuroradiological procedures with intraarterial heparin application. Materials And Methods: The study included 168 patients with intraarterial anticoagulation. Depending on the expected duration of the intervention and the body weight, a heparin dose of 5000 i.u. (70 patients) or 7500 i.u. (98 patients) was given through the catheter. The clotting time was measured with the Hemochrom 401 (Fa. Fresenius). Results: The effect of anticoagulation with heparin was measurable a few minutes after administration. Within the first 20 minutes the ACT increased about two times above the normal ACT from 156.1 +/- 26.1 (5000 i.u. heparin) to 296 +/- 58.0 and from 146.5 +/- 26.3 (7500 i.u. heparin) to 317.2 +/- 72.0. Additional administration of 2500 i.u. heparin was necessary in 16 patients. An ACT monitored neutralisation with protamine hydrochloride took place in 15 patients with high ACT values at the end of the intervention. Three thromboembolic events and one bleeding complication occurred during the interventions. Conclusions: (1) The ACT is a useful and readily available parameter to monitor the anticoagulation status of patients in the interventional suits. (2) ACT detects heparin nonresponders. (3) Exact monitoring of heparin administration and neutralisation with protamine hydrochloride is helpful in preventing thromboembolic events and bleeding complications. abstract_id: PUBMED:8298321 Use of the activated clotting time in anticoagulation monitoring of intravascular procedures. The activated clotting time first came into clinical use in the mid-1970s to guide the administration and reversal of heparin during cardiopulmonary bypass procedures. The explosive growth of cardiopulmonary bypass led to the development of automated techniques for measuring activated clotting times. Recent advances in the field of interventional cardiology have emphasized the importance of the coagulation cascade and the need for the prevention of thrombosis with anticoagulant drugs. The activated clotting time has emerged as an important means of monitoring and guiding heparin therapy during invasive intravascular procedures. This review focuses on the following topics: 1) the development of anticoagulation monitoring techniques; 2) current alternatives in bedside anticoagulation monitoring; and 3) the clinical application of activated clotting times outside surgery. Until prospective studies can establish appropriate "target" activated-clotting-time values for interventional procedures, procedural anticoagulation must be guided empirically. Nevertheless, the activated clotting time is extremely useful in the catheterization laboratory, for monitoring heparin therapy and the adequacy of anticoagulation. abstract_id: PUBMED:34957799 Satisfactory outcome with low activated clotting time in extracorporeal membrane oxygenation. Optimal anticoagulation is critical for successful extracorporeal membrane oxygenation (ECMO) to counterbalance the activation of the coagulation system initiated by the blood-biosurface reaction and mechanical stresses. Systemic anticoagulation is achieved mainly with unfractionated heparin (UFH). Activated clotting time (ACT) is a widely used laboratory parameter to monitor anticoagulation. The therapeutic range of ACT is 180-220 s. We investigated the effect of a lower target ACT (&lt;150 s) during ECMO on safety and outcomes and compared it with those of a conventional target ACT (180-200 s). In this single-center, retrospective study, we reviewed 72 adult patients treated with ECMO from March 2017 to October 2019. We included 43 patients after applying the exclusion criteria and divided them into the low ACT group (&lt;150 s, n = 14, 32.6%) and conventional ACT group (≥150 s, n = 29, 67.4%). There was no difference in the successful weaning from ECMO support (50% vs. 62.1%, p = 0.452) and discharge (50% vs. 41.4%, p = 0.594) rates between the groups. One patient in the conventional ACT group had intracranial hemorrhage. There was one thromboembolic complication case with an intra-circuit thrombus. To date, anticoagulation remains a challenge during ECMO. Our results suggest that a lower target ACT does not necessarily increase the thromboembolic risk during ECMO management. Clinicians may consider anticoagulation with lower ACT target for some patients with careful assessment and close monitoring. Further prospective trials are warranted to validate these results. abstract_id: PUBMED:29503117 Assessment of Heparin Anticoagulation Measured Using i-STAT and Hemochron Activated Clotting Time. Objective: Adequate anticoagulation, measured using activated clotting time (ACT), is important during vascular and cardiac surgeries. Unfractionated heparin is the most common anticoagulant used. The purpose of this analysis was to compare the i-STAT ACT (iACT) to the Hemochron ACT (hACT), both of which were then compared to anti-factor Xa (anti-Xa) assay, a representation of heparin level and activity. Design: Prospective study. Setting: Tertiary care cardiovascular center. Participants: Eleven consecutive elective adult cardiac surgical patients. Interventions: Prior to cardiopulmonary bypass, ACTs were measured using i-STAT and Hemochron technologies and compared to each other and to anti-Xa assay prior to and during a cumulative administration of heparin. Data were compared using bias analyses. Measurements And Main Results: Heparin (300 U/kg) was administered in quarterly doses. Coagulation labs were collected prior to and 3 minutes after each quarterly dose of heparin. The baseline ACTs for i-STAT and Hemochron were 147 and 142 seconds, respectively. A significant association was found between iACT and hACT (p = 0.002). The iACT measurements underestimated hACT at ACT levels &gt;180 seconds or anti-Xa levels &gt;0.75 U/mL. No significant difference was found between ACT data at anti-Xa levels &lt;0.5 U/mL. Conclusion: There was a good association between the iACT and hACT; however, the 2 tests are not equivalent. Overall, the iACT underestimated the hACT. Agreement between the ACT technologies was good at lower ACTs and anti-Xa levels, but declined with an anti-Xa &gt;0.75 U/mL. abstract_id: PUBMED:36590973 Risk for excessive anticoagulation during hemodialysis is associated with type of vascular access and bedside coagulation testing: Results of a cross-sectional study. Background: Recommendations and practice patterns for heparin dosing during hemodialysis show substantial heterogeneity and are scantly supported by evidence. This study assessed the variability in unfractionated heparin (UFH) dosing during hemodialysis and its clinical and biological anticoagulatory effects, and identified explanatory factors of heparin dosing. Methods: Cross-sectional study assessing UFH dosing, coagulation tests - activated partial thromboplastin time (aPTT) and activated clotting time (ACT) before dialysis start, 1 h after start and at treatment end (4 h) - and measurement of residual blood compartment volume of used dialyzers. Results: 101 patients, 58% male, with a median dialysis vintage of 33 (6-71) months received hemodialysis using a total UFH dose of 9,306 ± 4,079 (range 3,000-23,050) IU/session. Use of a dialysis catheter (n = 56, 55%) was associated with a 1.4 times higher UFH dose (p &lt; 0.001) irrespective of prior access function. aPTT increased significantly more than ACT both 1 h and 4 h after dialysis start, independent of the dialysis access used. 53% of patients with catheter access and ACT ratio &lt; 1.5, 1 h after dialysis start had simultaneous aPTT ratios &gt; 2.5. Similar findings were present at 1 h for patients with AVF/AVG and at dialysis end for catheter use. No clinically significant clotting of the extracorporeal circuit was noted during the studied sessions. Dialyzer's blood compartment volume was reduced with a median of 9% (6-20%) without significant effect of UFH dose, aPTT or ACT measurements and vascular access type. Conclusion: UFH dose adaptations based on ACT measurements frequently result in excessive anticoagulation according to aPTT results. Higher doses of UFH are used in patients with hemodialysis catheters without evidence that this reduces dialyzer clotting. abstract_id: PUBMED:2605621 Adequate heparinization during PTCA: assessment using activated clotting times. Heparinization during PTCA is often done empirically with an initial 10,000 unit bolus of heparin and subsequent additional boluses as deemed necessary to prevent thrombus formation and fibrin deposition. However, the initial 10,000 unit bolus may not result in adequate systemic anticoagulation in every patient, exposing some patients to risk of thrombus at the angioplasty site and subsequent reocclusion. In this non-randomized study, we assessed systemic coagulation during PTCA by retrospectively analyzing activated clotting times obtained in 108 consecutive patients. All patients had normal baseline prothrombin times and activated partial thromboplastin times. Patients who were on heparin prior to PTCA were excluded. Based on data from studies on heparinization during extracorporeal bypass an activated clotting time (ACT) of greater than 300 seconds was required. Twelve patients (11%) were observed to have activated clotting times of below 300 seconds after an initial 10,000 unit bolus of heparin. These patients required an additional 3,000-10,000 units of heparin to have systemic anticoagulation during PTCA. Symptoms of stable or unstable angina had no significant effect on heparin requirement, although there was a trend toward greater heparin resistance in unstable angina. We conclude that it is important to monitor the status of anticoagulation during PTCA, for 11% of patients undergoing PTCA require additional initial heparin bolus to achieve an ACT greater than 300 seconds and to be effectively anticoagulated. Careful monitoring of heparinization during PTCA may reduce the incidence of thrombosis. abstract_id: PUBMED:33715057 Correlation between activated clotting time monitoring and heparin concentration measurement in a patient with antiphospholipid syndrome during cardiac valve surgery: a case report. Background: Anticoagulation management of patients with antiphospholipid syndrome (APS) undergoing cardiac surgery is challenging due to the prolongation of activated clotting time (ACT). Currently, no study has compared the utility of ACT monitoring using the Hemochron Jr. Signature+ and that of heparin concentration management using the Hemostasis Management System (HMS) Plus in patients with APS. Case Presentation: A 71-year-old woman with APS was scheduled to undergo an aortic valve replacement for aortic regurgitation. The ACT was measured using the Hemochron Jr. Signature+, and the heparin concentration was measured concurrently using the HMS Plus. ACT over 480 s corresponded to an adequate heparin concentration during cardiopulmonary bypass. The clinical course was uneventful, and no thrombotic or hemorrhagic complications were observed. Conclusion: In the present patient with APS, the Hemochron Jr. Signature+ was useful as an anticoagulation management during cardiac valve surgery. abstract_id: PUBMED:36042956 Comparison of Extracorporeal Life Support Anticoagulation Using Activated Clotting Time Only to a Multimodal Approach in Pediatric Patients. Objective: To evaluate an institutional practice change from an extracorporeal life support (ECLS) anticoagulation monitoring strategy of activated clotting time (ACT) alone to a multimodal strategy including ACT, activated partial thrombin time, heparin anti-factor-Xa, and thromboelastography. Methods: This was a retrospective review of patients younger than 18 years on ECLS and heparin between January 2014 and June 2020 at a single institution. Results: Twenty-seven patients used an ACT-directed strategy and 25 used a multimodal strategy. The ACT-directed group was on ECLS for a shorter median duration than the multimodal group (136 versus 164 hours; p = 0.046). There was a non-significant increase in major hemorrhage (85.1% versus 60%; p = 0.061) and a significantly higher incidence of central nervous system (CNS) hemorrhage in the ACT-directed group (29.6% versus 0%; p = 0.004). Rates of thrombosis were similar, with a median of 3 circuit changes per group (p = 0.921). The ACT-directed group had larger median heparin doses (55 versus 34 units/kg/hr; p &lt; 0.001), required more dose adjustments per day (3.8 versus 1.7; p &lt; 0.001), and had higher rates of heparin doses &gt;50 units/kg/hr (62.9% versus 16%; p = 0.001). More anticoagulation parameters were supratherapeutic (p = 0.015) and fewer were therapeutic (p &lt; 0.001) in the ACT-directed group. Conclusions: Patients with a multimodal strategy for monitoring anticoagulation during ECLS had lower rates of CNS hemorrhage and decreased need for large heparin doses of &gt;50 units/kg/hr without an increase in clotting complications, compared with ACT-directed anticoagulation. Multimodal anticoagulation monitoring appears superior to ACT-only strategies and may reduce heparin exposure and risk of hemorrhagic complications for pediatric patients on ECLS. Answer: Activated clotting times (ACTs) are not useful for the assessment of anticoagulation with subcutaneous low-molecular-weight heparin (LMWH) such as enoxaparin. A study conducted within the Thrombolysis In Myocardial Infarction (TIMI) 11A trial found that despite significant increases in anti-Xa levels with doses of enoxaparin, there was no change in ACT measured by HemoTec and only a small increase with Hemachron. Moreover, the correlation between peak Hemachron ACT and peak anti-Xa levels was poor (R = 0.5, P = .08). Therefore, the study concluded that ACTs should not be relied on in patients receiving enoxaparin who require acute percutaneous coronary intervention (PCI) (PUBMED:11579347). In contrast, ACT is widely used to monitor anticoagulation during procedures that require unfractionated heparin (UFH), such as cardiopulmonary bypass (CPB) and surgeries involving temporary aortic occlusion. Unfractionated heparin remains the primary anticoagulant used during these procedures, with monitoring systems primarily based upon ACT and/or heparin concentration (PUBMED:27650345). In summary, while ACT is an important tool for monitoring anticoagulation during procedures that involve UFH, it is not a reliable method for assessing anticoagulation with subcutaneous LMWH like enoxaparin.
Instruction: The concurrent use of anticholinergics and cholinesterase inhibitors: rare event or common practice? Abstracts: abstract_id: PUBMED:15571547 The concurrent use of anticholinergics and cholinesterase inhibitors: rare event or common practice? Objectives: To measure the prevalence of anticholinergic use cross-sectionally in patients receiving cholinesterase inhibitors and to describe change in use of anticholinergics upon inception of cholinesterase inhibitor treatment. Design: Cross-sectional and inception cohort studies. Setting: State of Iowa. Participants: Iowa Medicaid beneficiaries aged 50 and older with a pharmacy claim for a cholinesterase inhibitor during January 1997 through February 2000. Measurements: Anticholinergic use was determined for all patients with a cholinesterase inhibitor pharmacy claim during January and February of 2000. A frequency distribution of all anticholinergics was compiled, with emphasis placed on those considered inappropriate in the elderly. In a separate analysis, anticholinergic use was determined at two points: 90 days before and after cholinesterase inhibitor inception. Results: Of 557 patients receiving a cholinesterase inhibitor, 197 (35.4%) received an anticholinergic concurrently. Of all anticholinergics, 74.5% (178/239) had been identified as inappropriate for use in the elderly, 22.2% (53/239) under any circumstances. At the time of cholinesterase inhibitor inception, 30.2% (143/474) and 33.5% (159/474) of patients received an anticholinergic 90 days before and 90 days after inception, respectively. Increases in anticholinergic prescribing upon cholinesterase inhibitor inception exceeded decreases (Wilcoxon signed-rank test, S=529, P=.020). Conclusion: The concurrent use of anticholinergics and cholinesterase inhibitors is common although rarely appropriate. Patients with Alzheimer's disease deserve to receive the optimum benefit from cholinesterase inhibitor treatment, which can only be achieved through diligent and appropriate use of concurrent pharmacotherapy. abstract_id: PUBMED:35864474 Prevalence and patterns of the concurrent use of anticholinergics for the motor symptoms of Parkinson's disease and acetylcholinesterase inhibitors in Parkinson's disease patients with dementia: a cross-sectional study using Korea National Health Insurance claims data. Background: The concurrent use of anticholinergics and acetylcholinesterase inhibitors (ACHEIs) in Parkinson's disease (PD) patients with dementia should be avoided because the opposing pharmacological actions of both drugs reduce the treatment efficacy. We aimed to investigate the prevalence of the concurrent use of these two types of drugs in Korean patients. Methods: In the 2017 Health Insurance Review and Assessment Service-National Aged Patient Sample data, comprising insurance claims records for a 10% random sample of patients aged ≥ 65 years in Korea, "concurrent use" was defined as the overlapping of anticholinergic and ACHEI doses for at least 2 months. Results: Among 8,845 PD patients with dementia, 847 (9.58%) were co-administered anticholinergics, used to treat the motor symptoms of PD, and ACHEIs for a mean duration of 7.7 months. A total of 286 (33.77% of all co-administered) patients used both drug types concurrently all year. About 80% of concurrent users were prescribed each drug by the same prescriber, indicating that coadministration may not be due to a lack of information sharing between providers. Logistic regression analysis showed that patients mainly treated at clinics (odds ratio (OR), 1.541; 95% confidence interval (CI), 1.158-2.059), hospitals (OR, 2.135; 95% CI, 1.586-2.883), and general hospitals (OR, 1.568; 95% CI, 1.221-2.028) were more likely to be co-prescribed anticholinergics and ACHEIs than those mainly treated at tertiary-care hospitals. PD patients with dementia treated at healthcare organizations located in areas other than the capital city had an approximately 22% higher risk of concurrent use (OR: 1.227, 95% CI: 1.046-1.441). Conclusions: The concurrent use of anticholinergics for the motor symptoms of PD and ACHEIs in elderly Korean PD patients with dementia cannot be ignored, and strategies that mitigate potentially inappropriate concurrent drug use are required. abstract_id: PUBMED:28387023 Simultaneous usage of dementia medications and anticholinergics among Asians and Pacific Islanders. Background: The simultaneous use of dementia medications and anticholinergic medications occurs frequently. Cholinesterase inhibitors and anticholinergic medications likely counteract one another, potentially exposing patients to medications with decreased benefit, more adverse effects, and higher cost of care. We identified the rate of concurrent prescriptions of cholinesterase inhibitors/memantine with anticholinergics in an urban hospital setting with a large Asian and Pacific Islander population. Methods: This study is a retrospective review of patients hospitalized from 1 January 2006 to 31 December 2010 at a general hospital who simultaneously received US Food and Drug Administration-approved dementia medications (galantamine, rivastigmine, donepezil, and/or memantine) and anticholinergics. Results: Overall, 304 patients receiving cholinesterase inhibitors/memantine also received anticholinergics. Of these patients, 64.1% were given high-potency anticholinergic medications, and 35.9% received medium-potency medications. Indications for the use of anticholinergic medication were urological (17.8%), gastrointestinal excluding nausea (32.6%), nausea (10.2%), psychiatric (7.9%), and other (31.6%). Asian patients received the combination of cholinesterase inhibitors/memantine and anticholinergics less frequently than Native Hawaiian or Caucasian patients (8.4% vs 12.2% and 13.3%, respectively; χ2 = 16.04, degrees of freedom = 2, P &lt; 0.0003). Conclusions: Simultaneous prescribing of cholinesterase inhibitors, memantine, and anticholinergic medications was significantly less common than in previous studies, with some ethnic variability. The less frequent occurrence of concurrent medications in the Asian population may be because of variations in the rate of indications or in tolerability for anticholinergic medications among the population. abstract_id: PUBMED:29642720 Use of Potentially Inappropriate Medications in People With Dementia in Vietnam and Its Associated Factors. This study examined the use of potentially inappropriate medicines that may affect cognition (PIMcog) in people with dementia and its associated factors. Medical records of all outpatients with dementia attending a tertiary hospital in Vietnam between January 1, 2015, and December 31, 2016, were examined. Medicine use was assessed against a list of PIMcog. Variables associated with having a PIMcog were assessed using a multiple logistic regression. Of the 128 patients, 41% used a PIMcog, 39.1% used cholinesterase inhibitors (CEIs) concomitantly with anticholinergics, and 18% used antipsychotics. The number of hospital visits (adjusted odds ratio [OR]: 1.08; 95% confidence interval [CI]: 1.02-1.16) and number of treating specialists (adjusted OR: 0.61; 95% CI: 0.45-0.83) were associated with PIMcog use. This study highlights a high-level use of medicines that can further impair cognition or reduce the effectiveness of CEIs in people with dementia. Efforts to improve quality use of medicines for this population are warranted. abstract_id: PUBMED:19467148 Concomitant use of anticholinergics with acetylcholinesterase inhibitors in Medicaid recipients with dementia and residing in nursing homes. Objectives: To evaluate the extent of concomitant use of anticholinergic and cholinesterase inhibitor medications in Medicaid recipients with dementia residing in nursing homes. Design: Cross-sectional survey of medical claims data. Setting: Indiana Medicaid claims for 2004. Participants: Indiana Medicaid recipients continuously eligible for Medicaid in 2004 aged 65 and older with dementia who were residing in nursing homes and taking cholinesterase inhibitors. Measurements: Rates of concomitant anticholinergic and cholinesterase inhibitor use, number of days residents experienced concomitant use, and concomitant use according to therapeutic class and level of anticholinergic activity were determined. Results: A large proportion (46.7%) of 3,251 Medicaid beneficiaries living in nursing homes and taking cholinesterase inhibitors received anticholinergics concomitantly. Anticholinergics designated as Level 3, or having markedly anticholinergic adverse effects, accounted for most of the concomitant anticholinergic use. More than half (58.1%) of the individuals with concomitant anticholinergic use had 100 or more days of such use. Conclusion: Nearly half of Indiana Medicaid recipients with dementia residing in nursing homes who were taking cholinesterase inhibitors in 2004 were using anticholinergics concomitantly. Patterns of concomitant use in the population examined may assist practitioners in reviewing their prescribing decisions for this vulnerable population. abstract_id: PUBMED:32052322 Patterns of Potentially Inappropriate Bladder Antimuscarinic Use in People with Dementia: A Retrospective Cohort Study. Background: Emergency department (ED) visits or hospitalizations should prompt review of the patient's medications after discharge and targeted deprescribing to reduce ongoing risks. Objective: To see if this is happening, we sought to examine the proportion of serious falls and delirium that were followed by a prescription fill for a bladder antimuscarinic. Our secondary objective was to identify concurrent use of other anticholinergics, cholinesterase inhibitors (ChEIs), and diuretics among people with dementia taking bladder antimuscarinics. Methods: Retrospective descriptive study using data from the National Health and Aging Trends Study (2011-2014) linked with Medicare claims from 2010-2014. The primary outcome was the proportion of serious falls or delirium that were followed by a bladder antimuscarinic prescription fill within 12 months. The secondary outcome was the proportion of individuals concurrently receiving prescriptions for bladder antimuscarinics, other strong anticholinergics, ChEIs, and diuretics. Results: During the observation period, 8.88% of people with dementia filled a bladder antimuscarinic prescription. Over one-third (35%) initiated use after being identified as having dementia. Many used bladder antimuscarinics concurrently with other strong anticholinergics (19%), diuretics (42%), and ChEIs (32%). The majority (76%) of serious falls or delirium among people with dementia and bladder antimuscarinic use were followed by a bladder antimuscarinic prescription within 12 months. Conclusions: In this descriptive study, bladder antimuscarinic prescriptions were often filled irrespective of dementia status, serious falls, or delirium. Concurrent use of other anticholinergics, diuretics, and ChEIs was common. Falls and delirium should trigger review of the medication list after discharge and targeted deprescribing in people with dementia. abstract_id: PUBMED:31929157 Asthma and Chronic Obstructive Pulmonary Disease as a Comorbidity and Association with the Choice of Antidementia Medication Among Persons with Alzheimer's Disease. Background: Asthma and chronic obstructive pulmonary disease (COPD) are common comorbidities in persons with Alzheimer's disease (AD). However, pharmacotherapy of these diseases may have opposite mechanisms of action; anticholinergics in asthma/COPD and acetylcholinesterase inhibitors (AChEI) in AD. Objective: To investigate whether existing asthma/COPD affects the choice of AD medication, and the survival of the patients with AD. Methods: In this retrospective cohort study, data from the MEDALZ-study, which includes all community-dwelling persons with AD during 2005-2011 in Finland (n = 70718) was utilized. Persons with asthma/COPD (N = 7211) were defined as having a special reimbursement for asthma/COPD, or long-term use (≥250 days) of inhaled anticholinergics, inhaled corticosteroids, or leukotriene antagonists during the year before AD diagnosis. We compared persons with and without asthma/COPD regarding the choice of the initial antidementia medication (AChEI versus memantine) with logistic regression and mortality with Cox regression model during the follow-up (up to end of 2015). Results: Memantine was favored over AChEIs as first-line treatment to AD in persons with asthma/COPD compared to those without asthma/COPD (odds ratio 1.23, 95% confidence interval (CI) 1.15-1.31). Memantine was also more commonly used among those who used multiple asthma/COPD medications (7.9% of memantine initiators used ≥3 asthma/COPD medications compared with 5.5% of those who initiated with AChEI). Mortality was higher in persons with asthma/COPD compared to those without asthma/COPD (adjusted hazard ratio 1.10, 95% CI 1.07-1.13). Conclusion: More frequent use of memantine instead of AChEI may result from an attempt to prevent possible worsening of asthma/COPD by AChEIs. Vulnerable persons with both AD and asthma/COPD need individually assessed pharmacotherapy for their medical conditions. abstract_id: PUBMED:18384584 Dual use of bladder anticholinergics and cholinesterase inhibitors: long-term functional and cognitive outcomes. Objectives: To determine the cognitive and functional consequences of dual use of cholinesterase inhibitors (ChIs) and the bladder anticholinergics oxybutynin or tolterodine. Design: Prospective cohort study. Setting: Nursing homes (NHs) in the state of Indiana. Participants: Three thousand five hundred thirty-six Medicaid-eligible NH residents aged 65 and older taking a ChI between January 1, 2003, and December 31, 2004. Residents were excluded if they were taking an anticholinergic other than oxybutynin or tolterodine. Measurements: Indiana Medicaid claims data were merged with data from the Minimum Data Set (MDS). Repeated-measures analyses were performed to assess the effects of dual therapy on change in cognitive function measured using the MDS Cognition Scale (MDS-COGS; scored 0-10) and change in activity of daily living (ADL) function using the seven ADL items in the MDS (scored 0-28). Potential covariates included age, sex, race, number of medications, and Charlson Comorbidity Index score. Results: Three hundred seventy-six (10.6%) residents were prescribed oxybutynin or tolterodine concomitantly with a ChI. In residents in the top quartile of ADL function, ADL function declined an average of 1.08 points per quarter when not taking bladder anticholinergics (ChI alone), compared with 1.62 points per quarter when taking dual therapy, a 50% greater rate in quarterly decline in ADL function (P=.01). There was no excess decline attributable to dual therapy in MDS-COGS scores or in ADL function for residents who started out with lower functioning. Conclusion: In higher-functioning NH residents, dual use of ChIs and bladder anticholinergics may result in greater rates of functional decline than use of ChIs alone. The MDS-COGS may not be sensitive enough to detect differences in cognition due to dual use. abstract_id: PUBMED:16791627 Oral anticholinergics in overactive bladder Behavioural therapy and anticholinergics are the mainstays in the treatment of symptoms of overactive bladder in patients with idiopathic and neurogenic detrusor overactivity; they are the first-line treatment. Oxybutynin, propiverine, tolterodine and trospium chloride as well as the "newcomers" solifenacin and darifenacin are comparable in regards to their efficacy. However, based on different pharmacokinetics and pharmacodynamics with different resorption velocity, different metabolisation and different CNS penetration, the profile of adverse events is different, qualitatively and quantitatively. Substances that are resorbed slowly or available as slow-release formulations are tolerated better. Lipophilic anticholinergics which pass the blood-brain barrier may compromise cognitive functions, especially in geriatric patients, who are already on cholinesterase inhibitors due to memory disorders. The following article gives an overview of the anticholinergics currently prescribed in patients with symptoms of overactive bladder with special attention to the influence of pharmacokinetics/pharmacodynamics on the adverse events profile including possible CNS side effects. abstract_id: PUBMED:29729025 Anticholinergic medications in patients admitted with cognitive impairment or falls (AMiCI). The impact of hospital admission on anticholinergic cognitive medication burden. Results of a multicentre observational study. What Is Known And Objective: Drugs with anticholinergic properties increase the risk of falls, delirium, chronic cognitive impairment, and mortality and counteract procholinergic medications used in the treatment of dementia. Medication review and optimisation to reduce anticholinergic burden in patients at risk is recommended by specialist bodies. Little is known how effective this review is in patients who present acutely and how often drugs with anticholinergic properties are used temporarily during an admission. The aim of the study was to describe the changes in the anticholinergic cognitive burden (ACB) in patients admitted to hospital with a diagnosis of delirium, chronic cognitive impairment or falls and to look at the temporary use of anticholinergic medications during hospital stay. Methods: This is a multi-centre observational study that was conducted in seven different hospitals in the UK, Finland, The Netherlands and Italy. Results And Discussion: 21.1% of patients had their ACB score reduced by a mean of 1.7%, 19.7% had their ACB increased by a mean of 1.6%, 22.8% of DAP naïve patients were discharged on anticholinergic medications. There was no change in the ACB scores in 59.2% of patients. 54.1% of patients on procholinergics were taking anticholinergics. Out of the 98 medications on the ACB scale, only 56 were seen. Medications with a low individual burden were accounting for 64.9% of the total burden. Anticholinergic drugs were used temporarily during the admission in 21.9% of all patients. A higher number of DAPs used temporarily during admission was associated with a higher risk of ACB score increase on discharge (OR = 1.82, 95% CI for OR: 1.36-2.45, P &lt; .001). What Is New And Conclusion: There was no reduction in anticholinergic cognitive burden during the acute admissions. This was the same for all diagnostic subgroups. The anticholinergic load was predominantly caused by medications with a low individual burden. More than 1 in 5 patients not taking anticholinergics on admission were discharged on them and similar numbers saw temporary use of these medications during their admission. More than half of patients on cholinesterase-inhibitors were taking anticholinergics at the same time on admission, potentially directly counteracting their effects. Answer: The concurrent use of anticholinergics and cholinesterase inhibitors is not a rare event but rather a common practice. Studies have shown that a significant proportion of patients receiving cholinesterase inhibitors are also prescribed anticholinergics concurrently. For instance, in a study of Iowa Medicaid beneficiaries aged 50 and older, 35.4% of patients receiving a cholinesterase inhibitor also received an anticholinergic concurrently, with many of these anticholinergics identified as inappropriate for use in the elderly (PUBMED:15571547). Similarly, in a cross-sectional study using Korea National Health Insurance claims data, 9.58% of Parkinson's disease patients with dementia were co-administered anticholinergics and acetylcholinesterase inhibitors for a mean duration of 7.7 months (PUBMED:35864474). In an urban hospital setting with a large Asian and Pacific Islander population, 304 patients receiving cholinesterase inhibitors/memantine also received anticholinergics, with a significant proportion given high-potency anticholinergic medications (PUBMED:28387023). In Vietnam, 41% of outpatients with dementia attending a tertiary hospital used potentially inappropriate medicines that may affect cognition, with 39.1% using cholinesterase inhibitors concomitantly with anticholinergics (PUBMED:29642720). Furthermore, a cross-sectional survey of Indiana Medicaid claims revealed that 46.7% of Medicaid beneficiaries with dementia residing in nursing homes and taking cholinesterase inhibitors received anticholinergics concomitantly (PUBMED:19467148). A retrospective cohort study also found that over one-third of people with dementia initiated bladder antimuscarinic use after being identified as having dementia, with many using these medications concurrently with other strong anticholinergics, diuretics, and cholinesterase inhibitors (PUBMED:32052322). These findings indicate that the concurrent use of anticholinergics and cholinesterase inhibitors is a widespread practice, despite the potential for reduced treatment efficacy and increased risk of adverse effects due to the opposing pharmacological actions of these drugs.
Instruction: Neighbourhood deprivation and alcohol consumption: does the availability of alcohol play a role? Abstracts: abstract_id: PUBMED:15737966 Neighbourhood deprivation and alcohol consumption: does the availability of alcohol play a role? Background: Previous studies suggest that the physical availability of alcohol may mediate the association between neighbourhood-level material deprivation and alcohol consumption. This study tests the relationships between neighbourhood-level deprivation, alcohol availability, and individual-level alcohol consumption using a multilevel analysis. Methods: Data are from cross-sectional surveys conducted between 1979 and 1990 as part of the Stanford Heart Disease Prevention Program (SHDPP). Women and men (n = 8197) living in four northern/central California cities and 82 neighbourhoods were linked to neighbourhood deprivation variables derived from the US census (e.g. unemployment, crowded housing) and to measures of alcohol availability (density of outlets in the respondent's neighbourhood, nearest distance to an outlet from the respondent's home, and number of outlets within a half mile radius of the respondent's home). Separate analyses were conducted for on- and off-sale outlets. Results: The most deprived neighbourhoods had substantially higher levels of alcohol outlet density than the least deprived neighbourhoods (45.5% vs 14.8%, respectively). However, multilevel analyses showed that the least deprived neighbourhoods were associated with the heaviest alcohol consumption, even after adjusting for individual-level sociodemographic characteristics (OR 1.30, CI 1.08-1.56). Alcohol availability was not associated with heavy drinking and thus did not mediate the relationship between neighbourhood deprivation and heavy alcohol consumption. Conclusions: Although alcohol availability is concentrated in the most deprived neighbourhoods, women and men in least deprived neighbourhoods are most likely to be heavy drinkers. This mismatch between supply and demand may cause people in the most deprived neighbourhoods to disproportionately suffer the negative health consequences of living near alcohol outlets. abstract_id: PUBMED:25128780 Neighbourhood availability of alcohol outlets and hazardous alcohol consumption in New Zealand. The socio-spatial arrangement of alcohol retailers is potentially important in understanding the relationship between neighbourhood context and 'excessive' alcohol consumption. This New Zealand study examines whether the availability of alcohol products is associated with individual-level alcohol consumption. Measures capturing the availability of alcohol retailers were calculated for neighbourhoods across the country and then appended to a national health survey. At the national level there was no evidence for an association between hazardous consumption and alcohol outlet access. However, there was evidence of associations with neighbourhood retailing for younger Māori and Pacific peoples males; younger European females; middle-aged European men; and older men. The findings provide evidence that 'alcogenic' environments are associated with excessive drinking in New Zealand, albeit that the associations are restricted to particular vulnerable groups. abstract_id: PUBMED:30511169 The neighbourhood social environment and alcohol use among urban and rural Scottish adolescents. Objectives: This research examined the relationship between neighbourhood social environmental characteristics and drinking outcomes among a sample of urban and rural adolescents. Methods: From a sample of 1558 Scottish secondary schoolchildren, surveyed as part of the 2010 Health Behaviour in School-aged Children study, we modelled three drinking outcomes on a variety of neighbourhood conditions, including social cohesion, disorder, alcohol outlet density, deprivation, and urban/rurality. Nested and cross-classified multilevel logistic regressions were specified. Results: An urban-to-rural gradient was found with non-urban adolescents exhibiting higher odds of having ever drank. Neighbourhood social cohesion related to having ever drank. Among drinkers, those living in accessible small towns had higher odds of weekly drinking and drunkenness compared to urban areas. Higher odds of drunkenness were also found in remote rural areas. Those residing in the least deprived areas had lower odds of weekly drinking. Conclusions: In Scotland, inequalities exist in adolescent alcohol use by urban/rurality and neighbourhood social conditions. Findings support regional targeting of public health efforts to address inequalities. Future work is needed to develop and evaluate intervention and prevention approaches for neighbourhoods at risk. abstract_id: PUBMED:26032427 Adolescents from affluent city districts drink more alcohol than others. Aims: To estimate the level of alcohol consumption and problems among adolescents in city districts in Oslo, Norway with different socio-economic composition; to test whether differences in alcohol consumption are related to district differences in socio-demographic characteristics; and to analyse whether such associations remain significant after controlling for individual-level variables. Design: Cross-sectional survey using multi-level linear regression analyses with individual responses at the lowest level and city-district data at the highest level. Setting: Oslo, Norway. Participants: A total of 6635 secondary school students, in 62 schools, living in 15 different city districts. Measurements: Frequency of alcohol consumption and alcohol intoxication; alcohol problems; and individual characteristics such as immigrant status, religious involvement and parental norms with regard to alcohol. Socio-economic indicators in city districts, such as education, income and unemployment, were combined into a district-level socio-economic index (DLSI). Findings: DLSI scores were related positively to alcohol use (r = 0.31, P &lt; 0.01) and alcohol intoxication (r = 0.25, P &lt; 0.01) but negatively to alcohol problems among alcohol users (r = -0.18, P &lt; 0.01). DLSI scores remained significant for alcohol consumption and alcohol intoxication, after controlling for individual-level variables (P &lt; 0.01), but this was not the case for alcohol problems. Conclusion: Adolescents in affluent areas of Oslo, Norway report the highest levels of alcohol consumption and alcohol intoxication of all areas; neighbourhood characteristics such as education, income and unemployment levels seem to play a role in such drinking behaviour. Alcohol users in poorer districts reported more alcohol problems than those in other districts; however, here neighbourhood effects do not seem to play a role. abstract_id: PUBMED:22142140 Neighbourhood matters: perceptions of neighbourhood cohesiveness and associations with alcohol, cannabis and tobacco use. Introduction And Aims: To examine relationships between perceived neighbourhood cohesion and alcohol, tobacco and cannabis consumption in New Zealand. Design And Methods: A two-level random intercept regression model was used to examine the extent to which perception of neighbourhood cohesion (at the individual and area level) was associated with the frequency of substance (alcohol, tobacco and cannabis) consumption, after controlling for demographics and deprivation. This study is based on data from two national Health Behaviours Surveys (Drugs and Alcohol) conducted in 2003 and 2004 in New Zealand. Data were collected by computer-assisted telephone interviewing with two complementary computer-assisted cellphone interviewing samples. The combined sample consists of 6346 men and 8411 women (n = 14,757) distributed across 1572 census area units. Results: Perception of neighbourhood cohesion was significantly associated with the level of alcohol, tobacco and cannabis consumption. Individuals who perceived their neighbourhood as more cohesive had higher annual frequency of alcohol consumption but lower consumption on a typical drinking occasion. Higher perceived neighbourhood cohesion was also associated with a decrease in the probability of tobacco and cannabis use and of the amounts consumed. Area-level analysis suggested that aggregate census area unit-level neighbourhood cohesion exerted a significant additional contextual effect on the frequency of tobacco and cannabis consumption over and above individual perceptions of neighbourhood cohesiveness. Discussion And Conclusions: This study provides empirical evidence that perceptions of the neighbourhood social environment are associated with people's substance consumption patterns. Increasing residents' sense of neighbourhood cohesion might prove a promising way to decrease health-damaging consumption behaviours. abstract_id: PUBMED:34338827 Individual differences in social play behaviour predict alcohol intake and control over alcohol seeking in rats. Rationale: Social play behaviour is a rewarding social activity displayed by young mammals, thought to be important for the development of brain and behaviour. Indeed, disruptions of social play behaviour in rodents have been associated with cognitive deficits and augmented sensitivity to self-administration of substances of abuse, including alcohol, later in life. However, the relation between social development and loss of control over substance use, a key characteristic of substance use disorders including alcohol use disorder (AUD), has not been investigated. Moreover, it remains unknown how inherent differences in playfulness relate to differences in the sensitivity to substance use and AUD. Objective: The objective of this study is to determine how individual differences in juvenile social play behaviour predict alcohol intake and loss of control over alcohol seeking. Methods: Juvenile male Lister hooded rats were characterized for their tendency to engage in social play behaviour. Subsequently, alcohol consumption and conditioned suppression of alcohol seeking were assessed in the tertiles of rats that showed the most and least social play. Results: The rats that engaged most in social play behaviour consumed more alcohol than their less playful counterparts. However, whereas the most playful rats showed intact conditioned suppression of alcohol seeking, the least playful rats showed no such suppression. Conclusion: Individual levels of playfulness predict the sensitivity to alcohol-directed behaviour. Highly playful rats are more prone to alcohol intake, yet show greater control over alcohol seeking. These findings increase our understanding of the relationship between social development and vulnerability to AUD. abstract_id: PUBMED:34957322 Teaching alcohol and smoking counselling in times of COVID-19 to 6th-semester medical students: experiences with a digital-only and a blended learning teaching approach using role-play and feedback. Objectives: Digital teaching formats have seen increased use, and not just since the beginning of the pandemic. They can also be used to teach cognitive, practical and communicative learning objectives effectively. We describe the implementation of an online-only course on alcohol/smoking counselling in the COVID-19 summer semester (SS) 2020 and an inverted classroom (IC) concept in the winter semester (WS) 2020/21 at the University Hospital of Würzburg. Methodology: The interdisciplinary subject of "prevention" teaches students about high-risk alcohol consumption/smoking and how to conduct a brief verbal intervention. All 143 (SS) and 131 (WS) 6th-semester medical students completed a 90-minute course: module 1 included a Prezi® presentation on alcohol/smoking basics. Module 2 introduced a counselling concept (alcohol or smoking) online or classroom-based (WS only), depending on the participants' choice. In the online practical component, each student created a counselling video and reflections at home, and later received written feedback from lecturers/tutors. Supervised role-playing was used in the classroom-based format in the WS. There were 2 exam questions on module 1 at the end of each semester. Results: The students surveyed (11%) were satisfied with module 1. Practical exercises and feedback received praise in the evaluation of the classroom-based format (response: 97%). It was not possible for all students to perform counselling due to time constraints. A majority of participants filmed live role-playing in the online practical component. The exam questions were answered correctly by 31% (SS) and 36% (WS) respectively. Conclusions: Counselling can also be taught digitally: creating one's own videos with delayed written feedback is an innovative form of teaching. We are now aiming for a mix of both aspects as an IC with 90 minutes of classroom-based practical exercises. abstract_id: PUBMED:24612367 Gender differences in associations of neighbourhood disadvantage with alcohol's harms to others: a cross-sectional study from the USA. Introduction And Aims: To examine whether alcohol's harms to others are more prevalent in socioeconomically disadvantaged neighbourhoods and whether men or women are at differential risk in these neighbourhoods. Design And Methods: Cross-sectional survey data from 2000 and 2005 National Alcohol Surveys were linked to geo-referenced indicators of neighbourhood disadvantage from the US 2000 Decennial Census. The pooled sample included 10,121 adults (54% female; average age 44.4 years; 69% White; 13% African-American; 13% Hispanic). A dichotomous indicator denoted neighbourhoods based on the top quartile on a five-item measure of disadvantage (alpha = 0.90). We examined past-year family problems due to someone else's drinking (marriage difficulties and/or financial trouble) and victimisation by someone who had been drinking (having property vandalised and/or being pushed, hit or assaulted). Results: During the prior 12 months, 6% of women and 3% of men experienced family problems from someone else's drinking, and 4% of women and 7% of men reported being victimised by drinkers. Multivariate logistic regression models adjusting for individual-level socioeconomic status and other demographic characteristics showed the relationship between neighbourhood disadvantage and harms from someone else's drinking was moderated by gender, with significantly higher odds of family problems in disadvantaged neighbourhoods for men but not for women, as well as significantly higher odds of crime victimisation in disadvantaged neighbourhoods for women but not men. Discussion And Conclusions: Experiences of harms from someone else's drinking in disadvantaged neighbourhoods vary for men and women. Targeted intervention strategies are needed to reduce alcohol's harm to others. abstract_id: PUBMED:16143745 Use of cigarettes and alcohol by preschoolers while role-playing as adults: "Honey, have some smokes". Objective: To examine preschoolers' attitudes, expectations, and perceptions of tobacco and alcohol use. Design: Structured observational study. Children used props and dolls to act out a social evening for adults. As part of the role play, each child selected items from a miniature grocery store stocked with 73 different products, including beer, wine, and cigarettes, for an evening with friends. Setting: A behavioral laboratory at the Department of Psychological and Brain Sciences, Dartmouth College. Patients: One hundred twenty children, 2 to 6 years old, participated individually in the role-playing. Main Outcome Measure: Whether or not a child purchased cigarettes or alcohol at the store. Results: Children purchased a mean of 17 of the 73 products in the store. Thirty-four children (28.3%) bought cigarettes and 74 (61.7%) bought alcohol. Children were more likely to buy cigarettes if their parents smoked (adjusted odds ratio [OR], 3.90; 95% confidence interval [CI], 1.20-12.63). Children were more likely to buy beer or wine if their parents drank alcohol at least monthly (adjusted OR, 3.04; 95% CI, 1.02-9.10) or if they viewed PG-13- or R-rated movies (adjusted OR, 5.10; 95% CI, 1.14-22.90). Children's play behavior suggests that they are highly attentive to the use and enjoyment of alcohol and tobacco and have well-established expectations about how cigarettes and alcohol fit into social settings. Conclusions: The data suggest that observation of adult behavior, especially parental behavior, may influence preschool children to view smoking and drinking as appropriate or normative in social situations. These perceptions may relate to behaviors adopted later in life. abstract_id: PUBMED:28802872 Smoking and binge-drinking among adolescents, Ontario, Canada: Does the school neighbourhood matter? This study examines whether access to alcohol and tobacco around the school relates to higher or lower odds of cigarette smoking and binge-drinking among Ontario high school students. The 2013 Ontario Student Drug Use and Health Survey provides data on self-reported smoking and drinking, individual covariates and subjective socio-economic status for 6142 students (grades 9-12) in Ontario, Canada. Locations of schools were geocoded and 1.6km network buffers defined the school neighbourhoods. Multi-level logistic regression examines individual and school neighbourhood factors with smoking and binge drinking as the dependent variable. Higher density of retail outlets increased the odds of smoking, but not the odds of binge-drinking. Older age, lower SES and being male increased the odds of smoking; while older age and being male also increased the odds of binge-drinking. Lower SES and higher population density decreased the odds of binge-drinking. Proximity to tobacco and alcohol outlets was not significant. Findings showed that a greater number of outlets in the school neighbourhood is significantly associated with higher odds of smoking, but not binge-drinking. School neighbourhood access to tobacco outlets should be considered when formulating policy interventions to reduce smoking for adolescents. Answer: The relationship between neighbourhood deprivation and alcohol consumption is complex and varies across different studies. The availability of alcohol does not seem to consistently mediate the association between neighbourhood-level material deprivation and alcohol consumption. In a study conducted as part of the Stanford Heart Disease Prevention Program, it was found that although the most deprived neighbourhoods had substantially higher levels of alcohol outlet density, the least deprived neighbourhoods were associated with the heaviest alcohol consumption. Alcohol availability was not associated with heavy drinking, suggesting that it did not mediate the relationship between neighbourhood deprivation and heavy alcohol consumption (PUBMED:15737966). Similarly, a New Zealand study found no evidence at the national level for an association between hazardous consumption and alcohol outlet access. However, there were associations with neighbourhood retailing for specific vulnerable groups, indicating that 'alcogenic' environments are associated with excessive drinking in New Zealand, albeit restricted to particular groups (PUBMED:25128780). In Scotland, an urban-to-rural gradient was observed with non-urban adolescents exhibiting higher odds of having ever drank. Neighbourhood social cohesion was related to having ever drank, and higher odds of weekly drinking and drunkenness were found in certain non-urban areas. Those residing in the least deprived areas had lower odds of weekly drinking, indicating that inequalities in adolescent alcohol use exist by urban/rurality and neighbourhood social conditions (PUBMED:30511169). In Oslo, Norway, adolescents in affluent areas reported the highest levels of alcohol consumption and intoxication, and neighbourhood characteristics such as education, income, and unemployment levels seemed to play a role in such drinking behaviour. However, alcohol problems among users in poorer districts were not significantly associated with neighbourhood effects (PUBMED:26032427). Perceptions of neighbourhood cohesion in New Zealand were significantly associated with the level of alcohol, tobacco, and cannabis consumption. Individuals who perceived their neighbourhood as more cohesive had higher annual frequency of alcohol consumption but lower consumption on a typical drinking occasion (PUBMED:22142140). In summary, while neighbourhood deprivation and the availability of alcohol outlets may influence alcohol consumption patterns, the relationship is not straightforward and can be influenced by various factors, including individual and neighbourhood characteristics, as well as specific population groups.
Instruction: Is decrease of tumor volume correlated with stage change after preoperative concurrent chemoradiotherapy? Abstracts: abstract_id: PUBMED:15966201 Is decrease of tumor volume correlated with stage change after preoperative concurrent chemoradiotherapy? Background/aims: The significance of tumor volume and its change after concurrent chemoradiotherapy (CCRT) was evaluated. Methodology: Standard-dose external radiation and oral UFUR plus leucovorin were used to treat 30 middle and lower rectal adenocarcinoma patients. Volume of tumor calculated from images obtained by dynamic MRI of the rectum before and after CCRT was compared to pathological results after definite resection and other clinical data. Results: The T-stage in 15 patients (50%), the N-stage in 13 (72.2%), and overall, the TNM stage in 18 (60%), were downstaged, including 7 (23.3%) with complete responses (CR). Volume of tumor before CCRT (Vpre) and after CCRT (VPost) was 10.3+/-6.1cm3 and 4.2+/-2.2cm3, respectively, and VPre correlated with initial T stage, N stage, age, and location. The net decrease ratio (NDR) of tumor volume was related to Vpre and initial T stage. As to the downstaging effect, VPre was related to incidence of CR; NDR was related to the downstaging of the N stage. Conclusions: All tumors showed volume reduction after CCRT, but the downstaging benefits were not in proportion to the size change. Initially larger tumors had higher ratios of volume reduction, and smaller tumors had higher chance of CR. abstract_id: PUBMED:32903963 High Mean Corpuscular Volume as a Predictor of Poor Overall Survival in Patients with Esophageal Cancer Receiving Concurrent Chemoradiotherapy. Background: Increasing numbers of recent studies have demonstrated that high mean corpuscular volume (MCV) is a predictor of poor overall survival (OS) and therapeutic response in patients with solid tumors. The aim of the present study was to explore the association between high MCV and OS in patients with advanced esophageal cancer (EC) undergoing concurrent chemoradiotherapy. Patients And Methods: Enrolled in this study were 249 patients with advanced EC who underwent concurrent chemoradiotherapy. Pre-treatment MCV values were collected in all patients and their correlations with OS and pathophysiological characteristics were analyzed. The chi-square test was used to explore the correlation between MCV and various clinical pathophysiological characteristics, and the prognostic significance of high MCV using Kaplan-Meier curves and the Cox proportional hazards model. All P-values were two-tailed and a P-value &lt;0.05 was considered statistically significant. Results: According to ROC curve analysis, the optimal cut-off value of MCV was 93.6 fL. The mean OS was 14.7 months in all 249 EC patients, 10.9 months in patients with MCV &gt;93.6 fL, and 18.8 months in patients with MCV &lt;93.6 fL; the difference is statistically significant (P&lt;0.05). Chi-square test showed that the MCV value was correlated with the N stage of the tumor and the therapeutic effect, indicating that the higher the MCV was, the higher the T stage of the tumor and the worse the therapeutic effect would be (p=0.012 and p &lt;0.01). Multivariate analysis showed that MCV (OR = 1.864, 95% CI: 1.439-2.415) was an independent prognostic factor for OS in EC patients. Conclusion: High MCV is a poor predictor of OS in patients with advanced EC receiving concurrent chemoradiotherapy. abstract_id: PUBMED:28179586 Phase III randomized trial of preoperative concurrent chemoradiotherapy versus preoperative radiotherapy for patients with locally advanced head and neck squamous cell carcinoma. Purpose: To determine the role of preoperative concurrent chemoradiotherapy in the treatment of locally advanced head and neck squamous cell carcinoma (HNSCC). Methods: A total of 222 patients with stage III/IVA-B HNSCC were randomly assigned to receive preoperative concurrent chemoradiotherapy (Pre-S CRT, weekly cisplatin 30mg/m2) or preoperative radiotherapy alone (Pre-S RT). Survival analysis was estimated by the Kaplan-Meier method and compared by the log-rank test. Results: With a medial follow-up of 59 month, the 5-year overall survival (OS), progression-free survival (PFS), distant metastasis-free survival (DMFS) of Pre-S CRT v Pre-S RT group were 53.8% v 39.0% (hazard ratio [HR], 0.74, 95% CI, 0.50 to 1.10, P = 0.13), 53.2% v 38.7%, (HR, 0.69, 95% CI, 0.47 to 1.01, P =0.06), and 80.4% v 68.1% (HR, 0.53, 95% CI, 0.28 to 0.98, P = 0.04), respectively. In patients with larynx-hypopharynx primaries, the 5-year OS, PFS and DMFS of Pre-S CRT v Pre-S RT were 62.7% v 38.8% (HR, 0.59, 95% CI 0.35 to 1.02, P = 0.054), 63.1% v 39.9% (HR, 0.52; 95% CI 0.30 to 0.89, P = 0.03) and 86.2% v 63.3% (HR, 0.35, 95% CI 0.15 to 0.82, P = 0.01), respectively. Conclusion: The addition of weekly cisplatin concurrent to preoperative RT does not improve OS, but improve DMFS in locally advanced HNSCC. However, in a subset of patients with the larynx-hypopharynx primaries, preoperative chemoradiotherapy has significantly improved PFS and DMFS, and has also provided a borderline benefit in OS in comparison with preoperative radiotherapy alone. abstract_id: PUBMED:31165155 Volumetric imaging parameters are significant for predicting the pathological complete response of preoperative concurrent chemoradiotherapy in local advanced rectal cancer. Preoperative concurrent chemoradiotherapy (CCRT) as the standard treatment for locally advanced rectal cancer (LARC) has been widely used in clinic. Its efficiency influences the prognosis and the selection of subsequent treatment. The current criteria for evaluating the prognosis of patients with extremely sensitive preoperative CCRT include the clinical complete remission response (cCR) and pathological complete response (pCR), but those with cCR may not necessarily achieve pCR, and the pCR can be confirmed only after surgery. Some scholars believe that patients with pCR after CCRT can be categorized as 'watch and wait'. Therefore, it is extremely important to find a way to predict the pCR status of patients before therapy. In this study, we examined the expression of stem cell markers and obtained direct and derivative volumetric imaging parameters before treatment. Subsequently, these factors and the general clinical data were adopted into a regression model, and the correlation between them and the pCR was analyzed. We found that the pCR of LARC was positively correlated with tumor compactness (TC), whereas it was negatively correlated with approximate tumor volume (ATV), real tumor volume (RTV), total surface area of the tumor (TSA) and tumor maximum longitudinal length (TML). In these meaningful predictors, the positive predictive values and the negative predictive values of TC were 74.73% and 94.61%, respectively. Compared with other possible predictors, TC is the most encouraging predictor of pCR. Our findings provide a way for clinicians to predict the sensitivity of preoperative CCRT and will help to select individualized treatment options for LARC patients. abstract_id: PUBMED:24421644 Comparison of concurrent chemoradiotherapy followed by radical surgery and high-dose-rate intracavitary brachytherapy: a retrospective study of 240 patients with FIGO stage IIB cervical carcinoma. Background: The aim of this study was to compare the long-term survival outcome and late toxicity in patients with FIGO (International Federation of Gynecology and Obstetrics) stage IIB cervical carcinoma after two treatment modalities, ie, concurrent chemoradiotherapy followed by radical surgery and concurrent chemoradiotherapy followed by high-dose-rate intracavitary brachytherapy. Methods: Between November 2004 and November 2011, 240 patients with FIGO stage IIB cervical carcinoma were analyzed, comprising 119 patients treated with concurrent chemoradiotherapy followed by radical surgery (group 1) and 121 patients treated with concurrent chemoradiotherapy followed by high-dose-rate intracavitary brachytherapy (group 2). Local control, overall survival, progression-free survival, and treatment-related complications were compared between the two groups. Results: The median follow-up duration was 36 months. Concurrent chemoradiotherapy followed by radical surgery showed a survival benefit when comparing group 1 and group 2 (3-year overall survival, 94.9% versus 84.6%, P=0.011; 3-year progression-free survival, 91.0% versus 81.8%, P=0.049, respectively). Three-year local pelvic control was 94.6% in group 1 and 93.3% in group 2 (P=0.325). Prognostic factors in group 1 were: age (≤35 years versus &gt;35 years), 3-year progression-free survival (74.1% versus 90.9%, P=0.037); tumor diameter (≥6 cm versus &lt;6 cm); and 3-year progression-free survival, (60.6% versus 92.9%, P=0.004). Prognostic factors in group 2 were: tumor diameter (≥4 cm versus &lt;4 cm); 3-year overall survival (78.0% versus 94.8%, P=0.043); tumor diameter (≥6 cm versus &lt;6 cm); 3-year progression-free survival (42.9% versus 84.2%, P=0.032); and 3-year overall survival (42.9% versus 87.1%, P=0.013). Further, 50 patients (42.02%) in group 1 and 46 patients (38.02%) in group 2 suffered from late complications. Analysis of the difference in composition of late complications showed that the rate of leg edema was higher in group 1 (35.29% versus 4.96%, P=0.000) while the rate of radiation enteritis was higher in group 2 (30.58% versus 5.04%, P=0.000). Conclusion: In patients with FIGO stage IIB cervical carcinoma, concurrent chemoradiotherapy followed by radical surgery achieved higher overall survival and progression-free survival rates in comparison with radical radiotherapy associated with concurrent chemotherapy. Tumor diameter could be a common prognostic factor in these two groups of patients. abstract_id: PUBMED:24137561 Comparison between preoperative and postoperative concurrent chemoradiotherapy for rectal cancer: an institutional analysis. Purpose: To evaluate the treatment outcomes of preoperative versus postoperative concurrent chemoradiotherapy (CRT) on locally advanced rectal cancer. Materials And Methods: Medical data of 114 patients with locally advanced rectal cancer treated with CRT preoperatively (54 patients) or postoperatively (60 patients) from June 2003 to April 2011 was analyzed retrospectively. 5-Fluorouracil (5-FU) or a precursor of 5-FU-based concurrent CRT (median, 50.4 Gy) and total mesorectal excision were conducted for all patients. The median follow-up duration was 43 months (range, 16 to 118 months). The primary end point was disease-free survival (DFS). The secondary end points were overall survival (OS), locoregional control, toxicity, and sphincter preservation rate. Results: The 5-year DFS rate was 72.1% and 48.6% for the preoperative and postoperative CRT group, respectively (p = 0.05, the univariate analysis; p = 0.10, the multivariate analysis). The 5-year OS rate was not significantly different between the groups (76.2% vs. 69.0%, p = 0.23). The 5-year locoregional control rate was 85.2% and 84.7% for the preoperative and postoperative CRT groups (p = 0.98). The sphincter preservation rate of low-lying tumor showed significant difference between both groups (58.1% vs. 25.0%, p = 0.02). Pathologic tumor and nodal down-classification occurred after the preoperative CRT (53.7% and 77.8%, both p &lt; 0.001). Acute and chronic toxicities were not significantly different between both groups (p = 0.10 and p = 0.62, respectively). Conclusion: The results confirm that preoperative CRT can be advantageous for improving down-classification rate and the sphincter preservation rate of low-lying tumor in rectal cancer. abstract_id: PUBMED:31690565 Preoperative endoscopic tattooing technique improved lymph node retrieval in rectal cancer patients receiving neoadjuvant concurrent chemoradiotherapy. Aims: To describe the clinical utility of lymph node retrieval and prognostic value of tattooing in rectal cancer (RC) patients undergoing neoadjuvant concurrent chemoradiotherapy (CCRT). Methods: A total 97 RC patients underwent preoperative CCRT, and 38 patients had preoperative endoscopic tattooing. Surgical intervention was performed after CCRT and the specimens were sampled as standard protocol in all patients. Other clinicopathological parameters correlated with lymph node retrieval status were also analysed. Results: Fifteen patients (39.5%) of 38 RC patients in the tattooing group (TG) had adequate lymph node retrieval (&gt;12) compared with 12 (20.3%) of 59 in the non-tattooing group. Higher lymph node retrieval rate was noted in the TG (p=0.04). In multivariable analysis, it showed tattooing was an independent predictive factor for higher lymph node retrieval in RC patients after CCRT (p=0.024) by logistic regression modelling. Besides histological grade, positive lymphovascular invasion, presence of lymph node metastasis, poor CCRT response and advanced pathological stage, inadequate lymph node retrieval was significantly associated with poor survival (all p&lt;0.05) by Kaplan-Meier analysis. In multivariable analyses, the results revealed that lymph node retrieval (p=0.005), pathological stage (p=0.001) and tumour progression grade (p=0.02) were independent prognostic markers in RC patients receiving CCRT. Conclusion: Preoperative endoscopic tattooing is a useful technique for RC patient receiving neoadjuvant CCRT. It can improve lymph node retrieval and provide an adequate diagnosis for proper treatment and prognosis. abstract_id: PUBMED:31297478 Pretreatment SUVmax value to predict outcome in patients with stage III NSCLC receiving concurrent chemoradiotherapy. Objective: Stage III disease accounts for approximately one-fourth of all non-metastatic non-small cell lung cancer (NSCLC). The patients who are not candidates for curative resection are offered concomitant chemoradiotherapy. In this subgroup, which is difficult to manage, studies that address the role of PET-CT to predict outcome measures specifically for stage III NSCLC receiving concurrent chemoradiotherapy may help better risk stratification. This study aimed to assess whether baseline PET maximum standardized uptake value (SUVmax) value in stage III NSCLC treated with concurrent chemoradiotherapy would independently identify patients with high risk of progression and death. Methods: The study population consisted of patients aged 18 years or more with unresectable stage III histologically or cytologically proven NSCLC who received concurrent chemoradiotherapy. From 2007 to 2014, medical records of patients admitted to our institution were retrospectively analyzed. Pretreatment PET-CT SUVmax values were recorded for each patient. These values were categorized as low or high according to the median SUVmax measure of the study population. Results: A total of 175 patients were analyzed. The median follow-up time was 23 months (range 6-109). The PET-CT SUVmax values ranged from 3.5 to 46 with a median value of 14. The median overall survival was 25 months in SUVmax &lt;14 and 18 months in SUVmax ≥14 group (p=0.023). The median progression-free survival was 16 months in SUVmax &lt;14 and 11 months in SUVmax ≥14 group (p=0.033). Multivariate analysis revealed that both PET-CT SUVmax value (p&lt;0.001) and age (p=0.016) were independent significant predictors for overall survival (OS). Conclusion: The results of this study involving patients with stage III NSCLC receiving concurrent chemoradiotherapy provide evidence that suggests that high values of pretreatment SUVmax, an indicator of metabolic tumor burden, predicted a higher risk of disease progression and death. abstract_id: PUBMED:34176930 Clinical Outcome of Palliative Concurrent Chemoradiotherapy with Cisplatin/Docetaxel for Stage III Non-small Cell Lung Cancer. Palliative concurrent chemoradiotherapy (CCRT) is often administered to patients with stage III non-small cell lung cancer (NSCLC). We investigated the clinical outcomes of patients receiving palliative CCRT for NSCLC. Data of patients with NSCLC who underwent palliative CCRT (n=16), preoperative CCRT plus surgery (n=97), or definitive CCRT (n=48) were evaluated. In all groups, the concurrent chemotherapy regimens consisted of cisplatin and docetaxel. Rates of local control (LC), distant metastasis-free survival (DMFS), progression-free survival (PFS), overall survival (OS), and prognosis were compared. The 2-year rates of LC, DMFS, PFS, and OS in 16 patients who underwent palliative CCRT were 44.4%, 12.5%, 12.5%, and 18.8%, respectively. Univariate analysis showed that palliative CCRT was associated with poor LC (p&lt;0.001), DMFS (p&lt;0.001), PFS (p&lt;0.001), and OS (p&lt;0.001) outcomes in patients who completed CCRT as a preoperative treatment and poor LC (p=0.01), DMFS (p=0.003), PFS (p=0.04), and OS (p=0.004) outcomes in patients who were considered for definitive CCRT. Although there were some long-term survivors, the clinical outcomes of palliative CCRT were significantly inferior to those of the ideal treatments. Therefore, careful determination of the appropriate treatment indications and further studies are warranted. abstract_id: PUBMED:28476443 Phase II study of preoperative concurrent chemoradiotherapy with oxaliplatin for locally advanced esophageal cancer. Background: We investigated preoperative concurrent chemoradiotherapy (CCRT) with oxaliplatin for locally advanced, potentially operative esophageal cancer in this Phase II study. Methods: Between October 2009 and October 2011, 35 consecutive patients with newly diagnosed esophageal cancer clinical stage T3-4, N0-1, M0 were enrolled into this study. One dose of chemotherapy with oxaliplatin (35 mg/m2) on Day 1 and Day 2, leucovorin (200 mg/m2) on Day 1, and 5-fluorouracil [5-FU; 2400 mg/m2 intravenously (i.v.) administered continuously for 48 hours] on Day 1 was administered 2 weeks before preoperative CCRT. During preoperative CCRT, radiation dose of 4500 cGy in 25 fractions was administered to the clinical target volume and 5000 cGy to 5040 cGy in 25 fractions was administered to the gross tumor volume; chemotherapy is administered concomitantly with oxaliplatin (45 mg/m2) on Day 1 of radiation therapy (R/T) every 14 days; 5-FU (400 mg/m2 i.v. bolus for 1 hour) for 5 days on Weeks 1 and 5 of R/T. Operation was performed 4-6 weeks after preoperative CCRT. Acute toxicity profile, overall survival rate, disease-free survival rate, distant metastasis failure-free survival rate, and local recurrence rate were evaluated. Results: Four patients withdrew from the study. The total number of patients in this analysis was 31. The resection rate was 64.5%. The pathologic complete response rate was 15%. The overall median survival was 19.3 months. The 5-year overall survival rate was 37.8%. The 5-year disease-free survival rate was 31.1%. The 5-year distant metastasis failure-free survival rate was 40.7% (50.56% for patients with operation; 27.2% for patients without operation, p=0.0298). The acute toxicities were mild, and no Grade 3 or above hematologic toxicity was noted. There was only one patient with Grade 3 esophagus toxicity. Grade 3 lung toxicity occurred in only three patients. Conclusion: Preoperative chemoradiotherapy with oxaliplatin in the treatment of locally advanced, potentially resectable esophageal cancer is feasible and safe. Answer: The decrease of tumor volume after preoperative concurrent chemoradiotherapy (CCRT) is not directly proportional to stage change. According to a study involving 30 middle and lower rectal adenocarcinoma patients, all tumors showed volume reduction after CCRT, but the downstaging benefits were not in proportion to the size change. Initially larger tumors had higher ratios of volume reduction, and smaller tumors had a higher chance of complete response (CR). The net decrease ratio (NDR) of tumor volume was related to the initial T stage, and the volume of tumor before CCRT (Vpre) was related to the incidence of CR; NDR was related to the downstaging of the N stage (PUBMED:15966201). This suggests that while tumor volume reduction is a common outcome of CCRT, the extent of downstaging, which includes changes in T-stage, N-stage, and overall TNM stage, may be influenced by other factors in addition to the reduction in tumor size.
Instruction: Is there any impact of PET/CT on radiotherapy planning in rectal cancer patients undergoing preoperative IMRT? Abstracts: abstract_id: PUBMED:25790541 Is there any impact of PET/CT on radiotherapy planning in rectal cancer patients undergoing preoperative IMRT? Background/aim: To investigate the effect of positron emission tomography-computed tomography (PET/CT)-based contouring on dosimetric parameters in rectal cancer patients undergoing preoperative intensity-modulated radiation therapy (IMRT). Materials And Methods: Preoperative radiation therapy plans with conformal radiotherapy (CRT) or IMRT were created and examined according to the CT- and PET/CT-based contouring of 20 rectal cancer patients, retrospectively. Results: The target volumes delineated with PET/CT were significantlylarger than the volumes created by CT (P= 0.043). Dose delivered to 98% of the planning target volume was high in IMRT planning contouring with CT and PET/CT compared with CRT planning, but the difference was not statistically significant (P = 0.056). Percent volumes receiving 105% of dose and 110% of dose were low in IMRT planning when compared with CRT (P &lt; 0.0001 and P = 0.044, respectively). The volumes receiving 45 Gy for the small intestine, femur heads, and bladder and the maximum dose received by the bladder were significantly lower in IMRT. Conclusion: We showed that the target volumes created with PET/CT are significantly larger than the target volumes created with CT and that IMRT provides lower radiation exposure to the tumor-free tissues compared to the CRT planning. The dosimetric results primarily favor IMRT planning in rectal cancer patients and consequently present the significant alteration in target volumes. abstract_id: PUBMED:34277406 FDG-PET/CT in the Radiotherapy Treatment Planning of Locally Advanced Anal Cancer: A Monoinstitutional Experience. Aims: Radiotherapy with concurrent 5-fluorouracil/mitomycin-C based chemotherapy has been established as definitive standard therapy approach for anal cancer. Intensity Modulated Radiotherapy (IMRT) leads to a precise treatment of the tumor, allowing dose escalation on Gross Tumor Volume (GTV), with a surrounding healthy tissues sparing. Our study assessed the impact of 18-Fluorodeoxyglucose positron emission tomography (18FDG-PET/CT) on the radiotherapy contouring process and its contribution to lymphatic spread detection, resulting to a personalization of Clinical Target Volume (CTV) and dose prescription. Methods: Thirty-seven patients, with histologically proven squamous cell carcinoma of the anal canal (SCCAC) were analyzed. All patients were evaluated with history and physical examination, trans-anal endoscopic ultrasound, pelvis magnetic resonance imaging (MRI), computed tomography (CT) scans of the chest, abdomen and pelvis and planning 18FDG-PET/CT. The GTV and CTV were drawn on CT, MRI and 18FDG-PET/CT fused images. Results: Thirty-four (91%) out of 37 patients presented lymph nodes involvement, in one or more areas, detected on 18FDG-PET/CT and/or MRI. The 18FDG-PET/CT showed positive lymph nodes not detected on MRI imaging (PET+, MRI-) in 14/37 patients (38%). In 14 cases, 18FDG-PET/CT allowed to a dose escalation in the involved nodes. The 18FDG-PET/CT fused images led to change the stage in 5/37(14%) cases: four cases from N0 to N1 (inguinal lymph nodes) and in one case from M0 to M1 (common iliac lymph nodes). Conclusions: The 18FDG-PET/CT has a potentially relevant impact in staging and target volume delineation/definition in patients affected by anal cancer. In our experience, clinical stage variation occurred in 14% of cases. More investigations are needed to define the role of 18FDG-PET/CT in the target volume delineation of anal cancer. abstract_id: PUBMED:30940428 Positron emission tomography with computed tomography imaging (PET/CT) for the radiotherapy planning definition of the biological target volume: PART 2. Aim: Positron Emission Tomography with Computed Tomography (PET/CT) has been proven to be useful in the definition of Radiotherapy (RT) target volume. In this regard, the present expert review summarizes existing data for pancreas, prostate, gynecological and rectum/anal cancer. Methods: A comprehensive search of published original article was made, based on SCOPUS and PubMed database, selecting the paper that evaluated the role of PET/CT in the definition of RT volume. Results: FDG-PET has an important and promising role for pancreatic cancer. Choline PET/CT could be useful for identifying high-risk volumes for prostate cancer; while PSMA PET/CT is still under evaluation. FDG PET/CT in gynecological cancers has been shown to impact external-beam RT planning. The role of FDG-PET for Gross Tumor volume identification is crucial, representing a useful and powerful tool for anal and rectal cancer. Conclusion: Taken together, molecular and functional imaging approaches offer a major step to individualize radiotherapeutic approach. abstract_id: PUBMED:20626204 Target volume delineation for preoperative radiotherapy of rectal cancer: inter-observer variability and potential impact of FDG-PET/CT imaging. To analyze the inter-observer variability and the potential impact of (18)F-fluorodeoxyglucose (FDG)-positron emission tomography/computed tomography (PET/CT) imaging for target volume delineation in preoperative radiotherapy of rectal cancer. Gross tumor volume (GTV) and clinical target volume (CTV) in 2 cases of rectal cancer were contoured by 10 radiation oncologists, 5 on CT and 5 on PET/CT images. Resulting volumes were analyzed by coefficient of variation (CV) and concordance index (CI). Mean GTV was 120 cc +/- 20.4 cc in case A and 119 cc +/- 35.7 cc in case B. Mean CTV was 723 cc +/- 147.5 cc in case A and 739 cc +/- 195.6 cc in case B. CV was lower and CI was similar or higher across the observers contouring GTV on PET/CT. CTV variability was less influenced by the use of PET/CT. PET/CT may allow reducing inter-observer variability in GTV delineation. abstract_id: PUBMED:29759568 Positron emission tomography and computed tomographic (PET/CT) imaging for radiation therapy planning in anal cancer: A systematic review and meta-analysis. To improve the accuracy of chemoradiation therapy in anal cancer patients PET/CT is frequently used in the planning of radiation therapy. A systematic review was performed to assess impact on survival, quality of life, symptom score, change in target definition and treatment intention. Systematic literature searches were conducted in Medline, EMBASE, the Cochrane Library, and Centre for Reviews and Dissemination. Ten cross-sectional studies were identified. No data were available on survival or quality of life. The summary estimate of the proportion of patients in which PET/CT had an impact on the target definition, was 23% (95% CI 16;33). The corresponding summary estimate of a change in treatment intent from curative to palliative was 3% (95% CI 2;6). Almost one in four patients had a change in target definition, which supports the use of PET/CT in radiation therapy planning, but the consequence regarding survival and quality of life is still uncertain. abstract_id: PUBMED:28557585 PET-CT guided SIB-IMRT combined with concurrent 5-FU/MMC for the treatment of anal cancer. Background: To evaluate local control (LC), survival and toxicity in anal cancer patients treated with intensity-modulated radiation therapy (IMRT) and concurrent chemotherapy at a single institution. Material And Methods: From August 2010 to May 2015, 26 patients were treated at our institution with IMRT and concurrent 5-fluorouracil/mitomycin-C (5-FU/MMC) for localized squamous cell carcinoma of the anal canal (SCCAC). Radiotherapy (RT) with 50.4-60 Gy was delivered with a sequential boost in 31%, and a simultaneous-integrated boost (SIB-IMRT) in 69% of cases. Initial staging was based on PET-CT and MRI. Clinical measures of interest were the influence of PET-CT on staging and treatment planning, LC, disease free survival (DFS), overall survival (OS), colostomy free survival (CFS) and toxicities. Results: Median age was 61 years, 22 patients (85%) were female, and no patient was HIV-positive. The proportion of patients with stage I, II, IIIA and IIIB disease was 15%, 35%, 23% and 27%, respectively. PET-CT modified the extent of nodal disease in 9/23 cases (39%) and lead to major changes in treatment planning in 4/23 patients (17%). MRI was more accurate at identifying T4 disease. RT was delivered at full dose in 26 patients (100%) and chemotherapy in 22/26 patients (85%). Two patients (7.7%) required RT breaks. Median follow-up was 35 months [IQR: 19-52]. The 2-year LC, DFS, OS and CFS were 100%, 100%, 100% and 92%. Acute grade ≥3 dermatitis and diarrhea occurred in 73% and 8% of cases, respectively. Grade 3-4 neutropenia was seen in 10/23 patients (43%). Four patients (15%) developed chronic grade 2 GI toxicity. Conclusions: PET-CT provided additional information leading to major changes in treatment planning for 17% of patients. Considering our excellent outcomes, routine use of PET-CT as standard staging modality and IMRT planning procedure appears justified for patients with SCCAC. abstract_id: PUBMED:19370430 Impact of (18)F-FDG-PET/CT on staging and irradiation of patients with locally advanced rectal cancer. Purpose: To investigate the impact of fluorodeoxyglucose-positron emission tomography/computed tomography (FDG-PET/CT) on planning of neoadjuvant radiotherapy for locally advanced rectal cancer (LARC) patients. Patients And Methods: From January 2003 to December 2007, a total of 36 patients with LARC underwent a retroprospective PET/CT study for radiotherapy-planning purposes. Gross tumor volume (GTV), clinical target volume (CTV) and planning target volume (PTV) were defined in a retrospective analysis by a blinded reader. The hypothetical boost volume was defined primarily on CT alone, and afterwards on the fused PET/CT dataset. The CT- and PET/CT-based GTVs were quantitatively compared and percentage of overlap (OV%) was calculated and analyzed. The impact of PET/CT on radiation treatment planning and overall patient management was evaluated. Results: PET/CT-GTVs were smaller than CT-GTVs (p &lt; 0.05). PET/CT imaging resulted in a change of overall management for three patients (8 %). In 16 of 35 patients (46 %), PET/CT resulted in a need for modification of the usual target volumes (CT-PTV) because of detection of a geographic miss. Conclusion: FDG-PET/CT had significant impact on radiotherapy planning and overall treatment of patients with LARC. abstract_id: PUBMED:34858808 The Value of 18F-FDG PET/CT and Abdominal PET/MRI as a One-Stop Protocol in Patients With Potentially Resectable Colorectal Liver Metastases. Purpose: The aim of this study was to evaluate the clinical value of simultaneous positron emission tomography/computed tomography (PET/CT) and abdominal positron emission tomography/magnet resonance imaging (PET/MRI) in the detection of liver metastases and extrahepatic disease (EHD) in patients with potentially resectable colorectal liver metastases (CLM). Methods: Fifty-six patients with CLM underwent conventional imaging (chest and abdomen CT, liver contrast-enhanced CT or MRI) and PET imaging [fluorine-18 fluorodeoxyglucose (18F-FDG) PET/CT and subsequent liver PET/MRI] for staging or restaging. Diagnostic ability of PET imaging was compared with conventional imaging. Abnormal findings were correlated with follow-up imaging and/or histology. The influence of the PET imaging findings was categorized for each patient in relation to operability and other significant findings. The clinical management included three modalities (surgery for resectable CLM, unresectable CLM with conversion treatment, and systemic therapy). The clinical impact of the imaging modality was analyzed. The operative histopathological analysis and/or imaging follow-up were performed as the standard of reference. Results: This study enrolled a total of 56 patients (median age 60 years, 62.5% were male, 36 with colon cancer and 20 with rectal cancer). For EHD detection, PET/CT detected more EHD than conventional imaging (60.7% vs. 46.4%). PET/CT had different findings in 19 (33.9%) patients, including downstaging in 4 (7.1%) patients and upstaging in 15 (26.8%) patients. For liver lesion detection, PET/MRI showed comparable detection ability with CE-MRI and CE-CT (99.5%, 99.4%, and 86.5%, respectively) based on lesion analysis, much higher than PET/CT (47.5%). PET imaging had a major impact in 10/56 (17.9%) patients (4 from unresectable to resectable, 6 from resectable to unresectable) and a minor impact in 4/56 (7.1%) patients for changing the surgery extent. The therapeutic strategies had been altered in a total of 14/56 patients (25%) after PET/CT and PET/MRI scans. Conclusion: The results of this study indicate that simultaneous 18F-FDG PET/CT and abdominal PET/MRI scans can provide accurate information regarding CLM status and EHD, and can affect the management of 25% of the patients by changing the therapeutic strategies determined by conventional imaging. This new modality may serve as a new one-stop method in patients with potentially resectable CLM. abstract_id: PUBMED:30151260 Neoadjuvant PET and MRI-based intensity modulated radiotherapy leads to less toxicity and improved pathologic response rates in locally advanced rectal cancer. Background: Neoadjuvant chemoradiation (NeoCRT) is standard of care for the treatment of locally advanced rectal cancer (LARC). Contemporary radiation techniques and pre-treatment imaging may impact toxicities and pathologic response (PR). Herein we compare intensity modulated radiotherapy (IMRT) and advanced pre-treatment imaging in the neoadjuvant treatment of LARC and resulting impact on toxicities and pathologic outcomes relative to 3 dimensional conformal radiotherapy (3DCRT). Methods: LARC patients treated at 4 large academic centers in the US from 2007-2016 were reviewed. Patients received 5-FU-based NeoCRT concurrently with IMRT or 3DCRT. PR was recorded as none, partial, or complete. Common terminology for adverse events version 4 was used to grade toxicities. Toxicity rates were compared using Chi-square analysis. Multivariable models were fit adjusting for age, gender, pre-tx CT to identify independent predictors of PR and toxicity. Results: A total of 128 patients were analyzed: 60.1% male and 39.8% female, median age 57.7 years (range, 31-85 years). Clinical characteristics were similar across RT groups. The outcome of partial and complete PR was similar for IMRT and 3DCRT (48.1%, 23.1% vs. 31.7%, 23.3%), respectively. After adjusting for gender, age, and pre-RT chemotherapy type, IMRT and pretreatment PET and/or MRI imaging was significantly associated with increased odds for complete and partial response (OR =2.95, 95% CI: 1.21-7.25, P=0.018; OR =14.70, 95% CI: 3.69-58.78, P&lt;0.0001). Additionally, IMRT was associated with reduced rates of dehydration, dermatitis, rectal pain, rectal bleeding, and diverting ostomy (P&lt;0.05). Overall rates of grade 2 and higher toxicities were significantly reduced in IMRT vs. 3DCRT after adjusting for confounders (OR =0.27, 95% CI: 0.08-0.87). Conclusions: NeoCRT IMRT with pretreatment PET and/or MRI for LARC leads to reduced acute toxicities and improved PR compared to 3DCRT. Given the challenges associated with prospective validation of these data, IMRT with pretreatment PET and/or MRI should be considered standard treatment for LARC. abstract_id: PUBMED:32165928 Dose-time fractionation schedules of preoperative radiotherapy and timing to surgery for rectal cancer. Chemoradiotherapy (CRT) is extensively used prior to surgery for rectal cancer to provide significantly better local control, but the radiotherapy (RT), as the other component of CRT, has been subject to less interest than the drug component in recent years. With considerable developments in RT, the use of advanced techniques, such as intensity-modulated radiotherapy (IMRT) in rectal cancer, is garnering more attention nowadays. The radiation dose can be better conformed to the target volumes with possibilities for synchronous integrated boost without increased complications in normal tissue. Hopefully, both local recurrence and toxicities can be further reduced. Although those seem to be of interest, many issues remain unresolved. There is no international consensus regarding the radiation schedule for preoperative RT for rectal cancer. Moreover, an enormous disparity exists regarding the RT delivery. With the advent of IMRT, variations will likely increase. Moreover, time to surgery is also quite variable, as it depends upon the indication for RT/CRT in the clinical practices. In this review, we discuss the options and problems related to both the dose-time fractionation schedule and time to surgery; furthermore, it addresses the research questions that need answering in the future. Answer: Yes, there is an impact of PET/CT on radiotherapy planning in rectal cancer patients undergoing preoperative IMRT. PET/CT-based contouring has been shown to significantly alter the target volumes compared to those created with CT alone, resulting in larger target volumes (PUBMED:25790541). This can lead to a higher dose delivered to the planning target volume, although the difference was not statistically significant in the study mentioned. Additionally, IMRT provides lower radiation exposure to tumor-free tissues compared to CRT planning (PUBMED:25790541). PET/CT imaging has also been reported to reduce inter-observer variability in gross tumor volume (GTV) delineation, which is an important factor in the accuracy and effectiveness of radiotherapy (PUBMED:20626204). Furthermore, PET/CT has been found to have a potentially relevant impact on staging and target volume delineation/definition in patients affected by anal cancer, which is anatomically and clinically related to rectal cancer (PUBMED:34277406). It has led to changes in clinical stage in some cases, which can influence treatment planning and outcomes. Moreover, PET/CT has been shown to impact the overall management of patients with locally advanced rectal cancer (LARC), leading to modifications in the usual target volumes due to the detection of a geographic miss in a significant number of patients (PUBMED:19370430). In the context of anal cancer, PET/CT provided additional information leading to major changes in treatment planning for a subset of patients (PUBMED:28557585). In summary, PET/CT plays a significant role in the radiotherapy planning of rectal cancer patients undergoing preoperative IMRT by influencing target volume delineation, reducing inter-observer variability, and potentially impacting patient management and treatment outcomes.
Instruction: Postoperative respiratory compromise in children with obstructive sleep apnea syndrome: can it be anticipated? Abstracts: abstract_id: PUBMED:35983621 Incidence and Risk Factors Associated with Respiratory Compromise in Planned PICU Admissions Following Tonsillectomy. Objectives: Review the incidence and factors associated with respiratory compromise requiring intensive care unit level interventions in children with planned admission to the pediatric intensive care unit (PICU) following tonsillectomy or adenotonsillectomy (T/AT). Study Design: Retrospective cohort study. Methods: Review of all patients with PICU admissions following T/AT from 2015 to 2020 at a tertiary care pediatric hospital. Patient demographics, underlying comorbidities, operative data, and respiratory complications during PICU admission were extracted. Results: Seven hundred and seventy-two patients were admitted to the PICU following T/AT, age 6.1 ± 4.6 years. All children were diagnosed with obstructive sleep apnea or sleep-disordered breathing (mean pre-operative apnea-hypopnea index 29 ± 26.5 and O2 nadir 77.1% ± 11.1). Neuromuscular disease, enteral feed dependence, and obesity were common findings (N = 240 (31%), N = 106 (14%), and N = 209 (27%) respectively). Overall, 29 patients (3.7%) developed respiratory compromise requiring PICU-level support, defined as new-onset continuous or bilevel positive airway pressure support (n = 25) or reintubation (n = 9). Three patients were diagnosed with pulmonary edema. Multivariable regression analysis demonstrated pre-operative oxygen nadir and enteral feed dependence were associated with respiratory compromise (OR = 0.97, 95% CI 0.94-0.99, P = .04; OR = 6.3, 95% CI 2.36-52.6, P = .001 respectively). Conclusions: Our study found respiratory compromise in 3.7% of patients with planned PICU admissions following T/AT. Oxygen nadir and enteral feeds were associated with higher respiratory compromise rates. Attention should be given to these factors in planning for post-operative disposition. abstract_id: PUBMED:8165079 Postoperative respiratory compromise in children with obstructive sleep apnea syndrome: can it be anticipated? Objective: The aim of this research was to describe the postoperative respiratory complications after tonsillectomy and/or adenoidectomy (T and/or A) in children with obstructive sleep apnea syndrome (OSAS), to define which children are at risk for these complications, and to determine whether continuous positive airway pressure (CPAP) is an effective strategy for dealing with these complications. Methods: The data for this study were gathered through a retrospective chart review of all children 15 years of age or younger with polysomnographically (PSG) proven OSAS who had a T and/or A at Hennepin County Medical Center between January 1985 and September 1992. Particular attention was paid to factors that contributed to the OSAS, postoperative respiratory complications, and intervention strategies for dealing with these complications. Results: The charts of 37 children with OSAS documented by preoperative PSG who later had a T and/or A were reviewed retrospectively. Ten of these children had significant postoperative respiratory compromise secondary to OSAS that prolonged their hospital stay from 1 to 30 days and caused symptoms ranging from O2 desaturation &lt; 80% to respiratory failure. These children were younger and had significant associated medical problems that contributed to or resulted from their OSAS in addition to large tonsils and adenoids. The associated medical problems included craniofacial anomalies, hypotonia, morbid obesity, previous upper airway trauma, cor pulmonale, and failure to thrive. The children with postoperative respiratory complications also had more severe apnea on their preoperative PSG. One child had a uvulopalatopharyngoplasty (UPPP) in addition to the T &amp; A. Taken together, the history, physical and neurological examination, and the PSG were able to identify successfully the children who subsequently developed respiratory compromise secondary to OSAS after a T and/or A. Nasal continuous positive airway pressure (CPAP) and bilevel CPAP was used successfully to manage the preoperative and/or postoperative upper airway obstruction in five of these children. Conclusions: Based on these findings, overnight observation is recommended with an apnea monitor and oximeter for patients undergoing a T and/or A who have OSAS and meet any of the following high-risk clinical criteria: (1) &lt; 2 years of age, (2) craniofacial anomalies affecting the pharyngeal airway particularly midfacial hypoplasia or micro/retrognathia, (3) failure to thrive, (4) hypotonia, (5) cor pulmonale, (6) morbid obesity, and (7) previous upper airway trauma; or high-risk PSG criteria: (1) respiratory distress index (RDI) &gt; 40 and (2) SaO2 nadir &lt; 70%; or undergoing a UPPP in addition to the T and/or A. Nasal CPAP/bilevel CPAP can be used to manage the preoperative and/or postoperative upper airway obstruction in patients with OSAS undergoing a T and/or A. abstract_id: PUBMED:12960542 Urgent adenotonsillectomy: an analysis of risk factors associated with postoperative respiratory morbidity. Background: The aim of this study was to determine the frequency and type of respiratory complications after urgent adenotonsillectomy (study group) for comparison with a control group of children undergoing a sleep study and adenotonsillectomy for obstructive sleep apnea syndrome. A second aim was to assess risk factors predictive of respiratory complications after urgent adenotonsillectomy. Methods: The perioperative course of children who underwent adenotonsillectomy between January 1, 1999, and March 31, 2001, was reviewed. Two groups of children were identified from two different databases: the hospital database for surgical procedures (the study group) and the sleep laboratory database (the control group). The retrospective chart review focused on the preoperative status (including an evaluation for obstructive sleep apnea), anesthetic management, and need for postoperative respiratory interventions. Results: A total of 64 consecutive cases for urgent adenotonsillectomy were identified, and 54 children met the inclusion criteria. Thirty-three children (60%) had postoperative respiratory complications necessitating a medical intervention; 11 (20.3%) required a major intervention (reintubation, ventilation, and/or administration of racemic epinephrine or Ventolin), and 22 (40.7%) required a minor intervention (oxygen administration). Six children (11.1%) required reintubation in the recovery room for respiratory compromise. Risk factors for respiratory complications were an associated medical condition (odds ratio, 8.15; 95% confidence interval, 1.81-36.73) and a preoperative saturation nadir less than 80% (odds ratio, 5.54; 95% confidence interval, 1.15-26.72). Sixteen (49%) of the medical interventions were required within the first postoperative hour. Atropine administration, at induction, decreased the risk of postoperative respiratory complications (odds ratio, 0.18; 95% confidence interval, 0.11-1.050. Control Group: Of 75 children who underwent a sleep study and adenotonsillectomy, 44 had sleep apnea and were admitted to hospital after elective adenotonsillectomy. Sixteen (36.4%) children had postoperative respiratory complications necessitating a medical intervention. Six percent of the children (n = 3) required a major medical intervention. No child required reintubation for respiratory compromise. Conclusions: Severe obstructive sleep apnea syndrome and an associated medical condition are risk factors for postadenotonsillectomy respiratory complications. Risk reductions strategies should focus on their assessment. abstract_id: PUBMED:1503720 Respiratory compromise after adenotonsillectomy in children with obstructive sleep apnea. A retrospective study of pediatric patients with obstructive sleep apnea who underwent adenotonsillectomy between 1987 and 1990 was undertaken to determine the frequency of postoperative respiratory compromise and to determine if risk factors for its development could be identified. Sixty-nine patients less than 18 years old had polysomnographically documented obstructive sleep apnea and were observed postoperatively in the pediatric intensive care unit. Of these, 16 (23%) had severe respiratory compromise, defined as intermittent or continuous oxygen saturation of 70% or less, and/or hypercapnia, requiring intervention. Compared with patients without respiratory compromise, these patients were younger (3.4 +/- 4 vs 6.1 +/- 4 years) and had more obstructive events per hour of sleep on the polysomnogram (49 +/- 41 vs 19 +/- 30). They were more likely to weight less than the fifth percentile for age (odds ratio [OR], 5.1; 95% confidence interval [CI], 1.4 to 18.7), to have an abnormal electrocardiogram and/or echocardiogram (OR, 4.5; 95% CI, 1.3 to 15.1), and to have a craniofacial abnormality (OR, 6.2; 95% CI, 1.5 to 26). Multiple logistic regression analysis revealed the most significant risk factors were age below 3 years and an obstructive event index greater than 10. Children with obstructive sleep apnea are at risk for respiratory compromise following adenotonsillectomy; young age and severe sleep-related upper airway obstruction significantly increase this risk. We recommend in-hospital postoperative monitoring for children undergoing adenotonsillectomy for obstructive sleep apnea. abstract_id: PUBMED:19961114 Postoperative respiratory problems in morbidly obese patients. Morbid obesity results in a restrictive pulmonary syndrome including decreased functional residual capacity. General anaesthesia further decreases functional residual capacity, and consequently alters gas exchanges more profoundly in morbidly obese patients than in nonobese patients. Moreover, these changes persist longer during the postoperative period, rendering obese subjects vulnerable to postoperative respiratory complications. In this review, we present postoperative measures improving respiratory function of these patients. Whether these measures affect outcome remains however unknown. Patients suffering from obstructive sleep apnoea syndrome deserve special considerations that are briefly described. Finally, the algorithm of the postoperative respiratory management of morbid obese patients used in our institution is provided. abstract_id: PUBMED:31331649 Characterisation and monitoring of postoperative respiratory depression: current approaches and future considerations. Respiratory depression is common in patients recovering from surgery and anaesthesia. Failure to recognise and lack of timely institution of intervention can lead to catastrophic cardiorespiratory arrest, anoxic brain injury, and mortality. Opioid-induced respiratory depression (OIRD) is a common and often under-diagnosed cause of postoperative respiratory depression. Other causes include residual anaesthesia, residual muscle paralysis, concurrent use of other sedatives, splinting from inadequate pain control, and obstructive sleep apnoea. Currently used methods to identify and monitor respiratory safety events in the post-surgical setting have serious limitations leading to lack of universal adoption. New tools and technologies currently under development are expected to improve the prediction of respiratory depression especially in patients requiring opioids to alleviate acute postoperative pain. In this narrative review, we discuss the various causes of postoperative respiratory depression, and highlight the advances in monitoring and early recognition of patients who develop this condition with an emphasis on OIRD. abstract_id: PUBMED:30035782 Respiratory Depression: A Case Study of a Postoperative Patient With Cancer. Respiratory depression is a complication that often follows cancer-related surgery and can result in life-threatening consequences. Oncology nurses caring for postoperative patients must monitor for respiratory complications and implement measures to prevent respiratory depression. A review of postoperative respiratory depression risk factors, clinical findings, and monitoring will be presented in this article using a case study. abstract_id: PUBMED:23429163 Postoperative respiratory muscle dysfunction: pathophysiology and preventive strategies. Postoperative pulmonary complications are responsible for significant increases in hospital cost as well as patient morbidity and mortality; respiratory muscle dysfunction represents a contributing factor. Upper airway dilator muscles functionally resist the upper airway collapsing forces created by the respiratory pump muscles. Standard perioperative medications (anesthetics, sedatives, opioids, and neuromuscular blocking agents), interventions (patient positioning, mechanical ventilation, and surgical trauma), and diseases (lung hyperinflation, obesity, and obstructive sleep apnea) have differential effects on the respiratory muscle subgroups. These effects on the upper airway dilators and respiratory pump muscles impair their coordination and function and can result in respiratory failure. Perioperative management strategies can help decrease the incidence of postoperative respiratory muscle dysfunction. Such strategies include minimally invasive procedures rather than open surgery, early and optimal mobilizing of respiratory muscles while on mechanical ventilation, judicious use of respiratory depressant anesthetics and neuromuscular blocking agents, and noninvasive ventilation when possible. abstract_id: PUBMED:29033358 Predictive factors of early postoperative respiratory complications after tonsillectomy in children with unidentified risks for this complication. Introduction: Tonsillectomy is considered as a therapeutic option in obstructive sleep apnoea syndrome (OSAS). Postoperative respiratory failure is a complication that can require respiratory support. The main objective of our study is to determine risk factors of postoperative respiratory complications in children undergoing tonsillectomy. Material And Methods: This is a retrospective single centre observational study including patients with unanticipated postoperative respiratory failure. Patients with a planned preoperative intensive care admission were excluded (age is lower than 2 years, overweight (&gt;95% percentile of BMI), moderate or severe asthma, major medical conditions). Those patients were compared with randomly selected control patients. Factors studied were: age, weight, indication of surgery, ASA status, preoperative illness conditions, durations of surgery and anaesthesia and administered medications. Statistics used a univariate analysis and a multivariate logistic regression. Results: Eight hundred and five patients underwent adenotonsillectomy during the study period and 25 developed postoperative respiratory failure. These patients were compared to 103 non-complicated control patients. Age (&lt;4 years), weight (&lt;18kg), indication of surgery (as SOAS), laryngomalacia, stable and minor congenital cardiac malformation and duration of anaesthesia were found statistically associated. Multivariate analysis found that weight &lt;18kg is a risk factor associated with the occurrence of postoperative respiratory failure. Overall the model shows a strong accuracy with an area under the curve of ROC analysis of 0.9 [95% confidence interval: 0.85-0.95]. Discussion: Our study found that weight &lt;18kg is a major risk factor for predicting a postoperative respiratory complication. abstract_id: PUBMED:28303676 Differences in respiratory arousal threshold in Caucasian and Chinese patients with obstructive sleep apnoea. Background And Objective: Ethnic differences in obstructive sleep apnoea (OSA) phenotype may not be limited to obesity and craniofacial factors. The aims of the study were to (i) compare the proportion of Caucasians and Chinese patients with a low respiratory arousal threshold (ArTH) and (ii) explore the influence of anatomical compromise on ArTH. Methods: Interethnic comparison was conducted between cohorts of Caucasian and Chinese patients from specialist sleep disorder clinics. Polysomnography and craniofacial photography were performed. A low respiratory ArTH was determined by an ArTH score of 2 or above (one point for each: apnoea-hypopnoea index (AHI) &lt; 30/h, nadir oxygen saturation (SaO2 ) &gt; 82.5%, fractions of hypopnoeas &gt; 58.3%). Anatomical compromise was stratified according to the photographic face width measurement. Results: A total of 348 subjects (163 Caucasians and 185 Chinese) were analysed. There was a significantly lower proportion of Chinese patients with moderate-severe OSA (AHI ≥ 15) who had a low ArTH (28.4% vs 48.8%, P = 0.004). This difference remained significant among those with severe OSA (AHI ≥ 30) (2.6% vs 17.1%, P = 0.02). The proportion of moderate-severe OSA Caucasians with a low ArTH was significantly less in those with severe anatomical compromise (36.6% vs 61.0%, P = 0.03), whereas there was no difference in Chinese patients (25.5% vs 31.5%, P = 0.49). Conclusion: Compared to Caucasians with severe OSA, a low respiratory ArTh appears to be a less common pathophysiological mechanism in Chinese patients. Caucasians with less severe anatomical compromise exhibit evidence of a lower ArTh, an association which is absent in Chinese patients. Our data suggest that OSA mechanisms may vary across racial groups. Answer: Postoperative respiratory compromise in children with obstructive sleep apnea syndrome (OSAS) can indeed be anticipated based on various risk factors identified in several studies. Children with OSAS who undergo tonsillectomy and/or adenoidectomy (T and/or A) are at risk for postoperative respiratory complications. Factors that contribute to an increased risk of these complications include younger age, significant associated medical problems (such as craniofacial anomalies, hypotonia, morbid obesity, previous upper airway trauma, cor pulmonale, and failure to thrive), and more severe apnea on preoperative polysomnography (PSG) (PUBMED:8165079). Specifically, a respiratory distress index (RDI) greater than 40 and an SaO2 nadir less than 70% on PSG are high-risk criteria (PUBMED:8165079). A study found that 3.7% of patients with planned pediatric intensive care unit (PICU) admissions following tonsillectomy or adenotonsillectomy (T/AT) developed respiratory compromise requiring PICU-level support. Factors associated with higher rates of respiratory compromise included a lower pre-operative oxygen nadir and enteral feed dependence (PUBMED:35983621). Another study identified severe OSAS and an associated medical condition as risk factors for postadenotonsillectomy respiratory complications. It was noted that risk reduction strategies should focus on the assessment of these factors (PUBMED:12960542). In a retrospective study, children with obstructive sleep apnea who experienced severe respiratory compromise postoperatively were younger, had more obstructive events per hour of sleep, and were more likely to weigh less than the fifth percentile for age, have an abnormal electrocardiogram and/or echocardiogram, and have a craniofacial abnormality (PUBMED:1503720). Weight less than 18kg was identified as a major risk factor for predicting postoperative respiratory complications in children undergoing tonsillectomy without identified risks for this complication (PUBMED:29033358). In summary, postoperative respiratory compromise in children with OSAS can be anticipated by considering factors such as age, weight, severity of apnea, associated medical conditions, and preoperative oxygen saturation levels. Close monitoring and appropriate perioperative management strategies are recommended for children identified as high-risk based on these factors.
Instruction: Accounting for differences in transfusion volume: Are all massive transfusions created equal? Abstracts: abstract_id: PUBMED:22695418 Accounting for differences in transfusion volume: Are all massive transfusions created equal? Background: Among patients subjected to massive transfusion (MT), some will require considerably more blood than others, depending on the rate and quantity of hemorrhage. In analyses concerning plasma to red blood cell (RBC) ratios and platelet to RBC ratios, this has yet to be examined. We sought to evaluate the effect of the number of RBC units transfused on both plasma:RBC and platelet:RBC and their association with mortality in MT patients. Methods: Prospective data were collected on trauma patients taken directly to surgery from the resuscitation room who received ≥ 10 RBC units by completion of operation. MT protocol was in place for all patients. To account for survival bias, intra-operative deaths were excluded. Patients were stratified by plasma:RBC and platelet:RBC (HIGH &gt; 0.5, MID 0.33-0.5, LOW &lt; 0.33). Crude and adjusted risk ratios (RRs) for hospital mortality were determined, using the HIGH ratio as the reference group. Results: One hundred thirty-five patients met inclusion criteria. There were no significant differences with respect to demographics, injury characteristics, or shock severity. However, the mean number of intra-operative RBC units transfused was significantly different between plasma:RBC groups (HIGH: 16.2, MID: 19.7, LOW: 25.1; p &lt; 0.001). The crude risk for mortality was significantly higher for the LOW group relative to the HIGH group (RR 1.99, 95% confidence interval [CI] 1.02-3.89). However, after adjustment for the number of RBCs transfused, the risk was not significantly different (RR 1.54, 95% CI 0.75-3.15). The adjusted mortality risk for the LOW versus HIGH platelet:RBC groups was also not statistically different (RR 1.92, 95% CI 0.99-3.71). Conclusions: Among patients subjected to MT, those who receive relatively higher quantities of RBCs are both more likely to receive a lower plasma:RBC and are more likely to die. Any analysis concerning transfusion ratios should take the potential confounding of this heterogeneity among MT patients into account. Level Of Evidence: Prognostic study, level III. abstract_id: PUBMED:7379466 Cost accounting in plateletpheresis: comparison of two techniques. Cost comparison of two comparably effective techniques of plateletpheresis, a manual method and an automated discontinuous flow centrifugation technique, is presented using a hypothetical model. The former procedure costs $62.48 per pheresis for disposables and labour as opposed to $78.32 per pheresis for disposables and labour for the latter. The annual volume of plateletpheresis at which the accumulated costs equal the total charges, i.e. the 'break-even' point, is calculated and found to be 63.7 for the automated technique and 10.9 for the manual method, if the charge for each is $200.00. For the manual method at a current charge of $80.00, the break even point is 85.8. The assumptions underlying this hypothetical model are examined, and the effects of deviation from these assumptions are analyzed in terms of the break even point. Cost accounting of plateletpheresis is shown to be dependent upon the choice of approach to allocation of costs, the assumptions of the cost accounting model, and the selection of an appropriate charge. abstract_id: PUBMED:21610387 All massive transfusion criteria are not created equal: defining the predictive value of individual transfusion triggers to better determine who benefits from blood. Background: As familiarity with military massive transfusion (MT) triggers has increased, there is a growing interest in applying these in the civilian population to initiate MT protocols (MTP) earlier. We hypothesize that these triggers do not have equal predictability for MT and understanding the contribution of each would improve our ability to initiate the MTP earlier. Methods: All patients presenting to a Level I trauma center from October 2007 to September 2008 requiring immediate operation were included in this study. Emergency department records, operative logs, and blood transfusion data from arrival to procedure end were analyzed using multivariate regression techniques. Triggers included systolic blood pressure (SBP) &lt;90 mm Hg, hemoglobin &lt;11 g/dL, temperature &lt;35.5°C, International normalized ratio (INR) &gt;1.5, and base deficit ≥6. Results: One hundred seventy patients required immediate operation with an overall survival of 91%. Transfusion of packed red blood cells was noted in 45% (77 of 170) with the mean number of transfused units highest in those meeting SBP (12.9 Units) or INR (12.3 Units) triggers. The triggers do not contribute equal predictive value for the need for transfusion with INR being the most predictive (odds ratio, 16.7; 95% confidence interval, 2-137) for any transfusion and highly predictive for the need for MT (odds ratio, 11.3; 95% confidence interval, 3-47). In fact, if patients met either INR or SBP triggers alone, they were likely to receive MT (p = 0.018 and 0.003, respectively). Conclusion: Triggers have differential predictive values for need for transfusion. Defining the individual utility of each criterion will help to identify those most likely to benefit from an early initiation of the MTP. abstract_id: PUBMED:28191788 Full-thickness skin mesh graft vaginoplasty: a skin sparing technique. Introduction: The ideal vaginoplasty method should promote good cosmetic and functional results with low morbidity. We describe a new technique for congenital vaginal agenesis using a full-thickness perforated skin graft. Materials And Methods: We report an 18 year old patient with vaginal agenesis (Morris syndrome) that undergone a modified version of McIndoe vaginoplasty. Patient is set in a low lithotomy position and lateral traction sutures are placed in labia and a 16Fr urethral catheter inserted. An inverted "V"-shaped incision is made in the mucosal plaque below the urethra. Blunt dissection in a cephalic posterior direction forms a space between the rectum and urethra. Special care is taken to avoid rectal tear during this maneuver. A full-thickness skin graft is removed from the lower abdomen measuring 12.0x6.0cm as an aesthetic abdominoplasty. The fat tissue is removed, remaining epidermis and dermis and the graft is perforated, allowing a great surface increase. After suturing over a mold, the graft is fixed in the created space. The donor site is closed with intradermal transversal suture. Results: From January 2009 to August 2015, seven patients diagnosed with vaginal agenesis underwent this technique. There were no major complications or need for blood transfusions. At the six-month follow-up, all patients reported satisfactory sexual intercourse. There were no significant complications at donor site or neovagina that needed surgical intervention. Conclusion: Vaginal reconstruction using the perforated graft is viable with excellent functional results. Applying this modification, we yielded the good results of a classic McIndoe technique with lower donor site morbidity. abstract_id: PUBMED:10169784 The principles of quality-associated costing: derivation from clinical transfusion practice. As clinical transfusion practice works towards achieving cost-effectiveness, prescribers of blood and its derivatives must be certain that the prices of such products are based on real manufacturing costs and not market forces. Using clinical cost-benefit analysis as the context for the costing and pricing of blood products, this article identifies the following two principles: (1) the product price must equal the product cost (the "price = cost" rule) and (2) the product cost must equal the real cost of product manufacture. In addition, the article describes a new method of blood product costing, quality-associated costing (QAC), that will enable valid cost-benefit analysis of blood products. abstract_id: PUBMED:31688783 Is all plasma created equal? A pilot study of the effect of interdonor variability. Background: Clinical benefits of plasma as an adjunct for treatment of hemorrhagic shock (HS) have been well established. However, its use is not without risk. Little is understood regarding the clinical implications of plasma variability. We hypothesized there to be interdonor variability in plasma that would impact endothelial and organ function postinjury. Methods: Pulmonary endothelial cells (ECs) were incubated with plasma from 24 random donors, and transendothelial electrical resistance was measured. Plasma units with a more or less protective effect on reducing EC permeability were selected for testing in vivo. Syndecan-1 and cytokines were measured. Mice underwent laparotomy and then HS followed by resuscitation with the selected plasma units and were compared with mice receiving no resuscitation and shams. Lung tissue was sectioned and stained for myeloperoxidase and pulmonary syndecan-1 and scored for lung histopathologic injury. Results: Plasma from 24 donors revealed variability in the reversal of EC monolayer hyperpermeability; transendothelial electrical resistance for the more protective plasma was significantly higher than that for the less protective plasma (0.801 ± 0.022 vs. 0.744 ± 0.035; p = 0.002). Syndecan-1 was also markedly increased in the less protective compared with the more protective plasma (38427 ± 1257 vs. 231 ± 172 pg/mL, p &lt; 0.001), while cytokines varied. In vivo, the more protective plasma mitigated lung histopathologic injury compared with the less protective plasma (1.56 ± 0.27 vs. 2.33 ± 0.47, respectively; p = 0.005). Similarly, myeloperoxidase was significantly reduced in the more protective compared with the less protective plasma group (2.590 ± 0.559 vs. 6.045 ± 1.885; p = 0.02). Lastly, pulmonary syndecan-1 immunostaining was significantly increased in the more protective compared with the less protective plasma group (20.909 ± 8.202 vs. 9.325 ± 3.412; p = 0.018). Conclusion: These data demonstrate significant interdonor variability in plasma that can adversely influence the protective effects of plasma-based resuscitation on HS-induced lung injury. This may have important implications for patient safety and clinical outcomes. abstract_id: PUBMED:37436379 Effect of Early Equal-Proportional Infusion of Plasma and Red Blood Cells on the Prognosis of Emergency Patients with Traumatic Hemorrhage. Background: The goal was to study the effect of early equal-proportion transfusion on the prognosis of trauma patients with bleeding. Methods: Emergency hospital trauma patients were randomly divided into two groups, a group based on assessment of blood consumption (ABC) to assess whether need to start the massive blood transfusion patients, such as proportion of blood transfusion (fresh frozen plasma: suspended red blood cells = 1:1), and the other group using traditional methods of blood transfusion, namely according to routine blood and clotting function and hemodynamic parameters, to decide when and what blood constituents should be transfused. Results: The coagulation got better in the early equal-proportion transfusion group, there were significant differences of PT and APTT (p &lt; 0.05). The amount of 24 hours RBC and plasma transfusion was decreased in the early equal-proportion transfusion group, compared to the control group (p &lt; 0.05), the length of ICU stay was shortened, the 24-hours SOFA score was improved, and there was no significant difference in 24-hours mortality, in-hospital mortality and total length of in-hospital stay (p &gt; 0.05). Conclusions: Early transfusion can reduce the total amount of blood transfusion and shorten ICU time, but has no significant effect on mortality. abstract_id: PUBMED:32234241 All trauma is not created equal: Redefining severe trauma for combat injuries. Injury severity scores (ISS) and shock index (SI) are popular trauma scoring systems. We assessed ISS and SI in combat trauma to determine the optimal cut-off values for mortality and trauma outcomes. Retrospective analysis of the Department of Defense Trauma Registry, 2008-2016, was performed. Areas under receiver operating characteristic curves (AUROCs) were calculated for ISS and SI on mortality, massive volume transfusion (MVT), and emergent surgical procedure (ESP). Optimal cut-off values were defined using the Youden index (YI). 22,218 patients (97.1% male), median ages 25-29 years, ISS 9.4 ± 0.07, with 58.1% penetrating injury were studied. Overall mortality was 3.4%. AUROCs for ISS on mortality, MVT, and ESP were 0.882, 0.898, and 0.846, while AUROCs for SI were 0.727, 0.864, and 0.711 respectively. The optimal cut-off values for ISS on mortality, MVT, and ESP were 12.5 (YI = 0.634), 12.5 (YI = 0.666), and 12.5 (YI = 0.819), with optimal values for SI being 0.94 (YI = 0.402), 0.88 (YI = 0.608), and 0.81 (YI = 0.345) respectively. Classic values for severe ISS underrepresent combat injury while the SI values defined in this study are consistent with civilian data. abstract_id: PUBMED:17903225 Laparoscopically assisted sigmoid colon vaginoplasty in women with Mayer-Rokitansky-Kuster-Hauser syndrome: feasibility and short-term results. Objectives: To evaluate the technical feasibility and anatomical and functional outcomes of laparoscopically assisted sigmoid colon vaginoplasty (LASV) in women with Mayer-Rokitansky-Kuster-Hauser (MRKH) syndrome. Design: A retrospective review of prospectively collected data. Setting: Shanghai First People's Hospital, Shanghai Jiao Tong University. Population: Twenty-six women with MRKH syndrome. Methods: A record was made of mean operating time, length of hospital stay, perioperative complications and the anatomical and functional outcomes of surgery. Main Outcome Measures: The perioperative results, complications and anatomical and functional outcomes of LASV (with median 20 months follow up, range 5-48 months). Results: The mean operating time and hospital stay were 238 minutes and 9.8 days, respectively. The mean fall in haemoglobin was 2.0 g/dl. The only significant perioperative complications were one case with blood transfusion and three cases with infection (one with urinary tract and two with adjunctive incision). A functioning vagina 10 to 15 cm in length and 4 cm in width was created in all women. Introital stenosis occurred in only two women (2 months later). Twenty-two women subsequently had intercourse and 20 women (91%) were satisfied with the surgery and subsequent sexual activity. Conclusions: LASV is an effective approach for women with MRKH syndrome. Both the anatomical and functional outcomes are satisfactory. abstract_id: PUBMED:24353686 Gender differences in severity of sickle cell diseases in non-smokers. Objective: To find out gender differences in severity of sickle cell diseases (SCDs) in non-smokers. Methods: Three groups of SCDs patients on the basis of red blood cell (RBC) transfusions were included. Less than 10 units in their lives were kept in Group-1, Ten units of higher in Group-2 and 50 units or higher as the Third Group. Patients with a history of using one pack of cigarettes -year or above were excluded. Results: The study included 269 patients. Mean ages of the groups were similar (28.4, 28.5, and 28.9 years, respectively). Prevalences of cases without any RBC transfusion in their lives were 7.2% and 3.7% in females and males, respectively (p&lt;0.05). Prevalences of cases without any painful crisis were 13.8% and 6.0% in females and males, respectively (p&lt;0.001). There was progressive increase according to mean painful crises, clubbing, chronic obstructive pulmonary disease (COPD), leg ulcers, stroke, chronic renal disease (CRD), pulmonary hypertension, and male ratio from the first towards the third groups (p&lt;0.05, nearly for all). Mean ages of mortal cases were 29.1 and 26.2 years in females and males, respectively (p&gt;0.05). Conclusion: The higher painful crises per year, digital clubbing, COPD, leg ulcers, stroke, CRD, pulmonary hypertension, and male ratio of the third group, lower male ratio of patients without any RBC transfusion, lower male ratio of patients without any painful crisis, lower mean ages of male SCDs patients with mortality, and longer overall survival of females in the world could not be explained by well known strong atherosclerotic effects of smoking alone, instead it may be explained by the dominant role of male sex in life. Answer: No, all massive transfusions (MT) are not created equal. The study by PUBMED:22695418 found that among patients subjected to MT, those who receive relatively higher quantities of red blood cells (RBCs) are more likely to receive a lower plasma:RBC ratio and are more likely to die. However, after adjusting for the number of RBCs transfused, the risk for mortality was not significantly different between groups with different plasma:RBC ratios. This suggests that the volume of RBCs transfused is a confounding factor that should be considered in analyses concerning transfusion ratios. Additionally, PUBMED:21610387 highlights that not all MT criteria have equal predictability for the need for transfusion. The study found that certain triggers, such as the International normalized ratio (INR) and systolic blood pressure (SBP), have different predictive values for the need for transfusion, with INR being the most predictive for any transfusion and highly predictive for the need for MT. Furthermore, PUBMED:37436379 indicates that early equal-proportion transfusion can reduce the total amount of blood transfusion and shorten ICU time, but it does not significantly affect mortality. This suggests that the timing and proportion of transfusion components can influence the prognosis of trauma patients with bleeding. Lastly, PUBMED:31688783 demonstrates that there is significant interdonor variability in plasma that can influence the protective effects of plasma-based resuscitation on hemorrhagic shock-induced lung injury. This variability may have important implications for patient safety and clinical outcomes. In summary, the volume of transfusion, the predictive value of MT criteria, the timing and proportion of transfusion components, and interdonor variability in plasma all contribute to the differences in MT, indicating that they are not all created equal.
Instruction: Can magnetic resonance imaging be used to select patients for sentinel lymph node biopsy in prophylactic mastectomy? Abstracts: abstract_id: PUBMED:18257089 Can magnetic resonance imaging be used to select patients for sentinel lymph node biopsy in prophylactic mastectomy? Background: Sentinel lymph node biopsy (SLNB) in the setting of prophylactic mastectomy (PM) remains controversial. In the current study, recent experience with PM was described and the value of preoperative magnetic resonance imaging (MRI) was analyzed in selecting patients for PM with or without SLNB. Methods: Between January 1999 and January 2006, 529 patients underwent 613 PMs. Both preoperative magnetic resonance imaging (MRI) and SLNB were performed selectively at the discretion of the surgeon. Results: Occult cancer was identified in 33 of 613 PMs (5%) (10 invasive and 23 ductal carcinoma in situ cases). PM with SLNB was performed in 393 of 529 patients (74%), 178 of whom underwent MRI. Of these, occult cancer was found in 6 of 178 patients (3%), all of whom had negative SLNB. Preoperative MRI was concordant with PM in 4 of 6 cases with occult carcinoma. The remaining 215 of 393 patients (55%) underwent PM with SLNB without MRI. Occult cancer was found in 18 of 215 patients (8%); 3 had positive SLNB. Overall, PM with SLNB spared 4 of 393 patients (1%) from axillary lymph node dissection (ALND). Among 136 patients undergoing PM alone, 57 had preoperative MRI. MRI detected 5 cancers and PM revealed an additional 4 occult carcinomas not detected by MRI. Overall, 9 of 136 patients (7%) undergoing PM alone were found to have occult cancer, 3 of which were invasive, raising the decision of reoperation with ALND. Conclusions: Occult cancer was identified in 5% of PMs. PM with or without SLNB spared only 4 of 393 patients (1%) from undergoing ALND, whereas PM alone identified unsuspected invasive disease in 3 of 136 patients (2%). When performed, MRI accurately ruled out the presence of an invasive cancer in the prophylactic breast, suggesting that MRI can be used to select patients for PM without SLNB. abstract_id: PUBMED:26748493 Is Sentinel Lymph Node Biopsy Necessary in Patients Undergoing Prophylactic Mastectomy? A Systematic Review and Meta-Analysis. The gain by performing sentinel lymph node biopsy (SLNB) during prophylactic mastectomy (PM) is debatable, and we performed a meta-analysis of existing literature to evaluate that the role of SLNB in subjects undergoing PM. A systematic search was conducted using MEDLINE, PubMed, EMBASE, Current Contents Connect, Cochrane library, Google Scholar, Science Direct, and Web of Science. The search identified 11 relevant articles reporting on patients who underwent SLNB at the time of PM. Data were abstracted from each study and used to calculate a pooled odds ratio (OR) and 95% confidence interval (95% CI). We included 14 studies comprising of 2,708 prophylactic mastectomies. Among 2,708 prophylactic mastectomies, the frequency of occult invasive cancer (51 cases) was 1.8% and the rate of positive SLNs (33 cases) was 1.2%. In 25 invasive cancers at the time of PM were found to have negative SLNs which avoided axillary lymph node dissection (ALND). In seven cases with positive SLNBs were found not to have invasive cancer at the time of PM and needed a subsequent ALND. Most of the patients with positive SLNs had locally advanced disease in the contralateral breast. SLNB may be suitable for patients with ipsilateral, locally advanced breast cancer and is not recommend for all patients undergoing PM. abstract_id: PUBMED:21651834 Meta-analysis of sentinel lymph node biopsy at the time of prophylactic mastectomy of the breast. Background: Prophylactic mastectomy is performed to decrease the risk of breast cancer in women at high risk for the disease. The benefit of sentinel lymph node biopsy (SLNB) at the time of prophylactic mastectomy is controversial, and we performed a meta-analysis of the reported data to assess that benefit. Methods: We searched MEDLINE, EMBASE and the Cochrane Library databases from January 1993 to December 2009 for studies on patients who underwent SLNB at the time of prophylactic mastectomy. Two reviewers independently evaluated all the identified papers, and only retrospective studies were included. We used a mixed-effect model to combine data. Results: We included 6 studies in this review, comprising a total study population of 1251 patients who underwent 1343 prophylactic mastectomies. Of these 1343 pooled prophylactic mastectomies, the rate of occult invasive cancer (21 cases) was 1.7% (95% confidence interval [CI] 1.1%-2.5%), and the rate of positive SLNs (23 cases) was 1.9% (95% CI 1.2%-2.6%). In all, 36 cases (2.8%, 95% CI 2.0%-3.8%) led to a significant change in surgical management as a result of SLNB at the time of prophylactic mastectomy. In 17 cases, patients with negative SLNs were found to have invasive cancer at the time of prophylactic mastectomy and avoided axillary lymph node dissection (ALND). In 19 cases, patients with positive SLNBs were found not to have invasive cancer at the time of prophylactic mastectomy and needed a subsequent ALND. Of the 23 cases with positive SLNs, about half the patients had locally advanced disease in the contralateral breast. Conclusion: Sentinel lymph node biopsy is not suitable for all patients undergoing prophylactic mastectomy, but it may be suitable for patients with contralateral, locally advanced breast cancer. abstract_id: PUBMED:19959044 Indications for sentinel lymph node biopsy in the setting of prophylactic mastectomy. Background: Bilateral/contralateral prophylactic mastectomy (PM) is offered to high-risk women to decrease their actual or perceived breast cancer risk. When an invasive occult cancer is identified, prevailing wisdom suggests that an axillary dissection be performed. This single-institution study aims to identify patients who may benefit from sentinel node biopsy (SLNB) at the time of prophylactic mastectomy. Study Design: We performed a retrospective review of a prospective database of patients treated at our institution with bilateral/contralateral PM between 1995 and 2006. We examined patients' clinicopathologic characteristics in comparison with their incidence of occult cancer in the contralateral breast or axilla. Results: There were 449 patients who underwent PM and SLNB. Twenty-eight underwent bilateral PM. Of the 28, no occult cancers were identified. Occult cancers were identified in 18 of 420 (4.3%) contralateral prophylactic specimens; they were invasive in 6 (1.4%). In cases of occult carcinoma, the primary established tumor was more likely to be of invasive lobular histology. Eight of 420 (2%) patients had a positive contralateral sentinel node, and within this subset of 8 patients the majority had locally advanced disease on the known tumor side. Other features associated with a positive contralateral sentinel node included the presence of lymphovascular involvement or skin or nipple involvement and grade 2 to 3 invasive primary established tumors. Conclusions: Overall, SLNB in patients undergoing bilateral PM or contralateral PM associated with early-stage disease is not indicated. But patients with locally advanced primary breast cancers have a significantly increased risk of occult cancer in the contralateral axilla, likely due to crossover metastasis; this select group of patients may benefit from SLNB at the time of surgery. abstract_id: PUBMED:22213101 Routine prophylactic sentinel lymph node biopsy is not indicated in women undergoing prophylactic mastectomy. Background: Prophylactic mastectomies (PM) are performed to reduce the risk of breast cancer. Occasionally an occult carcinoma is found in PM specimens. Given the high morbidity of axillary lymph node dissection (ALND), some perform prophylactic sentinel lymph node biopsy (SLNB). We undertook a study to examine if prophylactic SLNB is indicated in all patients undergoing PM. Methods: A retrospective review of all PM between 2004 and 2010 was performed. The stage of tumor on the disease side and the pathologic findings in the prophylactic breast were analyzed. The number of SLN and the frequency of lymph node metastases were evaluated. Results: A total of 199 PM on 184 patients were performed: 169 contralateral PM and 30 bilateral PM. Of the 199 PM, 12 had occult carcinomas (6.0%): 10 non-invasive, 1 microinvasive and 1 T1b invasive tumor. 153 of 199 PM specimens had prophylactic SLNB, of which only 2 had a positive SLN that originated from the index side of the breast. Conclusions: Although it is not uncommon to find occult carcinomas in the prophylactic breast, it is rare for the occult carcinoma to spread to the lymph nodes. Therefore, routine prophylactic SLNB is not indicated in patients undergoing PM. abstract_id: PUBMED:11113434 The role of sentinel lymph node biopsy in women undergoing prophylactic mastectomy. Background: Indications for prophylactic mastectomy (PM) range from LCIS to BRCA 1-2 positive, cosmesis, and cancer phobia. Occult cancers have been found in up to 5% of PM cases. Consequently, consideration must be given to the role of sentinel lymph node (SLN) biopsy as a diagnostic procedure in these patients as PM excludes the subsequent option of SLN biopsy. Methods: From April 1994 to November 1999, all patients undergoing PM had SLN biopsy after four quadrant periareolar injections of radiocolloid (450 mci) and blue dye (5 cc). All patients were prospectively accrued to the computerized database of breast patients. The SLN were all evaluated with hematoxylin and eosin (H&amp;E) as well as CAM5.2 cytokeratin immunohistochemical (CK-IHC) stains. Results: Over a 67-month period, 1,356 patients were mapped; 57 patients underwent PM in which 148 nodes (2.6 nodes per patient) were evaluated. Nodes were examined by routine H&amp;E and CK-IHC staining. Two patients, neither of whom was found to have a cancer in the prophylactic mastectomy breast, were found to have a positive SLN by CK-IHC staining. Infiltrating carcinoma was discovered within the PM breasts of 2 additional patients. Sentinel lymph node biopsy was negative for malignancy by H&amp;E as well as CK-IHC stains. No lymphedema has been detected in PM patients. Conclusions: Sentinel node biopsy has been shown to be an accurate and minimally invasive method of evaluating the lymphatic basin. This study shows that the absence of known disease within the breast does not preclude the presence of occult cancer or metastatic nodal disease. Four patients (7%) had a significant change in their surgical management as a direct result of sentinel lymph node biopsy. Two patients were spared the complications of a complete axillary node dissection. This minimally invasive procedure accurately evaluated the known disease status and provided new diagnostic information. Most important, once a mastectomy is performed, the opportunity for SLN biopsy is lost should a cancer be found within the breast specimen. abstract_id: PUBMED:36575101 Is Sentinel Lymph Node Biopsy Necessary in Patients who Undergo Prophylactic Mastectomy? Background: Over the last decade, rates of risk-reducing prophylactic mastectomy (PM) have risen dramatically. A topic of debate regarding the procedure is whether to use sentinel lymph node biopsy (SLNB) at the time of PM and what factors might predict for such need. In order to assess the rate of identifying presence of occult invasive breast cancer in the PM specimen, we performed a retrospective review of the pathology findings from a single-surgeon case-series of PM. Methods: Patients undergoing PM between January 2013 and June 2019 at Orlando Health Cancer Institute were identified for a retrospective chart review. Demographic, clinical, and histopathological data from the surgical procedure were collected and analyzed for the incidence of occult invasive breast cancer in the PM specimen. Results: A total of 146 consecutive patients with PM were identified; 120 (82.2%) underwent contralateral PM (CPM) and 26 (17.8%) underwent bilateral PM (BPM). Final pathology of the 172 PM specimens identified 4 (3.3%) with lobular carcinoma in situ, 3 (2.5%) with atypical ductal hyperplasia, and 2 (1.7%) with atypical lobular hyperplasia and 2 (1.7%) with intraductal papilloma. No invasive malignancy was detected in any of the 172 PM specimens. Conclusions: The absence of occult invasive carcinoma in 172 consecutive PM specimens suggests a limited clinical utility in routinely performing SLNB in this setting. This study also suggests that use of preoperative breast MRI imaging could offer a potential non-invasive tool to detect occult malignancy and select patients who can safely undergo omission of SLNB at the time of PM. abstract_id: PUBMED:24535940 Sentinel lymph node biopsy during prophylactic mastectomy: is there a role? Purpose: We investigated rates of occult malignancy in the breast and sentinel lymph nodes (SLNs) in patients undergoing prophylactic mastectomy (PM) and whether routine sentinel lymph node biopsy (SLNB) is justified. Methods: A retrospective review of our database identified patients undergoing PMs with SLNB. Descriptive statistics were utilized for data summary. A P value of &lt;0.05 was considered significant. Results: There were 384 patients during the study period who underwent 467 PMs. Of the 467 PMs, 15 (3.2%) cancers were found on final pathology. All 6 of the invasive cancers identified were T1. A total of 682 SLNs were taken for an average of 1.46 SLNs per PM. There were 5 positive SLNs (1.1%). All were in patients undergoing contralateral PM for either history of breast cancer or new diagnosis of breast cancer. Only one patient with a positive SLN was found to have an occult breast malignancy in the PM specimen. Conclusion: In 467 PMs performed, 15 (3.2%) occult malignancies were found in the breast and 5 (1.1%) positive SLNs were found. Based on these results, the routine use of SLNB at the time of PM is unnecessary and does not warrant the morbidity associated with this procedure. abstract_id: PUBMED:23865803 Sentinel lymph node biopsy for risk-reducing mastectomy. Risk-reducing mastectomy (RRM) confers 90-95% decreased risk of breast cancer, and may reduce mortality, especially in high-risk groups such as BRCA carriers. Risk of occult disease in RRM specimen is ~5%. This demands axillary staging: sentinel lymph node (SLN) biopsy is no longer possible, axillary clearance confers significant risks and may prove negative. Contemporaneous SLN biopsy allows axillary staging with minimal further dissection. Women undergoing RRM and SLN biopsy between June 2005 and July 2010 were reviewed retrospectively from our prospectively maintained database of 1,522 SLN procedures in 1,498 patients. SLN(s) localized using routine tracer methods. SLNs and mastectomy specimens underwent routine histologic examination. Eighty-three RRMs with SLN biopsy were performed in 71 patients (12 bilateral). Indications for RRM: contralateral invasive (55), in situ (5) disease, BRCA 1/2 mutation (12), and strong family history (10). Mean number of SLNs: 1.35. Occult disease was detected in four cases (4.8%), with one case of occult invasive lobular carcinoma (1.2%). Remaining occult disease was lobular in situ neoplasia (LISN). SLNs were negative in all cases. Our findings are comparable to those in the literature: 4.8% rate of occult disease overall, 1.2% invasive. The significant risk with SLN biopsy is lymphoedema, quoted around 7%. We have had no reports of symptomatic lymphoedema in patients undergoing RRM and SLN biopsy. We propose that SLN at the time of mastectomy requires only limited further dissection, and confers minimal risk compared with secondary axillary surgery. abstract_id: PUBMED:20618969 Axillary sentinel lymph node biopsy after mastectomy: a case report. Background: Sentinel lymph node biopsy has been established as the preferred method for staging early breast cancer. A prior history of mastectomy is felt to be a contraindication. Case Presentation: A patient with recurrent breast cancer in her skin flap was discovered to have positive axillary sentinel nodes by sentinel lymph node biopsy five years after mastectomy for ductal carcinoma in situ. Conclusion: A prior history of mastectomy may not be an absolute contraindication to sentinel lymph node biopsy. Answer: Magnetic resonance imaging (MRI) has been evaluated for its utility in selecting patients for sentinel lymph node biopsy (SLNB) in the context of prophylactic mastectomy (PM). The findings from the studies suggest that MRI can play a role in this decision-making process. In one study, preoperative MRI was found to be concordant with PM findings in 4 out of 6 cases with occult carcinoma, indicating that MRI can accurately rule out the presence of invasive cancer in the prophylactic breast. This suggests that MRI can be used to select patients for PM without SLNB, as it spared only a small percentage (1%) of patients from undergoing axillary lymph node dissection (ALND) when SLNB was performed (PUBMED:18257089). Another study concluded that SLNB may be suitable for patients with ipsilateral, locally advanced breast cancer but is not recommended for all patients undergoing PM. This study also found that most patients with positive SLNs had locally advanced disease in the contralateral breast, which could influence the decision to perform SLNB (PUBMED:26748493). A meta-analysis also supported the notion that SLNB is not suitable for all patients undergoing prophylactic mastectomy but may be suitable for patients with contralateral, locally advanced breast cancer (PUBMED:21651834). Furthermore, a single-institution study suggested that SLNB in patients undergoing bilateral PM or contralateral PM associated with early-stage disease is not indicated. However, it may be beneficial for patients with locally advanced primary breast cancers due to the increased risk of occult cancer in the contralateral axilla (PUBMED:19959044). In summary, while MRI can be a useful tool in selecting patients for PM without SLNB, the decision to perform SLNB should be individualized based on factors such as the presence of locally advanced disease and the findings of preoperative MRI. Routine prophylactic SLNB is not indicated for all patients undergoing PM, and its use should be considered in the context of the patient's overall risk profile and imaging results (PUBMED:22213101).
Instruction: Is chronic hepatitis C treatment as efficient in the general population as in randomised trials? Abstracts: abstract_id: PUBMED:14586246 Is chronic hepatitis C treatment as efficient in the general population as in randomised trials? Aims: The aim of this population-based study was to determine the effectiveness of antiviral therapy in non selected chronic hepatitis C patients. Methods: The study was performed in all new patients with anti-HCV antibodies who had registered in a specialised viral hepatitis registry since 1994 and who lived in the French departments of Côte-d'Or and Doubs (1 005 817 inhabitants). Results: 1251 of the 1508 recorded cases were studied; 262 were treated. Results of treatment were available in 238 cases, 157 were treated with interferon alone and 81 with interferon + ribavirine. Sustained virological response was observed in 18% of cases after interferon alone and 46.9% after interferon + ribavirine. In multivariate analysis, age under 40 and combined therapy were positively correlated to sustained virological response. Premature treatment discontinuation was associated with a lack of sustained response which was unrelated to gender or liver fibrosis. Fifty seven patients received combined therapy after unsuccessful monotherapy: sustained virological response rates were 33.3% in responders-relapsers and 23.7% in non-responders to initial therapy. Conclusions: This study showed that sustained response rates were similar in non selected patients from the general population and in highly selected patients from randomised trials. These results emphasize the necessity of increasing the ratio of treated patients which is still too low in general population. abstract_id: PUBMED:28833855 Efficacy of vitamin D supplementation in combination with conventional antiviral therapy in patients with chronic hepatitis C infection: a meta-analysis of randomised controlled trials. Background: Although a contributory role of vitamin D levels for the development of chronic hepatitis C has been suggested, the efficacy of vitamin D supplementation in combination with conventional antiviral therapy consisting of pegylated interferon-α (Peg-IFN-α) injection and oral ribavirin (RBV) remains unclear. We investigated its efficacy in the treatment of chronic hepatitis C via a meta-analysis of randomised controlled trials. Methods: We searched PubMed, EMBASE, the Cochrane Library, ClinicalTrials.gov and the bibliographies of relevant articles to locate additional publications in September 2016. Three evaluators independently reviewed and selected eligible studies based on predetermined selection criteria. Results: Of 522 articles meeting our initial criteria, a total of seven open-label, randomised controlled trials involving 548 participants, were included in the final analysis. Vitamin D supplementation in combination with Peg-IFN-α injection and oral RBV significantly increased the rate of viral response for hepatitis C at 24 weeks after treatment in a random-effects meta-analysis (relative risk = 1.30; 95% confidence interval = 1.04-1.62; I2 = 75.9%). Also, its significant efficacy was observed in patients with hepatitis C virus genotype 1, which is known to be refractory to antiviral therapy. Conclusions: In summary, we observed that additional use of vitamin D has a positive effect on sustained viral response rates of patients with chronic hepatitis C infection. However, we cannot establish the efficacy because of substantial heterogeneity, a small sample size and a low methodological quality. abstract_id: PUBMED:11577799 Interferon alfacon-1: a review of its pharmacology and therapeutic efficacy in the treatment of chronic hepatitis C. Unlabelled: Interferon alfacon-1 (consensus interferon) is a non-naturally occurring, synthetic, type 1 interferon (IFN)alpha that is used for the treatment of patients with chronic hepatitis C. The efficacy of subcutaneously administered interferon alfacon-1 has been demonstrated in clinical trials during the treatment of LFN-naive patients (interferon alfacon-1 9microg 3 times a week for 24 weeks) and retreatment of nonresponders and relapsers to previous interferon therapy (interferon alfacon1 15 microg 3 times a week for up to 48 weeks). Higher and more frequent interferon alfacon-1 dosages have also been investigated. Results from a pivotal double-blind randomised trial in 704 patients with chronic hepatitis C showed that interferon alfacon-19 microg 3 times a week achieved virological and biochemical response rates of 34.9 and 42.2%, respectively, at treatment end-point (week 24). Sustained virological and biochemical responses (week 48) were reported in 12.1 and 20.3% of the patients, respectively. In general, response rates in recipients of interferon alfacon-1 9 microg 3 times a week were similar to those achieved with IFN-alpha2b 3 MIU 3 times a week. However, interferon alfacon-1 was more effective in the subgroup of patients infected with hepatitis C virus (HCV) genotype 1 at end-point (virological response, 24 vs 15%; p &lt; 0.05) and post-treatment observation period (8 vs 4%) although the difference between treatment groups was statistically significant only at treatment end-point. The sustained virological response rate achieved in patients with high baseline levels of serum HCV RNA receiving interferon alfacon-1 was statistically superior to that exhibited in the IFN-alpha2b treatment group (7 vs 0%; p &lt; Interferon alfacon-1 also showed efficacy during the retreatment of non-responders and relapsers to previous IFN therapy in a large nonblind multicentre trial. Sustained virological response (week 72) was observed among 13 and 58% of nonresponders and relapsers, respectively, after 48 weeks of treatment with interferon alfacon-1 15 microg 3 times a week. Interferon alfacon-1 has been generally well tolerated in clinical trials. As with other IFNs, adverse events were reported frequently but were usually considered of mild to moderate severity, decreased with time and caused a small percentage of patients to withdraw from the treatment. Fever, fatigue, arthralgia, myalgia, headache and rigors were the most frequently reported adverse events. Psychiatric adverse events appeared to be dose-related and caused the majority of treatment withdrawals. Conclusion: Interferon alfacon-1 is generally well tolerated and is an effective agent in the treatment of patients with chronic hepatitis C. Comparative data from a pivotal randomised trial indicate that the drug has at least equivalent efficacy to IFNalpha-2b, and a statistically significant advantage was demonstrated at treatment end-point in patients infected with HCV genotype 1. A number of ongoing trials with interferon alfacon-1 are evaluating issues such as the optimal dosage regimen and duration of therapy in an effort to improve sustained virological response to therapy, a goal for IFNs in general. abstract_id: PUBMED:30853287 Hepatitis C treatment with all-oral direct-acting antivirals: Effectiveness and tolerance in a multicenter, prospective, observational study from French general hospitals (APROVVIE, ANGH). Background And Aims: According to clinical trials, the treatment of patients with chronic hepatitis C (CHC) with second-generation direct acting antiviral agents (DAAs) is highly efficient and well tolerated. The goal of this study was to investigate the effectiveness and safety of various combinations of these drugs during their first 2 years of use in the real-world practice of French general hospitals. Methods: Data from patients treated with all-oral DAAs in 24 French non-academic hospital centers from March 1, 2014 to January 1, 2016, were prospectively recorded. The sustained virological response 12-24 weeks after treatment (SVR 12-24) was estimated and severe adverse events (SAE) were evaluated and their predictive factors were determined using logistic regression. Results: Data from 1123 patients were analyzed. The population was 69% genotype (G) 1, 13% G3, 11.5% G4, 5% G2, 49% with cirrhosis and 55% treatment-experienced. The treatment regimens were sofosbuvir/ledipasvir (38%), sofosbuvir/daclatasvir (32%), sofosbuvir/simeprevir (17%), ombitasvir+paritaprevir+ritonavir (5%) (with dasabuvir 3.5%), and sofosbuvir/ribavirin (8%). Ribavirin was given to 24% of patients. The SVR 12-24 was 91.0% (95% CI: 89.2-92.5%). Sofosbuvir-ribavirin was less effective than other regimens. The independent predictors of SVR 12-24 by logistic regression were body weight, albumin, previous hepatocellular carcinoma and treatment regimen (sofosbuvir/ribavirin vs. others). Sixty-four severe adverse events (SAE) were observed in 59 [5.6%] patients, and were independently predicted by cirrhosis and baseline hemoglobin. Serum creatinine increased during treatment (mean 8.5%, [P&lt;10-5]), satisfying criteria for acute kidney injury in 62 patients (7.3%). Patient-reported overall tolerance was excellent, and patient-reported fatigue decreased during and after treatment. Conclusions: Second generation DAAs combinations are as effective and well tolerated in a « real-world » population as in clinical trials. Further studies are needed on renal tolerance. abstract_id: PUBMED:23219917 Hepatitis C treatment and SVR: the gap between clinical trials and real-world treatment aspirations. Objective: Despite the remarkable improvements in pharmacologic treatment efficacy for hepatitis C (HCV) reported in published clinical trials, published research suggests that, in "real-world" patient care, these medical outcomes may be difficult to achieve. This review was undertaken to summarize recent experience in the treatment of HCV in clinical settings, examining the course of patients through the stages of treatment and barriers to treatment encountered. Method: A comprehensive and representative review of the relevant literature was undertaken to examine HCV treatment experience outside of clinical trials in the last decade. This review found 25 unique studies with data on course of treatment and/or barriers to treatment in samples of patients with HCV not preselected for inclusion in clinical trials. Results: Results were examined separately for samples selected for HCV infection versus HCV/HIV coinfection. Only 19% of HCV-selected and 16% of HCV/HIV-coinfection selected patients were considered treatment eligible and advanced to treatment; even fewer completed treatment (13% and 11%, respectively) or achieved sustained virologic response (3% and 6%, respectively). Psychiatric and medical ineligibilities were the primary treatment barriers. Conclusion: Only by systematically observing and addressing potentially solvable medical and psychosocial barriers to treatment will more patients be enrolled in and complete HCV therapy. abstract_id: PUBMED:15832589 Treatment of hepatitis C viral infections in substance abusers. Aims: To examine the evidence for excluding chronic hepatitis C (CHC) patients with substance abuse from treatment with interferon (IFN) and ribavirin. Methods: We reviewed clinical trials focussing on the treatment of chronic hepatitis C of patients with substance abuse between 2001 and 2004. Ten clinical trials concerning antiviral treatment in substance abusers were described of which six were controlled ones. There were no randomised trials. There was one controlled multi-centre trial. One trial used pegylated IFN. Results: In the total group of substance abusers the sustained viral response (SVR) and the adherence was not different from control groups. In former drug users, active drug users and patients taking substitution therapy for opioid dependence the sustained viral response and adherence was not different from control populations. However, non-substituted active drug users seemed more likely to be lost to follow-up. Discontinuation of treatment occurred most frequently during the first 8 weeks of therapy. Neurobehavioural changes leading to depression started in the first 8 weeks of treatment. Although follow-up periods after SVR were short, the currently described re-infection rate occurring in active intravenous drug users remains low. Conclusions: There is no evidence to withhold antiviral treatment against HCV in active substance abusers. It seems important to advise to start substitution therapy in non-substituted active drug users, increase substitution therapy dose in substituted patients and treat depression as early as possible. More prospective controlled trials on HCV treatment in active and difficult-to-reach substance users are needed. abstract_id: PUBMED:19196557 Antiviral treatment for hepatitis C virus infection: effectiveness at general population level in a highly endemic area. Background: Peginterferon plus ribavirin treatment induced a sustained virological response in &gt;50% of HCV-RNA-positive individuals enrolled in published clinical trials. Aim: To determine anti-HCV treatment effectiveness at a general population level. Patients And Methods: In 2002, a 1:5 random sample of &gt;11 years old inhabitants of a small Italian town (Cittanova) was invited for HCV screening. HCV-RNA-positive individuals were evaluated for antiviral treatment. Results: 1645 of 1924 invited individuals (85.5%) participated in the screening. 84 HCV-RNA-positive individuals were detected: median age was 65 years (range: 32-87); 67% was infected with genotype 1 or 4. Antiviral treatment was judged unnecessary for 43 (51.2%), due to persistently normal alanine aminotransferases, mild disease at liver biopsy or age &gt;70 years without cirrhosis. Twenty-eight of the remaining 41 patients (68.3%) were ineligible for treatment, because of medical/psychiatric contraindications (42.9%), alcohol/drug abuse (17.9%), decompensated cirrhosis/hepatocellular carcinoma (17.9%), not attending official appointments (10.7%), previous intolerance/non-response to interferon plus ribavirin (10.7%). 5 of 13 eligible patients (38.5%) did not receive treatment (4 refused and 1 accidental death). 3 of 8 treated patients (37.5%) reached a sustained virological response. Conclusions: Although efficacy of anti-HCV therapy improved in recent years, we found that low eligibility to treatment still limited its effectiveness at general population level in a highly endemic town. abstract_id: PUBMED:35338062 Randomised controlled trial of active case management to link hepatitis C notifications to treatment in Tasmania, Australia: a study protocol. Introduction: By subsidising access to direct acting antivirals (DAAs) for all people living with hepatitis C (HCV) in 2016, Australia is positioned to eliminate HCV as a public health threat. However, uptake of DAAs has declined over recent years and new initiatives are needed to engage people living with HCV in care. Active follow-up of HCV notifications by the health department to the notifying general practitioner (GP) may increase treatment uptake. In this study, we explore the impact of using hepatitis C notifications systems to engage diagnosing GPs and improve patient access to treatment. Methods And Analysis: This study is a randomised controlled trial comparing enhanced case management of HCV notifications with standard of care. The intervention includes phone calls from a department of health (DoH) specialist HCV nurse to notifying GPs and offering HCV management support. The level of support requested by the GP was graded in complexity: level 1: HCV information only; level 2: follow-up testing advice; level 3: prescription support including linkage to specialist clinicians and level 4: direct patient contact. The study population includes all GPs in Tasmania who notified HCV diagnosis to the DoH between September 2020 and December 2021. The primary outcome is proportion of HCV cases who initiate DAAs after 12 weeks of HCV notification to the health department. Secondary outcomes are proportion of HCV notifications that complete HCV RNA testing, treatment workup and treatment completion. Multiple logistic regression modelling will explore factors associated with the primary and secondary outcomes. The sample size required to detect a significant difference for the primary outcome is 85 GPs in each arm with a two-sided alpha of 0.05% and 80% power. Ethics And Dissemination: The study was approved by University of Tasmania's Human Research Ethics Committee (Protocol ID: 18418) on 17 December 2019. Results of the project will be presented in scientific meetings and published in peer-reviewed journals. Trial Registration Number: NCT04510246. Trial Progression: The study commenced recruitment in September 2020 and end of study expected December 2021. abstract_id: PUBMED:18184190 Interferon monotherapy of chronic hepatitis C in dialysis patients: meta-analysis of clinical trials. The efficacy of monotherapy with interferon (IFN) (conventional or pegylated IFN) in dialysis patients with chronic hepatitis C remains unclear, although a number of clinical trials have been published addressing this issue. The aim of the study was to evaluate the efficacy and safety of monotherapy by conventional or pegylated IFN in dialysis patients with chronic hepatitis C by performing a systematic review of the literature with a meta-analysis of clinical trials. The primary outcome was sustained virological response (SVR; as a measure of efficacy), and the secondary outcome was drop-out rate (as a measure of tolerability). We used the random-effects model of Der Simonian and Laird, with heterogeneity and sensitivity analyses. We identified 28 clinical trials (645 unique patients), of which six (21.4%) had a controlled design. In the group of trials based on conventional IFN, the summary estimate for SVR and drop-out rate was 39% [95% confidence interval (CI) 32-46] and 19% (95% CI 13-26) respectively. The summary estimate for SVR rate in patients with the hepatitis C virus genotype 1 was 33% (95% CI 19-47). In the subset of trials using pegylated IFN, the summary estimate for SVR and drop-out rate was 31% (95% CI 7-55) and 27% (95% CI 1-52) respectively. The most frequent side-effects requiring interruption of treatment were flu-like symptoms, and gastrointestinal and haematological changes. A relationship between age and drop-out rate was found, even if no statistical significance was reached (P = 0.064). The studies were heterogeneous with regard to SVR and drop-out rate. No publication bias was observed. One-third of dialysis patients with chronic hepatitis C were successfully treated with conventional or pegylated IFN monotherapy. Preliminary evidence does not support additional benefit due to monotherapy with pegylated IFN on the viral response in the chronic kidney disease (CKD) population. Tolerance to IFN monotherapy was unsatisfactory, particularly to pegylated IFN. The optimal antiviral treatment of chronic hepatitis C in dialysis populations is currently under active investigation. abstract_id: PUBMED:31933274 Addressing Hepatitis C in the American Incarcerated Population: Strategies for Nationwide Elimination. Purpose Of Review: The prevalence of Hepatitis C virus (HCV) in the US incarcerated population is disproportionately high, and when inmates with infection are released back into the general population, they play a substantial role in the spread of disease. This review provides support for targeting the jail/prison population to eliminate HCV in the general population. It will also summarize various screening/treatment models to curtail the burden of disease behind and beyond bars. Recent Findings: Transitioning from risk-based testing to opt-out testing in prisons/jails would be cost-effective through greater identification of cases and treatment to prevent complications from cirrhosis. Other innovative strategies, such as the nominal pricing mechanism or the "Netflix" DAA subscription model, have the potential to be cost-effective and to increase access to treatment. Addressing HCV in the incarcerated population is a strategy to bring the US closer to successfully eradicating the epidemic. Such findings should incentivize policymakers to implement care models that target this population. Answer: The effectiveness of chronic hepatitis C treatment in the general population appears to be similar to that observed in randomized trials. A population-based study conducted in the French departments of Côte-d'Or and Doubs found that sustained virological response rates were comparable between non-selected patients from the general population and highly selected patients from randomized trials. The study reported sustained virological response rates of 18% after interferon alone and 46.9% after combined therapy with interferon and ribavirin. The necessity of increasing the ratio of treated patients in the general population was emphasized, as it was still considered too low (PUBMED:14586246). However, it is important to note that there are challenges in achieving these outcomes in real-world settings. A review of the treatment of hepatitis C in clinical settings outside of clinical trials found that only a small percentage of patients with HCV or HCV/HIV coinfection advanced to treatment and achieved sustained virological response, with psychiatric and medical ineligibilities being primary barriers to treatment (PUBMED:19196557). Similarly, a study in a highly endemic Italian town found that low eligibility for treatment limited the effectiveness of anti-HCV therapy at the general population level (PUBMED:23219917). Despite these challenges, there are interventions that can improve treatment outcomes in the general population. For instance, vitamin D supplementation in combination with conventional antiviral therapy was found to significantly increase the rate of viral response for hepatitis C in a meta-analysis of randomized controlled trials (PUBMED:28833855). Additionally, a multicenter, prospective, observational study from French general hospitals showed that second-generation direct-acting antiviral agents (DAAs) combinations were as effective and well tolerated in a "real-world" population as in clinical trials (PUBMED:30853287). In summary, while chronic hepatitis C treatment can be as efficient in the general population as in randomized trials, real-world application faces several barriers that can affect treatment eligibility and outcomes. Efforts to improve access to treatment and manage barriers are necessary to achieve similar efficacy in the general population.
Instruction: Is sonographic surveillance of polytetrafluoroethylene-covered transjugular intrahepatic portosystemic shunts (TIPS) necessary? Abstracts: abstract_id: PUBMED:18774362 Is sonographic surveillance of polytetrafluoroethylene-covered transjugular intrahepatic portosystemic shunts (TIPS) necessary? A single centre experience comparing both types of stents. Aim: To investigate whether sonographic (US) surveillance of polytetrafluoroethylene covered transjugular intrahepatic portosystemic shunts (TIPS) is necessary. Materials And Methods: We identified 128 patients who underwent TIPS for complications of portal hypertension between January 2001 and December 2005 at a large tertiary centre. Procedural data were retrospectively analysed. US surveillance of the TIPS was performed at baseline with scheduled follow-up or whenever shunt dysfunction was suspected. Clinical and radiology reports were compared to assess US surveillance of the TIPS. Results: Four hundred and twenty-six US studies were performed, with a median of three per patient (range 1-5). The median follow-up period was 378 days (range 1-1749 days). Twenty-three patients (18%) had baseline US studies performed only whereas 105 (82%) also had follow-up studies. Forty-one (32%) of 128 patients [32 (78%) Wallstent, nine (22%) Viatorr] had Doppler ultrasound abnormalities noted. Venography was performed in all 41 patients. Abnormal venography and elevated hepatic venous pressure gradient (HVPG) was seen in 34 (82.9%) of the 41 patients [29 (85.3%) Wallstent, five (14.7%) Viatorr]. Among the 34 patients, 17 (50%) [13 (76.5%) Wallstent, four (23.5%) Viatorr] had venographic abnormalities noted at the hepatic venous end accompanied by increased HVPG. All four of the Viatorr patients had minor narrowing at the hepatic venous end and HVPG measurements that ranged 3-4 mm Hg above 12 mm Hg. Conclusion: Considering the improved patency of covered stents in TIPS, US surveillance may be superfluous after the baseline study. abstract_id: PUBMED:25714299 TIPS for refractory ascites: a 6-year single-center experience with expanded polytetrafluoroethylene-covered stent-grafts. OBJECTIVE. This single-center study evaluated the use of expanded polytetrafluoroethylene (ePTFE)-covered stent-grafts for transjugular intrahepatic portosystemic shunt (TIPS) placement to manage portal hypertension-related refractory ascites. MATERIALS AND METHODS. One hundred patients at a single tertiary care center in a major metropolitan hospital underwent TIPS placement with an ePTFE-covered stent-graft (Viatorr TIPS Endoprosthesis). Patients with portal hypertension-related ascites and preexisting hepatocellular carcinoma or liver transplant were excluded from the analysis. Records were reviewed for demographic characteristics, technical success of the TIPS procedures, and stent follow-up findings. Clinical results were assessed at 90- and 180-day intervals. RESULTS. Immediate technical success of the TIPS procedure was 100%. Of the 61 patients with documented follow-up, 55 (90.2%) had a partial or complete ascites response to TIPS creation. Of these 55 patients, nine experienced severe encephalopathy. Six of 61 patients (9.8%) did not experience a significant ascites response. Overall survival was 78.7% at 365-day follow-up. The 365-day survival was 84.2% for patients with a model for end-stage liver disease (MELD) score of less than 15, 67.0% for those with a score of 15-18, and 53.8% for those with a score of greater than 18 (p = 0.01). For patients with a MELD score of less than 18, the 365-day survival was 88.0% for those with an albumin value of 3 mg/dL or greater and 72.8% for those with an albumin value of less than 3 mg/dL (p = 0.04). CONCLUSION. TIPS placement using an ePTFE-covered stent-graft is an efficacious therapy for refractory ascites. Patients with preserved liver function-characterized by a MELD score of less than 15 or a MELD score of less than 18 and an albumin value of 3 mg/dL or greater-experience the greatest survival benefit. abstract_id: PUBMED:29675358 Persistence of gastric or esophageal varices on final angiography increases transjugular intrahepatic portosystemic shunt revision rate after polytetrafluoroethylene-covered stent shunt creation. Background: To assess the association between final polytetrafluoroethylene (PTFE)-covered stent transjugular intrahepatic portosystemic shunt (TIPS) angiographic parameters and free shunt revision survey. Methods: Series of two comparison groups were generated with persistence of varices or not, the 25th, 50th, and 75th percentile as cutoff for each angle and a 15-mm distance as cutoff for distance D. Kaplan Meier free shunt revision curves were then created and compared with Log Rank test. Results: Mean follow-up was 455 days. Thirteen (19.4%) patients had shunt revision. Significant free shunt revision survey difference was found between post-procedural angiographic persistent varices group and the group without varices (P=0.0001). Shunt revision rate at 3, 12 and 24 months was respectively 13%, 29%, and 39% in the group with varices versus 0%, 2.7% and 2.7% in the group without. No difference was found between groups for angles A, B, C and distance D. Conclusions: Persistence of gastric or esophageal varices on final trans-TIPS angiography increases TIPS revision rate after PTFE-covered stent shunt creation whereas geometric parameters have no influence. abstract_id: PUBMED:15046218 Polytetrafluoroethylene-covered stent grafts for TIPS procedure: 1-year patency and clinical results. Objectives: Polytetrafluoroethylene (PTFE)-covered stent grafts appear to have the potential to improve TIPS patency, but data available are few and controversial. The aim of this prospective nonrandomized trial was to assess TIPS safety and 1-yr patency with a new commercially available PTFE-covered stent graft in comparison with a group of historical controls treated with conventional stents. Methods: Between July 1992 and December 1999, 87 consecutive cirrhotics underwent TIPS with conventional stents, while from January 2000 to November 2001, 32 consecutive cirrhotics were treated with PTFE-covered stent grafts. All patients were followed by the same medical team according to a prospective protocol for a diagnostic work-up and a surveillance strategy. Results: The two groups were comparable for age, sex, etiology, and severity of cirrhosis. The 1-yr probability of remaining free of shunt dysfunction tended to be higher in the covered stent group: 76.3% (95% CI = 58.7-93.9%) versus 57.5% (95% CI = 46.6-68.4%); log rank test: p = 0.055. However, stenoses inside the stent were significantly higher in patients with bare stents (88% vs 17%), while stenoses at the hepatic or portal vein were more frequent in PTFE-covered stent-graft group (50% vs 9% and 33% vs 3%, respectively), (chi2 = 15.42; df = 2.0; p = 0.0004). Stenoses inside the covered portion of the stent did not occur. One-year cumulative rebleeding, encephalopathy, and survival were similar. Conclusions: PTFE-covered stents are able to solve pseudointimal hyperplasia within the stent tract, but have a high incidence of hepatic or portal vein stenosis. Improvements in stent design and insertion techniques are necessary to fully achieve the potential benefit of this new device. abstract_id: PUBMED:32394042 Impact of PTFE-covered stent position and extension on TIPS patency. Purpose: The objectives of this study were to determine the effects of expanded polytetrafluoroethylene (PTFE)-covered stent location and TIPS extension on primary patency. Methods: This retrospective cohort study examined patients with PTFE-covered TIPS creation between 07/2002 and 06/2016. Clinical information and patency outcomes at 24 months were extracted. At TIPS creation, extension was performed at the discretion of the operator. Kaplan-Meier curves of primary patency were generated with conditional variables of HVO-HCJ distance, extension status, and covered versus uncovered extensions. Additional logistic regression analyses of distances were performed. Results: Of 393 patients, 115 patients (29%) underwent stent extension, 79 (20% of total cohort) of which were at the HVO end alone. Primary patency for all TIPS was 75%, 68%, and 54% at 3, 6, and 12 months. The data endpoint were transplant or death in 92 (23%) and 116 (30%). Kaplan-Meier curves showed no statistically significant difference between the variables and primary patency at 12 and 24 months: distance up to versus greater than 10 mm (p = 0.32, 0.81); extension versus no extension (p = 0.83, 0.85); uncovered versus covered extensions (p = 0.58, 0.70). Logistic regression analyses showed a trend toward statistical significance. Conclusion: In the setting of PTFE-covered TIPS creation, extended TIPSs and unextended well-positioned TIPSs have no difference in primary patency rates. Stent position and extension length may have an effect on primary patency, but were likely obscured by "user recognition" effects. abstract_id: PUBMED:20533444 Comparison study of Doppler ultrasound surveillance of expanded polytetrafluoroethylene-covered stent versus bare stent in transjugular intrahepatic portosystemic shunt. Objective: This prospectively randomized controlled study aimed to assess with Doppler ultrasound (US) the shunt function of expanded polytetrafluoroethylene (ePTFE)-covered transjugular intrahepatic portosystemic shunt (TIPS) stent versus bare stent and to evaluate the usefulness of routine TIPS follow-up of ePTFE-covered stents. Methods: Sixty consecutive patients were randomized for bare or covered transjugular TIPS stenting in our institution between April 2007 and April 2009. Data of follow-up Doppler US, angiography, and portosystemic pressure gradient measurements were collected and analyzed. Results: The follow-up period was 8.34 + or - 4.42 months in the bare-stent group and 6.16 + or - 3.89 months in the covered-stent group. Baseline clinical characteristics were similar in both groups. Two hundred three US studies were performed in 60 patients, with a mean of 3.4 per patient, and demonstrated abnormalities in 28 patients (21 bare stents, 7 ePTFE-covered stents), 19 of them (13 in bare-stent group, 6 in covered-stent group) showing no clinical evidence of recurrence. Ten of 13 patients in the bare-stent group underwent balloon angioplasty or additional stent placement, whereas only one of six patients in the covered-stent group needed reintervention for intimal hyperplasia. The average peak velocity in the midshunt of ePTFE-covered stent was 139 + or - 26 cm/s after TIPS creation and 125 + or - 20 cm/s during follow-up, which was significantly higher than the bare-stent group (p &lt; 0.05). The main portal vein and hepatic artery showed higher flow velocities in the ePTFE-covered stent group than in the bare-stent group. ePTFE-covered stents maintained lower portosystemic pressure gradient than bare stents (9.5 + or - 2.9 versus 13.2 + or - 1.5 mmHg, p &lt; 0.05). Conclusions: ePTFE-covered stents resulted in higher patency rates and better hemodynamics than bare stents. Routine US surveillance may not be necessary in patients with ePTFE-covered TIPS stent. abstract_id: PUBMED:16950701 The use of a polytetrafluoroethylene-covered stent graft for transjugular intrahepatic portosystemic shunt (TIPS): Long-term follow-up of 100 patients. Purpose: To retrospectively evaluate results and clinical outcome of transjugular intrahepatic portosystemic shunt (TIPS) after implantation of a polytetrafluoroethylene (PTFE)-covered stent graft. Material And Methods: The stent graft was used in 112 patients with higher risk of primary (i.e. patients with Budd-Chiari syndrome) or secondary shunt failure (patients with occlusion of a previous uncovered stent), or a complicated TIPS procedure with an imminent technical complication requiring covered stent. Patients were scheduled at 3- to 6-month intervals for duplex-sonographic control of shunt function. Radiological revision was performed in patients with a decrease in shunt function by &gt;25%, primary non-response, or relapse of the index symptom. Results: Twelve patients were lost to follow-up, and 100 patients were followed for 22+/-15 months. The actuarial rates of shunt patency were 90, 84, and 74% at 1, 2, and 3 years of follow-up, respectively. Two patients developed early (within 1 month) and 14 patients late shunt failure. Except for one transient rise in liver enzymes due to outflow obstruction by the stent graft, no technical complications were seen. Primary response to treatment was seen in 97% of patients treated for variceal bleeding and 84% of patients treated for refractory ascites. A relapse of the index symptom was seen in 13% of bleeders and 9% of patients treated for refractory ascites. Conclusion: TIPS created with a PTFE-covered stent graft showed favorable long-term results. abstract_id: PUBMED:30223282 Doppler ultrasound surveillance of TIPS-patency in the era of covered stents - retrospective analysis of a large single-center cohort. Purpose: The purpose of this study was to determine accuracy and necessity of long-term Doppler ultrasound (DU) surveillance of transjugular intrahepatic portosystemic shunt (TIPS) patency after implantation of an ePTFE-covered stent-graft (Viatorr). Methods: This single-center retrospective study includes 228 consecutive cirrhotic patients with TIPS implantation due to portal hypertensive complications. Standardized DU surveillance was scheduled 3 - 5 days, 3 months, and 6 months after TIPS implantation and every 6 months thereafter. Portal venography was performed in case of DU findings suspicious of TIPS dysfunction, clinical signs of recurrent portal hypertension, or refractory hepatic encephalopathy. Results: During a mean follow-up of 16.6 ± 23.4 months, 866 DU examinations were performed. Twenty-two cases of TIPS dysfunction were observed in 16 patients with no first dysfunction more than 4 years after implantation. Routine DU in asymptomatic patients had little therapeutic impact (0.75 %). DU and venography were concordant in 39/46 (84.8 %) paired examinations, and 1-, 2-, and 5-year primary TIPS patency was 87.4 %, 83.7 %, and 79.97 %, respectively. Patients with TIPS dysfunction and subsequent successful revision during the first 2 years of follow-up had a significantly higher risk (p = 0.001) of new dysfunction compared to those without TIPS dysfunction. Cumulative 1-, 2-, and 5-year survival was 68.7 %, 61.3 %, and 42.7 %, respectively. Conclusions: Despite acceptable accuracy, scheduled DU surveillance proved to have minor therapeutic impact. Thus, detailed DU surveillance is not useful in asymptomatic patients after 2 years of unremarkable follow-up. In contrast, long-term DU surveilleance should be performed in patients after successful revision of TIPS dysfunction and patients with prothrombotic states (e. g., portal vein thrombosis, Budd-Chiari syndrome). abstract_id: PUBMED:9502688 Treatment of TIPS stenosis with ePTFE graft-covered stents. Midshunt stenosis and recurrent variceal bleeding occurred in 2 patients after transjugular intrahepatic portosystemic shunts (TIPS). Repeat angioplasty was performed in both cases but recurrent stenosis again led to hemorrhage. Expanded polytetrafluoroethylene (ePTFE) graft-covered stents were expanded in each of the TIPS at the midshunt, reducing the portosystemic gradient for both patients. Variceal bleeding ceased, and follow-up studies showed no evidence of recurrent stenosis in either case. abstract_id: PUBMED:23811307 TIPS: 25 years later. In the 25 years since the first TIPS intervention has been performed, technical standards, indications, and contraindications have been set up. The previous considerable problem of shunt failure by thrombosis or intimal proliferation in the stent or in the draining hepatic vein has been reduced considerably by the availability of polytetrafluoroethylene (PTFE)-covered stents resulting in reduced rebleeding and improved survival. Unfortunately, most clinical studies have been performed prior to the release of the covered stent and, therefore, do not represent the present state of the art. In spite of this, TIPS has gained increasing acceptance in the treatment of the various complications of portal hypertension and vascular diseases of the liver. Answer: The necessity of sonographic surveillance of polytetrafluoroethylene (PTFE) covered transjugular intrahepatic portosystemic shunts (TIPS) is debated. Some studies suggest that routine ultrasound (US) surveillance may not be necessary for patients with PTFE-covered TIPS stents. For instance, one study found that PTFE-covered stents resulted in higher patency rates and better hemodynamics than bare stents, implying that routine US surveillance might not be required for patients with PTFE-covered TIPS stent (PUBMED:20533444). Another study concluded that considering the improved patency of covered stents in TIPS, US surveillance may be superfluous after the baseline study (PUBMED:18774362). However, other studies highlight the importance of continued surveillance. A study on the persistence of gastric or esophageal varices on final angiography found that it increases TIPS revision rate after PTFE-covered stent shunt creation, suggesting that some form of monitoring is necessary (PUBMED:29675358). Another study indicated that despite acceptable accuracy, scheduled Doppler ultrasound surveillance had minor therapeutic impact, but detailed surveillance is not useful in asymptomatic patients after 2 years of unremarkable follow-up, while it should be performed in patients after successful revision of TIPS dysfunction and patients with prothrombotic states (PUBMED:32394042). In summary, while some evidence suggests that routine sonographic surveillance may not be necessary due to the high patency rates of PTFE-covered stents, other findings indicate that surveillance could still be important in certain cases, such as after TIPS revision or in patients with prothrombotic conditions. The decision to perform sonographic surveillance should be individualized based on patient-specific factors and clinical judgment.
Instruction: Do patients perceive a link between a fragility fracture and osteoporosis? Abstracts: abstract_id: PUBMED:18366716 Do patients perceive a link between a fragility fracture and osteoporosis? Background: To evaluate factors associated with whether patients associate their fracture with future fracture risk. Methods: Fragility fracture patients participated in a telephone interview. Unadjusted odds ratios (OR, [95% CI]) were calculated to identify factors associated with whether patients associate their fracture with increased fracture risk or osteoporosis. Predictors identified in univariate analysis were entered into multivariable logistic regression models. Results: 127 fragility fracture patients (82% female) participated in the study, mean (SD) age 67.5 (12.7) years. An osteoporosis diagnosis was reported in 56 (44%) participants, but only 17% thought their fracture was related to osteoporosis. Less than 50% perceived themselves at increased risk of fracture. The odds of an individual perceiving themselves at increased risk for fracture were higher for those that reported a diagnosis of osteoporosis (OR 22.91 [95%CI 7.45;70.44], p &lt; 0.001), but the odds decreased with increasing age (0.95 [0.91;0.99], p&lt;0.009). The only variable significantly associated with the perception that the fracture was related to osteoporosis was self-reported osteoporosis diagnosis (39.83 [8.15;194.71], p&lt;0.001). Conclusion: Many fragility fracture patients do not associate their fracture with osteoporosis. It is crucial for physicians to communicate to patients that an osteoporosis diagnosis, increasing age or a fragility fracture increases the risk for future fracture. abstract_id: PUBMED:23962530 Factors predictive of the perceived osteoporosis-fracture link in fragility fracture patients. Objective: Given the asymptomatic nature of osteoporosis, a fragility fracture provides an opportunity to make the issue of osteoporosis relevant to patients. Patients who link their fragility fracture with osteoporosis are more likely to initiate osteoporosis treatment, yet to date, we know little about who is likely to make this link. This study examined whether demographic, health, and osteoporosis belief factors predicted a perceived link between a fragility fracture and osteoporosis. Study Design: This longitudinal cohort study analyzed baseline and follow up data collected as part of a provincial osteoporosis screening initiative targeting fragility fracture patients. Logistic regression analysis was used to examine the relationship between hypothesized predictors and the outcome. Main Outcome Measure: Patient perception of the osteoporosis-fracture link at follow up. Results: At baseline, 93% (1615/1735) of patients did not believe their fracture could have been caused by osteoporosis. Of these, only 8.2% changed this perception at follow up. Adjusted analyses showed that baseline characteristics associated with making the osteoporosis-fracture link at follow up were: a previous fracture (odds ratio (OR) 1.7, confidence interval (CI) 1.2-2.6), perception of osteoporosis pharmacotherapy benefits OR 1.2 (CI 1.0-1.5), diagnosis of rheumatoid arthritis OR 2.6 (CI 1.4-4.9) and the perception of bones as "thin" OR 8.2 (CI 5.1-13.1). Conclusion: These results shed more light on patient-level barriers to osteoporosis management following an osteoporosis educational programme. They may be used to identify patients less likely to make the link between their fracture and osteoporosis and to inform interventions for this patient group. abstract_id: PUBMED:37061647 Study of the link between hemotopoietic and skeletal systems in patients attending a referral center for osteoporosis. Purpose: To investigate the link between hematopoietic and skeletal tissues in patients with fragility fractures. Methods: We retrospectively analyzed the medical records of women older than 40 years who attended the Bone Disease Unit of "Sapienza" University of Rome for their first visit for osteoporosis from January 2020 to June 2022. Results: Fragility fractures were found in 61.8% of the sample. In particular, vertebral fractures in 35.5%, femoral fractures in 6.3%, Colles fractures in 16.5% and non-vertebral non-hip in 42.5%. Fractured patients were significantly older compared to non-fractured, had lower mean values of lumbar spine (p = 0.01), and femoral neck BMD (p = 0.007). A red blood cell distribution width (RDW) value higher than 15% was observed four times more in those with fractures compared to non-fractured patients (8.9% vs 2%, p = 0.01) and was associated with vertebral fracture after adjusting for age, BMI, menopause, nutritional status, smoking, osteoporosis and anemia (OR = 4.1, 95% CI 1.6-11.4, p = 0.003). Hematocrit was negatively associated with hip fracture also adjusting for age, BMI, menopause, nutritional status, smoking, osteoporosis (p = 0.025). Conclusion: Our study demonstrates that RDW values were significantly associated with vertebral fracture and hematocrit with hip fracture. Since both parameters are included in the initial evaluation of patients with suspected bone fragility, our results should push doctors to look at these values with no incremental cost for national health services. abstract_id: PUBMED:24535494 Pediatric nephrolithiasis and the link to bone metabolism. Purpose Of Review: To review the recent publications describing the link between pediatric nephrolithiasis and bone metabolism. Recent Findings: Nephrolithiasis incidence is increasing in children and is associated with low bone mineral density (BMD). Affected children are conceptually at risk for fractures and osteoporosis. In addition to abnormal calcium metabolism, inflammation, genetic makeup and dietary habits are being recognized as important factors in the pathophysiology of nephrolithiasis and low bone density. Findings from retrospective reviews suggest that low BMD in children may be improved with citrate or thiazide treatment. Summary: The healthcare burden from low BMD with subsequent osteoporosis and fracture risk is immense with potential far-reaching effects in patient quality of life and healthcare expense. Bone mass is acquired in the pediatric age range, thus it is important to identify and treat at-risk children. Retrospective reviews in pediatric patients indicate that citrate or thiazide diuretic treatment may improve BMD. We now understand that a relationship exists between nephrolithiasis and low BMD. To improve healthcare for our current patients as well as protect their future health it is important to identify low BMD and initiate strategies to improve BMD in 'at-risk' children. abstract_id: PUBMED:30813666 Clinical application of bridge-link combined fixation system for adult mid-shaft clavicle fractures Objective: To explore clinical efficacy of bridge-link combined fixation system for adult mid-shaft clavicle fractures. Methods: From January 2016 to August 2016, 28 patients with mid-shaft clavicle fractures were treated by bridge-link combined fixation system, including 15 males and 13 females, aged from 27 to 82 years old with an average of(48.50±15.34) years old; the courses of disease was for 13 to 15 months with an average of (14.17±0.77) months. Fracture healing time and complication were observed, postoperative and postoperative Constant function score of shoulder joint at 1, 3 and 13 months and were compared. Results: All patients were followed up for 13 to 15 months with an average of (14.17±0.77) months. Twenty-eight patients achieved clinical fracture healing without infection, bone un-union, delayed union, breakage of internal fixation and re-fracture after removing of the internal fixation occurred. Fracture healing time ranged from 2.5 to 4 months with (3.05±0.44) months. Postoperative Constant score at 1, 3 and 13 months were 76.57±4.70, 90.75±3.62, 96.07±2.40 respectively, and had significant difference compared with before operation(58.36±4.98). Conclusions: Bridge-link combined fixation system could be used as a new internal fixation for adult mid-shaft clavicle fractures, which has advantages of rapid recovery, less complications, and could reduce incidence of breakage of internal fixation, osteoporosis, re-fractures after removing the internal fixation. abstract_id: PUBMED:32504095 Few fragility fracture patients perceive that their bone health is affected by their comorbidities and medications. We examined fragility fracture patients' perceptions of associations between bone health and other chronic conditions and medications. Awareness of the associations between bone health and these conditions and medications was low. Providers should increase patients' awareness of these associations in order to minimize the risk of future fracture. Introduction: Among patients with a fragility fracture presenting with at least one other chronic health condition, we examined (1) perceptions of the association between bone health and their other health conditions, and (2) perceptions of the association between bone health and prescribed medications taken for other health conditions. Methods: We identified fragility fracture patients presenting to a Canadian urban fracture clinic with at least one self-reported chronic health condition (in addition to bone fragility). In-depth interviews, 60-90 min in duration, were conducted. Our qualitative methodology was informed by saliency analysis. Results: We interviewed 26 patients (21 females, 5 males) aged 45 to 84 years old. Participants were taking 1-13 medications each and presented with a variety of comorbidities (range 1-7). All participants described at least one condition or medication they were currently taking for which there existed evidence of a negative effect on bone health (increased risk of fracture, bone loss, falling). Two participants perceived a correct association between their other health conditions and compromised bone health, and four participants perceived a correct association between their medications and compromised bone health. Conclusion: All patients reported a chronic health condition and/or were taking at least one medication that potentially compromised their bone health. Patient awareness of the association between bone health and other health conditions and prescribed medications was low. Health care providers should increase patients' awareness of the bone health significance of their chronic conditions and medications in order to minimize the risk of future fracture. abstract_id: PUBMED:23225289 How do osteoporosis patients perceive their illness and treatment? Implications for clinical practice. Unlabelled: Non-adherence inhibits successful treatment of osteoporosis. This study used a theoretical framework to explore osteoporosis patients' cognitive and emotional representations of their illness and medication, using both interviews and drawing. We recorded some misconceptions patients have about their condition and medication which could act as barriers to treatment adherence. Purpose: Despite the high efficacy of current treatments in reducing fracture risk, poor adherence is still a problem in osteoporosis. This qualitative study aims to inform the development of a psychological intervention to increase adherence through the investigation of osteoporosis patients' perceptions of their illness and medication. The self-regulation model (Leventhal) provided the framework for the study. Method: Participants were 14 female outpatients from a London teaching hospital who suffer with osteoporosis or osteopenia. Data were collected using both semi-structured interviews and drawings. Drawings were used to elicit participants' visual representations (imagery) of their condition. Results: We found that patients held illness and medication beliefs that were not in accord with current scientific evidence. Interviews revealed that participants had good knowledge of what osteoporosis is, but they had low understanding of the role of medication in reducing fracture risk, various concerns about the side effects of medication, poor understanding of the causes of osteoporosis and uncertainty about how it can be controlled. Additionally, drawings elicited more information about the perceived effects of osteoporosis and emotional reactions to the condition. Conclusions: Osteoporosis sufferers need a better understanding of their fracture risk and what they can do to control their condition. Concerns about medication need to be addressed in order to improve adherence, particularly in relation to the management of side effects. Since drawings of osteoporosis were found to arouse emotions, it is concluded that risk communication in osteoporosis could benefit from using visual images. abstract_id: PUBMED:28567345 Dementia and osteoporosis in a geriatric population: Is there a common link? Aim: To determine the existence of a common pathological link between dementia and osteoporosis through reviewing the current evidence base. Methods: This paper reviews the current literature on osteoporosis and dementia in order to ascertain evidence of a common predisposing aetiology. A literature search of Ovid MED-LINE (1950 to June 2016) was conducted. The keywords "osteoporosis", "osteoporotic fracture", "dementia" and "Alzheimer's disease" (AD) were used to determine the theoretical links with the most significant evidence base behind them. The key links were found to be vitamins D and K, calcium, thyroid disease, statins, alcohol and sex steroids. These subjects were then searched in combination with the previous terms and the resulting papers manually examined. Theoretical, in vitro and in vivo research were all used to inform this review which focuses on the most well developed theoretical common causes for dementia (predominantly Alzheimer's type) and osteoporosis. Results: Dementia and osteoporosis are multifaceted disease processes with similar epidemiology and a marked increase in prevalence in elderly populations. The existence of a common link between the two has been suggested despite a lack of clear pathological overlap in our current understanding. Research to date has tended to be fragmented and relatively weak in nature with multiple confounding factors reflecting the difficulties of in vivo experimentation in the population of interest. Despite exploration of various possible mechanisms in search for a link between the two pathologies, this paper found that it is possible that these associations are coincidental due to the nature of the evidence available. One finding in this review is that prior investigation into common aetiologies has found raised amyloid beta peptide levels in osteoporotic bone tissue, with a hypothesis that amyloid beta disorders are systemic disorders resulting in differing tissue manifestations. However, our findings were that the most compelling evidence of a common yet independent aetiology lies in the APOE4 allele, which is a well-established risk for AD but also carries an independent association with fracture risk. The mechanism behind this is thought to be the reduced plasma vitamin K levels in individuals exhibiting the APOE4 allele which may be amplified by the nutritional deficiencies associated with dementia, which are known to include vitamins K and D. The vitamin theory postulates that malnutrition and reduced exposure to sunlight in patients with AD leads to vitamin deficiencies. Conclusion: Robust evidence remains to be produced regarding potential links and regarding the exact aetiology of these diseases and remains relevant given the burden of dementia and osteoporosis in our ageing population. Future research into amyloid beta, APOE4 and vitamins K and D as the most promising aetiological links should be welcomed. abstract_id: PUBMED:32851148 Patients' Experiences of Nurse Case-Managed Osteoporosis Care: A Qualitative Study. Background: Osteoporosis is a chronic condition that is often left untreated. Nurse case-managers can double rates of appropriate treatment in those with new fractures. However, little is known about patients' experiences of a nurse case-managed approach to osteoporosis care. Objective: Our aim was to describe patients' experiences of nurse case-managed osteoporosis care. Methods: A qualitative, descriptive design was used. We recruited patients enrolled in a randomized controlled trial of a nurse case-management approach. Individual semi-structured interviews were conducted which were transcribed and analyzed using content analysis. Data were managed with ATLAS.ti version 7. Results: We interviewed 15 female case-managed patients. Most (60%) were 60-years or older, 27% had previous fracture, 80% had low bone mineral density tests, and 87% had good osteoporosis knowledge. Three major themes emerged from our analysis: acceptable information to inform decision-making; reasonable and accessible care provided; and appropriate information to meet patient needs. Conclusions: This study provides important insights about older female patients' experiences with nurse case-managed care for osteoporosis. Our findings suggest that this model to osteoporosis clinical care should be sustained and expanded in this setting, if proven effective. In addition, our findings point to the importance of applying patient-centered care across all dimensions of quality to better enhance the patients' experience of their health care. abstract_id: PUBMED:28740519 Low-dose diclofenac in patients with fragility fractures. Most osteoporotic patients complain of back pain one year after a fragility vertebral fracture and the frequency of chronic back pain increases with increasing age. The use of the lowest effective dose of an analgesic which is able to control symptoms seems to be a possible solution in order to limit potential side effects in multi-treated elderly patients. Non-steroidal anti-inflammatory drugs (NSAIDs) have a proven efficacy in the treatment of back pain associated with fragility vertebral fractures and diclofenac is available at low-dose subcutaneous injective formulation. This is the rational of ImPAVeDic study, acronym of Improvement of back Pain Associated with fragility Vertebral fractures with low-dose Diclofenac, an observational study that will be performed in a group of 50 elderly (≥ 65 years), male and female osteoporotic patients with symptomatic fragility vertebral fractures. The objective of the study is to evaluate the improvement of back pain in the study population treated with low-dose diclofenac and regularly monitored for 2-6 months. Visual Analogic Scale (VAS) and Numerical Rating Scale (NRS) will be used for pain monitoring. The reduction of the risk of occurrence of drug side effects can favour the optimization of elderly patients' care. Answer: Patients often do not perceive a link between a fragility fracture and osteoporosis. Studies have shown that many patients with fragility fractures do not associate their fracture with an increased risk of future fractures or osteoporosis. For instance, one study found that only 17% of fragility fracture patients thought their fracture was related to osteoporosis, and less than 50% perceived themselves at increased risk of fracture. The odds of an individual perceiving themselves at increased risk for fracture were higher for those who reported a diagnosis of osteoporosis, but the odds decreased with increasing age. The perception that the fracture was related to osteoporosis was significantly associated only with a self-reported osteoporosis diagnosis (PUBMED:18366716). Another study reported that at baseline, 93% of patients did not believe their fracture could have been caused by osteoporosis. After an educational program, only 8.2% of these patients changed this perception. Factors that were associated with making the osteoporosis-fracture link at follow-up included a previous fracture, perception of osteoporosis pharmacotherapy benefits, diagnosis of rheumatoid arthritis, and the perception of bones as "thin" (PUBMED:23962530). These findings suggest that there is a need for better communication and education from healthcare providers to patients regarding the relationship between fragility fractures, osteoporosis, and the risk of future fractures. It is crucial for physicians to inform patients that an osteoporosis diagnosis, increasing age, or a fragility fracture increases the risk for future fractures, and to address patient-level barriers to osteoporosis management (PUBMED:18366716; PUBMED:23962530).
Instruction: Should child protection services respond differently to maltreatment, risk of maltreatment, and risk of harm? Abstracts: abstract_id: PUBMED:35204989 Parental Worries, Child Maltreatment Risk and Empowerment: How Are They Noticed in Child and Family Services? Parental empowerment has been related to their well-being and self-efficacy. Learning more about the signs describing child maltreatment risk are crucial for the welfare of children and families. The aim of this study was to assess the risk of child maltreatment (CM) and related worrying factors of parents and associations between the CM risk, worries and parental empowerment. The study is based on self-report surveys administered to parents in primary health care and hospital settings. The risk of CM and related worrying factors were measured by the Brief Child Abuse Potential Inventory (BCAP) from 453 parents. Family empowerment was measured by The Generic Family Empowerment Scale (G-FES). Parents expressed worries such as loneliness and distress (20%), feelings of persecution (9%), family conflict (17%), rigidity (21%) and financial insecurity (4%). The BCAP found 27 parents with increased risk. Parents with CM risk expressed more empowerment in connection to services for their child and family. It is crucial to discuss worries in child and family services before they raise the risk level. Tools such as the BCAP are useful in systematically identifying the child maltreatment risk and parental worries under discussion, offering possibilities for preventing child maltreatment and increasing well-being of children. abstract_id: PUBMED:21481930 Should child protection services respond differently to maltreatment, risk of maltreatment, and risk of harm? Objective: To examine evidence available in large-scale North American datasets on child abuse and neglect that can assist in understanding the complexities of child protection case classifications. Methods: A review of child abuse and neglect data from large North American epidemiological studies including the Canadian Incidence Study of Reported Child Abuse and Neglect (CIS), the National Child Abuse and Neglect Data System (NCANDS), and the National Incidence Studies of Reported Child Abuse and Neglect (NIS). Results: The authors of this paper argue that recent evidence from large North American epidemiological studies examining the incidence of child abuse and neglect demonstrate that children and families identified as being at risk of maltreatment present with as many household and caregiver concerns as investigations that are substantiated. Conclusions: In order to continue to develop appropriate services and policies for vulnerable children the authors urge continue definitional clarity for research in child maltreatment that considers the exemplars or indicators of categories, in tandem with parental and child characteristics which can provide one source of evidence-basis to meaningful child protection case classifications. Continued monitoring, refined by the dilemmas faced in practice, are critical for a continued public health investment in children's well-being, predicated upon upholding children's rights. abstract_id: PUBMED:25107943 Risk factors for child maltreatment recurrence: An updated systematic review. Objectives: Children who have been maltreated are at increased risk of further maltreatment. Identification of those at highest risk of further maltreatment is a priority for professionals working in child protection services. The current study is intended to consolidate and expand on previous work on recurrence of child maltreatment. It has sought to identify risk factors for maltreatment recurrence in the recent literature in the expectation that this may help in the practical identification of children at risk. Methods: We conducted a systematic review of cohort studies published between 2003 and 2009, identifying factors associated with maltreatment recurrence in children. Studies included demonstrated differing levels of substantiation of maltreatment. Results: Fifteen studies met inclusion criteria but showed significant heterogeneity, varying in setting, recruitment of subjects, types of maltreatment considered and length of follow-up. Previous findings were replicated and expanded in the current study in relation to a range of factors, including rates of maltreatment recurrence, maltreatment types, frequency of previous episodes of maltreatment, child and family considerations, home environment and service provision. Factors were identified irrespective of level of maltreatment substantiation. Conclusion: This study provides further systematic evidence of the existence of a number of factors associated with child maltreatment recurrence. It points to the possibility of practical application of its findings within the wider context of decision making in child protection services, with the ultimate aim of reducing recurrence of maltreatment in individual cases. abstract_id: PUBMED:32663718 Exploring the interrelatedness of risk factors for child maltreatment: A network approach. Background: Theories on the etiology of child maltreatment generally focus on the interaction between multiple risk and protective factors. Moreover, the quadratic model of cumulative risk describes a threshold at which the risk of child maltreatment increases exponentially, suggesting a synergistic effect between risk factors. Objective: This study explored the interrelatedness of risk factors for child maltreatment. Participants And Setting: The sample consisted of risk assessments performed for both high-risk families (n = 2,399; child protection services) and lower risk families (n = 1,904; community outreach services). Methods: Network analyses were performed on parental risk factors. Three networks were constructed: a cross-sample network, a high-risk network, and a lower risk network. The relations between risk factors were examined, as well as the centrality of each risk factor in these networks. Additionally, the networks of the two samples were compared. Results: The networks revealed that risk factors for child maltreatment were highly interrelated, which is consistent with Belsky's multi-dimensional perspective on child maltreatment. As expected, risk factors were generally stronger related to each other in the high-risk sample than in the lower risk sample. Centrality analyses showed that the following risk factors play an important role in the development of child maltreatment: "Caregiver was maltreated as a child", "History of domestic violence", and "Caregiver is emotionally absent". Conclusions: We conclude that studying the interrelatedness of risk factors contributes to knowledge on the etiology of child maltreatment and the improvement of both risk assessment procedures and interventions for child maltreatment. abstract_id: PUBMED:24075569 Adolescent risk factors for child maltreatment. We investigate adolescent risk factors, measured at both early and late adolescence, for involvement in child maltreatment during adulthood. Comprehensive assessments of risk factors for maltreatment that use representative samples with longitudinal data are scarce and can inform multilevel prevention. We use data from the Rochester Youth Development Study, a longitudinal study begun in 1988 with a sample of 1,000 seventh and eighth graders. Participants have been interviewed 14 times and, at the last assessment (age 31), 80% were retained. Risk factors represent 10 developmental domains: area characteristics, family background/structure, parent stressors, exposure to family violence, parent-child relationships, education, peer relationships, adolescent stressors, antisocial behaviors, and precocious transitions to adulthood. Maltreatment is measured by substantiated reports from Child Protective Services records. Many individual risk factors (20 at early adolescence and 14 at later adolescence) are significantly, albeit moderately, predictive of maltreatment. Several developmental domains stand out, including family background/structure, education, antisocial behaviors, and precocious transitions. In addition, there is a pronounced impact of cumulative risk on the likelihood of maltreatment. For example, only 3% of the youth with no risk domains in their background at early adolescence were involved in later maltreatment, but for those with risk in 9 developmental domains the rate was 45%. Prevention programs targeting youth at high risk for engaging in maltreatment should begin during early adolescence when risk factors are already at play. These programs need to be comprehensive, capable of addressing the multiple and interwoven nature of risk that is associated with maltreatment. abstract_id: PUBMED:27352090 Cumulative risk hypothesis: Predicting and preventing child maltreatment recidivism. Although Child Protective Services (CPS) and other child welfare agencies aim to prevent further maltreatment in cases of child abuse and neglect, recidivism is common. Having a better understanding of recidivism predictors could aid in preventing additional instances of maltreatment. A previous study identified two CPS interventions that predicted recidivism: psychotherapy for the parent, which was related to a reduced risk of recidivism, and temporary removal of the child from the parent's custody, which was related to an increased recidivism risk. However, counter to expectations, this previous study did not identify any other specific risk factors related to maltreatment recidivism. For the current study, it was hypothesized that (a) cumulative risk (i.e., the total number of risk factors) would significantly predict maltreatment recidivism above and beyond intervention variables in a sample of CPS case files and that (b) therapy for the parent would be related to a reduced likelihood of recidivism. Because it was believed that the relation between temporary removal of a child from the parent's custody and maltreatment recidivism is explained by cumulative risk, the study also hypothesized that that the relation between temporary removal of the child from the parent's custody and recidivism would be mediated by cumulative risk. After performing a hierarchical logistic regression analysis, the first two hypotheses were supported, and an additional predictor, psychotherapy for the child, also was related to reduced chances of recidivism. However, Hypothesis 3 was not supported, as risk did not significantly mediate the relation between temporary removal and recidivism. abstract_id: PUBMED:29678130 Cumulative risk effect of household dysfunction for child maltreatment after intensive intervention of the child protection system in Japan: a longitudinal analysis. Background: Building an effective casework system for child maltreatment is a global issue. We estimated the effect of household dysfunction (i.e., interparental violence, caregiver mental health problems, and caregiver substance abuse) on child maltreatment to understand how to advance the current framework of child welfare. Methods: The sample comprised 759 children (1- to 17-year-old; mean age was 10.6; 404 boys and 355 girls) placed in temporary custody units (one of the strongest intervention of the Japanese child protection system). Caseworkers from 180 units across 43 prefectures completed questionnaires on children and their family and were asked whether a child maltreatment report had been made after cancelation of custody in a 15-month follow-up period. The relations of household dysfunction and maltreatment reports were assessed using the Cox proportional hazard model. Results: About half (48.4%) of the children had been placed in the unit because of maltreatment, and 88.3% had a history of victimization. Seventy-six cases had maltreatment reports after cancelation. We entered household dysfunction variables individually into the model, and each had a significant relationship with maltreatment reports (hazard ratios for interparental violence, caregiver mental health problem, and substance abuse were 1.69, 1.69, and 2.19, respectively) after covariate adjustment. When treating these three variables as cumulative risk score model of household dysfunction, the hazard ratio increased with increasing number of score (1.96 for score two; 2.35 for score three; score 0 as reference). Conclusions: Greater household dysfunction score is a risk of maltreatment after intensive intervention. It is imperative to construct systems facilitating cooperation between child and adult service sectors and to deliver seamless services to children and families. Our findings provide child protect services with risk-stratified interventions for children at victimization risk and promote adult-focused services to be proactive in prevention or intervention for adults with perpetration risk. abstract_id: PUBMED:26056058 Burden attributable to child maltreatment in Australia. Child maltreatment is a complex phenomenon, with four main types (childhood sexual abuse, physical abuse, emotional abuse, and neglect) highly interrelated. All types of maltreatment have been linked to adverse health consequences and exposure to multiple forms of maltreatment increases risk. In Australia to date, only burden attributable to childhood sexual abuse has been estimated. This study synthesized the national evidence and quantified the burden attributable to the four main types of child maltreatment. Meta-analyses, based on quality-effects models, generated pooled prevalence estimates for each maltreatment type. Exposure to child maltreatment was examined as a risk factor for depressive disorders, anxiety disorders and intentional self-harm using counterfactual estimation and comparative risk assessment methods. Adjustments were made for co-occurrence of multiple forms of child maltreatment. Overall, an estimated 23.5% of self-harm, 20.9% of anxiety disorders and 15.7% of depressive disorders burden in males; and 33.0% of self-harm, 30.6% of anxiety disorders and 22.8% of depressive disorders burden in females was attributable to child maltreatment. Child maltreatment was estimated to cause 1.4% (95% uncertainty interval 0.4-2.3%) of all disability-adjusted life years (DALYs) in males, and 2.4% (0.7-4.1%) of all DALYs in females in Australia in 2010. Child maltreatment contributes to a substantial proportion of burden from depressive and anxiety disorders and intentional self-harm in Australia. This study demonstrates the importance of including all forms of child maltreatment as risk factors in future burden of disease studies. abstract_id: PUBMED:32320911 Disadvantaged neighborhoods and the spatial overlap of substantiated and unsubstantiated child maltreatment referrals. Background: Considerable debate exists on whether the substantiation decision is a reliable measure for rates of maltreatment. Studies have shown that risks among children victims of maltreatment versus children investigated but unsubstantiated are similar. Objective: This paper aims to respond to two research questions: (1) Do most child maltreatment referrals, substantiated and unsubstantiated, come from the same neighborhoods? (2) Do substantiated and unsubstantiated referrals share the same neighborhood risk factors? Participants And Settings: We used geocoded data from substantiated (n = 1799) and unsubstantiated (n = 1638) child maltreatment referrals in Valencia, Spain (2004-2015). As the neighborhood proxy, we used 552 Census block groups. Neighborhood characteristics analyzed were: socioeconomic status, immigration concentration, residential instability, and public disorder and crime. Methods: To study the geographical overlap of child maltreatment referrals, a Bayesian joint modeling approach was used. To analyze the influence of neighborhood-level characteristics on risk, we used a Bayesian random-effects modeling approach. Results: For substantiated child maltreatment referrals, 90 % of the total between-area variation in risk is captured by the shared component, while for unsubstantiated child maltreatment referrals, the shared component was 88 %. The correlation between substantiated and unsubstantiated risks of child maltreatment referrals was .80. These risks were higher in neighborhoods with low levels of socioeconomic status, higher immigrant concentration, public disorder and crime. Conclusions: Child maltreatment referrals, regardless of whether substantiated or unsubstantiated, overlap in the same disadvantaged neighborhoods. This suggests that in these neighborhoods, families are at a higher risk of being investigated by child protective services suggesting a potential reporting bias. abstract_id: PUBMED:36205182 Identifying Children at Risk for Maltreatment Using Emergency Medical Services' Data: An Exploratory Study. The objective of this study was to use natural language processing to query Emergency Medical Services (EMS) electronic health records (EHRs) to identify variables associated with child maltreatment. We hypothesized the variables identified would show an association between the Emergency Medical Services encounter and risk of a children maltreatment report. This study is a retrospective cohort study of children with an EMS encounter from 1/1/11-12/31/18. NLP of EMS EHRs was conducted to generate single words, bigrams and trigrams. Clinically plausible risk factors for child maltreatment were established, where presence of the word(s) indicated presence of the hypothesized risk factor. The EMS encounters were probabilistically linked to child maltreatment reports. Univariable associations were assessed, and a multivariable logistic regression was conducted to determine a final set of predictors. 11 variables showed an association in the multivariable modeling. Sexual, abuse, chronic condition, developmental delay, unconscious on arrival, criminal activity/police, ingestion/inhalation/exposure, and &lt;2 years old showed positive associations with child maltreatment reports. Refusal and DOA/PEA/asystole held negative associations. This study demonstrated that through EMS EHRs, risk factors for child maltreatment can be identified. A future direction of this work include developing a tool that screens EMS EHRs for households at risk for maltreatment. Answer: Child protection services (CPS) should indeed respond differently to maltreatment, risk of maltreatment, and risk of harm, as these categories represent different levels and types of threats to child welfare. Evidence from large-scale North American epidemiological studies suggests that children and families identified as being at risk of maltreatment present with as many household and caregiver concerns as investigations that are substantiated for maltreatment (PUBMED:21481930). This indicates that the risk of maltreatment is a significant concern that requires attention and intervention from CPS, potentially as much as actual instances of maltreatment. Moreover, the cumulative risk hypothesis suggests that the total number of risk factors significantly predicts maltreatment recidivism above and beyond intervention variables (PUBMED:27352090). This implies that CPS should consider the overall risk environment when responding to cases, as a higher number of risk factors can increase the likelihood of maltreatment occurring or recurring. Additionally, research has shown that certain risk factors, such as caregiver history of being maltreated as a child, history of domestic violence, and caregiver emotional absence, are central to the development of child maltreatment (PUBMED:32663718). CPS responses should be tailored to address these specific risk factors to effectively prevent maltreatment. Furthermore, studies have demonstrated that disadvantaged neighborhoods often have a spatial overlap of substantiated and unsubstantiated child maltreatment referrals, suggesting that neighborhood-level risk factors should be considered in CPS responses (PUBMED:32320911). This could involve community-level interventions and support services to address the broader environmental risks that contribute to maltreatment and harm. Lastly, the use of tools such as the Brief Child Abuse Potential Inventory (BCAP) can help in systematically identifying the risk of child maltreatment and parental worries, offering possibilities for preventing child maltreatment and increasing the well-being of children (PUBMED:35204989). CPS can benefit from such tools to differentiate between levels of risk and respond appropriately. In summary, CPS should respond differently to maltreatment, risk of maltreatment, and risk of harm by considering the complexity and interrelatedness of risk factors, the cumulative risk environment, and the specific needs of the child and family, as well as the broader community context.
Instruction: Evidence for multidimensional resilience in adult patients with transfusion-dependent thalassemias: Is it more common than we think? Abstracts: abstract_id: PUBMED:27018402 Evidence for multidimensional resilience in adult patients with transfusion-dependent thalassemias: Is it more common than we think? Background: Life expectancy of patients with transfusion-dependent thalassemias has increased with the development of improved treatment over the last few decades. However, β-thalassemia disorder still has considerable lifetime treatment demands and heightened risk of frequent complications due to transfusion-transmitted infections and iron overload, which may affect thalassemic patients' functioning in different domains. Objectives: The vast majority of published studies on thalassemic patients have focused on children and adolescent functioning, and little research has examined adults. Hence, the current study was planned to examine the functioning and resilience of adult thalassemic patients in a comprehensive way. Methods: We examined multidimensional resilience and functioning across different domains (psychological adjustment, treatment adherence, social functioning and occupational functioning). We also examined demographic and medical variables that may relate to resilience and functioning. Participants were adult patients [n = 38; age M = 31·63, standard deviation (SD) = 7·72; 72% female] with transfusion-dependent thalassemia in treatment in a hospital in the northeastern United States. Results: The results suggest that most adult thalassemic patients tend to be resilient, demonstrating good functioning in four main domains: psychological adjustment, treatment adherence, social functioning and occupational functioning. Conclusion: Despite the considerable demands of their illness, adult thalassemic patients appeared to be adapting well, demonstrating evidence of multidimensional resilience. abstract_id: PUBMED:11146569 TT virus infection in adult beta-thalassemia major patients. Background And Objectives: Patients with thalassemia-major are at risk of blood-borne viral infections. TT virus (TTV), a single stranded, circular DNA virus, has recently been found to be associated with acute and chronic hepatitis. The aims of this study were to assess the prevalence of TTV infection in adult patients with transfusion-dependent thalassemia, and to evaluate the clinical significance of TTV. Design And Methods: We studied 68 adult patients with thalassemia major, 97% of whom were hepatitis C virus (HCV) antibody positive. TTV DNA was amplified from serum by heminested polymerase chain reaction (PCR). Direct sequencing of PCR products was used to establish TTV genotypes. Results: TTV DNA was detected in 47 patients (69.1%). Sequence analysis of PCR products identified TTV genotype 1b as the most common viral genotype in this group. Patients co-infected by HCV and TTV had a significantly higher histologic grade score than patients with isolated HCV infection (5.1+/-2.7 vs. 2.8+/-1.7, p=0.02) while the stage score was not significantly different. Interpretation And Conclusions: TTV is highly prevalent among Italian multiply transfused patients. In these patients TTV viremia appears to affect the necro-inflammatory activity of hepatitis C, though no evidence of an effect on the evolution of fibrosis was found. abstract_id: PUBMED:21385429 Ethical issues of unrelated hematopoietic stem cell transplantation in adult thalassemia patients. Background: Beta thalassemia major is a severe inherited form of hemolytic anemia that results from ineffective erythropoiesis. Allogenic hematopoietic stem cell transplantation (HSCT) remains the only potentially curative therapy. Unfortunately, the subgroup of adult thalassemia patients with hepatomegaly, portal fibrosis and a history of irregular iron chelation have an elevated risk for transplantation-related mortality that is currently estimated to be about 29 percent. Discussion: Thalassemia patients may be faced with a difficult choice: they can either continue conventional transfusion and iron chelation therapy or accept the high mortality risk of HSCT in the hope of obtaining complete recovery.Throughout the decision making process, every effort should be made to sustain and enhance autonomous choice. The concept of conscious consent becomes particularly important. The patient must be made fully aware of the favourable and adverse outcomes of HSCT. Although it is the physician's duty to illustrate the possibility of completely restoring health, considerable emphasis should be put on the adverse effects of the procedure. The physician also needs to decide whether the patient is eligible for HSCT according to the "rule of descending order". The patient must be given full details on self-care and fundamental lifestyle changes and be fully aware that he/she will be partly responsible for the outcome. Summary: Only if all the aforesaid conditions are satisfied can it be considered reasonable to propose unrelated HSCT as a potential cure for high risk thalassemia patients. abstract_id: PUBMED:9217195 Prevalence and clinical significance of hepatitis G virus infection in adult beta-thalassaemia major patients. The risk of polytransfused patients for hepatitis C virus (HCV) infection is likely to extend to another recently identified member of the Flaviviridae, hepatitis G virus (HGV). We investigated the prevalence of HGV in 40 adult Italian patients with transfusion-dependent thalassaemia and evaluated the clinical significance of HGV infection. HGV-RNA was detected in 9/40 patients (22.5%). HGV infection was significantly associated with HCV viraemia (P = 0.0012), with all patients positive for HGV being also viraemic for HCV. Overall, the clinical picture of patients with HCV/HGV co-infection was not different from that of patients with isolated HCV. However, patients co-infected with both viruses had lower values of alanine-transferase (P = 0.035) and a lower titre of HCV viraemia (P = 0.042) in the absence of other evident factors which could influence the clinical expression of HCV infection. In conclusion, HGV is highly prevalent among Italian polytransfused patients. No evidence of a clinically significant pathogenic role for HGV in liver disease could be found in these patients. In a subset of cases a possible interference of HGV with HCV infection was observed. abstract_id: PUBMED:37644448 Economic burden of adult patients with β-thalassaemia major in mainland China. Background: β-thalassaemia major poses a substantial economic burden, especially in adults. We aimed to estimate the economic burden of adult patients with β-thalassaemia major from a societal perspective using the real-world data. According to the clinical guideline, we also estimated the annual medical costs for patients with the same body weight and calculated the lifetime medical costs over 50 years in mainland China. Methods: This was a retrospective cross-sectional study. An online survey with snowball sampling covering seven provinces was conducted. We extracted patient demographics, caregiver demographics, disease and therapy information, caring burden, and costs for adult patients diagnosed with β-thalassaemia major and their primary caregivers. In the real world, we estimated the annual direct medical cost, direct nonmedical cost, and indirect cost. In addition, we calculated the annual direct medical cost and lifetime direct medical cost by weight with discounted and undiscounted rates according to the clinical guideline. Results: Direct medical costs was the main driver of total cost, with blood transfusion and iron chelation therapy as the most expensive components of direct medical cost. In addition, adult patients with β-thalassaemia major weighing 56 kg were associated with an increase of $2,764 in the annual direct medical cost using the real-world data. The undiscounted and discounted (5% discount rate) total lifetime treatment costs were $518,871 and $163,441, respectively. Conclusions: Patients with β-thalassaemia major often encounter a substantial economic burden in mainland China. Efforts must be made to help policymakers develop effective strategies to reduce the burden and pevalence of thalassaemia. abstract_id: PUBMED:32532297 Development of the Thalassaemia Adult Life Index (ThALI). Background: Beta Thalassaemia Major (βTM) is a chronic genetic illness whereby the challenges faced by patients exposes them to increased risk of psychosocial issues. Despite this, a disease-specific tool to measure the impact of this illness on adult patients has yet to be developed. Methods: In collaboration with βTM adult patients, this study aimed to develop a comprehensive, disease-specific, easy to use psychometrically sound tool to measure the impact of chelation and transfusion dependent βTM in a cross-cultural patient group in England.The Thalassaemia Life Index (ThALI) was developed in two stages - item generation and pre-testing and item reduction - in collaboration with service users. Recruited adult patients shaped the design of the instrument including its statements and subscales. Standard item reduction techniques were used to develop the instrument. Results: The final version of the ThALI encompasses 35 statements and five sub-scales - general physical health, coping, body image, appearance and confidence, social relationships and autonomy. This endorses the multidimensionality of quality of life (QoL). The factor structure of the ThALI is highly stable and its internal consistency is high (alpha = 0.87 for the overall scale; 0.83-0.94 for its subscales). The ThALI has sound scaling assumptions, acceptability and score variability. Content validity was confirmed by experts and service user interviewees. The loadings for the items retained were adequate and the item discriminant validity sound. Conclusions: The ThALI covers the impact of βTM in adult patients. Preliminary testing shows its multidimensionality to be reliable and valid. The national authentication of the tool with patients treated in Centres of Excellence will aim to provide further evidence regarding the ThALI's psychometric properties. Once authenticated, the ThALI may be utilised in research and in clinical settings to assess the effects of new therapies and/or interventions from the patients' perspective to inform practice and/or to identify areas of concern. abstract_id: PUBMED:16205730 Unrelated donor stem cell transplantation in adult patients with thalassemia. Allogeneic SCT remains the only potential cure for patients with thalassemia. However, most BMT candidates lack a suitable family donor and require an unrelated donor (UD). We evaluated whether BMT using UDs in high-risk adult thalassemia patients can offer a probability of cure comparable to that reported employing an HLA-compatible sibling as donor. A total of 27 adult thalassemia patients (15 males and 12 females, median age 22 years) underwent BMT from a UD selected by high-resolution HLA molecular typing. The conditioning regimen consisted of Busulphan (BU, 14 mg/kg) plus Cyclophosphamide (CY, 120 or 160 mg/kg) in 12 cases and BU (14 mg/kg), Thiotepa (10 mg/kg) and CY (120-160 mg/kg) in the remaining 15 cases. Cyclosporine-A and short-term Methotrexate were used for graft-versus-host disease (GVHD) prophylaxis. In all, 19 patients (70%) are alive and transfusion-independent after a median follow-up of 43 months (range 16-137). A total of 10 patients (37%) developed grade II-IV acute GVHD and six (27%) chronic GVHD. Eight patients (30%) died from transplant-related causes. UD-BMT can cure more than two-thirds of adult thalassemia patients, and is a particularly attractive option for patients who are not compliant with conventional treatment. abstract_id: PUBMED:36743186 Predictors of health state utility values using SF-6D for Chinese adult patients with β-thalassemia major. Background: Patients with β-thalassemia major (β-TM), predominantly adult patients, are associated with physical, mental, and social problems, that result in decreased quality of life (QoL). However, there is a paucity of data on QoL and health status utility (HSU) among adult patients with β-TM in mainland China. Our study aimed to evaluate the QoL by short form 36 questionnaire (SF-36) of adult patients with β-TM in mainland China and to estimate their HSU by SF-6D. In addition, we aimed to identify predictors of HSU. Methods: In this cross-sectional descriptive study, a total of 75 adult patients with β-TM were included by the snowball sampling method that applied involving seven provinces with a relatively high prevalence of thalassemia across mainland China between September 1, 2021 and January 31, 2022. The collected information included social-demographic characteristics, health conditions, treatment, social support (social support rating scale), caregiver burden (Zarit burden interview), and QoL (SF-36). HSU scores were calculated for each adult patient from their SF-36 responses using the SF-6D algorithm with Hong Kong's tariff. The frequency of participants' responses to the SF-6D for each item of the options was described. Mean HSU scores between different subgroups were calculated. Ordinary least squares (OLS) regression modeling was performed to identify factors associated with HSU. Results: A total of 75 adult patients with β-TM were included in this study. The mean SF-36 score was 50.2 ± 10.70, of which physical and mental scores were 47.57 ± 11.28 and 52.85 ± 14.21, respectively. In addition, the mean SF-6D utility score was estimated to be 0.598 ± 0.112, ranging from 0.391 to 0.962. Univariate analyses showed that interruption of iron chelation treatment significantly affected HSU values (P = 0.038); diagnosis with comorbidity very slightly affected HSU values (P = 0.0996). In the multivariate analysis, diagnosis with comorbidity (P = 0.042) was significantly negatively associated with HSU values; the minimum pre-transfusion hemoglobin concentration (P = 0.047) and social support (P = 0.068) were positively associated with HSU values. Conclusion: This study presents poor QoL and HSU outcomes in Chinese adult patients with β-TM. The study also highlights the importance of social support and treatment compliance, which can increase hemoglobin content and reduce comorbidities, further to ensure the QoL of patients. These findings can be used for future clinical and economic studies. abstract_id: PUBMED:28401102 Human parvovirus B19 in patients with beta thalassemia major from Tehran, Iran. Background: Due to the tropism of human parvovirus B19 to erythroid progenitor cells, infection in patients with an underlying hemolytic disorder such as beta-thalassemia major leads to suppression of erythrocyte formation, referred to as transient aplasia crisis (TAC), which may be life-threatening. We investigated the prevalence of parvovirus B19 among patients with beta thalassemia major attending the Zafar Adult Thalassemia Clinic in Tehran, Iran. Methods: This cross-sectional study was performed to determine the presence of parvovirus B19 DNA in blood samples and parvovirus B19 genotypes in plasma samples of patients with thalassemia major. The population consisted of 150 patients with beta-thalassemia major who attended the Zafar clinic in Tehran. Specimens were studied using a real-time polymerase chain reaction assay. Results: The prevalence of parvovirus B19 in our study population was 4%. Of 150 patients with thalassemia, six (4%) were positive for B19 DNA. There was no significant correlation between blood transfusion frequency and B19 DNA positivity. Finally, phylogenetic analysis of human parvovirus B19 revealed genotype I in these six patients. Conclusion: In this study, acute B19 infections were detected in patients with beta thalassemia major. Screening of such high-risk groups can considerably reduce the incidence and prevalence of B19 infection; thus, screening is required for epidemiologic surveillance and disease-prevention measures. abstract_id: PUBMED:37488065 Thalassemia-related complications in pediatric, adolescent, and young adult patients with transfusion-dependent thalassemia: A multicenter study in Thailand. Introduction: Management of transfusion-dependent thalassemia (TDT) can be challenging due to numerous potential disease-related complications and comorbidities in particular age groups. The objective of this study was to report thalassemia-related complications and risk factors in pediatric, adolescent, and young adult patients with TDT. Methods: A multicenter web-based registry was conducted in patients with TDT aged 25 years and younger from eight university hospitals covering all parts of Thailand. Factors significantly associated with each complication were analyzed by logistic regression methods. Results: Of 605 patients, 267 thalassemia-related complications were reported from 231 pediatric, adolescent, and young adult patients with TDT patients (38.2%). The most common complications were infections, followed by cholelithiasis and growth failure. Splenectomy and elevated pre-transfusion hemoglobin were statistically significant risk factors for infections (adjusted odds ratio [AOR] = 2.3, 95% confidence interval [CI]: 1.2-4.5, p-value = .01 and AOR = 1.5, 95% CI: 1.2-1.7, p-value &lt; .005, respectively). There were two statistically significant risk factors conferred endocrinopathies, including older age (AOR = 1.06, 95% CI: 1.01-1.1, p-value = .01) and being male (AOR = 2.4, 95% CI: 1.4-4.0, p-value = .002). Conclusion: Nearly 40% of the patients in this cohort had thalassemia-related complications. Periodic surveillance and optimal care for respective complications may minimize comorbidities in pediatric, adolescent, and young adult patients with TDT. Answer: The evidence suggests that multidimensional resilience in adult patients with transfusion-dependent thalassemias may indeed be more common than previously thought. A study examining the functioning and resilience of adult thalassemic patients across different domains, including psychological adjustment, treatment adherence, social functioning, and occupational functioning, found that most adult thalassemic patients tend to be resilient, demonstrating good functioning in these main domains (PUBMED:27018402). Despite the considerable demands of their illness, these patients appeared to be adapting well, which indicates evidence of multidimensional resilience. This resilience is particularly noteworthy given the heightened risk of complications due to transfusion-transmitted infections and iron overload that can affect thalassemic patients' functioning in various domains.
Instruction: Is blood homocysteine elevated in migraine? Abstracts: abstract_id: PUBMED:11576202 Is blood homocysteine elevated in migraine? Objective: To determine total serum homocysteine levels in a large group of patients with migraine with and without aura. Background: Hypercoagulable state is a known risk factor for stroke in the young. The existence of a hypercoagulable state has been postulated in migraine and homocysteinemia with young-onset stroke. To the best of our knowledge, blood homocysteine has not been studied in a significant number of patients with various forms of migraine. Methods: Total serum homocysteine was measured with high-performance liquid chromatography in 78 patients with migraine and in 126 age- and sex-matched healthy volunteers. Results: Seventy-eight patients aged 18 to 65 years were studied: 22 with migraine with aura and 56 with migraine without aura. Only 1 man had significantly elevated blood homocysteine (38.6 micromol/L), while another had a borderline elevation (15.8 micromol/L) (reference value for both sexes in our laboratory is 4 to 14 micromol/L). Both patients suffered from migraine without aura. Conclusions: Blood homocysteine is not elevated in migraine. abstract_id: PUBMED:25657748 Association between serum levels of homocysteine with characteristics of migraine attacks in migraine with aura. Background: Evidences have shown that migraine with aura (MA) is associated with elevated homocysteine levels but, few studies have evaluated the relationship between homocysteine levels and characteristics of migraine attacks such as severity, frequency, duration and headache diary result (HDR). Thus, in this study, we investigated the association between homocysteine levels and characteristics of migraine attacks in patients with MA. Materials And Methods: This cross-sectional study was carried out in Isfahan city, Iran, in February 2013. Fasting serum levels of homocysteine were measured in 130 MA patients (31 males and 99 females) aged 15-60 years. Severity, frequency and duration of migraine attacks, as well as HDR, were determined in each patient according to international headache society criteria by a neurologist. Linear and ordinal logistic regression tests were used to evaluate the relationship between serum homocysteine levels and characteristics of migraine attacks. Results: There is no significant association between serum levels of homocysteine with severity, frequency, duration and HDR. This association was not significant after adjustment of confounding variables such as age, body mass index (BMI) and family history of migraine. However, serum homocysteine levels were significantly associated with HDR among males after adjustment for age, BMI and family history of migraine (P = 0.01). Conclusion: Significant relationship between homocysteine levels and characteristics of migraine attacks such as severity, frequency, duration and HDR were not found. However, after adjustment of confounding variables, we found a significant positive relationship between homocysteine levels and HDR among men. abstract_id: PUBMED:24613517 Homocysteine and migraine. A narrative review. Recent evidence suggests that migraine is associated with an increased risk of cardiovascular disorders, so that it is increasingly hypothesized that this primary form of headache may be linked to thrombotic diseases by some biological pathways and risk factors. Homocysteine, a sulfur-containing molecule, is now recognized as an independent risk factor for a variety of thrombotic disorders, especially ischemic heart disease and stroke. This article is hence aimed to provide an overview of epidemiological evidence about the association between homocysteine and migraine published in cross-sectional, prospective or interventional studies. Overall, the evidence gathered from cross-sectional studies that measured plasma homocysteine levels suggests that the epidemiological link between the plasma concentration of this biomarker and migraine is very weak, at best. Contradictory evidence emerged from interventional studies, in which treatment of hyperhomocysteinemia with folic acid or vitamin B supplementation was effective to lower plasma homocysteine and decrease frequency and/or severity of migraine. The association remains largely speculative, however, since it could not be clearly demonstrated that these two biological effects were directly linked. The only study that has assessed homocysteine in cerebrospinal fluid reported that the concentration of this biomarker in migraine patients was significantly increased compared to controls. Although this evidence must be obviously confirmed in larger trials, some putative mechanisms may support a causal link between increased generation of homocysteine in the brain environment and migraine. abstract_id: PUBMED:21702266 Plasma homocysteine in patients of migraine without aura. Background: Few studies have studied the role of homocysteine in migraineurs and have produced conflicting results. The MTHFR C677T genotype has been associated with increased risk of migraine in selected clinical samples. We assessed the association of the MTHFR C677T variant with migraine, the corresponding homocysteine levels and their correlation. Method: We studied 27 random adult migraineurs with aura (MWA), migraine without aura (MWOA), and 32 non-migraineurs (controls) from Lahore, Pakistan in this pilot study which is still under progress. Results: We found significant differences in homocysteine levels between various diagnostic groups (K-W test: p=0.005). One-way ANOVA, post-hoc tests revealed significant differences in homocysteine levels between Non-migraineurs, MWA (p=0.002, CI: 1.93 - 9.19) and MWoA (p=0.002, CI: -9.19 - -1.9). We found a significant association between the migraine group and C677T-MTHFR variant mutant allele (C/T) (p=0.039). We did not find a significant association between C677T-MTHFR variant and homocysteine levels. Conclusion: In this pilot study, we found plasma homocysteine levels to be significantly associated with MWOA. Additionally, plasma homocysteine levels were lower in MWA than in MWOA. Furthermore, we did not find a relationship between homocysteine levels and the MTHFR variant (SNP rs1801133). Lastly, there may be a relationship between the MTHFR variant (SNP rs1801133) and migraine in this population. abstract_id: PUBMED:19384265 The effects of vitamin supplementation and MTHFR (C677T) genotype on homocysteine-lowering and migraine disability. Background: Migraine is a prevalent and debilitating disease that may, in part, arise because of disruption in neurovascular endothelia caused by elevated homocysteine. This study examined the homocysteine-lowering effects of vitamin supplementation on migraine disability, frequency and severity and whether MTHFRC677T genotype influenced treatment response. Methods: This was a randomized, double-blind placebo, controlled trial of 6 months of daily vitamin supplementation (i.e. 2 mg of folic acid, 25 mg vitamin B6, and 400 microg of vitamin B12) in 52 patients diagnosed with migraine with aura. Findings: Vitamin supplementation reduced homocysteine by 39% (approximately 4 mumol/l) compared with baseline, a reduction that was greater then placebo (P=0.001). Vitamin supplementation also reduced the prevalence of migraine disability from 60% at baseline to 30% after 6 months (P=0.01), whereas no reduction was observed for the placebo group (P&gt;0.1). Headache frequency and pain severity were also reduced (P&lt;0.05), whereas there was no reduction in the placebo group (P&gt;0.1). In this patient group the treatment effect on both homocysteine levels and migraine disability was associated with MTHFRC677T genotype whereby carriers of the C allele experienced a greater response compared with TT genotypes (P&lt;0.05). Interpretation: This study provides some early evidence that lowering homocysteine through vitamin supplementation reduces migraine disability in a subgroup of patients. Larger trials are now warranted to establish whether vitamin therapy is a safe, inexpensive and effective prophylactic option for treatment of migraine and whether efficacy is dependant on MTHFRC677T genotype. abstract_id: PUBMED:19804388 Increased dopamine is associated with the cGMP and homocysteine pathway in female migraineurs. Background: The group of catecholamines, which include dopamine, adrenaline, and noradrenaline, are neurotransmitters which have been considered to play a role in the pathogenesis of migraine. However, the impact of catecholamines, especially dopamine on migraine as well as the exact mechanisms is not clear to date as previous studies have yielded in part conflicting results. Objective: This study aimed to produce a comprehensive examination of dopamine in migraineurs. Methods: Catecholamines and various parameters of the homocysteine, folate, and iron metabolism as well as cyclic guanosine monophosphate (cGMP) and inflammatory markers were determined in 135 subjects. Results: We found increased dopamine levels in the headache free period in female migraineurs but not in male patients. Increased dopamine is associated with a 3.30-fold higher risk for migraine in women. We found no significant effects of aura symptoms or menstrual cycle phases on dopamine levels. Dopamine is strongly correlated with cGMP and the homocysteine-folate pathway. Conclusion: We show here that female migraineurs exhibit increased dopamine levels in the headache free period which are associated with a higher risk for migraine. abstract_id: PUBMED:23891042 The role of hyperhomocysteinemia in neurological features associated with coeliac disease. Although a range of neurological and psychiatric disorders are widely reported to be associated with coeliac patients, their pathogenesis remains unclear. Some such disorders are believed to be secondary to vitamin deficiency due to malabsorption, others to immune mechanisms. We hypothesise that hyperhomocysteinemia might, by damaging the blood-brain barrier, expose neuronal tissue to all neuro-irritative metabolites, such as homocysteine itself, a neurotoxic excitatory and proconvulsant amino acid. Neurons respond to these stimuli through hyperexcitability, thereby predisposing subjects to neurological disorders such as epilepsy and headache. Furthermore, persisting endothelial damage may cause blood extravasation and subsequent deposition of calcium salts. We suggest that this might be the pathogenesis of the CEC syndrome, which is characterized by the association of coeliac disease, epilepsy and cerebral calcifications. Indeed, homocysteine plays a well-known role in cardiovascular endothelial dysfunction, with high serum and cerebrospinal fluid levels often being reported in coeliac patients. Moreover, data in the literature show a strong, growing association of homocysteine with epilepsy and migraine in non-coeliac subjects. Despite these findings, homocysteine has never been held directly responsible for neuronal functional features (neuronal hyperexcitability underlying epilepsy and migraine) and structural brain damage (expressed as cerebral calcification) in coeliac patients. Damage to the blood-brain barrier might also facilitate immune reactions against neuronal tissue to a considerable extent. This hypothesis combines the two afore-mentioned theories (vitamin deficiency due to malabsorption and immune mechanisms). We also wish to point out that no studies have yet investigated the prevalence of neuronal hyperexcitability and subclinical electroencephalic abnormalities in children and adults with newly-diagnosed coeliac disease before the introduction of a gluten-free diet, and in particular any changes following the introduction of the diet. We believe that the onset of clinical symptoms such as migraine and convulsions is preceded by a period in which damage is expressed exclusively by subclinical electroencephalic abnormalities; persisting damage to neuronal tissue subsequently leads to clinical manifestations. We propose two types of investigations: the first is to determine whether newly-diagnosed coeliac patients with hyperhomocysteinemia are a subgroup at risk for neurological features (clinical and subclinical); the second is to determine whether appropriate treatment of hyperhomocysteinemia and vitamin B status deficiency improves neurological abnormalities and reduces the risk of cerebral calcifications. The aim of these investigations is to develop new therapeutic strategies designed to prevent neuronal damage and increase the quality of life in children affected by such disorders. abstract_id: PUBMED:20561069 A remarkable increase in total homocysteine concentrations in the CSF of migraine patients with aura. Objective: To investigate whether total and free homocysteine (HC) levels are increased in the cerebrospinal fluid (CSF) of patients with migraine headache compared with normal control populations. Methods: The concentrations of free and total HC in the CSF of migraine without aura (MOA) and migraine with aura (MWA) patients were determined. Results: The concentration of free HC did not differ significantly from normal controls, but the total HC concentration was significantly higher in MOA and MWA patients (41% increase in MOA, P &lt; .001 and 376% increase in MWA, P &lt; .0001) in the mean of the confidence interval of each groups compared with normal controls). Conclusions: These findings suggest that an increase of total HC concentration in the brain is commonly seen in migraine patient and is particularly pronounced in MWA sufferers. We speculate that total HC not only contribute to the development of atherosclerotic conditions, including cardiocerebrovascular diseases, but also reflects an epiphenomenon. abstract_id: PUBMED:31241265 Hyperhomocysteinemia in female migraineurs of childbearing ages. Background And Purpose: Migraine is a risk factor for ischemic stroke in women of childbearing ages. Previous researches revealed a higher prevalence of hyperhomocysteinemia in migraineurs. Possible differences on the frequencies of hyperhomocysteinemia between migraine with aura and migraine without aura could contribute the established variances in stroke risk between these migraine types. Therefore, we aimed to search if the frequency of hyperhomocysteinemia was different between these subtypes of migraine or not. Methods: We analyzed the findings of serum homocysteine levels in female migraineurs of 16-49 years old who admitted to our outpatient clinic. Results: Homocysteine level was elevated in 13.3% of study population. There were not any significant differences on median serum homocysteine levels between migraine with aura (8.0 mikromol/L) and without aura (8.5 mikromol/L). (p=0.426) The frequencies of hyperhomocysteinemia were also similar (9.1% versus 16.7%, respectively; p=0.373). Correlation analyses did not reveal any linear correlation between ages and homocysteine levels either in group of migraine with aura or in group of migraine without aura (p=0.417 and p=0.647, respectively). Similarly, any linear correlation between disease ages and homocysteine levels either in group of migraine with aura or in group of migraine without aura was not detected (p=0.359 and p=0.849, respectively). Conclusion: The median serum homocysteine levels and the frequencies of hyperhomocysteinemia are similar between migraine with aura and without aura in women of childbearing ages. Therefore, the variances on stroke risk ratios between these types of migraine are probably not originated from the differences of serum homocysteine status. abstract_id: PUBMED:19619240 The relationship between homocysteine and genes of folate-related enzymes in migraine patients. Background: It has been suggested that homocysteine (Hcy) and the 5'-10'-methylenetetrahydrofolate reductase (MTHFR) C677T variant are implicated in the pathogenesis of migraine. Homocysteine has the potential to damage endothelium and accelerate atherosclerosis. Genetic factors such as the MTHFR C677T polymorphism, and other polymorphisms in folate-related genes associated with high homocysteine levels, may contribute to increasing this vascular risk. Results: We recruited 427 migraine patients (199 without aura [MO]; 228 with aura [MA]), and 310 controls in a neurologic clinic. Hcy levels and 6 polymorphisms corresponding to 6 folate-related genes, including the MTHFR C677T variant, were determined in all migraine participants and in a subset of 155 controls. We found higher sex-adjusted Hcy levels in MA (mean: 11.02 microM) than MO patients (9.86 microM; P = .005 for the difference). Hcy levels higher than 12.0 microM doubled the risk for MA (OR = 2.145; 95% confidence intervals [CI] = 1.3-3.4; P = .001), and those higher than 15.0 microM incurred a 6-fold increase (OR = 5.95; 95% CI = 2.1-20.0, P &lt; .001). The number of MTHFR 677T alleles was the best genetic predictor of Hcy levels (r(2) = 0.06; P = 6.2e-6; corrected for genetic variants analyzed) and this effect remained significant after correction for other confounding factors. Using multi-dimensionality reduction approaches, we observed significant epigenetic interaction among some of the folate-related genetic variants to predict higher Hcy levels, and also among higher Hcy levels and folate-related genetic variants to predict the end-diagnosis of MA only among migraineurs. In controls, Hcy levels and the number of MTHFR 677T alleles were found to be intermediate between those observed in MA and MO patients. Conclusion: Our results suggest that MA patients have higher Hcy levels. We also observed complex epigenetic interaction among folate-related enzymes, sex, and Hcy levels predicting MA phenotype. Nevertheless, genetic factors explained only a minor proportion of the variance for both Hcy plasma levels and for predicting MA phenotype. Determination of MTHFR C677T polymorphisms and Hcy levels may be useful to identify patients with a high risk of suffering from MA. Answer: The evidence regarding the elevation of blood homocysteine levels in migraine patients is mixed and somewhat contradictory. Some studies have found no significant elevation of blood homocysteine in migraine patients. For instance, one study measured total serum homocysteine in 78 patients with migraine and found that blood homocysteine was not elevated in this group, with only one man having significantly elevated levels and another with borderline elevation, both suffering from migraine without aura (PUBMED:11576202). Another study also reported no significant association between serum levels of homocysteine and characteristics of migraine attacks such as severity, frequency, duration, and headache diary result (HDR) in patients with migraine with aura (PUBMED:25657748). However, other studies have suggested a possible link between homocysteine and migraine. A narrative review indicated that while cross-sectional studies show a weak link between plasma homocysteine levels and migraine, interventional studies have found that treatment with folic acid or vitamin B supplementation, which lowers plasma homocysteine, can decrease the frequency and/or severity of migraine (PUBMED:24613517). Another study found that plasma homocysteine levels were significantly associated with migraine without aura, and that homocysteine levels were lower in migraine with aura than in migraine without aura (PUBMED:21702266). Furthermore, a randomized controlled trial found that vitamin supplementation reduced homocysteine levels and the prevalence of migraine disability, with the treatment effect on both homocysteine levels and migraine disability being associated with the MTHFR C677T genotype (PUBMED:19384265). Additionally, increased dopamine levels, which are associated with the homocysteine-folate pathway, were found in female migraineurs during the headache-free period (PUBMED:19804388). In summary, while some studies do not find a significant elevation of blood homocysteine in migraine patients, others suggest that there may be a link, particularly in certain subgroups of patients or when considering the effects of vitamin supplementation on homocysteine levels and migraine symptoms. The relationship between homocysteine and migraine may be influenced by genetic factors such as the MTHFR C677T genotype and could vary between different types of migraine (with aura vs. without aura).
Instruction: Are registered sputum smear-negative tuberculosis patients in Karnataka, India, diagnosed by national algorithm? Abstracts: abstract_id: PUBMED:25517817 Are registered sputum smear-negative tuberculosis patients in Karnataka, India, diagnosed by national algorithm? Setting: Four districts of Karnataka State, India, that have implemented the National Tuberculosis Control Programme (RNTCP). Objective: To assess the proportion of patients diagnosed according to the nationally recommended algorithm among new smear-negative (NSN) PTB cases registered under the RNTCP. Methods: Information on 201 registered NSN-PTB patients as regards date of initial sputum examination, repeat sputum examination and chest X-ray (CXR) if undertaken, treatment initiation and number of days of antibiotic treatment after initial sputum examination, were collected through record review and patient interviews. In patients with negative or unknown human immunodeficiency virus (HIV) status, the algorithm was considered completed if the patient underwent initial sputum examination, antibiotic trial for ⩾10 days, repeat sputum examination ⩾10 days after initial sputum examination, CXR after repeat sputum examination and anti-tuberculosis treatment ⩾10 days after initial sputum examination. In HIV-positive patients, the algorithm was considered completed if CXR was performed after or at the same time as initial sputum examination. Results: Complete information was available for 170 patients. Of these, the algorithm was completed in 14 (8.2%, 95%CI 0.9-15.5): 1/140 patients with negative or unknown HIV status and 13/30 HIV-positive patients. Conclusion: The algorithm was not completed in most patients registered for treatment. Measures are needed to improve the diagnostic process for smear-negative PTB. abstract_id: PUBMED:25216839 Implementation efficiency of a diagnostic algorithm in sputum smear-negative presumptive tuberculosis patients. Background: For the diagnosis of smear-negative pulmonary tuberculosis (PTB), India's Revised National Tuberculosis Control Programme (RNTCP) recommends a course of broad-spectrum antibiotics after negative smear on initial sputum examination, followed by repeat sputum examination and chest X-ray (CXR). Objectives: 1) To ascertain the proportion of presumptive PTB patients smear-negative on initial sputum examination who completed the diagnostic algorithm, and 2) to investigate barriers to the completion of the algorithm. Methods: In Karnataka State, India, 256 study participants were interviewed in 2012 to ascertain the number of days antibiotics had been prescribed and consumed, the number of re-visits to health centre(s), whether repeat sputum examinations had been performed, whether or not CXR had been performed and when, and whether PTB had been diagnosed. In-depth interviews were conducted with 19 medical officers. Results: The diagnostic algorithm was completed in 13 (5.1%) of 256 participants; three were diagnosed with PTB without completing the algorithm. Most medical officers were unaware of the algorithm, had trained 5-10 years previously, prescribed antibiotics for &lt;10 days and advised CXR without repeat sputum examination, irrespective of the number of days of antibiotic treatment. Other main reasons for non-completion of algorithm were patients not returning to the health centres and a proportion switching to the private sector. Conclusion: Refresher training courses, raising patient awareness and active follow-up of patients to complete the algorithm are suggested. abstract_id: PUBMED:26392971 Sputum smear examination and time to diagnosis in patients with smear-negative pulmonary tuberculosis in the Pacific. Setting: National tuberculosis programmes (NTPs) in Kiribati and the Marshall Islands, 2006-2010. Objective: To determine the proportion of all tuberculosis (TB) cases that were pulmonary smear-negative, and for these patients to determine how many sputum smears were examined and the time from sputum smear examination to registration. Design: A retrospective cross-sectional study involving a record review of national TB and laboratory registers. Results: Of 2420 TB cases identified, 709 (29%) were registered as smear-negative pulmonary TB. Of the 695 (98%) with information on smear examination, 222 (32%) had no smear recorded, 61 (9%) had one smear, 86 (12%) two smears and 326 (47%) three smears. Among the 473 patients who had at least one smear, 238 (50%) were registered before sputum examination, 131 (28%) within 1 week, 72 (15%) between 1 and 4 weeks, and 34 (7%) &gt;4 weeks after sputum examination. Conclusion: NTPs in Kiribati and the Marshall Islands are diagnosing 29% of all TB patients as smear-negative pulmonary TB. Many patients do not have smears done or are registered before undergoing smear examination. Corrective measures are needed. abstract_id: PUBMED:26393052 Pretreatment sputum smear grade and smear positivity during follow-up of TB patients in Ahmedabad, India. In Ahmedabad, India, a retrospective record review was undertaken among 2842 sputum smear-positive tuberculosis patients registered for treatment from April to September 2011 to assess the association of pretreatment sputum smear grade with sputum positivity and the additional yield of a second sputum sample during each follow-up examination. Respectively 39%, 26%, 28% and 7% of patients had pretreatment sputum grade 3+, 2+, 1+ and scanty. The higher the pretreatment sputum grade, the higher the proportion found positive during various follow-up periods. Overall, the additional yield of the second sputum sample was &lt;2%; it did not vary with pretreatment smear grading. abstract_id: PUBMED:23152880 Should sputum smear examination be carried out at the end of the intensive phase and end of treatment in sputum smear negative pulmonary TB patients? Background: The Indian guidelines on following up sputum smear-negative Pulmonary tuberculosis (PTB) patients differ from the current World Health Organization (WHO) guidelines in that the former recommends two follow up sputum examinations (once at the end of intensive phase and the other at the end of treatment) while the latter recommends only one follow up sputum smear microscopy examination, which is done at the end of the intensive phase. This study was conducted to examine if there was any added value in performing an additional sputum smear examination at the end of treatment within the context of a national TB program. Methods: This study was a descriptive record based review conducted in nine tuberculosis (TB) units in Delhi, India. All consecutive new sputum smear-negative PTB patients registered in these nine TB units from 1(st) January 2009 to 31(st) December 2009 were included in the study. Results: Of 2567 new sputum smear-negative TB patients, 1973 (90%) had sputum specimens examined at the end of the intensive phase, of whom 36 (2%) were smear-positive: the majority (n = 28) successfully completed treatment with either the same or a re-treatment regimen. At treatment completion, 1766 (85%) patients had sputum specimens examined, of whom 16 (0.9%) were smear-positive: all these were changed to a re-treatment regimen. Amongst the sputum-positive patients identified as a result of follow up (n = 52), four were diagnosed with multi-drug resistant TB (MDR-TB), three of whom were detected after smear examination at the end of treatment. Conclusions: Given the high burden of TB in India, a 0.9% additional yield of smear-positive sputum smears at the end of treatment translates to 3,297 cases of smear-positive PTB. End-of-treatment smear is a low-yield strategy for detection of smear-positive TB cases, although further studies are needed to determine its population-level impact and cost, particularly in relation to other TB control interventions. abstract_id: PUBMED:32642373 Diagnostic Accuracy of High-Resolution Computed Tomography of Chest in Diagnosing Sputum Smear Positive and Sputum Smear Negative Pulmonary Tuberculosis. Introduction Pulmonary tuberculosis (PTB) is caused by species of organisms in the Mycobacterium tuberculosis complex. It is a major public health problem worldwide and is endemic in Pakistan. Various clinical and biochemical markers exist for its diagnosis. Radiology has an important role in the diagnosis of PTB. Initially, a chest radiograph is warranted for PTB evaluation. High-resolution computed tomography (HRCT) also has high sensitivity and specificity for PTB diagnosis. Features of primary TB include consolidation, lymphadenopathy, pleural effusion and miliary nodules whereas post-primary TB include apical consolidation, nodules and cavitation. The aim of this study was to determine the diagnostic accuracy of HRCT chest in diagnosing sputum smear positive and smear negative PTB. Methods A cross-sectional study was conducted at a large tertiary care teaching hospital. A retrospective review of medical records of patients who underwent HRCT chest and sputum acid-fast bacillus (AFB) direct smear and AFB culture for suspicion of PTB was undertaken. All HRCT chest examinations were performed on multislice computed tomography (CT) scanner. On HRCT, PTB was defined as the presence of consolidation, centrilobular nodules, branching nodules with tree in bud appearance with or without lymphadenopathy and pleural effusion. Diagnostic accuracy of HRCT including sensitivity, specificity, positive and negative predictive values was calculated using 2 x 2 table, taking findings of AFB culture as a gold standard. Results A total of 108 patients were included in this study with a mean age of 51.85 ± 16.86 years. Diagnostic accuracy of HRCT in diagnosing PTB was found to be 84.26% with sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) of 89.09%, 79.25%, 81.67%, and 87.50%, respectively. In sputum smear positive patients, HRCT has diagnostic accuracy, sensitivity, specificity, PPV and NPV of 87.50%, 88.57%, 84.62%, 93.94%, and 73.33%, respectively. In sputum smear negative patients, HRCT has diagnostic accuracy, sensitivity, specificity, PPV and NPV of 81.67%, 90.00%, 77.50%, 66.67%, and 93.94%, respectively. Conclusion HRCT has high sensitivity in diagnosing sputum smear positive and sputum smear negative PTB. The specificity of HRCT in diagnosing sputum smear positive PTB was high, whereas it was slightly low in diagnosing sputum smear negative PTB. Overall diagnostic accuracy of HRCT was high in diagnosing PTB. abstract_id: PUBMED:25606253 An audit of sputum smear negative pulmonary tuberculosis cases in kinta district, perak, in 2011. 58 cases of sputum smear negative pulmonary tuberculosis (PTB) were reported to Kinta Health Department between January and June 2011. It was found that 47 cases (81%) were sputum smear negative cases. Six cases (10.3%) were actually sputum smear positive but were wrongly reported as smear negative. Five cases (8.6%) were misdiagnosed as tuberculosis. Majority of these patients (82.7%) sought consultation and treatment after two weeks of having cough. Only 6.9% sought treatment for cough that was less than two weeks old. Accurate diagnosis of PTB is crucial and the Perak State Health Department needs to quickly adopt steps to improve the situation. abstract_id: PUBMED:20616936 Diagnosing sputum/smear-negative pulmonary tuberculosis: Does fibre-optic bronchoscopy play a significant role? Background: Diagnosis of sputum/smear-negative pulmonary tuberculosis patients can be both challenging and time consuming with many patients being put on empirical anti-tubercular treatment. Fibreoptic bronchoscopy may provide a confirmative and early diagnosis in such patients. Aims: To assess the role of fibreoptic bronchoscopy in the diagnosis of sputum /smear-negative pulmonary tuberculosis. Materials And Methods: The study was conducted on 75 suspected sputum / smear-negative pulmonary tuberculosis cases attending Pulmonary Medicine Department of Mamata Medical College and Hospital, Khammam, AP. Fibreoptic bronchoscopy was performed; culture of sputum and bronchial washings for Mycobacterium tuberculosis was done by BACTEC method. Results: A final diagnosis of sputum /smear-negative pulmonary tuberculosis was made in 60 patients. Bronchial washings smear for acid-fast bacilli (AFB) was positive in 21 patients while culture of bronchial washings was positive in 39 patients. In 29 patients, smear or culture of bronchial washing alone contributed to the final diagnosis. Total yield of bronchoscopy in diagnosis of sputum smear negative pulmonary tuberculosis was 83.33% (50/60); bronchoscopy was the only diagnostic method in 66% cases (40/60) with bronchial washings being the only diagnostic method in 48.33%. Bronchial washings smear for AFB and histopathological evidence of caseating granuloma made immediate diagnosis possible in 48.33% (29/60) patients. Conclusion: Our study suggests that fibreoptic bronchoscopy can provide excellent material for diagnosis of suspected cases of Pulmonary Tuberculosis in whom smears of expectorated sputum do not reveal mycobacteria. abstract_id: PUBMED:26786483 Diagnosis of smear-negative tuberculosis in Nigeria: Do health care workers adhere to the national guidelines? Objective: The study sought to assess the extent to which healthcare workers (HCWs) adhere to the National Tuberculosis Program (NTP) guidelines for the diagnosis of smear negative tuberculosis in Nigeria. Method: This was a cross-sectional retrospective desk analysis of case files of 280 smear negative pulmonary TB in six States in southern Nigeria. Results: About 93% of the 280 patients had their first set of sputum smear microscopy tests done, but only 3.6% had the second set of diagnostic tests as prescribed by the NTP guidelines. Only 45.7% (128/280) received broad spectrum antibiotics after their first smear microscopy. 98% had a chest X-ray done, while 93.6% (262/280) had HIV counseling and testing (HCT), out of which 45.0% were HIV positive. Overall, only 2 patients (0.7%) were diagnosed in strict compliance with the NTP guidelines. There was no significant difference in the pattern of diagnosis of smear negative TB cases and smear positive TB cases. Conclusion: The adherence of HCWs to the NTP guidelines for diagnosis of smear negative TB is apparently sub-optimal and needs improvement. abstract_id: PUBMED:22723991 Sputum smear microscopy at two months into continuation-phase: should it be done in all patients with sputum smear-positive tuberculosis? Background: The Revised National Tuberculosis Control Program (RNTCP) of India recommends follow-up sputum smear examination at two months into the continuation phase of treatment. The main intent of this (mid-CP) follow-up is to detect patients not responding to treatment around two-three months earlier than at the end of the treatment. However, the utility of mid-CP follow-up under programmatic conditions has been questioned. We undertook a multi-district study to determine if mid-CP follow-up is able to detect cases of treatment failures early among all types of patients with sputum smear-positive TB. Methodology: We reviewed existing records of patients with sputum smear-positive TB registered under the RNTCP in 43 districts across three states of India during a three month period in 2009. We estimated proportions of patients that could be detected as a case of treatment failure early, and assessed the impact of various policy options on laboratory workload and number needed to test to detect one case of treatment failure early. Results: Of 10055 cases, mid-CP follow-up was done in 6944 (69%) cases. Mid-CP follow-up could benefit 117/8015 (1.5%) new and 206/2040 (10%) previously-treated sputum smear-positive cases by detecting their treatment failure early. Under the current policy, 31 patients had to be tested to detect one case of treatment failure early. All cases of treatment failure would still be detected early if mid-CP follow-up were discontinued for new sputum smear-positive cases who become sputum smear-negative after the intensive-phase of treatment. This would reduce the related laboratory workload by 69% and only 10 patients would need to be tested to detect one case of treatment failure early. Conclusion: Discontinuation of mid-CP follow-up among new sputum smear-positive cases who become sputum smear-negative after completing the intensive-phase of treatment will reduce the laboratory workload without impacting overall early detection of cases of treatment failure. Answer: The studies conducted in Karnataka, India, indicate that the majority of registered sputum smear-negative tuberculosis (TB) patients are not diagnosed according to the nationally recommended algorithm. In one study, complete information was available for 170 patients, and the diagnostic algorithm was completed in only 14 (8.2%) of these cases. Specifically, only 1 out of 140 patients with negative or unknown HIV status and 13 out of 30 HIV-positive patients had the algorithm completed (PUBMED:25517817). Another study found that the diagnostic algorithm was completed in only 13 (5.1%) of 256 participants, with most medical officers being unaware of the algorithm and not following the recommended steps, such as prescribing antibiotics for less than 10 days and advising chest X-ray without repeat sputum examination (PUBMED:25216839). These findings suggest that there is a significant gap between the national guidelines and the actual practice in diagnosing sputum smear-negative TB patients in Karnataka. The reasons for non-completion of the algorithm include patients not returning to health centers, a switch to the private sector, and lack of awareness among medical officers about the diagnostic algorithm. Consequently, measures such as refresher training courses for medical officers, raising patient awareness, and active follow-up of patients are suggested to improve adherence to the diagnostic process for smear-negative pulmonary TB (PUBMED:25216839).
Instruction: Is it necessary to suspend betablockers in decompensated heart failure with low output? Abstracts: abstract_id: PUBMED:20721517 Is it necessary to suspend betablockers in decompensated heart failure with low output? Background: there is evidence that the suspension of betablockers (BB) in decompensated heart failure may increase mortality. Dobutamine (dobuta) is the most commonly used inotrope in decompensation, however, BB and dobuta act with the same receptor with antagonist actions, and concurrent use of both drugs could hinder compensation. Objective: to evaluate whether the maintenance of BB associated with dobuta difficults cardiac compensation. Methods: we studied 44 patients with LVEF &lt; 45% and the need for inotropics. Divided into three groups according to the use of BB. Group A (n=8): those who were not using BB at baseline; Group B (n=25): those who used BB, but was suspended to start dobuta; Group C (n = 11): those who used BB concomitant to dobuta. To compare groups, we used the Student t, Fisher exact and chi-square tests. Considered significant if p &lt; 0.05. Results: mean LVEF 23.8 ± 6.6%. The average use of dobuta use was similar in all groups (p = 0.35), and concomitant use of dobutamine with BB did not increase the length of stay (BB 20.36 ± 11.04 days vs without BB 28.37 ± 12.76 days, p = NS). In the high dose, BB was higher in patients whose medication was not suspended (35.8 ± 16.8 mg/day vs 23.0 ± 16.7 mg/day, p = 0.004). Conclusion: maintaining BB associated with dobutamine did not increase the length of hospitalization and was not associated with the worst outcome. Patients who did not suspend BB were discharged with higher doses of the drug. abstract_id: PUBMED:21484040 Predictors of low cardiac output in decompensated severe heart failure. Objective: To identify predictors of low cardiac output and mortality in decompensated heart failure. Introduction: Introduction: Patients with decompensated heart failure have a high mortality rate, especially those patients with low cardiac output. However, this clinical presentation is uncommon, and its management is controversial. Methods: We studied a cohort of 452 patients hospitalized with decompensated heart failure with an ejection fraction of &lt;0.45. Patients underwent clinical-hemodynamic assessment and Chagas disease immunoenzymatic assay. Low cardiac output was defined according to L and C clinical-hemodynamic profiles. Multivariate analyses assessed clinical outcomes. P&lt;0.05 was considered significant. Results: The mean age was 60.1 years; 245 (54.2%) patients were &gt;60 years, and 64.6% were men. Low cardiac output was present in 281 (63%) patients on admission. Chagas disease was the cause of heart failure in 92 (20.4%) patients who had higher B type natriuretic peptide levels (1,978.38 vs. 1,697.64 pg/mL; P = 0.015). Predictors of low cardiac output were Chagas disease (RR: 3.655, P&lt;0.001), lower ejection fraction (RR: 2.414, P&lt;0.001), hyponatremia (RR: 1.618, P = 0.036), and renal dysfunction (RR: 1.916, P = 0.007). Elderly patients were inversely associated with low cardiac output (RR: 0.436, P = 0.001). Predictors of mortality were Chagas disease (RR: 2.286, P&lt;0.001), ischemic etiology (RR: 1.449, P = 0.035), and low cardiac output (RR: 1.419, P = 0.047). Conclusions: In severe decompensated heart failure, predictors of low cardiac output are Chagas disease, lower ejection fraction, hyponatremia, and renal dysfunction. Additionally, Chagas disease patients have higher B type natriuretic peptide levels and a worse prognosis independent of lower ejection fraction. abstract_id: PUBMED:20977620 Decreased cardiac output: clinical validation in patients with decompensated heart failure. Purpose: Clinically validate (using Fehring's model) characteristics of the nursing diagnosis (ND) of decreased cardiac output (DCO) in 29 patients with decompensated heart failure. Methods: Cross-sectional study conducted in a Brazilian university hospital. Findings: According to the reliability rate (R) between the experts, the major characteristics (R ≥ 0.80) were fatigue, dyspnea, edema, orthopnea, paroxysmal nocturnal dyspnea, and elevated central venous pressure, and the secondary characteristics were weight gain, hepatomegaly, jugular vein distension, palpitations, crackles, oliguria, coughing, clammy skin, and skin color changes. Conclusion: Characteristics with R &gt; 0.50 and ≤1 were valid in the ND of DCO. IMPLICATIONS FOR THE NURSING PRACTICE: Clinical validation studies are necessary to determine the adequacy of this diagnosis and its determining characteristics with Taxonomy II. abstract_id: PUBMED:36045314 Comprehensive and Safe Decongestion in Acutely Decompensated Heart Failure. Purpose Of The Review: Progressive intravascular, interstitial, and alveolar fluid overload underlies the transition from compensated to acutely decompensated heart failure and loop diuretics are the mainstay of treatment. Adverse effects and resistance to loop diuretics received much attention while the contribution of a depressed cardiac output to diuretic resistance was downplayed. Recent Findings: Analysis of experience with positive inotropic agents, especially dobutamine, indicates that enhancement of cardiac output is not consistently associated with increased renal blood flow. However, urinary output and renal sodium excretion increase likely due to dobutamine-mediated decrease in renal and systemic reduced activation of sympathetic nervous- and renin-angiotensin-aldosterone system. Mechanical circulatory support with left ventricular assist devices ascertained the contribution of low cardiac output to diuretic resistance and the pathogenesis and progression of kidney disease in acutely decompensated heart failure. Diuretic resistance commonly occurs in acutely decompensated heart failure. However, failure to resolve fluid overload despite high doses of loop diuretics should alert to the presence of a low cardiac output state. abstract_id: PUBMED:37062727 Discontinuation of Intravenous Catecholamine by Oral Ivabradine in a Patient with Decompensated Heart Failure with Low Cardiac Output Syndrome. Ivabradine has been shown to improve heart failure with sinus tachycardia by reducing the heart rate without affecting left ventricular systolic function or blood pressure. Here we report a case of a catecholaminedependent patient, New York Heart Association (NYHA) class IV, LVEF of 18%, and low cardiac output, who was able to discontinue intravenous catecholamine by oral administration of ivabradine. abstract_id: PUBMED:20428619 Angiotensin II receptor blocker add-on therapy for low cardiac output in decompensated heart failure Background: During heart failure (HF) decompensation, an intense activation of the renin-angiotensin-aldosterone system occurs; however, the use of angiotensin-converting enzyme inhibitor (ACEI) cannot block it completely. Otherwise, the addition of angiotensin II receptor blocker (ARB) can be useful when the inotropic dependence occurs. We evaluated the efficacy of the ARB-ACEI association on dobutamine withdrawal in advanced decompensated HF. Objective: To assess the efficacy of association angiotensin receptor blocker--angiotensin converting enzyme inhibitor to withdraw the intravenous inotropic support in decompensated severe heart failure. Methods: In a case-control study (N = 24), we selected patients admitted at the hospital due to HF that had been using dobutamine for more than 15 days, with one or more unsuccessful drug withdrawal attempts; optimized dose of ACEI and ejection fraction (EF) &lt; 0.45. Then, the patients additionally received ARB (n=12) or not (control, n=12). The outcome was the successful dobutamine withdrawal, evaluated by logistic regression, with a p &lt; 0.05. Results: The EF was 0.25 and the age was 53 years, with a dobutamine dose of 10.7 microg/kg x min. The successful drug withdrawal was observed in 8 patients from the ARB group (67.7%) and in 2 patients from the control group (16.7%). The odds ratio (OR) was 10.0 (95%CI: 1.4 to 69.3; p = 0.02). The worsening in renal function was similar (ARB group: 42% vs. control group: 67%; p=0.129). Conclusion: In this pilot study, the ARB-ACEI association was associated with successful dobutamine withdrawal in advanced decompensated heart failure. The worsening in renal function was similar in both groups. Further studies are necessary to clarify the issue. abstract_id: PUBMED:14728041 Drug treatment of patients with decompensated heart failure. Decompensated heart failure (HF) may be defined as sustained deterioration of at least one New York Heart Association functional class, usually with evidence of sodium retention. Episodes of decompensation are most commonly precipitated by sodium retention, often associated with medication noncompliance. Our therapeutic approach to hospitalized patients is based on the documented hemodynamic responses to vasodilator therapy, with redistribution of mitral regurgitant flow to forward cardiac output and decompression of the left atrium. Invasive hemodynamic monitoring is seldom required for the effective management of patients with HF and there are risks associated with pulmonary artery catheterization. The currently available parenteral vasoactive drugs for decompensated heart failure include: (i) vasodilators such as nesiritide, nitroprusside and nitroglycerin (glyceryl trinitrate); (ii) catecholamine inotropes, primarily dobutamine; and (iii) inodilators such as milrinone, a phosphodiesterase inhibitor. Vasodilators are most appropriate for those patients who are primarily volume-overloaded, but with adequate peripheral perfusion. In this class of agents, nesiritide (recombinant human B-type natriuretic peptide) offers advantages over currently available drugs. Nesiritide produces rapid and sustained decreases in right atrial and pulmonary capillary wedge pressures, with reduction in pulmonary and systemic vascular resistance and increases in cardiac index. The hemodynamic effects of nesiritide infusion were sustained over a duration of 1 week and the drug may be used without intensive monitoring in patients with decompensated HF. Treatment with dobutamine is indicated in patients in whom low cardiac output rather than elevated pulmonary pressure is the primary hemodynamic aberration. However, milrinone reduces left atrial congestion more effectively than dobutamine, and is well tolerated and effective when used in patients receiving beta-blockers. In-patient therapy for decompensated HF is a short term exercise for symptom relief and provides an opportunity to re-assess management in the continuum of care. abstract_id: PUBMED:15373019 Acute heart failure Acute heart failure is always an indication for referral to an intensive care unit. In the widest sense, the term acute heart failure includes the manifestation forms of pulmonary edema, cardiogenic shock or rapid-onset decompensated cardiac insufficiency unaccompanied by shock or pulmonary edema (low-output syndrome). Acute heart failure may occur in the absence of previously known heart disease. Existing prior specific diseases that may end in acute cardiac insufficiency include acute myocardial infarction, decompensated cardiomyopathy, myocarditis, cardiac tamponade, endocarditis or arrhythmogenic heart failure. abstract_id: PUBMED:25959208 Clinical usefulness of the definitions for defining characteristics of activity intolerance, excess fluid volume and decreased cardiac output in decompensated heart failure: a descriptive exploratory study. Aims And Objectives: To assess the clinical usefulness of the operational definitions for the defining characteristics of the NANDA International nursing diagnoses, activity intolerance, decreased cardiac output and excess fluid volume, and the concomitant presence of those diagnoses in patients with decompensated heart failure. Background: Content validity of the operational definitions for the defining characteristics of activity intolerance, excess fluid volume and decreased cardiac output have been previously validated by experts. Their clinical usefulness requires clinical validation. Design: This was a descriptive exploratory study. Methods: Two expert nurses independently assessed 25 patients with decompensated heart failure for the presence or absence of 29 defining characteristics. Interrater reliability was analysed using the Kappa coefficient as a measure of clinical usefulness. The Fisher's exact test was used to test the association of the defining characteristics of activity intolerance and excess fluid volume in the presence of decreased cardiac output, and the correlation between the three diagnoses. Results: Assessments regarding the presence of all defining characteristics reached 100% agreement, except with anxiety. Five defining characteristics of excess fluid volume were significantly associated with the presence of decreased cardiac output. Concomitant presence of the three diagnoses occurred in 80% of the patients. However, there was no significant correlation between the three diagnoses. Conclusions: The operational definitions for the diagnoses had strong interrater reliability, therefore they were considered clinically useful. Only five defining characteristics were representative of the association between excess fluid volume and decreased cardiac output. Therefore, excess fluid volume is related to decreased cardiac output, although these diagnoses are not necessarily associated with activity intolerance. Relevance To Clinical Practice: The operational definitions may favour early recognition of the sequence of responses to decompensation, guiding the choice of common interventions to improve or resolve excess fluid volume and decreased cardiac output. abstract_id: PUBMED:34859180 Emergent catheter ablation for atrial fibrillation in a patient with acute decompensated heart failure on a mechanical haemodynamic support: a case report. Background: Atrial fibrillation (AF) is associated with the exacerbation of heart failure (HF). Although AF ablation has become an established treatment for patients with HF, it is usually an elective procedure. Here, we present a case of acute decompensated heart failure (ADHF) exacerbated by refractory AF, which was successfully treated with emergent AF ablation. Case Summary: A 53-year-old, obese man with a history of myocardial infarction presented to our hospital. Heart function deteriorated with an ejection fraction of 9.8%, and he was repeatedly hospitalized due to worsening HF. This time, the patient was emergently admitted due to ADHF associated with persistent AF. Atrial fibrillation was refractory to electrical cardioversion. Despite optimized medical support, the patient developed haemodynamic collapse and multiple organ failure. Intra-aortic balloon pump (IABP) and mechanical ventilation were initiated in addition to intravenous catecholamines. Emergent AF ablation was performed. Following pulmonary vein isolation, sinus rhythm was restored and the patient's haemodynamic status dramatically improved. The IABP and mechanical ventilation were withdrawn within a few days, and the catecholamine dose was reduced. After cardiac rehabilitation, the patient was discharged. Discussion: Our case suggests that an emergent AF ablation is feasible and effective even in a patient with severe ADHF. An emergent AF ablation could be a therapeutic option to treat a critically unwell patient who has deteriorated due to a vicious cycle of AF and HF. Answer: Based on the provided abstracts, it is not necessary to suspend beta-blockers in decompensated heart failure with low output. A study evaluated whether the maintenance of beta-blockers associated with dobutamine difficults cardiac compensation and found that maintaining beta-blockers associated with dobutamine did not increase the length of hospitalization and was not associated with the worst outcome. Patients who did not suspend beta-blockers were discharged with higher doses of the drug (PUBMED:20721517). This suggests that the concurrent use of beta-blockers and dobutamine, an inotrope commonly used in decompensation, does not hinder cardiac compensation and may be beneficial for patient outcomes. Additionally, other abstracts discuss various treatments and strategies for managing decompensated heart failure with low cardiac output, such as the use of angiotensin II receptor blockers (PUBMED:20428619), inotropic agents like dobutamine (PUBMED:36045314), and ivabradine (PUBMED:37062727), but none specifically recommend the suspension of beta-blockers as a necessary step in treatment. Instead, these abstracts highlight the complexity of managing decompensated heart failure and the need for individualized treatment plans that may include a combination of medications and interventions.
Instruction: Is the interruption of antiretroviral treatment during pregnancy an additional major risk factor for mother-to-child transmission of HIV type 1? Abstracts: abstract_id: PUBMED:19309307 Is the interruption of antiretroviral treatment during pregnancy an additional major risk factor for mother-to-child transmission of HIV type 1? Background: There is currently an experts' agreement discouraging interruption of antiretroviral treatment (ART) during the first trimester of pregnancy in women infected with human immunodeficiency virus type 1 (HIV-1). However, this recommendation is poorly supported by data. We evaluated the effects of discontinuing ART during pregnancy on the rate of mother-to-child transmission. Methods: Logistic regression models were performed in a prospective cohort of 937 children who were perinatally exposed to HIV-1 to estimate adjusted odds ratios for confounding factors on mother-to-child transmission, including maternal interruption of ART. Results: Among 937 pregnant women infected with HIV-1, ART was interrupted in 81 (8.6%) in the first trimester and in 11 (1.2%) in the third trimester. In the first trimester, the median time at suspension of ART was 6 weeks (interquartile range [IQR], 5-6 weeks) and the time without treatment was 8 weeks (IQR, 7-11 weeks). In the third trimester, the median time at suspension of ART was 32 weeks (IQR, 23-36 weeks) and the time without treatment was 6 weeks (IQR, 2-9 weeks). The plasma viral load was similar in women who had treatment interrupted in the first trimester and in those who did not have treatment interrupted. Overall, the rate of mother-to-child transmission in the whole cohort was 1.3% (95% confidence interval [CI], 0.7%-2.3%), whereas it was 4.9% (95% CI, 1.9%-13.2%) when ART was interrupted in the first trimester and 18.2% (95% CI, 4.5%-72.7%) when ART was interrupted in the third trimester. In the multiple logistic regression models, only interruption of ART during either the first or the third trimester, maternal mono- or double therapy, delivery by a mode other than elective cesarean delivery, and a viral load at delivery &gt;4.78 log(10) copies/mL were independently associated with an increased rate of mother-to-child transmission. Conclusions: Discontinuing ART during pregnancy increases the rate of mother-to-child transmission of HIV-1, either when ART is stopped in the first trimester and subsequently restarted or when it is interrupted in the third trimester. This finding supports recommendations to continue ART in pregnant women who are already receiving treatment for their health. abstract_id: PUBMED:28540847 Risk factors for transmission of HIV from mother to child in Bangui. the prevalence of human immunodeficiency virus infection (HIV) in pregnant women in Central Africa is 4.5%. Although a program to prevent mother-child transmission of HIV has been implemented throughout the country, the number of infected infants continues to increase. In this context, our study sought to determine the rate of mother-child transmission of HIV and to identify the main risk factors. this prospective cross-sectional study took place from November 1, 2014, to November 1, 2015, in all 16 maternal and child health centers in Bangui. The study population comprised the infants born to HIV-positive mothers and seen at these centers for HIV testing 6 weeks after birth. The mothers were interviewed and their files examined to identify the risk factors of HIV transmission to their babies. the study included 656 infants. The maternal-fetal transmission rate of HIV was 7%. The main risk factors were the absence of antiretroviral prophylaxis for mothers or children and rupture of the membranes more than 24 hours before labor. HIV transmission from mother to child is a public health problem in Bangui. Vigorous action, such as interventions with antiretroviral medication, good obstetric practices, and especially the implementation of the 2013 WHO recommendations, should certainly reduce the number of new infections among newborns in Bangui. abstract_id: PUBMED:34485455 Combination antiretroviral treatment use in prevention of mother-to-child transmission programmes: 6-week HIV prevalence and relationship to time of antiretroviral treatment initiation and mixed feeding. Background: In Zimbabwe, 16% of pregnant women aged 15-49 years are infected with HIV. More than 90% of HIV infection in children is through mother-to-child transmission (MTCT). We investigated the effectiveness of the Option B+ in reducing HIV infection and factors associated with HIV transmission among infants born to mothers enrolled in the prevention of mother-to-child transmission (PMTCT) programme. Methods: We randomly selected 1204 early infant HIV diagnosis test results for HIV-exposed infants and linked these results to maternal clinical records at primary healthcare clinics in Harare to estimate the prevalence of MTCT and to determine the clinical factors associated with MTCT of HIV at 6 weeks. Results: Of the 1204 infants in the study, 2.5% (95% confidence interval [CI], 1.7-3.5) were infected with HIV at 6 weeks post-delivery. Antiretroviral adherence reduced the odds of HIV infection by about 99% (odds ratio [OR] 0.01 [95% CI, 0.00-0.06]). Both mixed feeding (OR 3.89 [95% CI, 0.92-16.50]) and late initiation of antiretroviral treatment (ART) (after delivery) (OR 3.18 [95% CI, 0.42-23.94]) increased the odds of HIV infection. Conclusion: Early initiation of combination ART reduces 6-week MTCT of HIV in PMTCT programmes to levels similar to those found in controlled trial settings. Exclusive breastfeeding remains important even in the presence of ART. abstract_id: PUBMED:17489882 Systematic review of the efficacy of antiretroviral therapies for reducing the risk of mother-to-child transmission of HIV infection. Objective: To evaluate the efficacy of antiretroviral therapies in reducing the risk of mother-to-child transmission of HIV infection. Methods: Systematic review and meta-analysis of randomized controlled trials. Clinical trials of antiretrovirals were identified through electronic searches (MEDLINE, EMBASE, BIOSIS, EBM review and the Cochrane Library) up until November 2006. Historical searches of reference lists of relevant randomized controlled trials, and systematic and narrative reviews were also undertaken. Studies were included if they were (i) randomized controlled trials of any antiretroviral therapy aimed at decreasing the risk of mother-to-child transmission of HIV infection, (ii) reporting outcomes in terms of HIV infection in infant, infant death, stillbirth, premature delivery, or low birth weight. The data were extracted by a single investigator and checked by a second investigator. Disagreements were resolved through discussion or a third investigator. The efficacy was estimated using relative risk (RR), risk difference (RD) and number needed to treat (NNT) together with 95% confidence intervals. Results: Fifteen trials were included in the systematic review. Based on five placebo-controlled trials, a zidovudine regimen reduced the risk of mother-to-child transmission by 43% (95% CI: 29-55%). The incidence of low birth weight seems to be decreased with zidovudine (pooled RR 0.75, 95% CI: 0.57-0.99). The efficacy of short-short course of zidovudine was comparable with that of the long-short course. Nevirapine monotherapy given to mothers and babies as a single dose reduced the risk of vertical transmission compared with an intrapartum and post-partum regimen of zidovudine (RR 0.60, 95% CI: 0.41-0.87). Zidovudine plus lamivudine was effective in reducing the risk of maternal-child transmission of HIV (RR 0.63, 95% CI: 0.45-0.90). Adding zidovudine to single-dose nevirapine in babies was no more effective than nevirapine alone (pooled RR 0.88, 95% CI: 0.47-1.63), nor was there any significant difference between zidovudine plus lamivudine and nevirapine. In mothers who were treated with standard antiretroviral therapy, no additional benefit was observed with the addition of a single dose of nevirapine in mothers and newborns. In addition, for mothers who received zidovudine prophylaxis, a two-dose intrapartum/newborn nevirapine reduced the risk of HIV infection and death of babies by 68% (95% CI: 39-83%) and 80% (95% CI: 10-95%), respectively, when compared with placebo. Conclusions: The available evidence suggests that zidovudine alone or in combination with lamivudine and nevirapine monotherapy is effective for the prevention of mother-to-child transmission of HIV. They may also be beneficial in reducing the risk of infant death. Different antiretroviral regimens appear to be comparably effective in reducing HIV transmission from mothers to babies. In mothers already receiving zidovudine prophylaxis, adding a single dose of nevirapine to mothers during labour and giving the same drug to infants may further decrease the risk of vertical transmission and infant death. abstract_id: PUBMED:25438711 Mother to child transmission of HIV: what works and how much is enough? In 2012, 3.3 million children were living with HIV (Human Immunodeficiency virus), of whom 260,000 were new infections. Prevention of mother to child transmission is vital in reducing HIV-related child mortality and morbidity. With intervention the risk of transmission can be as low as 1% and without it, as high as 45%. The WHO (World Health Organisation) recommends a programmatic approach to the prevention of perinatal HIV transmission and has withdrawn option A and introduced option B+. This recommends that all HIV positive pregnant and breastfeeding women receive lifelong triple ARV (antiretroviral) from the point of diagnosis. The infant would then receive 4-6 weeks of ART (antiretroviral therapy) (NVP, nevirapine or AZT, Zidovudine) regardless of the feeding method. Where resources are not limited an individualised approach can be adopted. Worldwide, health care needs to be accessible and HIV testing performed in pregnancy and followed up in a robust but socially sensitive way so that treatment can be initiated appropriately. In either setting the risk of transmission is never zero and countries need to decide for themselves what is the most practical and sustainable approach for their setting, so that the maximum impact on maternal and child mortality and morbidity can be achieved. abstract_id: PUBMED:16307509 Prevention of mother-to-child transmission of HIV: treatment options. Mother-to-child transmission of HIV-1 is responsible for 1800 new infections in children daily. The use of antiretroviral therapy can significantly reduce the risk of transmission. In settings where highly active antiretroviral therapy is available, mother-to-child transmission rates have been reduced to less than 2%, in the absence of breastfeeding. Women who require ongoing highly active antiretroviral therapy for their own health should receive this in pregnancy, which is also very effective in preventing transmission. Where resources allow, combination highly active antiretroviral therapy can also be used for preventing mother-to-child transmission in those women who do not yet need to receive ongoing treatment. The potential side effects of highly active antiretroviral therapy must be considered in pregnant women and their infants. Where highly active antiretroviral therapy is not possible, a dual combination regimen of antepartum zidovudine with single-dose nevirapine to mother and baby can reduce transmission to below 5%. In many places, the only available option is single-dose nevirapine to mother and baby, which is effective in halving transmission risk, although the effectiveness in practice will be influenced by continued infection through breastfeeding, and by program factors such as the uptake of HIV testing. Exposure to nevirapine for mother-to-child transmission prevention can select for resistant virus in the majority of women. While the long-term implications of this are not completely clear, this selection can be reduced by the addition of short courses of postpartum zidovudine and lamivudine. abstract_id: PUBMED:24582834 Executive summary of the Consensus Statement on monitoring HIV: pregnancy, birth, and prevention of mother-to-child transmission. The main objective in the management of HIV-infected pregnant women is prevention of mother-to-child transmission; therefore, it is essential to provide universal antiretroviral treatment, regardless of CD4 count. All pregnant women must receive adequate information and undergo HIV serology testing at the first visit. If the serological status is unknown at the time of delivery, or in the immediate postpartum, HIV serology testing has to be performed as soon as possible. In this document, recommendations are made regarding the health of the mother and from the perspective of minimizing mother-to-child transmission. abstract_id: PUBMED:19242245 Update on successes and challenges regarding mother-to-child transmission of HIV. Purpose Of Review: There is an unprecedented global commitment to reverse the pediatric HIV epidemic by making prevention of mother-to-child transmission (PMTCT) services accessible in all countries. This review outlines the successes made and the challenges that remain. Recent Findings: In resource-rich countries, mother-to-child transmission rates of HIV as low as 1% have been achieved. The efficacy of short-course antiretrovirals for PMTCT in Africa is estimated at 50%. Coinfections with herpes simplex virus type 2, other sexually transmitted infections resulting in genital ulcers, and endemic infectious diseases (e.g., malaria) may increase the risk of mother-to-child transmission of HIV. Vertical transmission of drug-resistant viruses has been reported; the prevalence and effect of transmitted resistant virus on treatment outcomes are under investigation. Obstacles facing PMTCT in resource-limited countries include the lack of healthcare infrastructure, limited manpower, and competing public health priorities with the limited healthcare budget. Summary: Although the birth of an HIV-infected child in a resource-rich country is now a sentinel health event, in most resource-limited countries the birth of an HIV-infected child continues to be the status quo. Comprehensive PMTCT, including antiretroviral treatment for HIV-infected women and children, should be paramount in resource-limited countries. abstract_id: PUBMED:15166829 Prevention of mother-to-child transmission of HIV infection. Purpose Of Review: Mother-to-child transmission of HIV infection is the primary cause of paediatric HIV infections worldwide. Although clinical trials show that antiretroviral therapy, elective caesarean section and formula feeding can significantly reduce the peripartum or postpartum risk of transmission, their application on a population basis is challenging. There is a need for alternative, easier and more effective interventions for population-based programmes. Recent Findings: This review addresses recent advances in our understanding of mother-to-child transmission risk factors, including maternal viral load (in plasma, genital tract and breast milk) and gender, and determinants and rates of postnatal transmission. New information on prophylactic antiretroviral therapy includes results from randomized trials in Africa and Thailand, in addition to new information on implementation of prevention of mother-to-child transmission programmes in nontrial settings, in both developed and developing countries. Two important issues relating to use of antiretroviral prophylaxis are discussed: safety and toxicity, including new findings on haemopoiesis, prematurity and mitochondrial abnormalities in antiretroviral therapy-exposed infants and children, and resistance. Recent trends and controversies relating to mode of delivery in HIV-infected pregnant women are outlined. Regarding infant feeding, preliminary results on use of mono-antiretroviral therapy to prevent postnatal transmission in breastfeeding HIV-exposed infants are discussed. Summary: In resource-rich settings, virtual elimination of mother-to-child transmission is theoretically possible. Even in these settings, however, a substantial number of infected women are not being identified early enough for optimum application of prevention of mother-to-child transmission interventions. In developing country settings, focus is being directed towards scaling-up prevention programmes now that trials have established a variety of effective antiretroviral prophylactic approaches. abstract_id: PUBMED:11512150 Mother-to-child transmission of drug-resistant HIV. Mother-to-child transmission of HIV-I is responsible for the infection of hundreds of thousands of infants every year. The use of prophylactic antiretroviral treatments has brought about a dramatic decrease in the risk of transmission. Nevertheless, vertical transmission can still occur. In some cases, the presence of drug-resistant HIV-I strains in the mother has been responsible for the failure of the prophylactic scheme. Moreover, these strains have also been detected in the newborn. The aim of this review is to provide updated information on mother-to-child transmission of drug-resistant HIV strains and to help guide treatment decisions during pregnancy. Answer: Yes, the interruption of antiretroviral treatment (ART) during pregnancy is an additional major risk factor for mother-to-child transmission (MTCT) of HIV type 1. A study found that discontinuing ART during pregnancy increases the rate of MTCT of HIV-1, either when ART is stopped in the first trimester and subsequently restarted or when it is interrupted in the third trimester. This finding supports recommendations to continue ART in pregnant women who are already receiving treatment for their health (PUBMED:19309307). Moreover, other studies have shown that the absence of antiretroviral prophylaxis for mothers or children is a main risk factor for HIV transmission to babies (PUBMED:28540847), and that early initiation of combination ART reduces 6-week MTCT of HIV in PMTCT programmes (PUBMED:34485455). The efficacy of antiretroviral therapies in reducing the risk of MTCT has been well documented, with different antiretroviral regimens being comparably effective in reducing HIV transmission from mothers to babies (PUBMED:17489882). The WHO recommends that all HIV positive pregnant and breastfeeding women receive lifelong triple ARV from the point of diagnosis, and the infant should receive 4-6 weeks of ART regardless of the feeding method (PUBMED:25438711). In settings where highly active antiretroviral therapy (HAART) is available, MTCT rates have been reduced to less than 2% in the absence of breastfeeding (PUBMED:16307509). Overall, the evidence suggests that maintaining continuous and appropriate ART during pregnancy is crucial for minimizing the risk of MTCT of HIV.
Instruction: Exercise in severe COPD: is walking different from stair-climbing? Abstracts: abstract_id: PUBMED:28169985 Stair-Climbing Capacity as a Marker of Improvement Following Pulmonary Rehabilitation. Purpose: The aim of this study was to explore the potential and safety of a stair-climbing test as a tool to monitor improvement following pulmonary rehabilitation (PR) in patients with chronic obstructive pulmonary disease (COPD). Methods: Stair-climbing capacity was assessed in 139 patients with COPD before and after a comprehensive 8-week PR program, which included stair-climbing training. Stair-climbing capacity was assessed as the total number of flights of stairs climbed without stopping. A constant work rate endurance test (CET) was also performed before and after PR. Change in stair-climbing after PR (Δstairs) was compared and correlated to the change in endurance time (ΔCET) and, for 40 patients, to the change in COPD assessment test (ΔCAT) score. Results: Most patients had moderate to severe COPD (mean forced expiratory volume in 1 second = 54% ± 20% predicted). Stair-climbing capacity, endurance time, and CAT score improved after PR (2.8 ± 1.4 vs 8.3 ± 3.3 flights, 408 ± 272 vs 717 ± 415 seconds, and 20.0 ± 6.4 vs 17.6 ± 6.6 units, respectively; P value for all &lt; .001). Δstairs was moderately correlated to ΔCET (r = 0.49; P &lt; .001) and well correlated to ΔCAT (r = -0.71; P &lt; .001). Patients with greater change in Δstairs had better baseline resting lung function and aerobic capacity. No adverse events were reported during stair-climbing. Conclusion: Stair-climbing is responsive to training in patients with COPD and is correlated to the change in CAT score following PR. Although the test requires further standardization, it could eventually be used as a simple and safe way to assess improvement following interventions in COPD. abstract_id: PUBMED:18280129 Exercise in severe COPD: is walking different from stair-climbing? Background: It remains unclear whether the 6-min walking test can predict performance during stair-climbing in severe COPD patients. The present study aimed to assess different pathophysiological changes between walking and stair-climbing in these patients. Methods: Sixteen COPD patients (mean FEV1 33+/-13% predicted) underwent a 6-min walking test and performed stair-climbing (44 steps) in a randomized, cross-over design. Blood gases, blood lactate, lung function parameters, maximal inspiratory mouth, sniff nasal and twitch mouth pressures, blood pressure, heart rate, and Borg Dyspnea Scale (BDS) were measured before and after exercise. Results: The median drop of PaO2 during walking (2.6 mmHg) and stair-climbing (2.4 mmHg) was comparable (p=0.93). However, stair-climbing caused more dyspnea (median BDS 6.5 vs. 5.5, p=0.01), a higher median blood lactate (1.1 vs. 0.3 mmol/l p&lt;0.001), a more pronounced drop in mean pH (-0.05+/-0.02 vs. -0.03+/-0.03, p=0.02) and a higher increase in mean systolic blood pressure (27+/-11 vs. 13+/-16 mmHg; p=0.009). Stair-climbing, but not walking, caused prolonged lung hyperinflation (mean TLC difference 4.4+/-4.7% predicted, p=0.003). There was no relationship between the 6-min walking distance (314+/-104 m) and the time needed for stair-climbing (55+/-33 s), nor were there any differences in inspiratory muscle strength and heart rate. Conclusion: Although the drop of PaO2 was comparable, stair-climbing resulted in more prolonged hyperinflation of the lungs, higher blood lactate production and more dyspnea than walking. The walking distance was not related to the time needed to manage stair-climbing. Therefore, pathophysiological changes during the 6-min walking test do not anticipate those during stair-climbing in patients with severe COPD. abstract_id: PUBMED:23547631 The effect of continuous positive airway pressure on stair-climbing performance in severe COPD patients. Stair climbing is associated with dynamic pulmonary hyperinflation and the development of severe dyspnea in patients with chronic obstructive pulmonary disease (COPD). This study aimed to assess whether (i) continuous positive airway pressure (CPAP) applied during stair climbing prevents dynamic hyperinflation and thereby reduces exercise-induced dyspnea in oxygen-dependent COPD-patients, and (ii) the CPAP-device and oxygen tank can be carried in a hip belt. In a randomised cross-over design, oxygen-dependent COPD patients performed two stair-climbing tests (44 steps): with supplemental oxygen only, then with the addition of CPAP (7 mbar). The oxygen tank and CPAP-device were carried in a hip belt during both trials. Eighteen COPD patients were included in the study. Although all patients could tolerate stair climbing with oxygen alone, 4 patients were unable to perform stair climbing while using CPAP. Fourteen COPD patients (mean FEV1 36 ± 14% pred.) completed the trial and were analyzed. The mean flow rate of supplemental oxygen was 3 ± 2 l/min during stair climbing. Lung hyperinflation, deoxygenation, hypoventilation, blood lactate production, dyspnea and the time needed to manage stair climbing were not improved by the application of CPAP (all p &gt; 0.05). However, in comparison to climbing with oxygen alone, limb discomfort was reduced when oxygen was supplemented with CPAP (p = 0.008). In conclusion, very severe COPD patients are able to carry supporting devices such as oxygen tanks or CPAP-devices in a hip belt during stair climbing. However, the application of CPAP in addition to supplemental oxygen during stair climbing prevents neither exercise-induced dynamic hyperinflation, nor dyspnea. abstract_id: PUBMED:37445215 Stair-Climbing Tests or Self-Reported Functional Capacity for Preoperative Pulmonary Risk Assessment in Patients with Known or Suspected COPD-A Prospective Observational Study. Background: This prospective study aims to determine whether preoperative stair-climbing tests (SCT) predict postoperative pulmonary complications (PPC) better than self-reported poor functional capacity (SRPFC) in patients with known or suspected COPD. Methods: A total of 320 patients undergoing scheduled for major non-cardiac surgery, 240 with verified COPD and 80 with GOLD key indicators but disproved COPD, underwent preoperative SRPFC and SCT and were analyzed. Least absolute shrinkage and selection operator (LASSO) regression was used for variable selection. Two multivariable regression models were fitted, the SRPFC model (baseline variables such as sociodemographic, surgical and procedural characteristics, medical preconditions, and GOLD key indicators plus SRPFC) and the SCT model (baseline variables plus SCTPFC). Results: Within all stair-climbing variables, LASSO exclusively selected self-reported poor functional capacity. The cross-validated area under the receiver operating characteristic curve with bias-corrected bootstrapping 95% confidence interval (95% CI) did not differ between the SRPFC and SCT models (0.71; 0.65-0.77 for both models). SRPFC was an independent risk factor (adjusted odds ratio (OR) 5.45; 95% CI 1.04-28.60; p = 0.045 in the SRPFC model) but SCTPFC was not (adjusted OR 3.78; 95% CI 0.87-16.34; p = 0.075 in the SCT model). Conclusions: Our findings indicate that preoperative SRPFC adequately predicts PPC while additional preoperative SCTs are dispensable in patients with known or suspected COPD. abstract_id: PUBMED:8222791 Estimation of ventilatory reserve by stair climbing. A study in patients with chronic airflow obstruction. Clinicians and surgeons have used the subjective response to the climb of "one or two flights of stairs" to assess the "reserve" of patients with chronic airflow obstruction (CAO). Very little objective data exist regarding the metabolic and ventilatory cost for any level of stair climbing in these patients. Therefore, this study was designed to evaluate the use of symptom-limited stair climbing as a simple method to estimate the peak oxygen uptake (VO2) and minute ventilation (VE) in patients with CAO. We studied 31 men with varying degrees of CAO, who climbed stairs until they stopped at their symptom-limited maximum. During this climb, timed expired gas was intermittently collected and analyzed, and oxygen saturation and heart and respiratory rates were recorded. The patients achieved 81 +/- 14 percent of their predicted maximal heart rate and 90 +/- 27 percent of their predicted maximal VE. The number of steps or flights climbed correlated linearly with peak VO2 (r = 0.72, p &lt; 0.01) and with VE (r = 0.7, p &lt; 0.01). Stair climbing peak VE, VO2, heart and respiratory rate correlated well with those achieved during standard leg cycle ergometry. The mean +/- SD number of flights climbed was 4.2 +/- 1.7 with most patients (87%) reaching at least 3 flights (54 steps). The group of eight patients with very severe CAO (FEV1 &lt; 0.9 L) climbed 3.4 +/- 0.9 flights (61 +/- 16 steps). We conclude that a symptom-limited maximal stair climb helps estimate peak VO2 and VE in patients with CAO. The frequently advocated test to climb one to two flights to evaluate cardiopulmonary reserve is not adequate for most patients with CAO. Symptom-limited maximal stair climbing is a simple, inexpensive and readily available test that may be used to evaluate the cardiopulmonary reserve of stable patients with CAO. abstract_id: PUBMED:38034469 Predictive Value of Skeletal Muscle Function Test Combined with Climbing Test for Postoperative Cardiopulmonary Complications in Patients with COPD and Concurrent Lung Cancer. Purpose: To explore the predictive value of skeletal muscle function measurement combined with stair climbing test for postoperative cardiopulmonary complications in patients with chronic obstructive pulmonary disease (COPD) and non-small cell lung cancer (NSCLC). Patients And Methods: A prospective study was conducted from June 2022 to July 2023 at West China Hospital of Sichuan University, including 335 COPD patients with lung cancer who underwent surgery. The patients were divided into two groups based on the occurrence of postoperative cardiopulmonary complications: the complication group and the non-complication group. The demographic data, including gender, age, smoking history, quadriceps strength, body mass index (BMI), respiratory muscle strength, 6-minute walk test (6MWD), stair climbing test, and preoperative pulmonary function tests, were compared between the two groups. Logistic regression analysis was performed to evaluate the predictive power of each parameter for postoperative cardiopulmonary complications. Results: Among the enrolled patients, 103 (30.7%) developed postoperative cardiopulmonary complications. Significant differences were observed between the two groups in terms of quadriceps strength, respiratory muscle strength, 6MWD, smoking history, stair climbing test, DLCO%, FEV1%, heart rate, oxygen saturation, surgical duration, surgical approach, resection range, and blood loss (P&lt;0.05). Logistic regression analysis revealed that respiratory muscle strength, quadriceps strength, stair climbing test, FEV1%, DLCO%, ΔHR, ΔSPO2, surgical approach were identified as risk factors for postoperative cardiopulmonary complications in patients with COPD and lung cancer. Conclusion: Skeletal muscle function measurement, stair climbing test, FEV1, surgical approach, and DLCO% can serve as assessment tools for surgical risk in patients with COPD and lung cancer. They can predict the occurrence of postoperative cardiopulmonary complications to a certain extent, providing valuable predictive value for these complications in patients with COPD and NSCLC. abstract_id: PUBMED:22793943 Role of exercise in testing and in therapy of COPD. The stair-climbing test, 6MWT, and shuttle test are exercise tests that requires less technical support than the CPET and are more available to any physician. The 6MWT is the simplest and most likely to be cost effective, as it provides useful information regarding prognosis, ADLs, and health care use at a very low cost. In addition, the 6MWT can be used to evaluate response to several interventions, including physical rehabilitation, medications, lung volume reduction interventions, and transplantation. The 6MWT has also been useful in and has become an integral part of the evaluation and response to treatment in other medical conditions, including congestive heart failure, pulmonary hypertension, and pulmonary fibrosis. The stair-climbing test seems to be most useful for preoperative evaluations when a CPET is not available. We have also used it on patients unable to perform a good CPET because of lack of familiarity with bicycle pedaling. The shuttle walk test may be used to better determine a maximal exercise capacity when a CPET is not available and to measure the effects of pulmonary rehabilitation in patients unfamiliar with a CPET. The role of exercise as a therapeutic tool is central to the concept of pulmonary rehabilitation. Exercise training improves not only functional dyspnea and health-related quality of life, but also has been shown to decrease health care resource use. As part of a comprehensive pulmonary rehabilitation initiated after a hospitalization for exacerbation, it has been shown to decrease readmission rates. abstract_id: PUBMED:11181240 Chronic obstructive pulmonary disease: evaluation of exercise tolerance using three different exercise tests The clinical usefulness of simple exercise tests in patients with chronic obstructive pulmonary disease (COPD) continues to give rise to debate. Questions remain about whether these tests assess maximum effort or only exercise tolerance (submaximal effort). To determine the levels of stress reached during the six-minute walking (6MW) test and stair climbing (SC) test and their relation to maximum aerobic capacity, 50 patients with COPD (FEV1 49 +/- 19%) were studied. Data collected included spirometric variables and VO2, heart rate (HR), ventilatory response and dyspnea during the progressive effort ergometric cycle (PEEC) test. Two 6MW and SC tests were completed on two separate days, with distance, number of steps, HR and dyspnea recorded. HR was higher during the PEEC test. Among the simple tests, SC caused a faster HR than did the 6MW test. Likewise, a significant linear relation was observed between VO2/kgpeak during the 6MW test (r2 = 0.27; p &lt; 0.05) and the SC test (r2 = 0.33; p &lt; 0.01). We therefore conclude that exercise tolerance in patients with COPD can be evaluated using simple stress tests. The SC test is probably the best simple way to determine maximum functional capacity, whereas the 6MW test can be reserved for measurement of exercise tolerance. abstract_id: PUBMED:18242069 Different physical therapy interventions on daily physical activities in chronic obstructive pulmonary disease. Aim: To verify the outcome of different physical therapy interventions in activities of daily living of individuals with chronic obstructive pulmonary disease (COPD). Methods: Twenty-two COPD individuals were randomly divided into three groups: the Lower Limb Training Group (LLTG, n=8) performed treadmill exercise, Upper Limb Training Group (ULTG, n=8) performed upper limb (UL) training, and Control Group (CG, n=6) underwent bronchial hygiene therapy for 6 weeks, three times per week. The following were simulated in the activities of the daily living test: blackboard erasing, weight lifting exercise, stair climbing and treadmill walking. The duration of these tasks was 5 min of blackboard erasing and weight lifting, and 6 min of stair climbing and treadmill walking. Results: Intragroup analysis presented a significant post-treatment ventilatory demand (V(E)/MVV) decrease for the LLTG during basal conditions and walking, and significant decrease during blackboard erasing and walking for the ULTG, with similar metabolic demand (VO(2)/VO(2)max) for both groups. Dyspnoea decreased significantly in the LLTG during post-treatment walking and increased for CG. The number of stairs climbed decreased significantly for the CG and increased for LLTG and ULTG post treatment, along with the walked distance for LLTG. The intergroup analysis presented significant differences in post-treatment dyspnoea with greater values observed for the CG during walking. Conclusion: The different protocols utilized promoted benefits towards physical exercise tolerance, particularly in the ULTG that presented better performance in sustained UL tasks, which may suggest better conditioning and coordination of the muscles involved in UL elevation. abstract_id: PUBMED:16540501 Cardiopulmonary stress during exercise training in patients with COPD. Exercise training is an essential component of pulmonary rehabilitation. However, the cardiopulmonary stress imposed during different modalities of exercise training is not yet known. In the present study, the cardiopulmonary stress of a 12-week exercise training programme in 11 chronic obstructive pulmonary disease (COPD) patients (forced expiratory volume in one second 42+/-12%pred, age 69+/-6 yrs) was measured. Pulmonary gas exchange and cardiac frequency (f(C)) of three training sessions were measured with a portable metabolic system at the beginning, mid-term and end of the programme. Symptoms were assessed with Borg scores. The exercise intensity was compared with the recommendations for exercise training by the American College of Sports Medicine (ACSM). Training effects were significant (maximum change in work: 14+/-11 Watts, 6-min walk test: 44+/-36 m). Whole body exercises (cycling, walking and stair climbing) consistently resulted in higher cardiopulmonary stress (oxygen uptake (V'(O(2))), minute ventilation and f(C)) than arm cranking and resistance training. Dyspnoea was higher during cycling than resistance training. Patients exercised for &gt;70% (&gt;20 min) of the total exercise time at &gt;40% of the V'(O(2)) reserve and f(C) reserve ("moderate" intensity according to the ACSM) throughout the programme. The cardiopulmonary stress resistance training is lower than during whole-body exercise and results in fewer symptoms. In addition, exercise testing based on guidelines using a fixed percentage of baseline peak performance and symptom scores achieves and sustains training intensities recommended according to the American College of Sports Medicine. Answer: Yes, walking and stair climbing are different in terms of the physiological responses they elicit in patients with severe chronic obstructive pulmonary disease (COPD). A study comparing the two activities found that although the drop in arterial oxygen pressure (PaO2) during walking and stair climbing was comparable, stair climbing caused more dyspnea, higher blood lactate levels, a more pronounced drop in pH, and a higher increase in systolic blood pressure than walking. Additionally, stair climbing, but not walking, resulted in prolonged lung hyperinflation. The study also noted that there was no relationship between the distance walked during a 6-minute walking test and the time needed for stair climbing, suggesting that the pathophysiological changes during walking do not predict those during stair climbing in patients with severe COPD (PUBMED:18280129). Furthermore, stair climbing has been shown to be responsive to training in COPD patients and is correlated with changes in the COPD assessment test (CAT) score following pulmonary rehabilitation (PR). This indicates that stair climbing can be a useful tool to monitor improvement following PR in COPD patients (PUBMED:28169985). However, the application of continuous positive airway pressure (CPAP) during stair climbing did not prevent exercise-induced dynamic hyperinflation or reduce dyspnea in oxygen-dependent COPD patients, although it did reduce limb discomfort when combined with supplemental oxygen (PUBMED:23547631). In terms of preoperative risk assessment, stair-climbing tests did not predict postoperative pulmonary complications (PPC) better than self-reported poor functional capacity in patients with known or suspected COPD (PUBMED:37445215). Additionally, stair climbing has been used to estimate ventilatory reserve and peak oxygen uptake in patients with chronic airflow obstruction, providing a simple method to evaluate cardiopulmonary reserve (PUBMED:8222791). Overall, the evidence suggests that stair climbing imposes a greater physiological burden than walking in patients with severe COPD and can be a valuable tool for both assessment and training in this patient population.
Instruction: Can a thoracic surgeon identify lymph node metastases during surgery based on their size? Abstracts: abstract_id: PUBMED:20176204 Evaluation of mediastinal lymph nodes with endobronchial ultrasound: the thoracic surgeon's perspective. Objective: The objectives of our study are to (1) describe our experience with endobronchial ultrasound-guided fine-needle aspiration of mediastinal lymph nodes and (2) illustrate how thoracic surgeons facile with ultrasound-guided fine-needle aspiration have the potential to streamline patient care. Methods: We performed a retrospective review of all patients within our prospectively maintained database who underwent endobronchial ultrasound-guided fine-needle aspiration of mediastinal lymph nodes by thoracic surgeons at the University of Minnesota from September 1, 2006, to April 15, 2009. We included patients in our analysis if (1) their malignancy diagnosis was based on immediate endobronchial ultrasound-guided fine-needle aspiration cytology or (2) they underwent a confirmatory procedure (ie, mediastinoscopy or thoracoscopy) that sampled the same mediastinal lymph node stations biopsied by endobronchial ultrasound-guided fine-needle aspiration to verify normal, benign, or nondiagnostic endobronchial ultrasound-guided fine-needle aspiration findings. We also collected data on additional diagnostic or therapeutic procedures performed in the same anesthesia setting as endobronchial ultrasound-guided fine-needle aspiration. Results: Over the study period, 192 patients underwent endobronchial ultrasound-guided fine-needle aspiration; 98 patients met our inclusion criteria. We achieved a sensitivity of 87.9%, specificity of 97.4%, and diagnostic accuracy of 91.7%. For patients undergoing lung cancer staging, we sampled a mean of 3.0 +/- 0.9 mediastinal lymph node stations. Half of our patients underwent an additional diagnostic or therapeutic procedure at the time of endobronchial ultrasound-guided fine-needle aspiration. Conclusion: Thoracic surgeons who perform endobronchial ultrasound-guided fine-needle aspiration can achieve excellent sensitivity, specificity, and diagnostic accuracy while adhering to sound oncologic principles. Endobronchial ultrasound-guided fine-needle aspiration adds to the thoracic surgeon's unique capacity to expedite a diagnostic workup and treatment, thereby streamlining patient care. abstract_id: PUBMED:27680062 Optimal lymphadenectomy for thoracic esophageal cancer: three-field or modified two-field lymphadenectomy Differences in operative procedure and knowledge of esophageal cancer exist among surgeons from different countries and regions. There is controversy in the surgical treatment of esophageal cancer, especially in the extent of lymphadenectomy. Until now, results of the three-field lymphadenectomy and two-field lymphadenectomy are mostly reported by retrospective studies from Japan and China. Three-field lymphadenectomy has been initiated in Fujian Provincial Cancer Hospital since 1990s. After evaluating our database, we found that three-field was superior to two-field lymphadenectomy in terms of long-term survival for patients with upper thoracic esophageal cancer, whereas for those with middle or lower thoracic esophageal cancer, the survival benefit of three-field lymphadenectomy was reduced. Therefore, we propose to perform three-field lymphadenectomy for upper thoracic esophageal cancer. In middle or lower thoracic esophageal cancer, we suggest to perform modified two-field lymphadenectomy in most cases, and three-field lymphadenectomy in selective cases. Video-assisted two-field lymphadenectomy is feasible. Based on the national condition of China, we advise to perform thoracic duct removal only in patients with posterior mediastinal or peri-ductus node metastasis to achieve curative effect. abstract_id: PUBMED:17710360 Extramural lymphatic drainage from the thoracic esophagus based on minute cadaveric dissections: fundamentals for the sentinel node navigation surgery for the thoracic esophageal cancers. In order to elucidate the lymphatic pathways from the thoracic esophagus, minute dissection of five adult cadavers, from the neck through the diaphragm, was performed. Peri-esophageal lymphatics were dissected from both the anterior and posterior aspects. The topographical differences between the right and left lymphatic drainage were revealed. The right lymphatic drainage system (RDS) was basically longitudinal and multi-stationed. Longitudinal lymphatics were relatively poorly developed in the left lymphatic drainage system (LDS), and direct drainage to the thoracic duct from the left wall of the thoracic esophagus, was frequently observed. The right uppermost thoracic paratracheal node received almost all levels of the right esophageal wall, and this node was thought to be the key node in the RDS. A contralateral lymphatic pathway was relatively frequently observed in the middle and lower thoracic esophagus. These results seemed to be in agreement with the anatomical and clinicopathological data in the literature, and might serve as a basis for sentinel node navigation surgery for the thoracic esophageal cancers. abstract_id: PUBMED:33676903 The Presence of Metastatic Thoracic Duct Lymph Nodes in Western Esophageal Cancer Patients: A Multinational Observational Study. Background: The thoracic lymphadenectomy during an esophagectomy for esophageal cancer includes resection of the thoracic duct (TD) compartment containing the TD lymph nodes (TDLNs). The role of TD compartment resection is still a topic of debate since metastatic TDLNs have only been demonstrated in squamous cell carcinomas in Eastern esophageal cancer patients. Therefore, the aim of this study was to assess the presence and metastatic involvement of TDLNs in a Western population, in which adenocarcinoma is the predominant type of esophageal cancer. Methods: From July 2017 to May 2020, all consecutive patients undergoing an open or robot-assisted transthoracic esophagectomy with concurrent lymphadenectomy and resection of the TD compartment in the University Medical Center Utrecht in Utrecht, the Netherlands, and the Città della Salute e della Scienza University Hospital in Turin, Italy, were included. The TD compartment was resected en bloc and was separated in the operation room by the operating surgeon after which it was macroscopically and microscopically assessed for (metastatic) TDLNs by the pathologist. Results: A total of 117 patients with an adenocarcinoma (73%) or squamous cell carcinoma (27%) of the esophagus were included. In 61 (52%) patients, TDLNs were found, containing metastasis in 9 (15%) patients. No major complications related to TD compartment resection were observed. Conclusions: This study demonstrates the presence of metastatic TDLNs in adenocarcinomas of the esophagus. This result provides a valid argument to routinely extend the thoracic lymphadenectomy with resection of the TD compartment during an esophagectomy for esophageal cancer. abstract_id: PUBMED:32175383 Clinical significance and prognosis of supraclavicular lymph node metastasis in patients with thoracic esophageal cancer. Background: Although most studies proved that thoracic esophageal cancer surgery with supraclavicular lymph nodes (SCLNs) metastasis could benefit, less than 30% of the 5-year survival rate remained controversy on its surgical treatment. In this study, we aimed to analyze the prognosis of SCLNs on the different segments of thoracic esophageal cancer, which will supply a reference for the treatment of this disease. Methods: Retrospectively collected the clinical data of 163 patients with thoracic esophageal squamous cancer (ESCC) and compared the effects of SCLNs on prognosis in different segments. Results: Patients with SCLNs metastasis had a worse prognosis than the negative group (P&lt;0.001). In the upper thoracic group, there was no significant difference in OS between SCLNs positive group and negative group (P=0.077); however, in the middle and lower thoracic group, SCLNs positive group had a worse prognosis than the negative group (P&lt;0.001) and lymph nodes positive in other sites (except for SCLNs) (P=0.039). Multivariate analysis found that SCLNs metastasis was an independent risk factor affecting the prognosis of ESCC in the middle and lower thoracic segments (P=0.007). Conclusions: For patients with upper thoracic ESCC, SCLNs appear to be regional nodes. For the middle and lower thoracic ESCC, SCLNs should be defined as distant metastasis, and neoadjuvant therapy first may be an available therapy. abstract_id: PUBMED:20545973 Learning of thoracoscopic radical esophagectomy: how can the learning curve be made short and flat? Attainment of proficiency in video-assisted thoracoscopic radical esophagectomy (VATS) for thoracic esophageal cancer requires much experience. We have mastered this procedure safely under the direction of an experienced surgeon. After adoption of the procedure, the educated surgeon directed induction of this surgical procedure at another institution. We evaluated the efficacy of instruction during the induction period by comparing the results at the two institutions in which VATS had been newly induced. We defined the induction period as the time from the beginning of VATS to the time when the last instruction was carried out. From January 2003 to December 2007, 53 patients were candidates for VATS at Kanazawa University (institution 1). Of these, 46 patients underwent curative VATS by a single operator. We divided this period into three parts: the induction period of VATS, post-induction period, and proficient period when the educated surgeon of institution 1 directed the procedure at Maebashi Red Cross Hospital (institution 2). At institution 1, 12 VATS were scheduled, and nine procedures (75%) (group A) including eight instructions were completed during the induction period (from January 2003 to August 2004). Thereafter, VATS was performed without instruction. In the post-induction period, nine VATS were scheduled, and eight procedures (88.8%) (group B) were completed from September 2004 to August 2005. Subsequently, 32 VATS were scheduled, and 29 procedures (90.6%) (group C) were completed during the proficient period (from September 2005 to December 2007). The surgeon at Maebashi Red Cross Hospital (institution 2) started to perform VATS under the direction of the surgeon who had been educated at institution 1 from September 2005. VATS was completed in 13 (76.4%) (group D) of 17 cases by a single surgeon including seven instructions during the induction period at institution 2 from September 2005 to December 2007. No lethal complication occurred during the induction period at both institutions. We compared the results of VATS among four groups from the two institutions. There were no differences in the background and clinicopathological features among the four groups. The number of dissected lymph nodes and amount of thoracic blood loss were similar in the four groups (35 [22-52] vs 41 [26-53] vs 32 [17-69] vs 29 [17-42] nodes, P = 0.139, and 170 [90-380] vs 275 [130-550] vs 220 [10-660] vs 210 [75-543] g, P = 0.373, respectively). There was no difference in the duration of the thoracic procedure during the induction period at the two institutions. However, the duration of the procedure was significantly shorter in the proficient period of institution 1 (group C: 266 [195-555] minutes) than in the induction period of both institutions (group A: 350 [280-448] minutes [P = 0.005] and group D: 345 [270-420] mL [P = 0.002]). There were no surgery-related deaths in any of the groups. The incidence of postoperative complications did not differ among the four groups. Thoracoscopic radical esophagectomy can be mastered quickly and safely with a flat learning curve under the direction of an experienced surgeon. The educated surgeon can instruct surgeons at another institution on how to perform thoracoscopic esophagectomy. The operation time of thoracoscopic surgery is shortened by experience. abstract_id: PUBMED:18978560 Pulmonary metastasectomy: a survey of current practice amongst members of the European Society of Thoracic Surgeons. Objective: Currently, no randomized trials exist to guide thoracic surgeons in the field of pulmonary metastasectomy. This study investigates the current clinical practice among European Society of Thoracic Surgeon (ESTS) members. Methods: A Web-based questionnaire was created exploring the clinical approach to lung metastasectomy. All ESTS members were surveyed. Results: One hundred forty-six complete responses were received from the 494 consultant ESTS members surveyed (29.6%). For most respondents (68%), lung metastasectomy represents a minor proportion (0-10%) of their clinical volume. Approximately 90% of respondents always/usually review their lung metastasectomy cases within a multidisciplinary meeting. Helical computed tomography is most commonly used (74%) for the detection of metastases, while positron emission tomography is used additionally in less than 50%. Most of respondents (92% and 74%, respectively) consider unresectable primary tumor and predicted incomplete metastasectomy as absolute contraindications to lung metastasectomy. The most frequently performed resection is wedge excision (92%). Palpation of the lung is considered necessary by 65%, while 40% use a thoracoscopic approach with therapeutic intent. Though 65% consider pathologically positive nodes a contraindication to metastasectomy, a similar number rarely/never perform mediastinoscopy before metastasectomy. At the time of metastasectomy 55% perform mediastinal lymph node sampling whereas 33% perform no nodal dissection whatsoever. Conclusions: The survey provides a large, time-sensitive database summarizing the clinical practice of pulmonary metastasectomy by members of the ESTS. Responses demonstrate a remarkable consistency of practice patterns, though certain areas of potential controversy showed greater variance. Conceivably, these divergent approaches will encourage future collaborative studies aimed at identifying evidence-based practices for patients with pulmonary metastases. abstract_id: PUBMED:26843431 Nanotechnology applications in thoracic surgery. Nanotechnology is an emerging, rapidly evolving field with the potential to significantly impact care across the full spectrum of cancer therapy. Of note, several recent nanotechnological advances show particular promise to improve outcomes for thoracic surgical patients. A variety of nanotechnologies are described that offer possible solutions to existing challenges encountered in the detection, diagnosis and treatment of lung cancer. Nanotechnology-based imaging platforms have the ability to improve the surgical care of patients with thoracic malignancies through technological advances in intraoperative tumour localization, lymph node mapping and accuracy of tumour resection. Moreover, nanotechnology is poised to revolutionize adjuvant lung cancer therapy. Common chemotherapeutic drugs, such as paclitaxel, docetaxel and doxorubicin, are being formulated using various nanotechnologies to improve drug delivery, whereas nanoparticle (NP)-based imaging technologies can monitor the tumour microenvironment and facilitate molecularly targeted lung cancer therapy. Although early nanotechnology-based delivery systems show promise, the next frontier in lung cancer therapy is the development of 'theranostic' multifunctional NPs capable of integrating diagnosis, drug monitoring, tumour targeting and controlled drug release into various unifying platforms. This article provides an overview of key existing and emerging nanotechnology platforms that may find clinical application in thoracic surgery in the near future. abstract_id: PUBMED:19889448 Pathologic diagnosis and resection of suspicious thoracic metastases in patients with cervical cancer through thoracotomy or video-assisted thoracic surgery. Objectives: The purpose of this study was to clarify the role of resection for suspicious thoracic metastasis in cervical cancer patients with thoracotomy or video-assisted thoracic surgery (VATS). Methods: A retrospective study was performed involving patients with primary and recurrent cervical cancer who underwent surgical resections for suspicious thoracic metastatic lesions at a single academic institution between January 2003 and February 2009. Results: Twenty-three pulmonary resections were performed on 21 patients via thoracotomies (n=10), VATS (n=11), and thoracotomies following VATS (n=2). All suspicious pulmonary lesions were completely resected in all 3 surgical groups. A total of 49 surgical procedures were performed, as follows: 25 wedge resections, 9 lobectomies, 12 mediastinal lymph node dissections, 1 segmentectomy, 1 diaphragmatic resection, and 1 pleurectomy. The pathologic evaluation revealed metastasis from cervical cancer, primary lung cancer, no pathologic abnormality, and tuberculosis in 14, 7, 2, and 1 of the surgical specimens, respectively; 1 patient had primary lung cancer and mediastinal lymph node metastasis from cervical cancer. Different pathologic diagnoses were rendered in 6 patients compared to the image-based diagnoses. After pulmonary metastasectomy, 2 patients died from the disease, 3 patients were alive with disease, and 16 patients were alive without disease, with a median follow-up of 16 months (range, 2-67 months). Conclusions: Resection of suspicious thoracic metastasis with thoracotomy or VATS permits accurate pathologic diagnosis and favorable curative resection in selected patients with cervical cancer. abstract_id: PUBMED:20009772 Can a thoracic surgeon identify lymph node metastases during surgery based on their size? Analysis of 844 metastatic and 10,462 nonmetastatic lymph nodes. Objective: A metastatic lymph node commonly becomes enlarge; however, there is limited data available with regard to the direct measurement of lymph nodes and their clinicopathologic characteristics. Methods: The size of dissected lymph node was quantified in a total of 848 nodes with metastasis and 10,462 nodes without metastasis from 454 patients with lung cancer who underwent a pulmonary resection with lymph node dissection. Results: The short axis and the volume of the metastatic lymph nodes were significantly greater than those of the nonmetastatic ones. The smaller the lymph node, the less frequently the lymph nodes were metastatic; however, the ratios of nodes smaller than the fifth largest lymph node with metastasis of adenocarcinoma and squamous cell carcinoma were 21.8 to 26.2%, respectively. When the hilar and mediastinal lymph node stations were examined, 1.14 to 4.00% of the lung cancer patients had lymph node metastasis in small lymph node despite having no metastases in the largest and second largest lymph nodes. Conclusions: The small lymph nodes in the hilar or mediastinal stations frequently had metastases of carcinoma even though largest and second largest lymph nodes were negative for metastases, especially in adenocarcinoma cases. Surgical oncologists should, therefore, perform systemic lymph node dissection, and not sampling, during a pulmonary resection of lung cancer. Answer: No, a thoracic surgeon cannot reliably identify lymph node metastases during surgery based solely on their size. The study analyzing 844 metastatic and 10,462 nonmetastatic lymph nodes found that although metastatic lymph nodes were significantly larger than nonmetastatic ones on average, small lymph nodes frequently had metastases of carcinoma even when the largest and second largest lymph nodes were negative for metastases. This was particularly true for adenocarcinoma cases. The study concluded that small lymph nodes in the hilar or mediastinal stations could have metastases, and therefore, systemic lymph node dissection, rather than sampling, should be performed during pulmonary resection for lung cancer (PUBMED:20009772).
Instruction: Delivering 'Gold Standards' in end-of-life care in care homes: a question of teamwork? Abstracts: abstract_id: PUBMED:19220608 Delivering 'Gold Standards' in end-of-life care in care homes: a question of teamwork? Aim: This paper reports findings from a study which investigated the introduction of the Gold Standards Framework for improving end-of-life care into care homes in England. Background: The Gold Standards Framework was developed in primary care to improve the care provided for people at the end-of-life. Following its successful introduction to this setting it was adapted and implemented in care homes. Design: A case study approach was appropriate for this study of a care programme into a 'real life' setting. Method: Fourteen managers participated in an initial telephone interview. Ten of these homes participated in the case study phase and 61 staff participated in individual or group interviews. Seven residents and three relatives participated in face to face interviews. Qualitative data were analysed in line with the template approach. Survey data were also analysed and a validated Teamworking Questionnaire was used. Results: It was found that teamwork is central to the successful introduction of the Gold Standards Framework in Care Homes. Good staffing levels and management support were also perceived to be key factors in homes where the Framework became established. Conclusion: Effective teamwork was necessary for changes in end-of-life care to be achieved in the care homes. If end-of-life care and other improvements in practice are to continue, teamwork will need to be supported and developed. Relevance To Clinical Practice: Effective teamwork appears to be a pre-requisite for successful implementation of new programmes of care. Organisations wishing to implement such programmes should assess the quality of teamwork and may need to address this first. abstract_id: PUBMED:28240078 Using the Gold Standards Framework to deliver good end of life care. The aims of excellent end of life care (EOLC) and nursing are at the heart of healthcare. Chelsea and Westminster Hospital NHS Foundation Trust, a multi-site London teaching care provider, recognises and values the importance of good EOLC, and the quality of EOLC is used as one of the key metrics in assessing the quality of patient and family care across the trust. The principles of EOLC, including those enshrined in the Gold Standards Framework (GSF), are closely aligned with the trust's core values. Each member of staff is encouraged and supported, through the GSF process, to recognise and respond as befits their role in implementing the principles of EOLC, agreed by staff, and by patients and their relatives. This article describes the experience of, and collaboration between, trust staff and members of the GSF team, who have worked together for the past 18 months, and how this work has placed EOLC at the heart of patient care and staff recruitment. This collaboration has helped to enrich the culture of compassion and care that the trust aims to deliver, and to focus on providing person-centred care. abstract_id: PUBMED:32571155 Are You Prepared for End-of-Life Care? A Content Review of Canadian End-of-Life Care Standards and Competencies for Saskatchewan. Background: The provision of end-of-life care is receiving attention locally, provincially, and nationally in Canada. It is important to ensure that interprofessional standards and competencies are in place to provide quality end-of-life care that meets the needs of patients and their families. The purpose of this content review was to identify core standards and competencies essential to an interprofessional team providing end-of-life care. Methods: The researchers conducted a review of health professional associations and registration bodies that support professionals providing end-of-life care to identify existing standards and competencies. Key concepts were reviewed and organized using thematic analysis; relationships were developed; and core themes for interprofessional end-of-life care were identified. Results: Four themes essential to the provision of end-of-life care were common across all health professions: (1) access to care, (2) professional practice, (3) person-centered care, and (4) the process and delivery of care. Conclusions: Health professional associations need to ensure end-of-life care standards and competencies are in place for the provision of appropriate and holistic care. Aligning standards and competencies across professions improves the preparedness of health professionals to provide interprofessional end-of-life care. abstract_id: PUBMED:24942282 The effect of using high facilitation when implementing the Gold Standards Framework in Care Homes programme: a cluster randomised controlled trial. Background: The provision of quality end-of-life care is increasingly on the national agenda in many countries. In the United Kingdom, the Gold Standards Framework for Care Homes programme has been promoted as a national framework for improving end-of-life care. While its implementation is recommended, there are no national guidelines for facilitators to follow to undertake this role. Aim: It was hypothesised that action learning alongside high facilitation when implementing the Gold Standards Framework for Care Homes programme will result in a reduced proportion of hospital deaths for residents and improvement in the care home staff ability to facilitate good end-of-life care. Design: A cluster randomised controlled trial where 24 nursing homes received high facilitation to enable them to implement the Gold Standards Framework for Care Homes programme. The managers of 12 nursing homes additionally took part in action learning sets. A third group (14 nursing homes) received the 'standard' Gold Standards Framework for Care Homes facilitation available in their locality. Setting/participants: In total, 38 nursing homes providing care for frail older people, their deceased residents and their nurse managers. Results: A greater proportion of residents died in those nursing homes receiving high facilitation and action learning but not significantly so. There was a significant association between the level of facilitation and nursing homes completing the Gold Standards Framework for Care Homes programme through to accreditation. Year-on-year change occurred across all outcome measures. Conclusion: There is a danger that without national guidelines, facilitation of the Gold Standards Framework for Care Homes programme will vary and consequently so will its implementation. The nurse manager of a care home must be actively engaged when implementing the Gold Standards Framework for Care Homes programme. abstract_id: PUBMED:35465704 The Gold Standards Framework Hospital programme: implementation and progress. Aim: The implementation and impact of the Gold Standards Framework Hospitals (GSFH) service development programme is described. Background: A third of hospital inpatients may be in their last year of life when admitted to hospital. Many will be repeat and unplanned admissions. National policy guidance seeks to change this pattern through implementing patient choice in end-of-life care (EOLC) planning. Method: GSFH training and resources help general hospital staff teams develop skills as they identify patients nearing the end of life, rapidly assess their needs and offer advance care planning (ACP). Findings: Audit data shows teams that have undergone GSFH training demonstrate earlier identification of patients needing EOLC and offer ACP targeting their individual care needs. Conclusion: The model has been taken up by over 350 hospital wards, with a large number seeking accreditation to demonstrate sustainability of good practice in EOLC management. abstract_id: PUBMED:29592534 End-of-Life Care in Intensive Care Units: Nursing strategies of family involvement at the end of life Background: Family members of people dying in the intensive care unit (ICU) are exposed to many stress factors and they often do not experience involvement in End-of-Life (EoL) situations. For example, they criticize a low degree of participation in patients care, delayed or incomplete information and lack of privacy. Even nursing staff is facing various obstacles in EoL situations in ICUs. Aim: This study investigates strategies used by ICU nursing staff in German-speaking Switzerland to increase family members participation in situations at the end of life. Method: Data was collected by conducting 12 semi-structured interviews using an approach based on Grounded Theory. A model was developed to explain nursing strategies for family involvement in EoL situations in the ICU. Conclusions: Nurses provide personal space and tranquillity for family members and allow them to be present at any time. Against this background, they support family members and enable them to say goodbye consciously to a loved one. Subsequent work should examine the effectiveness of the strategies described, particularly in terms of stress reactions displayed by family members in the aftermath of EoL situations. In practice, family members should be provided space for privacy. The entire healthcare team is recommended to identify and pursue common values and objectives. Moreover, intradisciplinary exchange and mentoring need to be encouraged. In order to prepare future nursing staff for EoL situations in the ICU, recognizing and promoting their educational skills is mandatory. abstract_id: PUBMED:25615833 Views and experiences of nurses and health-care assistants in nursing care homes about the Gold Standards Framework. Aim: To explore the views and experiences of nurses and health-care support staff about the use of the Gold Standards Framework (GSF) for end-of-life care (EoLC) for older people in nursing care homes (NCHs) with GSF accreditation. Methods: A qualitative descriptive study was conducted with three purposively selected NCHs in London. Individual interviews were conducted with NCH managers (n=3) and in each NCH, a focus group was conducted with registered nurses (RNs) and health-care assistants (HCAs): focus group 1, n=2 RN, n=2 HCA; focus group 2, n=2 RN, n=3 HCA; focus group 3, n=3 RN, n=3 HCA. Interviews were audio-recorded, transcribed and analysed using framework analysis. Findings: Three core themes were identified: (i) a positive regard for the GSF for care homes (GSFCH); (ii) challenges around EoLC for older people; and (iii) difficulties in using the GSFCH. Conclusions: RNs, HCAs and managers regarded the training and support afforded by the GSFCH programme to inform EoLC for older residents positively. The framework has the potential to promote a coordinated approach to EoLC for older people. In the post accreditation period, there is a need for ongoing support and development to help embed the key tenets of the GSFCH in the culture of caring. abstract_id: PUBMED:27802082 A practice development initiative supporting care home staff deliver high quality end-of-life care. Background: The global population is changing with an increasing percentage of the oldest old, many of whom in the UK are looked after in care homes. Care homes now provide care for a fifth of the UK population who die each year. However, most nursing care homes are privately owned and not part of the NHS, which exposes staff to a lack of skills/knowledge in relation to end-of-life care. Methods: To implement the Gold Standards Framework in Care Homes Programme and audit outcomes within nursing care homes across five Clinical Commissioning Groups over a 7-year period using a research-based model of facilitation. Results: The percentage of residents dying in nursing care homes increased from 57% to 79%, with improvement in other outcomes. Conclusions: A 'high' facilitation model, including a sustainability initiative and ongoing audit, contributed to significant improvements when implementing the Programme. Reciprocity and trustworthiness underpin the success of this initiative. abstract_id: PUBMED:29573591 Facilitation of an end-of-life care programme into practice within UK nursing care homes: A mixed-methods study. Background: The predicted demographic changes internationally have implications for the nature of care that older people receive and place of care as they age. Healthcare policy now promotes the implementation of end-of-life care interventions to improve care delivery within different settings. The Gold Standards Framework in Care Homes (GSFCH) programme is one end-of-life care initiative recommended by the English Department of Health. Only a small number of care homes that start the programme complete it, which raises questions about the implementation process. Aim: To identify the type, role, impact and cost of facilitation when implementing the GSFCH programme into nursing care home practice. Design: A mixed-methods study. Setting: Nursing care homes in south-east England. Participants: Staff from 38 nursing care homes undertaking the GSFCH programme. Staff in 24 nursing care homes received high facilitation. Of those, 12 also received action learning. The remaining 14 nursing care homes received usual local facilitation of the GSFCH programme. Methods: Study data were collected from staff employed within nursing care homes (home managers and GSFCH coordinators) and external facilitators associated with the homes. Data collection included interviews, surveys and facilitator activity logs. Following separate quantitative (descriptive statistics) and qualitative (template) data analysis the data sets were integrated by 'following a thread'. This paper reports study data in relation to facilitation. Results: Three facilitation approaches were provided to nursing home staff when implementing the GSFCH programme: 'fitting it in' facilitation; 'as requested' facilitation; and 'being present' facilitation. 'Being present' facilitation most effectively enabled the completion of the programme, through to accreditation. However, it was not sufficient to just be present. Without mastery and commitment, from all participants, including the external facilitator, learning and initiation of change failed to occur. Implementation of the programme required an external facilitator who could mediate multi-layered learning at an individual, organisational and appreciative system level. The cost savings in the study outweighed the cost of providing a 'being present' approach to facilitation. Conclusions: Different types of facilitation are offered to support the implementation of end-of-life care initiatives. However, in this study 'being present' facilitation, when supported by multi-layered learning, was the only approach that initiated the change required. abstract_id: PUBMED:33970526 Delivering end-of-life care for patients with cancer at home: Interviews exploring the views and experiences of general practitioners. Many patients with terminal cancer wish to die at home and general practitioners in the United Kingdom have a critical role in providing this care. However, it has been suggested general practitioners lack confidence in end-of-life care. It is important to explore with general practitioners their experience and perspectives including feelings of confidence delivering end-of-life care to people with cancer. The aim of this study was to explore general practitioners experiences of providing end-of-life care for people with cancer in the home setting and their perceptions of confidence in this role as well as understanding implications this has on policy design. A qualitative study design was employed using semi-structured interviews and analysed using thematic analysis. Nineteen general practitioners from London were purposively sampled from eight general practices and a primary care university department in 2018-2019, supplemented with snowballing methods. Five main themes were constructed: (a) the subjective nature of defining palliative and end-of-life care; (b) importance of communication and managing expectations; (c) complexity in prescribing; (d) challenging nature of delivering end-of-life care; (e) the unclear role of primary care in palliative care. General practitioners viewed end-of-life care as challenging; specific difficulties surrounded communication and prescribing. These challenges coupled with a poorly defined role created a spread in perceived confidence. Experience and exposure were seen as enabling confidence. Specialist palliative care service expansion had important implications on deskilling of essential competencies and reducing confidence levels in general practitioners. This feeds into a complex cycle of causation, leading to further delegation of care. Answer: The successful introduction of the Gold Standards Framework (GSF) in care homes, which aims to improve end-of-life care, is indeed a question of teamwork. The research indicates that teamwork is central to the effective implementation of the GSF in care homes. Good staffing levels and management support are also perceived as key factors in homes where the Framework becomes established (PUBMED:19220608). Effective teamwork appears to be a prerequisite for the successful implementation of new programs of care, and organizations wishing to implement such programs should assess the quality of teamwork and may need to address this first (PUBMED:19220608). Furthermore, the use of the GSF to deliver good end-of-life care is closely aligned with the core values of healthcare organizations, such as Chelsea and Westminster Hospital NHS Foundation Trust. Staff members are encouraged and supported through the GSF process to recognize and respond appropriately to the principles of end-of-life care, which is integrated into the heart of patient care and staff recruitment (PUBMED:28240078). In addition to teamwork, the facilitation of the GSF program is crucial. A study found that high facilitation and action learning when implementing the GSF in Care Homes program resulted in a reduced proportion of hospital deaths for residents and improvement in the care home staff's ability to facilitate good end-of-life care (PUBMED:24942282). However, without national guidelines, facilitation of the GSF program may vary, and consequently, so will its implementation (PUBMED:24942282). Overall, the successful delivery of 'Gold Standards' in end-of-life care in care homes is contingent upon effective teamwork, adequate staffing, management support, and proper facilitation of the GSF program. These elements are essential for ensuring that end-of-life care is person-centered, professional, and accessible, as indicated by the common themes identified across health professions (PUBMED:32571155).
Instruction: Does new-onset postoperative atrial fibrillation after coronary artery bypass grafting affect postoperative quality of life? Abstracts: abstract_id: PUBMED:22841168 Does new-onset postoperative atrial fibrillation after coronary artery bypass grafting affect postoperative quality of life? Objectives: New-onset postoperative atrial fibrillation (POAF) is a common complication after cardiac surgery. We investigated the effect of POAF on quality of life after coronary artery bypass grafting. Methods: All patients who underwent nonemergency coronary artery bypass grafting between March 2009 and January 2011 were requested to complete a Short Form-36 Health Survey before and 6 months after the procedure. Norm-based scores of 8 health status domains and 2 component summary scores were calculated. Only patients undergoing first-time coronary artery bypass grafting, with no history of atrial fibrillation, were included in the analyses. Results: Of 1608 patients, 360 (22.4%) had POAF diagnosed. Twenty-eight patients died within half a year after the procedure (1.7% in the no POAF group and 1.8% in the POAF group; P = .90). After excluding these patients, data from 1580 patients were analyzed. Preoperative questionnaires were returned by 66% of the patients and postoperative questionnaires by 65%. Preoperative scores did not differ between patients with and without POAF in any subcategory (0/10). After the procedure, 4 of 10 scores were worse in the group with POAF compared to patients without POAF. Patients without POAF improved in all subcategories (10/10) after the procedure, whereas those with POAF did so in only 7 of 10. Multilinear regression showed POAF to be an independent negative predictor for improved quality of life 6 months postoperatively in 7 of 10 subcategories, including both mental and physical component summary scores. Conclusions: New-onset POAF does affect 6-month postoperative mental and physical health, possibly warranting more aggressive POAF treatment. abstract_id: PUBMED:36271783 Prediction of postoperative atrial fibrillation with postoperative epicardial electrograms. Objectives. New-onset postoperative atrial fibrillation (POAF) is a common complication after cardiac surgery. The arrhythmia often entails a longer hospital stay, greater risk of other complications, and higher mortality both short- and long-term. An investigation of the use of early atrial electrograms in predicting POAF in cardiac surgery was performed. Design. In this prospective observational study, a total of 99 consecutive adult patients undergoing coronary artery bypass grafting, valve surgery or both were included. On the first postoperative morning, standard 12-lead electrograms (ECG), unipolar atrial electrograms (aEG), and vital values were recorded. The outcome was new-onset POAF within one month postoperatively. Results. Three multivariable prediction models for POAF were formed using measurements derived from the ECG, aEG, and patient characteristics. Age, body mass index, and two unipolar electrogram measurements quantifying local activation time and fractionation were strongly associated with the outcome POAF. The performance of the POAF prediction models was assessed through receiver operating curve characteristics with cross-validation, and discrimination using the leave-one-out-method to internally validate the models. The cross-validated area under the receiver operating characteristic curve (AUC) was improved in a prediction model using atrial-derived electrogram variables (AUC 0.796, 95% CI 0.698-0.894), compared with previous ECG and clinical models (AUC 0.716, 95% CI 0.606-0.826 and AUC 0.718, 95% CI 0.613-0.822, respectively). Conclusions. This study found that easily obtainable measurements from atrial electrograms may be helpful in identifying patients at risk of POAF in cardiac surgery. abstract_id: PUBMED:38126792 New-onset postoperative atrial fibrillation following coronary artery bypass grafting: Incidence, clinical course and short-term outcomes Introduction: The aims of this retrospective study were to investigate the incidence, clinical course and short term outcomes of new-onset postoperative atrial fibrillation (POAF) following coronary artery bypass surgery (CABG). Materials And Methods: A nation-wide study on 1622 patients who underwent CABG from 2006-2020 at Landspitali University Hospital. Clinical data were extracted from registries and 121 patients with pre-existing AF excluded, leaving 1501 patients for further analysis. Patient charts and postoperative ECGs were manually reviewed for determining details of POAF, which was defined as a postoperative episode of AF before discharge lasting at least 5 minutes. Patients with POAF (n=483) were compared to non-POAF patients (n=1018). Results: Altogether 483 (32.2%) patients developed POAF; the annual incidence decreasing over time (tau= -0,45, p=0.023). Most patients were diagnosed on the second day postoperatively (43.5%) and over 90% were diagnosed within 4 days. The median number of POAF episodes was 3 (IQR: 1-5), the first episode lasting 1-6 hours in half of the cases and the total POAF-duration being 12 hours median (IQR: 5-30). Over 94% of cases converted to sinus rythm before discharge, with 25 (5.3%) patients being discharged in AF. Most patients were treated with beta-blockers (98.8%), amiodarone (95%) and 14.9% with electric cardioversion. POAF-patients were older, had higher EuroSCORE II and a longer hospital stay, however, they had similar rates of early postoperative stroke and 30 day mortality. Conclusion: The incidence of POAF remains high and was associated with prolonged hospital stay, but not significantly higher 30 day mortality or early postoperative stroke compared to patients in sinus rhythm. POAF-episodes were predominantly transient and almost 95% of patients were discharged in sinus rythm. abstract_id: PUBMED:34330576 New-Onset Atrial Fibrillation After Cardiac Surgery is a Significant Risk Factor for Long-Term Stroke: An Eight-Year Prospective Cohort Study. Objectives: This study sought to determine the incidence and significance of new-onset atrial fibrillation as a risk factor for long-term stroke and mortality after cardiac surgery. Design: A prospective cohort study. Setting: Two large tertiary public hospitals. Participants: The study comprised 3008 patients who underwent coronary artery bypass grafting and/or valve surgery from 2008 to 2012. Interventions: None. Measurements And Main Results: New-onset atrial fibrillation was analyzed as a risk factor for postoperative stroke using a multivariate logistic regression model after adjustment for potential confounders. A Cox regression model with time-dependent variables was used to analyze relationships between new-onset atrial fibrillation and postoperative survival. New-onset atrial fibrillation was detected in 573 (19.0%) patients. Stroke occurred in 234 (7.8%) patients during the mean postoperative follow-up period of six ± two years. The incidence of postoperative stroke in patients with new-onset atrial fibrillation (9.9%) and patients with both preoperative and postoperative atrial fibrillation (13.8%) was higher than in patients with no atrial fibrillation (6.8%) (p = 0.002). New-onset atrial fibrillation (odds ratio, 1.53; 95% confidence interval [CI], 1.08-2.18; p = 0.017) was identified as an independent risk factor for postoperative stroke. A total of 518 (17.2%) mortalities occurred within the mean postoperative follow-up period of eight ± two years. New-onset atrial fibrillation was associated with shorter survival (hazard ratio, 1.49; 95% CI, 1.22-1.81; p &lt; 0.001) compared with patients with no atrial fibrillation. Conclusions: New-onset atrial fibrillation is a significant risk factor for long-term stroke and mortality after cardiac surgery. Close monitoring and treatment of this condition may be necessary to reduce the risk of postoperative stroke and mortality. abstract_id: PUBMED:25857671 Predicting New-Onset Postoperative Atrial Fibrillation in Cardiac Surgery Patients. Objective: To derive a simple clinical prediction rule identifying patients at high risk of developing new-onset postoperative atrial fibrillation (POAF) after cardiac surgery. Design: Retrospective analysis on prospectively collected observational data. Setting: A university-affiliated cardiac hospital. Participants: Adult patients undergoing coronary artery bypass grafting and/or valve surgery. Interventions: Observation for the occurrence of new-onset postoperative atrial fibrillation. Measurements And Main Results: Details on 28 preoperative variables from 999 patients were collected and significant predictors (p&lt;0.2) were inserted into multivariable logistic regression and reconfirmed with recursive partitioning. A total of 305 (30.5%) patients developed new-onset POAF. Eleven variables were associated significantly with atrial fibrillation. A multivariable logistic regression model included left atrial dilatation, mitral valve disease, and age. Coefficients from the model were converted into a simple 7-point predictive score. The risk of POAF per score is: 15.0%, if 0; 20%, if 1; 27%, if 2; 35%, if 3; 44%, if 4; 53%, if 5; 62%, if 6; and 70%, if 7. A score of 4 has a sensitivity of 44% and a specificity of 82% for POAF. A score of 6 has a sensitivity of 11% and a specificity of 97%. Bootstrapping with 5,000 samples confirmed the final model provided consistent predictions. Conclusions: This study proposed a simple predictive score incorporating three risk variables to identify cardiac surgical patients at high risk of developing new-onset POAF. Preventive treatment should target patients ≥ 65 years with left atrial dilatation and mitral valve disease. abstract_id: PUBMED:37713475 Cardiopulmonary bypass management and risk of new-onset atrial fibrillation after cardiac surgery. Objectives: Cardiopulmonary bypass (CPB) management may potentially play a role in the development of new-onset atrial fibrillation (AF) after cardiac surgery. The aim of this study was to explore this potential association. Methods: Patients who underwent coronary artery bypass grafting and/or valvular surgery during 2016-2020 were included in an observational single-centre study. Data collected from the Swedish Web System for Enhancement and Development of Evidence-Based Care in Heart Disease Evaluated According to Recommended Therapies registry and a local CPB database were merged. Associations between individual CPB variables (CPB and aortic clamp times, arterial and central venous pressure, mixed venous oxygen saturation, blood flow index, bladder temperature and haematocrit) and new-onset AF were analysed using multivariable logistic regression models adjusted for patient characteristics, comorbidities and surgical procedure. Results: Out of 1999 patients, 758 (37.9%) developed new-onset AF. Patients with new-onset postoperative AF were older, had a higher incidence of previous stroke, worse renal function and higher EuroSCORE II and CHA2DS2-VASc scores and more often underwent valve surgery. Longer CPB time [adjusted odds ratio 1.05 per 10 min (95% confidence interval 1.01-1.08); P = 0.008] and higher flow index [adjusted odds ratio 1.21 per 0.2 l/m2 (95% confidence interval 1.02-1.42); P = 0.026] were associated with an increased risk for new-onset AF, while the other variables were not. A sensitivity analysis only including patients with isolated coronary artery bypass grafting supported the primary analyses. Conclusions: CPB management following current guideline recommendations appears to have minor or no influence on the risk of developing new-onset AF after cardiac surgery. abstract_id: PUBMED:26006137 Efficiency of postoperative statin treatment for preventing new-onset postoperative atrial fibrillation in patients undergoing isolated coronary artery bypass grafting: A prospective randomized study. Objective: Recent studies have demonstrated that preoperative statin therapy reduces the incidence of postoperative atrial fibrillation (AF). The objective of this study was to assess the efficacy of statin therapy started in the early postoperative period for the prevention from new-onset AF after isolated coronary artery bypass grafting (CABG). Methods: This prospective and randomized study consisted of 60 consecutive patients who underwent elective isolated CABG. Patients were divided into two groups to examine the influence of statins: those with postoperative statin therapy (statin group, n=30) and those without it (non-statin group, n=30). Patient data were collected and analyzed prospectively. In the statin group, each extubated patient was given 40 mg of atorvastatin per day, starting from an average of 6 hours after the operation. Results: The overall incidence of postoperative AF was 30%. Postoperative AF occurred in 5 patients (16.7%) in the statin group. This was significantly lower compared with 13 patients (43.3%) in the non-statin group (p=0.049). According to the multivariate analysis, postoperative atorvastatin reduced the risk of postoperative AF by 49% [odds ratio (OR) 0.512, 95% confidence interval (CI) 0.005 to 0.517, p=0.012]. Also, age was an independent predictor of postoperative AF (OR 1.299, 95% CI 1.115 to 1.514, p=0.001). Conclusion: Postoperative statin therapy seems to reduce new-onset AF after isolated CABG in our study. abstract_id: PUBMED:27554222 Preoperative Electrocardiogram Score for Predicting New-Onset Postoperative Atrial Fibrillation in Patients Undergoing Cardiac Surgery. Objective: To investigate if electrocardiogram (ECG) markers from routine preoperative ECGs can be used in combination with clinical data to predict new-onset postoperative atrial fibrillation (POAF) following cardiac surgery. Design: Retrospective observational case-control study. Setting: Single-center university hospital. Participants: One hundred consecutive adult patients (50 POAF, 50 without POAF) who underwent coronary artery bypass grafting, valve surgery, or combinations. Interventions: Retrospective review of medical records and registration of POAF. Measurements And Main Results: Clinical data and demographics were retrieved from the Western Denmark Heart Registry and patient records. Paper tracings of preoperative ECGs were collected from patient records, and ECG measurements were read by two independent readers blinded to outcome. A subset of four clinical variables (age, gender, body mass index, and type of surgery) were selected to form a multivariate clinical prediction model for POAF and five ECG variables (QRS duration, PR interval, P-wave duration, left atrial enlargement, and left ventricular hypertrophy) were used in a multivariate ECG model. Adding ECG variables to the clinical prediction model significantly improved the area under the receiver operating characteristic curve from 0.54 to 0.67 (with cross-validation). The best predictive model for POAF was a combined clinical and ECG model with the following four variables: age, PR-interval, QRS duration, and left atrial enlargement. Conclusion: ECG markers obtained from a routine preoperative ECG may be helpful in predicting new-onset POAF in patients undergoing cardiac surgery. abstract_id: PUBMED:35574947 Stroke Risk Stratification in Patients With Postoperative Atrial Fibrillation After Coronary Artery Bypass Grafting. Background The CHA2DS2-VASc (congestive heart failure, hypertension, age ≥75 years, diabetes, previous stroke or TIA [transient ischemic attack], vascular disease, age 65 to 74 years, sex category female; 2 indicates 2 points, otherwise 1 point) scoring system is recommended to guide decisions on oral anticoagulation therapy for stroke prevention in patients with nonsurgery atrial fibrillation. A score ≥1 in men and ≥2 in women, corresponding to an annual stroke risk exceeding 1%, warrants long-term oral anticoagulation provided the bleeding risk is acceptable. However, in patients with new-onset postoperative atrial fibrillation, the optimal risk stratification method is unknown. The aim of this study was therefore to evaluate the CHA2DS2-VASc scoring system for estimating the 1-year ischemic stroke risk in patients with new-onset postoperative atrial fibrillation after coronary artery bypass grafting. Methods and Results All patients with new-onset postoperative atrial fibrillation and without oral anticoagulation after first-time isolated coronary artery bypass grafting performed in Sweden during 2007 to 2017 were eligible for this registry-based observational cohort study. The 1-year ischemic stroke rate at each step of the CHA2DS2-VASc score was estimated using a Kaplan-Meier estimator. Of the 6368 patients included (mean age, 69.9 years; 81% men), &gt;97% were treated with antiplatelet drugs. There were 147 ischemic strokes during the first year of follow-up. The ischemic stroke rate at 1 year was 0.3%, 0.7%, and 1.5% in patients with CHA2DS2-VASc scores of 1, 2, and 3, respectively, and ≥2.3% in patients with a score ≥4. A sensitivity analysis, with the inclusion of patients on anticoagulants, was performed and supported the primary results. Conclusions Patients with new-onset atrial fibrillation after coronary artery bypass grafting and a CHA2DS2-VASc score &lt;3 have such a low 1-year risk for ischemic stroke that oral anticoagulation therapy should probably be avoided. abstract_id: PUBMED:34950327 Contemporary Anticoagulation Practices for Postoperative Atrial Fibrillation: A Single Center Experience. Aims: Postoperative atrial fibrillation (POAF) is a frequent in-hospital complication after cardiac surgery. Surprisingly, despite its prevalence, management of this condition has not been well studied. One promising approach that has been evaluated in a limited number of studies is use of anticoagulation. However, the trends and patterns of real-world use of anticoagulation in POAF patients has not been systemically investigated. In this study, we aimed to determine real-world patterns of anticoagulation use for patients with POAF. Methods: We identified 200 patients undergoing coronary artery bypass (CABG) or cardiac valve surgery at University Hospitals Cleveland Medical Center over a 2 year period beginning January 2016 with new onset POAF. We reviewed charts to verify candidacy for inclusion in the study and to extract data on anticoagulation use, adverse outcomes, and CHA2DS2-VASc scores. Results: Anticoagulation use was low after CABG, but high after bioprosthetic valve surgery. The most common anticoagulant used was warfarin. Anticoagulation use was not correlated with CHA2DS2-VASc score or cardioversion. Stroke and mortality were higher among patients not receiving anticoagulation, however, confirmation of this finding in larger randomized studies is warranted. Conclusions: Anticoagulation use is low after CABG and this practice does not appear to be affected by CHA2DS2VASc score or cardioversion. This differs with previously reported provider attitudes towards management of this condition. Stroke and mortality appear to be elevated for patients not receiving anticoagulation but further investigation is required to confirm this observation. Answer: Yes, new-onset postoperative atrial fibrillation (POAF) after coronary artery bypass grafting (CABG) does affect postoperative quality of life. A study investigating the effect of POAF on quality of life after CABG found that patients with POAF had worse scores in 4 of 10 health status domains compared to patients without POAF six months after the procedure. While patients without POAF improved in all subcategories, those with POAF only improved in 7 of 10. POAF was shown to be an independent negative predictor for improved quality of life postoperatively in 7 of 10 subcategories, including both mental and physical component summary scores (PUBMED:22841168).
Instruction: Is adult height related to the risk of having chronic obstructive pulmonary disease? Abstracts: abstract_id: PUBMED:19996351 Is adult height related to the risk of having chronic obstructive pulmonary disease? Background: A number of historical studies have suggested that early life deprivation increases the risk of developing chronic obstructive pulmonary disease (COPD) in adult life, but whether this is still true now that living conditions have improved is not known. The aim of this study is to quantify the current association between adult height (a marker of socioeconomic status in early life) and COPD and to determine how this varies by age. Methods: We obtained data on 1,204,110 people (aged over 35 years old) from The Health Improvement Network--a general practice database. We used a cross-sectional analysis to quantify the odds of having doctor-diagnosed COPD in relation to height using logistic regression. Results: In our dataset, we had information on adult height for 1,025,662 (85%) people, and of these 2.7% had a diagnosis of COPD. The risk of having COPD decreased with each increase in quintile of height--OR 0.90 (95% CI 0.89 to 0.91). There was evidence of effect modification by age group such that this association was strongest in people aged 35 to 49 years (OR 0.86, 95% CI 0.82 to 0.89) and decreased progressively with age. Conclusions: The risk of developing COPD is still strongly associated with adult height. This association is strongest in the youngest age category, suggesting that early life experience will remain an important risk factor for COPD for some time to come and possibly that COPD related to early life deprivation is more severe and tends to present at a younger age. abstract_id: PUBMED:10873136 Relation of adult height to cause-specific and total mortality: a prospective follow-up study of 31,199 middle-aged men and women in Finland. The purpose of this study was to analyze the association of adult height with cause-specific and total mortality. The study included 31,199 men and women aged 25-64 years who participated in a risk factor survey in 1972, 1977, 1982, or 1987 in eastern Finland. The cohorts were followed until the end of 1994. The relation between height and mortality was assessed by using Cox proportional hazard models. The authors found that height was associated inversely with most of the measured risk factors and directly with socioeconomic status. For both genders, height was inversely associated with cardiovascular and total mortality; the age- and birth-cohort-adjusted risk ratios per 5 cm increase in height were 0.89 and 0.91 for men and 0.86 and 0.90 for women, respectively. The inverse association also remained after adjustment for the other known risk factors. For men, an independent inverse association also was found between height and mortality from chronic obstructive pulmonary disease and from violence and accidents. Cancer mortality was not associated with height. Thus, genetic factors, and environmental factors during the fetal period, childhood, and adolescence, which determine adult height, appear to be related to a person's health later in life. abstract_id: PUBMED:22825588 Adult height and the risk of cause-specific death and vascular morbidity in 1 million people: individual participant meta-analysis. Background: The extent to which adult height, a biomarker of the interplay of genetic endowment and early-life experiences, is related to risk of chronic diseases in adulthood is uncertain. Methods: We calculated hazard ratios (HRs) for height, assessed in increments of 6.5 cm, using individual-participant data on 174374 deaths or major non-fatal vascular outcomes recorded among 1085949 people in 121 prospective studies. Results: For people born between 1900 and 1960, mean adult height increased 0.5-1 cm with each successive decade of birth. After adjustment for age, sex, smoking and year of birth, HRs per 6.5 cm greater height were 0.97 (95% confidence interval: 0.96-0.99) for death from any cause, 0.94 (0.93-0.96) for death from vascular causes, 1.04 (1.03-1.06) for death from cancer and 0.92 (0.90-0.94) for death from other causes. Height was negatively associated with death from coronary disease, stroke subtypes, heart failure, stomach and oral cancers, chronic obstructive pulmonary disease, mental disorders, liver disease and external causes. In contrast, height was positively associated with death from ruptured aortic aneurysm, pulmonary embolism, melanoma and cancers of the pancreas, endocrine and nervous systems, ovary, breast, prostate, colorectum, blood and lung. HRs per 6.5 cm greater height ranged from 1.26 (1.12-1.42) for risk of melanoma death to 0.84 (0.80-0.89) for risk of death from chronic obstructive pulmonary disease. HRs were not appreciably altered after further adjustment for adiposity, blood pressure, lipids, inflammation biomarkers, diabetes mellitus, alcohol consumption or socio-economic indicators. Conclusion: Adult height has directionally opposing relationships with risk of death from several different major causes of chronic diseases. abstract_id: PUBMED:7300447 Influence of height on the risk of spontaneous pneumothorax. The risk of primary spontaneous pneumothorax was found to be greatest among persons 25 to 34 years old of each sex and greater for men than women, although a gradient of risk with increasing height was found which seemed to explain much of the male predominance in this condition. Minor physical anomalies were relatively common among these patients, but no specific clinical syndromes could be identified. For secondary spontaneous pneumothorax, the risk increased with age and was greater for men, although this may simply have reflected an increased frequency of underlying chronic pulmonary disease. The gradient of risk with height was much less pronounced for secondary spontaneous pneumothorax. Most of the patients in both pneumothorax groups smoked. abstract_id: PUBMED:27874046 Body height as risk factor for emphysema in COPD. Pulmonary emphysema is a phenotypic component of chronic obstructive pulmonary disease (COPD) which carries substantial morbidity and mortality. We explored the association between emphysema and body height in 726 patients with COPD using computed tomography as the reference diagnostic standard for emphysema. We applied univariate analysis to look for differences between patients with emphysema and those without, and multivariate logistic regression to identify significant predictors of the risk of emphysema. As covariates we included age, sex, body height, body mass index, pack-years of smoking, and forced expiratory volume in one second (FEV1) as percent predicted. The overall prevalence of emphysema was 52%. Emphysemic patients were significantly taller and thinner than non-emphysemic ones, and featured significantly higher pack-years of smoking and lower FEV1 (P &lt; 0.001). The prevalence of emphysema rose linearly by 10-cm increase in body height (r2 = 0.96). In multivariate analysis, the odds of emphysema increased by 5% (95% confidence interval, 3 to 7%) along with one-centimeter increase in body height, and remained unchanged after adjusting for all the potential confounders considered (P &lt; 0.001). The odds of emphysema were not statistically different between males and females. In conclusion, body height is a strong, independent risk factor for emphysema in COPD. abstract_id: PUBMED:30022092 Association between Height and Actinic Keratosis: A Nationwide Population-based Study in South Korea. The association between actinic keratosis (AK) and anthropometric measures has not been investigated. This study aims to evaluate the associations between anthropometric measures and the incidence of AK in South Korea. We analyzed clinical data from individuals aged over 20 years who received a health examination arranged by the national insurance program between 2005 and 2008. Newly diagnosed AK was identified using claims data from baseline to the date of diagnosis or until December 31, 2015. The incidence of AK was highest among the elderly (aged over 60 years) and showed a male bias. The risk of AK increased with greater height. The quintile with the greatest height had an increased risk of AK compared with the quintile with the lowest height (hazard ratio = 1.28, 95% confidence interval: 1.24-1.33) after adjustment for age, sex, income, smoking status, alcohol consumption, hypertension, dyslipidemia, myocardial infarction, congestive heart failure, and chronic obstructive pulmonary disease. This study showed a positive association between the incidence of AK and human height. However, it is unclear whether these findings can be generalized to Koreans who have not received an examination or to populations in other countries. abstract_id: PUBMED:29564554 Validation of the All Patient Refined Diagnosis Related Group (APR-DRG) Risk of Mortality and Severity of Illness Modifiers as a Measure of Perioperative Risk. The All Patient Refined Diagnosis Related Group (APR-DRG) is an inpatient visit classification system that assigns a diagnostic related group, a Risk of Mortality (ROM) subclass and a Severity of Illness (SOI) subclass. While extensively used for cost adjustment, no study has compared the APR-DRG subclass modifiers to the popular Charlson Comorbidity Index as a measure of comorbidity severity in models for perioperative in-hospital mortality. In this study we attempt to validate the use of these subclasses to predict mortality in a cohort of surgical patients. We analyzed all adult (age over 18 years) inpatient non-cardiac surgery at our institution between December 2005 and July 2013. After exclusions, we split the cohort into training and validation sets. We created prediction models of inpatient mortality using the Charlson Comorbidity Index, ROM only, SOI only, and ROM with SOI. Models were compared by receiver-operator characteristic (ROC) curve, area under the ROC curve (AUC), and Brier score. After exclusions, we analyzed 63,681 patient-visits. Overall in-hospital mortality was 1.3%. The median number of ICD-9-CM diagnosis codes was 6 (Q1-Q3 4-10). The median Charlson Comorbidity Index was 0 (Q1-Q3 0-2). When the model was applied to the validation set, the c-statistic for Charlson was 0.865, c-statistic for ROM was 0.975, and for ROM and SOI combined the c-statistic was 0.977. The scaled Brier score for Charlson was 0.044, Brier for ROM only was 0.230, and Brier for ROM and SOI was 0.257. The APR-DRG ROM or SOI subclasses are better predictors than the Charlson Comorbidity Index of in-hospital mortality among surgical patients. abstract_id: PUBMED:1989788 Pulmonary function and cardiovascular risk factor relationships in black and in white young men and women. The CARDIA Study. Pulmonary function is known to be related inversely to incidence of coronary heart disease, congestive heart failure, chronic obstructive lung disease, lung cancer, and death from all causes. Reasons for some of these associations are poorly understood. Relationships between cardiovascular disease risk factors and pulmonary function were examined in 5,115 18- to 30-year-old black and white male and female participants in the study of Coronary Artery Risk Development in Young Adults (CARDIA). Forced expiratory volume in 1 s adjusted for height (FEV1/Ht2) was significantly lower in smokers than nonsmokers and in persons who reported shortness of breath; FEV1/Ht2 was correlated positively with a history of strenuous physical activity, duration of exercise on the treadmill, and high-density lipoprotein cholesterol. It was associated negatively with skinfold thicknesses, serum triglycerides, fasting serum insulin, and the Cook Medley scale of hostility. The association between pulmonary function and heart disease risk may reflect associations with physical fitness, vigor, fatness, and lipid profiles, as well as with cigarette smoking. abstract_id: PUBMED:32615023 Genetic impact of CDHR3 on the adult onset of asthma and COPD. Background: Adult-onset asthma and chronic obstructive pulmonary disease (COPD) are heterogeneous diseases caused by complex gene-environment interactions. A functional single nucleotide polymorphism of cadherin-related family member 3 (CDHR3), known as a receptor of rhinovirus-C, is associated with childhood-onset asthma especially in atopic individuals. Objective: Here, we identified risk factors for adult-onset asthma and COPD, focusing on the impact of the CDHR3 variant in atopic individuals. Methods: We conducted a longitudinal, retrospective, observational cohort study of 1523 healthy adults with baseline examinations at Tsukuba Medical Center Hospital in 2008 and retrospectively identified new-onset, physician-diagnosed asthma or COPD from 2009 to 2018. We assessed risk factors by the Cox regression analysis. The impact of CDHR3 variant rs6967330 was also examined in individuals with pre-existing atopy. Results: Over 10 study years, 103 people developed airway diseases (79 asthma and 24 COPD; 52 females, average onset-age 55 years old, range 38-80). Higher body mass index (BMI) and lower forced expiratory volume in one second/forced vital capacity (FEV1 /FVC) ratio were significant risk factors (BMI: HR 1.072 [95% CI 1.005-1.14], P = .034; FEV1 /FVC ratio: HR 1.091 [1.044-1.14], P = .00011). Restriction to atopic individuals saw the A allele at rs6967330 and lower FEV1 /FVC ratio to associate with adult-onset disease (A allele: HR 2.89 [1.57-5.20], P = .00062; FEV1 /FVC ratio: HR 1.10 [1.04-1.17], P = .0010). Conclusion And Clinical Relevance: Genetic susceptibility to rhinovirus-C infection in atopic individuals is a risk factor for chronic airway diseases even in later life. abstract_id: PUBMED:22896775 Precision in Diagnosing and Classifying COPD: Comparison of Historical Height with Current Height and Arm Span to Predict FEV(1). Background: Measured reductions in lung function, as a result of COPD, use a measured current value and make comparisons to a determined 'normal' value arrived at using a regression equation based upon a patients height. Osteoporosis is a recognised co-morbidity in patients with chronic obstructive pulmonary disease (COPD) and may cause excessive height loss resulting in the 'normal' values and disease progression being under-estimated. Purpose: The aim of the study was to examine the height variation in a cohort of COPD patients and controls over a 7-8 years period and evaluate its impact on estimates of lung function and hence COPD progression. Methods: In 1999-2002 we studied a cohort of primary care patients in Sunderland, UK with and without COPD and reexamined 104 (56 male) during 2007-2009. We calculated FEV(1)% predicted for actual and estimated height (armspan/1.03 and armspan/1.01 in males and females respectively). Results: In 1999-2002 the subjects were aged 62.6 ± 9.4 years, BMI was 26.4± 4.7 kg/m2, predicted FEV(1) was 59.0 ±16.0, and mean actual height was 167.3±8.9cm. The actual height changed significantly (p&lt;0.001) by 2cms over time in both genders. Whilst the overall classifications of the cohort did not change significantly when armspan was used to determine height and hence normal lung volume, individual cases did move to a classification of higher severity. Conclusions: The study suggests that current measured height may underestimate the degree of impairment of FEV(1) and hence progression of COPD. The use of height, derived from armspan, may give a more accurate measure of 'normal' lung volumes and hence the degree of impairment. Answer: Yes, adult height is related to the risk of having chronic obstructive pulmonary disease (COPD). A study found that the risk of developing COPD decreased with each increase in quintile of height, with the association being strongest in people aged 35 to 49 years, suggesting that early life experience remains an important risk factor for COPD (PUBMED:19996351). Another study confirmed an inverse association between height and mortality from chronic obstructive pulmonary disease among men (PUBMED:10873136). Additionally, a meta-analysis indicated that height was negatively associated with death from COPD, with the hazard ratios per 6.5 cm greater height being 0.84 for risk of death from COPD (PUBMED:22825588). Furthermore, research on patients with COPD using computed tomography as the reference diagnostic standard for emphysema found that body height is a strong, independent risk factor for emphysema in COPD (PUBMED:27874046). These findings collectively suggest that taller adult height is associated with a lower risk of COPD and its severe phenotype, emphysema.
Instruction: Patient safety in dental care: A challenging quality issue? Abstracts: abstract_id: PUBMED:31077202 Relationships between dental hygienists' work environment and patient safety culture. Background: Patient safety culture is a core factor in increasing patient safety, is related to the quality of medical service, and can lower the risk of patient safety accidents. However, in dentistry, research has previously focused mostly on reporting of patient safety accidents. Dental professionals' patient safety culture must therefore first be assessed, and related factors analyzed to improve patient safety. Methods: This cross-sectional study completed a survey on 377 dental hygienists working in dental settings. To assess patient safety culture, we used a survey with proven validity and reliability by translating the Hospital Survey on Patient Safety Culture (HSOPS) developed by Agency for Healthcare Research and Quality (AHRQ) into Korean. Response options on all of the items were on 5-point Likert-type scales. SPSS v21 was used for statistical analysis. The relationships between workplace factors and patient safety culture were examined using t-tests and one-way analysis of variance (ANOVA) tests(p &lt; 0.05). Results: The work environment of dental hygienists has a close relationship with patient safety. Dental hygienists working ≥40 h/week in Korea had a significantly lower for patient safety grade than those working &lt; 40 h/week. When the number of patients per day was less than 8, the safety level of patients was significantly higher. And significant differences were found depending on institution type, institution size. Conclusions: In order to establish high-quality care and patient safety system practical policies must be enacted. In particular, assurance in the quality of work environment such as sufficient staffing, appropriate work hours, and enough rest must first be realized before patient safety culture can easily be formed. abstract_id: PUBMED:33722142 Developing a Patient Safety Culture in Primary Dental Care. Patient safety should be at the heart of any healthcare service. Systems, teams, individuals and environments must work in tandem to strive for safety and quality. Research into patient safety in dentistry is still in the early stages. The vast majority of the research in this area has originated from the secondary care and academic fields. Approximately 95% of dental care is provided in the primary care sector. In this paper, we provide an overview of the evidence base for patient safety in dentistry and discuss the following aspects of patient safety: human factors; best practice; the second victim concept; potential for over-regulation and creating a patient safety culture. Through discussion of these concepts, we hope to provide the reader with the necessary tools to develop a patient safety culture in their practice. abstract_id: PUBMED:23627844 Patient safety in dental care: A challenging quality issue? An exploratory cohort study. Objective: Little is known about patient safety in primary oral healthcare. The aim of this study was to describe and analyze patient safety incidents in primary oral health care. Materials And Methods: A random sample of 1000 patient records from 20 dental practices was reviewed retrospectively over 60 months. All adverse events (AEs) were noted: unintended events happening during treatment that resulted or could have resulted in harm to the patient. Results: A total of 46 (95% CI = 33-59) AEs was identified, of which 18 (95% CI = 10-26) were considered preventable. From these, 15 related to treatment, 10 to diagnostics and one to communication. Conclusions: The low incidence of AEs and absence of major harm to patients suggests that primary oral care is safe for patients. However, the low quality of record keeping may imply underestimation. abstract_id: PUBMED:30178553 Optimizing quality and safety of dental materials. This paper discusses aspects of quality and safety improvement of the most commonly used dental restorative materials, the resin-based composites. From a patient's perspective, long-lasting resin-based restorations without complications are important. Recurrent caries and fracture are the most common causes for restoration failures. Proper handling and curing of the composites improve the mechanical properties of the restorations and increase safety by reducing exposure to residual methacrylate monomers. A number of compounds have been introduced in restorative materials to reduce the risk of recurrent caries, even though a real breakthrough has not yet been achieved. It is concluded that simple measures may improve the quality and safety of resin-based composite restorations. abstract_id: PUBMED:24789834 Open wide: looking into the safety culture of dental school clinics. Although dentists perform highly technical procedures in complex environments, patient safety has not received the same focus in dentistry as in medicine. Cultivating a robust patient safety culture is foundational to minimizing patient harm, but little is known about how dental teams view patient safety or the patient safety culture within their practice. As a step toward rectifying that omission, the goals of this study were to benchmark the patient safety culture in three U.S. dental schools, identifying areas for improvement. The extensively validated Medical Office Survey on Patient Safety Culture (MOSOPS), developed by the Agency for Healthcare Research and Quality, was administered to dental faculty, dental hygienists, dental students, and staff at the three schools. Forty-seven percent of the 328 invited individuals completed the survey. The "Teamwork" category received the highest marks and "Patient Care Tracking and Follow-Up" and "Leadership Support for Patient Safety" the lowest. Only 48 percent of the respondents rated systems and processes in place to prevent/catch patient problems as good/excellent. All patient safety dimensions received lower marks than in medical practices. These findings and the inherent risk associated with dental procedures lead to the conclusion that dentistry in general, and academic dental clinics in particular, stands to benefit from an increased focus on patient safety. This first published use of the MOSOPS in a dental clinic setting highlights both clinical and educational priorities for improving the safety of care in dental school clinics. abstract_id: PUBMED:37615483 Patient Safety Culture Analysis in Dental Hospital Using Dental Office Survey on Patient Safety Culture Questionnaire: A Cross-cultural Adaptation and Validation Study. Background: The culture of safety and patient safety management in dental practice lags compared with medical practice. Hospitals strive to pursue quality and safety of healthcare services, with evidence of patient safety incidents in medical practice demonstrating the importance of promoting a safety culture. Measuring patient safety culture is a necessary first step to improving safety culture in clinical settings. As a hospital, dental hospital should improve the quality and culture of patient safety. Thus, our objective was to conduct a cross-cultural adaptation of a US measure of dental office patient safety culture for use in Indonesian dental hospitals. Method: A cross-sectional study was conducted on 200 respondents at a dental hospital in Java, Indonesia. The first stage includes cultural adaptation and translation, followed by developing a questionnaire that was tested through expert agreement and analysis of validity and reliability using Spearman, Cronbach correlation coefficients, and correlation coefficients between classes. The Dental Office Survey on Patient Safety Culture consists of 58 items and 10 dimensions (overall perceptions of patient safety and quality, organizational learning, teamwork, staff training, work pressure and pace, management support for patient safety, office processes and standardization, communication about errors, communication openness, and patient care tracking/follow-up). Results: A total of 200 respondents with a response rate of 61.5% and 77 invalid responses due to incomplete filling, so 123 respondents were analyzed. The validity test results on 38 question items from 10 dimensions, with a sign of 0.05%, 35 items are declared valid. Conclusions: The Indonesian version of the Dental Office Survey on Patient Safety Culture shows good validity and good reliability and has the potential to be used to evaluate patient safety culture in dental hospitals in Indonesia. abstract_id: PUBMED:18381849 Attitudes toward patient safety standards in U.S. dental schools: a pilot study. The objective of this study was to assess the patient safety culture among students, staff, and faculty in seven U.S. dental school clinics when compared to those from a similar study in twenty U.S. hospitals. A survey on patient safety culture developed by the Agency for Healthcare Research and Quality (AHRQ) was used to measure attitudes towards patient safety by anonymous faculty, students, and support staff members who work in the clinics of seven U.S. dental schools. This survey instrument was also administered to staff at twenty U.S. hospitals. In three of the twelve sections of the survey (Overall Perceptions of Safety, Management Support for Patient Safety, and Teamwork Across Units), dental school personnel responses rated above the hospital benchmark results. In Section 2 (Frequency of Adverse Events Reported) and Section 4 (Organizational Learning/ Continuous Improvement), average dental school responses were below those recorded for hospital personnel. The overall score from the twelve sections of the survey indicated that patient safety attitudes of dental school participants were higher than those of their hospital counterparts. abstract_id: PUBMED:30582782 Attitudes of dental undergraduate students towards patient safety in a UK dental school. Objectives: Patient safety is an important issue in health care. In the United Kingdom, wrong site tooth extraction contributes to a significant proportion of adverse or harmful events, coined "never events." Therefore, patient safety within the field of dentistry is of paramount importance. This novel study aims to explore the teaching of patient safety to undergraduate dental students and their current attitudes to the subject. Methods: Focus groups were held at Barts' and The London School of Medicine and Dentistry, QMUL in 2018 to ascertain the views and opinions of thirteen third-year dental undergraduate students. Results: Thematic analysis was performed on verbatim transcripts. Key themes were highlighted. All students could provide a complete definition of patient safety and of "never events," with examples. There was a strong emphasis upon awareness of one's own competence and the need for effective communication to maintain patient safety. Small group teaching and the requirement for standardization of teaching were encouraged. The challenge of incorporation of the patient safety concept into the clinical routine without causing repetition during teaching was highlighted. Conclusion: The results show a positive attitude towards the concept and the teaching of patient safety. All students understood patient safety concepts and techniques used to prevent adverse or harmful events. This study proves that teaching on the subject was thought to be of value. It is crucial that the teaching of patient safety is introduced and built upon within the early years of dental undergraduate training so that its practice becomes second nature. abstract_id: PUBMED:31923373 Assessing the Patient Safety Culture in Dentistry. Objectives: Medical errors are among the leading causes of death within the United States. Studies have shown that patients can be harmed while receiving care, sometimes resulting in permanent injury or, in extreme cases, death. To reduce the risk of patient safety incidents, it is imperative that a robust culture of safety be established. The primary objective of this study was to evaluate the patient safety culture among providers at 4 US dental institutions, comparing the results with their medical counterparts in 2016. Methods: This cross-sectional study uses the Medical Office Survey on Patient Safety Culture that was modified for dentistry and administered at 4 US dental institutions during the 2016 calendar year. All dental team members were invited to complete electronic or paper-based versions of the questionnaire. Results: Among 1,615 invited participants, 656 providers responded (rate, 40.6%). Medical institutions outperformed the dental institutions on 9 of the 10 safety culture dimensions, 6 of the 6 overall quality items, and 8 of the 9 patient safety and quality issues. The surveyed dental institutions reported the strongest average percentage positive scores in organizational learning (85%) and teamwork (79%). Conclusion: These findings suggest that the patient safety culture progressed over time. However, there is still heterogeneity within safety culture among academic dental, private (nonacademic), and medical clinics. Knowledge Transfer Statement: Patient safety is the first dimension of quality improvement. Administering the Medical Office Survey on Patient Safety Culture within dental clinics represents a key measure to understand where improvements can be made with respect to patient care safety. abstract_id: PUBMED:25303591 Measuring patient safety in a UK dental hospital: development of a dental clinical effectiveness dashboard. Patient safety is an important marker of quality for any healthcare organisation. In 2008, the British Government white paper entitled High quality care for all, resulting from a review led by Lord Darzi, identified patient safety as a key component of quality and discussed how it might be measured, analysed and acted upon. National and local clinically curated metrics were suggested, which could be displayed via a 'clinical dashboard'. This paper explains the development of a clinical effectiveness dashboard focused on patient safety in an English dental hospital and how it has helped us identify relevant patient safety issues in secondary dental care. Answer: Patient safety in dental care is indeed a challenging quality issue, as evidenced by the research conducted in various dental settings. A study examining patient safety incidents in primary oral healthcare found a low incidence of adverse events (AEs), suggesting that primary oral care is generally safe for patients. However, the quality of record-keeping was noted to be poor, which could imply an underestimation of the actual number of incidents (PUBMED:23627844). The work environment of dental hygienists has been shown to have a close relationship with patient safety, with factors such as working hours, number of patients per day, institution type, and institution size significantly affecting the safety level of patients (PUBMED:31077202). Additionally, the development of a patient safety culture in primary dental care is still in its early stages, with most research originating from secondary care and academic fields. Approximately 95% of dental care is provided in the primary care sector, highlighting the need for a focus on patient safety culture in these settings (PUBMED:33722142). The safety culture in dental school clinics has been assessed, revealing that teamwork received the highest marks, while patient care tracking and follow-up, as well as leadership support for patient safety, received the lowest. This indicates that academic dental clinics could benefit from an increased focus on patient safety (PUBMED:24789834). Moreover, the attitudes of dental undergraduate students towards patient safety have been found to be positive, with students understanding patient safety concepts and the importance of preventing adverse events (PUBMED:30582782). In summary, while primary oral care is considered safe, there are challenges in ensuring patient safety in dental care. These include the need for better record-keeping, the development of a robust patient safety culture, and the improvement of patient safety attitudes and practices among dental professionals.
Instruction: Does a new videolaryngoscope (glidescope) provide better glottic exposure? Abstracts: abstract_id: PUBMED:16235463 Does a new videolaryngoscope (glidescope) provide better glottic exposure? Background: The GlideScope (Saturn Biomedical Systems Inc, Burnaby, British Columbia, Canada) is a new videolaryngoscope designed as an alternative to the conventional laryngoscope. It was designed to facilitate glottic exposure during tracheal intubation. This study assessed the effectiveness of the GlideScope in providing glottic exposure. Methods: One hundred and three patients requiring general anesthesia for elective surgery were enrolled in this study. Under full monitoring, all patients were given fentanyl, propofol or thiopentone and muscle relaxant for induction. In each patient laryngoscopy was performed first with a Macintosh blade (size 3), then with the GlideScope. The optimal view of the larynx that could be achieved with each instrument was recorded and assessed using the grading scale of Cormack and Lehane (C&amp;L grade). Intubation was performed with the GlideScope. Results: The grading decreased in the majority (93.6%, 44/47) of patients with C&amp;L grade &gt; 1 when using the GlideScope. Of the 22 patients who were considered as subjects of difficult intubation, 20 had an improved laryngoscopic grade with GlideScope. One hundred and one patients were intubated successfully at the first attempt. Conclusions: The laryngeal view was better in the GildeScope group using this grading system. The GlideScope provided a better view of the glottis and is a useful alternative in airway management. abstract_id: PUBMED:21860746 A comparison of the Glidescope® to the McGrath® videolaryngoscope in patients. Background: The Glidescope® videolaryngoscope is a new device for tracheal intubation that provides an improved view of the larynx. This study was performed to compare the Glidescope with the McGrath videolaryngoscope in terms of time to intubation (TTI) and number of attempts. Methods: Patients were randomly allocated to one of two groups, Glidescope or McGrath group, by using computer-generated numbers. Tracheal intubation was attempted by an anesthesiologist with extensive experience using these two devices. The operator recorded ease of visualization of glottic structures based on the classification described by Cormack and Lehane. Number of failures, number of attempts and their duration, total intubation time, and events during the whole procedure were recorded. The duration of one attempt was defined as the time elapsed between picking up the endotracheal tube and verification of tracheal intubation with visualization of three expiratory carbon dioxide waveforms. TTI was defined as the sum of the duration of all intubation attempts (as many as three), excluding preoxygenation procedures. Results: TTI was significantly shorter for the Glidescope® compared to the McGrath® laryngoscope (40.5 vs. 53.3 s, respectively, P &lt; 0.05). However, glottic views obtained at intubation were similar between the two groups. Number of intubation attempts was not significantly different between the two groups (1.03 ± 0.19 vs 1.10 ± 0.32, respectively) (mean ± SD). Conclusions: Study results demonstrated that the Glidescope reduced total intubation time in comparison with the McGrath, in terms of TTI in patients with normal airways. abstract_id: PUBMED:37801823 GlideScope versus C-MAC D-blade videolaryngoscope for double-lumen tube intubation in patients with anticipated difficult airways: A multi-center, prospective, randomized, controlled trial. Study Objective: Videolaryngoscopes are widely used to visualize difficult airways. Our aim was to compare the GlideScope and C-MAC D-blade videolaryngoscopes for double-lumen tube (DLT) intubation in patients with difficult airways. Design: A multi-center, prospective, randomized controlled trial. Setting: Three comprehensive tertiary, high-volume hospitals from 5 December 2020 to 4 November 2021. Patients: We included 348 adult patients with anticipated difficult airways who underwent elective thoracic surgery. Interventions: Patients were randomized into two groups: GlideScope and C-MAC D-blade. Following anesthesia induction, DLT intubation was performed using different videolaryngoscopes. Measurements: The primary outcome was the first-pass success rate of DLT intubation. All other results were recorded as secondary outcomes. Main Results: No significant differences were observed in the first-pass success rate of DLT intubation between the GlideScope and C-MAC D-blade (86.21% and 89.66%, respectively; P = 0.323). However, compared with the GlideScope, the C-MAC D-blade provided a lower Cormack-Lehane grade (P &lt; 0.001), lower rates of external laryngeal pressure (48 vs. 15, P &lt; 0.001), and postprocedure sore throat (26 vs. 8, P &lt; 0.001). The numerical rating score for difficulty of videolaryngoscope insertion into the oral cavity, delivery to the glottis, and intubation into the main bronchus were significantly lower when using the C-MAC D-blade (P &lt; 0.001). Moreover, the duration of DLT intubation was shorter in the C-MAC D-blade group (81 s [70-97 s] vs. 95 s [78-115 s], P &lt; 0.001). In each group, two patients underwent fiberoptic intubation after three attempts with a videolaryngoscope failed. Conclusions: In patients with difficult airways, the GlideScope and C-MAC D-blade provided a similar success rate on the first DLT intubation attempt; however, the C-MAC D-blade offers a better glottic view, easier and faster intubation, and lower incidence of sore throat. abstract_id: PUBMED:26028881 Fiberscope Assisted Videolaryngoscope Intubation in the Surgical Treatment of TMJ Ankylosis. Temporomandibular ankylosis is characterized by the formation of a bony mass which replaces normal temporomandibular joint (TMJ) articulation. Anaesthetic management in these patients requires expertise and dependable intubation technique that allows successful intubation due to anticipated difficulty in accessing the airway. A novel technique of endotracheal intubation is used for the successful airway management during the surgical treatment in patients with TMJ ankylosis with the assistance of fiberscope and GlideScope(®) videolaryngoscope. GlideScope(®) videolaryngoscope is a recently introduced system for tracheal intubation that has a dedicated video camera encased into a laryngoscope blade and provides better panoramic view than the conventional laryngoscopes. This technique avoids complications such as trauma to soft tissue structures surrounding the glottis during the passage of the tube over the fiberscope. It gives a clear view of the tube and its cuff position during intubation. It also abbreviates the time required for intubation which is a crucial determinant in this subset of patients owing to the difficult airway associated with paediatric age group. abstract_id: PUBMED:19882199 Comparison of a single-use GlideScope Cobalt videolaryngoscope with a conventional GlideScope for orotracheal intubation. Background: This study was conceived to compare the single-use GlideScope Cobalt videolaryngoscope with the conventional GlideScope videolaryngoscope for orotracheal intubation, as judged by time to intubation (TTI) and ease of intubation. Methods: One hundred patients with normal-appearing airways requiring orotracheal intubation for elective surgery were randomly allocated to have their tracheas intubated by a heterogeneous group of operators with the Cobalt GlideScope or the conventional GlideScope. TTI was assessed by a blinded observer. Operators were blinded until the start of laryngoscopy. A visual analogue scale (VAS) assessed the ease of intubation. The number of intubation attempts, number of failures, glottic grades, and fogging of the video screen were recorded. Results: There was no difference between the median TTI of the GlideScope Cobalt and the conventional GlideScope (40.1 sec, interquartile range [IQR] 34.1, 51.3 vs 39 sec, IQR 32.6, 48.1, respectively; P = 0.75). The ease of intubation was similar between the two devices (median Cobalt VAS: 16 mm, IQR 10.8, 27.3, vs median conventional VAS: 12.5 mm, IQR 10, 20.5, respectively; P = 0.12). There were no significant differences between the two devices with respect to glottic exposure, intubation attempts, failures, or video screen fogging. Conclusions: The GlideScope Cobalt has similar performance characteristics compared with the conventional GlideScope videolaryngoscope when used for orotracheal intubation. The two devices can likely be used interchangeably. (ClinicalTrials.gov number, NCT00459797.). abstract_id: PUBMED:12826557 Use of a new videolaryngoscope (GlideScope) in the management of a difficult airway. Purpose: To describe the clinical use of a new videolaryngoscope in a patient who had repeatedly been difficult or impossible to intubate by conventional direct laryngoscopy. This device provided excellent glottic visualization and permitted easy endotracheal intubation. Clinical Features: A 74-yr-old male presenting for repeat elective surgery had a history of failed intubations by direct laryngoscopy and pulmonary aspiration with a laryngeal mask airway. He refused awake flexible fibreoptic intubation. After the induction of general anesthesia, laryngoscopy was performed using a GlideScope. This provided complete glottic exposure and easy endotracheal intubation. Conclusion: This new videolaryngoscope provided excellent laryngeal exposure in a patient whom multiple experienced anesthesiologists had repeatedly found to be difficult or impossible to intubate using direct laryngoscopy. The clinical role of this device awaits confirmation in a large series of difficult airways. abstract_id: PUBMED:22144924 Use of Airtraq, C-Mac, and Glidescope laryngoscope is better than Macintosh in novice medical students' hands: A manikin study. Background And Aim: Obtaining patent airway is a crucial task for many physicians. When opportunities to practice intubations on patients are really limited, skill gaining methods are needed. We conducted a study among novice 6(th) year medical students to assess their ability to intubate the trachea in normal airway in manikin using four airway tools. Setting And Design: Prospective, cohort study conducted at simulation center of university-based, tertiary care hospital. Methods: FIFTY MEDICAL STUDENTS PERFORMED EITHER ORAL OR NASAL TRACHEAL INTUBATION USING THE FOLLOWING FOUR INTUBATING TOOLS: C-Mac videolaryngoscope, Glidescope, and Airtraq in comparison with regular Macintosh laryngoscope. Intubation time, visualization of glottic opening, ease of intubation, satisfaction of participants, incidence of dental trauma, and the need for optimization manoeuvres' use among different airway tools were recorded. Results: In oral intubation, Airtraq was better than others in regard to intubation time, glottic opening, ease of intubation, and the need for external laryngeal pressure application, followed by Glidescope, C-Mac, and finally Macintosh laryngoscope (P&lt;0.001). Airtraq and Glidescope associated with less dental trauma than C-Mac and Macintosh. In nasal route, fastest intubation time was reported with Airtraq followed by Glidescope, C-Mac, and lastly Macintosh. Airtraq, Glidescope, and C-Mac were similar to each other and better than the Macintosh in regard to ease of intubation, satisfaction, and number of attempts (P≤0.008). Conclusions: New devices like Airtraq, Glidescope, and C-Mac are better than the regular Macintosh when used by novice medical students for oral and nasal intubation on manikin. abstract_id: PUBMED:25885612 Orotracheal intubation of morbidly obese patients, comparison of GlideScope(®) video laryngoscope and the LMA CTrach™ with direct laryngoscopy. Background: Morbidly obese patients are at increased risk of difficult mask ventilation and intubation as well as increased risk of hypoxemia during tracheal intubation. Recently, new video-assisted intubation devices have been developed. The GlideScope(®) videolaryngoscope and LMA CTrach™ (CT) allows continuous video-endoscopy of the tracheal intubation procedure. Objective: this study is to determine whether the GlideScope(®) videolaryngoscope (GVL) and the LMA CTrach™ (CT) provide the best airway management, measured primarily in intubation difficulty scale (IDS) scores, time and numbers of intubation attempts, and improvement in the intubation success rate of morbidly obese patients when compared with the direct Macintosh laryngoscope (DL). Materials And Methods: After Ethics' Committee approval, 90 morbidly obese patients (BMI &gt; 35 kg/m(2)) scheduled for general, gynecological, and bariatric surgery were included in this prospective study. Patients were randomly assigned in three groups: tracheal intubation using direct laryngoscopy (DL), GlideScope(®) videolaryngoscope (GVL) or the LMA CTrach™ (CT). Characteristics and consequences of airway management were evaluated. The primary outcome was the intubation difficulty scale score (IDS), Secondary outcomes were theintubation time, overall success rate, number of attempts, Cormack-Lehane grade, subjective difficulty of intubation, desaturation and upper airway morbidity. Results: Difficulty in facemask ventilation was similar in the three groups. IDS scores were significantly lower with GVL and CT than with DL. The mean TTI was 14 s faster in patients intubated with the GVL (86 s, IQR: 68-115) compared with DL (100 s, IQR; 80-150), and was 34 s faster when compared with CT (120 s, IQR; 95-180). The success rate of tracheal intubation was lower with the DL (80%) compared with the GVL (100%) or the CT (100%). Six cases of failed intubation occurred in group DL, four patients from the six patients were intubated successfully with GVL, and two patients were intubated with the CT. Both the GVL and the CTimproved the Cormack and Lehane view obtained at laryngoscopy, compared with the DL. Significantly high percent of patients in DL (43%) and CT (27%) required optimization maneuvers (external laryngeal pressure) compared with GVL (0%). In the CT group, 30% of the patients required laryngeal mask manipulation (for view optimization) compared with (0%) in GVL and CT groups. Conclusion: The GlideScope(®) videolaryngoscope and the LMA CTrach™ reduced the difficulty, improved laryngoscopic views and overall success rate of tracheal Intubationto a similar extent compared with the Macintosh laryngoscope in morbidly obese patients. The GVL improved intubation time for tracheal intubation compared with the CT and DL but no patient became hypoxic with CT because of prolonged intubation time. abstract_id: PUBMED:32434470 GlideScope® versus C-MAC®(D) videolaryngoscope versus Macintosh laryngoscope for double lumen endotracheal intubation in patients with predicted normal airways: a randomized, controlled, prospective trial. Background: The double lumen endotracheal tube (DLT) is the most widely-used device for single lung ventilation in current thoracic anesthesia practice. In recent years, the routine application of the videolaryngoscope for single lumen endotracheal intubation has increased; nevertheless there are few studies of the use of the videolaryngoscope for DLT. We wondered whether there were benefits to using the videolaryngoscope for DLT placement in patients with predicted normal airways. Therefore, this study was designed to compare the performances of the GlideScope®, the C-MAC®(D) videolaryngoscope and the Macintosh laryngoscope in DLT intubation. Methods: This was a randomized, controlled, prospective study. We randomly allocated 90 adult patients with predicted normal airways into three groups. All patients underwent routine anesthesia using different laryngoscopes according to group allocation. We compared DLT insertion times, first-pass success rates, numerical rating scales (NRS) of DLT delivery and DLT insertion, Cormack-Lehane degrees (C/L), hemodynamic changes and incidences of intubation complications. All outcomes were analyzed using SPSS13.0. Results: Compared with the GlideScope, the Macintosh gave shorter times for DLT insertion (median: 96 (IQR: 51 [min-max: 62-376] s vs 73 (26 [48-419] s, p = 0.003); however, there was no difference between the Macintosh and C-MAC(D) (p = 0.610). The Macintosh had a significantly higher successful first attempt rate than did the GlideScope or C-MAC(D) (p = 0.001, p = 0.028, respectively). NRS of DLT delivery and insertion were significantly lower in the Macintosh than in the others (p &lt; 0.001). However, the C/L degree in the Macintosh was significantly higher than in the others (p &lt; 0.001). The incidences of oral bleeding, hoarseness, sore throat and dental trauma were low in all groups (p &gt; 0.05). There were no significant differences in DLT misplacement, fiberoptic time or hemodynamic changes among the groups. Conclusions: Compared with the Macintosh laryngoscope, the GlideScope® and C-MAC®(D) videolaryngoscopes may not be recommended as the first choice for routine DLT intubation in patients with predicted normal airways. Trial Registration: The study was prospectively registered at the Chinese Clinical Trial Registry (no. ChiCTR1900025718); principal investigator: Z.L.H.; date of registration: September 6, 2019. abstract_id: PUBMED:34405354 Comparison of the strength of various disposable videolaryngoscope blades. Purpose: Breaking of disposable blades during emergency endotracheal intubation has been reported. Breakage can cause serious injury and foreign body ingestion. We aimed to measure and analyze the strength characteristics of different disposable videolaryngoscope blades with the application of an upward-lifting force. Methods: We measured the strength of four disposable videolaryngoscope blades (C-Mac® S Video laryngoscope MAC #3, Glidescope GVL® 3 stat, Pentax AWS® PBlade TL type, and King Vision® aBlade #3) using the fracture test. The strength of 12 samples of each type of disposable videolaryngoscope blade was measured using an Instron 5,966 tensile tester by applying an upward-lifting force. Results: After the fracture test using C-Mac, Glidescope GVL, Pentax AWS, and King Vision, the number of deformed blades were 0, 12, 3, and 7, respectively, and the number of broken blades were 12, 0, 9, and 5, respectively. The mean (standard deviation) maximum force strengths of Pentax AWS, C-Mac, King Vision, and Glidescope GVL blades were 408.4 (27.4) N, 325.8 (26.5) N, 291.8 (39.3) N, and 262.7 (3.8) N, respectively (P &lt; 0.001). Conclusion: Clinicians should be aware of the varied strength characteristics of the four types of disposable videolaryngoscope blades when they are used in endotracheal intubation. Answer: Yes, the GlideScope videolaryngoscope has been shown to provide better glottic exposure. A study involving 103 patients requiring general anesthesia for elective surgery found that the GlideScope provided better glottic views when compared to a conventional Macintosh laryngoscope. The grading of glottic exposure, using the Cormack and Lehane (C&L) grade, decreased in the majority of patients (93.6%) when using the GlideScope, indicating improved visibility of the glottis. This was particularly notable in patients who were considered difficult to intubate, with 20 out of 22 such patients showing an improved laryngoscopic grade with the GlideScope. Additionally, 101 patients were successfully intubated on the first attempt with the GlideScope (PUBMED:16235463). Furthermore, when compared to the McGrath videolaryngoscope, the GlideScope demonstrated a significantly shorter time to intubation (TTI) in patients with normal airways, although the number of intubation attempts was not significantly different between the two devices (PUBMED:21860746). In a multi-center, prospective, randomized controlled trial comparing the GlideScope with the C-MAC D-blade videolaryngoscope for double-lumen tube intubation in patients with anticipated difficult airways, the GlideScope and C-MAC D-blade provided similar first-pass success rates. However, the C-MAC D-blade offered additional advantages such as a better glottic view, easier and faster intubation, and a lower incidence of sore throat (PUBMED:37801823). These findings suggest that the GlideScope videolaryngoscope is a useful alternative in airway management, providing better glottic exposure and potentially improving the ease and success of intubation in various clinical settings.
Instruction: Is the evidence of the Supreme Court Ruling of gefitinib litigation in Japan scientific? Abstracts: abstract_id: PUBMED:25420764 Is the evidence of the Supreme Court Ruling of gefitinib litigation in Japan scientific? Background: Plaintiffs of the gefitinib (Iressa) lawsuits in Japan started in 2004 were defeated in the Supreme Court in 2013. The Court judged it was not possible to foresee the outbreak of deaths caused by interstitial pneumonia due to gefitinib from death cases before approval of this drug. Objective: We attempted to verify validity of this judgment. Methods: We estimated the 95% confidence interval (CI) of the proportion of onset and death cases among 23 onset and 13 death cases occurring from "within 1 week" to "within 4 weeks" from clinical data before approval of this drug using data admitted to the Court. Results: For death cases, all of the upper limits of the 95% CI exceeded 50% within 1-4 weeks. This fact suggested that the cases of acute interstitial pneumonia were included in the clinical trial before the approval of gefitinib. Conclusion: It was possible to foresee the outbreak of death cases after drug approval. This conclusion showed the Court's ruling was not reasonable and was unscientific. abstract_id: PUBMED:25214906 Lessons from gefitinib-induced interstitial lung disease in Japan: Problems in approval, pharmacovigilance, and regulatory decision-making procedures. Objective: The objective of this study was to identify problems in the approval, pharmacovigilance, and post-approval regulatory decision-making procedures involving gefitinib and to propose countermeasures to prevent further drug-induced suffering in Japan in the future. Methods: We comprehensively reviewed reports regarding gefitinib published during the period from 2000 to 2006 by regulatory agencies, the manufacturer of the gefitinib-containing drug, cancer clinical study groups, and a scientific society. Results: We identified the following major problems in the approval, pharmacovigilance, and regulatory decision-making procedures: 1) the results of animal experiments and pre-marketing clinical trials, and reports of adverse drug reactions from other countries were not properly reflected in the label; 2) indications for the drug were expanded without strict evaluation of the external validity of pre-marketing clinical trials; and 3) despite many serious cases of interstitial lung disease (ILD) being spontaneously reported, well-designed post-marketing surveillance was not immediately performed. Conclusions: We propose a mandatory total registry of all drug users and surveillance (i.e. a prospective outcome study) as one of the rational solutions for preventing further drug-induced suffering in Japan. abstract_id: PUBMED:17301554 International clinical trials for a medical oncologist in Japan It is very important to participate into international clinical trials for cancer treatment not only for contribution to worldwide development of new anticancer agents but also for escape from social isolation out of new drug development. Here, we discussed about international clinical trials in Japan as an aspect of medical oncologist in medical school. And also, according to only one our experience of international clinical trial, IDEAL 1, which was a randomized phase II study of gefitinib for patients with previously treated metastatic non-small cell lung cancer, I tried to consider about current challenges of study investigators for cancer treatment, study conducting institution, pharmaceutical company, and regulatory agent in Japan. abstract_id: PUBMED:18463953 Overview of chemoradiation clinical trials for locally advanced non-small cell lung cancer in Japan. The standard of care for unresectable stage III non-small cell lung cancer (NSCLC) is combined-modality therapy with both chemotherapy and thoracic radiation therapy (TRT). A phase III trial by the West Japan Lung Cancer Group revealed that the combination of mitomycin, vindesine, and cisplatin (MVP) with concurrent TRT yielded a median survival time of 16.6 months and a 5-year survival rate of 16% in patients with unresectable stage III NSCLC. Although evidence indicates that concurrent chemotherapy and TRT (chemoradiation) increases survival to a moderately greater extent than sequential therapeutic approaches, the optimal strategies for such concurrent treatment remain to be defined, and differ between full-dose systemic and low-dose radio-enhancing protocols. Two phase III trials have been initiated in Japan to address these issues and they have recently reported preliminary data. Early results of the Okayama Lung Cancer Study Group (OLCSG) trial, comparing chemoradiation based on divided docetaxel and cisplatin chemotherapy with MVP-based chemoradiation, have been reported. The West Japan Oncology Group (WJOG) is comparing the efficacy and toxicity of TRT and concurrent chemotherapy with either carboplatin-paclitaxel or carboplatin-irinotecan, followed by full-dose consolidation chemotherapy, with the efficacy and toxicity of MVP-based chemoradiation. Several phase I/II studies to test the optimal use of new agents such as S-1 (an oral anticancer drug combining tegafur, 5-chloro-2, 4-dihydroxypyridine, and potassium oxonate) and gefitinib (an inhibitor of the tyrosine kinase activity of the epidermal growth factor receptor) are also ongoing. In addition, radiation dose intensification with three-dimensional planning approaches is currently under evaluation. A phase I clinical trial by WJOG to establish, prospectively, the maximum tolerated dose of three-dimensional hyperfractionated radiotherapy with concurrent weekly chemotherapy (carboplatin-paclitaxel) is thus currently under way. This overview of ongoing trials highlights new directions in the treatment of locally advanced NSCLC. abstract_id: PUBMED:11681237 Issues in cancer treatments--EBM, individuation, and standardization This review emphasizes the following conclusions. Every doctor in Japan is required to recognize that individuated treatments of cancer patients are also necessary to be based on the concept of EBM (evidence based medicine). To do so, it is essential for the doctors to conduct or join clinical trials in Japan more and more. This gives us more evidences applicable to our patients, and deeper analysis for foreign clinical trials. In the 90's, the concept of EBM was introduced to the Japanese medical profession. More recently, advances in molecular biology provided possibilities to customize treatments of individual cancer patients. Some of the new approaches became drugs already approved, such as Herceptin (Trastuzumab), Iressa (ZD1839), and Gleevec (ST1571). This trend, however, may justify even experimental treatments without enough evidence, because some Japanese doctors still rarely understand the concept of EBM. This review begins with mention of the origin of EBM and the methodology of EBM. Then topics in the "individuation" are discussed. Lastly, the need for "standardized-individuated-treatments" is argued. abstract_id: PUBMED:36369880 Changes in survival of patients with non-small cell lung cancer in Japan: An interrupted time series study. The epidermal growth factor receptor tyrosine kinase inhibitors (EGFR-TKIs) gefitinib and erlotinib were approved for metastatic or relapsed non-small cell lung cancer (NSCLC) in Japan in 2002 and 2007, respectively. EGFR mutation testing was also approved in 2007. Although clinical trials showed efficacy in NSCLC patients harboring activating EGFR mutations, these effects have rarely been reported in real-world practice. We evaluated changes in survival in NSCLC patients following introduction of these agents and EGFR mutation testing by extracting patients diagnosed with NSCLC from 1993 through 2011 from six prefectural population-based cancer registries in Japan. Relative survival (RS) was calculated by sex, histological subtype, and cancer stage. We conducted interrupted time series analysis to assess survival changes following introduction of EGFR-TKIs and EGFR mutation testing. 120,068 patients with NSCLC were analyzed. One- and three-year RS gradually increased in overall NSCLC for men and women. For adenocarcinoma, among men, slopes of 1- and 3-year RS increased steeply in patients diagnosed from 2007 through 2011; among women, significant level increases were seen in 1-year RS in patients diagnosed in 2002 (4.55% [95% confidence interval: 1.76-7.33]) and 2007 (3.40% [1.27-5.52]). These significant level increases were particularly obvious in women with adenocarcinoma at an advanced stage. Our results suggest that recent improvements in survival in men and women with adenocarcinoma are due at least partly to introduction of EGFR-TKIs into real-world practice, and to prescription based on appropriate patient selection following introduction of EGFR mutation testing into real-world practice in Japan. abstract_id: PUBMED:17496441 Differences in the therapeutic strategies for lung cancer between Europe/United States and Japan Lung cancer is a leading cause of malignancy-related death worldwide. Recently, therapeutic strategies for lung cancer have dramatically progressed, and attempts have been made to standardize the therapy for lung cancer. Because the disease prevalence and the susceptibility to the drugs differ with the circumstances, such as lifestyle, and genetic background, and ethnicity, the standard therapeutic strategies for lung cancer may differ with the individual country or region. Actually, the EGFR tyrosine kinase inhibitor, gefitinib, is well known to be significantly effective for oriental people, including Japanese, female, adenocarcinoma and never-smoker, suggesting an inherited difference. Here we describe not only the standard therapeutic strategies but also the difference in therapeutic strategies for lung cancer between Europe/United States and Japan. abstract_id: PUBMED:18794545 EGFR mutations predict survival benefit from gefitinib in patients with advanced lung adenocarcinoma: a historical comparison of patients treated before and after gefitinib approval in Japan. Purpose: This study evaluated whether the presence of epidermal growth factor receptor (EGFR) mutations is a predictive marker for survival benefit from gefitinib and/or a prognostic marker in patients with advanced lung adenocarcinoma. Patients And Methods: Overall survival (OS) was compared between patients with advanced lung adenocarcinoma who began first-line systemic therapy before and after gefitinib approval in Japan (January 1999 to July 2001 and July 2002 to December 2004, respectively). Deletional mutations in exon 19 or the L858R mutation in exon 21 of EGFR were evaluated using high-resolution melting analysis. Results: EGFR mutations were detected in 136 (41%) of the 330 patients included in this study. OS was significantly longer among the EGFR-mutant patients treated after gefitinib approval compared with the OS of patients treated before gefitinib approval (median survival time [MST], 27.2 v 13.6 months, respectively; P &lt; .001), whereas no significant survival improvement was observed in patients without EGFR mutations (MST, 13.2 v 10.4 months, respectively; P = .13). A significant interaction between the presence of EGFR mutations and a survival improvement was seen (P = .045). Among patients treated before gefitinib approval, those with EGFR mutations lived longer than those without EGFR mutations (MST, 13.6 v 10.4 months, respectively; P = .034). The response rates to first-line cytotoxic chemotherapy were not significantly different between patients with and without EGFR mutations (31% v 28%, respectively; P = .50). Conclusion: EGFR mutations significantly predict both a survival benefit from gefitinib and a favorable prognosis in patients with advanced lung adenocarcinoma. abstract_id: PUBMED:19755624 Evidence-based treatment options for the management of skin toxicities associated with epidermal growth factor receptor inhibitors. Objective: To compile evidence from randomized controlled trials, case series, and case reports to identify the effectiveness of various therapeutic agents for the treatment and prevention of dermatologic effects secondary to epidermal growth factor receptor inhibitor (EGFRI) administration. Data Sources: Literature was accessed through PubMed (2002-May 2009) and Scopus (2002-March 2009), using the terms epidermal growth factor receptor inhibitor, cetuximab, erlotinib, gefitinib, panitumumab, management, skin toxicity, and cutaneous effects. In addition, reference citations from publications identified were reviewed. Study Selection And Data Extraction: An evaluation of published clinical trials, case series, case reports, and clinical management guidelines that studied the treatment options for EGFRI-induced skin toxicities was performed. Studies that reported dosing regimens and treatment outcomes were included in this review. Data Synthesis: Management of EGFRI-induced skin toxicities has been documented in 2 randomized controlled trials, 3 case series, and 10 case reports in a total of 156 patients. There is strong evidence for the use of topical antibiotics to manage EGFRI skin toxicities. Controversy exists regarding the use of corticosteroids and retinoids for the management of EGFRI-induced rash. Several variables are noted among the number of case series and case reports, such as follow-up duration of the intervention, making any comparisons difficult. Conclusions: Overall, antibiotics are the most common treatment option and have the potential to reduce the severity of skin rash. Well-designed clinical studies with proper recording of relevant data on the management of EGFRI-induced dermatologic effects are needed to properly evaluate the use of various therapeutic agents. Apart from randomized controlled trials, comprehensive case reports are also important for clinical evaluation on a case-by-case basis. abstract_id: PUBMED:18976796 The predictive value of epidermal growth factor receptor tests in patients with pulmonary adenocarcinoma: review of current "best evidence" with meta-analysis. Epidermal growth factor receptor signaling pathway plays an important role in pulmonary adenocarcinoma biology. Targeted therapy with tyrosine kinase inhibitors like gefitinib and erlotinib are being used in selected patients with variable response rates. Several RCT and other studies have evaluated the value of various tests such as immunohistochemistry, polymerase chain reaction, and fluorescent in situ hybridization for epidermal growth factor receptor detection. The clinical validity and applicability of these tests remain controversial. Evidence-based pathology promotes the use of systematic review of the literature and meta-analysis rather than subjective appraisal of the literature. We performed a systematic review of the literature to identify the "best evidence" regarding the use of these tests. The data were analyzed using Comprehensive meta-analysis software (Biostat, Inc, Englewood, NJ). Most of the information regarding epidermal growth factor receptor tests has been published in retrospective case series with few double-blind and prospective RCT. Estimated positive predictive values of immunohistochemistry, polymerase chain reaction, and fluorescent in situ hybridization range from 6.5% to 82%%, 7% to 100%, and 11% to 89%, respectively. Meta-analysis of nearly 5000 cases in the literature estimates that all 3 tests significantly predict response to gefitinib in patients with lung cancer. It shows lack of heterogeneity within the study results, although the current best evidence is limited by variations in study methodologies, patient ethnicity, test interpretation criteria, and variable definitions of treatment response. There is only one study evaluating the value of epidermal growth factor receptor tests in predicting response to erlotinib. Further studies are needed to clarify the predictive value of epidermal growth factor receptor tests in patients with pulmonary adenocarcinoma. Answer: The evidence of the Supreme Court Ruling of gefitinib litigation in Japan has been challenged on scientific grounds. A study attempted to verify the validity of the Supreme Court's judgment that it was not possible to foresee the outbreak of deaths caused by interstitial pneumonia due to gefitinib from death cases before the approval of the drug. The study estimated the 95% confidence interval (CI) of the proportion of onset and death cases among patients from clinical data before the approval of gefitinib. The results showed that for death cases, all of the upper limits of the 95% CI exceeded 50% within 1-4 weeks, suggesting that cases of acute interstitial pneumonia were included in the clinical trial before the approval of gefitinib. The conclusion was that it was possible to foresee the outbreak of death cases after drug approval, indicating that the Court's ruling was not reasonable and was unscientific (PUBMED:25420764). Additionally, another study identified major problems in the approval, pharmacovigilance, and regulatory decision-making procedures for gefitinib. It highlighted that the results of animal experiments and pre-marketing clinical trials, as well as reports of adverse drug reactions from other countries, were not properly reflected in the drug's label. The study also pointed out that indications for the drug were expanded without strict evaluation of the external validity of pre-marketing clinical trials, and despite many serious cases of interstitial lung disease (ILD) being reported, well-designed post-marketing surveillance was not immediately performed. These findings suggest that there were significant shortcomings in the scientific and regulatory processes surrounding the approval and monitoring of gefitinib in Japan (PUBMED:25214906). In light of these studies, it appears that the evidence used in the Supreme Court Ruling of gefitinib litigation in Japan may not have been fully grounded in scientific analysis and consideration of the available clinical data.
Instruction: Are waist circumference and body mass index independently associated with cardiovascular disease risk in Chinese adults? Abstracts: abstract_id: PUBMED:24698481 Waist-to-height ratio is an appropriate index for identifying cardiometabolic risk in Chinese individuals with normal body mass index and waist circumference. Background: The waist-to-height ratio (WHtR), a novel index that has been reported to correlate more strongly than body mass index (BMI) and waist circumference (WC) with cardiometabolic risk factors, has not been studied in Chinese individuals with normal body mass index and waist circumference. The present study compared the predictive power of WHtR with those of BMI and WC for such factors in non-obese Chinese, and to define optimal cutoffs of WHtR in this population. Methods: A total of 2137 subjects aged 40-75 years were recruited. Three anthropometric indices (WHtR, BMI, and WC) were compared and the optimal cutoffs of WHtR were identified by receiver operating characteristic curve (ROC) analysis. WHtR was divided into four quartiles (WHtR-Q), and multiple linear regression analyses were used to calculate the relationship between WHtR-Q and clinical biochemical index. Results: Waist-to-height ratio was more efficient than WC to identify cardiometabolic risk factors in both genders, but was only superior to BMI in females. WHtR-Q was positively correlated with fasting plasma glucose, 2-h postprandial blood glucose, and systolic blood pressure, and negatively connected with high density lipoprotein cholesterol in both genders after controlling for age, current smoking and drinking, moderate-intensity physical activity, daily sedentary time, daily screen time and menopause (only for females). The optimal cutoffs of WHtR for detecting cardiometabolic risk factors were 0.47 in males and 0.51 in females. Conclusion: Waist-to-height ratio might be an effective index to identify cardiometabolic risk factors in Chinese with normal BMI and WC, particularly in females. abstract_id: PUBMED:21351667 Obesity, body mass index, waist circumference and mortality Obesity belongs to civilization diseases with its incidence and prevalence increasing all over the world. Complications of obesity represent a serious socioeconomic problem. Obesity is a part of the metabolic syndrome and an independent mortality risk factor in all age categories, however, this relation is twice as strong in individuals under fifty years of age. Slight overweight in old age is associated with reduced total mortality risk. Simple evaluation of obesity via calculation of body mass index and measurement of waist circumference can be carried out in every medical consulting room. abstract_id: PUBMED:23165487 Differentiating the associations of waist circumference and body mass index with cardiovascular disease risk in a Chinese population. It is not known which obesity index best explains variations in cardiovascular disease risk across populations. The objective of this study was to differentiate the associations of waist circumference (WC) and body mass index (BMI) with cardiovascular disease risk in a Chinese population. Cardiovascular risk factors, WC, and BMI were measured in 13 817 adults aged more than 18 years in Shanghai. Higher WC tertiles were associated with higher blood pressure and higher cholesterol, triacylglycerol, and glucose concentrations within each tertile of BMI and vice versa. The odds ratios (ORs) of hypertension, dyslipidemia, and metabolic syndrome increased with successive WC (or BMI) tertiles after adjustment for BMI (or WC) and several covariates. However, BMI tertiles were not associated with the ORs of diabetes after adjustment for WC. WC may be better than BMI as an alternative measure of body fatness or fat distribution for predicting diabetic risks in Chinese adults. abstract_id: PUBMED:32062912 Association of body mass index and waist circumference with major chronic diseases in Chinese adults Objective: To examine the association of BMI with major chronic diseases morbidity and all-cause mortality in Chinese adults. Methods: This study is based on China Kadoorie Biobank. Anthropometric indexes were objectively measured at the baseline survey during 2004-2008. After excluding participants with heart disease, stroke, cancer, COPD and diabetes, 428 113 participants aged 30 to 79 years were included in the analysis. Cox regression models were used to investigate the associations of BMI and waist circumference with incidence of major chronic diseases (including cardiovascular disease, cancer, COPD, and type 2 diabetes) and all-cause mortality. Results: Over an average of 10 years, 131 454 participants developed any one of major chronic diseases. A total of 26 892 all-cause deaths were reported. The risk of major chronic diseases increased with BMI. Compared with normal BMI (18.5-24.0 kg/m(2)), the HR (95%CI) of overweight (BMI 24.0-28.0 kg/m(2)) and obesity (BMI≥28.0 kg/m(2)) were 1.26 (95%CI: 1.24-1.27) and 1.59 (95%CI: 1.57-1.62) respectively. Underweight and obesity were both associated with risk of all-cause mortality. Waist circumference was positively associated with risk of major chronic diseases and all-cause mortality. According to recommended cut-off points of BMI and waist circumference for Chinese adults, maintaining a healthy body weight would prevent 12% incident cases of major chronic diseases. Conclusion: General and central obesity were risk factors for major chronic disease among Chinese adults. abstract_id: PUBMED:37700278 Association of waist-calf circumference ratio, waist circumference, calf circumference, and body mass index with all-cause and cause-specific mortality in older adults: a cohort study. Background: Waist circumference (WC), calf circumference (CC), and body mass index (BMI) have been independently linked to mortality. However, it's not yet clear how the waist-calf circumference ratio (WCR) relates to mortality. This study aims to investigate the relationship between WCR, WC, CC, and BMI with all-cause and cause-specific mortality in older adults. Methods: In the 2014 Chinese Longitudinal Healthy Longevity Survey, 4627 participants aged 65 years and older were included, and they were subsequently followed up in 2018. Cox proportional hazards models were utilized to estimate hazard ratios (HRs) and 95% confidence intervals (CIs) for all-cause and cause-specific mortality, based on WCR, WC, CC, and BMI. Results: During a median follow-up of 3.4 years, 1671 deaths (36.1%) occurred. Compared to the second quartile of WCR, the highest quartile had a higher risk of mortality from all causes (HR 1.42, 95%CI 1.24-1.64), cardiovascular disease (CVD) (HR 1.88, 95%CI 1.38-2.56), and other causes (HR 1.37, 95%CI 1.15-1.63). The first and fourth quartiles of WC had HRs of 2.19 (1.00-4.79) and 2.69 (1.23-5.89), respectively, for cancer mortality. The highest quartile of CC was associated with a lower risk of all-cause and other-cause mortality, whereas the lowest quartile was associated with a higher risk of all-cause, CVD, and other-cause mortality compared to the second CC quartile. Additionally, the lowest quartile of BMI was associated with a higher risk of all-cause and respiratory disease mortality. Interaction analyses showed that the effects of CC on all-cause and CVD mortality were more pronounced in adults aged ≥ 80 years (P-interaction &lt; .05). Conclusions: Higher WCR and lower CC increased the risk of all-cause, CVD, and other-cause mortality. Lower BMI was associated with higher all-cause and respiratory disease mortality risk, while WC only predicted cancer mortality. abstract_id: PUBMED:16332651 Are waist circumference and body mass index independently associated with cardiovascular disease risk in Chinese adults? Background: In Western populations, waist circumference (WC) is more predictive of cardiovascular disease (CVD) risk than is body mass index (BMI). It is unclear whether the same is true in Asian populations. Objective: The objective was to examine the independent effects of WC and BMI on CVD risk factors in China. Design: CVD risk factors, BMI, and WC were measured in a nationally representative cross-sectional study of 15 540 Chinese adults aged 35-74 y. Results: Higher WC tertiles were associated with higher blood pressure and higher cholesterol, triacylglycerol, and glucose concentrations within each tertile of BMI and vice versa. In men, the odds of hypertension, dyslipidemia, and the metabolic syndrome (MS) increased with successive WC tertiles (1.0, 1.1, and 1.8, respectively, for hypertension; 1.0, 1.4, and 2.0, respectively, for dyslipidemia; and 1.0, 2.3, and 4.8, respectively, for MS; P for trend &lt; 0.001 for all), even after adjustment for BMI. Similarly, the odds of hypertension, dyslipidemia, and MS increased with successive BMI tertiles (1.0, 1.5, and 2.6, respectively, for hypertension; 1.0, 1.3, and 1.8, respectively, for dyslipidemia; 1.0, 1.3, and 2.9, respectively for MS; P for trend &lt; 0.001 for all), even after adjustment for WC. However, BMI tertiles were not associated with the odds of diabetes after adjustment for WC (P for trend = 0.67), whereas tertiles of WC were significantly associated with the odds of diabetes after adjustment for BMI (1.0, 1.6, and 2.1, respectively; P for trend = 0.002). The results were similar in women. Conclusions: These data show that WC adds additional risk information to that of BMI in Chinese adults. Measurement of both WC and BMI in Chinese adults may enhance CVD risk stratification. abstract_id: PUBMED:37864476 Ratio of waist circumference to body mass index: A novel predictor of clinical outcome in hypertension patients. We aim to investigate the influence of waist circumference and body mass index (BMI) on all-cause death and cardiovascular-specific death in patients with hypertension. This prospective cohort study, based on waist circumference and body mass index measurements in patients with hypertension, provided risk estimates of all-cause mortality and cardiovascular events. The waist circumference-to-BMI ratio (WtBR) is an anthropometric measure integrating waist circumference and BMI. We utilized multivariable Cox regression analysis, restricted cubic spline model, Kaplan-Meier plot, random forest analysis, and sensitivity analysis to assess the relationship of WtBR with all-cause mortality. Subsequently, Fine-Gray competing risk regression models were applied to precisely evaluate the probability of cardiovascular-specific death attributed to high WtBR. The results indicate that thea deceased group showed significantly higher WtBR and lower BMI compared with the alive groups (P &lt; .05), while no significant difference was observed in waist circumference (P = .373). When analyzed as continuous, the risk of all-cause death elevated with increasing WtBR in the adjusted model with an HR of 2.42 (95% CI, 2.06-2.85). The restricted cubic spline illustrated an elevated risk of all-cause mortality as WtBR increased (J-shaped curve). Nevertheless, WtBR showed no significant association with cardiovascular-specific death and the prediction model exhibited a reliable performance in the testing set. This study supported that WtBR, an anthropometric measure, is independently associated with all-cause death in hypertensive patients. It's advisable to routinely assess waist circumference in hypertensive patients regardless of BMI, in order to more effectively manage the risk of obesity-related health. abstract_id: PUBMED:35180873 Combined consideration of body mass index and waist circumference identifies obesity patterns associated with risk of stroke in a Chinese prospective cohort study. Background: In China, few studies have examined the relationship between the combination of body mass index and waist circumference and the risk of stroke. Moreover, the relationship may also be different in different genders. Thus, we investigated the association between the combination of body mass index and waist circumference and the risk of stroke in Chinese. Methods: This prospective cohort study included 36 632 participants aged 18 to 90 years. Participants were recruited from 60 surveillance sites (25 urban sites and 35 rural sites) across China in 2010 China Chronic Disease Risk Factor Surveillance, and followed up in 2016-2017. Incident cases of stroke were identified through questionnaires (including the basis of clinical diagnosis, imaging tests, time of diagnosis, diagnosis unit) and Cardiovascular Event Report System. Risk factors for stroke were collected at baseline using questionnaire, physical measurements and laboratory tests. Cox proportional hazards regression models were used to generate adjusted hazard ratios and 95%CI. All analyses were duplicated by gender stratification. Results: During 6.42 ± 0.50 years of follow-up, 1 333 (597 males, 736 females) stroke events were observed among the 27 112 participants who did not have cardiovascular diseases at baseline. Compared with the general population who have normal weight or underweight with normal WC, those who have normal weight or underweight with abdominal obesity (adjusted hazard ratios 1.45, 95%CI 1.07-1.97 in males; 0.98, 95%CI 0.78-1.24 in females), overweight with abdominal obesity (1.41, 95%CI 1.14-1.75 in males; 1.33, 95%CI 1.10-1.61 in females), obesity with abdominal obesity (1.46, 95%CI 1.11-1.91 in males; 1.46, 95%CI 1.17-1.81 in females). Overweight with normal WC was found to be not statistically significant for both males and females (all P&gt;0.05). Subgroup analysis found a multiplicative interaction between age and anthropometric group in females (P for interaction &lt;0.05). Sensitivity analysis results did not change. In the subjects with CVD risk factors, we found a similar relationship as in the general population . Conclusions: Combined assessment of body mass index and waist circumference identifies obesity patterns associated with stroke risk. abstract_id: PUBMED:34135867 Quotient of Waist Circumference and Body Mass Index: A Valuable Indicator for the High-Risk Phenotype of Obesity. Objective: Measuring the body mass index (BMI) or waist circumference (WC) alone is insufficient for assessing possible health risks due to obesity. This study aimed to investigate whether the quotient of WC and BMI can be used as a proxy of the high-risk phenotype of obesity. Methods: Data for analysis were derived from the National Health and Nutrition Examination Survey (NHANES 1999-2014). The Waist-BMI Ratio was defined as WC divided by BMI. The associations between Waist-BMI Ratio and mortality were estimated using Cox regression models. Restricted cubic spline and two-piecewise linear regression models were used to identify non-linear relationships. The discriminative abilities of different anthropometric measures were compared using receiver operating characteristic curves (ROC). Results: This study is based on data from 35557 adults (51.1% female, mean age 44.9 years). During an average follow-up of 101.8 months, 3680 participants died, including 807 of cardiovascular causes. In fully adjusted models, Waist-BMI Ratio was independently associated with overall (hazard ratio [HR], 1.78; 95% confidence interval [CI], 1.48-2.13) and cardiovascular (HR, 1.77; 95% CI, 1.25-2.52) mortality. Spline analyses revealed that dose-response relationships existed between Waist-BMI Ratio and death. The mortality risk rises dramatically above the cut-off point of the Waist-BMI Ratio (HR, 3.22; 95% CI, 2.43-4.26 for overall mortality and HR, 3.07; 95% CI, 1.71-5.52 for cardiovascular mortality). ROC curve analysis suggested that Waist-BMI Ratio was a better discriminator of mortality (AUC 0.637 for overall and 0.639 for cardiovascular mortality) than BMI, WC, and waist-to-height ratio (Delong's test all P &lt;0.001). Conclusions: Waist-BMI Ratio was independently associated with overall and cardiovascular mortality in a J-shaped pattern, offering an immense potential risk marker for obesity in the clinical setting. abstract_id: PUBMED:29649376 Untangling Waist Circumference and Hip Circumference from Body Mass Index with a Body Shape Index, Hip Index, and Anthropometric Risk Indicator. Body mass index (BMI) corrects body weight for height and is well established for diagnosing overweight and obesity and correlating with many medical conditions. Waist circumference (WC) is used to diagnose abdominal obesity. However, the correlation of BMI and WC is high, around 0.9, making the use of BMI and WC in concert challenging. A body shape index (ABSI) is a new measure of risk associated with abdominal size. Derived in 2012, ABSI is analogous to BMI in that it normalizes WC for weight and height (and thus to BMI). A similar derivation led to hip index (HI), normalizing hip circumference for BMI. Hazard ratios (HRs) for univariate risk associations of the independent measures height, BMI, ABSI, and HI can be multiplied to give a combined HR, the anthropometric risk indicator (ARI). Compared to any one anthropometric factor, including BMI and WC, ARI more accurately estimates personalized mortality hazard. Published work on ABSI, HI, and ARI supports their association with many aspects of health, including cardiometabolic conditions related to the metabolic syndrome. Answer: Yes, waist circumference (WC) and body mass index (BMI) are independently associated with cardiovascular disease (CVD) risk in Chinese adults. Studies have shown that both WC and BMI are linked to various cardiometabolic risk factors and CVD risk in the Chinese population. For instance, higher WC tertiles were associated with higher blood pressure and higher cholesterol, triacylglycerol, and glucose concentrations within each tertile of BMI and vice versa. The odds ratios of hypertension, dyslipidemia, and metabolic syndrome increased with successive WC (or BMI) tertiles after adjustment for BMI (or WC) and several covariates (PUBMED:23165487, PUBMED:16332651). However, BMI tertiles were not associated with the odds of diabetes after adjustment for WC, whereas tertiles of WC were significantly associated with the odds of diabetes after adjustment for BMI, indicating that WC may be a better predictor of diabetic risks in Chinese adults (PUBMED:23165487). Furthermore, the waist-to-height ratio (WHtR) has been suggested as an effective index to identify cardiometabolic risk factors in Chinese individuals with normal BMI and WC, particularly in females (PUBMED:24698481). The waist circumference-to-BMI ratio (WtBR) has also been identified as an anthropometric measure that is independently associated with all-cause death in hypertensive patients, suggesting the importance of assessing waist circumference in hypertensive patients regardless of BMI (PUBMED:37864476). In summary, both WC and BMI are independently associated with CVD risk in Chinese adults, and they may enhance CVD risk stratification when measured together (PUBMED:16332651).
Instruction: Does physician specialty affect the survival of elderly patients with myocardial infarction? Abstracts: abstract_id: PUBMED:11130812 Does physician specialty affect the survival of elderly patients with myocardial infarction? Objective: To determine the effect of treatment by a cardiologist on mortality of elderly patients with acute myocardial infarction (AMI, heart attack), accounting for both measured confounding using risk-adjustment techniques and residual unmeasured confounding with instrumental variables (IV) methods. Data Sources/study Setting: Medical chart data and longitudinal administrative hospital records and death records were obtained for 161,558 patients aged &gt; or =65 admitted to a nonfederal acute care hospital with AMI from April 1994 to July 1995. Our principal measure of significant cardiologist treatment was whether a patient was admitted by a cardiologist. We use supplemental data to explore whether our analysis would differ substantially using alternative definitions of significant cardiologist treatment. Study Design: This retrospective cohort study compared results using least squares (LS) multivariate regression with results from IV methods that accounted for additional unmeasured patient characteristics. Primary outcomes were 30-day and one-year mortality, and secondary outcomes included treatment with medications and revascularization procedures. Data Collection/extraction Methods: Medical charts for the initial hospital stay of each AMI patient underwent a comprehensive abstraction, including dates of hospitalization, admitting physician, demographic characteristics, comorbid conditions, severity of clinical presentation, electrocardiographic and other diagnostic test results, contraindications to therapy, and treatments before and after AMI. Principal Findings: Patients admitted by cardiologists had fewer comorbid conditions and less severe AMIs. These patients had a 10 percent (95 percent CI: 9.5-10.8 percent) lower absolute mortality rate at one year. After multivariate adjustment with LS regression, the adjusted mortality difference was 2 percent (95 percent CI: 1.4-2.6 percent). Using IV methods to provide additional adjustment for unmeasured differences in risk, we found an even smaller, statistically insignificant association between physician specialty and one-year mortality, relative risk (RR) 0.96 (0.88-1.04). Patients admitted by a cardiologist were also significantly more likely to have a cardiologist consultation within the first day of admission and during the initial hospital stay, and also had a significantly larger share of their physician bills for inpatient treatment from cardiologists. IV analysis of treatments showed that patients treated by cardiologists were more likely to undergo revascularization procedures and to receive thrombolytic therapy, aspirin, and calcium channel-blockers, but less likely to receive beta-blockers. Conclusions: In a large population of elderly patients with AMI, we found significant treatment differences but no significant incremental mortality benefit associated with treatment by cardiologists. abstract_id: PUBMED:10781778 Care and outcomes of elderly patients with acute myocardial infarction by physician specialty: the effects of comorbidity and functional limitations. Purpose: Whether patients with acute myocardial infarction who are treated by cardiologists have better outcomes than patients treated by generalist physicians is controversial. Because some of the survival benefit associated with cardiology care may be due to baseline differences in patient characteristics, we evaluated how differences in case-mix of comorbid illness and functional limitations may explain the association between specialty care and survival. Materials And Methods: We examined the records of 109,243 Medicare beneficiaries hospitalized for myocardial infarction from 1994 to 1995 from the national Cooperative Cardiovascular Project to evaluate the association of physician specialty with 30-day and 1-year mortality. We assessed the extent to which this relation was mediated by differences in the use of guideline-supported therapies (aspirin, beta-blockers, reperfusion, angiotensin-converting enzyme inhibitors) or differences in the clinical characteristics of the patients. Results: Patients who had board-certified cardiologists as attending physicians had the least number of comorbid conditions, whereas patients who had general practitioners or internal medicine subspecialists as attending physicians usually had the most comorbidities. Cardiologists had the greatest use of most guideline-supported therapies, and general practitioners had the lowest use. After adjustment for severity of myocardial infarction, clinical presentation, and hospital characteristics, patients treated by cardiologists were less likely to die within 1 year (relative risk [RR] = 0.92, 95%, confidence interval [CI]: 0.89 to 0. 95), and patients cared for by other general practitioners were more likely to die within 1 year (RR = 1.09, 95% CI: 1.03 to 1.14), than patients cared for by general internists. After adjusting for additional measures of comorbid illness and functional limitations, the 1-year survival benefit associated with cardiology care was attenuated relative to internists (RR = 0.97, 95% CI: 0.94 to 1.0), and the excess mortality associated with general practitioners decreased (RR = 1.05, 95% CI: 1.00 to 1.11). After further adjustment for the use of guideline-supported therapies, both differences in 1-year survival between patients treated by cardiologists or general practitioners were not significantly different from those of patients treated by internists. Conclusion: Studies comparing outcomes by physician specialties that do not adjust adequately for differences in patient characteristics may attribute more benefit than is appropriate to specialists who treat patients who have fewer comorbid conditions. Some of the remaining benefit-at least among patients with myocardial infarction-may be attributable to greater use of recommended therapies. abstract_id: PUBMED:29322738 Longterm survival of elderly patients after myocardial infarction depending on management in the acute period of the disease. The paper presents the results of study 5-year survival of elderly patient after acute myocardial infarction. It is shown that this category of patients, used less invasive diagnostic and treatment strategy, compared with younger. In our study the effectiveness of primary percutaneous coronary intervention confirmed in patients with myocardial infarction with ST-segment elevation (STEMI) in relation to long-term survival. However, between the groups of successful thrombolytic therapy and pharmacoinvasive treatment strategy, no significant differences have been identified. This fact and the conflicting data of modern researches in this area confirm that the choice of treatment strategy of acute myocardial infarction in the elderly is challenging and requires an individual approach. abstract_id: PUBMED:18367265 Physician volume, physician specialty and in-hospital mortality for patients with acute myocardial infarction. This study sets out to assess the relationship between in-hospital mortality rates and physician acute myocardial infarction (AMI) volume, along with an examination of the impact of physician specialty on in-hospital mortality rates in Taiwan. Analysis was undertaken on a total of 19,086 patients hospitalized for AMI, following the division of the sample patients into four roughly equivalent groups. Within each physician specialty, the AMI patients were also subsequently grouped into four roughly equivalent groups based upon physician volume. After adjusting for other factors, the likelihood of in-hospital mortality among patients treated by low-volume physicians was 2.141 (p&lt;0.001) times as high as that for patients treated by high-volume physicians, and 2.410 (p&lt;0.001) times as high as that for patients treated by very high-volume physicians. However, while such an inverse relationship was found to persist for those physicians specializing in general internal medicine and 'others', this was not the case for cardiologists. abstract_id: PUBMED:37342856 Elderly patients over 80 years undergoing colorectal cancer resection: Development and validation of a predictive nomogram for survival. Background: Surgery remains the primary treatment for localized colorectal cancer (CRC). Improving surgical decision-making for elderly CRC patients necessitates an accurate predictive tool. Aim: To build a nomogram to predict the overall survival of elderly patients over 80 years undergoing CRC resection. Methods: Two hundred and ninety-five elderly CRC patients over 80 years undergoing surgery at Singapore General Hospital between 2018 and 2021 were identified from the American College of Surgeons - National Surgical Quality Improvement Program (ACS-NSQIP) database. Prognostic variables were selected using univariate Cox regression, and clinical feature selection was performed by the least absolute shrinkage and selection operator regression. A nomogram for 1- and 3-year overall survival was constructed based on 60% of the study cohort and tested on the remaining 40%. The performance of the nomogram was evaluated using the concordance index (C-index), area under the receiver operating characteristic curve (AUC), and calibration plots. Risk groups were stratified using the total risk points derived from the nomogram and the optimal cut-off point. Survival curves were compared between the high- and low-risk groups. Results: Eight predictors: Age, Charlson comorbidity index, body mass index, serum albumin level, distant metastasis, emergency surgery, postoperative pneumonia, and postoperative myocardial infarction, were included in the nomogram. The AUC values for the 1-year survival were 0.843 and 0.826 for the training and validation cohorts, respectively. The AUC values for the 3-year survival were 0.788 and 0.750 for the training and validation cohorts, respectively. C-index values of the training cohort (0.845) and validation cohort (0.793) suggested the excellent discriminative ability of the nomogram. Calibration curves demonstrated a good consistency between the predictions and actual observations of overall survival in both training and validation cohorts. A significant difference in overall survival was seen between elderly patients stratified into low- and high-risk groups (P &lt; 0.001). Conclusion: We constructed and validated a nomogram predicting 1- and 3-year survival probability in elderly patients over 80 years undergoing CRC resection, thereby facilitating holistic and informed decision-making among these patients. abstract_id: PUBMED:8948564 Outcome of acute myocardial infarction according to the specialty of the admitting physician. Background: In order to limit costs, health care organizations in the United States are shifting medical care from specialists to primary care physicians. Although primary care physicians provide less resource-intensive care, there is little information concerning the effects of this strategy on outcomes. Methods: We examined mortality according to the specialty of the admitting physician among 8241 Medicare patients who were hospitalized for acute myocardial infarction in four states during a seven-month period in 1992. Proportional-hazards regression models were used to examine survival up to one year after the myocardial infarction. To determine the generalizability of our findings, we also examined insurance claims and survival data for all 220,535 patients for whom there were Medicare claims for hospital care for acute myocardial infarction in 1992. Results: After adjustment for characteristics of the patients and hospitals, patients who were admitted to the hospital by a cardiologist were 12 percent less likely to die within one year than those admitted by a primary care physician (P&lt;0.001). Cardiologists also had the highest rate of use of cardiac procedures and medications, including medications (such as thrombolytic agents and beta-blockers) that are associated with improved survival. Conclusions: Health care strategies that shift the care of elderly patients with myocardial infarction from cardiologists to primary care physicians lower rates of use of resources (and potentially lower costs), but they may also cause decreased survival. Additional information is needed to elucidate how primary care physicians and specialists should interact in the care of severely ill patients. abstract_id: PUBMED:24133510 One-year clinical outcomes in invasive treatment strategies for acute ST-elevation myocardial infarction complicated by cardiogenic shock in elderly patients. Objective: To investigate the clinical outcomes of an invasive strategy for elderly (aged ≥ 75 years) patients with acute ST-segment elevation myocardial infarction (STEMI) complicated by cardiogenic shock (CS). Methods: Data on 366 of 409 elderly CS patients from a total of 6,132 acute STEMI cases enrolled in the Korea Acute Myocardial Infarction Registry between January 2008 and June 2011, were collected and analyzed. In-hospital deaths and the 1-month and 1-year survival rates free from major adverse cardiac events (MACE; defined as all cause death, myocardial infarction, and target vessel revascularization) were reported for the patients who had undergone invasive (n = 310) and conservative (n = 56) treatment strategies. Results: The baseline clinical characteristics were not significantly different between the two groups. There were fewer in-hospital deaths in the invasive treatment strategy group (23.5% vs. 46.4%, P &lt; 0.001). In addition, the 1-year MACE-free survival rate after invasive treatment was significantly lower compared with the conservative treatment (51% vs. 66%, P = 0.001). Conclusions: In elderly patients with acute STEMI complicated by CS, the outcomes of invasive strategy are similar to those in younger patients at the 1-year follow-up. abstract_id: PUBMED:26048389 Long-term survival of elderly patients undergoing percutaneous coronary intervention for myocardial infarction complicated by cardiogenic shock. Background: The long-term benefit of early percutaneous coronary intervention (PCI) for cardiogenic shock (CS) in elderly patients remains unclear. We sought to assess the long-term survival of elderly patients (age ≥ 75 years) with myocardial infarction (MI) complicated by CS undergoing PCI. Methods: We analyzed baseline characteristics, early outcomes, and long-term survival in 421 consecutive patients presenting with MI and CS who underwent PCI from the Melbourne Interventional Group registry from 2004 to 2011. Mean follow-up of patients who survived to hospital discharge was 3.0 ± 1.8 years. Results: Of the 421 consecutive patients, 122 patients were elderly (≥ 75 years) and 299 patients were younger (&lt; 75 years). The elderly cohort had significantly more females, peripheral and cerebrovascular disease, renal impairment, heart failure (HF) and prior MI (all p &lt; 0.05). Procedural success was lower in the elderly (83% vs. 92%, p &lt; 0.01). Long-term mortality was significantly higher in the elderly (p &lt; 0.01), driven by high in-hospital mortality (48% vs. 36%, p &lt; 0.05). However, in a landmark analysis of hospital survivors in the elderly group, long-term mortality rates stabilized, approximating younger patients with CS (p = 0.22). Unsuccessful procedure, renal impairment, HF and diabetes mellitus were independent predictors of long-term mortality. However, age ≥ 75 was not a significant predictor (HR 1.2; 95% CI 0.9-1.7; p = 0.2). Conclusions: Elderly patients with MI and CS have lower procedural success and higher in-hospital mortality compared to younger patients. However, comparable long-term survival can be achieved, especially in patients who survive to hospital discharge with the selective use of early revascularization. abstract_id: PUBMED:29167613 Clinical Outcomes of Polytetrafluoroethylene-Covered Stents for Coronary Artery Perforation in Elderly Patients Undergoing Percutaneous Coronary Interventions. Background: Coronary artery perforation (CAP) during percutaneous coronary intervention (PCI) is associated with increased mortality. Polytetrafluoroethylene covered stents (CS) are an effective approach to treat CAP, but data regarding elderly patients requiring CS implantation for CAP are limited. The aim of this study is to report clinical data for elderly CAP patients undergoing CS implantation during PCI. Methods: Nineteen consecutive elderly patients (≥ 65 years) undergoing CS implantation due to PCI-induced CAP in a tertiary referral center from July 2003 to April 2016 were retrospectively examined. Results: There were 13 men and six women, with a mean age of 75.3 ± 5.6 years (range: 65-86 years). Perforation grade was Ellis type II in five patients (26.3%), and Ellis type III in 14 patients (73.7%). Cardiac tamponade developed in six patients (31.6%), and intra-aortic balloon pumping was needed in four patients (21.1%). The overall success rate for CS implantation rate was 94.7%. The overall in-hospital mortality rate was 15.8%; the in-hospital myocardial infarction rate was 63.2%. Among 16 survival-to-discharge cases, dual antiplatelet therapy (DAPT) was prescribed in 14 cases (87.5%) for a mean duration of 14 months. Overall, there were five angiogram- proven CS failures among 18 patients receiving successful CS implantation. The 1, 2 and 4 years of actuarial freedom from the CS failure were 78%, 65%, and 43% in the angiogram follow-up patients. Conclusions: CS implantation for CAP is feasible and effective in elderly patients, while CS failure remains a major concern that encourages regular angiographic follow-up in these case. abstract_id: PUBMED:19265550 A systematic review of controlled studies: do physicians increase survival with prehospital treatment? Background: The scientific evidence of a beneficial effect of physicians in prehospital treatment is scarce. The objective of this systematic review of controlled studies was to examine whether physicians, as opposed to paramedical personnel, increase patient survival in prehospital treatment and if so, to identify the patient groups that gain benefit. Methods: A systematic review of studies published in the databases PubMed, EMBASE and Cochrane from January 1, 1990 to November 24, 2008. Controlled studies comparing patient survival with prehospital physician treatment vs. treatment by paramedical personnel in trauma patients or patients with any acute illness were included. Results: We identified 1.359 studies of which 26 met our inclusion criteria. In nine of 19 studies including between 25 and 14.702 trauma patients in the intervention group, physician treatment increased survival compared to paramedical treatment. In four of five studies including between nine and 85 patients with out of hospital cardiac arrest, physician treatment increased survival. Only two studies including 211 and 2.869 patients examined unselected, broader patient groups. Overall, they demonstrated no survival difference between physician and paramedical treatment but one found increased survival with physician treatment in subgroups of patients with acute myocardial infarction and respiratory diseases. Conclusion: Our systematic review revealed only few controlled studies of variable quality and strength examining survival with prehospital physician treatment. Increased survival with physician treatment was found in trauma and, based on more limited evidence, cardiac arrest. Indications of increased survival were found in respiratory diseases and acute myocardial infarction. Many conditions seen in the prehospital setting remain unexamined. Answer: Physician specialty does appear to have an impact on the survival of elderly patients with myocardial infarction (MI), although the extent of this impact and its significance can vary depending on the study and the methodologies used. One study found that patients admitted by cardiologists had a 10 percent lower absolute mortality rate at one year compared to those not admitted by cardiologists. However, after adjusting for unmeasured differences in risk, the association between physician specialty and one-year mortality was smaller and statistically insignificant (PUBMED:11130812). Another study showed that after adjusting for severity of MI, clinical presentation, and hospital characteristics, patients treated by cardiologists were less likely to die within 1 year compared to those treated by general internists. However, after further adjustment for comorbid illness, functional limitations, and the use of guideline-supported therapies, the survival benefit associated with cardiology care was attenuated (PUBMED:10781778). In the context of acute ST-elevation myocardial infarction (STEMI) complicated by cardiogenic shock in elderly patients, invasive treatment strategies were associated with fewer in-hospital deaths and a significantly lower 1-year major adverse cardiac event (MACE)-free survival rate compared with conservative treatment (PUBMED:24133510). Similarly, elderly patients undergoing percutaneous coronary intervention (PCI) for MI complicated by cardiogenic shock had lower procedural success and higher in-hospital mortality compared to younger patients, but those who survived to hospital discharge had comparable long-term survival rates (PUBMED:26048389). Another study found that patients admitted by a cardiologist were 12 percent less likely to die within one year than those admitted by a primary care physician, with cardiologists having the highest rate of use of cardiac procedures and medications associated with improved survival (PUBMED:8948564). Overall, the evidence suggests that while cardiologists may provide more specialized care that can lead to better short-term outcomes, the long-term survival benefit may be less pronounced when accounting for patient characteristics and the use of guideline-supported therapies. The choice of treatment strategy for acute MI in the elderly is complex and requires an individual approach, considering the patient's overall health status and comorbidities (PUBMED:29322738).
Instruction: Do geodemographic typologies explain variations in uptake in colorectal cancer screening? Abstracts: abstract_id: PUBMED:20410067 Do geodemographic typologies explain variations in uptake in colorectal cancer screening? An assessment using routine screening data in the south of England. Background: Uptake of colorectal cancer (CRC) screening in UK is less than 60%. Geodemographic typologies are useful in describing patterns of individual preventive health behaviour but little is known of their value in assessing uptake of CRC screening, or how this compares to traditional measures of area deprivation. Methods: We used data on CRC screening uptake in the South Central, South-East Coast and South-West England National Health Service regions in multilevel logistic regression to describe the effects of individual composition and contextual factors (area deprivation and geodemographic segments) on non-response to screening invitation. The relative impact of geodemographic segmentation and the index of multiple deprivation (IMD) 2007 was compared. The potential population impact of a targeted increase in uptake in specific geodemographic segments was examined. Results: About 88 891 eligible adults were invited to be screened from 2006 to 2008. Uptake rate was 57.3% (CI: 57.0-57.7) and was lower amongst younger persons, men, residents of more deprived areas and people in specific geodemographic segments. Age and gender were significant determinants of uptake and contextual factors explained an additional 3% of the variation. Geodemographic segmentation reduced this residual contextual variation in uptake more than the IMD 2007 (72% vs. 53% reduction). The three geodemographic types that best predicted non-response were characterized by both ethnic mix and a higher than average proportion of single pensioner households renting council properties. Achieving average uptake in the 2.3% of the study population in these geodemographic segments would only increase the total population uptake rate by 0.5% (57.3-57.8%). Conclusion: Variation in the CRC screening uptake in Southern England is principally explained by characteristics of individuals but contextual factors also have a small but significant effect. This effect is captured in greater detail by geodemographic segmentation than by IMD 2007. This information could be used to inform the design of interventions aiming to improve uptake. abstract_id: PUBMED:32785586 Does geodemographic segmentation explain differences in route of cancer diagnosis above and beyond person-level sociodemographic variables? Background: Emergency diagnosis of cancer is associated with poorer short-term survival and may reflect delayed help-seeking. Optimal targeting of interventions to raise awareness of cancer symptoms is therefore needed. Methods: We examined the risk of emergency presentation of lung and colorectal cancer (diagnosed in 2016 in England). By cancer site, we used logistic regression (outcome emergency/non-emergency presentation) adjusting for patient-level variables (age, sex, deprivation and ethnicity) with/without adjustment for geodemographic segmentation (Mosaic) group. Results: Analysis included 36 194 and 32 984 patients with lung and colorectal cancer. Greater levels of deprivation were strongly associated with greater odds of emergency presentation, even after adjustment for Mosaic group, which nonetheless attenuated associations (odds ratio [OR] most/least deprived group = 1.67 adjusted [model excluding Mosaic], 1.28 adjusted [model including Mosaic], P &lt; 0.001 for both, for colorectal; respective OR values of 1.42 and 1.18 for lung, P &lt; 0.001 for both). Similar findings were observed for increasing age. There was large variation in risk of emergency presentation between Mosaic groups (crude OR for highest/lowest risk group = 2.30, adjusted OR = 1.89, for colorectal; respective values of 1.59 and1.66 for lung). Conclusion: Variation in risk of emergency presentation in cancer patients can be explained by geodemography, additional to deprivation group and age. The findings support proof of concept for public health interventions targeting all the examined attributes, including geodemography. abstract_id: PUBMED:8594132 Geodemographic segmentation systems for screening health data. Aim: To describe how geodemographic segmentation systems might be useful as a quick and easy way of exploring postcoded health databases for potential interesting patterns related to deprivation and other socioeconomic characteristics. Design And Setting: This is demonstrated using GB Profiles, a freely available geodemographic classification system developed at Leeds University. It is used here to screen a database of colorectal cancer registrations as a first step in the analysis of that data. Results And Conclusion: Conventional geodemographics is a fairly simple technology and a number of outstanding methodological problems are identified. A solution to some problems is illustrated by using neural net based classifiers and then by reference to a more sophisticated geodemographic approach via a data optimal segmentation technique. abstract_id: PUBMED:30196989 Uptake of the English Bowel (Colorectal) Cancer Screening Programme: an update 5 years after the full roll-out. Background: The initial roll-out of the English Bowel (Colorectal) Cancer Screening programme, during 2006 and 2009, found uptake to be low (54%) and socially graded. The current analysis used data from 2010 to 2015 to test whether uptake is increasing and becoming less socially graded over time. Methods: Postcode-derived area-level uptake of 4.4 million first-time invitees, stratified by gender and the year of the first invitation (2010-2015), was generated using the National Bowel Cancer Screening System. Data were limited to people aged 60-64 years. Binomial regression tested for variations in uptake by the year of invitation, gender, region, area-based socio-economic deprivation and area-based ethnic diversity. Results: Overall, the first-time colorectal cancer (CRC) screening uptake across 6 years was 52% (n = 2,285,996/4,423,734) with a decline between 2010 and 2015 (53%, 54%, 52%, 50%, 49%, 49% respectively). Uptake continued to be socially graded between the most and the least deprived area-level socio-economic deprivation quintiles (43% vs 57%), the most and the least area-based ethnic diversity quintiles (41% vs 56%) and men and women (47% vs 56%). Multivariate analysis demonstrated the effects of year, deprivation, ethnicity and gender on uptake. The effect of deprivation was more pronounced in the most deprived area quintile between men and women (40% vs 47%) than the least deprived area quintile (52% vs 62% respectively). Conclusion: We did not find evidence of change in uptake patterns in CRC screening since its initial launch 10 years ago. The programme is unlikely to realise its full public health benefits and is en route to widening inequalities in CRC outcomes. abstract_id: PUBMED:36740959 The role of defensive information processing in population-based colorectal cancer screening uptake. Background: Internationally, colorectal cancer screening participation remains low despite the availability of home-based testing and numerous interventions to increase uptake. To be effective, interventions should be based on an understanding of what influences individuals' decisions about screening participation. This study investigates the association of defensive information processing (DIP) with fecal immunochemical test (FIT)-based colorectal cancer screening uptake. Methods: Regression modeling of data from a cross-sectional survey within a population-based FIT screening program was conducted. The survey included the seven subdomains of the McQueen DIP measure. The primary outcome variable was the uptake status (screening user or nonuser). Multivariable logistic regression was used to estimate the odds ratio (OR) for screening nonuse by DIP (sub)domain score, with adjustments made for sociodemographic and behavioral factors associated with uptake. Results: Higher scores (equating to greater defensiveness) on all DIP domains were significantly associated with lower uptake in the model adjusted for sociodemographic factors. In the model with additional adjustments for behavioral factors, the suppression subdomains of "deny immediacy to be tested" (OR, 0.53; 95% confidence interval [CI], 0.43-0.65; p &lt; .001) and "self-exemption" (OR, 0.80; 95% CI, 0.68-0.96; p &lt; .001) independently predicted nonuse of FIT-based screening. Conclusions: This is the first study outside the United States that has identified DIP as a barrier to colorectal cancer screening uptake, and it is the first focused specifically on FIT-based screening. The findings suggest that two suppression barriers, namely denying the immediacy to be tested and self-exempting oneself from screening, may be promising targets for future interventions to improve uptake. abstract_id: PUBMED:38461798 Uptake and detection rate of colorectal cancer screening with colonoscopy in China: A population-based, prospective cohort study. Background: Colorectal cancer is the leading cause of cancer-related death worldwide. Colonoscopy is widely used as a screening test for detecting colorectal cancer in many countries. However, there is little evidence regarding the uptake and diagnostic yields of colonoscopy in population-based screening programs in countries with limited medical resources. Objective: We reported the uptake of colonoscopy and the detection of colorectal lesions and explored related factors based on a colorectal cancer screening program in China. Design: Individuals aged 45-74 years who were asymptomatic for colorectal cancer and had no history of colorectal cancer were recruited. An established risk score system was used to identify individuals at high risk for colorectal cancer, and they were subsequently recommended for colonoscopy. Setting: A population-based, prospective cohort study was implemented in 169 communities, 14 districts of Chongqing, Southwest China. Participants: A total of 288,150 eligible participants were recruited from November 2013 to June 2021, and 41,315 participants were identified to be at high risk of colorectal cancer. Methods: Generalized linear mixed model was used to explore the individual and community structural characteristics associated with uptake of colonoscopy. Additionally, the detection rate of colorectal lesions under colonoscopy screening was also reported, and their associated factors were explored. Results: 7859 subjects underwent colonoscopy, with an uptake rate of 19.02 % (95 % CI 18.64 %-19.40 %). Lower uptake rates were associated with older age, lower education, more physical activity, and structural characteristics, including residing in developing areas (OR 0.73, 95 % CI 0.69-0.78), residing more than 5 km from screening hospital (5-10 km: OR 0.85, 95 % CI 0.79-0.91; &gt;10 km: OR 0.85, 95 % CI 0.80-0.91), and not being exposed to social media publicity (OR 0.63, 95 % CI 0.53-0.75). Overall, 8 colorectal cancers (0.10 %), 423 advanced adenomas (5.38 %), 820 nonadvanced adenomas (10.43 %), and 684 hyperplastic polyps (8.70 %) were detected, with an adenoma detection rate of 15.92 %. Several factors, including older age, male, current smoking and a family history of colorectal cancer, were positively related to colorectal neoplasms. Conclusions: The uptake of colonoscopy for colorectal cancer screening was not optimal among a socioeconomically diverse high-risk population. The screening strategy should attempt to ensure equitable access to screening according to regional characteristics, and enhance the uptake of colonoscopy by recommended multifaceted interventions, which focus on individuals with poor compliance, select a closer screening hospital, and strengthen social media publicity at the structural level. abstract_id: PUBMED:29628776 The association between long-term conditions and uptake of population-based screening for colorectal cancer: results from two English cohort studies. Introduction: Uptake of screening for colorectal cancer (CRC) can reduce mortality, and population-based screening is offered in England. To date, there is little evidence on the association between having a long-term condition (LTC) and CRC screening uptake. The objective of this study was to examine the association between having an LTC and uptake of CRC screening in England with the guaiac fecal occult blood test, with a particular focus on common mental disorders. Methods: The study was a preregistered secondary analysis of two cohorts: first, a linked data set between the regional Yorkshire Health Study (YHS) and the National Health Service National Bowel Cancer Screening Program (BCSP, years 2006-2014); second, the national English Longitudinal Study of Ageing (ELSA, years 2014-2015). Individuals eligible for BCSP screening who participated in either the YHS (7,142) or ELSA Wave 7 (4,099) were included. Study registration: ClinicalTrials.gov, number NCT02503969. Results: In both the cohorts, diabetes was associated with lower uptake (YHS odds ratio [OR] for non-uptake 1.35, 95% CI 1.03-1.78; ELSA 1.33, 1.03-1.72) and osteoarthritis was associated with increased uptake (YHS 0.75, 0.57-0.99; ELSA 0.76, 0.62-0.93). After controlling for broader determinants of health, there was no evidence of significantly different uptake for individuals with common mental disorders. Conclusion: Two large independent cohorts provided evidence that uptake of CRC screening is lower among individuals with diabetes and higher among individuals with osteoarthritis. Further work should compare barriers and facilitators to screening among individuals with either of these conditions. This study also demonstrates the benefits of data linkage for improving clinical decision-making. abstract_id: PUBMED:33016227 Do we know the effects of colorectal cancer screening? A short communication on selective uptake and differential effects of colorectal cancer screening in Scandinavia. Colorectal cancer (CRC) contributes extensively to the overall cancer burden, with substantial and increasing social inequality in both incidence and survival. In several countries, this social inequality in incidence and survival has been increasing over time, and the increase is expected to continue. To overcome this, it is advised to implement nationwide CRC screening programmes, as these are effective in detecting possible signs of CRC, hence identifying earlier-stage cancer and reducing mortality. However, little is known about the distribution of these effects across population groups. It is possible that the outcomes of CRC screening are not equally distributed among participants, but rather that the screening programme serves some population groups better than others. The aim of this short communication based on published data is to describe the status of selective uptake according to sociodemographic and economic factors in CRC screening in Scandinavia. Furthermore, we raise questions that need to be addressed in future research in order to grasp the full effects of the screening programme and ultimately to ensure high uptake as well as participation in subsequent diagnostic procedures across population groups. abstract_id: PUBMED:36368343 How effective are digital technology-based interventions at promoting colorectal cancer screening uptake in average-risk populations? A systematic review and meta-analysis of randomized controlled trials. Despite the global prevalence of colorectal cancer (CRC) and efforts in screening advocacy, screening uptake remains relatively low. Considering the greater accessibility and popularity of telemedicine in behaviour change interventions, this meta-analysis seeks to examine the usefulness of digital interventions in promoting CRC screening uptake as compared to existing non-digital strategies. A systematic search on five databases identified articles published before September 2022. Randomized controlled trials comparing the effectiveness of digital interventions to usual care were included and assessed using the Cochrane's Risk of Bias tool. Effectiveness of interventions was measured by CRC screening completion rates, and pooled effect sizes were computed for both digital intervention subtypes identified - decision-making aids and tailored educational interventions. 14 studies (17,075 participants) assessed to have low or some risk of bias were included in this meta-analysis. A random-effects model revealed that digital interventions were more likely to promote CRC screening uptake (OR = 1.31, 95% CI: 1.11-1.56), and using a decision-making aid was almost 1.5 times more likely to result in screening completion (i.e., completed a colorectal investigation using stool-based or direct visualization test) (OR = 1.42, 95% CI: 1.24-1.63). Meanwhile, the tailored educational intervention subtype failed to achieve statistical significance in promoting screening uptake, bearing in mind the significant heterogeneity across studies (I2 = 88.6%). Digital decision-making aids significantly improved CRC screening uptake compared to tailored digital educational interventions and usual care. However, as all included studies were conducted in Western settings, its role in augmenting existing CRC screening promotion strategies especially among Asians should be further evaluated. abstract_id: PUBMED:33505841 Implementation of long-term non-participant reminders for flexible sigmoidoscopy screening. The clinical effectiveness of screening is highly dependent on uptake. Previous randomised controlled trials suggest that non-participant reminders, which highlight the opportunity to re-book an appointment, can improve participation. The present analysis examines the impact of implementing these reminders within the English Flexible Sigmoidoscopy (FS) Screening Programme, which offers once-only FS screening to adults aged 55-59 years. We assessed the screening status of 26,339 individuals invited for once-only FS screening in England. A total of 10,952 (41.6%) had attended screening, and were subsequently ineligible. The remaining 15,387 had not attended screening, and were selected to receive a reminder, 1-2 years after their invitation. Descriptive statistics were used to assess the increase in uptake and the adenoma detection rate (ADR) of those who self-referred, six months after the delivery of the final reminder. Pearson's Chi-Square was used to compare the ADR between those who attended when invited and those who self-referred. Of the 15,387 adults eligible to receive a reminder, 13,626 (88.6%) were sent a reminder as intended (1,761 were not sent a reminder, due to endoscopy capacity). Of these, 8.0% (n = 1,086) booked and attended an appointment, which equated to a 4.1% increase in uptake from 41.6% at baseline, to 45.7% at follow-up. The ADR was significantly higher for those who self-referred, compared with those who attended when invited (13.3% and 9.5%, respectively; X2 = 16.138, p = 0.000059). The implementation of non-participant reminders led to a moderate increase in uptake. Implementing non-participant reminders could help mitigate the negative effects of COVID-19 on uptake. Answer: Yes, geodemographic typologies do explain variations in uptake in colorectal cancer (CRC) screening. Studies have shown that geodemographic segmentation can be useful in describing patterns of individual preventive health behavior, including the uptake of CRC screening. For instance, in the south of England, geodemographic segmentation was found to explain variations in CRC screening uptake more effectively than traditional measures of area deprivation, such as the index of multiple deprivation (IMD) 2007. The study found that geodemographic segments characterized by both ethnic mix and a higher than average proportion of single pensioner households renting council properties were predictive of non-response to screening invitations (PUBMED:20410067). Another study examining the risk of emergency presentation of lung and colorectal cancer in England found that geodemographic segmentation, in addition to patient-level variables like age, sex, deprivation, and ethnicity, could explain variations in the risk of emergency presentation. This suggests that geodemographic factors have an impact on cancer diagnosis routes and, by extension, on screening uptake (PUBMED:32785586). Furthermore, geodemographic segmentation systems have been proposed as a quick and easy way to explore health databases for patterns related to socioeconomic characteristics, which can be relevant for understanding variations in health behaviors such as CRC screening uptake (PUBMED:8594132). Overall, the evidence suggests that geodemographic typologies are indeed relevant in explaining variations in CRC screening uptake, and this information could be used to design targeted interventions to improve uptake rates (PUBMED:20410067; PUBMED:32785586; PUBMED:8594132).
Instruction: Does ICU length of stay influence quality of life? Abstracts: abstract_id: PUBMED:16045659 Does ICU length of stay influence quality of life? Background: Patients with prolonged stay in the intensive care unit (ICU) use a disproportionate share of resources. However, it is not known if such treatment results in impaired quality of life (QOL) as compared to patients with a short length of stay (LOS) when taking into account the initial severity of illness. Methods: Prospective, observational case-control study in a university hospital surgical and trauma adult ICU. All patients admitted to the ICU during a 1-year period were included. Patients with a cumulative LOS in the ICU &gt; 7 days, surviving up to 1 year after ICU admission and consenting were identified (group L, n = 75) and matched to individuals with a shorter stay (group S). Matching criteria were diagnostic group and severity of illness. Health-related quality of life (HRQOL) was assessed 1 year after admission using the short-form 36 (SF-36) and was compared between groups and to the general population. Further, overall QOL was estimated using a visual analogue scale (VAS) and willingness to consent to future intensive care, and was compared between groups L and S. Results: Based on ANCOVA, a significant difference between groups L and S was noted for two out of eight scales: role physical (P = 0.033) and vitality (P = 0.041). No differences were found for the physical component summary (P = 0.065), the mental component summary (P = 0.267) or the VAS (P = 0.316). Further, there was no difference in expectation to consent to future intensive care (P = 0.149). As compared to the general population, we found similar scores for the mental component summary and for three of eight scales in group L and five of eight scales in group S. Conclusions: When taking into account severity of illness, HRQOL 1 year after intensive care is comparable between patients with a short and a long LOS in the ICU. Thus, prolonged stay in the ICU per se must not be taken as an indicator of future poorer HRQOL. However, as compared to the general population, significant differences, mostly in physical aspects of QOL, were found for both groups of patients. abstract_id: PUBMED:30398978 Survival, Quality of Life, and Functional Status Following Prolonged ICU Stay in Cardiac Surgical Patients: A Systematic Review. Objectives: Compared with noncardiac critical illness, critically ill postoperative cardiac surgical patients have different underlying pathophysiologies, are exposed to different processes of care, and thus may experience different outcome trajectories. Our objective was to systematically review the outcomes of cardiac surgical patients requiring prolonged intensive care with respect to survival, residential status, functional recovery, and quality of life in both hospital and long-term follow-up. Data Sources: MEDLINE, Embase, CINAHL, Web of Science, and Dissertations and Theses Global up to July 21, 2017. Study Selection: Studies were included if they assessed hospital or long-term survival and/or patient-centered outcomes in adult patients with prolonged ICU stays following major cardiac surgery. After screening 10,159 citations, 114 articles were reviewed in full; a final 34 articles met criteria for data extraction. Data Extraction: Two reviewers independently extracted data and assessed risk of bias using the National Institutes of Health Quality Assessment Tool for Observational Studies. Extracted data included the used definition of prolonged ICU stay, number and characteristics of prolonged ICU stay patients, and any comparator short stay group, length of follow-up, hospital and long-term survival, residential status, patient-centered outcome measure used, and relevant score. Data Synthesis: The definition of prolonged ICU stay varied from 2 days to greater than 14 days. Twenty-eight studies observed greater in-hospital mortality among all levels of prolonged ICU stay. Twenty-five studies observed greater long-term mortality among all levels of prolonged ICU stay. Multiple tools were used to assess patient-centered outcomes. Long-term health-related quality of life and function was equivalent or worse with prolonged ICU stay. Conclusions: We found consistent evidence that patients with increases in ICU length of stay beyond 48 hours have significantly increasing risk of hospital and long-term mortality. The significant heterogeneity in exposure and outcome definitions leave us unable to precisely quantify the risk of prolonged ICU stay on mortality and patient-centered outcomes. abstract_id: PUBMED:30336918 The influence of prolonged intensive care stay on quality of life, recovery, and clinical outcomes following cardiac surgery: A prospective cohort study. Objective: To examine the influence of prolonged intensive care unit (ICU) stay on quality of life and recovery following cardiac surgery. Methods: Quality of life was assessed using the Short Form 36 Health Survey (SF36). The Postoperative Quality of Recovery Scale was used to assess quality of recovery, disability, and cognition after ICU discharge over 12 months' follow-up. Prolonged ICU stay was defined as ≥3 postoperative days. Mortality and major adverse cardiac and cerebrovascular events were recorded up to 12 months. Results: For quality of life, the physical component improved over time in both groups (P &lt; .01 for both groups), as did the mental component (P &lt; .01 for both groups). The long ICU group had lower physical and mental components over time (both P values &lt; .01), but by 12 months the values were similar. The overall quality of recovery was lower for the long ICU group (P &lt; .01). Likewise, we found higher rates of recovery in the normal ICU group than in the long ICU group in terms of emotive recovery (P &lt; .01), activities of daily living (P &lt; .01), and cognitive recovery (P = .03) but no differences in terms of physiologic (P = .91), nociceptive (P = .89), and satisfaction with anesthetic care (P = .91). Major adverse cardiac and cerebrovascular events (P &lt; .01), 30-day mortality (P &lt; .01), and length of ward stay (P &lt; .01) were all higher with prolonged ICU stay. Conclusions: Patients with prolonged ICU stay have lower quality of life scores; however, they achieve similar midterm quality of recovery, but with reduced survival, increased major adverse cardiac and cerebrovascular events, and longer hospital length of stay. abstract_id: PUBMED:35061188 Illness cognitions and health-related quality of life in liver transplant patients related to length of stay, comorbidities and complications. Purpose: Illness cognitions regarding helplessness and acceptance are known to play a role in health-related quality of life (HRQoL). Our study examined the evolution of these illness cognitions and the physical (PQoL) and mental QoL (MQoL) in liver transplantation (LT) patients over time in relation to pre- and postoperative clinical factors. Methods: We performed an analytical cross-sectional study using self-report questionnaires at 4 timeframes: preLT, postLT0-3 m, postLT1y, and postLT2y. T-test was used to identify the influence of different clinical factors related to the LT on postLT2y QoL and illness cognition. Linear mixed models were used to determine evolution. Results: PostLT patients showed significant less helplessness and more acceptance cognitions. PQoL and MQoL decreased postLT0-3 m, then started to increase and are highest at postLT1y. Patients with preLT ascites showed significantly less helplessness postLT2y, while patients with a low preLT MELD score &lt; 20 showed a significant better MQoL postLT2y. Biliary complications and re-transplantation were associated with more helplessness and a worse PQoL postLT1y-2y. Length of stay in ICU and hospital was negatively correlated with illness cognitions and PQoL and MQoL postLT1y. CONCLUSIONS: Our findings confirm that liver transplant patients have improvement of illness cognitions and mental and physical HRQoL at 1 and 2 years after liver transplantation. A postoperative period without complications and with short stay in ICU and in hospital, is important to achieve PQoL and feeling less helpless, while the MQoL is influenced by acceptance and preLT PQoL. Multidisciplinary approach preLT and postLT should be standard care. abstract_id: PUBMED:35160032 Hypophosphatemia on ICU Admission Is Associated with an Increased Length of Stay in the ICU and Time under Mechanical Ventilation. Hypophosphatemia is frequently observed in the ICU and is associated with several impairments such as respiratory failure or infections. We hypothesized that hypophosphatemia on ICU admission is associated with a prolonged duration of mechanical ventilation and ICU length of stay (LOS), particularly in COVID-19 patients. This cross-sectional study analyzed data from 1226 patients hospitalized in the ICU of the Geneva University Hospitals from August 2020 to April 2021. Patients were categorized as having hypophosphatemia (phosphatemia ≤ 0.8 mmol/L) or non-hypophosphatemia (phosphatemia &gt; 0.8 mmol/L) on ICU admission. Linear regressions were performed to investigate the association between hypophosphatemia on ICU admission and ICU LOS and duration of mechanical ventilation. Overall, 250 (20%) patients presented hypophosphatemia on ICU admission. In the univariable analysis, hypophosphatemic patients had longer ICU LOS than non-hypophosphatemic patients, 7.4 days (±10.4) versus 5.6 days (±8.3), (p &lt; 0.01). Hypophosphatemia on ICU admission was associated with a prolonged duration of mechanical ventilation, 7.4 days (±11.2) versus 5.6 days (±8.9), (p &lt; 0.01). These associations were confirmed in the multivariable analysis (p &lt; 0.01). In the subgroup of COVID-19 patients, a significant association between hypophosphatemia and ICU LOS and duration of mechanical ventilation was also observed. In conclusion, hypophosphatemia on ICU admission is associated with a longer ICU LOS and time under mechanical ventilation, both in the general ICU population and in COVID-19 patients. abstract_id: PUBMED:30636653 Electronic medical orders for life-sustaining treatment in New York State: Length of stay, direct costs in an ICU setting. Objective: In the United States, approximately 20% patients die annually during a hospitalization with an intensive care unit (ICU) stay. Each year, critical care costs exceed $82 billion, accounting for 13% of all inpatient hospital costs. Treatment of sepsis is listed as the most expensive condition in US hospitals, costing more than $20 billion annually. Electronic Medical Orders for Life-Sustaining Treatment (eMOLST) is a standardized documentation process used in New York State to convey patients' wishes regarding cardiopulmonary resuscitation and other life-sustaining treatments. No study to date has looked at the effect of eMOLST as an advance care planning tool on ICU and hospital costs using estimates of direct costs. The objective of our study was to investigate whether signing of eMOLST results in any reduction in length of stay and direct costs for a community-based hospital in New York State. Method: A retrospective chart review was conducted between July 2016 and July 2017. Primary outcome measures included length of hospital stay, ICU length of stay, total direct costs, and ICU costs. Inclusion criteria were patients ≥65 years of age and admitted into the ICU with a diagnosis of sepsis. An independent samples t test was used to test for significant differences between those who had or had not completed the eMOLST form. Result: There were no statistical differences for patients who completed or did not complete the eMOLST form on hospital's total direct cost, ICU cost, total length of hospital stay, and total hours spent in the ICU. Significance Of Results: Completing an eMOLST form did not have any effect on reducing total direct cost, ICU cost, total length of hospital stay, and total hours spent in the ICU. abstract_id: PUBMED:27473872 Systematic review of factors influencing length of stay in ICU after adult cardiac surgery. Background: Intensive care unit (ICU) care is associated with costly and often scarce resources. In many parts of the world, ICUs are being perceived as major bottlenecks limiting downstream services such as operating theatres. There are many clinical, surgical and contextual factors that influence length of stay. Knowing these factors can facilitate resource planning. However, the extent at which this knowledge is put into practice remains unclear. The aim of this systematic review was to identify factors that impact the duration of ICU stay after cardiac surgery and to explore evidence on the link between understanding these factors and patient and resource management. Methods: We conducted electronic searches of Embase, PubMed, ISI Web of Knowledge, Medline and Google Scholar, and reference lists for eligible studies. Results: Twenty-nine papers fulfilled inclusion criteria. We recognised two types of objectives for identifying influential factors of ICU length of stay (LOS) among the reviewed studies. These were general descriptions of predictors and prediction of prolonged ICU stay through statistical models. Among studies with prediction models, only two studies have reported their implementation. Factors most commonly associated with increased ICU LOS included increased age, atrial fibrillation/ arrhythmia, chronic obstructive pulmonary disease (COPD), low ejection fraction, renal failure/ dysfunction and non-elective surgery status. Conclusion: Cardiac ICUs are major bottlenecks in many hospitals around the world. Efforts to optimise resources should be linked to patient and surgical characteristics. More research is needed to integrate patient and surgical factors into ICU resource planning. abstract_id: PUBMED:28883768 Comparison of Statistical Models of Predict the Factors Affecting the Length of Stay (LOS) in the Intensive Care Unit (ICU) of a Teaching Hospital. Introduction: Modeling can be a useful tool to find out how the distributions of hospital length of stay (LOS) and the factors affecting the length of stay. The present study aims to determine factors affecting the length of stay and selecting suitable statistical models. Material And Method: this is a cross - sectional study of 565 patients who were treated in the intensive care unit of Imam Khomeini hospital in Ahwaz. Preliminary data were collected retrospectively through the medical records of all patients admitted on intensive care units of Ahwaz Imam Khomeini Hospital in 2015. Statistical analysis and multivariate regression models were done using of SPSS 21 and STATA 7 software. Results: Average length of stay in ICU was 8.16±0.75 days. The Mean and Median age of patients were 58.61±20 and 61 respectively, The Mean LOS for females (16.44±9.37 days) was more than the men (11.5±5.35 days) (p&lt;0.01). The maximum and minimum lengths of stay belonged to patients with endocrine disorders (14.7±3.1 days) and patients with gastrointestinal disorders (5.53±1.1 days) respectively (p&lt;0.01). The goodness of fit for Gamma model showed that this model was more suitable and powerful than Log-normal model to predict the factors affecting the patient's length of stay in intensive care units of hospital. Conclusion: Gamma regression model was more robust to predict factors regarding the hospital length of stay. According to Gamma model the key factor in predicting the length of stay in ICU was the type of disease diagnosis. The result of statistical modeling can help managers and policy makers to estimate hospital resources and allocate them for different hospital services. abstract_id: PUBMED:17323668 Survival and quality of life after prolonged intensive care unit stay. There are few data on long-term outcomes in mixed groups of intensive care unit (ICU) patients with prolonged stays. We evaluated the relationship between length of stay in the ICU and long-term outcome in all patients admitted to our 31-bed department of medico-surgical intensive care over a one-year period who stayed in the department for more than 10 days (n = 189, 7% of all ICU admissions). Mortality increased with length of stay from 1 to 10 days (1 day 5%, 5 days 15%, 9 days 24%, 10 days 33%) but remained stable at about 35% for longer ICU stays. In the long-stay patients, the most common reasons for ICU admission were intracranial bleeding (23%), polytrauma (14%), respiratory failure (13%) and septic shock (11%). The main reasons for prolonged ICU stay were ventilator dependency (40%), infectious complications (23%) and coma (16%). Long-stay patients had a 65% ICU survival, 55% hospital survival and 37% one-year survival. At one-year follow-up, 73% of surviving patients reported no or minor persistent physical complaints compared to before the acute illness; 27% had a major functional impairment, including 8% who required daily assistance. In conclusion, in ICU patients, mortality increases with length of stay up to 10 days. Patients staying in the ICU for more than 10 days have a relatively good long-term survival. Most survivors have an acceptable quality of life. abstract_id: PUBMED:34181955 SICQ Coping and the Health-Related Quality of Life and Recovery of Critically Ill ICU Patients: A Prospective Cohort Study. Background: The coping styles of the Sickness Insight in Coping Questionnaire (SICQ; positivism, redefinition, toughness, fighting spirit, nonacceptance) may affect the health and recovery of hospitalized critically ill patients. Research Question: Do the SICQ coping styles of hospitalized critically ill patients relate to the patients health-related quality of life (HRQoL) and recovery? Study Design And Methods: A prospective cohort study was conducted in a single university-affiliated Dutch hospital. Participants were critically ill adult patients admitted to a mixed medical-surgical ICU (start: n = 417; pre-ICU: n = 391; hospital discharge: n = 350; 3-month follow-up: n = 318; 6-month follow-up: n = 308; 12-month follow-up: n = 285). Coping was recorded with the SICQ pre-ICU and at discharge. HRQoL was measured with the SF-12 pre-ICU, at discharge, and 3, 6, and 12 months after discharge. Indicators of recovery were ICU and hospital length of stay, discharge disposition, and mortality. Correlation and regression analyses were used for data analysis. Results: Positivism (r = 0.28-0.51), fighting spirit (r = 0.14-0.35), and redefinition (r = 0.12-0.23) associated significantly (P &lt; .05) with mental HRQoL after discharge. Furthermore, positivism associated positively (P &lt; .01) with physical HRQoL (r = 0.17-0.26) after discharge. Increase in positivism (r = 0.13), redefinition (r = 0.13), and toughness (r = 0.13) across the period of hospitalization associated positively (P ≤ .05) with mental HRQoL at discharge. Pre-ICU positivism associated with hospital length of stay (ρ = -.21, P ≤ .05) and hazard for death (HR = 0.57, P &lt; .01) and had a unidirectional effect on mental HRQoL (β = .30, P &lt; .001). Interpretation: SICQ coping is associated with long-term mental HRQoL, hospital length of stay, and hazard for death among hospitalized critically ill patients. Answer: ICU length of stay does influence quality of life, but the impact varies depending on several factors, including the initial severity of illness, the type of patient, and the presence of complications or comorbidities. For patients with prolonged ICU stays, there is evidence of lower quality of life scores compared to those with shorter stays. However, over time, some studies have found that these differences may diminish, and patients can achieve similar midterm quality of recovery, although with reduced survival, increased major adverse cardiac and cerebrovascular events, and longer hospital length of stay (PUBMED:30336918). In the context of cardiac surgery, prolonged ICU stay has been associated with greater in-hospital and long-term mortality, and long-term health-related quality of life and function were found to be equivalent or worse compared to those with shorter ICU stays (PUBMED:30398978). A study on liver transplant patients showed that a postoperative period without complications and with a short stay in ICU and hospital is important for achieving better physical quality of life and feeling less helpless, while mental quality of life is influenced by acceptance and pre-transplant physical quality of life (PUBMED:35061188). Another study found that hypophosphatemia on ICU admission is associated with a longer ICU length of stay and time under mechanical ventilation, both in the general ICU population and in COVID-19 patients (PUBMED:35160032). However, one study found that when taking into account the severity of illness, health-related quality of life one year after intensive care is comparable between patients with a short and a long length of stay in the ICU, suggesting that prolonged stay in the ICU per se must not be taken as an indicator of future poorer health-related quality of life (PUBMED:16045659). In conclusion, while ICU length of stay can influence quality of life, the relationship is complex and influenced by multiple factors, including the patient's initial health status, the presence of complications, and the type of illness or surgery. Some studies suggest that while there may be an initial impact on quality of life, this can improve over time, and a prolonged ICU stay does not necessarily predict poorer long-term quality of life outcomes.
Instruction: Can the Kawashima procedure be performed in younger patients? Abstracts: abstract_id: PUBMED:19632416 Can the Kawashima procedure be performed in younger patients? Background: The prudence of performing early palliative cavopulmonary connection that includes superior vena cava in association with azygous-hemiazygous continuation of the inferior vena cava, Kawashima procedure (KP), has been questioned. We document our experience with KP performed at a relatively younger age than usually reported. Methods: A retrospective review of patients undergoing KP (October 2000 to April 2008) was done. Results: Initial palliation was carried out in 13 of 15 patients. Age and weight at KP was 8.4 months (5.1 to 15.1) and 6.8 kg (4.6 to 11.0). The pre-KP catheterization showed the following: pulmonary artery pressure = 14.5 mm Hg (9 to 17); end-diastolic pressure of systemic ventricle = 8 (2 to 14); oxygenation saturation = 76% (63 to 82); and atrioventricular (AV) valve insufficiency moderate or greater in 5 patients. The post-KP characteristics included the following intubation = 1 day (0 to 19); nitric oxide = 4 patients; superior caval pressure = 14 mm Hg (6 to 18); inotrope score = 7.5 (2.5 to 14.3); intensive care unit stay = 3 days (1 to 9); hospital stay = 7 days (3 to 77); and oxygen saturation at discharge = 84% (76 to 90%). There was one hospital death that required takedown of KP. Fontan completion was performed in 8 patients at an interval of 2.7 years (1.8 to 5.8) after KP. There was one post-Fontan mortality from severe ventricular and AV valve dysfunction. Pulmonary arteriovenous malformations (PAVMs) were diagnosed in 4 patients with 3 resolving post-Fontan. With a median follow-up of 4.2 years (0.1 to 7.9), 13 of 15 remain alive yielding a series survival of 87%. Conclusions: The Kawashima procedure can be safely performed at an earlier age than previously reported. The incidence of PAVMs after the KP appears to be similar to other reports where KP was performed at a later age. abstract_id: PUBMED:33679056 Pulmonary arteriovenous malformations in children after the Kawashima procedure: Risk factors and midterm outcome. Background: Pulmonary arteriovenous malformations (PAVMs) are the major cause of progressive cyanosis in patients palliated with bidirectional cavopulmonary connection (BCPC). The aim of our study is to analyze the occurrence of PAVMs in patients after Kawashima procedure, to study the effect of total cavopulmonary connection (TCPC) on PAVMs, to evaluate the effect of axillary arteriovenous fistula (AAVF) creation on PAVMs, and to study the risk factors for PAVMs. Methods: In this retrospective cohort study, all patients with left isomerism and azygous continuation of an interrupted inferior vena cava who underwent Kawashima procedure from July 2001 to December 2017 were included. Results: Twenty.six patients after Kawashima procedure were included in our study. PAVMs were diagnosed in 12 patients (46%). Five of these 12 patients underwent TCPC with complete resolution of hypoxemia. Three patients underwent AAVF creation, 2 had complete resolution, while 1 had partial resolution of hypoxemia. Fourteen patients (54%) did not develop PAVMs. Nakata index below 267 mm2/m2and McGoon ratio below 1.9 predicted the development of PAVMs with high sensitivity and specificity. Conclusions: PAVMs represent a serious complication in patients who undergo Kawashima procedure. Small size of pulmonary arteries is an important risk factor for the development of PAVMs. Resolution of hypoxemia after TCPC completion supports the hepatic factor hypothesis. Early TCPC completion in these patients may help to avoid the development of PAVMs by restoring the hepatic factor. Resolution of hypoxemia after AAVF creation may support the lack of pulsatile flow hypothesis. abstract_id: PUBMED:35381553 Bilateral chylothorax: A late complication of Kawashima procedure despite normal pulmonary pressures. Introduction: Chylothorax, a relatively rare congenital heart disease early postoperative complication, is occurring more frequently due to complexity of cardiac surgeries. Presentation Of Case: We present a 9-month-old boy who had hypoplastic left heart (HLH) syndrome with interrupted inferior vena cava (IVC) and bilateral superior vena cava (SVC) palliated with left sided modified Blalock-Taussig (MBT) shunt during neonatal period and second stage palliation with left sided bidirectional glen (BDG) procedure and right sided Kawashima procedure develop bilateral chylothorax two weeks after discharge. Discussion: This is the first reported case in the literature of a patient who developed chylothorax with relatively low Fontan systemic venous pressures after a Kawashima procedure. Clinically important chylothorax may be a marker of poor long-term outcomes, demonstrating an inability to handle overwhelming lymphatic congestion. Conclusion: Early diagnosis of chylothorax in complex cardiac surgeries may permit successful conservative management. abstract_id: PUBMED:30799723 Outcomes of Single Ventricle Patients Undergoing the Kawashima Procedure: Can We Do Better? Objectives: Current technology advances in virtual surgery modeling and computational flow dynamics allow preoperative individualized computer-based design of Fontan operation. To determine potential role of those innovations in patients undergoing hepatic vein incorporation (HVI) following Kawashima operation, we retrospectively examined historic cohort of patients who underwent HVI following Kawashima with focus on regression of pulmonary arteriovenous malformations (PAVMs). Methods: Twenty-two children with single ventricle and interrupted inferior vena cava underwent Kawashima operation (2002-12). Twenty-one (96%) patients had left atrial isomerism and 21 (96%) had undergone prior first-stage palliation. Clinical outcomes were examined. Results: Mean O2 saturation (SaO2) increased from 77% ± 8% to 85% ± 6% ( P = .002) after Kawashima. Fifteen (68%) patients developed PAVMs. Eighteen patients underwent HVI (median age and interval from Kawashima: 4.4 and 3.7 years, respectively). Mean SaO2 prior to HVI was 77% ± 8% and increased to 81% ± 10% at the time of hospital discharge ( P = .250), with five patients requiring home oxygen. On follow-up, mean SaO2 increased to 95% ± 4% ( P &lt; .001). Overall ten-year survival following Kawashima was 94%. Conclusions: A large number of patients develop PAVMs and subsequent cyanosis after Kawashima operation. Early following HVI, SaO2 is commonly low and insignificantly different from that prior to HVI. Although SaO2 will improve on follow-up in most patients, a number of patients continue to have low saturations, indicating incomplete resolution of PAVMs. Given the heterogeneity of those patients and lack of preoperative predictors for complete PAVM regression, our findings suggest a role for virtual surgery to determine optimal individual procedure design that would provide even distribution of hepatic blood flow to both pulmonary arteries. abstract_id: PUBMED:24403356 High incidence of abdominal venovenous collaterals after Kawashima operation. Introduction: Systemic venous collaterals have been found at different stages of single-ventricle palliation, specifically after Kawashima operation. In this study, we present the incidence, clinical features, associated risk factors, and management of such venovenous collaterals (VVCs) after Kawashima procedure. Materials And Methods: A retrospective review of all the patients who underwent Kawashima procedure prior to December 2011 at Hamad General Hospital, Qatar, was performed. Data were collected and reviewed from medical records. Results: The study group involved a total of six patients with Kawashima procedure. The median age of patients was 112.7 months (range 46-336 months), and median age at the time of Kawashima operation was 31.7 months (range 15-187 months). Mean systemic arterial oxygen saturation (Spo 2) after Kawashima operation was 94.5% ± 1.5%. In five patients, during a median follow-up of 53.4 months (range 16.9-147.9 months), the Spo 2 declined to a mean of 78.8% ± 8.2%. A total of 16 venous collaterals were observed in the study group, and majority (70%) of these collaterals were subdiaphragmatic. Conclusion: In our study, abdominal VVCs were observed in 100% of the patients who were followed after Kawashima operation. We therefore recommend that thorough evaluation for such VVCs should be part of the ongoing evaluation of patients after Kawashima operation, especially in those with low Spo 2. We also recommend early surgical rerouting of hepatic veins to pulmonary arteries in all post-Kawashima patients. abstract_id: PUBMED:35076098 The role of the hepatic factor on the improvement of arterial oxygenation after a Kawashima procedure: A case report. Background: Anomalies of the systemic venous return to the heart are infrequent, occurring more often in patients with congenital heart disease. Case Report: We present a patient with heterotaxy, univentricular circulation, and hemiazygos return who underwent a Kawashima procedure at 3 years of age. Fourteen years later, she developed increased cyanosis and tiredness. The symptoms clearly improved after an off-pump derivation of the hepatic drainage from the common atrium to the hemiazygos system. Conclusions: This rare case report addresses the surgical challenges imposed by patients with complex congenital heart disease combined with anomalies in systemic venous return. Although improvement in arterial oxygenation after redirecting hepatic flow to the hemiazygos system in patients previously submitted to a Kawashima procedure has already been reported, the existence of a hepatic factor remains to be recognized. abstract_id: PUBMED:27011689 Arterial desaturation due to pulmonary arteriovenous malformations after the Kawashima Operation. Arterial desaturation may occur after the Kawashima procedure and, in the absence of venovenous collaterals is usually due to pulmonary arteriovenous malformations. Stenting of the pulmonary arteries, oxygen therapy, and pulmonary vasodilators such as sildenafil have not been able to resolve the arterial desaturation and the only way to do this has been Fontan completion. The time course of the formation of these malformations after the Kawashima and the progression of cyanosis and its resolution after the Fontan has only been demonstrated in case reports and small case series. We pool the available data to model arterial saturations in patients with pulmonary arteriovenous malformations after the Kawashima procedure. abstract_id: PUBMED:28884283 Clinical outcomes of early scheduled Fontan completion following Kawashima operation. Objectives: This study reviewed late clinical features after Kawashima operation to confirm the impact of scheduled subsequent early Fontan completion. Methods: Of the 17 consecutive patients who underwent the Kawashima operation between 1987 and 2010, 11 underwent the procedure as inter-stage palliation (scheduled Fontan group). Ten of these patients underwent subsequent early Fontan completion after a median interval of 0.6 years. The remaining 6 patients underwent the Kawashima operation initially as definitive surgery (non-scheduled group). Late Fontan completion was performed in 4 of these patients as salvage surgery to prevent progression of hypoxia after a median interval of 6.6 years. Results: All patients completed follow-up; the mean follow-up period was 12 ± 7 years (range 0.6-28.2). Cumulative survival at 10 years was 66% in the non-scheduled group and 79% in the scheduled group (p = 0.66). Pulmonary arteriovenous malformations developed after Kawashima operation in all 4 patients without antegrade pulmonary blood flow in the non-scheduled group but in only 2 of 11 patients in the scheduled group, both of which completely resolved after Fontan completion. In the scheduled Fontan group, 3 patients developed venovenous malformations between the Fontan pathway and the pulmonary veins or atrium after Fontan completion. Conclusions: Issues related to pulmonary arteriovenous malformations after Kawashima operation resolved with early scheduled Fontan completion. However, data on long-term outcomes are limited and the risk of death continues throughout early life. For unsuitable Fontan candidates, the Kawashima operation with antegrade pulmonary blood flow may provide definitive palliation. abstract_id: PUBMED:29945505 Kawashima by Fenestrated Hemi-Fontan for Palliation Following Prior Stage I Norwood Operation. Single ventricle patients with interrupted inferior vena cava (IVC) and azygos continuation to the superior vena cava (SVC) are typically palliated with a bidirectional cavopulmonary shunt (BCPS), known as the Kawashima operation in this setting. Because the volume of venous blood directed to the pulmonary arteries is substantially greater in the presence of interrupted IVC, Kawashima procedures are commonly delayed to older age compared to other single ventricle patients undergoing BCPS. We report two young infant single ventricle patients with interrupted IVC and azygos continuation to the SVC who underwent stage I Norwood procedures for initial palliation. In both cases, a fenestrated hemi-Fontan procedure achieved successful Kawashima circulations. abstract_id: PUBMED:36896981 Fontan completion of a 10-year-old Kawashima patient with extensive arteriovenous malformations: consideration for a lobectomy. A significant contributing factor to the progression of late cyanosis in individuals undergoing Kawashima operation is pulmonary arteriovenous malformations. Following the Fontan procedure, arteriovenous malformations may regress. However, in cases with extensive malformations causing severe cyanosis, lobectomy can also be a possible treatment approach. Thereby, we present our two-step treatment strategy in a late Fontan completion complicated by arteriovenous malformations in a Kawashima patient. Answer: Yes, the Kawashima procedure can be performed in younger patients. A retrospective review documented experience with the Kawashima procedure (KP) performed at a relatively younger age than usually reported. The age and weight at KP were 8.4 months (ranging from 5.1 to 15.1 months) and 6.8 kg (ranging from 4.6 to 11.0 kg), respectively. The results indicated that KP could be safely performed at an earlier age than previously reported, with a series survival of 87% and the incidence of pulmonary arteriovenous malformations (PAVMs) after KP appearing to be similar to other reports where KP was performed at a later age (PUBMED:19632416).
Instruction: Unwitnessed out-of-hospital cardiac arrest: is resuscitation worthwhile? Abstracts: abstract_id: PUBMED:25319908 Cardiopulmonary resuscitation above 75 years: is it worthwhile?. N/A abstract_id: PUBMED:9209229 Unwitnessed out-of-hospital cardiac arrest: is resuscitation worthwhile? Study Objective: To determine the epidemiology of unwitnessed out-of-hospital cardiac arrest and the factors associated with survival after resuscitation using the Utstein style data collection. Methods: We conducted a prospective cohort study in a 525,000-population city served by a single EMS system comprising a tiered response with physicians in the field. We studied consecutive unwitnessed out-of-hospital cardiac arrests that occurred between January 1, 1994, and December 31, 1995. We determined survival from cardiac arrest to discharge from hospital and the factors associated with survival. Results: Of the 809 patients for whom resuscitation was considered, 205 (25.3%) had sustained unwitnessed arrests. Cardiac origin of arrest was verified in 52% of cases. The most common noncardiac causes of arrest were trauma, intoxication, near-drowning, and hanging. In 150 patients (73.2%) the presenting rhythm was asystole, in 28 (13.6%) it was pulseless electrical activity, and in 27 (13.2%) it was ventricular fibrillation. Resuscitation was attempted in 162 cases, 59 (36.4%) of whom demonstrated return of spontaneous circulation; 45 (27.8%) were hospitalized alive, and 8 (4.9%) were discharged. The survivors represented 6.7% of all out-of-hospital cardiac arrest survivors during the study period. Survival was most likely if patients presented with pulseless electrical activity; none of the patients with asystole of cardiac origin survived. Sex (P = .032), age (inverse relationship, P = .0004), scene of collapse (P = .042), and interval from call receipt to arrival of first responders (P = .004) were associated with survival. In a logistic-regression model, near-drowning remained an independent factor of survival (odds ratio, 15.5; 95% confidence interval, 1.2 to 200). A routine priority dispatching protocol differentiated cardiac arrest patients with survival potential from those who already had irreversible signs of death. Conclusion: This survey shows that survival after unwitnessed out-of-hospital cardiac arrest is unlikely with an initial response of basic life support alone. Withdrawal of resuscitation should be considered if an adult victim of unwitnessed cardiac arrest is found in asystole and the arrest is of obvious cardiac origin. abstract_id: PUBMED:2302278 Is emergency department resuscitation of out-of-hospital cardiac arrest victims who arrive pulseless worthwhile? It is still a common practice to continue unsuccessful field resuscitations in the emergency department (ED) even after prolonged estimated down times. The authors studied patients who arrested in the field and did not regain a pulse before their arrival in the ED to determine if any ever leave the hospital neurologically intact. All cardiac arrests in the urban St Louis area that were brought to our facility over a 2 1/2-year period by advanced life support units (excluding all patients with hypothermia, drug overdose, near drowning, and traumatic cardiac arrest) were reviewed. Of 243 such patients 32 (13%) arrived with a pulse. Twenty-three of these patients were admitted and 10 discharged alive, 7 were neurologically intact. Out of 211 patients who arrived without a pulse, 24 (11%) developed a pulse with further resuscitative efforts in the ED. Eighteen of these patients were admitted but only one was discharged neurologically intact. The only survivor in the group without a pulse arrested while en route to the ED. It is concluded that cardiac arrest victims who arrive in the ED without a pulse on arrival or en route have almost no chance of functional recovery. abstract_id: PUBMED:26340128 Resuscitation and post resuscitation care of the very old after out-of-hospital cardiac arrest is worthwhile. Background: Out-of-hospital cardiac arrest (OHCA) is associated with a poor prognosis. As comorbidity and frailty increase with age; ethical dilemmas may arise when OHCA occur in the very old. Objectives: We aimed to investigate mortality, neurological outcome and post resuscitation care in octogenarians (≥80) to assess whether resuscitation and post resuscitation care should be avoided. Methods: During 2007-2011 consecutive OHCA-patients were attended by the physician-based Emergency Medical Services-system in Copenhagen. Pre-hospital data based on Utstein-criteria, and data on post resuscitation care were collected. Primary outcome was successful resuscitation; secondary endpoints were 30-day mortality and neurological outcome (Cerebral Performance Category (CPC)). Results: 2509 OHCA-patients with attempted resuscitation were recorded, 22% (n=558) were octogenarians/nonagenarians. 166 (30% of all octogenarians with resuscitation attempted) octogenarians were successfully resuscitated compared to 830 (43% with resuscitation attempted) patients &lt;80 years. 30-day mortality in octogenarians was significantly higher after adjustment for prognostic factors (HR=1.61 CI: 1.22-2.13, p&lt;0.001). Octogenarians received fewer coronary angiographies (CAG) (14 vs. 37%, p&lt;0.001), and had lower odds of receiving CAG by multivariate logistic regression (OR: 0.19, CI: 0.08-0.44, p&lt;0.001). A favorable neurological outcome (CPC 1/2) in survivors to discharge was found in 70% (n=26) of octogenarians compared to 86% (n=317, p=0.03) in the younger patients. Conclusion: OHCA in octogenarians was associated with a significantly higher mortality rate after adjustment for prognostic factors. However, the majority of octogenarian survivors were discharged with a favorable neurological outcome. Withholding resuscitation and post resuscitation care in octogenarians does not seem justified. abstract_id: PUBMED:11940786 Should time from cardiac arrest until call to emergency medical services (EMS) be collected in EMS research? Because of the difficulty and cost of collecting the time from collapse to placing the 911 call in instances of out-of-hospital cardiac arrest and because of the potential noise and bias that might be inherent in such data, a simulation study was conducted to quantitate the impact that such data might have on estimates of the relationship between time from collapse to defibrillation and probability of survival. In the absence of bias, an underestimate of the slope on the order of 20% to 30% might be expected. However, in the presence of bias, the impact on the slope estimate is unpredictable. The most likely bias would tend to cause an overestimate of the slope. It is suggested that unless the time from collapse to placing the 911 call can be obtained accurately and without bias, it is probably not worthwhile to do so. abstract_id: PUBMED:36890974 Brief Report: Diabetic Keto-Acidosis (DKA) Induced Hypothermia may be Neuroprotective in Cardiac Arrest. Despite the decreased survival associated with diabetes with out-of-hospital cardiac arrest and the overall low survival to hospital discharge, we would like to present two cases of OHCA in diabetics who despite prolonged resuscitation efforts had complete neurological recovery likely due to concomitant hypothermia. There is a steady decreasing rate of ROSC with longer durations of CPR so that outcomes are best when &lt;20 minutes compared to prolonged resuscitation efforts (&gt;30-40 minutes). It has been previously recognized that hypothermia prior to cardiac arrest can be neurologically protective even with up to 9 hours of cardiopulmonary resuscitation. Hypothermia has been associated with DKA and although often indicates sepsis with mortality rates of 30-60%, it may indeed be protective if occurring prior to cardiac arrest. The critical factor for neuroprotection may be a slow drop to a temperature &lt;250C prior to OHCA as is achieved in deep hypothermic circulatory arrest for operative procedures of the aortic arch and great vessels. It may be worthwhile continuing aggressive resuscitation efforts even for prolonged periods before attaining ROSC for OHCA in patients found hypothermic from metabolic illnesses as compared to only from environmental exposures (avalanche victims, cold water submersions, etc.) as has been traditionally reported in the medical literature. abstract_id: PUBMED:30059713 Traumatic cardiac arrest is associated with lower survival rate vs. medical cardiac arrest - Results from the French national registry. Introduction: The survival from traumatic vs. medical out-of-hospital cardiac arrest (OHCA) are not yet well described. The objective of this study was to compare survival to hospital discharge and 30-day survival of non-matched and matched traumatic and medical OHCA cohorts. Material & Methods: National case-control, multicentre study based on the French national cardiac arrest registry. Following descriptive analysis, we compared survival rates of traumatic and medical cardiac arrest patients after propensity score matching. Results: Compared with medical OHCA (n = 40,878) trauma victims (n = 3209) were younger, more likely to be male and away from home at the time and less likely to be resuscitated. At hospital admission and at 30 days their survival odds were lower (OR: respectively 0.456 [0.353;0.558] and 0.240 [0.186;0.329]). After adjustment the survival odds for traumatic OHCA were 2.4 times lower at admission (OR: 0.416 [0.359;0.482]) and 6 times lower at day 30 (OR: 0.168 [0.117;0.241]). Conclusions: The survival rates for traumatic OHCA were lower than for medical OHCA, with wider difference in matched vs. non-matched cohorts. Although the probability of survival is lower for trauma victims, the efforts are not futile and pre-hospital resuscitation efforts seem worthwhile. abstract_id: PUBMED:1317084 Should semi-automatic defibrillators be used by emergency medical technicians in Belgium? The Belgian Cerebral Resuscitation Study Group. Early external defibrillation is the single most effective intervention in patients with out-of-hospital cardiac arrest. Literature data indicate that instructing emergency medical technicians (EMTs) to use defibrillators is beneficial, provided the local emergency medical system is well organized. We tried to estimate the potential benefit of early defibrillation in some centres in Belgium by retrospectively analyzing the data from the Belgian Cardio-Pulmonary-Cerebral Resuscitation Registry collected between 1983 and 1987 in Belgian centres with a Mobile Intensive Care Unit (MICU). The data show that 2310 out of 3371 patients (69%) were first attended by the EMTs; on subsequent arrival of the MICU-teams, 584 of these 2310 patients i.e. 17% of the whole study population, presented with ventricular fibrillation. Analysis of estimated time factors in these 2310 patients revealed that the median time interval between collapse and start of resuscitation by EMTs was 8 min; the median time interval between collapse and start of MICU-resuscitation attempts was 16 min. The duration of EMT-resuscitation before MICU-arrival was probably more than 5 min and 10 min in 58% and 23% of the cases respectively. It is concluded that EMTs can be expected to reach a substantial number of ventricular fibrillation victims within a few minutes after the collapse and many minutes before arrival of the MICU. Therefore, training of EMTs in the use of semi-automatic defibrillators seems worthwhile in MICU-served regions in Belgium. abstract_id: PUBMED:25168100 Predictors of favorable and poor prognosis in unwitnessed out-of-hospital cardiac arrest with a non-shockable initial rhythm. Background: Unwitnessed OHCA patients with non-shockable initial rhythms account for nearly half of all OHCA patients, and their prognosis is extremely poor. To date, no studies have focused on these patients. This study aimed to investigating the predictors of favorable and poor prognosis in these patients. Methods: We conducted a nationwide, population-based, observational study of data from the All Japan Utstein Registry, which included 121,081 adult OHCA patients subjected to resuscitation attempts from January 1, 2010 to December 31, 2010. The primary endpoint was favorable neurological outcome one month after OHCA. Results: Of the eligible 120,721 patients, 68,024 (56.3%) were unwitnessed OHCA patients with non-shockable initial rhythms. A younger age (18-64 years: as a reference; 65-84 years: OR 0.68, 95% CI 0.54-0.87, p=0.0019; ≥85 years: OR 0.46, 95% CI 0.33-0.63, p&lt;0.0001), conversion to shockable rhythm (OR 2.14, 95% CI 1.43-3.13, p=0.0003), and pre-hospital ROSC (OR 94.85, 95% CI 75.71-119.35, p&lt;0.0001) were independently associated with a favorable neurological outcome. Favorable neurological outcome rate was 28.8% in unwitnessed OHCA patients with non-shockable initial rhythms with all three favorable predictors, and 0.18% in patients without any of the three predictors (OR 230.34, 95% CI 127.37-402.96, P&lt;0.0001). Conclusions: It may be worthwhile to provide maximum lifesaving medical resources for patients with all of the favorable predictors (&lt;65 years, conversion to shockable rhythm, and pre-hospital ROSC); however, continued resuscitation efforts for patients without these predictors should likely be restrained. abstract_id: PUBMED:9243146 Out-of-hospital cardiac arrests of non-cardiac origin. Epidemiology and outcome. Aims: The aim of the study was to determine the epidemiology of out-of-hospital cardiac arrests of non-cardiac origin and survival following resuscitation, using the Utstein method of data collection. Methods And Results: The study was of prospective cohort design and was conducted in a middle-sized urban city (population 525000) served by a single emergency medical services system. Consecutive out-of-hospital cardiac arrests of non-cardiac origin occurring between 1 January 1994 and 31 December 1995 were included. Survival from cardiac arrest to hospital discharge, and factors associated with survival were considered as main outcome measures. Of the 809 patients, 276 (34.1%) had a cardiac arrest of non-cardiac origin. The mean (SD) age of the patients was 49.8 (20.9) years. Resuscitation was attempted in 204 cases, 82 of whom (40.2%) were hospitalized alive and 23 (11.3%) were discharged. Thirteen (56.5%) of the survivors were discharged neurologically intact or with mild disability (overall performance category I or II). The survivors, during the study period, who suffered an out-of-hospital cardiac arrest of non-cardiac origin comprised 19.2% of all out-of-hospital cardiac arrest survivors. Trauma (62), non-traumatic bleeding (36), intoxication (31), near drowning (22) and pulmonary embolism (18) were the most common aetiologies, comprising 61.2% of cases. The non-cardiac aetiology was suspected pre-hospital in 176 (63.8%) cases; in the remaining cases, the aetiology was revealed only after in-hospital investigations or autopsy. In a logistic regression model, time interval to first responding unit, collapse outside the home, and aetiologies of near-drowning, airway obstruction, intoxication and convulsions were associated with survival. Conclusions: These results indicate that sudden out-of-hospital cardiac arrest more often has a non-cardiac cause than previously believed. Although survival is not as likely as from cardiac arrest of cardiac origin, since non-cardiac-cause survivors comprise one fifth of all out-of-hospital cardiac arrest survivors, resuscitation efforts are worthwhile. Answer: The question of whether resuscitation is worthwhile in cases of unwitnessed out-of-hospital cardiac arrest (OHCA) is complex and depends on various factors, including the initial rhythm, the patient's age, and the cause of the arrest. A study found that survival after unwitnessed OHCA is unlikely with an initial response of basic life support alone, and withdrawal of resuscitation should be considered if an adult victim of unwitnessed cardiac arrest is found in asystole and the arrest is of obvious cardiac origin (PUBMED:9209229). Another study concluded that cardiac arrest victims who arrive in the emergency department (ED) without a pulse have almost no chance of functional recovery (PUBMED:2302278). However, other research suggests that resuscitation and post-resuscitation care of the very old after OHCA can be worthwhile, as the majority of octogenarian survivors were discharged with a favorable neurological outcome (PUBMED:26340128). Additionally, certain cases, such as diabetic keto-acidosis (DKA) induced hypothermia, may be neuroprotective in cardiac arrest, suggesting that aggressive resuscitation efforts could be worthwhile even for prolonged periods before attaining return of spontaneous circulation (ROSC) (PUBMED:36890974). In the context of traumatic cardiac arrest, survival rates are lower compared to medical OHCA, but pre-hospital resuscitation efforts are not futile and seem worthwhile (PUBMED:30059713). Furthermore, training emergency medical technicians (EMTs) in the use of semi-automatic defibrillators is considered beneficial and worthwhile in regions served by Mobile Intensive Care Units (MICUs) (PUBMED:1317084). For unwitnessed OHCA patients with non-shockable initial rhythms, predictors of favorable prognosis include a younger age, conversion to shockable rhythm, and pre-hospital ROSC. It may be worthwhile to provide maximum lifesaving medical resources for patients with these favorable predictors, while continued resuscitation efforts for patients without these predictors should likely be restrained (PUBMED:25168100). Lastly, despite the lower likelihood of survival from non-cardiac causes of OHCA, since non-cardiac-cause survivors comprise a significant portion of all OHCA survivors, resuscitation efforts are considered worthwhile (PUBMED:9243146). In summary, while the prognosis for unwitnessed OHCA can be poor, especially with non In summary, while the prognosis for unwitnessed OHCA can be poor, especially with non-shockable rhythms and asystole of cardiac origin, there are scenarios where resuscitation is considered worthwhile. These include cases where patients are very old but still have a chance of favorable neurological outcomes, instances where hypothermia may provide neuroprotection, and situations where patients meet certain favorable prognostic criteria. Additionally, efforts in traumatic cardiac arrests and the use of semi-automatic defibrillators by EMTs are supported. The decision to initiate or continue resuscitation efforts should be made on a case-by-case basis, considering the specific circumstances and potential for a good outcome.
Instruction: A comparison of EMIT and FPIA methods for the detection of cyclosporin A blood levels: does impaired liver function make a difference? Abstracts: abstract_id: PUBMED:9272414 A comparison of EMIT and FPIA methods for the detection of cyclosporin A blood levels: does impaired liver function make a difference? Objective: Apparent cyclosporin A (CSA) blood levels, as determined by fluorescence polarization immunoassay (FPIA) and enzyme-multiplied immunoassay technique (EMIT), were compared in CSA-treated patients with various degrees of liver dysfunction. Methods: FPIA and EMIT were performed in parallel according to test manufacturer instructions in blood from kidney (n = 82), liver (n = 96) and heart transplant (n = 20) patients. Results: The precision of both techniques was greatest in patients with the highest blood levels, and at each blood level greater for the FPIA than for the EMIT. Apparent CSA blood levels, as determined by EMIT, were typically approximately 70% of those determined by FPIA, indicating greater cross-reaction of the antibody in the FPIA with CSA metabolites. However, the ratio of values determined with EMIT and FPIA was very similar in kidney, liver and heart transplant patients. Among liver transplant patients it was also very similar in those without major alterations of hepatic function and in those with impaired excretory (increased bilirubin and gamma GT) or synthetic (i.e., reduced thromboplastin time) function. Extended storage of blood samples for up to 10 days did not affect apparent CSA blood level estimates by EMIT in a clinically relevant manner. Conclusions: We conclude that the greater specificity of the antibody in the EMIT for the CSA parent compound does not translate into a clinically relevant advantage for CSA monitoring. abstract_id: PUBMED:11128239 Considerable lack of agreement between S-FPIA and EMIT cyclosporine assay in therapeutic drug monitoring, of heart transplant recipients. The authors performed a comparative analysis of 60 whole blood samples containing cyclosporine (CsA) from heart transplant (HTx) recipients (n = 60) by the two "specific" monoclonal immunoassays, enzyme-multiplied immunoassay technique (EMIT) and fluorescence polarization immunoassay (S-FPIA), using the Altman-Bland approach based on graphical techniques and simple calculations. The CsA blood concentrations measured by S-FPIA [mean (SD): 268.1 (108.8) ng/mL] showed a statistically significant difference (P &lt; 0.001) from the corresponding concentrations measured by EMIT [219.6 (118.7) ng/mL]. The CsA concentrations were 27% (median) higher when determined by monoclonal S-FPIA than by EMIT. The comparison between EMIT and S-FPIA showed a good correlation (S-FPIA conc. (ng/mL) = EMIT conc. (ng/mL) x 0.88 + 76.1, r = 0.96, P &lt; 0.001). However, a high correlation does not mean that the two methods agree, and their use as interchangeable might be misleading. The authors summarized the degree of agreement by calculating the bias estimated by the mean difference (d) and the standard deviation of the difference (SD). For CsA concentration data, the mean difference (S-FPIA minus EMIT) is +49.9 ng/mL and SD is 31.2 ng/mL. Altman-Bland analysis indicates considerable lack of agreement between EMIT and S-FPIA, with discrepancies of more than 100 ng/mL. The present study's data clearly show that there is a considerable and clinically unacceptable lack of agreement between the S-FPIA and the EMIT techniques in HTx recipients for the whole range of concentrations evaluated (25-500 ng/mL), and this is caused by the variation in the overestimation of the CsA parent compound. Even though a similar CsA reference range was reported during maintenance therapy for both methods (150-250 ng/mL), which might encourage their interchangeability in the clinical setting, this approach should be avoided. Laboratory reports should always state both the concentration of CsA and the analytical method. abstract_id: PUBMED:2053089 Cyclosporine determinations in heart and liver transplant recipients: comparison of HPLC and FPIA methods on whole blood and plasma. One hundred and eighty five whole blood samples were analysed for cyclosporine levels by fluorescence polarization immunoassay (FPIA) and high performance liquid chromatography (HPLC). 123 came from 4 heart transplant recipients (mean age +/- SD: 47.50 +/- 20.56 years) and 62 from 4 liver transplant recipients (44.50 +/- 16.52 years). FPIA was done on plasma and whole blood in heart transplant recipients, on plasma in the liver recipients. HPLC was always done on whole blood. The results show a good correlation between FPIA on plasma (y) and HPLC (x) in liver recipients (n = 62, r = 0.935, y = 1.23x + 70 ng/ml), slightly worse between FPIA on plasma (y) and HPLC (x) in heart recipients (n = 64, r = 0.610, y = 0.78x + 189 ng/ml) and mediocre for FPIA on whole blood (y) and HPLC (x) in heart recipients (n = 123, r = 0.566, y = 1.35x + 594 ng/ml). abstract_id: PUBMED:9850184 Conversion of cardiac and liver transplant recipients from HPLC and FPIA (polyclonal) to an FPIA (monoclonal) technique for measurement of blood cyclosporin A. In an effort to replace HPLC and FPIA (polyclonal) for whole blood determination of Cyclosporin A (CsA), this study examined the application of FPIA (monoclonal) in patients post cardiac and liver transplantation. The assay had a minimum detectable dose of 15 microg/L, an overall recovery of 97% and was linear to 1200 microg/L, and gave inter-assay precision values of &lt; 5% (CV). On comparing FPIA (monoclonal) and HPLC for 59 cardiac transplant recipient blood samples, a correlation of FPIA (monoclonal) = 1.30 (HPLC) + 36.34, r = 0.96 was obtained. With liver transplant samples (n = 348), the correlation was FPIA (monoclonal) = 1.21 (HPLC) + 42.15, r = 0.98. Correlation on 131 cardiac transplant recipients gave FPIA (monoclonal) = 0.31 FPIA (polyclonal) + 43.97, r = 0.68. It is concluded that when converting from HPLC to FPIA (monoclonal) a positive bias of 21%-30% is observed, and in replacing FPIA (polyclonal) with FPIA (monoclonal), a negative bias of 50%-69% is seen with liver and cardiac patients respectively. These data indicate that therapeutic ranges should be re-established or adjustments in CsA dosing would be necessary. abstract_id: PUBMED:11156361 Comparison of five cyclosporin immunoassays with HPLC. High performance liquid chromatography (HPLC) is the reference method for cyclosporin (CyA) measurements but therapeutic monitoring of the drug is frequently made using the more practical immunoassays. Cross-reactivity with CyA metabolites may compromise the specificity of immunoassays, particularly in liver graft recipients where metabolites may accumulate. The aim of this study was to compare with HPLC the performance of two recently introduced CyA immunoassays (the AxSYM fluorescent polarisation immunoassay (FPIA) and non-extraction CEDIA assay). The comparison was extended to the well-established TDx monoclonal FPIA (TDx mono) and the enzyme multiplied (EMIT)-specific assays and to the polyclonal FPIA (TDx poly), in which metabolite cross-reactivity is extensive. Assays were performed on 106 blood samples (taken 6 days to 118 months post-liver transplant) and results were compared by non-parametric regression analysis and difference plots. AxSYM and CEDIA showed both constant and proportional bias against HPLC (unlike EMIT) but the mean difference from HPLC was least for AxSYM (2.7 microg/l vs. 11.7, 9.4 and 54 microg/l for CEDIA, EMIT and TDx mono, respectively. (TDx poly - HPLC) values were proportional to all immunoassay results, with slopes of 0.33, 0.38 and 0.45 for EMIT, AxSYM and CEDIA, respectively. Our data suggest close agreement between AxSYM, CEDIA and EMIT results. abstract_id: PUBMED:1448852 Evaluation of instrumental, nonisotopic immunoassays (fluorescence polarization immunoassay and enzyme-multiplied immunoassay technique) for cyclosporine monitoring in whole blood after kidney and liver transplantation. Two new instrumental methods, an enzyme-multiplied immunoassay technique (EMIT) and a fluorescence polarization immunoassay (FPIA), were evaluated for monitoring of cyclosporine (CyA) in whole blood samples of renal and liver transplant patients. They are considered as being specific to the parent drug and they were compared with a specific radioimmunoassay (RIA) and a nonspecific FPIA. The data reveal that the novel procedures provide slightly overestimated CyA levels compared with specific RIA (6-12% for EMIT, 20-25% for FPIA). For both assays, intrarun and interrun reproducibilities were found to be in the 2-8% range. The ease of performance and the possibility of performing approximately 40 assays/h make the two methodologies very attractive for both routine and emergency analyses. These approaches are viewed to be complementary to the only previously available instrumental method, the nonspecific FPIA, which provides three- to fourfold higher CyA levels than those obtained with specific methods. Specific and nonspecific monitoring of CyA levels allowed variations in proportions of metabolites to total CyA and metabolites to be distinguished. A higher percentage and variability of cross-reacting metabolites were found in whole blood samples after liver transplantation compared with those in blood of kidney transplant recipients. abstract_id: PUBMED:1424128 Measurement of cyclosporine by liquid chromatography and three immunoassays in blood from liver, cardiac, and renal transplant recipients. In an effort to replace HPLC for whole-blood determination of cyclosporine (CsA), we compared HPLC with radioimmunoassay (RIA; INCSTAR, Cyclo-Trac SP assay), fluorescence polarization immunoassay (FPIA; Abbott TDx), and in-house modified enzyme-multiplied immunoassay technique (EMIT; Syva Co.). For blood samples from 200 various transplant recipients, RIA = 1.262 (HPLC) - 8.16, r = 0.983; FPIA = 1.200 (HPLC) + 19.90, r = 0.981; and EMIT = 1.038 (HPLC) + 11.28, r = 0.985. For segregation by transplant type, RIA, FPIA, and EMIT demonstrated positive biases of 27%, 12%, and 3%, respectively, for liver transplant recipients (n = 50) when compared with HPLC. Heart transplant recipients (n = 50) gave positive bias values of 23%, 14%, and 4% for RIA, FPIA, and EMIT, respectively. Adult renal transplant recipients (n = 50) demonstrated positive bias values of 30%, 31%, and 0% for RIA, FPIA, and EMIT, respectively. For pediatric renal transplant recipients (n = 50), positive biases of 40%, 31%, and 9% were obtained for RIA, FPIA, and EMIT, respectively. We conclude that the modified EMIT represents the best replacement for HPLC. abstract_id: PUBMED:10478141 Evaluation of four automated methods for determination of whole blood cyclosporine concentrations. Cyclosporine is a widely used and potent immunosuppressant drug with a narrow therapeutic index. Therefore, cyclosporine concentrations should be monitored closely. Various automated immunologic methods for cyclosporine whole blood determinations are available. Two new methods, fluorescence polarization immunoassay (FPIA) for the AxSYM by Abbott Laboratories, Chicago, IL, and the cloned enzyme donor immunoassay (CEDIA) by Boehringer Mannheim, Mannheim, Germany, have been introduced. In addition, Dade Behring improved its enzyme multiplied immunoassay (EMIT) assay. The present study evaluated all 3 new methods in comparison with high-performance liquid chromatography (HPLC) and the FPIA for the TDx analyzer. We measured whole blood cyclosporine concentrations of 179 samples obtained from 127 patients after kidney, bone marrow, heart-lung, and liver transplantation. All 4 automated immunologic methods can be used for routine measurement of cyclosporine whole blood concentrations. Disadvantages, such as higher cross-reactivity (Abbott TDx, CEDIA) or a limited linearity range (EMIT), are accompanied by advantages, such as a high precision (Abbott TDx) or an easy sample handling procedure (CEDIA). Information presented in this article should help to find the most adequate cyclosporine method for each medical laboratory. abstract_id: PUBMED:9127698 The EMIT Cyclosporine Assay: development of application protocols for the Boehringer Mannheim Hitachi 911 and 917 analyzers. Objective: The purpose of this work was to develop applications for the EMIT Cyclosporine (CsA) Assay on the Hitachi 911 and 917 analyzers. Methods And Results: Instrument settings were optimized to arrive at the following assay characteristics on the Hitachi 917. Limit of sensitivity was 50 micrograms/L. Intra-assay coefficients of variation (CV) were 8.1% (n = 20; mean = 62 micrograms/L) and 4.2% (n = 20; mean = 315 micrograms/L), while interassay CVs were 13.0% (n = mean = 73 micrograms/L) and 5.7% (n = 43; mean = 391 micrograms/L). Recoveries of 95-104% were obtained by spiking aliquots of 3 whole blood patient pools of known CsA concentrations with CsA. Serial dilutions of 3 patient specimens demonstrated linear relationships between expected and actual CsA concentrations (r = 0.99, 0.99, 0.98; regression lines: y = 1.19x -17.1; y = 0.75x + 18.0; y = 1.01x + 3.7). Specimen carryover was not evident. Calibration stability is at least 10 days. Comparable assay characteristics were found for the Hitachi 911. Sequentially-collected trough whole blood specimens from renal (n = 3), liver (n = 3) and heart (n = 4) transplant patients prescribed CsA were collected up to 78 days post-transplant and analyzed by EMIT on the Hitachi 917 and also by fluorescence polarization immunoassay (FPIA) and high performance liquid chromatography (HPLC). The following linear regression equations were produced for the renal [EMIT = 0.801 (TDx) + 4.98, r = 0.91, Sy/x = 32, n = 37; EMIT = 0.877 (HPLC) + 56, r = 0.87, Sy/x = 38, n = 37]; liver [EMIT = 0.808 (TDx) - 27, r = 0.94, Sy/x = 42, n = 37; EMIT = 0.953 (HPLC) + 44, r = 0.89, Sy/x = 57, n = 37] and heart [EMIT = 0.820 (TDx) - 24, r = 0.94, Sy/x = 31, n = 45, EMIT = 0.956 (HPLC) + 54, r = 0.91, Sy/x = 38, n = 45] patient samples. FPIA values average 32% more than EMIT-derived CsA concentrations on the Hitachi 917, which in turn averaged 15% more than HPLC values. In addition, these levels were compared intra-individually. CsA concentrations within all patients were significantly higher (p &lt; 0.05, paired t-test) by FPIA compared to EMIT and by FPIA compared to HPLC. Although CsA concentrations within most patients were significantly higher (p &lt; 0.05) by EMIT compared to HPLC, levels determined in 4 transplant patients (1 renal, 1 liver, 2 heart) were not different. Conclusion: Development of applications for the EMIT CsA Assay on two highly automated, random access instruments, the Hitachi 911 and Hitachi 917, enhances the versatility of the immunoassay for routine therapeutic drug monitoring of this immunosuppressant in the clinical setting. abstract_id: PUBMED:2658211 Comparison of whole-blood cyclosporine levels measured by radioimmunoassay and fluorescence polarization in patients post orthotopic liver transplant. This study compared the analysis of whole blood cyclosporine concentrations measured by fluorescence polarization immunoassay (FPIA) and radioimmunoassay (RIA) polyclonal and monoclonal procedures. Fifteen orthotopic liver transplant patients with a mean age of 39 +/- 11.06 years were included in the study. One hundred thirteen levels were analyzed using FPIA, RIA polyclonal, and RIA monoclonal procedures. There was no difference statistically in comparing FPIA and RIA polyclonal results (p greater than 0.05). There was a statistical difference between FPIA and RIA monoclonal results p = 0.0001). With use of least squares simple linear regression analysis, FPIA results showed good correlation with RIA polyclonal results (R2 = 0.87). Poor correlation was shown between FPIA and RIA monoclonal results (R2 = 0.51). In this study population, FPIA produced results 2.5% higher than the RIA polyclonal procedure. Answer: The comparison of EMIT (enzyme-multiplied immunoassay technique) and FPIA (fluorescence polarization immunoassay) methods for the detection of cyclosporin A (CSA) blood levels in patients with varying degrees of liver function has been investigated. The precision of both techniques was found to be greater for FPIA than for EMIT, with the highest precision observed in patients with the highest blood levels. Apparent CSA blood levels determined by EMIT were typically around 70% of those determined by FPIA, suggesting a greater cross-reaction of the antibody in FPIA with CSA metabolites. However, the ratio of values determined with EMIT and FPIA was very similar across kidney, liver, and heart transplant patients, indicating that the degree of liver dysfunction (including both excretory and synthetic function impairment) did not significantly affect the ratio of EMIT to FPIA values (PUBMED:9272414). Furthermore, extended storage of blood samples for up to 10 days did not significantly affect the apparent CSA blood level estimates by EMIT, suggesting that EMIT is not sensitive to sample storage time in a clinically relevant manner. The study concluded that the greater specificity of the antibody in the EMIT for the CSA parent compound does not translate into a clinically relevant advantage for CSA monitoring, implying that impaired liver function does not make a significant difference in the comparison of EMIT and FPIA methods for detecting CSA blood levels (PUBMED:9272414).
Instruction: Does online dating lead to higher sexual risk behaviour? Abstracts: abstract_id: PUBMED:27295949 Does online dating lead to higher sexual risk behaviour? A cross-sectional study among MSM in Amsterdam, the Netherlands. Background: Men having sex with men (MSM) frequently use the Internet to find sex partners. We examined the association between unprotected anal intercourse (UAI) with partners dated online and with partners dated offline (met elsewhere), and examined whether differences can be explained by self-perceived HIV status of the index and knowledge of partnership characteristics. Methods: MSM were recruited at the Sexually Transmitted Infections Clinic in Amsterdam, in 2008-2009. Participants completed a questionnaire concerning sexual behaviour. Only men reporting both online and offline casual sex partners were included. We assessed the association between online/offline partner dating and UAI, using random-effects logistic regression analysis. Results: Five hundred seventy-seven men (351 HIV-negative, 153 HIV-positive, and 73 HIV-unaware) reported UAI in 26 % of 878 online, and 23 % of 903 offline casual partnerships. The crude OR of online dating for UAI was 1.36 (95 % CI 1.03-1.81). HIV-positive men were more likely to report UAI than HIV-negative men (49 % vs. 28 % of partnerships). Adjusted for demographic characteristics, online dating had no significant effect on UAI among HIV-negative and HIV status-unaware men, but HIV-positive men were more likely to have UAI with online partners (aOR = 1.65 [95 % CI 1.05-2.57]). After correction for partner and partnership characteristics the effect of online/offline dating on UAI among HIV-positive MSM was reduced and no longer significant. Conclusions: Online dating was not significantly associated with UAI among HIV-negative MSM. HIV-positive MSM were more likely to practise UAI with partners dated online; however, after correction for partner and partnership characteristics, online partnership acquisition was not associated with a significantly increased risk of UAI. abstract_id: PUBMED:36776752 Sexual Experiences and Attachment Styles in Online and Offline Dating Contexts. Nearly 50% of adults under age 29 report using some form of online dating to find sexual partners or romantic relationships. Limited evidence suggests online and offline dating behaviors and experiences can vary. We aimed to expand understanding these differences by examining how attachment styles are associated with online and offline search for partners and emotional responses to sexual experiences. In a sample of single adults (N = 247; Mage= 27.34), we examine how attachment styles relate to individual differences in how people search for partners and respond to sexual experiences with partners met online and offline. Individuals high in attachment anxiety reported higher use of online dating apps and more negative experiences (e.g., lower satisfaction, more guilt) following their most recent sexual encounters with partners met online and offline, and reported feeling more used after engaging in sexual activity specifically with partners met offline. Individuals high in avoidance also reported more negative experiences following their most recent sexual encounters, but only with partners met offline. This study moves our understanding by highlighting nuances in how attachment insecurity is associated with differences in online dating app use and emotions following sexual experiences when meeting partners online and offline. abstract_id: PUBMED:37726556 Young Adult Women and Sexual Awareness in the Digital Age: Examining Pathways Linking Online Dating Debut and Mindfulness with Sexual and Mental Health. Sexual awareness is an understudied phenomenon, despite its known effects on mental health. Little is known about the predictors of sexual awareness, including how early sexual debut and early engagement with online dating and hookup apps might impact the development of sexual mindfulness. Given the conceptual overlap between mindfulness and sexual awareness, this study tested a model that hypothesized that general mindfulness and early sexual and online dating debuts may be associated with mental health outcomes and sexual behavior through pathways involving sexual awareness (assertiveness, consciousness, appeal, and monitoring). A sample of 2,379 heterosexually active young adult women completed an online survey. Path models indicated that both early sexual debut (prior to age 15) and early online dating debut (prior to age 18) had significant, positive direct associations with anxiety and depression scores. Early online dating was also associated with condomless sex with casual male partners. However, both early sexual debut and early online dating debut were indirectly linked to greater sexual risk through greater appeal, and to lower sexual risk through sexual assertiveness. Additionally, greater monitoring contributed to more depression for those with an earlier sexual debut. These findings point to potential risks associated with early online dating. They also highlight benefits of studying sexual awareness as a multi-dimensional construct, especially as it helps to clarify divergent findings in the existing literature on the long-term consequences associated with early sexual debut. While some domains were associated with risk (monitoring and appeal), others were indicative of resilience (assertiveness). abstract_id: PUBMED:35227116 Predictors of Revictimization in Online Dating. Introduction: While a significant association between childhood maltreatment and sexual victimization in adulthood has been established in previous research, it is unknown whether this also applies to the context of online dating. Therefore, we aimed to investigate whether revictimization is common in online users and which mechanisms mediate this risk. Method: The participants were 413 heterosexual women aged between 18 and 35 who used mobile dating applications in the year before the assessment. The participants reported information on using mobile dating applications, motives for engaging in casual sex, protective dating strategies, and general motives for online dating. Results: Childhood maltreatment severity was positively related to both cyber and in-person sexual victimization severity. Motives related to regulating negative affect and self-esteem mediated the relationship between childhood maltreatment severity and in-person sexual victimization severity in adulthood. Furthermore, those motives moderated the association between cyber and in-person sexual victimization. The effect of cyber victimization on in-person sexual victimization was stronger at higher levels of affect/self-esteem regulatory sex motives compared to lower levels. The affect/self-esteem regulatory sex motives were not related to protective dating strategies. Discussion: The results of the study imply that a history of childhood maltreatment is a risk factor for sexual victimization in adulthood among young heterosexual women who use online dating. One of the factors linking these variables in this population might be affect/self-esteem regulatory sex motives. Future studies should aim at replicating these associations prospectively. abstract_id: PUBMED:31099335 Google for Sexual Relationships: Mixed-Methods Study on Digital Flirting and Online Dating Among Adolescent Youth and Young Adults. Background: According to a 2015 report from the Pew Research Center, nearly 24% of teens go online almost constantly and 92% of teens are accessing the internet daily; consequently, a large part of adolescent romantic exploration has moved online, where young people are turning to the Web for romantic relationship-building and sexual experience. This digital change in romantic behaviors among youth has implications for public health and sexual health programs, but little is known about the ways in which young people use online spaces for sexual exploration. An examination of youth sexual health and relationships online and the implications for adolescent health programs has yet to be fully explored. Objective: Although studies have documented increasing rates of sexually transmitted infections and HIV among young people, many programs continue to neglect online spaces as avenues for understanding sexual exploration. Little is known about the online sexual health practices of young people, including digital flirting and online dating. This study explores the current behaviors and opinions of youth throughout online sexual exploration, relationship-building, and online dating, further providing insights into youth behavior for intervention opportunities. Methods: From January through December 2016, an exploratory study titled TECHsex used a mixed-methods approach to document information-seeking behaviors and sexual health building behaviors of youth online in the United States. Data from a national quantitative survey of 1500 youth and 12 qualitative focus groups (66 youth) were triangulated to understand the experiences and desires of young people as they navigate their sexual relationships through social media, online chatting, and online dating. Results: Young people are using the internet to begin sexual relationships with others, including dating, online flirting, and hooking up. Despite the fact that dating sites have explicit rules against minor use, under 18 youth are using these products regardless in order to make friends and begin romantic relationships, albeit at a lower rate than their older peers (19.0% [64/336] vs 37.8% [440/1163], respectively). Nearly 70% of youth who have used online dating sites met up with someone in person (44.78% [30/67] under 18 vs 74.0% [324/438] over 18). Focus group respondents provided further context into online sexual exploration; many learned of sex through pornography, online dating profiles, or through flirting on social media. Social media played an important role in vetting potential partners and beginning romantic relationships. Youth also reported using online dating and flirting despite fears of violence or catfishing, in which online profiles are used to deceive others. Conclusions: Youth are turning to online spaces to build sexual relationships, particularly in areas where access to peers is limited. Although online dating site use is somewhat high, more youth turn to social media for online dating. Sexual relationship-building included online flirting and online dating websites and/or apps. These findings have implications for future sexual health programs interested in improving the sexual health outcomes of young people. Researchers may be neglecting to include social media as potential sources of youth hookup culture and dating. We implore researchers and organizations to consider the relationships young people have with technology in order to more strategically use these platforms to create successful and youth-centered programs to improve sexual health outcomes. abstract_id: PUBMED:35356329 Types of Male Adolescent Violence Against Women in Three Contexts: Dating Violence Offline, Dating Violence Online, and Sexual Harassment Online Outside a Relationship. There has been little investigation of male adolescent violence against women as acknowledged by boys themselves, and even less on such violence in different contexts with comparative studies of behavior between those who perpetrate this violence and the population at large. This study used cluster analysis to establish a male adolescent typology based on boys' self-reporting of violence against women in three contexts. The participants were 3,132 Spanish teenage boys aged 14-18 with experience of relationships with girls. Three discrete, identifiable types were obtained: the first group (69.8%), of non-violent boys; the second group (26%), more involved in sexual harassment online outside a relationship but with a low incidence of dating violence offline; the third group (4.2%), with abuse in the three contexts but less involved in sexual harassment online than the second group. The logistic regression analysis showed that justification of male dominance and violence was the main risk condition for inclusion in the second and third groups, followed by low self-esteem (for the third group) and risky sexual behaviors online (for the second and third groups). The findings based on these results are important for preventing male adolescent dating violence against women in the three male types detected. abstract_id: PUBMED:32442128 Inferring Sexually Transmitted Infection Risk From Attractiveness in Online Dating Among Adolescents and Young Adults: Exploratory Study. Background: Sexually transmitted infection (STI) rates are on the rise among adolescents and young adults in the United States. With the popularity of online dating, adolescents and young adults must increasingly rely on limited cues to make initial judgments about potential sexual partners, including judgments about STI risk. Objective: This study aimed to assess whether in the context of online dating, an attractiveness heuristic would be used for STI risk assessment. We hypothesized that consistent with research on halo effects, decision makers would judge more attractive people to be less likely to have STIs. Methods: In a survey experiment, we asked participants to determine which individual in each of 20 sets of paired photographs was enrolled in a personals website for people with publicly disclosed STIs. Results: Despite financial incentives for accuracy and high levels of self-confidence in their judgments, participants performed no better than chance at identifying individuals with self-reported STIs. Contrary to our hypothesis, however, more attractive people were judged as being more likely to have an STI. This relationship appears to be mediated by inferences regarding the target individual's sexual behavior, with more attractive individuals considered to have more partners. Conclusions: On showing adolescents and young adults photographs offering no diagnostic information about STIs, they appeared to use attractiveness as a cue for sexual risk, which was mediated by the belief that attractive individuals have more sexual opportunities. Health care providers may wish to address this heuristic process among their adolescent patients in discussions about sexual health. abstract_id: PUBMED:33647708 Adolescents who solely engage in online sexual experiences are at higher risk for compulsive sexual behavior. In addition to its expression offline, sexual behavior is frequently expressed online. Some adolescents solely engage in online sexual activities, whereas others in offline and online activities, or have no sexual experience at all. In the current research, we examined whether those who solely engage in online sexual activities are at greater risk for compulsive sexual behavior. In Study 1 (n = 164), we examined differences in compulsive sexual behavior (CSB) and risky sexual action tendencies between study groups. In Study 2 (n = 713), we statistically replicated the classification into different sexual behaviors and examined differences between the clusters in CSB, social tendencies and socio-demographic measures. Results indicated that adolescents who only engaged in online sexual activities had significantly higher percentage of clinical CSB, were less likely boys and more likely to be religious. There were not more prone to risky sexual behavior, however. The current research expands the knowledge about offline or online sexually related activities among adolescence. abstract_id: PUBMED:35162558 Mobile Dating Applications and the Sexual Self: A Cluster Analysis of Users' Characteristics. The online environment has had a profound sociocultural impact, and its implications pose new challenges to modern-day societies. The digital sexual and dating domains have dramatically affected sexual and romantic behavior and norms, and yet, no empirical studies have addressed the role of the sexual self-concept in driving sexual and romantic online behavior. The aim of this study is to identify reliable sexual self-configurations through a cluster analysis approach and determine whether these profiles are informative in relation to app use motives and sexual outcomes. For this purpose, a total of 244 subjects were recruited. Survey questions measured a set of demographic parameters, motives for app use, online sexual activities, attachment orientations, the sexual self-concept and sexual satisfaction. Five clusters were identified, including two with high levels of sexual drive ("Driven" and "Adventurous"), two with low levels of sexual drive ("Unassured" and "Naïve") and one with an intermediate level of sexual drive ("Competent"). The clusters differed on gender, attachment styles, app use patterns and sexual characteristics. These findings provide insight into how the sexual self-concept relates to the interaction between individuals and the online sexual and dating scenes. abstract_id: PUBMED:37998650 Adolescent Use of Dating Applications and the Associations with Online Victimization and Psychological Distress. In this study, we examined the relationships between the use of online dating applications (apps), online victimization, and psychosocial distress among adolescents. This study was conducted in 2020. A sample of 2595 seventh-grade students from 30 Taiwanese middle schools was surveyed. We conducted a self-administered survey. Overall, 15% of the adolescents reported using online dating apps in the past year, while 78% reported having seen dating app advertisements on the internet in the past year. Multivariate analysis results indicated that adolescents' exposure to the marketing of dating apps and poor academic performance were both associated with the use of online dating apps. Adolescents who used dating apps were more likely to experience online privacy victimization, cyberbullying victimization, and online sexual harassment. The use of dating apps by adolescents, online privacy victimization, cyberbullying victimization, and online sexual harassment were associated with higher levels of depression, anxiety, and stress. In conclusion, adolescent use of dating apps is related to online victimization and psychological distress. Answer: The relationship between online dating and sexual risk behavior is complex and varies depending on various factors, including HIV status, attachment styles, and individual experiences. One study found that online dating was not significantly associated with unprotected anal intercourse (UAI) among HIV-negative men who have sex with men (MSM). However, HIV-positive MSM were more likely to report UAI with partners they met online, although this association was not significant after adjusting for partner and partnership characteristics (PUBMED:27295949). Another study highlighted that individuals with higher attachment anxiety reported more negative experiences, such as lower satisfaction and more guilt, following sexual encounters with partners met both online and offline. They also felt more used after engaging in sexual activity specifically with partners met offline. Those with higher avoidance also reported more negative experiences, but only with partners met offline (PUBMED:36776752). Research on young adult women suggested that early engagement with online dating and hookup apps might impact the development of sexual mindfulness, with early online dating debut associated with condomless sex with casual male partners. However, early sexual debut and early online dating debut were indirectly linked to greater sexual risk through greater appeal, and to lower sexual risk through sexual assertiveness (PUBMED:37726556). A study on heterosexual women using mobile dating applications found that a history of childhood maltreatment was a risk factor for sexual victimization in adulthood, with affect/self-esteem regulatory sex motives mediating this relationship (PUBMED:35227116). Adolescents and young adults are increasingly using the internet for sexual relationships, including online flirting and dating. Despite concerns about violence or deception, youth continue to engage in online sexual exploration (PUBMED:31099335). In summary, while some studies suggest that online dating can be associated with higher sexual risk behavior, particularly among certain populations such as HIV-positive MSM or individuals with a history of childhood maltreatment, the evidence is not conclusive. Factors such as attachment styles, early sexual and online dating experiences, and individual motives for engaging in online dating play a significant role in shaping sexual risk behavior associated with online dating.
Instruction: Is stroke a paradoxical embolism in patients with patent foramen ovale? Abstracts: abstract_id: PUBMED:38185629 The association between patent foramen ovale and unexplained syncope in pediatric patients. Background: Patent foramen ovale (PFO) is associated with transient ischemia attack (TIA) or stroke, paradoxical embolism, and migraines. PFO closure decreases the recurrent incidence of cerebral ischemic events and reduces the incidence of syncope in adults. However, whether PFO is associated with syncope in pediatric patients has not been investigated. Methods: 1001 pediatric patients (aged 4 to 17 years, mean age 10.31 ± 2.61 years, 519 males) who complained of unexplained syncope, palpitation, headache, dizziness and chest pain and were hospitalized in the Syncope Ward, The Second Xiangya Hospital, Central South University between January 2013 and April 2022 were recruited. Children with definite etiology of syncope, neurological, cardiogenic, psychological and other system diseases were excluded. PFO was measured by transthoracic echocardiography and right-heart contrast echocardiography was performed to identify the presence of right-to-left shunting. The demographic data and medical records were retrospectively reviewed and analyzed. Results: 276 cases were included in the simple syncope group, 379 cases in the headache/dizziness group, 265 cases in the chest pain group, and 81 cases in the palpitation group. The incidence of PFO between the four groups was insignificant (4.71%, 4.74%, 4.15%, 6.17%, respectively, P = 0.903). Multivariate Logistic regression demonstrated that PFO is not associated with the increased risk of syncope (P = 0.081). Conclusion: PFO may not increase the risk of syncope in pediatric patients. Further study may include a large and multicenter sample to investigate the association between PFO and unexplained syncope. abstract_id: PUBMED:32921262 Risk of Paradoxical Embolism (RoPE)-Estimated Attributable Fraction Correlates With the Benefit of Patent Foramen Ovale Closure: An Analysis of 3 Trials. Background And Purpose: In patients with cryptogenic stroke and patent foramen ovale (PFO), the Risk of Paradoxical Embolism (RoPE) Score has been proposed as a method to estimate a patient-specific "PFO-attributable fraction"-the probability that a documented PFO is causally-related to the stroke, rather than an incidental finding. The objective of this research is to examine the relationship between this RoPE-estimated PFO-attributable fraction and the effect of closure in 3 randomized trials. Methods: We pooled data from the CLOSURE-I (Evaluation of the STARFlex Septal Closure System in Patients With a Stroke and/or Transient Ischemic Attack due to Presumed Paradoxical Embolism through a Patent Foramen Ovale), RESPECT (Randomized Evaluation of Recurrent Stroke Comparing PFO Closure to Established Current Standard of Care Treatment), and PC (Clinical Trial Comparing Percutaneous Closure of Patent Foramen Ovale [PFO] Using the Amplatzer PFO Occluder With Medical Treatment in Patients With Cryptogenic Embolism) trials. We examine the treatment effect of closure in high RoPE score (≥7) versus low RoPE score (&lt;7) patients. We also estimated the relative risk reduction associated with PFO closure across each level of the RoPE score using Cox proportional hazard analysis. We estimated a patient-specific attributable fraction using a PC trial-compatible (9-point) RoPE equation (omitting the neuroradiology variable), as well as a 2-trial analysis using the original (10-point) RoPE equation. We examined the Pearson correlation between the estimated attributable fraction and the relative risk reduction across RoPE strata. Results: In the low RoPE score group (&lt;7, n=912), the rate of recurrent strokes per 100 person-years was 1.37 in the device arm versus 1.68 in the medical arm (hazard ratio, 0.82 [0.42-1.59] P=0.56) compared with 0.30 versus 1.03 (hazard ratio, 0.31 [0.11-0.85] P=0.02) in the high RoPE score group (≥7, n=1221); treatment-by-RoPE score group interaction, P=0.12. The RoPE score estimated attributable fraction anticipated the relative risk reduction across all levels of the RoPE score, in both the 3-trial (r=0.95, P&lt;0.001) and 2-trial (r=0.92, P&lt;0.001) analyses. Conclusions: The RoPE score estimated attributable fraction is highly correlated to the relative risk reduction of device versus medical therapy. This observation suggests the RoPE score identifies patients with cryptogenic stroke who are likely to have a PFO that is pathogenic rather than incidental. abstract_id: PUBMED:28886840 Cryptogenic Stroke and Patent Foramen Ovale Risk Assessment. Stroke is a devastating condition. It is the fifth leading cause of death in the United States, and a leading cause of serious long-term disability. Stroke occurs at any age. Younger patients tend to have strokes of undetermined cause, termed cryptogenic. Herein, the authors describe the classification of stroke cause; the risk of recurrent cryptogenic stroke with patent foramen ovale (PFO); a risk assessment model to stratify incidental versus a pathogenic PFO in patients presenting with stroke; and patient selection for device occluder therapy in the context of the long-term follow-up of the RESPECT randomized clinical trial. abstract_id: PUBMED:18306662 Stroke in cancer patients Various causes of cerebrovascular complications exist in patients with malignant tumor. The pathogenesis of thrombosis in malignancy is complex and it has not been completely clarified as yet. A hypercoagulable state in these patients occurs through several interactive processes between the cancer cells and the patient. Tumor cells can directly and indirectly enhance activation of coagulation cascade. Trousseau's syndrome is a paraneoplastic neurologic syndrome which is caused by remote effects of cancers. Tumor cells produce cytokines which promote coagulation and suppress anticoagulant activities, thereby causing stroke in cancer patients. Brain tends to be a target organ of stroke in the conditions of disseminated intravascular coagulopathy (DIC) in cancer patients. Nonbacterial thrombotic endocarditis (NBTE) is characterized by the presence of relatively acellular aggregates of fibrin and platelets attached to normal heart valves. NBTE can be found in DIC. Paradoxical embolism due to patent foramen ovale in patients with deep vein thrombosis is also one of the cause of cerebral infarction. Stroke can occur under various setting of cancer chemotherapy. Tamoxifen increases the risk of stroke in patients with breast cancer. In paticular, it has been reported that the combination of chemotherapy and tamoxifen for breast cancer patients frequently produces thrombotic episodes. Patients with head and neck cancer treated with local radiotherapy have a high risk of developing significant carotid stenosis, and an increased risk of stroke. Malignancy as a risk factor for stroke is becoming increasingly recognized by physicians caring for these patients. The probability of stroke occurring in an individual patient depends on several factors such as intrinsic medical problems, the type of cancer, the clinical stage, performance status, and the treatment modalities employed. Understanding these factors is important for stroke prevention and provision in this population. abstract_id: PUBMED:3362165 Prevalence of patent foramen ovale in patients with stroke. The cause of ischemic stroke in younger adults is undefined in as many as 35 percent of patients. We studied the prevalence of patent foramen ovale as detected by contrast echocardiography in a population of 60 adults under 55 years old with ischemic stroke and a normal cardiac examination. We compared the results with those in a control group of 100 patients. The prevalence of patent foramen ovale was significantly higher in the patients with stroke (40 percent) than in the control group (10 percent, P less than 0.001). Among the patients with stroke, the prevalence of patent foramen ovale was 21 percent in 19 patients with an identifiable cause of their stroke, 40 percent in 15 patients with no identifiable cause but a risk factor for stroke, such as mitral valve prolapse, migraine, or use of contraceptive agents, and 54 percent in 26 patients with no identifiable cause (P less than 0.10). These results suggest that because of the high prevalence of clinically latent venous thrombosis, paradoxical embolism through a patent foramen ovale may be responsible for stroke more often than is usually suspected. abstract_id: PUBMED:16825590 Embolic lesion pattern in stroke patients with patent foramen ovale compared with patients lacking an embolic source. Background And Purpose: Multiple acute ischemic lesions on diffusion-weighted magnetic resonance imaging (DWI-MRI) are thought to be of embolic origin. However, in several patients with multiple ischemic lesions on DWI-MRI, no embolic source was detected, despite a thorough clinical work-up. Stroke etiology in such cases is then classified as cryptogenic. In other patients, a potential embolic source is limited to a patent foramen ovale (PFO) that may act as an embolic source of unsure relevance. We therefore examined the prevalence of the multiple-lesion pattern in patients with cryptogenic stroke compared with patients with PFO. Methods: We screened 650 stroke patients by DWI-MRI. For the subsequent evaluation, we excluded patients with a cardiac embolic source other than PFO, symptomatic carotid artery disease, and other apparent stroke causes, such as dissection or vasculitis, and patients whose diagnostic work-up was incomplete. For the remaining 106 patients, we found DWI lesions in 73, who were subjected to further evaluation. Results: There were no differences in the occurrence of the multiple-lesion pattern in patients with cryptogenic stroke compared with patients with PFO, either for the entire group or for the subgroup of young stroke patients who were &lt;50 years old. Patients with PFO showed a significantly higher incidence of multiple lesions in the posterior circulation. Conclusions: The multiple-lesion pattern on DWI-MRI is not uncommon, even when extensive testing does not reveal any embolic source. Therefore, it is not possible to discriminate between cryptogenic stroke and stroke from an assumed paradoxical embolism. abstract_id: PUBMED:19048835 Paradoxical embolism in patients with ischemic stroke The present study included 84 patients with ischemic stroke of known and unknown etiology. The risk of paradoxical embolism (PE) was evaluated using contrast transthoracic echocardiography, contrast transcranial Doppler monitoring, and transoesophagal echocardiography. The majority 64%) of the patients with strokes of known etiology were found to be predisposed to PE because they had the patent interatrial foramen (PIF) and pulmonary shunt (51 and 13 respectively). PIF was of small functional and anatomical size. Predisposition to PE in most patients with cryptogenic stroke (94%) was due to the presence of PIF (in 88%) and was often the sole cause of the stroke. These patients frequently had the shunt of a large functional size. abstract_id: PUBMED:12932318 Hypercoagulable States in patients with patent foramen ovale. A patent foramen ovale (PFO) is a risk factor for stroke with the presumed mechanism being paradoxical embolism. Hypercoagulable states have been associated with the propensity to develop venous thromboses and have been separately associated with the risk for ischemic stroke. A PFO could provide the missing link. Having a PFO and a hypercoagulable state may place a person at an increased risk for paradoxical embolism. However, the prevalence of PFO in patients with hypercoagulable states and vice versa is unknown, and the best therapeutic strategy for these patients is unelucidated. This review assesses the literature, provides some insight on these topics, and identifies areas for further investigation. abstract_id: PUBMED:8357111 The prevalence of deep venous thrombosis in patients with suspected paradoxical embolism. Objective: To determine the prevalence of venous thrombosis in patients with suspected paradoxical embolism. Design: Observational study. Patients: Two hundred sixty-four patients with clinically suspected embolic events underwent contrast transesophageal echocardiographic evaluation. Forty-nine patients (24 women, 25 men) had a patent foramen ovale. Forty-one patients had acute stroke, and 8 had acute limb ischemia. Setting: Echocardiography laboratory of a community hospital. Measurements: The presence of a patent foramen ovale was assessed by transesophageal contrast echocardiography. Forty-two patients had venographic evaluation of the lower-extremity venous system. Results: Venous thrombosis was clinically suspected in 6 patients and documented in 24 of the 42 patients with patent foramen ovale who underwent venographic study (57%; 95% Cl, 41% to 72%). Venous thrombosis was confined to calf or popliteal veins in 15 cases. Fifteen of 17 patients who had venographic evaluation within 7 days of the index event had thrombosis compared with 9 of 25 patients who had later evaluations (P = 0.001). More patients with venous thrombosis than without venous thrombosis had a history of previous thromboembolism (13 of 24 compared with 1 of 18 [corrected], respectively; P = 0.001). Conclusion: When a patent foramen ovale is detected in a patient with embolism, occult leg vein thrombosis is frequently present. abstract_id: PUBMED:34718891 Stroke Prevention in Patients with Patent Foramen Ovale. Purpose Of Review: Patent foramen ovale (PFO) is widely prevalent and studies have suggested an association with ischemic stroke. In this review, we aim to highlight current management of patients with ischemic stroke in the setting of PFO and discuss some areas of controversy. Recent Findings: Upon reviewing the literature, we have found that the evidence regarding the management of patients with cryptogenic stroke and PFO has come a long way in the past several years, and many uncertainties remain in clinical practice. The Risk of Paradoxical Embolism (RoPE) score helps to predict the probability of a pathogenic PFO, and recent trial data confirms the benefit of closure in carefully selected patients. The benefit of closure in older patients and in patients with alternate, competing mechanisms is still uncertain, and the long-term risks of closure are not known. Finally, the efficacy of direct oral anticoagulants (DOACs) in this patient population as compared to other medical therapy or mechanical closure has not yet been investigated. Randomized data is needed to help answer these questions. PFO closure is a safe and effective strategy in reducing stroke risk in carefully selected patients with cryptogenic stroke in the setting of a PFO. More studies are needed to test optimal medical treatment strategies and the safety and efficacy of PFO closure in patient subgroups not included in prior PFO closure trials. Answer: Yes, stroke can be a paradoxical embolism in patients with patent foramen ovale (PFO). A PFO is a passageway between the left and right atria of the heart that normally closes after birth, but in some individuals, it remains open. This condition can allow a blood clot from the venous system to bypass the lungs (where it would typically be filtered out) and enter the arterial system, potentially causing a stroke if it travels to the brain. This type of event is known as a paradoxical embolism. The association between PFO and stroke, particularly cryptogenic stroke (stroke of unknown origin), has been the subject of research. Studies have found a higher prevalence of PFO in patients with stroke compared to control groups (PUBMED:3362165). In patients with cryptogenic stroke, a PFO may be the only identifiable potential source of embolism, and these patients often have a shunt of a large functional size, suggesting a predisposition to paradoxical embolism (PUBMED:19048835). Furthermore, the Risk of Paradoxical Embolism (RoPE) score has been developed to estimate the probability that a PFO is causally related to a stroke, rather than being an incidental finding. The RoPE score estimated attributable fraction has been shown to correlate with the relative risk reduction of device versus medical therapy, suggesting that it can identify patients whose PFO is likely pathogenic (PUBMED:32921262). However, it is important to note that not all strokes in patients with PFO are due to paradoxical embolism. Other potential causes of stroke must be considered, and the decision to close a PFO to prevent recurrent stroke should be based on a careful assessment of individual patient risk factors and the presence of other potential sources of embolism (PUBMED:34718891). Additionally, the presence of a hypercoagulable state, which increases the risk of venous thrombosis, may also contribute to the risk of paradoxical embolism in patients with PFO (PUBMED:12932318). In summary, while PFO is associated with an increased risk of paradoxical embolism leading to stroke, not all strokes in patients with PFO are due to this mechanism, and a thorough evaluation is necessary to determine the appropriate management strategy for each patient.
Instruction: Online medical control and initial refusal of care: does it help to talk with the patient? Abstracts: abstract_id: PUBMED:11435188 Online medical control and initial refusal of care: does it help to talk with the patient? Unlabelled: The use of online medical control (OLMC) for initial refusal of care is time-consuming and has medical-legal risks. Objectives: This study attempted to answer the following: Does physician-patient communication (PPC) increase the rate of transport and at what cost in terms of increased physician time? Do altered mental status (AMS) patients agree to transport more frequently and do they require more physician time? Can senior emergency medicine resident (RES) and emergency medicine faculty (FAC) physicians be equally efficient in handling refusal calls? Methods: The study evaluated a retrospective cohort for six months at a single base station, university hospital. Online medical control audiotapes and written records of radio and telephone communications were reviewed. Results: One hundred forty-seven refusal cases were analyzed, PPC was used 70 times, and 37 patients were transported. Twenty-four of 70 (34%) patients with PPC agreed to be transported, while ten of 77 (13%) patients without PPC agreed to be transported (p = 0.002). Sixteen of 30 (53%) patients with AMS were transported compared with 21 of 117 (18%) patients with normal mental status (p = 0.00007). Call times were longer with PPC utilization [406.3 sec PPC vs 230.1 sec no PPC (p &lt; 0.001)] and with AMS patients [411.2 sec AMS vs 289.1 sec no AMS (p = 0.028)]. The RES and FAC physicians did not differ in transport rates [21% RES vs 26% FAC (p = 0.612)] and call times [329.4 sec RES vs 310.4 sec FAC (p = 0.659)]. Conclusions: Although time-consuming, the use of PPC is associated with more patients' agreeing to be transported. Patients with AMS are transported more frequently and they use more physician time. Emergency medicine RES and FAC physicians have equal efficiency and efficacy in handling these calls. abstract_id: PUBMED:36584441 Small talk is a big deal: A discursive analysis of online off-topic doctor-patient interaction in Traditional Chinese Medicine. This study aims to examine the small talk of Traditional Chinese Medicine (TCM) during online medical consultation (OMC). With the textual and audio doctor-patient conversations collected from Chunyu Doctor, an asynchronous and synchronous e-health platform in mainland China, this study systematically analyzed 432 pieces of TCM consultations. Results indicate that TCM doctors actively initiate small talks in online scenarios to acquire holistic information for diagnosis and boost patients' face for rapport management, both of which further contribute to patient-centeredness in Chinese OMC. Importantly, TCM doctors attach great importance to small talk, while patients perceive it with insufficient attention. To some extent, this study contributes to existing knowledge of small talk by examining its informative and interpersonal functions under the online circumstance of TCM in Oriental scenarios. abstract_id: PUBMED:22129809 Refusal of medical treatment in an acute comatose patient Background: The Dutch Law on Medical Services ('Wet Geneeskundige Behandelingsovereenkomst') of 1995 declares that doctors must adhere to an Informed Refusal (a written Refusal of Medical Treatment) if the patient is unable to give informed consent on presentation at hospital. Case Description: A comatose 81-year-old man was presented at the resuscitation area following attempted suicide with insulin. Treatment was suspended after the family presented the doctor with an Informed Refusal that had been signed by the patient. The question was to what extent this statement of intention or the family's wishes could prevail over the medical insight of the doctor. During subsequent discussion it became apparent that opinions vary greatly between doctors on this subject. Conclusion: Patients and doctors have little knowledge of negative statements of intention. The doctor can help the patient in formulating a realistic statement of intention, and document the patient's wishes in the patient's written or electronic medical dossier. A representative designated in writing can help to interpret the declaration of intention. abstract_id: PUBMED:22326409 Effect of gender on prehospital refusal of medical aid. Background: "Refusal of medical aid" (RMA) is the term commonly used by emergency medical technicians (EMTs) when someone calls 911 for care (usually the patient or a family member) but, after the initial encounter with the EMTs, the patient refuses emergency medical services transport to the hospital. Some intervention may have been performed, such as taking vital signs or an electrocardiogram, before the RMA. Although there have been multiple studies of the characteristics and outcomes of patients who RMA, little analysis has been done of the role of EMTs in these cases. Objective: To analyze the association between EMT gender and the patient's decision to refuse medical aid in the prehospital setting. Methods: The study was performed using data from one hospital-based ambulance service in an urban setting that participates in the 911 system. This was a case control study that examined the data from consecutive patients who refused medical aid for a 1-year period compared to a control group of non-RMA patients. Results: There was a significantly higher representation of all-male EMT teams in the RMA group (p&lt;0.0001). Using propensity score-matching methodology to control for other factors, all-male EMT teams were 4.75 times more likely to generate an RMA as compared to all-female and mixed-gender EMT teams (95% confidence interval 1.63-13.96, p=0.0046). Conclusion: We found that the gender of the EMTs was one of the most important factors associated with RMA, with a much higher frequency of RMAs occurring when both members of the team were male. abstract_id: PUBMED:16204141 Adequacy of online medical command communication and emergency medical services documentation of informed refusals. Background: In the out-of-hospital setting, when emergency medical services (EMS) providers respond to a 9-1-1 call and encounter a patient who wishes to refuse medical treatment and/or transport to the hospital, the EMS providers must ensure the patient possesses medical decision-making capacity and obtain an informed refusal. In the city of Cleveland, Ohio, Cleveland EMS completes a nontransport worksheet that prompts the paramedics to evaluate specific patient characteristics that can influence medical decision-making capacity and then discuss the risks of refusing with the patient. Cleveland EMS then contacts an online medical command (OLMC) physician to authorize the refusal. OLMC calls are recorded for review. Objectives: To assess the ability of EMS to determine medical decision-making capacity and obtain an informed refusal of transport. Methods: This study was a retrospective review of a cohort of recorded OLMC refusal calls and of the accompanying written documentation by Cleveland EMS. The completeness of the verbal communication between the paramedic and OLMC physician and the written documentation on the nontransport worksheet were measured as surrogate markers of the adequacy of determining medical decision-making capacity and obtaining an informed refusal. Results: One hundred thirty-seven OLMC calls for patient-initiated refusals were reviewed. Vital signs and alertness/orientation were verbally communicated more than 83% of the time. The presence of head injury, presence of alcohol or drug intoxication, and presence of hypoglycemia were verbally communicated less than 31% of the time. Verbal communication stating that the risks of refusing had been discussed with the patient occurred 44.5% of the time. The written documentation of the refusal encounter was more complete, exceeding 95% for vital signs and alertness/orientation, and exceeding 80% for the remaining patient characteristics. The rate of written documentation that the risks of refusing had been discussed with the patient was 48.7%. Discrepancies between the verbal and written paramedic reports were clinically insignificant. Conclusions: Paramedic and OLMC physician communication for patients refusing out-of-hospital medical treatment and/or transport is inadequate in the Cleveland EMS system. A written nontransport worksheet improves documentation of the refusal encounter but does not ensure that every patient who refuses possesses medical decision-making capacity and the capacity to provide an informed refusal. abstract_id: PUBMED:29958360 Small talk as work talk: Enacting the patient-centered approach in nurse-practitioner-patient visits. Small talk in medical visits has received ample attention; however, small talk that occurs at the close of a medical visit has not been explored. Small talk, with its focus on relational work, is an important aspect of medical care, particularly so considering the current focus in the US on the patient-centered approach and the desire to construct positive provider- patient relationships, which have been shown to contribute to higher patient satisfaction and better health outcomes. Therefore, even small talk that is unrelated to the transactional aspect of the medical visit in fact serves an important function. In this article, I analyze small talk exchanges between nurse practitioners (NPs) and their patients which occur after the transactional work of the visit is completed. I focus on two exchanges which highlight different interactional goals. I argue that these examples illustrate a willingness on the part of all participants to extend the visit solely for the purpose of constructing positive provider-patient relationships. Furthermore, because exchanges occur after the 'work' of the visit has been completed, they have the potential to construct positive relationships that extend beyond the individual visit. abstract_id: PUBMED:18770960 Conscience, principled refusal and ethics of refusal to provide treatment to a patient's request There are situations in which refusal to provide treatment according to a patient's request may be justified by ethical rules and principles of medical ethics. This article explores such situations, analyzing the various factors which a physician may consider when refusal to provide treatment is at stake. The article focuses on three major states of affairs under which the question of whether a physician has liberty to refuse to treat may occur: refusals deriving from conflicts between the physician's personal values and ones reflected in the proposed treatment; refusals stemming from general considerations to be classified as policy reasons; and refusals following a clinical assessment as to the proposed treatment, expected chances of recovery and expected benefit to the patient. The ethical analysis of the article leads to the conclusion that a physician may (although he/she does not have a right to) refuse to provide medical treatment following a patient's request when the physician's refusal originates in conscientious resistance. However, the physician may not refuse to treat when refusal is based on policy considerations or concerns third parties not immediately involved in the physician-patient relationship. In addition, a physician may refuse to provide treatment following a medical assessment when and if the proposed treatment is not urgently required, if the provision of treatment may create risks to the patient's health, if its conferral may inflict pain or suffering exceeding its expected benefits to the patient, or the proposed treatment is futile and its chances to combat the disease are significantly minimal. abstract_id: PUBMED:29478572 Refusal of treatments by an adult cancer patient Introduction: Refusal of treatment questions the treatment's adequacy as well as the quality of the care relationship. A rigorous analysis of these situations is necessary in order to respect the patient's fundamental right to decide for him/herself while preventing a potential loss of chance. This paper proposes practical guidelines for assessment and management of the refusal of treatment by adult cancer patients. Methods: The French Association for Supportive Care in Cancer and the French Society for Psycho-Oncology formed a task force that applied a consensus methodology to draft guidelines. Results: We propose five guidelines: (1) be informed of the conditions most often associated with refusal of treatment so as to reinforce adequate support measures; (2) understand the complexity of the process of refusal and accurately identify what is precisely refused; (3) apply an approach of systematic analysis to refusal, to try and increase the possibilities of finding an agreement while reinforcing the respect of the patient's position; (4) establish a legal procedure to address refusal of treatment that safeguards the stakeholders when no accord can be found; and (5) know the indications for ethical collective decision-making. Conclusion: A systematic assessment procedure of treatment refusal is necessary in order to ensure that all the physical, psychological and contextual aspects of it are taken into account, and to provide patients with the best treatment possible. The setting of good care relationship, the improvement of communication skills training and of comprehensive multidisciplinary care are all crucial elements in the prevention of these situations. abstract_id: PUBMED:36626204 The Effect of Offline Medical Resource Distribution on Online Physician-Patient Interaction: Empirical Study With Online and Offline Data. Background: The relationship between online health communities (OHCs) and offline medical care is unclear because both provide physician-patient interaction services and channels. Taking advantage of information and communication technology, patients have been using OHCs widely. However, some physical medical resources (such as hospital beds and medical devices) cannot be replicated by information and communication technologies. Therefore, it is worth studying how offline medical resources affect physician-patient interactions in OHCs and how OHCs help to solve resource scarcity and the uneven distribution of traditional medical treatment. Objective: This study aimed to support the notion that physician-patient consultations in OHCs are influenced by the objective distribution of offline health care capital (accessibility and availability) and to provide suggestions for the allocation of medical resources in practice through the judicious use of offline and online channels. Methods: The empirical data in this study were collected from both online and offline channels. The offline data include 9 years (2006-2014) of medical resource statistics of 31 provincial administrative regions in mainland China. Moreover, data regarding the geolocation-based physician-patient interaction network in the OHC were also collected. The online data come from one of China's largest OHCs. We obtained 92,492 telephone consultation records of 6006 physicians using an automatic web crawler program. Social network analysis was used to visualize the descriptive statistics of the offline geolocation-based physician-patient interaction network in the OHC. A regression model with a squared variable was applied to analyze online and offline empirical data to further test our hypothesis. Two types of robustness tests were used to increase the reliability of the test results of the initial model. Results: The results of our social network analysis show that there is a uniform geographic distribution of patients who use OHCs, whereas the physician relies more on geographic advantage (eg, a higher medical resource capability). Moreover, the empirical results of the regression model support the notion that physician-patient telephone consultations are positively influenced by physicians' online contributions (βcontribution=.210; P&lt;.001) and capital availability (βbed=.935; P=.07), and, interestingly, spatial accessibility has an inverted U-shaped effect (βdistance=.199; P&lt;.001 and βdistance2=-.00449; P=.008). The results indicate that the use of OHCs, although constrained by offline medical resources, provides a channel for offline resources to flow from areas with high availability to those with low availability. Conclusions: This study explores the relationship between online and offline channels by investigating online physician-patient interactions and offline medical resources. In particular, this study analyzes the impact of offline channels on online channels and verifies the possibility of OHC capital use shifting from a high-availability area to a low-availability area. In addition, it provides a theoretical and practical basis for understanding the interaction of online and offline channels of medical care. abstract_id: PUBMED:31046866 Medical Control Decisions: When Does a Neonate Become a Separate Patient? Patient refusal for care or transport is a common request to medical control physicians, and it is an especially challenging decision in the case of minors. Parents or guardians are able to refuse medical care for a minor if there is not an imminent threat of harm to the minor. However, if a minor patient is presumed to be in need of emergent medical care to prevent harm, medical personnel have the right to treat the minor, even if the parent or guardian objects. If the minor patient is a fetus or a neonate, it is not always clear when they are considered to be a separate patient. Apparently, there is no over-riding general rule or law and, consequently, Emergency Medical Services (EMS) protocols vary greatly from state to state. This case report describes one patient case that involved some of these unclear legal areas and how it fit with local EMS protocols. The legal question arose when a pregnant patient delivered her baby, but the umbilical cord was not cut. Are the mother's rights violated by cutting the umbilical cord if she objects to the procedure? How is the medical control physician to decide when to go beyond established EMS protocols to ensure that the safest and most ethical care is provided to a patient in the field? Does the care of the infant or the mother take precedence? Continued analyses of cases are required to ensure that protocols and guidelines are protecting both patients and providers.Venegas A, Ann Maggiore W, Wells R, Baker R, Watts S. Medical control decisions: when does a neonate become a separate patient? Prehosp Disaster Med. 2019;34(2):224-225. Answer: Yes, online medical control (OLMC) and physician-patient communication (PPC) can help when a patient initially refuses care. A study found that PPC is associated with a higher rate of patients agreeing to be transported for medical care. Specifically, 34% of patients with PPC agreed to be transported, compared to only 13% of patients without PPC. This suggests that direct communication between the physician and the patient can increase the likelihood of the patient accepting transport and care. However, it is important to note that this increased rate of transport comes at the cost of longer call times, with calls involving PPC taking significantly longer than those without PPC. Additionally, patients with altered mental status (AMS) were more likely to agree to transport and required more physician time during calls (PUBMED:11435188). This finding is consistent with the broader understanding of the importance of communication in medical interactions. For instance, small talk in Traditional Chinese Medicine (TCM) online medical consultations is used by doctors to acquire holistic information for diagnosis and manage rapport with patients, contributing to patient-centeredness in Chinese online medical care (PUBMED:36584441). Moreover, the adequacy of OLMC communication and EMS documentation of informed refusals is critical, as studies have shown that verbal and written communication can be inadequate in ensuring that patients possess medical decision-making capacity and provide an informed refusal (PUBMED:16204141). In summary, talking with the patient through OLMC can be beneficial in convincing them to accept care, but it requires more time and resources. Effective communication, including small talk and thorough documentation, plays a crucial role in patient-centered care and informed medical decision-making.
Instruction: Can data in optometric practice be used to provide an evidence base for ophthalmic public health? Abstracts: abstract_id: PUBMED:27199180 Can data in optometric practice be used to provide an evidence base for ophthalmic public health? Purpose: The purpose of this paper is to investigate the potential of using primary care optometry data to support ophthalmic public health, research and policy making. Methods: Suppliers of optometric electronic patient record systems (EPRs) were interviewed to gather information about the data present in commercial software programmes and the feasibility of data extraction. Researchers were presented with a list of metrics that might be included in an optometric practice dataset via a survey circulated by email to 102 researchers known to have an interest in eye health. Respondents rated the importance of each metric for research. A further survey presented the list of metrics to 2000 randomly selected members of the College of Optometrists. The optometrists were asked to specify how likely they were to enter information about each metric in a routine sight test consultation. They were also asked if data were entered as free text, menus or a combination of these. Results: Current EPRs allowed the input of data relating to the metrics of interest. Most data entry was free text. There was a good match between high priority metrics for research and those commonly recorded in optometric practice. Conclusions: Although there were plenty of electronic data in optometric practice, this was highly variable and often not in an easily analysed format. To facilitate analysis of the evidence for public health purposes a UK based minimum dataset containing standardised clinical information is recommended. Further research would be required to develop suitable coding for the individual metrics included. The dataset would need to capture information from all sectors of the population to ensure effective planning of any future interventions. abstract_id: PUBMED:29108078 Big Data and Public Health - Results of the Working Group 1 of the Forum Future Public Health, Berlin 2016 Big Data is a diffuse term, which can be described as an approach to linking gigantic and often unstructured data sets. Big Data is used in many corporate areas. For Public Health (PH), however, Big Data is not a well-developed topic. In this article, Big Data is explained according to the intention of use, information efficiency, prediction and clustering. Using the example of application in science, patient care, equal opportunities and smart cities, typical challenges and open questions of Big Data for PH are outlined. In addition to the inevitable use of Big Data, networking is necessary, especially with knowledge-carriers and decision-makers from politics and health care practice. abstract_id: PUBMED:8414386 The optometric educators' role in public health issues. The optometric educator has a very important role in the development, analysis, interpretation, and promotion of public health issues. Because public health provides the framework for making rational decisions about the complexities of health care, the optometric educator has a high-order responsibility to educate students in the principles, concepts, and practice of public health. As both a creator and a deliverer of knowledge, the optometric educator also must ensure that the outcomes of his scientific inquiry are translated into relevant health policy and used effectively and efficiently for the benefit of all society. Advancement of the public health philosophy within the venue of optometric education should be the responsibility of all faculty, not just those members of the faculty who have direct responsibility for the public health curriculum. abstract_id: PUBMED:29020953 "Layers of translation" - evidence literacy in public health practice: a qualitative secondary analysis. Background: Strengthening public health systems has been a concern in Canada in the wake of public health emergencies. In one Canadian province, British Columbia, a high priority has been placed on the role of evidence to guide decision making; however, there are numerous challenges to using evidence in practice. The National Collaborating Centre for Methods and Tools therefore developed the Evidence Informed Public Health Framework (EIPH), a seven step guide to assist public health practitioners to use evidence in practice. We used this framework to examine the evidence literacy of public health practitioners in BC. Methods: We conducted a secondary analysis of two separate qualitative studies on the public health renewal process in which the use and understanding of evidence were key interview questions. Using constant comparative analysis, we analyzed the evidence-related data, mapping it to the categories of the EIPH framework. Results: Participants require both data and evidence for multiple purposes in their daily work; data may be more important to them than research evidence. They are keen to provide evidence-based programs in which research evidence is balanced with community knowledge and local data. Practitioners recognise appraisal as an important step in using evidence, but the type of evidence most often used in daily practice does not easily lend itself to established methods for appraising research evidence. In the synthesis stage of the EIPH process, synthesized evidence in the form of systematic reviews and practice guidelines is emphasized. Participants, however, need to synthesize across the multiple forms of evidence they use and see the need for more skill and resources to help them develop skill in this type of synthesis. Conclusions: Public health practitioners demonstrated a good level of evidence literacy, particularly at the collective level in the organization. The EIPH framework provides helpful guidance in how to use research evidence in practice, but it lacks support on appraising and synthesizing across the various types of evidence that practitioners consider essential in their practice. We can better support practitioners by appreciating the range of evidence they use and value and by creating tools that help them to do this. abstract_id: PUBMED:19050432 How can we strengthen the evidence base in public health preparedness? The lack of frequent real-world opportunities to study preparedness for large-scale public health emergencies has hindered the development of an evidence base to support best practices, performance measures, standards, and other tools needed to assess and improve the nation's multibillion dollar investment in public health preparedness. In this article, we argue that initial funding priorities for public health systems research on preparedness should focus on using engineering-style methods to identify core preparedness processes, developing novel data sources and measures based on smaller-scale proxy events, and developing performance improvement approaches to support the translation of research into practice within the wide variety of public health systems found in the nation. abstract_id: PUBMED:16684207 Utility of qualitative research findings in evidence-based public health practice. Epidemiological data, derived from quantitative studies, provide important information about the causes, prevalence, risk correlates, treatment and prevention of diseases, and health issues at a population level. However, public health issues are complex in nature and quantitative research findings are insufficient to support practitioners and administrators in making evidence-informed decisions. Upshur's Synthetic Model of Evidence (2001) situates qualitative research findings as a credible source of evidence for public health practice. This article answers the following questions: (1) where does qualitative research fit within the paradigm of evidence-based practice and (2) how can qualitative research be used by public health professionals? Strategies for using qualitative research findings instrumentally, conceptually, and symbolically are identified by applying Estabrooks' (1999) conceptual structure of research utilization. Different research utilization strategies are illustrated through the use of research examples from the field of work on intimate partner violence against women. Recommendations for qualitative researchers disseminating findings and for public health practitioners/policy makers considering the use of qualitative findings as evidence to inform decisions are provided. abstract_id: PUBMED:34231267 Twenty years with the public health intervention wheel: Evidence for practice. The Public Health Intervention Wheel (PHI Wheel) is a population-based practice model for public health nursing practice that encompasses three levels of practice (community, systems, individual/family) and 17 public health interventions. This article shares the story of how the PHI Wheel was created, disseminated, implemented by public health nurses (PHNs) and educators across the globe, and updated with new evidence published in the second edition of Public Health Interventions: Applications for Public Health Nursing in 2019. Evidence on the relevance of PHI Wheel interventions for public health practice in cultural and international settings supports the model's value in explaining PHN practice. This article highlights the experiences of various countries with the PHI Wheel including Canada, Ireland, New Zealand, Norway, Sweden, the United Kingdom, and the United States. The evidence update confirms the relevance of the model to PHN education and practice and reinforces the conviction that development of new evidence is essential for promoting population health. abstract_id: PUBMED:23287816 Using evaluability assessment to support the development of practice-based evidence in public health. Practice-based evidence arises from programs implemented in real-world settings. Program success may be judged on the basis of experience; however, formal evaluation studies of methodological rigor can provide a high level of credible evidence to inform public health practice. Such studies can be lengthy and expensive. Furthermore, even well-designed studies may not reach conclusive findings, for example, when a program lacks full implementation, when data systems do not have capacity to collect evaluation data, or when program implementation has not attained stability. An evaluability assessment is used to determine the capacity and readiness of a program for full-scale effectiveness evaluation. Evaluators at the Centers for Disease Control and Prevention use evaluability assessment as a preevaluation consisting of brief, focused, criteria-based assessments, document review, and a site visit. Evaluability assessment is used to guide investments in subsequent rigorously designed evaluations that yield conclusive findings to build strong and credible practice-based evidence. abstract_id: PUBMED:11067661 Building the science base for public health practice. This article explores the need for and current state of the science base in public health practice. In addition, it discusses how the National Public Health Performance Standards Program will help build the science base in the future and how this can have a positive effect on public health practice and community health status. abstract_id: PUBMED:16416689 Demonstrating excellence in practice-based research for public health. This document explores the opportunity for scholarship to enhance the evidence base for academic public health practice and practice-based research. Demonstrating Excellence in Practice-Based Research for Public Health defines practice-based research; describes its various approaches, models, and methods; explores ways to overcome its challenges; and recommends actions for its stakeholders in both academic and practice communities. It is hoped that this document will lead to new partnership opportunities between public health researchers and public health practitioners to strengthen the infrastructure of public health and add new dimensions to the science of public health practice. Demonstrating Excellence in Practice-Based Research for Public Health is intended for those who produce, participate in, and use practice-based research. This includes academic researchers and educators, public health administrators and field staff, clinical health professionals, community-based organizations and professionals, and interested members of the public. Answer: Yes, data in optometric practice can be used to provide an evidence base for ophthalmic public health. A study investigating the potential of using primary care optometry data to support ophthalmic public health found that current electronic patient record systems (EPRs) in optometric practice allowed the input of data related to metrics of interest for research. However, most data entry was in free text, which is not easily analyzed. Researchers and optometrists agreed on the importance of certain metrics for research, suggesting a good match between high-priority metrics for research and those commonly recorded in optometric practice. To facilitate analysis for public health purposes, the study recommended the creation of a UK-based minimum dataset containing standardized clinical information. This would require further research to develop suitable coding for individual metrics and ensure the dataset captures information from all population sectors for effective planning of future interventions (PUBMED:27199180). The use of data in public health, including ophthalmic public health, aligns with the broader concept of Big Data in public health, which involves linking large and often unstructured datasets to support various corporate and health-related areas. Big Data can be used for prediction, clustering, and improving information efficiency, although it is not yet well-developed in public health (PUBMED:29108078). Optometric educators also play a crucial role in promoting public health issues, including the development and interpretation of data for policy-making (PUBMED:8414386). Furthermore, public health practitioners require both data and evidence for their work, and while they value research evidence, they also balance it with community knowledge and local data. There is a need for tools to help practitioners appraise and synthesize various types of evidence, including that from optometric practice (PUBMED:29020953). Overall, the integration of optometric data into the evidence base for ophthalmic public health is feasible and can be enhanced with the development of standardized data collection and analysis methods.
Instruction: Is thrombophilia a major risk factor for deep vein thrombosis of the lower extremities among Lebanese patients? Abstracts: abstract_id: PUBMED:22645898 Acquired and genetic risk factors for deep vein thrombosis of lower extremities among Lebanese patients. Aim: Venous thrombosis results from the interaction of environmental and genetic risk factors. These factors vary according to the ethnic and geographic distribution of the populations. The aim of this study is to define the role of acquired and genetic risk factors for venous thrombosis of lower extremities among Lebanese patients assessed in a university hospital and to discuss them according to the international literature. Material And Methods: From January 2005 to January 2010, 166 patients (72 males and 94 females) were diagnosed with lower extremity deep vein thrombosis. Mean age was 67 years (range: 25 to 96 years). Results: The most frequently reported acquired risk factors for venous thrombosis in this study were advanced age, obesity, history of venous thromboembolism, immobilization, surgery, varicose veins and malignancy. Screening for prothrombotic genetic abnormalities was requested in patients with conditions highly suggestive of hypercoagulation state such as young patients, patients with spontaneous, recurrent or extensive venous thrombosis, patients with family history, oral contraceptives, air travel and pregnancy. All the 45 patients (27.1%) tested for thrombophilia were positive and were carriers for factors V-Leiden (17.4%), MTHFR C 677 T (16.8%), MTHFR A 1298 C (4.8%), II G 20210 A (1.8%) and V H 1299 R (1.2%) mutation. Twelve patients (7.2%) had increased homocysteine level. Conclusion: Advanced age is the most common risk factor for venous thrombosis in these series. Thrombophilia is the second most frequently observed risk factor and is related to the high prevalence of factor V-Leiden and MTHFR C 677 T mutation among the Lebanese population. abstract_id: PUBMED:19688103 Is thrombophilia a major risk factor for deep vein thrombosis of the lower extremities among Lebanese patients? Aim: Factor V Leiden (R506Q) mutation is the most commonly observed inherited genetic abnormality related to vein thrombosis. Lebanon has one of the highest frequencies of this mutation in the world with a prevalence of 14.4% in the general population. The aim of this study is to define risk factors including inherited genetic abnormalities among Lebanese patients with lower extremity deep vein thrombosis. We report the clinical outcome of patients with thrombophilia. Methods: From January 1998 to January 2008, 162 patients (61 males and 101 females) were diagnosed with lower extremity deep vein thrombosis. Mean age was 61 years (range: 21 to 95 years). Results: The most frequent risk factors for vein thrombosis were surgery, advanced age, obesity, and cancer. Twenty-five patients had thrombophilia, 16 patients had factor V Leiden (R506Q) mutation, and seven patients had MTHFR C677T mutation. Ninety-two percent of patients screened for thrombophilia were positive. Screening was requested in young patients (16), patients with recurrent (11), spontaneous (8), and extensive (5) venous thrombosis, familial history (5), pregnancy (4), estroprogestative treatment (3), and air travel (1). Nine patients had one, 11 patients had two, and five had three of these conditions. Follow-up (6 to 120 months) of these 25 patients treated with antivitamin K did not reveal recurrences or complications related to venous thromboembolism. Conclusion: Factor V Leiden mutation followed by MTHFR mutation are the most commonly observed genetic abnormalities in these series. Defining risk factors and screening for thrombophilia when indicated reduce recurrence rate and complications. Recommendations for thrombophilia screening will be proposed. abstract_id: PUBMED:22737581 Factor V-Leiden Mutation: A Common Risk Factor for Venous Thrombosis among Lebanese Patients. Aim. Lebanon exhibits one of the highest prevalences of factor V-Leiden (FVL) in the world (14.4%). The aim of this study is to evaluate the incidence of FVL mutation among Lebanese patients with lower extremity venous thrombosis. Material and Methods. From January 2003 to January 2011, 283 consecutive Lebanese patients, diagnosed with deep venous thrombosis (DVT) by duplex scan, were retrospectively reviewed. FVL mutation was tested among patients with conditions highly suggestive of hypercoagulation states (65 patients). Results. FVL mutation was detected among 56.9% of patients, 68.6% of patients younger than 50 years, and 43.4% of patients older than 50 years (P = 0.041). FVL mutation was commonly reported in young adults, in patients with pregnancy, estrogen drugs, recurrent DVT, and resistance to anticoagulation. Conclusion. The high rate of FVL mutation observed among Lebanese patients with venous thrombosis is related to the high prevalence of this mutation in the Lebanese population. Thrombophilia screening should be tailored to accommodate a population's risk factor. In countries with high prevalence of FVL, this mutation should be screened among patients younger than 50 years and patients with situations highly suggestive of hypercoagulation states. abstract_id: PUBMED:22454560 Lower extremity venous thrombosis in patients younger than 50 years of age. Aim: Lower extremity deep venous thrombosis in the young adult is uncommon and has not been well studied in the literature. The aim of this study is to define risk factors for deep venous thrombosis among patients younger than 50 years of age, to compare them with a control group, and to suggest recommendations for the management and treatment of venous thrombosis in this particular group of patients. Methods: From January 2003 to January 2011, 66 consecutive Lebanese patients (29 males and 37 females) younger than 50 years, diagnosed in an academic tertiary-care center with lower extremity deep venous thrombosis by color flow duplex scan, were retrospectively reviewed. Their age varied between 21 and 50 years (mean 38.7 years). The control group included 217 patients (86 males and 131 females) older than 50 years (range: 50-96 years; mean 72.9 years). Results: The most commonly reported risk factors in the younger age group were inherited thrombophilia (46.9% compared with 13.8% in the control group; P &lt; 0.001), pregnancy (18.2% compared with 0.5%; P &lt; 0.001), treatment with estrogen drugs (13.6% compared with 2.3%; P = 0.001), and family history of venous thromboembolism (9.1% compared with 3.8%; P = 0.084). Conclusion: Inherited thrombophilia is the most commonly observed risk factor among patients younger than 50 years, with a prevalence of three times more than the control group. Young adults should be screened for thrombophilia even in the presence of transient acquired risk factors. Pregnancy and treatment with estrogen drugs essentially when associated with inherited thrombophilia represent a frequent cause of venous thrombosis among young female patients. Inferior vena cava abnormalities should be excluded in young patients with spontaneous proximal venous thrombosis especially when recurrent venous thrombosis or resistance to anticoagulation are observed. abstract_id: PUBMED:9139557 Risk factors for deep venous thrombosis of the upper extremities. Background: Hypercoagulable states and triggering factors (surgery, trauma, immobilization, pregnancy, and use of oral contraceptives) are associated with an increased risk for deep venous thrombosis of the lower extremities. In contrast, risk factors for deep venous thrombosis of the upper extremities have not been identified. Objective: To evaluate the prevalence of hypercoagulable states and triggering factors in patients with primary deep venous thrombosis of the upper extremities. Design: Frequency-matched case-control study. Setting: Hemophilia and thrombosis center at a university hospital. Patients: 36 patients who had primary deep venous thrombosis of the upper extremities, 121 patients who had primary deep venous thrombosis of the lower extremities, and 108 healthy controls. Patients who had deep venous thrombosis of the lower extremities and study controls were frequency-matched by age, sex, geographic origin, and social status with patients who had deep venous thrombosis of the upper extremities. Measurements: Resistance to activated protein C was evaluated by a clotting method based on the activated partial thromboplastin time. If test results were abnormal or borderline, DNA analysis for substitution in coagulation factor V gene was done. Antithrombin, protein C, protein S, antiphospholipid antibodies, and total plasma homocysteine levels were also measured. Results: Prevalences of abnormalities of the natural anticoagulant system (9%) and hyperhomocysteinemia (6%) in patients who had deep venous thrombosis of the upper extremities were similar to prevalences of both factors in controls (6% and 7%, respectively) but lower than in patients who had deep venous thrombosis of the lower extremities (31% and 14%, respectively). Antiphospholipid antibodies were found only in patients who had venous thrombosis of the lower extremities (7%). The overall prevalence of hypercoagulable states in patients who had thrombosis of the upper extremities (15%) was similar to that in controls (12%) but was significantly lower than that in patients who had thrombosis of the lower extremities (56%). A recent history of strenuous exercise of muscles in the affected extremity was the most frequent triggering factor for patients who had deep venous thrombosis in the upper extremities (33%). Conclusions: This preliminary study indicates that the prevalence of hypercoagulable states is low in patients who have primary deep venous thrombosis of the upper extremities. abstract_id: PUBMED:32021228 Inherited Thrombophilia in a Lebanese Family of Four Generations: A Case Report of Recurrent Miscarriage. Introduction: Factor V Leiden (G1691A), prothrombin (G20210A) and MTHFR (C677T) gene mutations were investigated in many studies for their association with Deep Venous Thrombosis. Case Presentation: A North Lebanese family has been examined, from an index case, a 40-year-old woman, who had a history of venous thrombosis with unexplained recurrent miscarriage. The index case was found to be heterozygous for factor V Leiden G1691A, prothrombin G20210A, and methylenetetrahydrofolate reductase C677T gene variants. Her family members were heterozygous for at least two of the three-point mutations, and multiple risk factors associated with thrombophilia were identified. Conclusion: Our findings emphasize the need for clarifying the utility and futility of thrombophilia testing in the era of molecular diagnostics. abstract_id: PUBMED:10669145 High plasma concentration of factor VIIIc is a major risk factor for venous thromboembolism. Background: Established risk factors, including deficiencies of protein C, protein S or antithrombin and the factor V Leiden and prothrombin mutation, are present in about one third of unselected patients with venous thromboembolism. In addition to these inherited thrombophilic defects, elevated plasma levels of factor VIIIc have been suggested to be important in the pathogenesis of (recurrent) venous thromboembolism. The objective of this study was to assess the relevance of factor VIIIc plasma concentration in consecutive patients with venous thromboembolism. Method: We studied the prevalence of elevated plasma levels of factor VIIIc in 65 patients with a proven single episode and in 60 matched patients with documented recurrent venous thromboembolism. The reference group consisted of 60 age- and sex-matched patients who were referred for suspected venous thromboembolism, which was refuted by objective testing and long-term clinical follow-up. To minimalize the influence of the acute phase, blood was obtained at least 6 months after the thromboembolic event and results were adjusted for fibrinogen and C-reactive protein. Factor VIIIc was re-determined several years after the first measurement in a subset of patients to evaluate the variability over time. To study a possible genetic cause, a family study was done. Findings: In the control, single and recurrent episode group, the prevalences of plasma levels of factor VIIIc above 175 IU/dl (90th percentile of controls) were 10% (95% CI: 4 to 21%), 19% (95% CI: 10 to 30%) and 33% (95% CI: 22 to 47%), respectively. For each 10 IU/dl increment of factor VIIIc, the risk for a single and recurrent episode of venous thrombosis increased by 10% (95% CI: 0.9 to 21%) and 24% (95% CI: 11 to 38%), respectively. Both low and high plasma levels of factor VIIIc were consistent over time (R = 0.80, p = 0.01). A family study indicated a high concordance for elevated factor VIIIc plasma concentrations among first degree family members. Adjustment for fibrinogen, C-reactive protein and known thrombophilic risk factors did not change the observed association of elevated factor VIIIc with thrombosis. Interpretation: Elevated plasma levels of factor VIIIc are a significant, prevalent, independent and dose-dependent risk factor for venous thromboembolism. It also predisposes to recurrent venous thromboembolism. abstract_id: PUBMED:15230481 Superficial venous thrombosis of the lower extremities: analysis of risk factors, and recurrence and role of anticoagulation. Superficial venous thrombosis (SVT) of the lower extremities is a common ailment seen in outpatient offices of vascular medicine and surgery practices. This study of 60 consecutive outpatients was carried out to examine the incidence of concomitant deep venous thrombosis (DVT), risk factors associated with SVT, recurrence of SVT and/or new DVT, and the role of anticoagulant therapy in the prevention of recurrence. Concomitant SVT and DVT (13%) were significantly less likely to be present in patients with varicose veins as compared to patients without varicose veins (p &lt; 0.04) and more likely to be present in patients with a previous history of DVT (p &lt; 0.02). Fifteen patients (25%) developed either recurrent SVT or new DVT, with two patients developing both SVT and DVT. The absence of varicose veins and the presence of a hypercoagulable condition (n = 12) appeared to influence the development of new DVT but not the recurrence of SVT. Recurrent SVT was much more likely in patients with thrombosis of the tributaries (p &lt; 0.0008). New DVT was seen significantly less frequently in patients on anticoagulants (p &lt; 0.02). abstract_id: PUBMED:10477778 The risk of recurrent deep venous thrombosis among heterozygous carriers of both factor V Leiden and the G20210A prothrombin mutation. Background: Point mutations in the factor V gene (factor V Leiden) and the prothrombin gene (the substitution of A for G at position 20210) are the most common causes of inherited thrombophilia. Whether or not factor V Leiden increases the risk of recurrent deep venous thrombosis is controversial, and there is no information on the risk of recurrence among carriers of both mutations. Methods: We studied a retrospective cohort of 624 patients who were referred for a first episode of deep venous thrombosis. After excluding 212 patients with other inherited or acquired causes of thrombophilia, we compared 112 patients who were heterozygous carriers of factor V Leiden with 17 patients who were heterozygous for both factor V Leiden and the prothrombin mutation and 283 patients who had neither mutation. The relative risk of recurrent deep venous thrombosis was calculated with use of a proportional-hazards model. Results: Patients who were heterozygous for factor V Leiden alone had a risk of recurrent deep venous thrombosis that was similar to that among patients who had neither mutation (relative risk, 1.1; 95 percent confidence interval, 0.7 to 1.6; P=0.76). In contrast, patients who were heterozygous for both factor V Leiden and the prothrombin mutation had a higher risk of recurrent thrombosis than did carriers of factor V Leiden alone (relative risk, 2.6; 95 percent confidence interval, 1.3 to 5.1; P=0.002). When the analysis was restricted to patients with spontaneous recurrences (i.e., ones that occurred in the absence of transient risk factors for venous thrombosis), the risk among carriers of both mutations, as compared with carriers of factor V Leiden alone, remained high (relative risk, 3.7; 95 percent confidence interval, 1.7 to 7.7; P&lt;0.001), particularly if the first event had also been spontaneous (relative risk, 5.4; 95 percent confidence interval, 2.0 to 14.1; P&lt;0.001). In contrast, the risk of recurrence in the presence of transient risk factors was similar among carriers of both mutations and carriers of factor V Leiden alone. Conclusions: The risk of recurrent deep venous thrombosis is similar among carriers of factor V Leiden and patients without this mutation. Carriers of both factor V Leiden and the G20210A prothrombin mutation have an increased risk of recurrent deep venous thrombosis after a first episode and are candidates for lifelong anticoagulation. abstract_id: PUBMED:14635611 A new thrombophilia risk factor: the increase of plasma factor VIII Factor VIII (FVIII) is key component of the fluid phase of the blood coagulation system. Recent evidence suggests a direct relationship between high plasma levels of FVIII and an increased risk for arterial and venous thrombosis. Thus material reviews the most important clinical and epidemiological evidence about this prothrombotic association. Main function of FVIII is to activate FX functioning as a cofactor for activated FIX in the presence of phospholipids and calcium. Since its deficiency has been historically associated with a hemorrhagic disease (namely hemophilia A), it was never studied its role in thrombosis. In order to explain the association FVIII and thrombosis, defects in its synthesis that increase its plasma concentration as well as postranslational modifications that allow a higher activity, have been proposed. Since 1977 it was suggested that increased plasma concentrations of FVIII and thrombosis may be associated. Shortly after, several other studies confirmed this association. Indeed, patients with stroke of acute myocardial infarction having high plasma levels of FVIII have a shorter survival. On the other hand, deep venous thrombosis is more frequent in patients with high plasma levels of FVIII. This rise in plasma FVIII concentration is also associated with recurrent venous thrombosis. The increment of plasma FVIII concentration is not due to an acute phase reaction. Plasma concentrations of FVIII above 100-150 IU/dL increase 3-fold the risk of thrombosis while concentrations above 150 IU/dL increase the the same risk 6-fold. While it is established the real importance of FVIII as a cause of thrombosis, every patient at risk of thrombosis must have a quantification of this factor. Evaluation of plasma FVIII concentration must be performed in patients with suspected thrombophilia since there is evidence that shows that high plasma FVIII levels is an independent thrombophilic risk factor. There are not effective therapeutic interventions able to normalize the high concentrations of FVIII. Therefore, appropriate prophylaxis during high thrombosis risk clinical episodes is the best alternative for the patient. Answer: Yes, thrombophilia is a major risk factor for deep vein thrombosis (DVT) of the lower extremities among Lebanese patients. Studies have shown that genetic risk factors such as Factor V Leiden (FVL) mutation and MTHFR mutations are prevalent among Lebanese individuals with DVT. One study reported that FVL mutation was detected in 56.9% of patients with conditions highly suggestive of hypercoagulation states, and it was particularly common in young adults, patients with pregnancy, estrogen drugs, recurrent DVT, and resistance to anticoagulation (PUBMED:22737581). Another study found that 92% of patients screened for thrombophilia were positive, with FVL mutation and MTHFR mutation being the most commonly observed genetic abnormalities (PUBMED:19688103). Additionally, a study of Lebanese patients younger than 50 years found that inherited thrombophilia was the most commonly observed risk factor in this age group, with a prevalence three times higher than the control group (PUBMED:22454560). These findings suggest that thrombophilia, due to genetic mutations such as FVL and MTHFR, is a significant risk factor for DVT in the Lebanese population. The high prevalence of these genetic risk factors in Lebanon indicates the importance of screening for thrombophilia, particularly in patients with a suggestive clinical history or those younger than 50 years (PUBMED:22737581). Moreover, the presence of these genetic risk factors can influence the management and treatment of venous thrombosis, as identifying them can help reduce recurrence rates and complications (PUBMED:19688103).
Instruction: Civilian blast injuries: an underestimated problem? Abstracts: abstract_id: PUBMED:26286180 Civilian blast injuries: an underestimated problem? : Results of a retrospective analysis of the TraumaRegister DGU® Background: Blast injuries are a rare cause of potentially life-threatening injuries in Germany. During the past 30 years such injuries were seldom the cause of mass casualties, therefore, knowledge and skills in dealing with this type of injury are not very extensive. Material And Methods: A retrospective identification of all patients in the TraumaRegister DGU® of the German Trauma Society (TR-DGU) who sustained blast injuries between January 1993 and November 2012 was carried out. The study involved a descriptive characterization of the collective as well as three additional collectives. The arithmetic mean, standard deviation and 95 % confidence interval of the arithmetic mean for different demographic parameters and figures for prehospital and in-hospital settings were calculated. A computation of prognostic scores, such as the Revised Injury Severity Classification (RISC) and the updated version RISC II (TR-DGU-Project-ID 2012-035) was performed. Results: A total of 137 patients with blast injuries could be identified in the dataset of the TR-DGU. Of the patients 90 % were male and 43 % were transported by the helicopter emergency service (HEMS) to the various trauma centres. The severely injured collective with a mean injury severity scale (ISS) of 18.0 (ISS ≥ 16 = 52 %) had stable vital signs. In none of the cases was it necessary to perform on-site emergency surgery but a very high proportion of patients (59 %) had to be surgically treated before admittance to the intensive care unit (ICU). Of the patients 27 % had severe soft tissue injuries with an Abbreviated Injury Scale (AIS) ≥ 3 and 90 % of these injuries were burns. The 24 h in-hospital fatality was very low (3 %) but the stay in the ICU tended to be longer than for other types of injury (mean 5.5 ventilation days and 10.7 days in the intensive care unit). Organ failure occurred in 36 % of the cases, multiorgan failure in 29 % and septic events in 14 %. Of the patients 16 % were transferred to another hospital during the first 48 h. The RISC and the updated RISC II tended to underestimate the severity of injuries and mortality (10.2 % vs. 6.8 % and 10.7 % vs. 7.5 %, respectively) and the trauma associated severe hemorrhage (TASH) score underestimated the probability for transfusion of more than 10 units of packed red blood cells (5.0 % vs. 12.5 %). Conclusion: This article generates several hypotheses, which should be confirmed with additional investigations. Until then it has to be concluded that patients who suffer from accidental blast injuries in the civilian setting (excluding military operations and terrorist attacks) show a combination of classical severe trauma with blunt and penetrating injuries and additionally a high proportion of severe burns (combined thermomechanical injury). They stay longer in the ICU than other trauma patients and suffer more complications, such as sepsis and multiorgan failure. Established scores, such as RISC, RISC II and TASH tend to underestimate the severity of the underlying trauma. abstract_id: PUBMED:25422066 The problem of axonal injury in the brains of veterans with histories of blast exposure. Introduction: Blast injury to brain, a hundred-year old problem with poorly characterized neuropathology, has resurfaced as health concern in recent deployments in Iraq and Afghanistan. To characterize the neuropathology of blast injury, we examined the brains of veterans for the presence of amyloid precursor protein (APP)-positive axonal swellings typical of diffuse axonal injury (DAI) and compared them to healthy controls as well as controls with opiate overdose, anoxic-ischemic encephalopathy, and non-blast TBI (falls and motor vehicle crashes). Results: In cases with blast history, we found APP (+) axonal abnormalities in several brain sites, especially the medial dorsal frontal white matter. In white matter, these abnormalities were featured primarily by clusters of axonal spheroids or varicosities in a honeycomb pattern with perivascular distribution. Axonal abnormalities colocalized with IBA1 (+) reactive microglia and had an appearance that was distinct from classical DAI encountered in TBI due to motor vehicle crashes. Opiate overdose cases also showed APP (+) axonal abnormalities, but the intensity of these lesions was lower compared to cases with blast histories and there was no clear association of such lesions with microglial activation. Conclusions: Our findings demonstrate that many cases with history of blast exposure are featured by APP (+) axonopathy that may be related to blast exposure, but an important role for opiate overdose, antemortem anoxia, and concurrent blunt TBI events in war theater or elsewhere cannot be discounted. abstract_id: PUBMED:29193256 Post-Traumatic Heterotopic Ossification: An Old Problem in Need of New Solutions. Heterotopic ossification (HO) is the formation of pathological bone in ectopic sites and it can have serious consequences for functional outcomes. For many years, its main clinical relevance was as a rare complication of elective joint arthroplasty or CNS injury and a number of prophylaxes were developed to mitigate against it in these settings. As a consequence of changes in patterns of wounding and survival in conflicts since the turn of the century, post-traumatic HO has become much more common and case severity has increased. It represents one of the main barriers to rehabilitation in a large cohort of combat-injured patients. However, extant prophylaxes have not been shown to be effective or appropriate in this patient cohort. In addition, the lack of reliable early detection or means of predicting which patients will develop HO is another barrier to effective prevention. This review examines the current state of understanding of post-traumatic HO including the historical context, epidemiology, pathophysiology, clinical issues, currently prophylaxis and detection, management, and potential future approaches. Our aims are to highlight the current lack of effective means of early detection and prevention of HO after major trauma and to stimulate research into novel solutions to this challenging problem. © 2017 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res 36:1061-1068, 2018. abstract_id: PUBMED:1654607 Problem of surgical treatment of wounds N/A abstract_id: PUBMED:19992318 The Problem of Blast Injuries: (Sections of Surgery and Pathology). N/A abstract_id: PUBMED:11307682 Behind armour blunt trauma--an emerging problem. Behind Armour Blunt Trauma (BABT) is the non-penetrating injury resulting from the rapid deformation of armours covering the body. The deformation of the surface of an armour in contact with the body wall arises from the impact of a bullet or other projectile on its front face. The deformation is part of the retardation and energy absorbing process that captures the projectile. In extreme circumstances, the BABT may result in death, even though the projectile has not perforated the armour. An escalation of the available energy of bullets and the desire of armour designers to minimise the weight and bulk of personal armour systems will increase the risk of BABT in military and security forces personnel. In order to develop materials that can be interposed between the armour and the body wall to attenuate the transfer of energy into the body, it is essential that the mechanism of BABT is known. There is a great deal of activity within UK and NATO to unravel the interactions; the mechanism is likely to be a combination of stress (pressure) waves generated by the rapid initial motion of the rear of the armour, and shear deformation to viscera produced by gross deflection of the body wall. Physical and computer model systems are under development to characterise the biophysical processes and provide performance targets for materials to be placed between armours and the body wall in order to attenuate the injuries (trauma attenuating backings-TABs). The patho-physiological consequences of BABT are being clarified by research, but the injuries will have some of the features of blunt chest trauma observed in road traffic accidents and other forms of civilian blunt impact injury. The injuries also have characteristics of primary blast injury. An overview diagnosis and treatment is described. abstract_id: PUBMED:5093819 Pulmonary contusion. A continuing management problem. N/A abstract_id: PUBMED:30514746 Dispatches from the editor: blast injury is everyone's problem. N/A abstract_id: PUBMED:9490549 Gunshot wounds as a problem in the modern surgery of injuries N/A abstract_id: PUBMED:19514110 A critical problem begging for new insight and new therapies. N/A Answer: Civilian blast injuries appear to be an underestimated problem. A retrospective analysis of the TraumaRegister DGU® by the German Trauma Society (TR-DGU) identified 137 patients with blast injuries between January 1993 and November 2012. The study found that these injuries often involved a combination of severe trauma with blunt and penetrating injuries, as well as a high proportion of severe burns (combined thermomechanical injury). Patients with blast injuries tended to have longer stays in the ICU compared to other trauma patients and were more likely to suffer complications such as sepsis and multiorgan failure. Established prognostic scores, such as the Revised Injury Severity Classification (RISC), RISC II, and the trauma associated severe hemorrhage (TASH) score, tended to underestimate the severity of the injuries and mortality, indicating that the unique nature of blast injuries may not be fully captured by these scoring systems (PUBMED:26286180). The problem of blast injuries is not limited to the civilian setting but is also a significant concern for veterans. Research on the brains of veterans with histories of blast exposure revealed the presence of amyloid precursor protein (APP)-positive axonal swellings typical of diffuse axonal injury (DAI), which were distinct from classical DAI encountered in non-blast traumatic brain injury (TBI). This suggests that blast injuries can cause unique patterns of brain damage that may be related to the blast exposure itself (PUBMED:25422066). Moreover, post-traumatic heterotopic ossification (HO), which is the formation of pathological bone in ectopic sites, has become a more common and severe problem due to changes in wounding patterns and survival in recent conflicts. Current prophylaxes developed for elective joint arthroplasty or central nervous system (CNS) injury have not been shown to be effective or appropriate for combat-injured patients, highlighting the need for new solutions to this old problem (PUBMED:29193256). In summary, civilian blast injuries are an underestimated problem that presents unique challenges in terms of clinical management and prognosis. The complexity of these injuries requires further research and development of new approaches to improve outcomes for affected individuals.
Instruction: Can a simplified algorithm prevent incomplete laparoscopic pyloromyotomy? Abstracts: abstract_id: PUBMED:25783316 Can a simplified algorithm prevent incomplete laparoscopic pyloromyotomy? Purpose: The purpose of this study is to analyze an algorithm intended to prevent incomplete pyloromyotomy in 3-port laparoscopic (3TP) and laparoendoscopic single-site (LESS-P) procedures in a teaching hospital. Methods: We defined the pyloroduodenal and pyloroantral junctions as anatomical margins prior pyloromyotomy by palpating and coagulating the serosa with the hook cautery instrument. Incomplete pyloromyotomies, mucosa perforations, serosa lacerations, and wound infections were recorded for pediatric surgical trainees (PST) and board-certified pediatric surgeons (BC). Results: We reviewed the medical files of 233 infants, who underwent LESS-P (n=21), 3TP (n=71), and open pyloromyotomy (OP, n=141). No incomplete pyloromyotomies occurred. In contrast to OP, mucosa perforations did not occur in the laparoscopic procedures during the study period (6.38% vs. 0%, P=.013). OP had insignificantly more serosal lacerations (3.5% vs. 1.4%, P=.407). There was no difference in the rate of wound infections between OP and laparoscopic procedures (2.8% vs. 4.3%, P=.715). In the latter, all wound infections were associated with the use of skin adhesive. Conclusions: This algorithm helps avoiding incomplete laparoscopic pyloromyotomy during the learning curve and in a teaching setting. It is not risky to assist 3TP and LESS-P to PST as this led to a decreased rate of mucosa perforations without experiencing incomplete pyloromyotomies. abstract_id: PUBMED:24952793 Risk of incomplete pyloromyotomy and mucosal perforation in open and laparoscopic pyloromyotomy. Background: Despite randomized controlled trials and meta-analyses, it remains unclear whether laparoscopic pyloromyotomy (LP) carries a higher risk of incomplete pyloromyotomy and mucosal perforation compared with open pyloromyotomy (OP). Methods: Multicenter study of all pyloromyotomies (May 2007-December 2010) at nine high-volume institutions. The effect of laparoscopy on the procedure-related complications of incomplete pyloromyotomy and mucosal perforation was determined using binomial logistic regression adjusting for differences among centers. Results: Data relating to 2830 pyloromyotomies (1802 [64%] LP) were analyzed. There were 24 cases of incomplete pyloromyotomy; 3 in the open group (0.29%) and 21 in the laparoscopic group (1.16%). There were 18 cases of mucosal perforation; 3 in the open group (0.29%) and 15 in the laparoscopic group (0.83%). The regression model demonstrated that LP was a marginally significant predictor of incomplete pyloromyotomy (adjusted difference 0.87% [95% CI 0.006-4.083]; P=0.046) but not of mucosal perforation (adjusted difference 0.56% [95% CI -0.096 to 3.365]; P=0.153). Trainees performed a similar proportion of each procedure (laparoscopic 82.6% vs. open 80.3%; P=0.2) and grade of primary operator did not affect the rate of either complication. Conclusions: This is one of the largest series of pyloromyotomy ever reported. Although laparoscopy is associated with a statistically significant increase in the risk of incomplete pyloromyotomy, the effect size is small and of questionable clinical relevance. Both OP and LP are associated with low rates of mucosal perforation and incomplete pyloromyotomy in specialist centers, whether trainee or consultant surgeons perform the procedure. abstract_id: PUBMED:27532616 Top to Bottom: A New Method for Assessing Adequacy of Laparoscopic Pyloromyotomy. Introduction: Hypertrophic pyloric stenosis is a commonly encountered pediatric surgical issue. Initially treated with open surgical techniques, many pediatric surgeons have adopted the minimally invasive approach using laparoscopy. However, some concerns exist that the rate of incomplete pyloromyotomy is elevated in laparoscopy. We propose a new technique to assess the adequacy of laparoscopic pyloromyotomy. Methods: Adequacy of laparoscopic pyloromyotomy was assessed by confirming that the top of the serosa on one side of the pylorus has adequate freedom to reach the bottom of the muscle on other side. A retrospective review of patients undergoing laparoscopic pyloromyotomy confirmed by this method from March 2012 to January 2016 was conducted. Demographics, laboratory values on admission, and postoperative outcomes were collected. Descriptive statistics was utilized. Results: Thirty-three patients were included. Median age was 30 days (interquartile range [IQR]: 24, 47). Median pylorus length and thickness were 19 mm (IQR 17.3, 21) and 4.5 mm (IQR: 4.0, 4.8), respectively. Median time to first full feed was 8.5 hours (IQR: 6.6, 15.6). Twenty-three (69%) patients had postoperative emesis. Median length of stay postoperation was 26.5 hours (IQR: 21.1, 44.7). There were no reoperations for incomplete pyloromyotomy and no infections. On follow-up, 1 patient had prolonged postoperative emesis that resolved without further intervention and 1 patient on peritoneal dialysis before surgery had an incisional hernia that required operation in the setting of bilateral inguinal hernias. Conclusion: In a small series, the top to bottom assessment appears to confirm adequacy of pyloromyotomy. abstract_id: PUBMED:29582798 Our experience of laparoscopic pyloromyotomy with ultrasound-guided parameters. Traditional management of infantile Hypertrophic Pyloric Stenosis is open pyloromyotomy after initial adequate resuscitation of the patient. From 1991, laparoscopic approach is considered feasible and safe. Today, diagnosis of hypertrophic pyloric stenosis is made most often made by ultrasound. With use of ultrasound-guided parameters (length of pyloric tumour and thickness of pyloric tumour), we could avoid 'incomplete pyloromyotomy' and 'mucosal perforation' (most common complications in laparoscopic approach) to achieve 100% adequacy and safety in laparoscopic pyloromyotomy. abstract_id: PUBMED:24250061 Trocarless laparoscopic pyloromyotomy with conventional instruments: Our experience. Background: The incidence of hypertrophic pyloric stenosis is approximately 1-3 per 1,000 live births. Hypertrophic pyloric stenosis is seen more often in males, with a male-to female ratio of 4:1. Laparoscopic pyloromyotomy is becoming increasingly popular as the standard treatment for hypertrophic pyloric stenosis. Materials And Methods: We describe our initial experience with laparoscopic pyloromyotomy in 16 infants using conventional laparoscopic instruments. Laparoscopic pyloromyotomy was performed through 5-mm umbilical port with 5mm 30 endoscope. Two 3-mm working instruments were inserted directly into the abdomen via separate lateral incisions. Results: All patients were prospectively evaluated. The procedure was performed in 16 infants with a mean age of 36 days and mean weight of 3.1 kg. All procedures, except two, were completed laparoscopically with standard instruments. Average operating time was 28 mins, and average postoperative length of stay was 2.8 days. There were no major intraoperative and postoperative complications. Conclusion: Laparoscopic pyloromyotomy can be safely performed by using standard conventional laparoscopic trocarless instruments. abstract_id: PUBMED:32984197 Laparoscopic vs. Open Pyloromyotomy in Treatment of Infantile Hypertrophic Pyloric Stenosis. Background/Purpose: Laparoscopic pyloromyotomy gained wide popularity in management of pyloric stenosis with contradictory results regarding its benefits over classic open approach. This study aimed at comparing both regarding their safety, efficiency, and outcome. Methods: This is a prospective randomized controlled study performed from April 2017 to April 2019. It included 80 patients, divided randomly into two groups, where laparoscopic pyloromyotomy was performed in group A and open pyloromyotomy in group B. Both groups were compared regarding operative time, post-operative pain score, time required to reach full feeding, hospital stay, complications, and parents' satisfaction. Results: Median operative time was 21 min in group A vs. 30 min in group B (P = 0). Pain Assessment in Neonates scores were generally higher in group B with more doses of analgesics required (P = 0). Mean time needed to reach full feeding was 15.2 and 18.8 h in groups A and B, respectively (P = 0). Median hospital stay was 19 h in group A and 22 h in group B (P = 0.004). Parents' satisfaction also was in favor of group A (P = 0.045). Although no significant difference was reported between both groups regarding early and late complications, some complications such as mucosal perforation and incomplete pyloromyotomy occurred in the laparoscopic group only. Conclusion: Laparoscopic pyloromyotomy was found superior to open approach regarding faster operative time, less need of analgesics, easier development of oral feeding, shorter hospital stay, and better parents' satisfaction. Yet, there are still some concerns about the safety and efficiency of this procedure over open technique. abstract_id: PUBMED:24739509 Laparoscopic pyloromyotomy: comparing the arthrotomy knife to the Bovie blade. Background: Laparoscopic pyloromyotomy was performed at our institution using an arthrotomy knife until it became unavailable in 2010. Thus, we adapted the use of the blunt Bovie tip, which can be used with or without electrocautery to perform the myotomy. This study compared the outcomes between using the arthrotomy knife versus the Bovie blade in laparoscopic pyloromyotomies. Materials And Methods: Retrospective review was performed on all laparoscopic pyloromyotomy patients from October 2007 to September 2012. Arthrotomy knife pyloromyotomy patients were compared with those performed with the Bovie blade. Patient demographics, diagnostic measurements, electrolyte levels, length of stay, operative time, and complications were compared. Results: A total of 381 patients were included, with 191 in the arthrotomy group and 190 in the Bovie blade group. No significant differences existed between groups in age, weight, gender, pyloric dimensions, electrolyte levels, or length of stay. Mean operative times were 15.8±5.6 min with knife and 16.4±5.3 min for Bovie blade (P=0.24). In the arthrotomy knife group, there was one incomplete pyloromyotomy and one omental herniation. There was one wound infection in each group. Readmission rate was greater in the arthrotomy knife group (5.7%) versus the Bovie blade group (3.1%). Conclusions: The Bovie blade appears to offer no objective disadvantages compared with the arthrotomy knife when performing laparoscopic pyloromyotomy. abstract_id: PUBMED:35104694 Open Versus Laparoscopic Pyloromyotomy for Pyloric Stenosis-A Systematic Review and Meta-Analysis. Introduction: Infantile hypertrophic pyloric stenosis is treated by either open pyloromyotomy (OP) or laparoscopic pyloromyotomy (LP). The aim of this meta-analysis was to compare the open versus laparoscopic technique. Methods: A literature search was conducted from 1990 to February 2021 using the electronic databases MEDLINE, Embase, and Cochrane Central Register of Controlled Trials. Primary outcomes were mucosal perforation and incomplete pyloromyotomy. Secondary outcomes consisted of length of hospital stay, time to full feeds, operating time, postoperative wound infection/abscess, incisional hernia, hematoma/seroma formation, and death. Results: Seven randomized controlled trials including 720 patients (357 with OP and 363 with LP) were included. Mucosal perforation rate was not different between groups (relative risk [RR] LP versus OP 1.60 [0.49-5.26]). LP was associated with nonsignificant higher risk of incomplete pyloromyotomy (RR 7.37 [0.92-59.11]). There was no difference in neither postoperative wound infections after LP compared with OP (RR 0.59 [0.24-1.45]) nor in postoperative seroma/hematoma formation (RR 3.44 [0.39-30.43]) or occurrence of incisional hernias (RR 1.01 [0.11-9.53]). Length of hospital stay (-3.01 h for LP [-8.39 to 2.37 h]) and time to full feeds (-5.86 h for LP [-15.95 to 4.24 h]) were nonsignificantly shorter after LP. Operation time was almost identical between groups (+0.53 min for LP [-3.53 to 4.59 min]). Conclusions: On a meta-level, there is no precise effect estimate indicating that LP carries a higher risk for mucosal perforation or incomplete pyloromyotomies compared with the open equivalent. Because of very low certainty of evidence, we do not know about the effect of the laparoscopic approach on postoperative wound infections, postoperative hematoma or seroma formation, incisional hernia occurrence, length of postoperative stay, time to full feeds, or operating time. abstract_id: PUBMED:29506847 Postoperative outcomes of open versus laparoscopic pyloromyotomy for hypertrophic pyloric stenosis. Background: Hypertrophic pyloric stenosis (HPS) is one of the most common indications for non-elective surgery in the neonatal population. Multiple small prospective trials have shown that postoperative outcomes for laparoscopic versus open pyloromyotomy are equivalent or slightly better with the minimally invasive approach. This study uses a prospective national database to compare postoperative complications and length of stay for infants undergoing laparoscopic versus open pyloromyotomy. Methods: Infants undergoing either open or laparoscopic pyloromyotomy for HPS were identified from the 2013 National Surgical Quality Improvement Project-Pediatric Participant Use File. Post-operative complication rates for an open versus laparoscopic approach were compared using chi square or Fisher's exact test as appropriate. Operative time and post-operative length of stay were compared using the Wilcoxon two-sample test. Results: A total of 3256 infants were identified from the 2013 and 2014 American College of Surgeons National Surgical Quality Improvement Project-Pediatric Participant Use File who underwent pyloromyotomy for HPS. Of these, 1182 were open and 2074 were laparoscopic. Postoperative outcomes are depicted in Table 2. Laparoscopic pyloromyotomy was associated with statistically significant decrease in overall morbidity (2.4% versus 1.2%, P = 0.007). Conclusions: We have demonstrated that post-operative outcomes after laparoscopic pyloromyotomy are equivalent or superior to an open approach. The minimally invasive approach to pyloromyotomy is a safe and feasible option in the treatment of HPS in infants. abstract_id: PUBMED:31670292 Laparoscopic hybrid pyloromyotomy for infantile hypertrophic pyloric stenosis: A simplified technique. Introduction: Laparoscopic pyloromyotomy (LP) for the treatment of infantile hypertrophic pyloric has advantage of smaller incisions, faster recovery, reduction in wound-related complications and better cosmesis. Various laparoscopic knives and spreaders have been used for LP, but they do not provide the depth and tissue perception as in open surgery. We describe the laparoscopic hybrid pyloromyotomy (LHP) which makes procedure simple and safe without the requirement of any special instrument. Materials And Methods: This retrospective and prospective comparative study was conducted over a period of 4.5 years in a tertiary teaching hospital in central India. All patients with infantile hypertrophic pyloric stenosis diagnosed on the basis of clinical history, examination and ultrasonography were included in the study. Retrospective data of three-port conventional LP (CLP) using monopolar diathermy hook for incision was used as control group against prospective data of 25 patients undergoing LHP. After a proper layout, LHP was done using one umbilical optical port, right paraumbilical grasper of holding the pyloric olive and an epigastric incision for hybrid pyloromyotomy using 11 no blade and blunt-tipped mosquito artery forceps. Results: Prospective group of LHP included 25 patients which were compared with a retrospective group of CLP consisting of 25 patients. On comparison of two groups, it was found that LHP reduces operative duration significantly. The outcome in terms of complications and recovery was comparable in two groups. None of the patients developed recurrence and required any redo surgery. Conclusion: LHP is a simplified approach which is easy to learn and teach, improves safety and accuracy of the procedure. Answer: Yes, a simplified algorithm can help prevent incomplete laparoscopic pyloromyotomy. A study analyzed an algorithm designed to prevent incomplete pyloromyotomy in laparoscopic procedures, including 3-port laparoscopic (3TP) and laparoendoscopic single-site (LESS-P) pyloromyotomy, in a teaching hospital setting. The algorithm involved defining the pyloroduodenal and pyloroantral junctions as anatomical margins before pyloromyotomy by palpating and coagulating the serosa with a hook cautery instrument. The results showed that no incomplete pyloromyotomies occurred when this algorithm was used. Additionally, the laparoscopic procedures had a decreased rate of mucosa perforations compared to open pyloromyotomy (OP), and there was no significant difference in the rate of wound infections between laparoscopic procedures and OP. The study concluded that the algorithm helps avoid incomplete laparoscopic pyloromyotomy during the learning curve and in a teaching setting (PUBMED:25783316). Other studies have also proposed methods to assess the adequacy of laparoscopic pyloromyotomy, such as confirming that the top of the serosa on one side of the pylorus has adequate freedom to reach the bottom of the muscle on the other side, which appeared to confirm the adequacy of pyloromyotomy in a small series (PUBMED:27532616). Additionally, the use of ultrasound-guided parameters has been suggested to avoid incomplete pyloromyotomy and mucosal perforation, achieving 100% adequacy and safety in laparoscopic pyloromyotomy (PUBMED:29582798). In summary, the implementation of a simplified algorithm, along with other methods like ultrasound-guided parameters and intraoperative assessment techniques, can help prevent incomplete laparoscopic pyloromyotomy and improve the safety and efficacy of the procedure.
Instruction: Usability of quality measures for online health information: Can commonly used technical quality criteria be reliably assessed? Abstracts: abstract_id: PUBMED:16043090 Usability of quality measures for online health information: Can commonly used technical quality criteria be reliably assessed? Purpose: Many criteria have been developed to rate the quality of online health information. To effectively evaluate quality, consumers must use quality criteria that can be reliably assessed. However, few instruments have been validated for inter-rater agreement. Therefore, we assessed the degree to which two raters could reliably assess 22 popularly cited quality criteria on a sample of 42 complementary and alternative medicine Web sites. Methods: We determined the degree of inter-rater agreement by calculating the percentage agreement, Cohen's kappa, and prevalence- and bias-adjusted kappa (PABAK). Results: Our un-calibrated analysis showed poor inter-rater agreement on eight of the 22 quality criteria. Therefore, we created operational definitions for each of the criteria, decreased the number of assessment choices and defined where to look for the information. As a result 18 of the 22 quality criteria were reliably assessed (inter-rater agreement &gt; or = 0.6). Conclusions: We conclude that even with precise definitions, some commonly used quality criteria cannot be reliably assessed. However, inter-rater agreement can be improved with precise operational definitions. abstract_id: PUBMED:15361026 Inter-observer agreement for quality measures applied to online health information. Many quality criteria have been developed to rate the quality of online health information. However, few instruments have been validated for inter-observer reliability. Therefore, we assessed the degree to which two raters agree upon the presence or absence of information based on 22 popularly cited quality criteria on a sample of 21 complementary and alternative medicine websites. Our preliminary analysis showed a poor inter-rater agreement on 10 out of the 22 quality criteria. Therefore, we created operational definitions for each of the criteria, decreased the allowed choices and defined a location to look for the information. As a result 15 out of the 22 quality criteria had a kappa &gt;0.6. We conclude that even with precise definitions some commonly used quality criteria to assess the quality of health information online cannot be reliably assessed. However, inter-rater agreement can be improved by providing precise operational definitions. abstract_id: PUBMED:27888834 Exploring the impact of word-of-mouth about Physicians' service quality on patient choice based on online health communities. Background: Health care service is a high-credence service and patients may face difficulties ascertaining service quality in order to make choices about their available treatment options. Online health communities (OHCs) provide a convenient channel for patients to search for physicians' information, such as Word-of-Mouth (WOM), particularly on physicians' service quality evaluated by other patients. Existing studies from other service domains have proved that WOM impacts consumer choice. However, how patients make a choice based on physicians' WOM has not been studied, particularly with reference to different patient characteristics and by using real data. Methods: One thousand eight hundred fifty three physicians' real data were collected from a Chinese online health community. The data were analyzed using ordinary least squares (OLS) method. Results: The study found that functional quality negatively moderated the relationship between technical quality and patient choice, and disease risk moderated the relationship between physicians' service quality and patient choice. Conclusions: Our study recommends that hospital managers need to consider the roles of both technical quality and functional quality seriously. Physicians should improve their medical skills and bedside manners based on the severity and type of disease to provide better service. abstract_id: PUBMED:31045507 Consumer Evaluation of the Quality of Online Health Information: Systematic Literature Review of Relevant Criteria and Indicators. Background: As the quality of online health information remains questionable, there is a pressing need to understand how consumers evaluate this information. Past reviews identified content-, source-, and individual-related factors that influence consumer judgment in this area. However, systematic knowledge concerning the evaluation process, that is, why and how these factors influence the evaluation behavior, is lacking. Objective: This review aims (1) to identify criteria (rules that reflect notions of value and worth) that consumers use to evaluate the quality of online health information and the indicators (properties of information objects to which criteria are applied to form judgments) they use to support the evaluation in order to achieve a better understanding of the process of information quality evaluation and (2) to explicate the relationship between indicators and criteria to provide clear guidelines for designers of consumer health information systems. Methods: A systematic literature search was performed in seven digital reference databases including Medicine, Psychology, Communication, and Library and Information Science to identify empirical studies that report how consumers directly and explicitly describe their evaluation of online health information quality. Thirty-seven articles met the inclusion criteria. A qualitative content analysis was performed to identify quality evaluation criteria, indicators, and their relationships. Results: We identified 25 criteria and 165 indicators. The most widely reported criteria used by consumers were trustworthiness, expertise, and objectivity. The indicators were related to source, content, and design. Among them, 114 were positive indicators (entailing positive quality judgments), 35 were negative indicators (entailing negative judgments), and 16 indicators had both positive and negative quality influence, depending on contextual factors (eg, source and individual differences) and criteria applied. The most widely reported indicators were site owners/sponsors; consensus among multiple sources; characteristics of writing and language; advertisements; content authorship; and interface design. Conclusions: Consumer evaluation of online health information is a complex cost-benefit analysis process that involves the use of a wide range of criteria and a much wider range of quality indicators. There are commonalities in the use of criteria across user groups and source types, but the differences are hard to ignore. Evidently, consumers' health information evaluation can be characterized as highly subjective and contextualized, and sometimes, misinformed. These findings invite more research into how different user groups evaluate different types of online sources and a personalized approach to educate users about evaluating online health information quality. abstract_id: PUBMED:36851068 Chinese Technical Guideline for Deriving Water Quality Criteria for Protection of Freshwater Organisms. In recent years, China has determined the national goal of "developing national environmental criteria", thereby promoting the rapid development of environmental quality criteria research in China. In 2017, the Ministry of Ecology and Environment of China (MEEC, formerly the Ministry of Environmental Protection of China) issued the technical guideline for deriving water quality criteria (WQC) for protection of freshwater organisms (HJ 831-2017), and in 2022, they organized the guideline revision and issued an updated version (HJ 831-2022). The primary contents of the revision included the following. The minimum toxicity data requirements were upgraded from 6 to 10, and the species mean toxicity value was replaced by the same effect toxicity value for the criteria calculation. It is now required that the tested organisms must be distributed in China's natural fresh waters, and the toxicity data of non-native model species will no longer be used. The list of freshwater invasive species in China that cannot be used as test species was added into the guideline. The acute/chronic ratio (ACR) method for the criteria derivation and the extreme value model were deleted, and the provisions for testing the toxicity data distribution were also deleted. The exposure time of the toxicity test of various tested organisms was refined, and the priority of the toxicity data was clearly specified. This paper introduces the framework and specific technical requirements of HJ 831-2022 in detail, including data collection, pre-processing of toxicity data, criteria derivation, fitting models, and quality control. This introduction is helpful for international peers to understand the latest research progress of China's WQC. abstract_id: PUBMED:36311181 Readability and Quality of Online Health Information Regarding Parathyroidectomy. Objective: Assessment of the readability and quality of online health information regarding parathyroidectomy. Study Design: Cross-sectional analysis. Setting: Websites providing patient-oriented health information regarding parathyroidectomy obtained via the Google search engine. Methods: The top 75 Google search results for "parathyroidectomy,""parathyroid surgery," and "parathyroid gland removal" were reviewed. Websites were categorized by website type and country of origin. Readability was assessed by Flesch-Kincaid Grade Level and Simple Measure of Gobbledygook. Website quality was assessed per JAMA benchmark criteria and the DISCERN instrument. Results: A total of 74 unique websites were evaluated. The mean readability of the assessed websites exceeded the recommended sixth-grade reading level on the Flesch-Kincaid Grade Level and Simple Measure of Gobbledygook (P &lt; .001). Readability did not vary significantly by website type. Websites originating from the United Kingdom were significantly more readable than those from the United States. The majority of assessed websites were of poor quality (n = 42, 56.8%) on assessment based on the DISCERN instrument. Quality varied significantly by website category on the JAMA benchmark criteria (P &lt; .001) and DISCERN score (P = .049) with commercial websites receiving the highest scores. DISCERN score also varied significantly by country of origin (P = .036) with UK sites receiving highest mean DISCERN scores. Conclusion: Online health information regarding parathyroidectomy is largely of poor quality and is poorly readable for many patients. Institutions utilizing well-defined guidelines for development of patient educational resources may provide online health information of greater quality and readability. abstract_id: PUBMED:35940574 A Comprehensive Evaluation of the Quality, Readability, and Technical Quality of Online Information on Glaucoma. Purpose: To evaluate the quality and reliability of medical information, the technical quality of the presentation of information, and the readability of informational websites that publish content on the definition, causes, symptoms, and treatment of glaucoma. Design: A cross-sectional study was conducted to assess information published on websites with regard to glaucoma. Subjects: The top 150 websites populated on a Google search using the keywords glaucoma, high intraocular pressure, and high eye pressure were chosen for evaluation. Methods: Two independent reviewers assessed quality and reliability of each website using the DISCERN, Health on the Net Code (HONcode), and Journal of the American Medical Association (JAMA) criteria. The reviewers also evaluated technical quality by determining each website's ability to satisfy 10 unique features. Readability was assessed using the Readability Studio software (Oleander Software). Main Outcome Measures: Quality of information was analyzed using the DISCERN, HONcode, and JAMA criteria. To assess readability, the Bormuth Cloze Mean, Bormuth Grade Placement, Flesch Kincaid Reading Ease, Coleman-Liau Index, Gunning Fog Score, Simple Measure of Gobbledygook Index, Readability Score, Fry Estimate, Raygor Estimate, and the Overall Mean Readability metrics were used. A separate subanalysis categorized websites into institutional and private categories. Results: Readability was poor among all websites, with most websites requiring a reading level higher than the 11th grade. The overall mean DISCERN score ± standard deviation (SD) was 3.0 ± 0.4, the mean HONcode score ± SD was 9.6 ± 1.8, and the mean JAMA score ± SD was 2.1 ± 1.1. The reviewers had moderate to excellent interrater reliability. Institutional websites (n = 39) had a higher mean DISCERN score (3.18 ± 0.33 vs. 2.95 ± 0.39, P &lt; 0.05) and mean HONcode score (10.18 ± 1.90 vs. 9.34 ± 1.71, P &lt; 0.05) than those of private websites (n = 111). Technical quality was higher among institutional websites (P &lt; 0.05). Conclusions: An overwhelming majority of websites presented information of low quality, reliability, and readability. Institutional websites generally received higher scores than those received by private websites; however, overall scores were still substandard, which necessitates improvement of online information on glaucoma. abstract_id: PUBMED:20969777 An online network tool for quality information to answer questions about occupational safety and health: usability and applicability. Background: Common information facilities do not always provide the quality information needed to answer questions on health or health-related issues, such as Occupational Safety and Health (OSH) matters. Barriers may be the accessibility, quantity and readability of information. Online Question &amp; Answer (Q&amp;A) network tools, which link questioners directly to experts can overcome some of these barriers. When designing and testing online tools, assessing the usability and applicability is essential. Therefore, the purpose of this study is to assess the usability and applicability of a new online Q&amp;A network tool for answers on OSH questions. Methods: We applied a cross-sectional usability test design. Eight occupational health experts and twelve potential questioners from the working population (workers) were purposively selected to include a variety of computer- and internet-experiences. During the test, participants were first observed while executing eight tasks that entailed important features of the tool. In addition, they were interviewed. Through task observations and interviews we assessed applicability, usability (effectiveness, efficiency and satisfaction) and facilitators and barriers in use. Results: Most features were usable, though several could be improved. Most tasks were executed effectively. Some tasks, for example searching stored questions in categories, were not executed efficiently and participants were less satisfied with the corresponding features. Participants' recommendations led to improvements. The tool was found mostly applicable for additional information, to observe new OSH trends and to improve contact between OSH experts and workers. Hosting and support by a trustworthy professional organization, effective implementation campaigns, timely answering and anonymity were seen as important use requirements. Conclusions: This network tool is a promising new strategy for offering company workers high quality information to answer OSH questions. Q&amp;A network tools can be an addition to existing information facilities in the field of OSH, but also to other healthcare fields struggling with how to answer questions from people in practice with high quality information. In the near future, we will focus on the use of the tool and its effects on information and knowledge dissemination. abstract_id: PUBMED:32367206 Toolbox for user-experience evaluation of online health information The Internet is now a central source of health information. An ideal design of web services and e‑health programs requires valid evaluations of users' perceptions, the so-called user experience. As part of a research project with the German Federal Centre for Health Education (BZgA), a toolbox for website evaluation (for screening and monitoring) was compiled for this purpose from freely available quantitative empirical questionnaires. With this toolbox, key aspects of the web user experience can be reliably and validly assessed. In this article, we describe the toolbox's components, underlying quality criteria, and practical experiences.The aim of the toolbox is to facilitate methodically high-quality, comprehensive, and time-efficient analyses. A total of nine standardized instruments and supplementary single items from three areas are documented. In the standard version of the toolbox, 35 items are used that can be answered in less than 10 min. For an extended examination of a website, the toolbox documents additional measures on website content, usability, and aesthetics. We discuss both the advantages and limitations of the toolbox and give an outlook on possible future developments. abstract_id: PUBMED:34326038 A Health Information Quality Assessment Tool for Korean Online Newspaper Articles: Development Study. Background: Concern regarding the reliability and accuracy of the health-related information provided by online newspaper articles has increased. Numerous criteria and items have been proposed and published regarding the quality assessment of online information, but there is no standard quality assessment tool available for online newspapers. Objective: This study aimed to develop the Health Information Quality Assessment Tool (HIQUAL) for online newspaper articles. Methods: We reviewed previous health information quality assessment tools and related studies and accordingly developed and customized new criteria. The interrater agreement for the new assessment tool was assessed for 3 newspaper articles on different subjects (colorectal cancer, obesity genetic testing, and hypertension diagnostic criteria) using the Fleiss κ and Gwet agreement coefficient. To compare the quality scores generated by each pair of tools, convergent validity was measured using the Kendall τ ranked correlation. Results: Overall, the HIQUAL for newspaper articles comprised 10 items across 5 domains: reliability, usefulness, understandability, sufficiency, and transparency. The interrater agreement for the article on colorectal cancer was in the moderate to substantial range (Fleiss κ=0.48, SE 0.11; Gwet agreement coefficient=0.74, SE 0.13), while for the article introducing obesity genetic testing it was in the substantial range, with values of 0.63 (SE 0.28) and 0.86 (SE 0.10) for the two measures, respectively. There was relatively low agreement for the article on hypertension diagnostic criteria at 0.20 (SE 0.10) and 0.75 (SE 0.13), respectively. Validity of the correlation assessed with the Kendall τ showed good correlation between tools (HIQUAL vs DISCERN=0.72, HIQUAL vs QUEST [Quality Evaluation Scoring Tool]=0.69). Conclusions: We developed a new assessment tool to evaluate the quality of health information in online newspaper articles, to help consumers discern accurate sources of health information. The HIQUAL can help increase the accuracy and quality of online health information in Korea. Answer: The usability of quality measures for online health information, particularly technical quality criteria, has been a subject of research to determine whether these criteria can be reliably assessed. Studies have shown that while many criteria have been developed to rate the quality of online health information, not all of them can be reliably assessed by different raters. For instance, an assessment of 22 popularly cited quality criteria on complementary and alternative medicine websites revealed poor inter-rater agreement on several criteria. However, after creating operational definitions for each criterion, reducing the number of assessment choices, and defining where to look for the information, the reliability of the assessment improved significantly. In one study, 18 out of 22 quality criteria were reliably assessed with inter-rater agreement of 0.6 or higher after these adjustments were made (PUBMED:16043090). Another study found that 15 out of 22 quality criteria had a kappa greater than 0.6 following similar modifications (PUBMED:15361026). These findings suggest that while some commonly used technical quality criteria for online health information cannot be reliably assessed in their original form, the reliability of assessments can be improved by providing precise operational definitions and simplifying the assessment process. It is important to note that even with these improvements, some criteria may still not be reliably assessed, indicating that the usability of quality measures for online health information is contingent on the clarity and operationalization of the criteria used for evaluation.
Instruction: Is co-administration of ethanol to the distension medium in surgical hysteroscopy a screening method to prevent fluid overload? Abstracts: abstract_id: PUBMED:8672906 Is co-administration of ethanol to the distension medium in surgical hysteroscopy a screening method to prevent fluid overload? A prospective randomized comparative study of ablative versus non-ablative hysteroscopy and various ethanol concentrations Objective: Is it possible to diagnose early a beginning fluid absorption during operative hysteroscopy by adding ethanol to the distension medium? Methods: A prospectively randomised comparative study of ablative versus non-ablative operative hysteroscopy with differing ethanol concentration was performed. Purisole (a mannitol/sorbitol solution) was used as distension medium. Results: The results of the study show that at those hysteroscopical procedures at which the endometrium is not or only minimally injured (e.g. syneciolysis, hysteroscopic proximal tubal catheterisation) an intraoperative screening is not necessary due to the low absorbing amounts. At the hysteroscopical procedures as the resection of myoma, endometriumablation and septumresection, however, an addition of ethanol of 2% to the distension medium has proved to be useful, because with this method absorbing amounts from 400 mls can be established by positive values of breath alcohol. As the result of a further absorption of fluid, but delayed in time compared to the first positive value of breath alcohol, there is an increase of the central venous pressure and a hyponatraemia. Conclusion: The intraoperative ethanol-monitoring is a non-invasive procedure which can be performed at ablative-operative hysteroscopies and has no negative influence on the course of the intervention and the general condition of the patients. abstract_id: PUBMED:8991843 Addition of ethanol to the distension medium in surgical hysteroscopy as screening to prevent "fluid overload". A prospective randomized comparative study of ablative versus non-ablative surgical hysteroscopy and different ethanol concentration For answering the question at which hysteroscopical procedures an intraoperative screening method is necessary to avoid a fluid overload and whether a beginning fluid absorption can be diagnosed early by adding ethanol to the distension medium, a prospectively randomised comparative study of ablative versus non-ablative operative hysteroscopy with differing ethanol concentrations was performed (n = 120). Purisole (a mannit/sorbit solution) was used a distension medium. The measuring parameters (breath alcohol, amount of absorbed fluid, haematocrit and haemoglobin values, central venous pressure, heart frequency) were intraoperatively determined at 5-minute intervals. The results of the study show that with those hysteroscopical procedures during which the endometrium is not or only minimally injured (e.g. syneciolysis, hysteroscopic proximal tubal catheterisation). Intraoperative screening is not necessary due to the low absorbing amounts. With hysteroscopical procedures such as resection of myoma, endometrium ablation and septum resection, however, an addition of ethanol of 2% to the distension medium has proved useful, because with this method absorption amounts of 400 ml and more can be detected by positive values of breath alcohol. As the result of a further absorption of fluid, delayed in time compared to the first positive value of breath alcohol, there is an increase in central venous pressure and hyponatraemia. Intraoperative ethanol monitoring is a non-invasive procedure which can be performed during ablative-operative hysteroscopies and has no negative influence on the course of the intervention and the general condition of the patients. abstract_id: PUBMED:9004974 Risk of fluid overload syndrome during hysteroscopy procedures At present increasingly liquid media are preferred to gaseous ones in hysteroscopic procedures. In operations the former media are used almost exclusively. The use of liquid media involves, however, some risks, the greatest one being the so-called fluid overload syndrome. It develops as a result of overload and upset of the milieu interior due to excessive escape of the distension medium. Medium escapes also via the oviducts into the peritoneal cavity. It is therefore necessary to adhere to safety measures which include in particular selection of a suitable medium, the time factor, correct surgical procedure and specially continuous control of consumption of the medium. abstract_id: PUBMED:32410478 Distension media delivery systems in hysteroscopy: past, present and future. To examine the uterine cavity and/or to perform hysteroscopic surgery, one has to access the uterine cavity through the cervix, distend the cavity with a fluid (liquid or gas) to visualize it with a telescope and/or a camera system and use energy (mechanical or thermal) to affect and/or remove tissue. Distension of the uterine cavity then is an important component of hysteroscopy, and during the last century, numerous attempts have been made to achieve an undistorted and unobstructed panoramic view of the uterine cavity. In order to achieve this goal, the uterine cavity has been distended with fluids using a variety of techniques, including gravity-assisted systems, pressure cuffed systems, and electronic pumps. Excessive fluid intravasation during hysteroscopy can lead to significant complications, and hence, automated fluid delivery systems have been developed recently to provide a safe and more efficient method of fluid delivery. This review aims to describe the evolution of distension media delivery systems chronologically from the 1900s to the present day. abstract_id: PUBMED:34735388 A Fluid-Management Drape for Hysteroscopy: Innovation for Improved Patient Safety and Surgical Care. Background: Hysteroscopy requires accurate collection of unabsorbed distension media to measure patient fluid absorption. We assessed the effectiveness and usability of a novel total capture drape compared with a standard drape during hysteroscopy. Method: Simulation trials were followed by an early-phase study to compare fluid-capture efficiency and measures of drape usability during hysteroscopy randomizing the total capture drape compared with a standard drape. Experience: Simulation trials indicated complete collection of unabsorbed fluid with the total capture drape and progressive loss of unabsorbed fluid with the standard drape. An early-phase study with 68 women found no statistical difference between groups for the hysteroscopic fluid deficit, but saw fewer cases with lost fluid in the total capture drape compared with the standard drape. Direct observation and focus group data indicated a trend for better capture of unabsorbed fluid with the total capture drape, along with increased usability once surgeons became familiar with correct placement. Conclusion: Simulation and early-phase study results are favorable for the total capture drape, demonstrating comparable fluid collection with the standard drape. With repeated use and in-service training, surgeons expressed greater confidence in the accuracy of the hysteroscopic fluid deficit with the total capture drape compared with the standard drape. Design modifications should improve overall usability and fluid-capture efficiency. abstract_id: PUBMED:27892449 Otorrhagia and Nosebleed as first signs of Intravascular Absorption Syndrome During Hysteroscopy: From Bench to Bedside. Hysteroscopic surgery is indicated for the treatment of several intrauterine diseases. The surgeon needs to be aware of, and know how to prevent, possible complications related to these procedures. In the case of operative hysteroscopy, the systemic effects of low-viscosity fluid uptake must be considered in order to prevent the complications in the patient. We report on two unusual clinical signs of intravascular absorption syndrome (IAS) that developed during an operative hysteroscopy with glycine 1.5% as the fluid of distension. Based on our experience, we recommend that practitioners reduce operating times, monitor fluid balances, check electrolytes and kinetic heart rates, and monitor for symptoms including otorrhagia and nosebleed, in order to identify and possibly prevent IAS due to an overload of low-viscosity fluids. abstract_id: PUBMED:16554185 Office hysteroscopy and adenomyosis. Adenomyosis, the heterotopic presence of endometrial glands and stroma within the myometrium, has traditionally been diagnosed by the pathologist in hysterectomy specimens. However, the recent development of high-quality non-invasive techniques such as transvaginal sonography (TVS), magnetic resonance imaging (MRI) and hysteroscopy has renewed interest in diagnosing adenomyosis in the office prior to any treatment. Hysteroscopy offers the advantage of direct visualization of the uterine cavity, and since nowadays it is performed in the office, it can be offered as a first-line diagnostic tool for evaluation of uterine abnormalities in patients with abnormal uterine bleeding and/or infertility. The available data clearly indicate that high-quality mini-hysteroscopes, saline as a distension medium, and atraumatic insertion technique are essential for the success of office hysteroscopy. The procedure is indicated in any situation in which an intrauterine anomaly is suspected; it is immediately preceded by a physical exam and a TVS to evaluate uterine characteristics, and it is followed by a second TVS to take advantage of the intracavitary fluid for a contrast image of the uterus. Although diagnostic hysteroscopy does not provide pathognomonic signs for adenomyosis, some evidence suggests that irregular endometrium with endometrial defects, altered vascularization, and cystic haemorrhagic lesion are possibly associated with the entity. In addition to the direct visualization of the uterine cavity, the hysteroscopic approach offers the possibility of obtaining endometrial/myometrial biopsies under visual control. Since they can be performed in the office, the combination of TVS, fluid hysteroscopy and contrast sonography is therefore a powerful screening tool for detecting endometrial and myometrial abnormalities in association with adenomyosis. abstract_id: PUBMED:7936343 Excessive reabsorption of irrigation fluid during operative hysteroscopy for uterine myoma Operative hysteroscopy procedures can present complications connected to necessity of kneeping, usually with low viscosity fluids, an uniform distension of uterine cavity. The instilled solutions are reabsorbed through the peritoneum and open uterine venous channels producing a hyperhydration syndrome. The case report shows the rising up of tis syndrome during a hysteroscopy for an uterine myoma resection. The physiopathology includes cardiovascular overload and haemodilution, causing pulmonary, cerebral and tissutal oedema. In this case report the volumes of arterial oxygen saturation and end tidal carbon dioxide obtained on line were the first warning signals. A high CVP value, a plasmatic reduction of Na, albumin, proteins, haemoglobin, haematocrit and osmolarity reflected the excessive fluid reabsorption. For this reason the necessity, during these surgical methods, of considering some practical indications and the absolutely necessary use of clinical and biochemical monitoring systems for these patients. abstract_id: PUBMED:32917330 Operative hysteroscopy intravascular absorption syndrome is more than just the gynecological transurethral resection of the prostate syndrome: A case series and literature review. Objective: Operative hysteroscopy intravascular absorption (OHIA) syndrome refers to fluid overload complications from operative hysteroscopies. Despite guidelines for safe operative hysteroscopies, instances of OHIA syndrome have been reported. Case Report: We reported three cases of OHIA syndrome. A 48-year-old female patient presented net irrigation fluid of 11,900 mL and developed severe metabolic acidosis, conscious disturbance, acute pulmonary edema, and unexpected intensive care unit admission. A 49-year-old female patient presented net irrigation fluid of 4500 mL and developed desaturation and acute pulmonary edema. A 45-year-old female patient presented net irrigation fluid of 2400 mL and developed hyponatremia, increased hilum lung marking, and prolonged postanesthesia care unit observation. Conclusion: For safety, clinicians should use isotonic electrolyte-containing distension media and bipolar electrosurgical instruments in operative hysteroscopies, and fluid status should be monitored closely, particularly at net and total irrigation amounts &gt;3000 and &gt; 8000 mL, respectively. Intrauterine pressure should also be minimized to reduce intravascular and intraperitoneal absorption. abstract_id: PUBMED:24835861 Hysteroscopy: guidelines for clinical practice from the French College of Gynaecologists and Obstetricians. The objective of this study was to provide guidelines for clinical practice from the French College of Obstetrics and Gynecology (CNGOF), based on the best evidence available, concerning hysteroscopy. Vaginoscopy should be the standard technique for diagnostic hysteroscopy (Grade A) using a miniature (≤3.5mm sheath) (Grade A) rigid hysteroscope (Grade C), using normal saline solution distension medium (Grade C), without any anaesthesia (conscious sedation should not be routinely used), without cervical preparation (Grade B), without vaginal disinfection and without antibiotic prophylaxy (Grade B). Misoprostol (Grade A), vaginal oestrogens (Grade C), or GnRH agonist routine administration is not recommended before operative hysteroscopy. Before performing hysteroscopy, it is important to purge the air out of the system (Grade A). The uterine cavity distention pressure should be maintained below the mean arterial pressure and below 120mm Hg. The maximum fluid deficit of 2000ml is suggested when using normal saline solution and 1000ml is suggested when using hypotonic solution. When uterine perforation is recognized during operative hysteroscopy using monopolar or bipolar loop, the procedure should be stopped and a laparoscopy should be performed in order to eliminate a bowel injury. Diagnostic or operative hysteroscopy is allowed when an endometrial cancer is suspected (Grade B). Implementation of this guideline should decrease the prevalence of complications related to hysteroscopy. Answer: Yes, the co-administration of ethanol to the distension medium in surgical hysteroscopy has been studied as a screening method to prevent fluid overload. A prospective randomized comparative study showed that adding 2% ethanol to the distension medium during ablative hysteroscopic procedures, such as resection of myoma, endometrium ablation, and septum resection, is useful for detecting fluid absorption amounts of 400 ml or more through positive values of breath alcohol. This method allows for early diagnosis of fluid absorption, which is important because further absorption of fluid can lead to an increase in central venous pressure and hyponatremia. The intraoperative ethanol monitoring is a non-invasive procedure that does not negatively influence the course of the intervention or the general condition of the patients (PUBMED:8672906, PUBMED:8991843). The use of ethanol in the distension medium serves as an intraoperative screening method to avoid fluid overload, which is a risk associated with the use of liquid media in hysteroscopic procedures. Fluid overload syndrome can develop due to excessive escape of the distension medium, leading to an upset of the milieu interior (PUBMED:9004974). Therefore, the addition of ethanol to the distension medium can be an important safety measure to prevent such complications during operative hysteroscopy.
Instruction: Sex ratios: are there natural variations within the human population? Abstracts: abstract_id: PUBMED:24075622 Sex assessment using clavicle measurements: inter- and intra-population comparisons. We studied sexual dimorphism of the human clavicle in order to describe size variation and create population-specific discriminant tools for morphometric sex assessment. The studied sample consisted of 200 skeletons of adult individuals obtained from the University of Athens Human Skeletal Reference Collection, Athens, Greece. The specimens were well-documented and represented a modern population from cemeteries in the Athens area. Six dimensions typically used for clavicle measurements were recorded. For sexing clavicles, we used both traditional univariate (limiting, demarking and sectioning points) and multivariate discriminant function analysis. The accuracy of the best five classification equations/functions ranged from 91.62% to 92.55% of correctly assigned specimens. By testing new and previously published sexing functions (Greeks, Polynesians, Guatemalans) on four available population samples (English, Indians from Amritsar, Indians from Varanasi, and data from the present study) we found that, for some combinations of tested and reference samples, the accuracy of the sex assessment may decrease even below the probability given by random sex assignment. Therefore, measurements of the clavicle should not be used for sex assessment of individual cases (both forensic and archeological) whose population origin is unknown. However, significant metric differences were also recorded among three different Greek samples (i.e. within a population). As a consequence, application of a sexing method generated from one Greek sample and applied to another Greek sample led to negligible reduction in the success of sex assessment, despite general similarities in ethnic origin (Greeks), generation structure and presumed social background of the samples. Therefore, we believe that future studies should focus on understanding the nature of the differences among within-population reference samples. abstract_id: PUBMED:33943019 Sternal human variability and population affinity: Frequency of discrete traits and their relationship with sex and age. Sternal morphological variations differ among populations and are influenced by the interaction between inheritance, development, and environment. There are currently no studies that include all variability from a morphogenesis approach following a homogeneous definition. The aims of this study were (a) to develop a standardized method for the anatomical study of the sternum; (b) to analyze the prevalence of the morphological variations and their relationship between sex and age; (c) to compare the results with other populations. The sterna of 155 skeletons from a Spanish population were studied. The age at the time of death was 17 to 97 years. We analyzed two metric and 22 sternal morphological variations described in the literature and designed an illustrated atlas. The atlas was validated using the intraclass correlation coefficient (ICC). A descriptive statistical analysis was conducted to measure the prevalence and relationship between sex and age. To analyze the interpopulation variability, we compared our results with those from other authors. The atlas with definitions and reference images improves the observation and detection of all morphological variations of the sternum (ICC = 0.90). The dependence between the morphological traits and sex was significant for the variations in the sternal angle, the number of esternebra, and the development of the xiphoid process. No significant differences were found between age group and morphological traits. The expression of the sternal morphological variation and sex are population-specific. The results will help standardize future studies and provide valuable information on the variability of the sternal morphological variation. abstract_id: PUBMED:9307533 Sex ratios: are there natural variations within the human population? Objective: To investigate on a population basis the suggestion that certain factors naturally alter the odds of having a boy or a girl, and that some women are predisposed towards having children of one particular gender. Design: Routine data analysis. Population: Routinely collected data on singleton infants born in Scotland from 1975 to 1988, linked so that births (live and still) to the same mother could be identified. The analyses relate to 549,048 first to fifth order births occurring to 330,088 women whose records were complete from the first delivery onwards. Main Outcome Measures: Gender of infant. Results: Of 549,048 births, 51.4% were male. Apart from random variation, the sex ratio of 1.06 remained constant at all birth orders (P = 0.18). The probability of a male infant appeared unrelated to the genders of the preceding siblings (P &gt; 0.20 in second to fifth deliveries), and there was no evidence of variation with maternal age (P = 0.31), maternal height (P = 0.69), paternal social class (P = 0.12), maternal social class (P = 0.57), year of delivery (P = 0.84) or season of birth (P = 0.41). Whilst mothers whose children were all the same gender were more likely to continue childbearing than those with children of different genders, there was no evidence that those with daughters were more likely to continue than those with sons. Conclusions: The suggestion that some women have a natural predisposition towards having children of a particular gender is not supported by these data. On a population basis there is no evidence to suggest that gender determination is anything other than a chance process. abstract_id: PUBMED:32876546 Do parents really know best? Informed consent to sex assigning and 'normalising' treatment of minors with variations of sex characteristics. This paper aims to explore how pervasive constructions regarding a person's sex and gender identity in society and law limit the human rights of persons with variations of sex characteristics. Societal culture promulgates a binary (male/female) gender ideal which includes standards of normality for our bodies. People who do not easily fit these culturally constructed norms, such as persons with variations of sex characteristics, encounter numerous difficulties. The legal conceptualisation of 'sex' according to the binary maintains the medicalisation of variations of sex characteristics and reinforces the focus on sex 'normalising' treatment of children who are too young to provide their informed consent. The paper makes use of Belgium as an illustration. Not only is comprehensive legal research concerning variations of sex characteristics absent in Belgium, but the country has also been responsive to human rights claims regarding sexual identity in recent years. With regards to the sex assigning or 'normalising' treatment of persons with variations of sex characteristics, this paper argues that by accepting the substitution of the child's informed consent for the opinion of the legal representative in the absence of urgent medical necessity, Belgian law fails to protect the former's right to bodily integrity and best interests. abstract_id: PUBMED:34169681 Seasonal Variations in Sex Steroids in a Young Male Population and Their Relationship with Plasma Levels of Vitamin D. Purpose: Vitamin D (VD) acts on sperm motility, capacitation and survival but its role in steroidogenesis is less clear. Aims: To analyze seasonal variations in sex steroids and VD in a healthy male population. Materials And Methods: Twenty-nine healthy males, 34.0±4.8 years were included. Blood collection in winter (W) and summer (S) was performed to measure: 25OHD, total testosterone (TT), free testosterone (FT), estradiol (E2), luteinizing hormone (LH), and sex hormone binding globulin (SHBG). Testosterone/estradiol (T/E2) ratio was calculated. Results: In W, lower levels of 25OHD: 18.8±7.2 ng/mL vs. 38.8±11.9 ng/mL (p&lt;0.0001) and LH: 3.5±1.2 mU/mL vs. 3.9±1.5 mU/mL (p=0.05), and higher levels of TT: 501.9±157.7 ng/dL vs. 405.0±128.0 ng/dL (p=0.0003), FT: 11.8±4.1 ng/dL vs. 10.2±3.7 ng/dL (p=0.017), SHBG: 28.5±10.9 nmol/L vs. 23.6±7.9 nmol/L (p=0.002) and T/E2 ratio: 30.7±19.7 ng/dL/pg/mL vs. 17.3±3.6 ng/dL/pg/mL (p=0.0015) with no variation in E2 levels were observed. A positive correlation between 25OHD and E2 (r=0.28, p=0.04) and negative correlations between 25OHD and TT (r=-0.27, p=0.049), 25OHD and FT (r=-0.32, p=0.01), and 25OHD and T/E2 (r=-0.44, p=0.0008) were found. Conclusions: In healthy young male population, seasonal variations were observed in 25OHD and LH levels (higher in S) and in TT, FT, SHBG levels, and T/E2 (higher in W). Lower values of TT and FT in S are accompanied by higher levels of LH, which rules out a central mechanism for lowering testosterone. 25OHD negatively correlated with TT, FT, and T/E2 and positively correlated with E2, suggesting a relationship between VD status and changes in gonadal steroids. abstract_id: PUBMED:24946221 DNA copy number variations in patients with 46,XY disorders of sex development. Purpose: Less than 50% of cases of 46,XY disorders of sex development are genetically defined after karyotyping and/or sequencing of known causal genes. Since copy number variations are often missed by karyotyping and sequencing, we assessed patients with unexplained 46,XY disorders of sex development using array comparative genomic hybridization for possible disease causing genomic variants. Materials And Methods: DNA from unexplained cases of 46,XY disorders of sex development were tested by whole genome array comparative genomic hybridization. In cases where novel copy number variations were detected parental testing was performed to identify whether copy number variations were de novo or inherited. Results: Of the 12 patients who underwent array comparative genomic hybridization testing 2 had possible copy number variations causing disorders of sex development, both maternally inherited microdeletions. One case, with a maternal history of premature ovarian failure, had a cosegregating microdeletion on 9q33.3 involving NR5A1. The other case, with a maternal family history of congenital heart disease, had a cosegregating microdeletion on 8p23.1 upstream of GATA4. Conclusions: In this cohort copy number variations involving or adjacent to known causal genes led to 46,XY disorders of sex development in 2 of 12 previously unexplained cases (17%). Copy number variation testing is clinically indicated for unexplained cases of 46,XY disorders of sex development to aid in genetic counseling for family planning. abstract_id: PUBMED:12346256 Rural urban and regional variations in the sex structure of Ethiopia's population. "This paper examines the sex structure of the population of Ethiopia with emphasis on the sex structure of the urban population which is somewhat unique in Africa. An attempt is made to show rural-urban and north-south differences in sex structure using sex [ratio] classes and [the] 'Rural-Urban Sex Ratio Gap'.... Some possible factors and implications of the situation are stated." Data are from a number of official sources, including the preliminary report from the 1984 census. abstract_id: PUBMED:36932330 Temporal and spatial variations in local sex ratios in a suburban population of the European green toad Bufotes viridis. Background: Sex ratios of animal populations are important factors of population demographics. In pond-breeding amphibians, the operational sex ratio (OSR) among the breeding population is usually male-biased. Also, in European green toads (Bufotes viridis), males usually outnumber females at breeding sites, while the sex ratio of the total adult population (ASR) is assumed to be balanced. It has been suggested that sex-specific breeding behavior causes male-predominance at the breeding sites. We used a dataset of 5 years of street patrols to test this hypothesis. For this we analyzed local sex ratios of green toads in terrestrial habitats and at two artificial breeding ponds. We expected temporal and/or spatial changes of local sex ratios which would indicate sex dependent differences in breeding behavior. Results: Overall observed ASR among 2111 green toads, counted in the course of street patrols from 2016 to 2020, was slightly male-biased (ASR = 0.56, annual ASRs = 0.49-0.63). Based on the data of 1631 toads (920 males, 711 females) captured within a radius of 300 m around nine main breeding sites, temporal and spatial variations in local ASRs were evaluated. Resulting values were compared to the calculated OSR at two artificial breeding ponds in 2021 (645 adult: 553 males, 92 females). Estimates predict more equally distributed females and males prior to the main breeding season. During breeding season, males predominated at both breeding sites (B1: 0.83, B2: 0.89), whereas females are estimated to outnumber males in terrestrial habitats. Proportions of females highly significantly increased with advancing time of the year and increasing distance to the breeding sites. While males tended to accumulate in proximity to water bodies, females dispersed soon after breeding to more distant areas. Conclusions: Observed sex ratios in the studied green toad population changed with time and sampling site, deviating from the population-wide sex ratio. Expanding sampling effort in amphibian conservation assessments in time and space, i.e., outside the main breeding season and away from the breeding sites, would be important to encompass such variations. abstract_id: PUBMED:37199261 Navigating the complexities of adult healthcare for individuals with variations of sex characteristics: from paediatric emergencies to a sense of abandonment. Intersex people and those with variations of sex characteristics face significant health and social issues. This paper analyses the complexities of adult healthcare for this diverse population, including the root causes of deficiencies in care provision. Many minors with variations of sex characteristics are subjected to irreversible, non-consensual medical interventions, which can have negative effects on their health and wellbeing as adults. This 'emergency' approach to intersex paediatric healthcare has been challenged since the 1990s, but there is still a lack of understanding about how the paradigm affects adult care. This paper aims to raise awareness of the health challenges faced by adults with variations of sex characteristics. It identifies themes related to the challenges associated with accessing appropriate adult care, including the repercussions of childhood treatment, the lack of transitional services and psychological support, the limited general medical knowledge about variations of sex characteristics, and the reluctance to access services due to fear of stigma or past medical trauma. The paper indicates the need for more attention to intersex people's health needs as adults, moving away from attempts to 'fix' them as minors towards approaches which consider and provide for their diverse healthcare needs in a broader temporal context. abstract_id: PUBMED:19966329 Population-genetic nature of copy number variations in the human genome. Copy number variations (CNVs) are universal genetic variations, and their association with disease has been increasingly recognized. We designed high-density microarrays for CNVs, and detected 3000-4000 CNVs (4-6% of the genomic sequence) per population that included CNVs previously missed because of smaller sizes and residing in segmental duplications. The patterns of CNVs across individuals were surprisingly simple at the kilo-base scale, suggesting the applicability of a simple genetic analysis for these genetic loci. We utilized the probabilistic theory to determine integer copy numbers of CNVs and employed a recently developed phasing tool to estimate the population frequencies of integer copy number alleles and CNV-SNP haplotypes. The results showed a tendency toward a lower frequency of CNV alleles and that most of our CNVs were explained only by zero-, one- and two-copy alleles. Using the estimated population frequencies, we found several CNV regions with exceptionally high population differentiation. Investigation of CNV-SNP linkage disequilibrium (LD) for 500-900 bi- and multi-allelic CNVs per population revealed that previous conflicting reports on bi-allelic LD were unexpectedly consistent and explained by an LD increase correlated with deletion-allele frequencies. Typically, the bi-allelic LD was lower than SNP-SNP LD, whereas the multi-allelic LD was somewhat stronger than the bi-allelic LD. After further investigation of tag SNPs for CNVs, we conclude that the customary tagging strategy for disease association studies can be applicable for common deletion CNVs, but direct interrogation is needed for other types of CNVs. Answer: Yes, there are natural variations in sex ratios within the human population. A study analyzing routine data on singleton infants born in Scotland from 1975 to 1988 found that the sex ratio, which is the proportion of males to females at birth, remained constant at all birth orders and appeared unrelated to the genders of the preceding siblings, maternal age, maternal height, paternal social class, maternal social class, year of delivery, or season of birth. The study concluded that there is no evidence to suggest that gender determination is anything other than a chance process on a population basis (PUBMED:9307533). Additionally, another study examining the sex structure of Ethiopia's population highlighted rural-urban and north-south differences in sex structure, indicating that there can be regional variations in sex ratios (PUBMED:12346256).
Instruction: Extended resection of T4 lung cancer with invasion of the aorta: is it justified? Abstracts: abstract_id: PUBMED:35000034 Atrial resection for T4 non-small cell lung cancer with left atrium involvement: a systematic review and meta-analysis of survival. Purpose: Extended resection for non-small cell lung cancer (NSCLC) with T4 left atrium involvement is controversial. We performed a systematic review and meta-analysis to evaluate the short- and long-term outcomes of this treatment strategy. Methods: We searched the PubMed database for studies on atrial resection in NSCLC patients. The primary investigated outcome was the effectiveness of the surgery represented by survival data and the secondary outcomes were postoperative morbidity, mortality, and recurrence. Results: Our search identified 18 eligible studies including a total of 483 patients. Eleven studies reported median overall survival and 17 studies reported overall survival rates. The estimated pooled 1, 3, 5-year overall survival rates were 69.1% (95% CI 61.7-76.0%), 21.5% (95% CI 12.3-32.3%), and 19.9% (95% CI 13.9-26.6%), respectively. The median overall survival was 24 months (95% CI 17.7-27 months). Most studies reported significant associations between better survival and N0/1 status, complete resection status, and neoadjuvant therapy. Conclusion: Extended lung resection, including the left atrium, for NSCLC is feasible with acceptable morbidity and mortality when complete resection is achieved. Lymph node N0/1 status coupled with the use of neoadjuvant therapies is associated with better outcomes. abstract_id: PUBMED:28349384 Extended surgery for T4 lung cancer: a 30 years' experience. T4 non-small cell lung carcinomas (NSCLC) were deemed unresectable. Advances in surgery have challenged this dogma. We describe technical aspects and result on superior vena cava (SVC), carinal, thoracic inlet tumor surgeries, and resection under cardiopulmonary bypass (CPB). SVC reconstruction requires hemodynamic control to reverse SVC clamping cerebral effects and excellent cephalic venous bed patency. Among 50 SVC resections, including 25 carinal pneumonectomies, post-operative mortality rate was 8%. In the N0-N1 group, 5- and 10-year survival rates were 46.6 and 37.7%, respectively. Right carinal pneumonectomy was performed through right thoracotomy. Sternotomy was favored for left carinal pneumonectomy or carinal resection alone. Among 138 carinal resections, including eight right upper lobectomies, 123 right pneumonectomies, four left pneumonectomies, and three isolated carinal resections, the post-operative mortality rate was 9.4%. In the N0-N1 patients, 5-year survival rate was 47%. 191 patients underwent resections of thoracic inlet tumors through a transclavicular cervicothoracic anterior approach combined in 63 patients with a posterior midline incision for limited spine invasion. In N0-N1 group, 5- and 10-year survival rates were 41.5 and 29.7%, respectively. CPB allowed resection of tumors invading the heart or great vessels in 13 patients. R0 resection and post-operative mortality rate were 94.4 and 5.5%, respectively. In this series of 388 T4 NSCLC, the post-operative mortality rate was 4%. In the R0 and N0-N1 groups, the 5-year survival rates were 44 and 41%, respectively. Surgical resection of T4 locally advanced NSCLC is worth being performed in selected N0-N1 patients, provided that a radical resection is expected. abstract_id: PUBMED:33967430 Extended resection for unexpected invasion of the left sided lung cancer into the liver: combined lung, diaphragm, and liver resection. Complete anatomic lung resection remains the best curative option in patients with early-stage lung cancer. In some cases, extended lung resections are required to achieve R0 resection. Although diaphragmatic invasion and resection is a well-known condition in lung cancer, direct invasion of the diaphragm and liver in lung cancer is rare. We report a 66-year-old man with left-sided lung cancer. Preoperative evaluation revealed the risk of diaphragm invasion, but the liver invasion was detected intraoperatively. In addition to left pneumonectomy, left-sided partial liver and diaphragm resection was performed. At 24 months from the operation, the patient is alive without any disease progression. We believe that combined resection including lung, diaphragm, and liver may have survival benefits in selected cases. abstract_id: PUBMED:16311976 Extended resection of T4 lung cancer with invasion of the aorta: is it justified? Background: We report our 10-year experience of performing surgical resection of T4 lung cancer invading the thoracic aorta. Patients And Methods: From 1994 to 2004, sixteen patients with T4 primary lung cancer with local invasion of the thoracic aorta underwent tumor resection. Surgical resection included 8 pneumonectomies and 8 lobectomies. The histologic type was squamous cell carcinoma in 7 patients, adenocarcinoma in 7, large cell carcinoma in 1, and small cell carcinoma in 1. Complete resection of the tumor with mediastinal lymph node dissection was achieved in 8 patients (50 %), while the resection was incomplete in the other 8 cases. Results: The overall cumulative survival of the 16 patients at 3 and 5 years was 34.7 % and 17.4 %, respectively. The survival of the patients in the complete resection group was found to be 36.5 % at 5 years, with 2 patients surviving more than 5 years without a recurrence, which was significantly better than that of the incomplete resection group ( p = 0.005). Conclusions: Extended aortic resection with primary lung cancer is complex and possibly high risk, but can achieve long-term survival in selected patients. Surgical resection should be considered as a treatment option for T4 lung cancer for this T4 subcategory. abstract_id: PUBMED:25441139 Extended resections of non-small cell lung cancers invading the aorta, pulmonary artery, left atrium, or esophagus: can they be justified? T4 tumors that invade the heart, great vessels, or esophagus comprise a heterogenous group of locally invasive lung cancers. Prognosis depends on nodal status; this relationship has been consistently demonstrated in many of the small series of extended resection. Current National Comprehensive Cancer Network guidelines do not recommend surgery for T4 extension with N2-3 disease (stage IIIB). However, biopsy-proven T4 N0-1 (stage IIIA) may be operable. Localized tumors with invasion of the aorta, pulmonary artery, left atrium, or esophagus represent a small subset of T4 disease. Acquiring sufficient randomized data to provide statistical proof of a survival advantage for patients undergoing extended resections for these neoplasms will likely never be possible.Therefore, we are left to critically analyze current documented experience to make clinical decisions on a case-by-case basis.It is clear that the operative morbidity and mortality of extended resections for locally advanced T4 tumors have significantly improved over time,yet the risks are still high. The indications for such procedures and the anticipated outcomes should be clearly weighed in terms of potential perioperative complications and expertise of the surgical team. Patients with T4 N0-1 have the best prognosis and with complete resection may have the potential for cure. The use of induction therapy and surgery for advanced T4 tumors may improve survival. Current data suggest that for tumors that invade the aorta, pulmonary artery,left atrium, or esophagus, resection should be considered in relation to multidisciplinary care.For properly selected patients receiving treatment at high volume, experienced centers, extended resections may be warranted. abstract_id: PUBMED:26730755 Extended resection of non-small cell lung cancer invading the left atrium, is it worth the risk? Only few reports of surgical approach to T4 lung carcinoma invading the heart have been reported in the medical literature. It is also controversial if such cancer should be treated by surgery. The aim of this review is to assess the current risk/benefit ratio of the surgical management of non-small cell lung cancer (NSCLC) invading the left atrium, especially in the light of a multidisciplinary approach. We also expose our surgical experience and the procedure we have developed in order to increase our rate of complete resection as this criterion appears to be mandatory as well as patients' nodal status in order to increase life expectancy. abstract_id: PUBMED:32676942 The prognostic impact of the mediastinal fat tissue invasion in patients with non-small cell lung cancer. Background: The prognosis of the mediastinal fat tissue invasion in non-small cell lung cancer (NSCLC) patients has not yet been clearly defined. The present study aimed to investigate the prognostic impact of the mediastinal fat tissue invasion in NSCLC patients. Method: We analyzed 36 patients who were found mediastinal fat tissue invasion by pathological evaluation (mediastinal fat group) and 248 patients who were classified as T4-NSCLC according to the 8th TNM classification (T4 group; invasion of other mediastinal structures in 78 patients, ipsilateral different lobe satellite pulmonary nodule in 32 patients, and tumor diameter &gt; 7 cm in 138 patients). Result: Resection was regarded as complete (R0) in 255 patients (89.7%). Mediastinal fat group showed significantly higher incidence of incomplete resection (R1) and more left-sided tumors than the T4 group (p = 0.01, and p = 0.002, respectively). The survival was better in T4 group than mediastinal fat group (median 57 months versus 31 months), although it was not significant (p = 0.205). Even when only N0/1 or R0 patients were analyzed, the survival was not different between two groups (p = 0.420, and p = 0.418, respectively). 5-year survival rates for T4 subcategories (invasion of other structures, ipsilateral different lobe pulmonary nodule, and tumor diameter &gt; 7 cm) were 39.4%, 41.9%, and 50.3%, respectively (p = 0.109). Multivariate analysis showed that age (p &lt; 0.0001), nodal status (p = 0.0003), and complete resection (p &lt; 0.0001) were independently influenced survival. Conclusion: There is no significant difference in the prognosis between mediastinal fat tissue invasion and T4 disease in NSCLC patients. abstract_id: PUBMED:32082816 Extended left pneumonectomy and left atrial reconstruction with autogenous graft in patient with T4 lung cancer. In selective patients with T4 non-small cell lung cancer with the primary tumor invading the mediastinal organs, extended lung resection may contribute to long-term survival. Adequate patients should be given a chance for surgery if complete resection and required reconstruction can be achieved. In this article, we report a 63-year-old male patient with T4 non-small cell lung cancer invading the left atrium. In the patient, we performed an extended left pneumonectomy with en bloc partial resection of the left atrium wall (4×2.5 cm) where the defect was repaired with pericardial patch via cardiopulmonary bypass. No severe complication developed postoperatively. The patient who was given adjuvant chemotherapy has been living for more than 10 years without disease. abstract_id: PUBMED:36630102 The oncologic efficacy of extended thoracic resections. Locally invasive lung cancers pose unique challenges for management. Surgical resection of these tumors can pose high morbidity due to the invasion into surrounding structures, including the spine, chest wall, and great vessels. With advances in immunotherapy and chemoradiation, the role for radical resection of these malignancies and associated oncologic outcomes is evolving. This article reviews the current literature of extended thoracic resections with a focus on technical approach, functional outcomes, and oncologic efficacy. abstract_id: PUBMED:32030268 Extended resections for the treatment of patients with T4 stage IIIA non-small cell lung cancer (NSCLC) (T4N0-1M0) with or without cardiopulmonary bypass: a 15-year two-center experience. Background: Stage IIIA non-small cell lung cancer (NSCLC) is a heterogeneous group of patients, often requiring variable and individualized approaches. The dilemma to operate or not frequently arises, since more than 75% of the cases of NSCLC are diagnosed in advanced stages (IIIA). The main objective of this study was to assess whether the benefits outweigh surgical risks for the T4N0-1M0 subgroup. Methods: Data from 857 patients with locally advanced T4 NSCLC were retrospectively collected from two different institutions, between 2002 and 2017. Clinical data that were retrieved and analyzed, included demographics, comorbidities, surgical details, neoadjuvant or/and adjuvant therapy and postoperative complications. Results: Twelve patients were in the cardiopulmonary bypass (CPB) group and thirty in the non-CPB. The most common types of lung cancer were squamous cell carcinoma (50.0%) and adenocarcinoma (35.7%). The most frequent invasion of the tumor was seen in main pulmonary artery and the superior vena cava. Significantly more patients of the CPB group underwent pneumonectomy as their primary lung resection (P=0.006). In all patients R0 resection was achieved according to histological reports. The overall 5-year survival was 60%, while the median overall survival was 22.5 months. Analysis revealed that patient age (P=0.027), preoperative chronic obstructive pulmonary disease (COPD) (P=0.001), tumor size (4.0 vs. 6.0 cm) (P=0.001), postoperative respiratory dysfunction (P=0.001) and postoperative atelectasis (P=0.036) are possible independent variables that are significantly correlated with patient outcome. Conclusions: We suggest that in patients with stage IIIA/T4 NSCLC, complete resection of the T4 tumor, although challenging, can be performed in highly selected patients. Such an approach seems to result in improved long-term survival. More specific studies on this area of NSCLC probably will further enlighten this field, and may result in even better outcomes, as advanced systemic perioperative approaches such as modern chemotherapy, immunotherapy and improvements in radiation therapy have been incorporated in daily practice. Answer: Extended resection of T4 lung cancer with invasion of the aorta is a complex and potentially high-risk procedure, but it can achieve long-term survival in selected patients (PUBMED:16311976). The decision to perform such surgery should be made on a case-by-case basis, considering the potential perioperative complications and the expertise of the surgical team (PUBMED:25441139). The overall cumulative survival of patients who underwent extended aortic resection with primary lung cancer was 34.7% at 3 years and 17.4% at 5 years, with significantly better outcomes observed in the complete resection group compared to the incomplete resection group (PUBMED:16311976). The prognosis of patients undergoing extended resections depends on factors such as nodal status, with patients having N0-N1 disease showing better survival rates (PUBMED:28349384; PUBMED:25441139). Complete resection (R0) is a critical factor influencing survival, and it is associated with improved outcomes (PUBMED:32676942; PUBMED:32030268). While the risks of extended resections are high, the operative morbidity and mortality have significantly improved over time (PUBMED:25441139). For properly selected patients receiving treatment at high-volume, experienced centers, extended resections may be warranted (PUBMED:25441139). In conclusion, extended resection of T4 lung cancer with invasion of the aorta can be justified in selected patients, particularly when a radical resection is expected, and the patient has favorable nodal status (N0-N1). The decision should involve a multidisciplinary team and consider the individual patient's condition, the potential for complete resection, and the surgical team's expertise (PUBMED:25441139; PUBMED:16311976).
Instruction: Does coronary artery size really matter? Abstracts: abstract_id: PUBMED:15966932 Does coronary artery size really matter? Objective: To evaluate ischemia in right ventricle (RV) in patients with small caliber of right coronary artery (RCA). Patients And Methods: The study population consisted of 60 consecutive patients undergoing coronary angiography within 3 months. The patients were divided into three different groups according to lumen diameter of RCA and coronary dominance. Group 1, 2, and 3 consisted of patients with small diameter of RCA, dominant RCA, and dominant circumflex artery, respectively. RV ischemia was assessed by using pulse-wave tissue Doppler sampling obtained from RV free wall close to the lateral tricuspid annulus at the apical four-chamber view during dobutamine stress echocardiography (DSE). Results: When the mean systolic velocity percentages of increase from low to peak dose dobutamine in patients with small RCA were compared to those of other groups, statistically significant difference was found between group 1 and the other groups (P = 0.007 for group 1 vs group 2; P = 0.01 for group 1 vs group 3). The mean systolic velocity at peak dobutamine dose of patients with small caliber of right coronary artery was statistically lower than the other groups (P = 0.001 for group 1 vs group 2, P &lt; 0.001 group 1 vs group 3). Conclusions: There are a group of patients with small diameter of RCA causing probable ischemia in RV and small caliber of RCA can really matter in these patients. abstract_id: PUBMED:32035525 Coronary artery size in North Indian population - Intravascular ultrasound-based study. Objective: The coronary artery dimensions have important diagnostic and therapeutic implications in management of coronary artery disease (CAD). There is paucity of data on the coronary artery size in the Indian population as measured by intravascular ultrasound (IVUS). Methods: A total of 303 patients with acute coronary syndrome (ACS) undergoing percutaneous coronary intervention (PCI) with intravascular ultrasound underwent analysis along with quantitative coronary angiography (QCA). Of the 492 proximal coronary segments; 221 relating to left main (LM), 164 to left anterior descending artery (LAD), 45 to left circumflex artery (LCX), and 62 to right coronary artery (RCA) were considered. Results: Patient's mean age was 53.37 ± 3.5 years; men 80%; hypertension 35% and diabetes 24.8%. On IVUS, mean minimal lumen diameter as compared to QCA in LM (4.60 mm versus 4.50 mm, p &lt; 0.001), LAD (3.71 mm versus 3.45 mm, p &lt; 0.001), LCX (3.55 mm versus 3.16 mm, p &lt; 0.001) and RCA (3.85 mm versus 3.27 mm, p &lt; 0.001) were significantly larger. Lumen and external elastic membrane (EEM) cross-sectional area (CSA) were larger in males as compared to females with statistical significance for lumen CSA in LM (p = 0.04); RCA (p = 0.02) and EEM CSA in LM (p = 0.03); RCA (p = 0.006) but no significance for adjusted body surface area (BSA). In multivariate models, BSA and age were independent predictors of LM and LAD diameters and areas, but age was an independent predictor indexed to BSA. Conclusion: The coronary artery dimensions by IVUS are significantly larger than QCA. No gender difference in coronary artery size. Age was an independent predictor of coronary artery size in left main and LAD. The coronary artery size may not be a risk factor for acute coronary syndrome. abstract_id: PUBMED:28822520 Coronary artery dimensions in normal Indians. Background: Diameter of coronary artery is an important predictor of outcome after percutaneous coronary interventions and coronary artery bypass graft surgery. There is very limited data available about coronary artery dimensions in an Indian population. Aims: To study the normal dimensions of the coronary artery segments in Indians without coronary artery disease by using quantitative coronary angiography and also to compare the dimensions in Indians with Western. Material And Method: 229 patients who have undergone coronary angiography with entirely normal coronary angiogram were included in our study. Results: This study showed the diameter of vessels in males and females when taken together the left main was larger in size followed by proximal LAD, proximal RCA &amp; proximal LCX respectively (4.08±0.44mm, 3.27±0.23mm, 3.20±0.37mm, 2.97±0.37mm).When the vessel diameter was indexed to body surface area there was no statistical difference between male and female (p value&gt;0.05). The computed value of proximal coronary artery diameter unadjusted for individual body surface area, when compared to Caucasians showed that Caucasians have larger coronary artery dimensions than Indians. But when the proximal vessel diameter was indexed to body surface area there was no statistical significant difference between Indians and Caucasians (p value&gt;0.05). Conclusions: We found that coronary artery size when indexed to body surface area is not statistically different in Indian males and females and compared to Caucasians. However with a smaller body habitus Indians have smaller coronary arteries. abstract_id: PUBMED:8857475 Effect of coronary artery size on the prevalence of atherosclerosis. To investigate the effect of coronary artery size on the prevalence of atherosclerosis, we measured the diameters of the major coronary arteries prospectively in 884 consecutive patients referred for coronary arteriography. For each artery, we assigned patients to 3 groups: group S (small) and group L (large) with diameters &gt;1SD smaller and larger, respectively, than the mean; and group A (average), with diameters within 1SD of the mean. As specified during study design, we compared the frequency of lesions &gt; or = 50% diameter stenosis in groups S and L for each artery. We adjusted for relevant covariates by performing logistic regression on data from all 884 patients with coronary diameter entered as a continuous variable. In group S versus L, respectively, the frequency of &gt; or = 50% lesion was 6.5% versus 2.4% (p = 0.13) in the left main artery; 61.3% versus 35.8% (p = 0.0001) in the right coronary artery; 58.1% versus 40.7% (p = 0.008) in the left anterior descending artery, and 47.4% versus 22.2% (p = 0.0001) in the circumflex artery. Multivariate analysis showed that coronary diameter was a significant independent predictor of lesions in the right coronary artery (p = 0.000001), left anterior descending artery (p = 0.001), and circumflex artery (p = 0.0002) and nearly significant in the left main artery (p = 0.077). Thus, small coronary artery size may be a risk factor for atherosclerosis. abstract_id: PUBMED:36523268 Polygenic coronary artery disease association with brain atrophy in the cognitively impaired. While a number of low-frequency genetic variants of large effect size have been shown to underlie both cardiovascular disease and dementia, recent studies have highlighted the importance of common genetic variants of small effect size, which, in aggregate, are embodied by a polygenic risk score. We investigate the effect of polygenic risk for coronary artery disease on brain atrophy in Alzheimer's disease using whole-brain volume and put our findings in context with the polygenic risk for Alzheimer's disease and presumed small vessel disease as quantified by white-matter hyperintensities. We use 730 subjects from the Alzheimer's disease neuroimaging initiative database to investigate polygenic risk score effects (beyond APOE) on whole-brain volumes, total and regional white-matter hyperintensities and amyloid beta across diagnostic groups. In a subset of these subjects (N = 602), we utilized longitudinal changes in whole-brain volume over 24 months using the boundary shift integral approach. Linear regression and linear mixed-effects models were used to investigate the effect of white-matter hyperintensities at baseline as well as Alzheimer's disease-polygenic risk score and coronary artery disease-polygenic risk score on whole-brain atrophy and whole-brain atrophy acceleration, respectively. All genetic associations were examined under the oligogenic (P = 1e-5) and the more variant-inclusive polygenic (P = 0.5) scenarios. Results suggest no evidence for a link between the polygenic risk score and markers of Alzheimer's disease pathology at baseline (when stratified by diagnostic group). However, both Alzheimer's disease-polygenic risk score and coronary artery disease-polygenic risk score were associated with longitudinal decline in whole-brain volume (Alzheimer's disease-polygenic risk score t = 3.3, PFDR = 0.007 over 24 months in healthy controls) and surprisingly, under certain conditions, whole-brain volume atrophy is statistically more correlated with cardiac polygenic risk score than Alzheimer's disease-polygenic risk score (coronary artery disease-polygenic risk score t = 2.1, PFDR = 0.04 over 24 months in the mild cognitive impairment group). Further, in our regional analysis of white-matter hyperintensities, Alzheimer's disease-polygenic risk score beyond APOE is predictive of white-matter volume in the occipital lobe in Alzheimer's disease subjects in the polygenic regime. Finally, the rate of change of brain volume (or atrophy acceleration) may be sensitive to Alzheimer's disease-polygenic risk beyond APOE in healthy individuals (t = 2, P = 0.04). For subjects with mild cognitive impairment, beyond APOE, a more inclusive polygenic risk score including more variants, shows coronary artery disease-polygenic risk score to be more predictive of whole-brain volume atrophy, than an oligogenic approach including fewer larger effect size variants. abstract_id: PUBMED:22394037 Low density lipoprotein size in relation to carotid intima-media thickness in coronary artery disease. Objective: With increasing interest in the role of non-traditional lipid risk factors in coronary artery disease, we undertook this study to relate LDL subclass size and carotid intima-media thickness of the common carotid artery in coronary artery disease patients. Methods: The study was conducted in 106 patients during their first visit (study group I) and after 12 months (study group II). Intima-media thickness of the common carotid artery was determined using B-mode ultrasound. Separation of LDL subclasses was preformed by 3-31 % polyacrylamide gradient gel electrophoresis. Results: LDL3 was the dominant subclass in both study groups, but there was statistically significant difference in the distribution of dominant LDL subclasses (p&lt;0.01). The mean carotid intima- media thickness was significantly increased (p&lt;0.001) in coronary artery disease patients after 12 months period. There was significant negative correlation between intima-media thickness and LDL size in both study groups (p&lt;0.05). Intima-media thickness was not significantly correlated with plasma lipid concentrations. Multiple regression analyses show that strongest independent predictor of the intima-media thickness variation was diastolic blood pressure, followed by LDL size and age, and accounted for 29 % of the observed variability in intima-media thickness. Conclusion: LDL particle size is independently associated with carotid intima-media thickness in coronary artery disease patients with normal levels of traditional lipid risk. These results imply that small, dense LDL subclasses are an important indicator for assessing atherosclerosis and its progression (Tab. 4, Ref. 39). abstract_id: PUBMED:38409473 Cerebral white matter hyperintensities indicate severity and progression of coronary artery calcification. Cerebral white matter hyperintensities (WMH) have been associated with subclinical atherosclerosis including coronary artery calcification (CAC). However, previous studies on this association are limited by only cross-sectional analysis. We aimed to explore the relationship between WMH and CAC in elderly individuals both cross-sectionally and longitudinally. The study population consisted of elderly stroke- and dementia-free participants from the community-based Austrian Stroke Prevention Family Study (ASPFS). WMH volume and CAC levels (via Agatston score) were analyzed at baseline and after a 6-year follow-up period. Of 324 study participants (median age: 68 years), 115 underwent follow-up. Baseline WMH volume (median: 4.1 cm3) positively correlated with baseline CAC levels in multivariable analysis correcting for common vascular risk factors (p = 0.010). While baseline CAC levels were not predictive for WMH progression (p = 0.447), baseline WMH volume was associated CAC progression (median Agatston score progression: 27) in multivariable analysis (ß = 66.3 ± 22.3 [per cm3], p = 0.004). Ten of 11 participants (91%) with severe WMH (Fazekas Scale: 3) at baseline showed significant CAC progression &gt; 100 during follow-up. In this community-based cohort of elderly individuals, WMH were associated with CAC and predictive of its progression over a 6-year follow-up. Screening for coronary artery disease might be considered in people with more severe WMH. abstract_id: PUBMED:37648961 Is coronary artery calcium an independent risk factor for white matter hyperintensity? Background: Cardiovascular diseases have been considered the primary cause of disability and death worldwide. Coronary artery calcium (CAC) is an important indicator of the severity of coronary atherosclerosis. This study is aimed to investigate the relationship between CAC and white matter hyperintensity (WMH) in the context of diagnostic utility. Methods: A retrospective analysis was conducted on 342 patients with a diagnosis of WMH on magnetic resonance images (MRI) who also underwent chest computed tomography (CT) scans. WMH volumes were automatically measured using a lesion prediction algorithm. Subjects were divided into four groups based on the CAC score obtained from chest CT scans. A multilevel mixed-effects linear regression model considering conventional vascular risk factors assessed the association between total WMH volume and CAC score. Results: Overall, participants with coronary artery calcium (CAC score &gt; 0) had larger WMH volumes than those without calcium (CAC score = 0), and WMH volumes were statistically different between the four CAC score groups, with increasing CAC scores, the volume of WMH significantly increased. In the linear regression model 1 of the high CAC score group, for every 1% increase in CAC score, the WMH volume increases by 2.96%. After including other covariates in model 2 and model 3, the β coefficient in the high CAC group remains higher than in the low and medium CAC score groups. Conclusion: In elderly adults, the presence and severity of CAC is related to an increase in WMH volume. Our findings suggest an association between two different vascular bed diseases in addition to traditional vascular risk factors, possibly indicating a comorbid mechanism. abstract_id: PUBMED:34000829 Association of Coronary Artery Atherosclerosis With Brain White Matter Hyperintensity. Background And Purpose: We aim to determine, in healthy high-risk adults, the association between subclinical coronary artery disease and white matter hyperintensity (WMH) volume and location, independent of atherosclerotic risk factors. Methods: Seven hundred eighty-two asymptomatic first-degree relatives of index cases with early-onset coronary artery disease (&lt;60 years old) from GeneSTAR (Genetic Study of Atherosclerosis Risk) with contemporaneous coronary computed tomography angiography and brain magnetic resonance imaging were analyzed. Multilevel mixed-effects linear regression models, accounting for family structure, evaluated the association of total WMH volume and 3 regions (deep WMH, periventricular WMH [PVWMH], or borderzone [cuff]) with markers of coronary artery disease. Separate models were created for total WMH, deep WMH, PVWMH, and cuff volumes, each, as dependent variables, across coronary computed tomography angiography variables, adjusted for covariates. Results: Mean age was 51 years ±10, with 58% women and 39% African American people. Participants with any coronary plaque had 52% larger WMH volumes than those without plaque (95% CI, 0.24–0.59). Per 1% greater coronary plaque volume, total WMH volumes were 0.07% larger (95% CI, 0.04–0.10). Every 1% higher total coronary plaque volume was associated with 5.03% larger deep WMH volume (95% CI, 4.67–5.38), 5.10% PVWMH larger volume (95% CI, 4.72–5.48), and 2.74% larger cuff volume (95% CI, 2.38–3.09) with differences in this association when comparing deep WMH to PVWMH (P interaction, 0.001) or cuff (P interaction, &lt;0.001), respectively. Conclusions: In healthy, high-risk individuals, the presence and volume of coronary artery plaque are associated with larger WMH volumes, appearing the strongest for PVWMH. These findings in high-risk families suggest a disease relationship in 2 different vascular beds, beyond traditional risk factors, possibly due to genetic predisposition. abstract_id: PUBMED:33466162 Coronary artery size as a predictor of Y-graft patency following coronary artery bypass surgery. Abstract: This study aims to evaluate the association between target vessel size and graft patency in the right IMA-right coronary territory anastomosis.A total of 402 patients who underwent CABG between 2005 and 2016 using the right IMA Y-graft to the posterolateral branch or posterior descending artery were enrolled. Preoperative coronary angiography was utilized to measure the size of the target coronary arteries in the right territory. Follow-up angiography and computed tomography at 6 to 12 months were used to determine graft patency.Thirty patients (7.5%) were found to have occluded graft. Larger target vessel size was associated with lower risk of graft occlusion (OR: 0.18, 95% CI:0.05-0.62, P = .007). The receiver operating characteristic (ROC) curve showed that the cutoff-value of 1.93 mm was found to have the maximum sum of sensitivity and specificity for graft patency (Area under the curve (AUC): 0.65, P = .01). Excluding patients with right coronary artery total occlusion, the discriminative performance of target vessel size became more prominent (AUC: 0.76, P &lt; .01), with same cutoff value.In the setting of bilateral IMA composite grafting, the patency of right IMA to right coronary territory was influenced by the size of the target coronary artery. The influence of target coronary artery size was prominent in non-total occlusion patients. Careful selection of a target anastomosis site is recommended. Answer: Yes, coronary artery size does matter. Several studies have demonstrated the clinical significance of coronary artery size in various contexts. One study found that patients with a small diameter of the right coronary artery (RCA) showed probable ischemia in the right ventricle, indicating that a small caliber of RCA can be clinically relevant (PUBMED:15966932). Another study using intravascular ultrasound (IVUS) found that coronary artery dimensions are significantly larger than those measured by quantitative coronary angiography (QCA), suggesting that coronary artery size may not be a risk factor for acute coronary syndrome (PUBMED:32035525). Research on the Indian population showed that while coronary artery size, when indexed to body surface area, is not statistically different between Indian males and females or compared to Caucasians, Indians generally have smaller coronary arteries due to a smaller body habitus (PUBMED:28822520). Furthermore, a study indicated that small coronary artery size may be a risk factor for atherosclerosis, as smaller arteries had a higher frequency of lesions (PUBMED:8857475). In the context of cognitive impairment and dementia, polygenic risk for coronary artery disease was associated with brain atrophy, suggesting a link between coronary artery health and brain structure (PUBMED:36523268). Additionally, LDL particle size was found to be independently associated with carotid intima-media thickness in coronary artery disease patients, implying that small, dense LDL subclasses are an important indicator for assessing atherosclerosis and its progression (PUBMED:22394037). White matter hyperintensities (WMH), which are indicative of small vessel disease in the brain, have been associated with the severity and progression of coronary artery calcification (CAC), further highlighting the importance of coronary artery size and health in relation to other vascular diseases (PUBMED:38409473, PUBMED:37648961, PUBMED:34000829). Lastly, coronary artery size has been shown to be a predictor of graft patency following coronary artery bypass surgery, with larger target vessel size associated with lower risk of graft occlusion (PUBMED:33466162). In summary, coronary artery size is an important factor in the risk and progression of atherosclerosis, ischemia, and its association with other vascular diseases and surgical outcomes.
Instruction: Number of non-diabetic drugs: a simple comorbidity indicator for diabetes? Abstracts: abstract_id: PUBMED:11599200 Number of non-diabetic drugs: a simple comorbidity indicator for diabetes? Introduction: The number of non-diabetic drugs, taken by a patient with diabetes at any one point in time, has been validated in previous studies as a comorbidity indicator. Aim: The aim of the paper is to examine the relationship between this comorbidity indicator and health status in people with Type 2 diabetes. Method: The analysis presented is from a prospective cohort study of people with Type 2 diabetes before and after commencing insulin therapy, with simultaneous collection of health status, clinical and other comparative data. Results: Of the 48 people for whom both health status and drug data were available, 26 (54%) were taking at least one non-diabetic drug and 16 (33%) were taking 3 or more non-diabetic drugs, at the baseline assessment. There were no significant relationships between the number of non-diabetic drugs taken, and age, duration of diabetes or baseline HbA1c measurements. However, there were statistically significant relationships between the number of non-diabetic drugs and health status, in terms of depression and physical function. Conclusion: Drug data are routinely recorded in primary care and therefore the number of non-diabetic drugs is a potentially widely available indicator. This indicator could be a useful, simple addition to datasets that not only proxies comorbidity but also relates to patients' physical function and depression status. abstract_id: PUBMED:32208583 Geographic variability of major non-traumatic lower limb amputations in diabetic and non-diabetic patients in Poland. Introduction: High amputation of the lower limb not only causes immense physical disability but also the destruction of the patient's mental state, and helps to shorten life of patients with diabetes. The incidence of amputations in diabetic patients is 10 times higher in comparison to non-diabetic subjects (2.8% vs. 0.29%). The purpose of the study is an analysis of the geographic variability of major non-traumatic lower limb amputation in diabetic and non-diabetic patients in Poland. Material And Methods: All major non-traumatic lower limb amputations performed for the first time, in particular databetween 1 January 2013 - 31 December 2013, and between 1 January 2014 - December 2014, were identified in the National Health Fund (NHF) database. In the presented study, the patients were grouped in relevant provincial departments of the NHF according to their place of residence, and not according to the hospital where lower limb amputation was performed. Results: In 2013 in Poland, 4,727 major non-traumatic lower limb amputation were performed in diabetic patients, and 4,350 in 2014. On the other hand, in non-diabetic patients, 3,469 major non-traumatic lower limb amputations were performed in 2013, and 3149 in 2014. The mean number indicator of major non-traumatic lower limb amputations in diabetic patients in Poland, compared to the average indicator of amputations in patients without diagnosed diabetes in Poland was 19.9-fold in 2013 and 19.4-fold higher in 2014. Conclusions: In populations of diabetic patients and individuals without diagnosed diabetes major non-traumatic lower limb amputations are performed over 19-fold more frequently. abstract_id: PUBMED:38419958 Parallelism and non-parallelism in diabetic nephropathy and diabetic retinopathy. Diabetic nephropathy (DN) and diabetic retinopathy (DR), as microvascular complications of diabetes mellitus, are currently the leading causes of end-stage renal disease (ESRD) and blindness, respectively, in the adult working population, and they are major public health problems with social and economic burdens. The parallelism between the two in the process of occurrence and development manifests in the high overlap of disease-causing risk factors and pathogenesis, high rates of comorbidity, mutually predictive effects, and partial concordance in the clinical use of medications. However, since the two organs, the eye and the kidney, have their unique internal environment and physiological processes, each with specific influencing molecules, and the target organs have non-parallelism due to different pathological changes and responses to various influencing factors, this article provides an overview of the parallelism and non-parallelism between DN and DR to further recognize the commonalities and differences between the two diseases and provide references for early diagnosis, clinical guidance on the use of medication, and the development of new drugs. abstract_id: PUBMED:15372417 Perceived health-related quality of life and comorbidity in diabetic patients starting dialysis (CALVIDIA study). Background: Diabetes mellitus (DM) is a widespread prevalent illness, currently the main cause of end-stage renal disease (ESRD). Material And Methods: In a longitudinal, prospective study we compared two cohorts of patients starting dialysis therapy, diabetic and non-diabetic ESRD patients. Perceived health was measured by the Medical Outcomes Study Short-Form 36 (SF-36) questionnaire, functional status by the Karnofsky scale and comorbidity by the Charlson age-comorbidity index. A broad spectrum of variables in relation to diabetes, ESRD, comorbidity and renal replacement therapy (RRT) were studied, as well as the distribution of comorbidity frequencies at dialysis start. Results: Thirty-four Spanish centers included 232 diabetic patients, 43 type 1 and 189 type 2, mean diabetes duration 18 +/- 9 yrs, and five centers included 121 non-diabetic patients. Out of the 232 diabetic patients, 187 patients (81%) started hemodialysis (HD) and 45 patients (19%) started peritoneal dialysis (PD) (vs. 82% and 18%, respectively in non-diabetic patients). Transient vascular access (VA) for starting RRT was required in 54% of the diabetic patients vs. 53% in the nondiabetic patients. When both study groups were compared, diabetic patients required antihypertensive drugs more frequently than non-diabetic patients and showed higher systolic blood pressure (BP), as well as higher cardiovascular (CV) complication incidences, poorer SF-36 physical component summary scores and mental component summary scores and worse Karnofsky scale scores, with the Charlson age-comorbidity score being higher. Conclusion: Diabetic patients starting dialysis in Spain are more often type 2 diabetics, have worse perceived health-related quality of life (HRQoL) in relation to non-diabetic patients, worse functional status and higher incidences of prognostic mortality markers. abstract_id: PUBMED:35522863 THE ROLE OF COMORBIDITY IN THE CLINICAL COURSE AND QUALITY OF LIFE OF PATIENTS WITH DIABETIC POLYNEUROPATHY. Objective: The aim: To identify and substantiate the role of comorbidity in the clinical course and quality of life (QOL) of patients with diabetic polyneuropathy (DP). Patients And Methods: Materials and methods: We examined 139 patients aged from 19 to 69 years with DP occured as a consequence from type I and II diabetes mellitus (DM). The examined persons were divided into two groups: DP due to type I and II DM with comorbidity (group A,n=93) and without comorbidity (group B,n=46). For the patients was done a comprehensive clinical and neurological examination, laboratory, instrumental methods of examination. Results: Results: We observe hypo- or areflexia much more in group A respect to reflexes on the upper and lower extremities than in group B, where the changes are more noticeable on the lower extremities. The level of QOL in group A is significantly lower than in group B. According to the McGill scale in group A, all indicators of pain characteristics are higher. Quite a high score in group A on the Pain Rating Index(PRI) - 32.17±1.57points. The lowest rates of the nerve conduction velocity (NCV) on the motor fibers were registered in group A, on the sensitive fibers of the upper extremities has got lower rates in groups A and B than in the control group, but in group A it is slightly higher. Conclusion: Conclusions: Clinical manifestations of DP in group A are more pronounced than in the comparison group and a wide range of comorbidity was diagnosed, including cardiovascular, which aggravates the manifestations of DP. abstract_id: PUBMED:26247456 Comorbidity and Health Care Resource Use Among Commercially Insured Non-Elderly Patients With Diabetic Macular Edema. Background And Objective: Diabetic macular edema (DME) is a leading cause of blindness for non-elderly adults; however, health care-associated burden data from this population is lacking. The authors describe health care-associated burden in non-elderly patients with DME compared to those with diabetes and no DME. Patients And Methods: In this retrospective, large-cohort study examines enrollment and health care claims (2007 to 2011) from a national database of insured patients aged 18 to 63 years (mean: 51). Comorbidity and health care utilization differences between patients with DME (n = 24,326) and matched controls with diabetes but no DME (n = 122,710) were analyzed over 1 and 3 years. Results: DME patients had significantly more baseline comorbidities, and generally developed them at a higher rate over the study. Health care resource utilization rates were significantly higher in DME patients for every category analyzed. Patients with DME averaged more than 10 health care visits more than those with diabetes but no DME (25.5 vs 14.9; P &lt; .001). Conclusion: Working-age patients with DME exhibit a complicated comorbidity profile and high associated burden of health care consumption. Considering this burden is critical for managing this complex population. abstract_id: PUBMED:26707757 Screening for diabetic retinopathy by non-mydriatic retinal camera in the region of Fez Purpose: The goal of our study was to determine the benefits of screening by non-mydriatic retinal camera in the Fez region and to present the results of this screening. Patients And Methods: We report a prospective study of 430 diabetic patients, or 860 eyes, compiled between December 2012 and June 2013 in the Dokkarat health center, "integrated referral center for management of diabetes and hypertension" in the region of Fez. All patients underwent an examination by non-mydriatic fundus camera, obtaining two digital images per eye. The photos are taken by a nurse and a general practitioner. Interpretation of the images was performed in the ophthalmology department of the Hassan II University Medical Center of Fez. Results: A total of 430 patients were screened (78.4% female, 21.6% male). The mean age was 54.7 years (range, 10-80 years). The average duration of diabetes was 7.7 years (±5.9). Most patients had type 2 diabetes (94%). Insulin therapy was found in 35.3%. The mean HbA1c (obtained from 421 patients) was 8.21%. 38,8% of patients had never consulted an ophthalmologist before. The fundus photographs were not interpretable for at least one eye in 75 cases (17.4%). Among the interpretable images, the prevalence of diabetic retinopathy (DR) was 44.6%. Proliferative diabetic retinopathy was present in at least one eye in 2.5% and diabetic maculopathy in 32 cases (9%). After screening, 69% of patients with DR were seen by the ophthalmologist. Conclusion: These data confirm the benefits of screening by non-mydriatic retinal camera, because it helps reintegrate patients into the healthcare system and thus provides a real benefit in terms of public health and prevention of complications. abstract_id: PUBMED:33825028 Electrophysiology as a prognostic indicator of visual recovery in diabetic patients undergoing cataract surgery. Purpose: Visual outcomes after cataract surgery in diabetic patients with retinal or visual pathway disease are difficult to predict as the fundus may be obscured, and assessment of visual potential is challenging. This study assessed the value of visual electrophysiology as a prognostic indicator of visual recovery in diabetic patients with cataract, prior to cataract surgery. Methods: Forty-one diabetic patients (aged 52-80; 74 eyes) and 13 age-matched non-diabetic control patients (21 eyes) were examined prior to cataract surgery. Pre-surgical examinations included best-corrected visual acuity (BCVA), slit-lamp bio-microscopy, ISCEV-standard full-field electroretinography (ffERG), and flash visual evoked potential (flash VEP) testing. Electrophysiological assessments included quantification of the DA and LA ERG, oscillatory potentials (OPs; OP1, OP2, OP3, OP4) and flash VEP P1, P2, and P3 components. Post-operative BCVA was measured in all cases and the diabetic patients grouped according to the severity of visual acuity loss: mild (logMAR ≤ 0.1), moderate (0.1 &lt; logMAR &lt; 0.5), or severe (logMAR ≥ 0.5). A fourth group included those without diabetes. The pre-surgical electrophysiological data was compared between the four groups by analysis of variance. Results: The severity of post-surgical visual acuity loss in the diabetic patients was classified as mild (N=22 eyes), moderate (N=31 eyes), or severe (N=21 eyes). In the group without diabetes, post-surgical visual impairment was classified as mild (N=21 eyes). The pre-operative DA 10.0 ERG a-wave amplitudes, DA 3.0 ERG OP2 amplitudes, and the LA 3.0 a- and b-wave amplitudes showed best significant differences among the four groups. The flash VEP did not show significant difference between groups. Conclusion: Electrophysiological assessment of diabetic patients with cataract can provide a useful measure of retinal function. Full-field ERG components, including the DA 10.0 ERG a-wave, DA 3.0 ERG OP2 component, and the LA 3.0 a- and b-wave amplitudes, are of prognostic value in predicting post-surgical visual acuity, and may inform the surgical management of cataract patients with diabetes. abstract_id: PUBMED:29551929 Comparative analysis of diabetic nephropathy and non-diabetic nephropathy disease. Objective: Clinical symptoms of diabetic nephropathy patients and non-diabetic nephropathy are compared and analyzed, hemodialysis effect and quality of life of two kinds of nephrotic patients are analyzed. Methods: Respectively extract 1300 cases of diabetic nephropathy and non-diabetic nephropathy patients admitted to different hospitals during December 2011-December 2014. Based on whether the patient suffers from diabetes, they were divided into diabetic group and control group. Hemodialysis of two groups of patients were followed up to observe effectiveness of blood treatment, and complications were observed after one year of follow-up. Results: Hematodialysis effectiveness of diabetic nephropathy patients is significantly lower than that of non-diabetic nephropathy group. After 1 year's follow-up, it can be found that survival rate of diabetic nephropathy patients is much lower than that of control group. In statistical comparison of data involved in the two groups of patients, P &lt; 0.05, the difference is statistically significant. Conclusion: Treatment effect of diabetic nephropathy patients is relatively poor compared to that of non-diabetic patients. In clinics, management and prevention of diabetic patients should be strengthened to avoid complication of nephropathy which brings serious injury to patients. abstract_id: PUBMED:36538528 Bidirectional association between depression and diabetic nephropathy by meta-analysis. Background: Studies suggested that the association between depression and diabetic nephropathy may be bi-directional, but this hypothesis remains investigating. In this meta-analysis, the bi-directional relationship between depression and diabetic nephropathy was investigated. Methods: A search for the publications on depression and diabetic nephropathy in the databases of PubMed, Web of science, and Embase from the earliest available to August 2022 was conducted. Two sets of pooled risk estimates were calculated using random effects models: diabetic nephropathy predicting depression and depression predicting diabetic nephropathy. Cross-sectional studies were assessed using Agency for Healthcare Research and Quality (AHRQ), cohort and case-control studies were assessed using Newcastle-Ottawa Scale (NOS). Result: Of the 974,121 patients in 30 clinical studies, 24 studies met eligibility for diabetic nephropathy predicting onset of depression, representing 28,438 incident cases. The other 6 studies met criteria for depression predicting onset of diabetic nephropathy, representing 945,683 incident cases. The pooled odds ratio (OR) of diabetic nephropathy predicting depression was 1.46 (95% CI 1.27-1.67). The OR of depression predicting diabetic nephropathy was 1.22 (95% CI 1.13-1.31). Conclusion: This meta-analysis shows that the relationship between depression and diabetic nephropathy may be bidirectional. Diabetic nephropathy may be a predictor of depression, and depression may also be an indicator of diabetic nephropathy. The mechanisms underlying the bidirectional relationship need to be further investigated and interventions of the comorbidity of depression and diabetic nephropathy need be studied in clinical practice. Answer: Yes, the number of non-diabetic drugs taken by a patient with diabetes has been validated as a comorbidity indicator for diabetes. A study examining the relationship between this comorbidity indicator and health status in people with Type 2 diabetes found that there were statistically significant relationships between the number of non-diabetic drugs and health status, specifically in terms of depression and physical function (PUBMED:11599200). This suggests that the number of non-diabetic drugs could serve as a simple and useful addition to datasets to proxy comorbidity and relate to patients' physical function and depression status.
Instruction: "Excessive tears": a diagnostic sign for cocaine-induced mood disorder? Abstracts: abstract_id: PUBMED:17445519 "Excessive tears": a diagnostic sign for cocaine-induced mood disorder? Objective: This study aims to investigate excessive tearfulness as a sign to help determine the etiology of depressive symptoms. Methods: Clinicians were instructed to document the occurrence of excessive tearfulness in all patients seen in the Psychiatric Emergency Service. Excessive tearfulness was defined as crying to the severity that at least 1 tear drops off the face. A comparison group was formed from the general population of the Psychiatric Emergency Service. Results: Excessive tearfulness was documented in 36 different patients. The prevalence of the sign in the general population was 1.9%. Patients with excessive tearfulness were more likely to have cocaine in their urine (P &lt; .0001, chi(2) test), receive a substance-related primary diagnosis (P &lt; .0001, chi(2) test), and be admitted for psychiatric hospitalization (P &lt; .001 chi(2) test). Patients with excessive tearfulness had significantly shorter voluntary hospital stays (P &lt; .05, t test). Conclusion: Excessive tearfulness could be a useful clinical sign of cocaine-induced depression. abstract_id: PUBMED:31049275 A Case Study of Acute Stimulant-induced Psychosis. Psychosis resulting from stimulant overuse is commonly observed in clinical practices today. This is in large part due to the significant increase in attention-deficit/hyperactive disorder (ADHD) diagnoses in recent years, the increase in methamphetamine trafficking, and the prevalence of stimulant use in the day-to-day activities by many of those living in today's fast-paced society. The current inability to predict those users who will experience stimulant-induced psychosis from those users who will not needs further investigation. In this case study, we examine why one particular stimulant user experiences stimulant-induced psychosis. We give an account of a patient with an acute psychotic episode admitted to the emergency room at a local hospital. During initial evaluation, it was discovered that he had been diagnosed with ADHD one month ago and that the onset of his symptoms was likely related to an overuse of amphetamines. The patient had no personal history or family history of any psychiatric or mood disorders. He had no history of recreational drug use prior to this incident, and had no history of seizures or head trauma. After appropriate pharmacological intervention with antipsychotics and psychological intervention, the patient showed gradual improvement over the next five days of his hospitalization. After the five days, he was allowed to be discharged. We conclude with the cautionary advice that a patient's susceptibility of experiencing stimulant-induced psychosis should be assessed before prescribing stimulants. In cases where stimulant use is not supervised by a physician, appropriate understanding of management of stimulant-induced psychosis is of utmost importance in order to provide the very best patient education and care. abstract_id: PUBMED:18199307 Psychiatric comorbidity in young cocaine users: induced versus independent disorders. Aims: To examine the psychiatric status of young cocaine users using a validated instrument for the evaluation of psychiatric comorbidity, emphasizing the distinction between independent and induced psychiatric conditions. Design: Cross-sectional study. Setting: Barcelona, Spain. Participants: A cohort of 139 young (18-30 years) adult current regular cocaine users. Measurements: The Psychiatric Research Interview for Substance and Mental Disorders (PRISM-IV, which produces diagnoses according to DSM-IV criteria, including Axis II antisocial and borderline personality disorders). Findings: Nearly 42.5% of the subjects presented psychiatric comorbidity. The most common Axis I diagnoses were mood disorders (26.6%) and anxiety disorders (13%). Increasing age, having ever received treatment for drug use and freebase cocaine use were associated with substance-induced disorders diagnoses relative to primary Axis I disorders. Conclusions: This study has shown a relatively high prevalence of psychiatric comorbidity in cocaine users recruited in non-clinical settings. Future studies examining potential differential factors associated with primary versus substance-induced disorders are necessary to optimize the implementation of more suitable approaching programmes for young regular cocaine users. abstract_id: PUBMED:10365197 Cocaine-induced mood disorder: prevalence rates and psychiatric symptoms in an outpatient cocaine-dependent sample. This paper attempts to examine and compare prevalence rates and symptom patterns of DSM substance-induced and other mood disorders. 243 cocaine-dependent outpatients with cocaine-induced mood disorder (CIMD), other mood disorders, or no mood disorder were compared on measures of psychiatric symptoms. The prevalence rate for CIMD was 12% at baseline. Introduction of the DSM-IV diagnosis of CIMD did not substantially affect rates of the other depressive disorders. Patients with CIMD had symptom severity levels between those of patients with and without a mood disorder. These findings suggest some validity for the new DSM-IV diagnosis of CIMD, but also suggest that it requires further specification and replication. abstract_id: PUBMED:15081791 Prenatal cocaine exposure potentiates paroxetine-induced desensitization of 5-HT2A receptor function in adult male rat offspring. This study investigated the ability of prenatal exposure to cocaine to alter serotonin(2A) (5-HT(2A)) receptor function and paroxetine-induced desensitization of 5-HT(2A) receptor function in rat offspring. Following exposure to saline or (-)cocaine (15 mg/kg, s.c., b.i.d.), during gestational days 13-20, adult male offspring were treated with either saline or paroxetine (10 mg/kg/day, i.p. 14 days). Eighteen hours post-treatment, changes in the stimulation of oxytocin, adrenocorticotropic hormone (ACTH) and corticosterone by (-)4-iodo-2,5-dimethoxyphenylisopropylamine (DOI, 0.5 or 2.0 mg/kg, s.c.) and in 5-HT(2A) receptor densities were determined. Prenatal cocaine exposure did not alter 5-HT(2A) receptor-mediated neuroendocrine responses or 5-HT(2A) receptor densities. In contrast, paroxetine treatment reduced cortical 5-HT(2A) receptors (18-25%) and desensitized 5-HT(2A) receptor-mediated oxytocin responses in both offspring groups. Furthermore, in cocaine offspring, paroxetine produced an inhibition of 5-HT(2A) receptor-mediated increase in plasma ACTH levels and a greater attenuation of the oxytocin responses to (-)DOI. Paroxetine-induced reductions in body weight gain (-8.8%) were comparable in both offspring groups. These data, demonstrating that prenatal exposure to cocaine potentiates paroxetine-induced desensitization of 5-HT(2A) receptor function, may be clinically relevant with respect to treating mood disorders in adults exposed in utero to cocaine. abstract_id: PUBMED:15990489 When acute-stage psychosis and substance use co-occur: differentiating substance-induced and primary psychotic disorders. Substances such as alcohol, cocaine, amphetamine, and cannabis can produce psychotic reactions in individuals who are otherwise free of serious mental illness. However, persons with primary psychotic disorders, such as schizophrenia and bipolar disorder, who use these substances often present for treatment with signs and symptoms similar to those whose psychosis resulted from the use of drugs alone. While it is often difficult to distinguish substance-induced from primary psychoses, especially early in the course of treatment, this differential diagnosis has important implications for treatment planning. To help clinicians distinguish these two types of presentations, the authors first review the types of psychotic symptoms that can co-occur with substance use. They discuss the prevalence and patterns of substance use that have been found in patients with schizophrenia and other primary psychotic disorders and review the negative outcomes associated with substance use in this population. The prevalence of and types of symptoms and problems associated with psychotic symptoms that occur as a result of substance use alone are also reviewed. The authors describe assessment procedures for differentiating substance-induced and primary psychotic disorders. They stress the importance of accurately establishing the temporal relationship between the substance use and the onset and continuation of psychotic symptoms in making a differential diagnosis, as well as the importance of being familiar with the types of psychological symptoms that can occur with specific substances. The authors review the utility and limitations of a number of diagnostic instruments for assessing patients with co-occurring psychosis and substance use problems, including The Addiction Severity Index, The Michigan Alcohol Screening Test, and diagnostic interviews such as the Schedule for Affective Disorders and Schizophrenia and the Structured Clinical Interview for DSM. They then discuss the Psychiatric Research Interview for Substance and Mental Disorders (PRISM), an instrument that has been developed to address the lack of a diagnostic interview that is suitable for assessing the comorbidity of substance use and psychiatric disorders. The article concludes with a discussion of the importance of an appropriate match between diagnosis and treatment and the current state of our knowledge concerning the most appropriate types of treatment interventions for patients with substance-induced psychosis and those with dual diagnoses. abstract_id: PUBMED:8760520 Cocaine abuse and mental disease The comorbidity between psychiatric diseases and cocaine abuse is extensive. Several studies show that the prevalence of specific personality disorders, eating disorders, attention-deficit-disorder and affective disorders is high in cocaine abusers. Excessive cocaine consumption can lead to a toxic schizophrenic psychosis. The comorbidity has been understood in several ways, using many theoretical approaches. The different hypotheses should not be considered contradictive. Explanations involving genetic, psychosocial, psychological and biological approaches are necessary for a complete understanding of the comorbidity. abstract_id: PUBMED:15488477 Chronic antidepressant medication attenuates dexamethasone-induced neuronal death and sublethal neuronal damage in the hippocampus and striatum. Dexamethasone, a synthetic corticosteroid, which can induce a range of mood disorders including depression and affective psychosis, is toxic to specific hippocampal and striatal neuronal populations. Chronic administration of antidepressants can induce neuroprotective effects, potentially by raising cellular levels of brain-derived neurotrophic factor (BDNF). We accordingly tested the hypothesis that chronic pretreatment of rats (Sprague-Dawley, male) with antidepressants would attenuate dexamethasone-induced neuronal damage as revealed by reductions in the level of neuronal death and in sublethal neuronal damage shown by the increase in the number of MAP-2 immunoreactive neurons. In support of this hypothesis, we demonstrate that chronic treatment with a range of antidepressants prior to dexamethasone administration (0.7 mg/kg, i.p.) attenuated the levels of neuronal death and loss of MAP-2 immunoreactivity in both the hippocampus and striatum. The antidepressants used were: desipramine (8 mg/kg, i.p., tricyclic), fluoxetine (8 mg/kg, i.p., selective serotonin reuptake inhibitor) and tranylcypromine (10 mg/kg, i.p., monoamine oxidase inhibitor) with each drug being injected once per day for 10 days. In contrast, acute injection of none of the antidepressants exerted a protective effect from dexamethasone-associated neuronal damage. Similarly, injection of neither cocaine nor chlordiazepoxide (benzodiazepine) exerted protective effects when injected either chronically or acutely. The observed protection from dexamethasone-induced neuronal damage is in keeping with the potential of chronic antidepressant medication to increase BDNF levels. The potential for dexamethasone to induce disorders of mood by damaging specific neuronal populations in the hippocampus and dorsomedial striatum is discussed. abstract_id: PUBMED:22910475 Psychiatric comorbidity in cocaine users treated in therapeutic community: substance-induced versus independent disorders. This is a cross-sectional study of 227 cocaine dependent individuals in six different therapeutic communities (TCs) within a single treatment network in Andalusia (Spain). The primary aim of the study is to examine the prevalence of lifetime psychiatric comorbidity in this sample using the Psychiatric Research Interview for Substance and Mental Disorders (PRISM). Diagnoses were assessed 15-20 days after admission. The data indicate that more than 65% of the sample experienced a lifetime co-occurring psychiatric comorbidity. Substance-induced mood (21.6%) and psychotic (11.5%) disorders were more prevalent in this population than independent mood (12.3%) and psychotic (7.5%) disorders. These data suggest the need to introduce changes in these centers, both in the diagnostic aspects and in the treatment programs. abstract_id: PUBMED:31271832 Cholinergic Receptor Blockade in the VTA Attenuates Cue-Induced Cocaine-Seeking and Reverses the Anxiogenic Effects of Forced Abstinence. Drug relapse after periods of abstinence is a common feature of substance abuse. Moreover, anxiety and other mood disorders are often co-morbid with substance abuse. Cholinergic receptors in the ventral tegmental area (VTA) are known to mediate drug-seeking and anxiety-related behavior in rodent models. However, it is unclear if overlapping VTA cholinergic mechanisms mediate drug relapse and anxiety-related behaviors associated with drug abstinence. We examined the effects of VTA cholinergic receptor blockade on cue-induced cocaine seeking and anxiety during cocaine abstinence. Male Sprague-Dawley rats were trained to self-administer intravenous cocaine (~0.5 mg/kg/infusion, FR1 schedule) for 10 days, followed by 14 days of forced abstinence. VTA infusion of the non-selective nicotinic acetylcholine receptor antagonist mecamylamine (0, 10, and 30 μg/side) or the non-selective muscarinic receptor antagonist scopolamine (0, 2.4 and 24 μg /side) significantly decreased cue-induced cocaine seeking. In cocaine naïve rats, VTA mecamylamine or scopolamine also led to dose-dependent increases in open arm time in the elevated plus maze (EPM). In contrast, rats that received I.V. cocaine, compared to received I.V. saline rats, displayed an anxiogenic response on day 14 of abstinence as reflected by decreased open arm time in the EPM. Furthermore, low doses of VTA mecamylamine (10 μg /side) or scopolamine (2.4 μg /side), that did not alter EPM behavior in cocaine naive rats, were sufficient to reverse the anxiogenic effects of cocaine abstinence. Together, these data point to an overlapping role of VTA cholinergic mechanisms to regulate relapse and mood disorder-related responses during cocaine abstinence. Answer: The study with PUBMED:17445519 investigated the occurrence of excessive tearfulness in patients seen in the Psychiatric Emergency Service and found that excessive tearfulness was more likely to be present in patients who had cocaine in their urine, received a substance-related primary diagnosis, and were admitted for psychiatric hospitalization. The study concluded that excessive tearfulness could be a useful clinical sign of cocaine-induced depression. Therefore, based on this study, "excessive tears" could indeed be considered a diagnostic sign for cocaine-induced mood disorder.
Instruction: Can granisetron injection used as primary prophylaxis improve the control of nausea and vomiting with low- emetogenic chemotherapy? Abstracts: abstract_id: PUBMED:23534775 Can granisetron injection used as primary prophylaxis improve the control of nausea and vomiting with low- emetogenic chemotherapy? Background: The purpose of this study is to examine the risk of uncontrolled chemotherapy-induced nausea and vomiting (CINV) among patients receiving low emetogenic chemotherapy (LEC) with and without granisetron injection as the primary prophylaxis in addition to dexamethasone and metochlopramide. Materials And Methods: This was a single-centre, prospective cohort study. A total of 96 patients receiving LEC (52 with and 42 without granisetron) were randomly selected from the full patient list generated using the e-Hospital Information System (e-His). The rates of complete control (no CINV from days 1 to 5) and complete response (no nausea or vomiting in both acute and delayed phases) were identified through patient diaries which were adapted from the MASCC Antiemesis Tool (MAT). Selected covariates including gender, age, active alcohol consumption, morning sickness and previous chemotherapy history were controlled using the multiple logistic regression analyses. Results: Both groups showed significant difference with LEC regimens (p&lt;0.001). No differences were found in age, gender, ethnic group and other baseline characteristics. The granisetron group indicated a higher complete response rate in acute emesis (adjusted OR: 0.1; 95%CI 0.02-0.85; p=0.034) than did the non-granisetron group. Both groups showed similar complete control and complete response rates for acute nausea, delayed nausea and delayed emesis. Conclusions: Granisetron injection used as the primary prophylaxis in LEC demonstrated limited roles in CINV control. Optimization of the guideline-recommended antiemetic regimens may serve as a less costly alternative to protect patients from uncontrolled acute emesis. abstract_id: PUBMED:35290510 A retrospective study on chemotherapy-induced nausea and vomiting in highly/moderately emetogenic chemotherapy: incidence and prescribing practice. Background: Chemotherapy-induced nausea vomiting (CINV) is a common and significant problem in oncology patients and rated as one of cancer chemotherapy's most distressing side effects. The objectives of this study are to describe the incidence of CINV in highly and moderately emetogenic chemotherapy-treated patients and the prescribing pattern of CINV prophylaxis. Methods: This retrospective, cross-sectional single-center study randomly collected data on demographics, CINV episodes, and prescribing patterns for adult oncology patients receiving intravenous highly or moderately emetogenic chemotherapy (HEC/MEC) between January and December 2019. Results: A total of 419 randomly selected records of HEC/MEC recipients with 2388 total chemotherapy cycles were included. The mean age was 53.6 ± 12.6 years old. The majority was female (66%), Malay (54.4%), diagnosed with cancer stage IV (47.7%), and with no comorbidities (47%). All patients were prescribed with IV granisetron and dexamethasone before chemotherapy for acute prevention, whereas dexamethasone and metoclopramide were prescribed for delayed prevention. Aprepitant was not routinely prescribed for the prevention of CINV. CINV incidence was 57% in the studied population and 20% in the total cycle. This study found a significant association between CINV incidence with performance status and cisplatin-based chemotherapy (OR = 3.071, CI = 1.515-6.223, p = 0.002; OR = 4.587, CI = 1.739-12.099, p = 0.02, respectively). Conclusion: CINV incidence was rather high per patient but relatively low per cycle. Most patients were prescribed with dual regimen antiemetic prophylaxis. Impact: This study provides evidence that there was suboptimal use of recommended agents for CINV, and there is a clear need for further improvements in CINV management. abstract_id: PUBMED:19068468 Prophylaxis of chemotherapy-induced vomiting and nausea Chemotherapy-induced vomiting and nausea is the most common adverse event of anticancer therapy. In different guide-lines (MASCC, NCCN, ESMO and ASCO) antiemetic prophylaxis is directed toward the emetogenic potential of the chemotherapy and the type of vomiting and nausea. Chemotherapeutic agents are classified into four emetic risk groups: high, moderate, low, and minimal. Steroids, dexamethasone, metoclopramide, cannabinoids, benzodiazepines, 5-HT3 receptor antagonists (ondansetron, granisetron, tropisetron) and a new group of antiemetics, the neurokinin1 receptor antagonists are used to prevent anticipatory, acute and delayed vomiting and nausea. This paper examines evidence-based recommendations for optimal use of antiemetics. abstract_id: PUBMED:27259339 Economic evaluation of 5-HT3 receptor antagonists in combination with dexamethasone for the prevention of 'overall' nausea and vomiting following highly emetogenic chemotherapy in Chinese adult patients. Background Two pivotal Phase III trials compared the efficacy of palonosetron, ondansetron and granisetron, combined with dexamethasone, for the prevention of nausea and vomiting following highly emetogenic chemotherapy. However, an economic evaluation of these three regimens in the real-world setting of Chinese adult patients has not been determined. Objectives To estimate, from the perspective of the Chinese healthcare system, which of these frequently used strategies consisting of 0.25 mg palonosetron (0.25P), 16 mg ondansetron (Onda), and 3 mg granisetron (Gran), is the most cost-effective option in patients following highly emetogenic chemotherapy. Methods A Markov decision-analytic model was developed. The health and economic outcomes of the three strategies; 0.25P, Onda, and Gran were investigated. The clinical and utility data were taken from published studies. The cost data were calculated according to current local Chinese practices. Sensitivity analyses were performed to determine the impact of uncertainty regarding the results. Results The base-case analysis showed that the 0.25P strategy yielded maximum health benefits compared with the other two strategies. However, the probabilistic sensitivity analysis demonstrated that the Gran strategy was the most cost-effective approach when the willingness-to-pay threshold was not more than US$22,515/quality-adjusted life year. Moreover, palonosetron is not cost-effective in preventing 'overall' nausea and vomiting following highly emetogenic chemotherapy in Chinese patients. Conclusions Our analysis suggests that, compared with palonosetron and ondansetron, 3 mg granisetron may be a cost-effective treatment option in the current Chinese healthcare setting. abstract_id: PUBMED:10408847 Granisetron compared with prednisolone plus metopimazine as anti-emetic prophylaxis during multiple cycles of moderately emetogenic chemotherapy. This randomized, double-blind, double-dummy parallel study compared the anti-emetic efficacy and tolerability of the serotonin antagonist granisetron with prednisolone plus the dopamine D2 antagonist metopimazine during nine cycles of moderately emetogenic chemotherapy. Chemotherapy naive women with stage I or II breast cancer scheduled to intravenous cyclophosphamide, fluorouracil and methotrexate or cyclophosphamide, epirubicin and fluorouracil every 3 weeks were included. Patients received a single intravenous dose of granisetron 3 mg or a 3-day oral treatment with prednisolone 25 mg once a day plus metopimazine 30 mg four times a day. A total of 223 women were enrolled and 218 patients (97.8%) were evaluable for efficacy. Granisetron (n = 109) was superior to prednisolone plus metopimazine (n = 109) in the prophylaxis of acute nausea and vomiting during the first cycle of chemotherapy (P &lt; 0.001) and prednisolone plus metopimazine was superior on days 2-5 (P = 0.002). Overall, granisetron was superior on days 1-5 (P = 0.009). The median number of cycles completed with granisetron was five (95% confidence interval 4-6) compared with two (95% confidence interval 2-2) for prednisolone plus metopimazine (P = 0.0019). Constipation and rash were reported more frequently with granisetron (P &lt; 0.001 and P = 0.043 respectively) and palpitations more frequently with prednisolone plus metopimazine (P = 0.015). In conclusion, the number of cycles completed with granisetron was significantly higher than the number completed with prednisolone plus metopimazine, but the anti-emetic efficacy of both treatments declined during multiple cycles of moderately emetogenic chemotherapy. abstract_id: PUBMED:27583032 Neurokinin-1 inhibitors in the prevention of nausea and vomiting from highly emetogenic chemotherapy: a network meta-analysis. A network meta-analysis of the comparative effectiveness of neurokinin 1 (NK-1) inhibitors in the prophylaxis of highly emetogenic chemotherapy induced nausea and vomiting has been conducted. Eligible studies included randomized trials evaluating aprepitant, fosaprepitant, netupitant (NEPA), casopitant and rolapitant containing regimens in the setting of highly emetogenic chemotherapy. Primary outcomes of interest include complete response (CR) and rate of no significant nausea. After preclusion of ineligible studies, 19 studies were included in the final analysis. The majority of the regimens containing NK-1 inhibitors (including NEPA, aprepitant/palonosetron (palono)/dexamethasone (dexa), casopitant/granisetron (grani) or ondansetron (ondan)/dexa, aprepitant/ondan/dexa) are better than regimens not containing them (palono/dexa, ondan/dexa, grani/dexa) in terms of achieving a CR in the overall phase. Moreover, casopitant/grani or ondan/dexa and aprepitant/grani or ondan/dexa are better than rolapitant/ondan or grani/dexa in terms of CR achievement [odds ratio (OR) 1.62, 95% credible interval (CrI) 1.14-2.23, and OR 1.28, 95% CrI 1.01-1.59, respectively]. Taking into consideration the limitations of cross-trial comparisons, regimens containing neurokinin inhibitors are associated with higher CR rates than regimens not containing them. Moreover, casopitant and aprepitant regimens seem to be more effective than rolapitant regimens. abstract_id: PUBMED:15838618 Low-dose granisetron for prophylaxis of acute chemotherapy-induced nausea and vomiting: a pilot study. Objectives: Chemotherapy-induced nausea and vomiting (QTNV) are very uncomfortable symptoms for patients with cancer, which can be circumvented in most of them with drug combinations containing serotonin receptor antagonists (5-HT3 receptor antagonists) such as granisetron. In an attempt to decrease costs of QTNV prophylaxis, we studied a lower dose regimen of granisetron. Patients And Methods: Sixty patients with cancer scheduled to receive moderately/highly emetogenic chemotherapy were pretreated 1 h before with 0.5 mg granisetron p.o. combined with dexamethasone 20 mg i.v. Results: We observed complete control for nausea, vomiting, and nausea and vomiting in 78% [95% confidence interval (CI), 67-89%], 61% (95% CI, 47.5-74.5%), and 58% (95% CI, 44.3-71.7%) of the patients, respectively. This regimen was very well tolerated; headache (35%), xerostomia (11%), and constipation (5%) were the most frequent adverse symptoms reported. Conclusions: The regimen with lower dose granisetron is effective for acute QTNV prophylaxis and offers a cheaper alternative for QTNV control. We feel that these encouraging results should be confirmed in a randomized comparative trial. abstract_id: PUBMED:9261525 Delayed emesis induced by moderately emetogenic chemotherapy: do we need to treat all patients? The Italian Group for Antiemetic Research. Background: The pattern and prognostic factors of delayed nausea and vomiting induced by moderately emetogenic chemotherapy have not yet been adequately studied. Patients And Methods: Data are derived from a large and well defined population of patients studied to evaluate the efficacy and tolerability of granisetron, dexamethasone and their combination in the control of acute emesis over repeated cycles of moderately emetogenic chemotherapy. Patients were monitored on days 2-5 without receiving any prophylactic treatment for delayed emesis. Results: There were 395 evaluable patients at the first, 352 at the second and 319 at the third cycle of chemotherapy. Overall, 32.7% patients suffered symptoms of delayed vomiting and/or moderate-severe nausea at the first, 32.1% at the second and 35.1% at the third cycle of chemotherapy. More precisely, the incidence of delayed vomiting was 21.3% at the first, 18.5% at the second and 21.0% at the third cycle of chemotherapy, while the incidence of delayed moderate-severe nausea was 25.1%, 25.0% and 26.0%, respectively. The incidence of delayed vomiting and moderate-severe nausea was very low in patients who obtained good control of acute vomiting and moderate-severe nausea, but it remained substantial in patients who experienced them during the first 24 hours after chemotherapy. In fact, at first/second/third cycle of chemotherapy delayed vomiting occurred in 12.3%/11.8%/ 13.1% of patients who did not suffer from acute vomiting and in 54.1%/48.4%/55.9% of those who had acute vomiting, respectively, Similar data were obtained for delayed moderate-severe nausea. Conclusions: Patients without acute vomiting or moderate-severe acute nausea may not need any antiemetic prophylaxis for delayed vomiting or nausea, while those with a history of acute vomiting or moderate-severe acute nausea should always be treated for delayed emesis. Selection bias and dependence effect of delayed emesis on acute emesis can cause misinterpretation of data derived from clinical trials in patients submitted to multiple cycles of chemotherapy. abstract_id: PUBMED:9391542 A comparative study of intravenous granisetron versus intravenous and oral ondansetron in the prevention of nausea and vomiting associated with moderately emetogenic chemotherapy. We conducted a prospective, randomized, open, single-center, parallel group study comparing the anti-emetic efficacy and toxicity of granisetron with that of ondansetron in patients receiving moderately emetogenic chemotherapy. From December 1994 to May 1995, patients who were to receive moderately emetogenic chemotherapy for the first time or who had not received chemotherapy (80 to 100 mg/m2 of cisplatin or 40 mg/m2 of doxorubicin) within 4 weeks previously were enrolled in this study. The following anti-emetic regimens were used: 3 mg of granisetron were given intravenously before chemotherapy for a single dose; 8 mg of ondansetron were given intravenously before chemotherapy and then every 8 hours for a total of 3 doses, plus 8 mg of an oral maintenance dose every 12 hours for 5 consecutive days. We evaluated 97 patients (48 received granisetron and 49 received ondansetron). In the first 24 hours after chemotherapy, complete and major responses were achieved in 76.6% of the patients receiving granisetron and in 72.9% of patients receiving ondansetron (p = 0.9033). Additionally, there was no difference in the control of delayed nausea and vomiting between the two groups (51.1% versus 54.2%, p = 0.9200), and there were no significant adverse effects or toxicities. We have concluded that a single dose of granisetron is as effective in prophylaxis of emesis induced by moderately emetogenic chemotherapy as a triple dose of ondansetron plus oral maintenance. abstract_id: PUBMED:37564446 The Effect and Safety of Olanzapine on Nausea and Vomiting in Children Receiving Moderately Emetogenic Chemotherapy. Background: In order to improve the complete recovery of nausea and vomiting, we conducted a study with the aim of preventing acute and delayed nausea and vomiting in children undergoing moderate emetogenic chemotherapy. Materials And Methods: A clinical trial study was done on 130 children received chemotherapy. Patients received olanzapine and placebo. All groups received granisetron along with dexamethasone (DEX). The severity of chemotherapy-induced nausea and vomiting (CINV) induced by chemotherapy was compared in two groups. Results: The severity of nausea on the first, second, third, and fourth days was not significantly different (P &gt; .05) in two groups. The number of patients without vomiting was significantly different during the first 24 hours after chemotherapy between patients in the two groups (82.3% vs 64.5%; P = .016). Conclusion: This study showed that olanzapine, which acts as an inhibitor of neurotransmitters, had a favorable efficacy in controlling acute and delayed CINV. More studies with large sample size are needed to compare the effect of olanzapine with other agents including aprepitant and palonosetron in the prevention of CINV. Answer: The study described in abstract PUBMED:23534775 investigated the effectiveness of granisetron injection as primary prophylaxis for controlling chemotherapy-induced nausea and vomiting (CINV) in patients receiving low emetogenic chemotherapy (LEC). The study was a single-centre, prospective cohort study that included 96 patients, with 52 receiving granisetron and 42 not receiving it. The results showed that the granisetron group had a higher complete response rate in acute emesis compared to the non-granisetron group. However, both groups showed similar complete control and complete response rates for acute nausea, delayed nausea, and delayed emesis. The conclusion drawn from this study was that granisetron injection used as primary prophylaxis in LEC demonstrated limited roles in CINV control, suggesting that optimization of guideline-recommended antiemetic regimens may be a less costly alternative to protect patients from uncontrolled acute emesis. Therefore, based on this study, granisetron injection can improve the control of acute emesis in patients receiving low emetogenic chemotherapy, but its role in controlling acute nausea, delayed nausea, and delayed emesis is limited. It suggests that while granisetron may be beneficial for some aspects of CINV control, other antiemetic strategies should also be considered for comprehensive management of CINV in patients undergoing LEC.
Instruction: Do male and female general practitioners differently prescribe chronic pain drugs to older patients? Abstracts: abstract_id: PUBMED:25521663 Do male and female general practitioners differently prescribe chronic pain drugs to older patients? Objective: The aim of this study was to identify the relationship between general practitioner (GP) gender and prescribing practice of chronic pain drugs in older adults. Design: Cross-sectional observational study. Setting: GPs in private practice throughout France. Subjects: Two hundred and sixty GPs (80.8% male and 19.2% female) enrolled 1,379 (28.4% male and 71.6% female) noninstitutionalized patients over 65 years of age, suffering from chronic pain. Methods: A comparison of prescribing habits between male and female GPs was performed on baseline data with univariate analyses followed by multivariate analyses after taking several confounding factors into account. Results: No significant differences were found when comparing male and female GPs' prescriptions of World Health Organization step 1, step 2, and step 3 analgesics. Male GPs were more likely than female GPs to prescribe antineuropathic pain drugs (11.3% of patients with male GPs versus 4.8% of patients with female GPs, P = 0.004) and less likely to prescribe symptomatic slow-acting drugs for osteoarthritis (SySADOA) (10.2% of male GPs' patients versus 18.8% of female GPs' patients, P = 0.0003). After adjusting for several confounding factors, male GPs were still more likely to prescribe antineuropathic pain drugs (OR 2.43, 95% CI 1.15-5.14, P = 0.02) and less likely to prescribe symptomatic slow-acting drugs (OR 0.64, 95% CI 0.42-0.97, P = 0.03). Conclusion: Male and female GPs prescribe analgesics in a similar manner. However, male GPs prescribe more antineuropathic pain drugs, but fewer SySADOA. abstract_id: PUBMED:29970456 Knowledge and attitudes of Australian general practitioners towards medicinal cannabis: a cross-sectional survey. Objectives: To examine the knowledge and attitudes of Australian general practitioners (GP) towards medicinal cannabis, including patient demand, GP perceptions of therapeutic effects and potential harms, perceived knowledge and willingness to prescribe. Design, Setting And Participants: A cross-sectional survey completed by 640 GPs (response rate=37%) attending multiple-topic educational seminars in five major Australian cities between August and November 2017. Main Outcome Measures: Number of patients enquiring about medicinal cannabis, perceived knowledge of GPs, conditions where GPs perceived it to be beneficial, willingness to prescribe, preferred models of access, perceived adverse effects and safety relative to other prescription drugs. Results: The majority of GPs (61.5%) reported one or more patient enquiries about medicinal cannabis in the last three months. Most felt that their own knowledge was inadequate and only 28.8% felt comfortable discussing medicinal cannabis with patients. Over half (56.5%) supported availability on prescription, with the preferred access model involving trained GPs prescribing independently of specialists. Support for use of medicinal cannabis was condition-specific, with strong support for use in cancer pain, palliative care and epilepsy, and much lower support for use in depression and anxiety. Conclusions: The majority of GPs are supportive or neutral with regards to medicinal cannabis use. Our results highlight the need for improved training of GPs around medicinal cannabis, and the discrepancy between GP-preferred models of access and the current specialist-led models. abstract_id: PUBMED:29187328 Survey of the prescription of step-2 analgesics in persons aged 75 years and older in general practitioners. Although step-2 analgesics (S2A) are recommended when step-1 analgesics (S1A) are ineffective or in cases of moderate to severe pain, their efficacy varies. This study evaluated the place attributed by general practitioners (GPs) to S2A in the treatment of nociceptive pain in elderly persons (EP). Descriptive, transversal and declarative study conducted among GPs in Burgundy. In total, 115 GPs took part in the survey. Among these, 60.8% had at least one consultation/day for pain. After paracetamol, which was systematically preferred, 56.4% prescribed S2A for EP. The reasons for prescribing S2A included pain not relieved by S1A in 93% of cases, another medical treatment or not (42.6%), or the existence of moderate to severe pain (31.3%). The use of S2A was more frequent in acute pain (79.1%) than in chronic pain (53%). Among GPs, 77.4% initially prescribed the minimal effective dose. GPs were reluctant to prescribe S2As for the following reasons: poor tolerance (78.3%) and the fear of severe adverse effects (57.4%). The most frequently cited adverse effects were nausea/vomiting and constipation by 66.1% of GPs. Nonetheless, 73.9% thought that S2As were necessary for EPs requiring outpatient care. Although the benefit/risk ration of S2As is controversial and their analgesic efficacy seems to be more variable than that of low-dose morphine, they have an important place in the therapeutic arsenal of GPs to treat pain in EP. abstract_id: PUBMED:32516932 Prescribing Analgesics to Older People: A Challenge for GPs. Background: Due to population aging, there is an increase in the prevalence of chronic diseases, and in particular musculoskeletal diseases. These trends are associated with an increased demand for prescription analgesics and an increased risk of polypharmacy and adverse medication reactions, which constitutes a challenge, especially for general practitioners (GPs), as the providers who are most responsible for the prescription policy. Objectives: To identify patterns of analgesics prescription for older people in the study area and explore associations between a long-term analgesic prescription and comorbidity patterns, as well as the prescription of psychotropic and other common medications in a continuous use. Methods: A retrospective study was conducted in 2015 in eastern Croatia. Patients were GP attenders ≥40 years old (N = 675), who were recruited during their appointments (consecutive patients). They were divided into two groups: those who have been continuously prescribed analgesics (N = 432) and those who have not (N = 243). Data from electronic health records were used to provide information about diagnoses of musculoskeletal and other chronic diseases, as well as prescription rates for analgesics and other medications. Exploratory methods and logistic regression models were used to analyse the data. Results: Analgesics have been continuously prescribed to 64% of the patients, mostly to those in the older age groups (50-79 years) and females, and they were indicated mainly for dorsalgia symptoms and arthrosis. Non-opioid analgesics were most common, with an increasing tendency to prescribe opioid analgesics to older patient groups aged 60-79 years. The study results indicate that there is a high rate of simultaneous prescription of analgesics and psychotropic medications, despite the intention of GPs to avoid prescribing psychotropic medications to patients who use any option with opioid analgesics. In general, receiving prescription analgesics does not exceed the prescription for chronic diseases over the rates that can be found in patients who do not receive prescription analgesics. Conclusion: Based on the analysis of comorbidities and parallel prescribing, the results of this study can improve GPs' prescription and treatment strategies for musculoskeletal diseases and chronic pain conditions. abstract_id: PUBMED:28970388 General Practitioners and Chronic Non-Malignant Pain Management in Older Patients: A Qualitative Study. Chronic non-malignant pain (CNMP) is commonly managed by General Practitioners (GPs) in primary care. Analgesics are the mainstay of CNMP management in this setting. Selection of medications by GPs may be influenced by micro factors which are relevant to the practice setting, meso factors which relate to the local or regional environment or macro factors such as those arising from national or international influences. The aim of this study is to explore influences on GP practises in relation to pain management for older adults with CNMP. Semi-structured interviews were conducted with 12 GPs. Transcripts were organised using the Framework Method of Data Management while an applied thematic analysis was used to identify the themes emerging from the data. Clinical considerations such as the efficacy of analgesics, adverse effects and co-morbidities strongly influence prescribing decisions. The GPs interviewed identified the lack of guidance on this subject in Ireland and described the impact of organisational and structural barriers of the Irish healthcare system on the management of CNMP. Changes in practice behaviours coupled with health system reforms are required to improve the quality and consistency of pharmacotherapeutic management of CNMP in primary care. abstract_id: PUBMED:36184866 Diagnosing and managing patients with chronic pain who develop prescription opioid use disorder: A scoping review of general practitioners' experience. Background And Objectives: Prescription opioid use disorder (pOUD) is an important sequela of long-term prescribed opioids for chronic pain. General practitioners (GPs) may not systematically diagnose or manage this; however, it is unclear why. Method: This scoping review searched multiple databases to assess GPs' experience diagnosing and managing patients prescribed opioids for chronic pain who have developed pOUD. Results: The 19 included articles report high levels of GP concern regarding opioid diversion, inappropriate use, abuse, misuse, diversion, dependence and addiction. Confidence screening and detecting pOUD is mixed, and few screen systematically. The most common response is declining to prescribe rather than diagnosing and managing pOUD. Discussion: GPs experience high levels of conflict when considering potential pOUD in their patients with chronic pain prescribed opioids. Their experiences diagnosing and managing pOUD are not fully understood. Further theory-based research may help to understand this and assist future policy directions, programs and research priorities. abstract_id: PUBMED:23279722 An evaluation of the prescription of opioids for chronic nonmalignant pain by Australian general practitioners. Objective: Our objective was to evaluate the quality of opioid analgesia prescribing in chronic nonmalignant pain (CNMP) by general practitioners (GPs, family physicians). Design: An anonymous, cross-sectional questionnaire-based survey. Setting: The setting was five Australian divisions of general practice (geographically based associations of GPs). Methods: A questionnaire was mailed to all division members. Outcome measures were adherence to individual recommendations of locally derived CNMP practice guidelines. Results: We received 404 responses (response rate 23.3%). In the previous fortnight, GPs prescribed long-term continuous opioids for CNMP for a median of 4 and a mean of 7.1 (±8.7) patients with CNMP. Guideline concordance (GLC) was poor, with no GP always compliant with all guideline items, and only 31% GPs usually employing most items. GLC was highest for the avoidance of high dosages or fast-acting formulations. It was lowest for strategies minimizing individual and public health harms, such as the initiation of opioids on a time-limited trial basis, use of contracts, and the preclusion or management of aberrant behaviors. GLC was positively associated with relevant training or qualifications, registration with the Australian Prescription Drug Monitoring Programme, being an opioid substitution therapy prescriber, and female gender. Conclusions: In this study, long-term opioids were frequently initiated for CNMP without a quality use-of-medicine approach. Potential sequelae are inadequate treatment of pain and escalating opioid-related harms. These data suggest a need for improved resourcing and training in opioid management across pain and addictions. abstract_id: PUBMED:21415808 How do general practitioners assess low back pain Web sites? Study Design: Cross-sectional study of teaching general practitioners (TGPs) in the South of France. Objective: To determine what TGPs think about francophone Internet sites dedicated to chronic low back pain (CLBP). Summary Of Background Data: Patients with CLBP often seek medical information about their symptoms on the Internet. The quality of the information is usually poor. To date, the opinion of TGPs concerning the quality of CLBP Web sites remains unknown. Methods: We invited the 112 TGPs in the Southern region to participate. The participants used an assessment scale to evaluate 7 Web sites about CLBP in semicontrolled conditions. The quality and the importance of the different types of information (medical, nonmedical) and the design were measured on a visual analog scale (0-100) and with the point-sharing method (scale of 0-10). Results: In total, 47 TGPs responded. The median quality score for the 7 Web sites was 50 (range 30-79). The score for medical information was 52.5 (10-80), for nonmedical information it was 60.4 (20-90), and for design it was 56.4 (30-85). For nonmedical information, the median relative weight was 4, and for both medical information and design it was 3. Conclusion: French TGPs believe that CLBP Web sites should focus on nonmedical, practical advice; they consider the design at least as important as the medical information. This viewpoint seems to conflict in part with patients' expectations. Good-quality and interactive Web sites could reduce this discordance. A list of approved sites should be available for general practitioners to recommend to their patients, to supplement information given during the consultation. abstract_id: PUBMED:27993126 What influences general practitioners' use of exercise for patients with chronic knee pain? Results from a national survey. Background: Exercise is a recommended 'core' treatment for chronic knee pain (CKP), however it appears to be underused by general practitioners (GPs). While behavioural theories suggest that attitudes and beliefs influence behaviours, no single theory reliably predicts GPs' behaviours. A theoretical analysis framework, developed from sociocognitive theories, was used to underpin investigation of the key influences associated with GPs' use of exercise for patients with CKP, to inform future interventions to optimise GPs' use of exercise. Methods: A cross-sectional postal questionnaire survey investigated UK GPs' reported use of exercise based on a patient case vignette. Factors influencing GPs' exercise use (behaviour) were examined using attitude statements, free-text questions and multiple response option questions related to factors within the analysis framework. Unadjusted logistic regression analyses explored the associations between GPs' attitudes/beliefs and behaviour. Results: From a total sample of 5000 GPs, 835 (17%) returned a questionnaire. Most respondents (n = 729, 87%) reported that they would use exercise. Factors significantly associated with exercise use (OR (95% CI)) included GPs' beliefs about their role (belief that GPs should give information on type, duration and frequency of exercise (30.71 (5.02,188.01)), beliefs about consequences (agreement that knee problems are improved by local (3.23 (1.94,5.39)) and general exercise (2.63 (1.38,5.02))), moral norm (agreement that GPs should prescribe all patients local (3.08 (1.96,4.83)) and general exercise (2.63 (1.45,4.76))), and GP-related beliefs about capabilities (prior experience of insufficient expertise to give detailed exercise information (0.50 (0.33,0.76)). Whilst perceived time limitations were not associated with exercise use (1.00 (0.33,3.01)), GPs who disagreed that they experienced time limitations were more likely to suggest general (2.17 (1.04,4.55)), or demonstrate local (2.16 (1.06,4.42)), exercises. Conclusion: GPs' attitudes and beliefs are associated with their use of exercise for patients with CKP, particularly beliefs about role, responsibilities and skills in initiating exercise, and about the efficacy of exercise. Although the low response risks response bias, these results can inform future interventions to optimise GPs' behaviour. The role of GP uncertainty and influences on clinical decision-making need further exploration, thus an amended analysis framework is suggested, which should be tested in future research. abstract_id: PUBMED:24119920 The role of somatic health problems in the recognition of depressive and anxiety disorders by general practitioners. Background: Recognition of depression and anxiety by general practitioners (GPs) is suboptimal and there is uncertainty as to whether particular somatic health problems hinder or facilitate GP recognition. The objective of this study was to investigate the associations between somatic health problems and GP recognition of depression and anxiety. Methods: We studied primary care patients with a DSM-IV based psychiatric diagnosis of depressive or anxiety disorder during a face-to-face interview (n=778). GPs' registrations of depression and anxiety diagnoses, based on medical file extractions, were compared with the DSM-IV based psychiatric diagnoses as reference standard. Somatic health problems were based on self-report of several chronic somatic diseases and pain symptoms, using the Chronic Pain Grade (CPG), during the interview. Results: Depression and anxiety was recognized in sixty percent of the patients. None of the health problems were negatively associated with recognition. Greater severity of pain symptoms (OR=1.18, p=.02), and chest pain (OR=1.56, p=.02), in particular, were associated with more GP recognition of depression and anxiety. Mediation analyses showed that depression and anxiety in these patients were better recognized through the presence of more severe psychiatric symptoms. Limitations: Some specific chronic diseases had low prevalence. Conclusions: This study shows that the presence of particular chronic diseases does not influence GP recognition of depression and anxiety. GPs tend to recognize depression and anxiety better in patients with pain symptoms, partly due to more severe psychiatric symptoms among those with pain. Answer: Yes, male and female general practitioners (GPs) do prescribe chronic pain drugs differently to older patients. A study found that while male and female GPs prescribe analgesics in a similar manner, male GPs are more likely to prescribe antineuropathic pain drugs and less likely to prescribe symptomatic slow-acting drugs for osteoarthritis (SySADOA) compared to female GPs. Specifically, 11.3% of patients with male GPs were prescribed antineuropathic pain drugs versus 4.8% of patients with female GPs, and 10.2% of male GPs' patients were prescribed SySADOA compared to 18.8% of female GPs' patients. These differences remained significant even after adjusting for several confounding factors (PUBMED:25521663).
Instruction: Is the recent WHO histological classification for gastric cancer helpful for application to endoscopic resection? Abstracts: abstract_id: PUBMED:26324820 Is the recent WHO histological classification for gastric cancer helpful for application to endoscopic resection? Background: Endoscopic resection is performed in undifferentiated-type early gastric cancer (UD-EGC), including poorly differentiated (PD) adenocarcinoma and signet ring cell (SRC) carcinoma. We previously found that different approaches are needed for PD adenocarcinoma and SRC carcinoma for curative resection. However, according to the 2010 WHO classification, diffuse-type PD adenocarcinoma and SRC carcinoma are categorized in the "poorly cohesive carcinomas." Thus, we assessed whether the WHO classification is helpful when endoscopic resection is performed for treatment of UD-EGC. Methods: We analyzed clinicopathological features of 1295 lesions with SRC carcinoma and PD adenocarcinoma treated by open surgery. We recategorized them into intestinal-type PD adenocarcinomas and poorly cohesive carcinomas (SRC carcinoma, diffuse-type PD adenocarcinoma). We also recategorized 176 lesions treated by endoscopic resection into intestinal-type PD adenocarcinomas and poorly cohesive carcinomas. Results: According to the open surgery data, the rates of lymph node metastasis (LNM) and lymphovascular invasion were significantly lower in SRC carcinoma than in diffuse-type and intestinal-type PD adenocarcinomas. The rates of LNM and lymphovascular invasion were significantly higher in diffuse-type PD adenocarcinoma than in SRC carcinoma. Endoscopic resection data showed no recurrence if the carcinoma was curatively resected. However, the commonest cause of noncurative resection was different in SRC carcinoma and PD adenocarcinoma. A positive lateral margin was the commonest cause in SRC carcinoma versus a positive vertical margin in both intestinal-type and diffuse-type PD adenocarcinoma. Conclusions: The clinical behavior differs in diffuse-type PD adenocarcinoma and SRC carcinoma. On the basis of LNM and outcomes of endoscopic resection, the recent WHO classification may not be helpful when endoscopic resection is performed for treatment of UD-EGC. abstract_id: PUBMED:24661135 Endoscopic mucosal resection and endoscopic submucosal dissection of epithelial neoplasia of the colon. Endoscopic mucosal resection (EMR) with curative intent has evolved into a safe and effective technique and is currently the gold standard for management of large colonic epithelial neoplasms. Piecemeal EMR is associated with a high risk of local recurrence requiring vigilant surveillance and repeat interventions. Endoscopic submucosal dissection (ESD) was introduced in Japan for the management of early gastric cancer, and has recently been described for en bloc resection of colonic lesions greater than 20 mm. En bloc resection allows accurate histological assessment of the depth of invasion, minimizes the risk of local recurrence and helps determine additional therapy. Morphologic classification of lesions prior to resection allows prediction of depth of invasion and risk of nodal metastasis, allowing selection of the appropriate intervention. This review provides an overview of the assessment of epithelial neoplasms of the colon and the application of EMR and ESD techniques in their management. abstract_id: PUBMED:37274561 Recent advances in endoscopic management of gastric neoplasms. The development and clinical application of new diagnostic endoscopic technologies such as endoscopic ultrasonography with biopsy, magnification endoscopy, and narrow-band imaging, more recently supplemented by artificial intelligence, have enabled wider recognition and detection of various gastric neoplasms including early gastric cancer (EGC) and subepithelial tumors, such as gastrointestinal stromal tumors and neuroendocrine tumors. Over the last decade, the evolution of novel advanced therapeutic endoscopic techniques, such as endoscopic mucosal resection, endoscopic submucosal dissection, endoscopic full-thickness resection, and submucosal tunneling endoscopic resection, along with the advent of a broad array of endoscopic accessories, has provided a promising and yet less invasive strategy for treating gastric neoplasms with the advantage of a reduced need for gastric surgery. Thus, the management algorithms of various gastric tumors in a defined subset of the patient population at low risk of lymph node metastasis and amenable to endoscopic resection, may require revision considering upcoming data given the high success rate of en bloc resection by experienced endoscopists. Moreover, endoscopic surveillance protocols for precancerous gastric lesions will continue to be refined by systematic reviews and meta-analyses of further research. However, the lack of familiarity with subtle endoscopic changes associated with EGC, as well as longer procedural time, evolving resection techniques and tools, a steep learning curve of such high-risk procedures, and lack of coding are issues that do not appeal to many gastroenterologists in the field. This review summarizes recent advances in the endoscopic management of gastric neoplasms, with special emphasis on diagnostic and therapeutic methods and their future prospects. abstract_id: PUBMED:27686105 Histomorphologic features of early gastric carcinoma treated by endoscopic submucosal dissection: relation to efficiency of endoscopic resection. Objective: Early gastric cancer (EGC) is defined as cancer invasion confined to the mucosa or submucosa, irrespective of lymph node metastasis. Recently endoscopic submucosal dissection (ESD) has been widely accepted for the treatment for dysplasia and EGC without lymph node metastases. While the method has been advanced in Far East countries, ESD is still being developed in Europe and has not gained enough popularity although it has been recommended as the treatment of choice for superficial gastric neoplastic lesions by European Society of Gastrointestinal Endoscopy (ESGE) in 2015. Methods: The aim of the study was to perform a retrospective analysis of clinical and histomorphologic features of 58 cases of EGCs removed by ESD in a university hospital in Western Pomerania in Poland and to evaluate factors related to the efficiency of ESD resection. Results: With univariate analysis, indications for ESD with the highest R0 rate were found in EGCs limited to mucosa (T1a, small mucosal, M), without muscularis mucosa invasion, localised in the middle/lower part of stomach and intestinal type in histological examination. The R0 complete resection rate was significantly (p &lt; 0.0001) lower for T1b than that for T1a tumours (21.4% vs. 100%). Tumours with submucosal involvement were associated with lower efficiency of ESD procedure. Conclusions: Our data showed that in EGCs with favourable histomorphologic characteristics, ESD seemed to be a totally efficient and safe method of treatment in a European small-volume centre. R0 resection rate reached 81.1% of cases and median time of the ESD procedure was 61.5 min. abstract_id: PUBMED:33014861 Histological Differentiated/Undifferentiated Mixed Type Should Not Be Considered as a Non-Curative Factor of Endoscopic Resection for Patients With Early Gastric Cancer. Background: Histological differentiated/undifferentiated mixed-type adenocarcinomas are frequently found in patients with early gastric cancer (EGC). Yet it is unclear whether these mixed-type adenocarcinomas can be treated by endoscopic resection (ER) in EGC patients. Aims: To evaluate the lymph node metastasis (LNM) rate and long-term outcomes in mixed-type EGC patients and assess the feasibility of ER in these patients. Methods: Clinicopathological features, risk factors of LNM, and overall survival (OS) and progression-free survival (PFS) rates of EGC patients were analyzed according to different histological types. Results: Patients with mixed-type EGC had higher LNM rates than patients with non-mixed-type EGC (11.4 vs. 6.2%, P = 0.044). In the multivariate analysis, larger tumor diameter, presence of an ulcer, submucosal invasion, histological undifferentiated type, histological mixed type, and lymphovascular invasion resulted as independent risk factors for LNM in EGC patients (all P &lt; 0.05). The LNM rate in mixed-type patients who met the Japanese ER criteria was 3.3%, including fulfilling the absolute criteria 0%. The 5-year OS and PFS rates in mixed-type patients were 94.59 and 91.47%, respectively. There was no statistical significance in the OS (P = 0.870) and PFS (P = 0.705) between mixed-type and non-mixed-type EGC patients fulfilling the Japanese ER criteria. Conclusion: Histological differentiated/undifferentiated mixed type in EGC patients meeting the Japanese absolute criteria for ER are associated with low risk of LNM and favorable prognosis, and thus, it should not be considered as a non-curative factor for ER. abstract_id: PUBMED:21160647 Endoscopic mucosal resection and endoscopic submucosal dissection for early gastric cancer: Current and original devices. Compared with endoscopic submucosal dissection (ESD), endoscopic mucosal resection (EMR) is easier to perform and requires less time for treatment. However, EMR has been replaced by ESD, because achieving en bloc resection of specimens &gt; 20 mm in diameter is difficult with EMR. The technique of ESD was introduced to resect large specimens of early gastric cancer in a single piece. ESD can provide precise histological diagnosis and can also reduce the rate of recurrence, but has a high level of technical difficulty, and is consequently associated with a high rate of complications, a need for advanced endoscopic techniques, and a lengthy procedure time. To overcome disadvantages in both EMR and ESD, various advances have been made in submucosal injections, knives, other accessories, and in electrocoagulation systems. abstract_id: PUBMED:29570860 Undifferentiated-type component mixed with differentiated-type early gastric cancer is a significant risk factor for endoscopic non-curative resection. Background And Aim: Previous studies of surgical specimens have found that the presence of mixed histological components in differentiated-type early gastric cancer (EGC) is a risk factor for lymph node metastasis and indicates greater malignancy. As there have been no reports on its endoscopic treatment, we examined endoscopic curative resection in relation to differentiated-type-predominant mixed-type (MT). Methods: This was a single-center, retrospective study. Subjects were 2585 differentiated-type EGC in 2551 patients treated with endoscopic submucosal dissection (ESD) in our hospital between March 2005 and January 2016. These lesions comprised 2231 cases of curative resection and 354 cases of non-curative resection. Histologically, 2339 lesions were pure differentiated-type (PDT) and 246 lesions were MT. Rates of curative resection were compared, according to tumor size, between different histological types. Results: When curative and non-curative cases were compared using multivariate analysis among lesions measuring ≤20 mm for factors other than those determining curative resection, significant differences were obtained for tumor diameter, ulcer, and MT. Among lesions measuring 21-30 mm, ulcer and MT were associated with significant differences. Among lesions measuring &gt;30 mm, upper region and MT were associated with significant differences. Curative resection rates, in relation to histological type among lesions measuring ≤20 mm, were 93.4% for PDT and 63.4% for MT, whereas corresponding rates were 85.1% and 60.0%, respectively, among lesions measuring 21-30 mm, and 55.3% and 30.2%, respectively, among lesions measuring &gt;30 mm. Conclusion: We found that MT was a risk factor for non-curative resection irrespective of lesion size. abstract_id: PUBMED:35036617 Endoscopic resection for gastrointestinal tumors (esophageal, gastric, colorectal tumors): Japanese standard and future prospects. Endoscopic resection (ER) techniques such as polypectomy, endoscopic mucosal resection (EMR) and endoscopic submucosal dissection (ESD) are widely accepted as a less invasive treatment for gastrointestinal (GI) tumors. Since there is a limit to the size that can be resected by EMR and it is often divided, it is not possible to accurately evaluate the degree of cancer progression, and the cancer remains or causes recurrence. ESD is a technology that overcomes these weaknesses. ER techniques are considered for tumors that have a very low possibility of lymph node metastasis and are suitable for en-bloc resection. As ESD became more widespread, the difficulty of treating ESD was gradually resolved by the development of technology and equipment, the curative resection rate increased, and the complication rate decreased. ER techniques have become the standard treatment for early cancer and precancerous lesions in Japan, and the therapeutic indications are expanding day by day. The indications for whether endoscopic treatment can be performed are defined by the guidelines for each organ such as the esophagus, stomach, and colorectum. In the coming aging society, it is also necessary to evaluate the indications for endoscopic treatment and invasive treatment. In addition, recent advances in endoscopic technology are making it possible to remove submucosal tumors that previously required surgery. In this review, we summarize the recent Japanese standard indications of ER for each GI location and future prospects of ER. abstract_id: PUBMED:24714247 Endoscopic management of early gastric cancer: endoscopic mucosal resection or endoscopic submucosal dissection: data from a Japanese high-volume center and literature review. As detection of early gastric cancer (EGC) has improved, endoscopic mucosal resection (EMR) has been adopted as a treatment option for small intramucosal carcinoma. Endoscopic submucosal dissection (ESD) has enabled high en bloc resection rate for small and large lesions, as well as those with scarring. Moreover, the specimens obtained by ESD facilitate precise histological assessment of curability compared with the piecemeal specimens obtained by EMR. Accordingly, ESD has been established as a standard treatment for management of EGC in Japan. The long-term outcome of endoscopic management of EGC is based on: a) the accuracy of endoscopic diagnosis which defines the optimal treatment; b) endoscopist's expertise on methods for tumor removal (currently techniques of ESD); c) precise histological assessment of the resected specimen for curability; and d) surveillance endoscopy for early detection of metachronous multiple cancer. Efforts to establish a standardized protocol for practice and training can accelerate dissemination of gastric ESD in regions where gastric cancer is highly prevalent, and may help endoscopists worldwide to adopt this technique for other organs in the digestive tract. abstract_id: PUBMED:36880341 Efficacy of endoscopic ultrasound and endoscopic resection for esophageal schwannoma. Background: Esophageal schwannoma (ES) is a rare submucosal tumor, and its complete and safe resection is a topic that deserves special attention. Aim: This study aimed to investigate the clinical value of endoscopic ultrasound (EUS) in the diagnosis of ES and the clinical efficacy of endoscopic resection for ES. Methods: The clinical data, endoscopic characteristics, endoscopic treatment, postoperative complications, immunohistochemical results, and follow-up records of patients with ES admitted to the Tianjin Medical University General Hospital from January 2012 to January 2022 were retrospectively analyzed. Results: Under white-light endoscopy, 81.8% (9/11) of lesions were submucosal elevations, covering the normal esophageal epithelium. Two of the lesions with redness and erosive surface. Eight lesions (72.7%) appear on EUS originating from the muscularis propria were homogeneous or inhomogeneous hypoechoic signals. Two lesions were inhomogeneous hyperechoic originating from the submucosa or muscularis propria, respectively. One lesion was homogeneous hypoechoic originating from the submucosa. All lesions had no blood flow signals, cystic changes, or calcification, and were completely removed by submucosal tunnel endoscopic resection (STER) or endoscopic submucosal dissection (ESD). All patients did not experience serious adverse events as well as recurrence, metastasis, or cicatricial esophageal stenosis during the follow-up period. Conclusion: ES is a rare submucosal lesion, which endoscopic characteristics are difficult to distinguish from other esophageal submucosal tumors. Endoscopic resection can provide a minimally invasive and alternative treatment for ES. Answer: The recent WHO histological classification for gastric cancer may not be particularly helpful for application to endoscopic resection. According to a study analyzing clinicopathological features of lesions treated by open surgery and endoscopic resection, the clinical behavior differs significantly between diffuse-type poorly differentiated (PD) adenocarcinoma and signet ring cell (SRC) carcinoma, which are both categorized as "poorly cohesive carcinomas" in the WHO classification. The study found that the rates of lymph node metastasis (LNM) and lymphovascular invasion were significantly lower in SRC carcinoma than in diffuse-type and intestinal-type PD adenocarcinomas. Additionally, the commonest cause of noncurative resection was different for SRC carcinoma (positive lateral margin) compared to PD adenocarcinoma (positive vertical margin). These findings suggest that the WHO classification may not be sufficiently nuanced to guide the treatment of undifferentiated-type early gastric cancer (UD-EGC) with endoscopic resection (PUBMED:26324820). Other studies have emphasized the importance of accurate histological assessment and the selection of appropriate intervention based on the morphologic classification of lesions prior to resection (PUBMED:24661135). The presence of undifferentiated-type components mixed with differentiated-type in early gastric cancer (EGC) has been identified as a significant risk factor for endoscopic non-curative resection (PUBMED:29570860), and histological differentiated/undifferentiated mixed type should not be considered as a non-curative factor for endoscopic resection in patients meeting the Japanese absolute criteria for ER (PUBMED:33014861). In summary, while the WHO classification provides a broad categorization of gastric cancers, for the purpose of endoscopic resection, a more detailed and nuanced approach that considers the specific histological types and their associated risks is necessary for optimal treatment planning (PUBMED:26324820).
Instruction: Should abdominal sequences be included in prostate cancer MR staging studies? Abstracts: abstract_id: PUBMED:25795196 Should abdominal sequences be included in prostate cancer MR staging studies? Objectives: Prostate cancer staging MR examinations commonly include abdominal sequences to assess for non-regional (common iliac or para-aortic) nodal metastasis. In our experience the diagnostic yield of this is limited, but incidental findings are frequent, often necessitating further investigations. The aim of this study is to assess the diagnostic utility of abdominal sequences in routine prostate cancer MR staging studies. Methods: Findings on abdominal sequences of consecutive MRI prostate studies performed for staging newly diagnosed prostate cancer between September 2011 and September 2013 were reviewed with respect to adenopathy and additional incidental findings. Results were correlated with Gleason grade and serum prostate-specific antigen (PSA) level in each case. Results: 355 MRI prostate examinations were reviewed. 4 (1.1%) showed enlarged non-regional lymph nodes. Incidental findings were found in 82(23.1%) cases, neccessitating further investigation in 45 (12.7%) cases. Enlarged non-regional nodes were associated with higher PSA level and Gleason grade (p=0.007, p=0.005 respectively). With a combined threshold of PSA &gt; 20 ng/mL and/or Gleason grade ≥ 8 the sensitivity, specificity, PPV and NPV were 100, 60, 3 and 100% respectively for predicting the presence of non-regional adenopathy. Conclusions: Routine abdominal sequences are of very low yield in routine prostate cancer MR staging, frequently resulting in incidental findings requiring further work-up and should be reserved for high-risk cases. Our experience supports the use of an abdominal staging sequence in high-risk cases only. abstract_id: PUBMED:17689908 MR imaging in local staging of prostate cancer. Clinical staging to differentiate between localized and advanced disease stage appear to be unreliable. Curative therapy can only be performed in patients with localized prostate cancer. Accurate staging is therefore especially important for proper disease management. Since 1984 magnetic resonance (MR) imaging has been applied for this purpose. However, the role of MR imaging of the prostate is debated extensively in the literature. Initially MR imaging was performed using a conventional body coil with subsequent limited anatomical detail due to insufficient spatial resolution. With the introduction of new MR sequences, new coils and other technical developments numerous studies have attempted to improve local staging. The diagnostic capability of MR imaging in preoperative staging of prostate cancer is currently being established. In this review the role of MR imaging in staging prostate cancer is discussed. abstract_id: PUBMED:29420979 Multiparametric Prostate MR Imaging: Impact on Clinical Staging and Decision Making. Meaningful changes to the approach of prostate cancer staging and management have been made over the past decade with increasing demand for high-quality multiparametric MR imaging (mpMRI) of the prostate. This article focuses on the evolving paradigm of prostate cancer staging, with emphasis on the role of mpMRI on staging and its integration into clinical decision making. Current prostate cancer staging systems are defined and mpMRI's role in the detection of non-organ-confined disease and how it has an impact on the selection of appropriate next steps are discussed. Several imaging pitfalls, limitations, and future directions of mpMRI also are discussed. abstract_id: PUBMED:30031465 Multiparametric Prostate MR Imaging: Impact on Clinical Staging and Decision Making. Meaningful changes to the approach of prostate cancer staging and management have been made over the past decade with increasing demand for high-quality multiparametric MR imaging (mpMRI) of the prostate. This article focuses on the evolving paradigm of prostate cancer staging, with emphasis on the role of mpMRI on staging and its integration into clinical decision making. Current prostate cancer staging systems are defined and mpMRI's role in the detection of non-organ-confined disease and how it has an impact on the selection of appropriate next steps are discussed. Several imaging pitfalls, limitations, and future directions of mpMRI also are discussed. abstract_id: PUBMED:3797645 Prostatic carcinoma: staging by clinical assessment, CT, and MR imaging. The sensitivity, specificity, accuracy, and positive and negative predictive values of clinical assessment, computed tomography (CT), and magnetic resonance (MR) imaging were compared in the differentiation of stage B from stage C prostatic carcinoma. Forty-six patients who had undergone radical prostatectomy were included in the study. Surgical-pathologic staging was considered the "truth measure." Clinical staging had an accuracy of 61%, and CT, 65%. Accuracy for MR imaging depended on the instrument parameters and plane of section used. When only transverse T1-weighted images were analyzed, MR accuracy was 61%. However, when transverse T1- and T2-weighted images supplemented by additional T2-weighted coronal or sagittal images were studied, accuracy increased to 83%. At present, MR imaging is the most accurate diagnostic modality for the local staging of carcinoma of the prostate, but for optimal results, multiple sequences and two orthogonal planes of imaging are needed. abstract_id: PUBMED:32594404 Clinical PET/MR. Oncologic imaging has been a major focus of clinical research on PET/MR over the last 10 years. Studies so far have shown that PET/MR with 18F-Fluorodeoxyglucose (FDG) overall provides a similar accuracy for tumor staging as FDG PET/CT. The effective radiation dose of whole-body FDG PET/MR is more than 50% lower than for FDG PET/CT, making PET/MR particularly attractive for imaging of children. However, the longer acquisition times and higher costs have so far limited broader clinical use of PET/MR technology for whole-body staging. With the currently available technology, PET/MR appears more promising for locoregional staging of diseases for which MR is the anatomical imaging modality of choice. These include brain tumors, head and neck cancers, gynecologic malignancies, and prostate cancer. For instance, PET imaging with ligands of prostate-specific membrane antigen, combined with multi-parametric MR, appears promising for detection of prostate cancer and differentiation from benign prostate pathologies as well as for detection of local recurrences. The combination of functional parameters from MR, such as apparent diffusion coefficients, and molecular parameters from PET, such as receptor densities or metabolic rates, is feasible in clinical studies, but clinical applications for this multimodal and multi-parametric imaging approach still need to be defined. abstract_id: PUBMED:7604393 Staging in prostate carcinoma A correct staging of clinically localized prostate cancer should nowadays consist of: digital rectal examination (DRE) (+/- -transrectal ultrasound [TRUS]) bone scan abdominal computed tomography or body coil magnetic resonance imaging (+/- laparoscopic lymph node dissection according to the PSA level) endorectal surface coil MR imaging (if available) The above-mentioned methods are discussed with preference to the new technology of endorectal surface coil magnetic resonance imaging (E-MRI) of the prostate. abstract_id: PUBMED:3797644 Clinical stage B prostate carcinoma: staging with MR imaging. Magnetic resonance (MR) imaging studies were performed in 29 patients with clinical findings of stage B prostatic carcinoma with use of both T1 and T2-weighted spin echo sequences. MR imaging findings were correlated with surgical/pathologic results in 18 patients who underwent an operation. Periprostatic fat, periprostatic venous plexus, seminal vesicles, and lymph nodes were assessed from MR imaging studies. When each was assessed independently, the periprostatic fat signal had a sensitivity of 29%, specificity of 100%, and accuracy of 85% in detecting extracapsular tumor extension, whereas the periprostatic venous plexus had a sensitivity of 57%, specificity of 86%, and accuracy of 80%. MR imaging had a sensitivity of 50%, specificity of 97%, and accuracy of 89% for detecting seminal vesicular involvement. When all four anatomic features were taken into consideration, MR imaging had an accuracy of 89% (16 of 18 patients) in differentiating stage B from stage C or D disease (sensitivity 87%, specificity 90%). Our data indicate that MR imaging is a promising method for staging prostatic carcinoma. abstract_id: PUBMED:16344335 Prediction of organ-confined prostate cancer: incremental value of MR imaging and MR spectroscopic imaging to staging nomograms. Purpose: To assess retrospectively the incremental value of endorectal coil magnetic resonance (MR) imaging and combined endorectal MR imaging-MR spectroscopic imaging to the staging nomograms for predicting organ-confined prostate cancer (OCPC). Materials And Methods: The institutional review board approved this HIPAA-compliant study and issued a waiver of informed consent for review of the MR reports and clinical data. Between November 1, 1999, and November 1, 2004, 229 patients underwent endorectal MR imaging and 383 underwent combined endorectal MR imaging-MR spectroscopic imaging before radical prostatectomy. Mean patient age was 58 years (range, 32-74 years). MR studies were interpreted prospectively by 12 radiologists who were informed of patients' clinical data. On the basis of the MR reports, the risks of extracapsular extension, seminal vesicle invasion, and lymph node metastasis were scored retrospectively from 1 to 5; the highest score was subtracted from 6 to determine a score (from 1 to 5) for the likelihood of OCPC on MR studies. The staging nomograms were used to calculate the likelihood of OCPC on the basis of serum prostate-specific antigen level, Gleason grade at biopsy, and clinical stage. Histopathologic findings constituted the reference standard. Logistic regression was used to estimate the multivariable relations between OCPC and MR findings. The area under the receiver operator characteristic curve was calculated for each model. The jackknife method was used for bias correction. Results: MR findings contributed significant incremental value (P &lt;/= .02) to the nomograms in the overall study population. The contribution of MR findings was significant in all risk groups but was greatest in the intermediate- and high-risk groups (P &lt; .01 for both). Accuracy in the prediction of OCPC with MR was higher when MR spectroscopic imaging was used, but the difference was not significant. Conclusion: Endorectal MR imaging and combined endorectal MR imaging-MR spectroscopic imaging contribute significant incremental value to the staging nomograms in predicting OCPC. abstract_id: PUBMED:11193312 Staging and tissue characterization of prostate carcinoma: role of endorectal MR imaging and MR spectroscopy The role of magnetic resonance (MR) imaging and MR spectroscopy with an endorectal coil in tissue characterization and local staging was reviewed. Endorectal coil (ERC) MR imaging demonstrated the detailed zonal anatomy of the normal prostate. The sensitivity and specificity of staging prostate cancer for ERC MR imaging was superior to both conventional MR imaging and transrectal ultrasound. ERC MR imaging is the most accurate noninvasive method of staging prostate cancer. However, the accuracy of the diagnosis made by inexperienced radiologists was significantly inferior to that made by experienced radiologists. Endorectal MRI failed to differentiate benign from malignant lesions in some patients demonstrating low signal intensity on T2-weighted imaging in the peripheral zone. MR spectroscopy may provide additional information on tissue characterization, monitoring after treatment and staging. Answer: The inclusion of abdominal sequences in prostate cancer MR staging studies is a subject of debate. According to the findings from one study (PUBMED:25795196), routine abdominal sequences have a very low yield in routine prostate cancer MR staging and often result in incidental findings that require further investigation. The study suggests that abdominal sequences should be reserved for high-risk cases, as they are associated with higher PSA levels and Gleason grades. The study found that with a combined threshold of PSA > 20 ng/mL and/or Gleason grade ≥ 8, the sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) for predicting the presence of non-regional adenopathy were 100%, 60%, 3%, and 100%, respectively. Other studies have focused on the evolving role of multiparametric MR imaging (mpMRI) in prostate cancer staging and decision making (PUBMED:29420979, PUBMED:30031465). These studies emphasize the importance of mpMRI in detecting non-organ-confined disease and its impact on clinical decision making. However, they also discuss the limitations and future directions of mpMRI. Historically, MR imaging has been used since 1984 for prostate cancer staging, but its role has been debated extensively (PUBMED:17689908). Technological advancements have improved the diagnostic capability of MR imaging in preoperative staging of prostate cancer. PET/MR imaging has also been studied for its potential in oncologic imaging, including prostate cancer staging. PET/MR with 18F-Fluorodeoxyglucose (FDG) has shown similar accuracy for tumor staging as FDG PET/CT, with a significantly lower effective radiation dose. PET/MR appears promising for locoregional staging of diseases, including prostate cancer (PUBMED:32594404). In conclusion, the current evidence suggests that abdominal sequences in prostate cancer MR staging studies may not be necessary for all patients and should be considered mainly for those with high-risk features. The decision to include abdominal sequences should be based on individual risk assessments, including PSA levels and Gleason grades, and the potential benefits should be weighed against the risk of incidental findings and additional investigations.
Instruction: Deep Hypothermic Cardiac Arrest Treated by Extracorporeal Life Support in a Porcine Model: Does the Rewarming Method Matter? Abstracts: abstract_id: PUBMED:27391370 Cardiopulmonary responses during the cooling and the extracorporeal life support rewarming phases in a porcine model of accidental deep hypothermic cardiac arrest. Background: This study aimed to assess cardiac and pulmonary pathophysiological responses during cooling and extracorporeal life support (ECLS) rewarming in a porcine model of deep hypothermic cardiac arrest (DHCA). In addition, we evaluated whether providing a lower flow rate of ECLS during the rewarming phase might attenuate cardiopulmonary injuries. Methods: Twenty pigs were cannulated for ECLS, cooled until DHCA occurred and subjected to 30 min of cardiac arrest. In order to assess the physiological impact of ECLS on cardiac output we measured flow in the pulmonary artery using Doppler echocardiography as well as a modified thermodilution technique using the Swan-Ganz catheter (injection site in the right ventricle). The animals were randomized into two groups during rewarming: a group with a low blood flow rate of 1.5 L/min (LF group) and a group with a normal flow rate of 3.0 L/min (NF group). The ECLS temperature was adjusted to 5 °C above the central core. Cardiac output, hemodynamics and pulmonary function parameters were evaluated. Results: During the cooling phase, cardiac output, heart rhythm and blood pressure decreased continuously. Pulmonary artery pressure tended to increase at 32 °C compared to the initial value (20.2 ± 1.7 mmHg vs. 29.1 ± 5.6 mmHg, p = 0.09). During rewarming, arterial blood pressure was higher in the NF than in the LF group at 20° and 25 °C (p = 0.003 and 0.05, respectively). After rewarming to 35 °C, cardiac output was 3.9 ± 0.5 L/min in the NF group vs. 2.7 ± 0.5 L/min in LF group (p = 0.06). At the end of rewarming under ECLS cardiac output was inversely proportional to the ECLS flow rate. Moreover, the ECLS flow rate did not significantly change pulmonary vascular resistance. Discussion: Using a newly developed experimental model of DHCA treated by ECLS, we assessed the cardiac and pulmonary pathophysiological response during the cooling phase and the ECLS rewarming phase. Despite lower metabolic need during hypothermia, a low ECLS blood flow rate during rewarming did not improved cardiopulmonary injuries after rewarming. Conclusion: A low ECLS flow rate during the rewarming phase did not attenuate pulmonary lesions, increased blood lactate level and tended to decrease cardiac output after rewarming. A normal ECLS flow rate did not increase pulmonary vascular resistance compared to a low flow rate. This experimental model on pigs contributes a number of pathophysiological findings relevant to the rewarming strategy for patients who have undergone accidental DHCA. abstract_id: PUBMED:26728797 Deep Hypothermic Cardiac Arrest Treated by Extracorporeal Life Support in a Porcine Model: Does the Rewarming Method Matter? Objectives: Extracorporeal life support (ECLS) is the reference rewarming technique of accidental deep hypothermic cardiac arrest (DHCA). This study was designed to examine the impact of different rewarming blood flow rates and temperature setting of ECLS on cardiopulmonary lesions after DHCA in a porcine model of accidental hypothermia. Methods: Twenty-four pigs were cannulated for ECLS, cooled until DHCA occurred, and subjected to 30 minutes of cardiac arrest. During the rewarming phase, we compared a low blood flow rate of 1.5 L/min versus a high flow rate of 3.0 L/min as well as two-temperature-setting rewarming strategies: a temperature during ECLS adjusted to 5°C above the central core temperature versus 38°C maintained throughout the rewarming phase. Cardiac output, hemodynamics and pulmonary function parameters were evaluated. Biologic markers of ischemia-reperfusion injuries were analyzed at baseline and at the end of the experiment. Results: DHCA occurred at 21.2 ± 2°C. There was a trend for better cardiac output in groups with high blood flow (p = 0.053), with no interaction between ECLS flow and temperature (p = 0.63), a trend toward lower pulmonary vascular resistance (PVR; p = 0.075) and a significant decrease in arterial PVR in groups with high blood flow (p = 0.013) with no interaction (p = 0.47 and p = 0.60 for PVR and arterial PVR, respectively). Serum interleukin-6, tumor necrosis factor-α, receptor for advanced glycation end products (RAGE), and neuron-specific enolase were significantly increased between baseline and endpoint. The increase in the serum RAGE concentration was higher in the 38°C rewarming temperature groups compared to 5°C above adjusted temperature. There were no other significant differences in biomarkers. Conclusions: We developed a porcine model of DHCA treated by ECLS. Our data suggest that cardiac output tended to improve with a high-flow-rate rewarming strategy while a high-temperature delta between core temperature and ECLS increased the RAGE markers of lung injury. abstract_id: PUBMED:38161851 Extracorporeal Life Support for Recurrent Hypothermic Cardiac Arrest: A Case Report. Hypothermia-associated cardiac arrest (HACA) is a challenge for emergency physicians. Standard cardiopulmonary resuscitation (CPR) remains the primary intervention for the treatment of HACA, but extracorporeal life support (ECLS) may be needed as an adjunct to CPR. In this report, we present the case of an adult Asian patient who experienced two episodes of HACA at a two-year interval. In both episodes, the patient was treated with ECLS in addition to standard CPR. We discuss the fundamentals of HACA and how to safely and effectively incorporate ECLS into its management. No-flow time, age, comorbidities, and the cause of the cardiac arrest are criteria to consider when deciding on the duration of CPR and the intensity of the resources deployed. Hypothermia is a reversible cause of cardiac arrest, justifying prolonged CPR. According to the Hypothermia Outcome Prediction after ECLS (HOPE) score, active rewarming through ECLS is recommended. However, a history of cardiac arrest is rare and might be considered a severe comorbidity contraindicating ECLS use. Nevertheless, the indication is determined on a case-by-case basis. abstract_id: PUBMED:34055829 Rewarming From Hypothermic Cardiac Arrest Applying Extracorporeal Life Support: A Systematic Review and Meta-Analysis. Introduction: This systematic review and meta-analysis aims at comparing outcomes of rewarming after accidental hypothermic cardiac arrest (HCA) with cardiopulmonary bypass (CPB) or/and extracorporeal membrane oxygenation (ECMO). Material and Methods: Literature searches were limited to references with an abstract in English, French or German. Additionally, we searched reference lists of included papers. Primary outcome was survival to hospital discharge. We assessed neurological outcome, differences in relative risks (RR) of surviving, as related to the applied rewarming technique, sex, asphyxia, and witnessed or unwitnessed HCA. We calculated hypothermia outcome prediction probability score after extracorporeal life support (HOPE) in patients in whom we found individual data. P &lt; 0.05 considered significant. Results: Twenty-three case observation studies comprising 464 patients were included in a meta-analysis comparing outcomes of rewarming with CPB or/and ECMO. One-hundred-and-seventy-two patients (37%) survived to hospital discharge, 76 of 245 (31%) after CPB and 96 of 219 (44 %) after ECMO; 87 and 75%, respectively, had good neurological outcomes. Overall chance of surviving was 41% higher (P = 0.005) with ECMO as compared with CPB. A man and a woman had 46% (P = 0.043) and 31% (P = 0.115) higher chance, respectively, of surviving with ECMO as compared with CPB. Avalanche victims had the lowest chance of surviving, followed by drowning and people losing consciousness in cold environments. Assessed by logistic regression, asphyxia, unwitnessed HCA, male sex, high initial body temperature, low pH and high serum potassium (s-K+) levels were associated with reduced chance of surviving. In patients displaying individual data, overall mean predictive surviving probability (HOPE score; n = 134) was 33.9 ± 33.6% with no significant difference between ECMO and CPB-treated patients. We also surveyed 80 case reports with 96 victims of HCA, who underwent resuscitation with CPB or ECMO, without including them in the meta-analysis. Conclusions: The chance of surviving was significantly higher after rewarming with ECMO, as compared to CPB, and in patients with witnessed compared to unwitnessed HCA. Avalanche victims had the lowest probability of surviving. Male sex, high initial body temperature, low pH, and high s-K+ were factors associated with low surviving chances. abstract_id: PUBMED:34629290 Successful Non-Extracorporeal Rewarming from Hypothermic Cardiac Arrest: 2 Cases. Accidental hypothermia (core temperature &lt;35°C) is a complication in persons who have fallen into crevasses; hypothermic cardiac arrest is the most serious complication. Extracorporeal life support (ECLS) is the optimal method for rewarming hypothermic cardiac arrest patients, but it may not be readily available and non-ECLS rewarming may be required. We report the medical course of 2 patients with hypothermic cardiac arrest, each of whom had fallen into a crevasse. They were treated successfully with non-ECLS rewarming using peritoneal and thoracic lavage. We discuss non-ECLS treatment options for hypothermic cardiac arrest and describe successful non-ECLS rewarming in an outlying hospital without ECLS rewarming capability in the Grossglockner region of Austria in 1990 and 2003. Both patients survived neurologically intact. Non-ECLS rewarming in a trauma center without ECLS capabilities is feasible and can result in a good outcome when ECLS is not available. The best non-ECLS rewarming method for hypothermic cardiac arrest patients has not yet been established. Non-ECLS rewarming should be adapted to local capabilities. To obtain more robust evidence, it seems reasonable to pool data on the treatment and outcome of non-ECLS rewarming in hypothermic cardiac arrest patients. abstract_id: PUBMED:33000030 Accidental hypothermic cardiac arrest and extracorporeal membrane oxygenation: a case report. Background: Accidental hypothermic cardiac arrest, an involuntary drop in core body temperature resulting in cardiopulmonary arrest, is linked to 1500 deaths annually. We highlight the challenges with the treatment of accidental hypothermic cardiac arrest and describe improved preparations necessary for an integrated health system to care for similar patients. Case Report: Emergency medical services (EMS) were dispatched to a 34-year-old female who had been missing for several hours during a January snowfall. The patient was found unconscious over an embankment. The patient was found with a weak carotid pulse and two empty bottles of clozapine, an atypical antipsychotic. The EMS crew extricated the patient, performed a rapid trauma assessment, passive rewarming, and airway management. During transport, the patient suffered a ventricular fibrillation cardiac arrest, received defibrillation, and advanced life support measures. Resuscitative efforts continued in the emergency department while the treatment team addressed environmental exposure, assessed for traumatic injury and toxicologic exposure. On emergency department (ED) arrival, the patient's core temperature was 24°C, and despite aggressive resuscitation, the patient remained in cardiac arrest. The ED care team used extracorporeal membrane oxygenator (ECMO) and successfully resuscitated the patient with extracorporeal cardiopulmonary resuscitation. The patient achieved full neurologic recovery 15 days post-ED arrival. Conclusion: This case highlights the importance of early recognition of accidental hypothermic cardiac arrest by EMS clinicians, rapid transport to a tertiary facility, and the timely application of active rewarming and in-hospital ECMO. Accidental hypothermic cardiac arrest is a reversible state; prompt and correct treatment allows for a high probability of a favorable neurologic outcome. abstract_id: PUBMED:38204663 Electroencephalography as a tool to predict cerebral oxygen metabolism during deep-hypothermic circulatory arrest in neonates with critical congenital heart disease. Objectives: Recent research suggests that increased cerebral oxygen use during surgical intervention for neonates with congenital heart disease may play a role in the development of postoperative white matter injury. The objective of this study is to determine whether increased cerebral electrical activity correlates with greater decrease of cerebral oxygen saturation during deep hypothermic circulatory arrest. Methods: Neonates with critical congenital heart disease requiring surgical intervention during the first week of life were studied. All subjects had continuous neuromonitoring with electroencephalography and an optical probe (to quantify cerebral oxygen saturation) during cardiac surgical repair that involved the use of cardiopulmonary bypass and deep hypothermic circulatory arrest. A simple linear regression was used to investigate the association between electroencephalography metrics before the deep hypothermic circulatory arrest period and the change in cerebral oxygen saturation during the deep hypothermic circulatory arrest period. Results: Sixteen neonates had both neuromonitoring modalities attached during surgical repair. Cerebral oxygen saturation data from 5 subjects were excluded due to poor data quality, yielding a total sample of 11 neonates. A simple linear regression model found that the presence of electroencephalography activity at the end of cooling is positively associated with the decrease in cerebral oxygen saturation that occurs during deep hypothermic circulatory arrest (P &lt; .05). Conclusions: Electroencephalography characteristics within 5 minutes before the initiation of deep hypothermic circulatory arrest may be useful in predicting the decrease in cerebral oxygen saturation that occurs during deep hypothermic circulatory arrest. Electroencephalography may be an important tool for guiding cooling and the initiation of circulatory arrest to potentially decrease the prevalence of new white matter injury in neonates with critical congenital heart disease. abstract_id: PUBMED:26929238 Minimally invasive extracorporeal circulation resuscitation in hypothermic cardiac arrest. Current guidelines for the treatment of hypothermic cardiocirculatory arrest recommend extracorporeal life support and rewarming, using cardiopulmonary bypass or extracorporeal membrane oxygenation circuits. Both have design-related shortcomings which may result in prolonged reperfusion time or insufficient oxygen delivery to vital organs. This article describes clear advantages of minimally invasive extracorporeal circulation systems during emergency extracorporeal life support in hypothermic arrest. The technique of minimally invasive extracorporeal circulation for reperfusion and rewarming is represented by the case of a 59-year-old patient in hypothermic cardiocirculatory arrest at 25.3°C core temperature, with multiple trauma. With femoro-femoral cannulation performed under sonographic and echocardiographic guidance, extracorporeal life support was initiated using a minimally invasive extracorporeal circulation system. Perfusing rhythm was restored at 28°C. During rewarming on the mobile circuit, trauma surveys were completed and the treatment initiated. Normothermic weaning was successful on the first attempt, trauma surgery was completed and the patient survived neurologically intact. For extracorporeal resuscitation from hypothermic arrest, minimally invasive extracorporeal circulation offers all the advantages of conventional cardiopulmonary bypass and extracorporeal membrane oxygenation systems without their shortcomings. abstract_id: PUBMED:33792740 Training module extracorporeal life support (ECLS): consensus statement of the DIVI, DGTHG, DGfK, DGAI, DGIIN, DGF, GRC and DGK Mechanical circulatory support using extracorporeal life support systems (ECLS) has significantly increased in recent years. These critically ill patients pose special challenges to the multiprofessional treatment team and require comprehensive, interdisciplinary and interprofessional concepts. For this reason, to ensure the best possible patient care a standardized ECLS training module has been created at national specialist society level, taking emergency and intensive care management into account. abstract_id: PUBMED:34156120 Surgery for renal cell carcinoma extending to the right atrium in Japanese institutions: Focusing on cardiopulmonary bypass with or without deep hypothermic circulatory arrest. Objectives: To evaluate the outcomes of Japanese patients with renal cell carcinoma undergoing surgery for tumor thrombus invading the right atrium. Methods: We retrospectively evaluated 23 patients who underwent extracorporeal circulation-assisted surgery at two institutions. Perioperative outcomes and survival rates were evaluated and compared between two groups of patients, which were set according to the use or not of deep hypothermic circulatory arrest. Data on systemic treatments were assessed. Results: The median age was 64 years; the majority of patients were fit according to the Charlson Comorbidity Index. Five (21.7%) patients had at least one distant metastasis, and 17 (73.9%) received systemic therapy. A total of 16 (69.6%) patients underwent deep hypothermic circulatory arrest. Baseline characteristics were comparable between groups. Patients who underwent deep hypothermic circulatory arrest had a non-significant reduction in blood loss compared with those who did not undergo this procedure (1866.0 vs 3513.0 mL, P = 0.102). The complication rate, both of any grade (43.8% vs 71.4%, P = 0.215) and grade ≥3 (6.3% vs 28.6%, P = 0.162), tended to be lower in patients who underwent deep hypothermic circulatory arrest. The mean 90-day mortality rate was 8.7%, with no difference among groups (6.3% vs 14.3%, respectively; P = 0.545). The overall median cancer-specific and overall survival were both 64.4 months, and did not differ between groups. Conclusions: Renal cell carcinoma patients undergoing extracorporeal circulation-assisted surgery and systemic therapy for right atrial tumor thrombus have acceptable long-term survival rates. Outcomes are comparable regardless of the use of deep hypothermic circulatory arrest. Answer: Yes, the rewarming method does matter when treating deep hypothermic cardiac arrest (DHCA) with extracorporeal life support (ECLS) in a porcine model. Studies have shown that different rewarming blood flow rates and temperature settings of ECLS can impact cardiopulmonary lesions after DHCA. One study compared a low blood flow rate of 1.5 L/min versus a high flow rate of 3.0 L/min, as well as two temperature-setting rewarming strategies: a temperature during ECLS adjusted to 5°C above the central core temperature versus 38°C maintained throughout the rewarming phase. The results indicated a trend for better cardiac output in groups with high blood flow, with no interaction between ECLS flow and temperature. There was also a trend toward lower pulmonary vascular resistance (PVR) and a significant decrease in arterial PVR in groups with high blood flow. Additionally, the increase in the serum RAGE concentration, a marker of lung injury, was higher in the 38°C rewarming temperature groups compared to the 5°C above adjusted temperature groups. These findings suggest that cardiac output tended to improve with a high-flow-rate rewarming strategy, while a high-temperature delta between core temperature and ECLS increased the RAGE markers of lung injury (PUBMED:26728797). Another study assessed cardiac and pulmonary pathophysiological responses during the cooling phase and the ECLS rewarming phase in a porcine model of DHCA. The study found that during the cooling phase, cardiac output, heart rhythm, and blood pressure decreased continuously. During rewarming, arterial blood pressure was higher in the normal flow (NF) group than in the low flow (LF) group at certain temperatures, and cardiac output was inversely proportional to the ECLS flow rate at the end of rewarming under ECLS. The study concluded that a low ECLS flow rate during the rewarming phase did not attenuate pulmonary lesions, increased blood lactate level, and tended to decrease cardiac output after rewarming. A normal ECLS flow rate did not increase pulmonary vascular resistance compared to a low flow rate (PUBMED:27391370). These experimental models on pigs contribute to the understanding of the rewarming strategy for patients who have undergone accidental DHCA, highlighting the importance of the rewarming method in the treatment process.
Instruction: Suspicious urine cytology (class III) in patients with bladder cancer: should it be considered as negative or positive? Abstracts: abstract_id: PUBMED:12201938 Suspicious urine cytology (class III) in patients with bladder cancer: should it be considered as negative or positive? Objectives: Urine cytology is the gold standard in the diagnosis and follow-up of bladder cancer. Cytology, however, exhibits variable sensitivity depending on tumour grade and interpretation of urine specimens is highly dependent on the skill of the examiner. Positive cytology, classes IV and V by Papanicolaou classification, is a strong predictor for coexisting or subsequent malignancy, while the role of suspicious cytology, class III, is controversial. The objective of the study was to evaluate the role of the suspicious finding in cytological analysis, and whether it should be considered as a negative or positive sign for coexisting malignancy. Material And Methods: Six hundred and fifty-two consecutive patients with bladder cancer were studied in a prospective multicenter trial. One hundred and fifty-one of the patients were newly diagnosed, and the remaining 501 patients were under follow-up. A voided urine sample was obtained prior to TURB or prior to routine follow-up cystoscopy in those under the surveillance and split for culture and cytology. The cytopathological results were analyzed by a central review and only patients with samples available for review analysis were included. Sensitivity and specificity, as well as positive (PPV) and negative (NPV) predictive values of urine cytology were calculated by classifying the class III samples as negative or positive. Results: A total of 570 patients were evaluable. One hundred and twenty nine (22.6%) were newly diagnosed and 441 were under follow-up, of whom 117 (26.5%) had recurrence. Cytology was classified as suspicious in 33/129 (25.6%) patients with primary tumour, and in 41/441 (9.3%) of those under the follow-up, of whom 20 (48.8%) had recurrence. Sensitivity increased from to 31.0% to 56.6% in primary tumours (p &lt; 0.001) and from 17.8% to 34.7% in recurrent tumours (p &lt; 0.001) if class III was determined as positive, whereas the specificity decreased from 96.6% to 90.1% (p &lt; 0.001). Accordingly, the NPV increased from 76.3% to 79.1% and the PPV decreased from 65.6% to 56.2%. Conclusions: The poor sensitivity of voided urine cytology improved significantly when suspicious samples were determined as positive while the specificity remained high, a clear advantage compared with most of the new tumour marker tests. In addition, nearly half of the follow-up patients with suspicious class III cytology had recurrence implying that this patient category is at substantial risk for co-existing malignancy. Therefore, it is recommended that suspicious class III cytology together with class IV and V specimens should be considered positive. abstract_id: PUBMED:19347824 Reflex UroVysion testing in suspicious urine cytology cases. Background: UroVysion is a US Food and Drug Administration-approved fluorescence in situ hybridization (FISH) probe set for use in the detection of recurrent urothelial carcinoma and in patients with hematuria. The objective of the current study was to evaluate the usefulness of UroVysion as a reflex test in patients with a suspicious urine cytology diagnosis. The rationale was that a more aggressive workup might be indicated in patients with a suspicious cytology diagnosis and positive UroVysion test. Methods: The study population included 161 urine specimens diagnosed as suspicious over a period of 12 months. The sensitivity, specificity, negative predictive value (NPV), and positive predictive value (NPV) were calculated based on the histologic and cystoscopic correlation. Results: The results using the reporting criteria suggested by the manufacturer demonstrated a sensitivity of 68.3%, a specificity of 39.7%, a PPV of 56.8%, and a NPV of 51.9%. The results using the presence of any cytogenetic abnormality as a positive FISH test demonstrated a sensitivity of 82.9%, a specificity of 21.7%, a PPV of 54.8%, and an NPV of 51.7%. Conclusions: A negative UroVysion test did not rule out the presence of low-grade or high-grade urothelial carcinoma in urine specimens diagnosed as suspicious. The use of less strict criteria dramatically increased the sensitivity of UroVysion FISH; however, there was a marked decrease in specificity noted. The results in this current study appear to indicate that a more aggressive workup of patients with a suspicious cytology, positive UroVysion result, and negative cystoscopic evaluation is not currently justified. abstract_id: PUBMED:32558388 The predictive value of positive and suspicious urine cytology: Are they different? Background: Urine cytology evaluation is an effective test in the detection of high-grade urothelial carcinoma (HGUC). While the guideline distinguishes the 2 categories: "positive for HGUC" (PHGUC) and "suspicious for HGUC" (SHGUC), the association between these categories with their subsequent follow-up biopsies remains unclear. This study aims to determine and compare the positive predictive value (PPV) of the specimens in PHGUC and SHGUC categories with their respective histologic diagnoses. Methods: During the period of 03/01/2008 to 07/31/2018, urine cytology cases diagnosed as PHGUC and SHGUC with subsequent bladder biopsy within 12 months were retrieved. All cases were correlated with first biopsy obtained during 12 months of cytology specimen. Biopsy result with HGUC, carcinoma in situ, or non-urothelial carcinoma diagnoses were considered as concordance. Results: 378 cases (229 SHGUC and 149 PHGUC) were identified from 263 patients. For the 229 SHGUC cases, the PPV was 72% (n = 166) and for the 149 PHGUC cases, the PPV was 85% (n = 127). While both categories have high PPV, they are statistically significant (p &lt; 0.0001). Additionally, 33 cases were found to have low-grade urothelial carcinoma (LGUC), constituting a portion of discordant results. Conclusion: PHGUC and SHGUC categories are both associated with a high risk of malignancy, however, there is a statistically significant difference between them in our study, supporting the PSRUC guidelines of two separate categories. In instances when urine cytology is discordant with biopsy results, further investigation and clinical follow up is warranted. LGUC appears to remain a common pitfall especially in the suspicious category. abstract_id: PUBMED:11820413 The dilemma of suspicious urine cytology in patients being followed for bladder cancer. Background And Aims: Urine cytology is gold standard for clinical tests used in the diagnosis and follow-up of bladder cancer. Cytology, however, exhibits variable sensitivity depending on tumour grade and interpretation of urine specimens is highly dependent on the skill of the examiner. Furthermore, a "suspicious" cytology report (class III) makes clinicians uncomfortable. In these cases, a more objective test, such as the BTA stat Test, may be useful in providing clarification. The aim of this study was to evaluate the dilemma of suspicious routine urine cytology and to determine whether the BTA stat Test provides diagnostic aid in this rare but controversial category. Material And Methods: 506 consecutive patients who were being followed for bladder cancer were included in the study. A voided urine sample was obtained prior to routine follow-up cystoscopy and split for culture and testing with the BTA stat Test. Clinical status of the disease was evaluated in patients with suspicious urine cytology, and the diagnostic aid of the BTA stat Test in these patients was determined. Results: A total of 57 patients (11.3%) had urine cytology classified as suspicious. The BTA stat Test was positive in 29 (50.9%) and negative in 28 (49.1%) patients. Nineteen (33.3%) patients had recurrence at routine cystoscopy. Of the remaining 38 patients, 10 were further investigated due to a positive BTA stat Test. Two additional recurrences were detected bringing the total number of recurrences to 21 (36.8%), 48.3% (14/29) of the patients with positive and 25.0% (7/28) of the patients with negative BTA stat Test had recurrence (p = 0.069). Overall, 65.5% (19/29) of the patients with a positive BTA stat Test were found to have recurrence either at routine cystoscopy, at further investigations, or at the next cystoscopy compared to that of 35.7% (19/28) in those with negative testing (p = 0.024). The overall sensitivity of the BTA stat Test was 66.7%, and the specificity was 58.3%. Conclusions: At least a third of the patients under follow-up for bladder cancer with suspicious cytology had a recurrence, indicating that these patients are a risk group for recurrence. More importantly, a BTA stat Test result seems to provide some help in distinguishing those patients with very high risk for recurrence, for whom invasive further investigations should be conducted and a close follow-up policy maintained. abstract_id: PUBMED:24909774 The value of the "suspicious for urothelial carcinoma" cytology category: a correlative study of 4 years including 337 patients. Background: The terminology used in reporting urine cytology lacks uniformity and the significance of the "atypical" and "suspicious" categories is still not well established. This results in variable clinical follow-up and management of those cases. The authors sought to investigate the prognostic value of a diagnosis of "suspicious for high-grade urothelial carcinoma" (HGUCA). Methods: All cases with a "suspicious" or "positive" cytological diagnosis spanning 4 years were included and correlated with the subsequent biopsies obtained within 6 months of urine collection. Results: A total of 447 correlative events (57% positive and 43% suspicious) corresponding to 773 cytology specimens and 337 patients were included. The morphology of the "suspicious" cells was similar to what has recently been reported in the literature as "atypical urothelial cells, cannot exclude HGUCA." A "suspicious" diagnosis was more often rendered than a "positive" one in voided specimens (80% vs 65%, respectively). The mean interval between cytology and biopsy was 31 days. On follow-up, 92% of "suspicious" diagnoses (176 of 191 diagnoses) and 90% of "positive" diagnoses (230 of 256 diagnoses) were found to have a biopsy with a diagnosis of carcinoma (low grade or high grade). A diagnosis of HGUCA followed a "suspicious" and a "positive" diagnosis in 79% and 86% of cases, respectively. Conclusions: A "suspicious" diagnosis as defined in the current study warrants close investigations and repeat biopsy to rule out HGUCA. In addition, the findings of the current study raise the question of the need for quantitative criteria for diagnosing HGUCA on cytology. abstract_id: PUBMED:32083810 Urine cytology suspicious for urothelial carcinoma: Prospective follow-up of cases using cytology and urine biomarker-based ancillary techniques. Background: Urine cytology results that are suspicious for urothelial carcinoma (UC) are challenging. The objective of this study was to elucidate the clinical significance of such results in patients who have a negative cystoscopy. Methods: In this prospective study, 83 patients who had urine cytology that was suspicious of UC and a negative cystoscopy underwent a second cystoscopy and urine evaluation by cytology, UroVysion fluorescence in situ hybridization (FISH) assay, FGFR3 (fibroblast growth factor receptor 3) and TERT (telomerase reverse transcriptase) mutations and an 8-gene expression classifier (GEC). Results from all techniques were compared with patients' clinical outcomes. Results: The presence of tumor was identified in 41% of patients; of these, 82% had tumors identified at their second evaluation (76% high-grade [HG] tumors), and 18% had tumors identified at a later follow-up (50% were HG tumors). After The Paris System for Reporting urinary Cytology (TPS) reclassification, 53 cytology results still had an indeterminate diagnosis (13 were suspicious for HGUC, and 40 had atypical urothelial cells (AUCs)]. Complete results from second evaluations using urine cytology, cytology-TPS, FISH, and GEC were available for 6 cases that were suspicious for HGUC and 34 cases that had AUCs. The sensitivity of these techniques to detect HG tumors in cases that were suspicious for HGUC was 100%, except for cytology-TPS, for which the sensitivity was 50%. The sensitivity of cytology and cytology-TPS to detect HG tumors in cases with AUCs was 33%, whereas the sensitivity of fluorescence in situ hybridization and GEC in these cases was 83% and 75%, respectively, to detect HG tumors at the second evaluation. Conclusions: The current results indicate the relevant clinical significance of indeterminate urine cytology findings and strongly suggest the use of complementary evaluations by urine biomarker-based, ancillary techniques to elucidate their significance. abstract_id: PUBMED:35652594 Cyto-histo correlation and false-negative urine: Before and after the Paris system for reporting urinary cytology. Background: The impact of implementing the Paris system (TPS) on the rate of discrepant cases in the negative for high-grade urothelial carcinoma (NHGUC) category that had a subsequent diagnosis of high-grade urothelial carcinoma (HGUC) on histology is not well studied. Methods: We adopted TPS in May 2019. We searched discrepant cases with negative urine cytology 2017-2019 in our cyto-histo correlation database. The urine cytology and follow-up biopsy/resection were reviewed by a cytopathologist who also did Genitourinary (GU) Pathology subspecialty sign-out. Voided urine and instrumented urine were included in this study. Results: There were total of 70 discrepant cases with negative cytology interpretation but HGUC on the subsequent biopsy or resected specimen. Following the TPS criteria, the rate of discrepant negative cytology cases increased from 6 cases between January 2017 and May 2019 to 64 cases after May 2019 when we adopted TPS. There were 2 discrepant negative cases in 2017, 3 cases in 2018, and 65 cases in 2019. Out of 65 cases in 2019, 64 cases were identified after May 2019. Additional 55 urine cytology slides were reviewed according to the TPS criteria, of which, the diagnoses remained unchanged in 45 (82%) cases and 10 (19%) cases were reassigned to either atypical or suspicious categories. The discrepancy was noted more on the instrumented urine and the upper tract urine. However, the false-negative rate rose faster in voided urine and lower tract urine. The risk of HGUC with the category of NHGUC was 0.03% in 2017, 0.05% in 2018, and 1.06% in 2019 at our institution. The increase in false-negative rate could not be attributed to a single cytopathologist. Conclusion: After adopting TPS for reporting urine cytology, there was an increase in HGUC from negative urine cytology which was subsequently confirmed on histology as cases of HGUC. The quality control of negative urines could be important monitoring the process when implementing TPS. abstract_id: PUBMED:28833848 An institutional experience with The Paris System: A paradigm shift from ambiguous terminology to more objective criteria for reporting urine cytology. Background: Urine cytology is a highly specific modality for diagnosing high-grade urothelial carcinoma (HGUC), but plagued by low sensitivity and wide inter-observer variability mainly ascribed to the lack of an established template of reporting. The Paris System (TPS) working group proposed such a template at the 2013 International Congress of Cytology, replete with objective criteria for categorising specimens into one of the seven categories: non-diagnostic, negative for HGUC, atypical urothelial cells, suspicious for HGUC, HGUC, low-grade urothelial neoplasm and others (including non-malignant entities). This study was undertaken to determine the impact of TPS criteria in the morphological interpretation of urine samples. Methods: A total of 255 voided urine specimens from 97 patients who had follow-up biopsies were re-assessed according to TPS criteria and correlated with the final histological diagnosis. Results: Sixty-three patients were diagnosed with HGUC, and 34 had low-grade papillary UC on biopsy. Earlier samples from 40 (41.2%) patients were categorised as merely "atypical" wheereas the "positive for malignancy" category was assigned to 33 (34%) patients. After re-evaluation of the same cases using TPS criteria, cytological features in 44 (69.8%) out of 63 HGUC patients were correctly recognised as HGUC and samples from additional seven patients were re-categorised as suspicious for HGUC (total 81%). The sensitivity of the HGUC category in predicting HGUC was 69.8% which rose to 81% when HGUC was grouped with suspicious for HGUC category. Conclusion: The criteria outlined by TPS facilitated the standardisation of urine cytology reporting and significantly increased the sensitivity of diagnosing HGUC. abstract_id: PUBMED:24128299 Reflex fluorescence in situ hybridization assay for suspicious urinary cytology in patients with bladder cancer with negative surveillance cystoscopy. Objective: To assess the ability of reflex UroVysion fluorescence in situ hybridization (FISH) testing to predict recurrence and progression in patients with non-muscle-invasive bladder cancer (NMIBC) with suspicious cytology but negative cystoscopy. Patients And Methods: Patients under NMIBC surveillance were followed with office cystoscopy and urinary cytology every 3-6 months. Between March 2007 and February 2012, 500 consecutive patients with suspicious cytology underwent reflex FISH analysis. Clinical and pathological data were reviewed retrospectively. Predictors for recurrence, progression and findings on subsequent cystoscopy (within 2-6 months after FISH) were evaluated using univariate and multivariate Cox regression. Results: In all, 243 patients with suspicious cytology also had negative surveillance cystoscopy. Positive FISH was a significant predictor of recurrence (hazard ratio [HR] = 2.35, 95% confidence interval [CI]: 1.42-3.90, P = 0.001) in multivariate analysis and for progression (HR = 3.01, 95% CI: 1.10-8.21, P = 0.03) in univariate analysis, compared with negative FISH. However, positive FISH was not significantly associated with evidence of tumour on subsequent surveillance cystoscopy compared with negative FISH (odds ratio = 0.8, 95% CI: 0.26-2.74, P = 1). Conclusions: Positive FISH predicts recurrence and progression in patients under NMIBC surveillance with suspicious cytology but negative cystoscopy. However, there was no association between the FISH result and tumour recurrence in the immediate follow-up period. Reflex FISH testing for suspicious cytology might have limited ability to modify surveillance strategies in NMIBC. abstract_id: PUBMED:19044300 Usefulness of urine cytology for bladder carcinoma diagnosis: comparative study with biopsy Objectives: Traditionally, urine cytology has been considered as the gold standard for bladder cancer screening. However, new methods are playing new roles in these cases. In order to assess the value of cytology of voided urine we performed one comparative study between cytology and biopsy. Methods: We retrospectively analyzed the results of urine cytology and biopsy follow-up in 109 patients. All cytologies were from voided urine. They were cytocentrifuged and stained with Papanicolaou stain. Results: We found 70 true positive cases and 24 true negative cases. Sensitivity was calculated to be 97% and specifity 96-100%. 12 cases had the first cystoscopy test and biopsy negative, as the cancer was diagnosed in the second biopsy. Conclusions: Patients with clearly positive urine cytology, which was not confirmed in a first cystoscopic study, should be carefully followed up to identify a possible bladder or upper urinary tract cancer. The urine cytology still has a significant role as the gold standard for bladder cancer screening. Answer: Suspicious urine cytology, class III, in patients with bladder cancer presents a diagnostic challenge. The interpretation of class III cytology as negative or positive significantly impacts the sensitivity and specificity of urine cytology in detecting bladder cancer. When class III cytology is considered positive, the sensitivity of urine cytology improves significantly, suggesting that patients with class III cytology are at a substantial risk for coexisting malignancy (PUBMED:12201938). Nearly half of the follow-up patients with suspicious class III cytology had recurrence, indicating that this category should be considered positive alongside classes IV and V (PUBMED:12201938). However, the use of UroVysion fluorescence in situ hybridization (FISH) as a reflex test in patients with suspicious urine cytology did not justify a more aggressive workup, as a negative UroVysion test did not rule out the presence of low-grade or high-grade urothelial carcinoma (PUBMED:19347824). The predictive value of positive and suspicious urine cytology categories shows that both are associated with a high risk of malignancy, but there is a statistically significant difference between them, supporting the use of two separate categories (PUBMED:32558388). The dilemma of suspicious urine cytology in patients being followed for bladder cancer is further complicated by the fact that at least a third of these patients had a recurrence, indicating that they are a risk group for recurrence (PUBMED:11820413). The "suspicious for urothelial carcinoma" cytology category has been shown to warrant close investigations and repeat biopsy to rule out high-grade urothelial carcinoma (PUBMED:24909774). In conclusion, the evidence suggests that suspicious urine cytology, class III, should be considered a positive indicator for coexisting malignancy in patients with bladder cancer, warranting further investigation and close follow-up (PUBMED:12201938; PUBMED:32558388; PUBMED:11820413; PUBMED:24909774).
Instruction: Pediatric burns research: A history or an evolution? Abstracts: abstract_id: PUBMED:26122558 Pediatric burns research: A history or an evolution? Background/purpose: Pediatric burns research has increasingly been recognized as a sub-specialty of its own. The aim of this study was to assess and analyze the publication patterns of the pediatric burns literature over the last six decades. Methods: A search strategy for the Web of Science database was designed for pediatric burns publications, with output analyzed between two periods: 1945-1999 (period 1) and 2000-2013 (period 2). Results: There were 1133 and 1194 publications for periods 1 (1945-1999) and 2 (2000-2013), respectively. The mean citation counts of the top 50 publications were 77 (range 45-278) and 49 (range 33-145) for periods 1 and 2, respectively. There were 26 and 20 authors with two or more publications in the top 50 list in periods 1 and 2, respectively. Of these there are two authors that have published 47 papers in both combined time-periods. There were 29 and 9 journals that have published 50% of the publications for time-period 1 and 2 respectively. In period 2, there were two burns journals that have published 37.2% of the total articles. Conclusions: Pediatric burns research has evolved from an associated, dispersed entity into a consolidated sub-specialty that has been successfully integrated into mainstream burns journals. abstract_id: PUBMED:38074958 Pediatric Burns-Time to Collaborate Together. How to cite this article: Baalaaji M. Pediatric Burns-Time to Collaborate Together. Indian J Crit Care Med 2023;27(12):873-875. abstract_id: PUBMED:19949498 Pediatric facial burns: Is facial transplantation the new reconstructive psychosurgery? Introduction: Current pediatric burn care has resulted in survival being the expectation for most children. Composite tissue allotransplantation in the form of face or hand transplantation may present opportunities for reconstructive surgery of patients with burns. The present paper addresses the question "Could facial transplantation be of therapeutic benefit in the treatment of pediatric burns associated with facial disfigurement?" Methods: Therapeutic benefit of facial transplantation was defined in terms of psychiatric adjustment and quality of life (QOL). To ascertain therapeutic benefit, studies of pediatric burn injury and associated psychiatric adjustment and QOL in children, adolescents and adults with pediatric burns, were reviewed. Results: Pediatric burn injury is associated with anxiety disorders, including post-traumatic stress disorder and depressive disorders. Many patients with pediatric burns do not routinely access psychiatric care for these disorders, including those for psychiatric assessment of suicidal risk. A range of QOL outcomes were reported; four were predominantly satisfactory and one was predominantly unsatisfactory. Discussion: Facial transplantation may reduce the risk of depressive and anxiety disorders other than post-traumatic stress disorder. Facial transplantation promises to be the new reconstructive psychosurgery, because it may be a surgical intervention with the potential to reduce the psychiatric suffering associated with pediatric burns. Furthermore, patients with pediatric burns may experience the stigma of disfigurement and psychiatric conditions. The potential for improved appearance with facial transplantation may reduce this 'dual stigmata'. Studies combining surgical and psychiatric research are warranted. abstract_id: PUBMED:26170779 Pediatric burns in Mosul: an epidemiological study. A cross-sectional study was conducted to determine the characteristics and case fatality rate of pediatric burns in Mosul, Iraq. The study group was burn patients aged 14 years and under who were admitted to the Burns Unit in Al-Jamhoori Teaching Hospital from the 1(st) of March 2011 to the 1(st) of March 2012. Of the 459 emergency burn admissions, 209 (45.53%) were pediatric patients up to 14 years of age, with a mean age of 4.73±3.61 years. Scald was the most common type of burn and occurred mainly in domestic settings. The mean total body surface area (TBSA) burned was 19.73±17.15%. Thirty-five patients died during the study period, giving a case fatality rate of 16.75%. The maximum number of deaths occurred in the 2-4 years age group. The case fatality rate was high in patients having more than 40% TBSA involvement. Flame burns were significantly more fatal than scalds, with a fatality rate of 35.35% and 12.05% respectively; (p=0.0001). In conclusion, given that most pediatric burn accidents occur at home, burn prevention should be focused on improving living conditions and on providing an educational program for parents. abstract_id: PUBMED:23875121 Pediatric burns mortality risk factors in a developing country's tertiary burns intensive care unit. Aim: This study aimed at identifying risk factors related to pediatric burns mortality in a middle income country such as Ghana. Methods: The data for the three years retrospective study (May 2009 - April 2012) was obtained from the pediatric burn admissions records and patients' folders of the Reconstructive Plastic Surgery &amp; Burns Unit (RPSBU), Komfo Anokye Teaching Hospital (KATH), Ghana. Data retrieved included: Demographic features, Total Burned Surface Area (TBSA) incurred; Aetiology of burns; Duration of the admission; Outcome of admission; Part of the body affected and Cost incurred. Ethical approval for this study was obtained from the KNUST-SMS/KATH Committee on Human Research, Publications and Ethics. Data analyses were performed with SPSS 17.0 version. Results: Information on 197 patients was completely retrieved for the study. Burns mortality rate for the study was identified to be 21.3% (N=42). The mean age of the 42 dead patients was 3.7±0.3 years, ranging from 0-13 years, while, males (54.8%, N= 23) outnumbered females (45.2%, N=19). The TBSA burned interquartile range was 48%. In terms of etiology of burns Scald (73.8%, N=31) was the commonest cause of injury. Mortality risk factors identified were Age &lt;6 years (P=0.028); Scald especially hot water and soup (P=0.016); TBSA &gt;36% (P=0.028) and Inhalation injury (P=0.040). Conclusion: Age, scald, TBSA and Inhalation Injury were identified as pediatric burns mortality risk factors in a developing country such as Ghana's RPSBU. These identified factors will serve as a guideline for plastic surgeons and other health professionals practicing in countries such as Ghana. abstract_id: PUBMED:30824139 Early resuscitation and management of severe pediatric burns. Pediatric burns are a leading cause of injury and mortality in children in the United States. Prompt resuscitation and management is vital to survival in severe pediatric burns. Although management principles are similar to their adult counterparts, children have unique pathophysiologic responses to burn injury thus an understanding of the differences in fluid resuscitation requirements, airway management, burn and wound care is essential to optimize their outcomes. abstract_id: PUBMED:32173069 Craniofacial and neck burns in the pediatric population. Introduction: Burn injuries can present with catastrophic physical and psychiatric harm with extensive, long-term sequelae. The pediatric population may especially be at-risk given this population's early neurocognitive and behavioral state of development. Innovations in treatment modalities and the development of evidence-based guidelines have helped mitigate burn morbidity and mortality in the pediatric population. Unfortunately, a surprising dearth of literature identifies risk-factors, epidemiological data, injury mechanisms, and prognostic factors within the pediatric population in the setting of craniofacial burns. Methods: An analysis of emergency department visits under the National Electronic Injury Surveillance System was conducted for the most recent 5-year period available (2014-2018). Available information includes demographical data, such as age and sex, mechanism of injury, visit circumstances, as well as visit disposition. Additionally, details surrounding the injury, including type of burn and anatomical location of injury, were compared. Results: After a review of results, a total of 2599 patients were included for analysis. Our study shows that infants and young children are at increased risk for grave injury 27.3% and 13% of infants and toddlers transferred or admitted, respectively, p &lt; 0.05). 59.8% of infant burns in particular were caused by liquid or kitchen products, while 44.5% of burns in toddlers were caused by chemical products (p &lt; 0.05 for both). Conversely, adolescents are at greater risk of burns in the setting of occupational and hobby-based activities (20.4% of adolescent burns). Conclusions: Craniofacial burns in the pediatric population may present with complex pathology and sometimes necessitate advanced care. Presentations and prognoses are different dependent upon age and injury mechanism. These findings may serve as important framework in the establishment of guidelines for medical and legislative reform. abstract_id: PUBMED:29232903 Pediatric Thermal Burns and Treatment: A Review of Progress and Future Prospects. Burn injuries are a devastating critical care problem. In children, burns continue to be a major epidemiologic problem around the globe resulting in significant morbidity and death. Apparently, treating these burn injuries in children and adults remains similar, but there are significant physiological and psychological differences. The dermal layer of the skin is generally thinner in neonates, infants, and children than in adults. Enhanced evaporative loss and need for isotonic fluids increases the risk of hypothermia in the pediatric population. The pain management of the children with major burns challenges the skills of the personnel of every unit. Managing these wounds requires intensive therapeutic treatment for multi-organ dysfunction, and surgical treatment to prevent sepsis and other complications that further delay wound closure. Alternatives to the practice of donor site harvest and autografting for the treatment of severe burns and other complex skin defects are urgently needed for both adult and pediatric populations. This review article focuses on thermal burn pathophysiology and pain management and provides an overview of currently approved products used for the treatment of pediatric burn wounds. A new promising approach has been presented as a first-line therapy in the treatment of burns to reduce surgical autografting in pediatric patients. abstract_id: PUBMED:33736899 Analysis of 12,661 pediatric burns in Wuhan institute of burns: A retrospective study of fifteen years. Background: By analyzing the epidemic characteristics of pediatric burns in a burn center serving large areas of Hubei Provence and partly surrounding provinces around Wuhan City, the aim of this study is to provide better strategies for the prevention and care for pediatric burns. Methods: Pediatric burn patients who were younger than 13 years old in Wuhan Third Hospital from 2004 to 2018 were included. Demographic and clinical data were collected, analyzed and compared among groups. Results: 12,661 pediatric burns, mean aged 2.37 ± 2.30 y, were admitted during the 15 years, with 7973 boys (62.97%) and 4688 girls (37.03%). By age groups, infant (&lt;3y), preschool (&gt;3-6y) and school children (&gt;6-13y) accounted for 81.12% (10,270 cases), 12.08% (1530 cases) and 6.80% (861 cases) respectively. The most cause of injury was scalds (11,232, 88.71%), followed by flame burns (917, 7.24%), electric burns (201, 1.59%), contact burns (127, 1.00%), firework or firecracker (124, 0.98%), chemical burns (40, 0.32%) and hot crush injury (20, 0.16%). The mean age of firework or firecracker burns was 6.19 ± 2.83y, electric burns 5.18 ± 3.31y, flame burns 4.73 ± 3.53y, hot crush injury 3.85 ± 2.37 y, contact burns 3.66 ± 3.35y, chemical burns 3.03 ± 2.50y, and scald 2.06 ± 1.91y. Over half cases (57.34%) were small burns less than 10% total burn surface area (TBSA) and the larger TBSA, the fewer number of patients. The mortality rate was 0.11% and correlated with TBSA, age and etiology. The mean length of stay (LOS) was 12.63 ± 11.91 days and highly correlated with etiology and TBSA. The mean hospital cost was 11210.76 ± 21248.87 RMB (about 1600 USD) or 1626.91 ± 3957.59 RMB (about 230 USD) per % TBSA, which was correlated with depth of burn, TBSA, etiology, LOS and age. Conclusion: Pediatric burns in central China was still common and even increasing. Majority of the pediatric burn victims were boys under three years old, while the mean ages of different etiologies varied from about 2-6 years old. Education and prevention aiming the high risks are the key point to decrease pediatric burns. abstract_id: PUBMED:32245570 Surgical management and epidemiological trends of pediatric electrical burns. Electrical burns are an uncommon yet devastating class of burn injuries. Shriners Hospitals for Children - Boston a pediatric burn center in New England and cares for both domestic and international patients. We utilized our experience over the past 13 years to review surgical management and evaluate historical trends for this unique patient group. A retrospective chart review was conducted on 68 patients aged 0-18 years admitted to our pediatric center with an electrical burn from January 2005 to December 2018. We collected and analyzed data pertaining to patient demographics, burn characteristics, clinical course, and surgical interventions. Our cohort included 31 patients from the US (46%) and 37 transferred from a variety of international countries (54%). The majority of US patients were admitted with low voltage burns (81%), whereas the majority of international patients were admitted with high voltage burns (95%). Acute and reconstructive surgical interventions were performed mainly for high voltage burns (94% and 89%). Based on our experience, epidemiology and surgical intervention varied based on voltage of the burn injury and residence of the patient. We have seen a reduction in US pediatric high voltage injuries over the past two decades, likely due to enhancement of electrical safety. It may be possible to use a similar strategy to reduce the frequency of severe high voltage electrical burn injuries in developing countries. Answer: Pediatric burns research has undergone a significant evolution over the years. Initially, pediatric burns research was not recognized as a distinct sub-specialty, but over time, it has become a consolidated field with a growing body of literature. This evolution is evident from the publication patterns observed over the last six decades. A study assessing these patterns found that there were 1133 publications in the period from 1945 to 1999, and 1194 publications from 2000 to 2013, indicating a steady increase in research output (PUBMED:26122558). The research has been successfully integrated into mainstream burns journals, with two burns journals publishing 37.2% of the total articles in the latter period (PUBMED:26122558). The focus of pediatric burns research has also shifted to address various aspects of burn care, including epidemiology, risk factors, management, and outcomes. For instance, studies have been conducted to identify mortality risk factors in pediatric burns in developing countries (PUBMED:23875121), analyze the epidemiological characteristics of pediatric burns (PUBMED:33736899), and review the progress and future prospects in the treatment of pediatric thermal burns (PUBMED:29232903). Furthermore, research has delved into the psychiatric impact of pediatric burns, exploring the potential therapeutic benefits of facial transplantation for children with facial disfigurement due to burns (PUBMED:19949498). The unique pathophysiological responses of children to burn injuries have also been recognized, leading to an understanding of the differences in fluid resuscitation requirements, airway management, and wound care (PUBMED:30824139). Overall, pediatric burns research has evolved from a dispersed entity into a specialized sub-field with a focus on improving prevention, care, and outcomes for pediatric burn patients. This evolution is reflected in the increasing number of publications, the consolidation of research within specialized journals, and the broadening scope of topics being investigated.
Instruction: Is the standard retention of syndesmotic positioning screws after ankle fracture fixation safe and feasible? Abstracts: abstract_id: PUBMED:25531514 Is the standard retention of syndesmotic positioning screws after ankle fracture fixation safe and feasible? A retrospective cohort study in 140 consecutive patients at a North American trauma centre Background: Unstable ankle injuries with associated disruption of the distal-fibular syndesmosis are typically managed by adjunctive placement of temporary syndesmotic positioning screws. The widespread notion that positioning screws must be removed by default after healing of the syndesmosis remains a topic of debate which lacks scientific support. The present study was designed to test the hypothesis that syndesmotic positioning screws are safely retained per protocol in asymptomatic patients. Patients And Methods: A retrospective analysis of an institutional prospective database was performed during a 5-year time-window at an academic level 1 trauma centre in the United States. All ankle fractures requiring surgical fixation were included in the analysis. The primary outcome parameter consisted of the rate of elective hardware removal for syndesmotic positioning screws within 6 months after surgical fixation. Results: A total of 496 consecutive patients with 496 isolated ankle fractures managed by surgical fixation were included in this study. Of these, 140 injuries were managed by placement of syndesmotic positioning screws. Within 6 months follow-up, 17.1% of all syndesmotic screws were found to be radiographically broken, and 13.6% of syndesmotic screws revealed radiographic signs of loosening. Only 2 patients (1.4%) required the elective removal of symptomatic positioning screws within 6 months of surgical fracture fixation. Conclusion: Despite the high rate of radiographic complications related to breaking or loosening of syndesmotic screws in almost one third of all cases, more than 98% of all patients remain asymptomatic and do not require a scheduled hardware removal. The routine removal of syndesmotic positioning screws does not appear to be justified from a patient safety perspective. abstract_id: PUBMED:37679997 The Clinical Outcomes of Syndesmotic Flexible Fixation for Syndesmotic Injury With Ankle Fracture. Background: We evaluated whether the quality of syndesmotic reduction affects the short-term clinical outcomes of flexible fixation in patients with a rotational ankle fracture. Methods: This study included 59 patients (32 men and 27 women) who underwent syndesmotic flexible fixation. The degree of syndesmotic reduction was evaluated on computed tomography (CT) images acquired within 3 days after surgery. We measured the divergence between anterior and posterior incisura at 1 cm above the distal tibial articular joint, then evaluated the degree of fibular rotation relative to the tibia. At 1 year after surgery, an objective clinical evaluation was performed using the American Orthopaedic Foot &amp; Ankle Society (AOFAS) score, the visual analog scale (VAS), and the Foot and Ankle Outcome Score (FAOS). Additionally, repeat arthroscopy was performed during routine implant removal at nearly 1 year postoperatively. Results: Among 59 patients who underwent syndesmotic flexible fixation, 56 patients had syndesmotic stability on repeat arthroscopy. At 1 year postoperation, AOFAS, VAS, and FAOS scores were, respectively, 90, 2.0, and 94 in the accurate reduction group (n = 24) and 90, 1.0, and 94.5 in the malreduction group (n = 35). Conclusion: Reduction quality after syndesmotic injury with flexible fixation, as determined by early postoperative CT imaging, did not affect patient prognosis. In this cohort, syndesmotic reduction and flexible fixation may produce good clinical outcomes in patients with syndesmotic injury and ankle fracture. Level Of Evidence: Level III, retrospective cohort study. abstract_id: PUBMED:37724307 Predictors of Hardware Removal in Orthopaedic Trauma Patients Undergoing Syndesmotic Ankle Fixation With Screws. Background: Indications for removal of syndesmotic screws are not fully elucidated. This study aimed to determine factors related to elective syndesmotic screw removal. Methods: Patients who underwent fixation of ankle syndesmotic injuries were included. Screw removal was offered after a minimum of 12 weeks after surgery for pain, stiffness or patient desire to remove painful or broken hardware. Patient demographics, surgical data, distance of the syndesmotic screw from the joint, location of the screw at the physeal scar, and number of syndesmotic screws placed were collected for all patients. Bivariate and multivariate analyses were performed to determine the relationship between patient characteristics and screw removal and independent predictors of hardware removal. Results: Of 160 patients, 60 patients (38%) with an average age of 36.1 (range: 18-84) years underwent elective syndesmotic screw removal at a mean of 7 (range, 3-47) months after initial fixation. The most common reason for screw removal (50/60 patients) was ankle stiffness and pain (83%). Patients who underwent screw removal were more likely to be younger (36.1 years ± 13.0 vs 46.6 years ± 18.2, P &lt; .001) and have a lower ASA score (2 ± 0.8 vs 2.1 ± 0.7, P = .003) by bivariate analysis. Of patients who underwent screw removal, 21.7% (13/60) had a broken screw at the time of removal. Whether the screw was placed at the physeal scar was not significantly associated with patient decision for hardware removal (P = .80). Conclusion: Younger and healthier patients were more likely to undergo elective removal of syndesmotic hardware. Screw distance from joint and screw placement at the physeal scar were not significantly associated with hardware removal. Level Of Evidence: Level III, retrospective cohort study. abstract_id: PUBMED:28390687 Overtightening of the syndesmosis revisited and the effect of syndesmotic malreduction on ankle dorsiflexion. Background: Ankle syndesmotic injuries are a significant source of morbidity and require anatomic reduction to optimize outcomes. Although a previous study concluded that maximal dorsiflexion during syndesmotic fixation was not required, methodologic weaknesses existed and several studies have demonstrated improved ankle dorsiflexion after removal of syndesmotic screws. The purposes of the current investigation are: (1) To assess the effect of compressive syndesmotic screw fixation on ankle dorsiflexion utilizing a controlled load and instrumentation allowing for precise measurement of motion. (2) To assess the effect of anterior &amp; posterior syndesmotic malreduction after compressive syndesmotic screw fixation on ankle dorsiflexion. Material And Methods: Fifteen lower limb cadaveric leg specimens were utilized for the study. Ankle dorsiflexion was measured utilizing a precise micro-sensor system after application of a consistent load in the (1) intact state, (2) after compression fixation with a syndesmotic screw and (3) after anterior &amp; (4) posterior malreduction of the syndesmosis. Results: Following screw compression of the nondisplaced syndesmosis, dorsiflexion ROM was 99.7±0.87% (mean±standard error) of baseline ankle ROM. Anterior and posterior malreduction of the syndesmosis resulted in dorsiflexion ROM that was 99.1±1.75% and 98.6±1.56% of baseline ankle ROM, respectively. One-way ANOVA was performed showing no statistical significance between groups (p-value=0.88). Two-way ANOVA comparing the groups with respect to both the reduction condition (intact, anatomic reduction, anterior displacement, posterior displacement) and the displacement order (anterior first, posterior first) did not demonstrate a statistically significant effect (p-value=0.99). Conclusion: Maximal dorsiflexion of the ankle is not required prior to syndesmotic fixation as no loss of motion was seen with compressive fixation in our cadaver model. Anterior or posterior syndesmotic malreduction following syndesmotic screw fixation had no effect on ankle dorsiflexion. Poor patient outcomes after syndesmotic malreduction may be due to other factors and not loss of dorsiflexion motion. Level Of Evidence: IV. abstract_id: PUBMED:34522560 Suture-Button Versus Syndesmotic Screw Fixation of Ankle Fractures: A Comparative Retrospective Review Over One Year. Background Syndesmotic fixation is essential in the management of unstable ankle fractures with syndesmotic disruption. It can be achieved either by rigid fixation with screws or dynamic fixation with suture-button devices. Syndesmosis screws are considered the gold standard for the stabilisation and fixation of distal tibiofibular syndesmotic injuries. We use both types of devices in our unit for the stabilisation of syndesmotic injuries. In our department, Arthrex TightRope® (Arthrex, Naples, FL) is the preferred suture-button device for the dynamic fixation of distal tibiofibular syndesmosis. Some studies have reported better outcomes with the use of suture-button devices. In this study, we aim to compare the outcomes with the use of both these devices. Methodology This retrospective cohort study was conducted from 1 June 2019 until 31 May 2020 which included all patients who underwent syndesmotic fixation. Data were collected from the Northern Ireland Electronic Care Record and the Northern Ireland Picture Archiving and Communication System. Primary outcomes included reoperation rates due to failure of syndesmotic hardware. Secondary outcomes included the comparison of complications (e.g. infection, wound problems, or loss of reduction) radiographic fixation/stability, duration of follow-up, and significant ongoing symptoms at the time of discharge from the clinic. Results A total of 53 patients underwent syndesmotic stabilisation during this period. In total, 34 patients underwent fixation with TightRope and 19 underwent syndesmotic screw fixation. Three patients (9%) had reoperation due to any cause in the TightRope group compared to four patients (21%) in the syndesmotic screw group. All four patients in the syndesmotic screws group underwent implant removal due to failure or symptoms due to implant. Only one patient (3%) in the TightRope group underwent implant removal due to ongoing pain [the other two (6%) patients included one with lateral ankle abscess and one with long distal fibula screws, both unrelated to the use of TightRope and TightRope left in situ at reoperation]. Rediastasis was reported in one (5.2%) patient with syndesmotic screws. The median duration of follow-up was 82 days in the syndesmotic screw group compared to 77.5 days in the TightRope group. Conclusions Based on our data, we report a lower incidence of implant-related problems and the need for implant removal with the use of TightRope. Quicker return to weight-bearing and slightly better radiographic stabilisation were noted as well. Our study has its limitations in terms of a small number of patients and the absence of functional outcomes as these were not routinely recorded at the time of discharge from the clinic. abstract_id: PUBMED:27469402 Changes in the syndesmotic reduction after syndesmotic screw fixation for ankle malleolar fractures: One-year longitudinal evaluations using computer tomography. Introduction: To evaluate time-dependent changes in the syndesmotic reduction after syndesmotic screw fixation and one year after screw removal for ankle malleolar fractures, and to assess whether the incidence of syndesmotic malreduction changes depending on the measurement method. Methods: We assessed twenty patients who underwent syndesmotic screw fixation for ankle fractures. The syndesmotic screws were removed after six weeks of the fracture surgery. Syndesmotic reduction was assessed within two weeks of the fracture surgery and one year after the screw removal using the axial computer tomographic images. Side-to-side differences in the anterior and posterior tibiofibular distances, anteroposterior fibular translation, and fibular rotation were measured. Results: The mean anterior tibiofibular distance was 0.7mm after syndesmotic fixation. It increased to 1.9mm at one year after screw removal (p=0.002). After syndesmotic fixation, four ankles had malreduction of the anterior tibiofibular distance, including three ankles with widening and one with overtightening. At one year, eight ankles had malreduction, all of whom had widening. The other measurement values did not change over time (0.1mm vs. 0.6mm for the posterior tibiofibular distance, 0.2mm vs. 0.3mm for the anteroposterior fibular translation, and 0.7° vs. 0° for the fibular rotation). The incidences of malreduction were significantly different depending on the definition of malreduction, ranging from 10% to 50% after syndesmotic fixation (p=0.01) and from 20% to 60% at one year after screw removal (p=0.02). Conclusions: The anterior tibiofibular distance widened after one year of syndesmotic screw removal. The incidence of malreduction varied depending on the measurement method. abstract_id: PUBMED:24875967 Ankle syndesmotic fixation using two screws: risk of injury to the perforating branch of the peroneal artery. Trans-syndesmotic screws are commonly used to repair syndesmosis ruptures and stabilize the ankle joint. Just as with any surgery, the neurovascular structures can be compromised, causing complications. We evaluated the position of the perforating branch of the peroneal artery to define the risk of arterial compromise during placement of 2 trans-syndesmotic screws. In 37 cadaveric specimens, 2 trans-syndesmotic screws were inserted 2 and 4 cm proximal to the ankle joint. The distances between the perforating branch of the peroneal artery and the screws and the other anatomic landmarks were measured. Significant differences were calculated between the male and female limbs, and regression analysis was used to determine the significant associations between the tibial length and screw location. The perforating branch of the peroneal artery passed through the interosseous membrane 3.42 ± 0.6 cm proximal to the tibial plafond. The artery was located up to 4 mm from the superior and inferior screws 51.4% and 10.8% of the time, respectively. A greater percentage of male specimens displayed close proximity between the artery and the superior screw, and the distance of the artery from the distal fibula was statistically significant compared with the distance in the female specimens. Regression analysis revealed that the greater the tibial length, the closer the superior screw was to the artery, with a negative correlation discovered for the inferior screw. We concluded that superior screw placement increased the risk of injuring the perforating branch of the peroneal artery injury, and the likelihood of injuring the artery with the inferior screw increased as the length of the tibia decreased. abstract_id: PUBMED:27803225 Should syndesmotic screws be removed after surgical fixation of unstable ankle fractures? a systematic review. Aims: In approximately 20% of patients with ankle fractures, there is an concomitant injury to the syndesmosis which requires stabilisation, usually with one or more syndesmotic screws. The aim of this review is to evaluate whether removal of the syndesmotic screw is required in order for the patient to obtain optimal functional recovery. Materials And Methods: A literature search was conducted in Medline, Embase and the Cochrane Library for articles in which the syndesmotic screw was retained. Articles describing both removal and retaining of syndesmotic screws were included. Excluded were biomechanical studies, studies not providing patient related outcome measures, case reports, studies on skeletally immature patients and reviews. No restrictions regarding year of publication and language were applied. Results: A total of 329 studies were identified, of which nine were of interest, and another two articles were added after screening the references. In all, two randomised controlled trials (RCT) and nine case-control series were found. The two RCTs found no difference in functional outcome between routine removal and retaining the syndesmotic screw. All but one of the case-control series found equal or better outcomes when the syndesmotic screw was retained. However, all included studies had substantial methodological flaws. Conclusions: The currently available literature does not support routine elective removal of syndesmotic screws. However, the literature is of insufficient quality to be able to draw definitive conclusions. Secondary procedures incur a provider and institutional cost and expose the patient to the risk of complications. Therefore, in the absence of high quality evidence there appears to be little justification for routine removal of syndesmotic screws. Cite this article: Bone Joint J 2016;98-B:1497-1504. abstract_id: PUBMED:34404589 The Clinical Efficacy of Suture-Button Fixation and Trans-Syndesmotic Screw Fixation in the Treatment of Ankle Fracture Combined With Distal Tibiofibular Syndesmosis Injury: A Retrospective Study. Twenty percent of ankle fractures present with concomitant syndesmosis injury which results in poor clinical outcomes. Surgical stabilization of the syndesmosis can be achieved with either trans-syndesmotic screws or a suture button device. The aim of this study is to investigate the clinical efficacy of suture button fixation and trans-syndesmotic screw fixation in the treatment of ankle fracture combined with distal tibiofibular syndesmosis injury. A retrospective analysis was conducted by enrolling 76 patients with ankle fractures combined with distal tibiofibular syndesmosis injury who were admitted in our trauma center from January 2018 to January 2019, including 34 cases of suture button group and 42 cases of the syndesmotic screw group with a mean follow-up period of 16 ± 7 (range 12-21) months. The demographic data included gender, age, injury mechanism, AO classification and the operation duration were recorded, the radiographic and clinical outcomes were determined by tibiofibular clear space, tibiofibular overlap distance, complications, and the Olerud-Molander Ankle Score at the last follow-up. All the indexes were compared between the 2 groups to discover the related statistical differences. With the numbers available, no significant difference could be detected in the surgical duration, tibiofibular clear space, tibiofibular overlap distance, total complication rate, and middle-term Olerud-Molander Ankle scores between the 2 groups. However, the suture button fixation group showed higher early stage Olerud-Molander Ankle scores (p = .027) and shorter full weightbearing time (p = .018) than that of syndesmotic screw fixation group. Considering the outcomes, we conclude that the suture button fixation not only shows equivalent efficacy to the traditional syndesmotic screw, but also has advantages of allowing early weightbearing, low requirements for routine removal. abstract_id: PUBMED:17728986 Fixation of syndesmotic disruption using bioresorbable screws Objective: Fixation of syndesmotic disruption using bioresorbable screws to avoid secondary surgery for removal of the positioning screw. Indications: Syndesmotic ligament disruption, i.e., associated with ankle fractures (AO type B and C injuries). Contraindications: Allergies to bioresorbable materials such as sutures containing polylactic acids. Open fractures with severe comminution or bone loss. Successful conservative management in older patients. Nonambulatory patients. Surgical Technique: In case of a syndesmotic disruption, one or two cannulated bioresorbable positioning screws are placed bicortically under image intensifier control to allow healing of the distal ligamentous junction between the fibula and tibia. Screws must be placed proximal to the cartilaginous part of the tibiofibular joint to avoid uneventful cartilage damage and arthrosis. Postoperative Management: Postoperatively, most patients are treated with a non-weight-bearing cast for 6 weeks. Casts are changed after 1 and 2 weeks postoperatively. After 6 weeks, weight bearing is initiated. Return to physically demanding work and sports is allowed after 10-12 weeks. Results: Eight patients with a 6- to 12-month follow-up period showed good clinical and radiologic results. Answer: The standard retention of syndesmotic positioning screws after ankle fracture fixation appears to be safe and feasible based on the findings from several studies. A retrospective cohort study conducted at a North American trauma center found that despite a high rate of radiographic complications related to breaking or loosening of syndesmotic screws in almost one-third of all cases, more than 98% of patients remained asymptomatic and did not require a scheduled hardware removal, suggesting that routine removal of syndesmotic positioning screws may not be justified from a patient safety perspective (PUBMED:25531514). Another study evaluating the clinical outcomes of syndesmotic flexible fixation for syndesmotic injury with ankle fracture found that the quality of syndesmotic reduction did not affect short-term clinical outcomes, indicating that flexible fixation may produce good clinical outcomes in patients with syndesmotic injury and ankle fracture (PUBMED:37679997). A study aimed at determining factors related to elective syndesmotic screw removal found that younger and healthier patients were more likely to undergo elective removal of syndesmotic hardware, but screw distance from the joint and screw placement at the physeal scar were not significantly associated with hardware removal (PUBMED:37724307). Research on the effect of syndesmotic malreduction on ankle dorsiflexion concluded that maximal dorsiflexion of the ankle is not required prior to syndesmotic fixation as no loss of motion was seen with compressive fixation in a cadaver model (PUBMED:28390687). A comparative retrospective review over one year reported a lower incidence of implant-related problems and the need for implant removal with the use of suture-button devices compared to syndesmotic screw fixation (PUBMED:34522560). A longitudinal evaluation using computer tomography found that the anterior tibiofibular distance widened after one year of syndesmotic screw removal, but the incidence of malreduction varied depending on the measurement method (PUBMED:27469402). A systematic review concluded that the available literature does not support routine elective removal of syndesmotic screws, as the studies reviewed did not find a difference in functional outcome between routine removal and retaining the syndesmotic screw (PUBMED:27803225). In summary, the retention of syndesmotic positioning screws after ankle fracture fixation is generally considered safe and feasible, with a low incidence of symptomatic complications necessitating removal.
Instruction: Retinal hemorrhage asymmetry in inflicted head injury: a clue to pathogenesis? Abstracts: abstract_id: PUBMED:14571228 Retinal hemorrhage asymmetry in inflicted head injury: a clue to pathogenesis? Objective: To determine whether regional cerebral parenchymal injury patterns correlate with the distribution of retinal hemorrhages after inflicted head injury. Study Design: Retrospective case series of funduscopic photographs and serial computerized tomographic imaging of 14 children with confirmed inflicted head injury. Main Outcome Measures: Retinal Hemorrhage Score per eye and per subject, visual field examination, regional patterns of parenchymal injury on computerized tomographic scans and necropsy, and retinal/optic nerve sheath hemorrhage distribution at necropsy. Results: Ten of 14 children had retinal hemorrhages (71%); 90% were asymmetric (mean retinal score, 4.89 vs 2.56; P=.006). Retinal hemorrhages were maximal on the side of greatest cerebral injury in seven of 10 children initially. Subsequent imaging asymmetry predicted retinal hemorrhage distribution in all eight survivors. Children's Coma Scores, apnea or cardiorespiratory arrest, initial hemoglobin, and plasma glucose concentration did not predict laterality. Asymmetry was greatest if dilated ophthalmoscopy was performed during the first 24 hours (P=.03). Visual outcome was poor; three had homonymous hemianopia and four had cortical visual loss, all correlating with parenchymal atrophy patterns. Conclusion: The distribution of retinal hemorrhages after inflicted head injury correlates with acute and evolving regional cerebral parenchymal injury patterns. abstract_id: PUBMED:20024631 Pathology of retinal hemorrhage in abusive head trauma. Abusive head injury, characterized by repeated acceleration-deceleration forces, is associated with retinal hemorrhages as demonstrated in many clinical and postmortem studies. The theory that vitreoretinal traction is the major factor in the pathogenesis of retinal hemorrhages is presently the most widely accepted explanation based on different lines of research. Postmortem examination of the eye and orbital structures is essential for recognizing abusive head injury and also for identifying other possible medical conditions which can cause retinal hemorrhage. abstract_id: PUBMED:24139052 The pathogenesis of retinal and subdural haemorrhage in non-accidental head injury in infancy: assessment using Bradford Hill criteria. Fatal non-accidental head injury (NAHI) in infancy is characterised by severe bilateral retinal haemorrhage (sbRH), thin film diffuse bilateral subdural haemorrhage (dbSDH) and severe fatal hypoxic ischaemic encephalopathy (sfHIE). The Geddes hypothesis is that sfHIE is the direct cause of sbRH and dbSDH. The conventional hypothesis (post-Geddes) is that trauma precipitates hypoxia to cause sfHIE and independently tears veins to cause sbRH and dbSDH. These hypotheses are assessed using Bradford Hill criteria of causation. The triad (sbRH, dbSDH and sfHIE) is a strong, specific and consistent association which suggests a causative link. Furthermore sbRH and dbSDH are positively correlated with severity of HIE but negatively correlated with objective evidence of trauma. The conventional hypothesis (post-Geddes) fails the test of biological plausibility because trauma causes asymmetrical and focal bleeding not diffuse bilateral capillary and venular oozing. The conventional hypothesis lacks coherence since it was shown that the encephalopathy of the triad is not traumatic. Arguments by analogy also support the Geddes hypothesis. There is a paucity of experimental evidence for either hypothesis but this is likely to be rectified as molecular methods (genomics and proteomics) are used in the infant autopsy. abstract_id: PUBMED:1891223 Vitreoretinal traction and perimacular retinal folds in the eyes of deliberately traumatized children. The pathophysiology of perimacular folds in eyes of deliberately traumatized children is disputed. The authors reviewed the clinical and forensic records and systemic and ocular findings at autopsy of three children with perimacular retinal folds who died after being violently shaken. Two of the children suffered direct head trauma in addition to being shaken; one patient was violently shaken without any physical or forensic evidence of direct head trauma. No direct ocular trauma was detected. In each case, the vitreous had partially separated from the retina but remained attached to the internal limiting membrane at the apices of the folds and the vitreous base, implicating traction in the pathogenesis of these folds. Although some intraocular findings in deliberately traumatized children may be explained by direct head injury, the possibility of both direct head trauma and shaking must be considered. Perimacular folds may develop without direct ocular or head trauma and may constitute evidence supporting violent shaking. abstract_id: PUBMED:15204569 The 'unified hypothesis' of Geddes et al. is not supported by the data. Inflicted head injury to the developing brain frequently results in serious disability. The pathogenesis of the neuraxial and ocular findings in infants believed to have suffered inflicted head injury remains the subject of considerable debate. Recent neuropathology studies of fatal cases of inflicted head injury and of a foetal/perinatal non-traumatic model have led to the proposal that there is a 'unified hypothesis', the essential feature of which is hypoxic brain swelling secondary to cervicomedullary injury. It has been suggested that less than violent forces may be involved and even that some cases may not be due to trauma at all. The purpose of this paper is to provide a critical review of the data upon which these suppositions are based on a background of what is already known. It is submitted that there are serious flaws in the methodology; the conclusions reached cannot logically be drawn from the data; and the 'unified hypothesis' is not supported by the evidence. On the basis of the data presented, it is also difficult to sustain the secondary hypothesis purporting to describe a minority cohort with 'infantile encephalopathy with subdural and retinal bleeding' of non-traumatic causation. abstract_id: PUBMED:20195617 Subdural hematoma in infants: can it occur spontaneously? Data from a prospective series and critical review of the literature. Background: Subdural hematomas (SDH) in infants often result from nonaccidental head injury (NAHI), which is diagnosed based on the absence of history of trauma and the presence of associated lesions. When these are lacking, the possibility of spontaneous SDH in infant (SSDHI) is raised, but this entity is hotly debated; in particular, the lack of positive diagnostic criteria has hampered its recognition. The role of arachnoidomegaly, idiopathic macrocephaly, and dehydration in the pathogenesis of SSDHI is also much discussed. Purpose: We decided to analyze apparent cases of SSDHI from our prospective databank. Materials And Methods: We selected cases of SDH in infants without systemic disease, history of trauma, and suspicion of NAHI. All cases had fundoscopy and were evaluated for possible NAHI. Head growth curves were reconstructed in order to differentiate idiopathic from symptomatic macrocrania. Results: Sixteen patients, 14 males and two females, were diagnosed with SSDHI. Twelve patients had idiopathic macrocrania, seven of these being previously diagnosed with arachnoidomegaly on imaging. Five had risk factors for dehydration, including two with severe enteritis. Two patients had mild or moderate retinal hemorrhage, considered not indicative of NAHI. Thirteen patients underwent cerebrospinal fluid drainage. The outcome was favorable in almost all cases; one child has sequels, which were attributable to obstetrical difficulties. Conclusion: SSDHI exists but is rare and cannot be diagnosed unless NAHI has been questioned thoroughly. The absence of traumatic features is not sufficient, and positive elements like macrocrania, arachnoidomegaly, or severe dehydration are necessary for the diagnosis of SSDHI. abstract_id: PUBMED:12581336 Dural haemorrhage in non-traumatic infant deaths: does it explain the bleeding in 'shaken baby syndrome'? A histological review of dura mater taken from a post-mortem series of 50 paediatric cases aged up to 5 months revealed fresh bleeding in the dura in 36/50, the bleeding ranging from small perivascular haemorrhages to extensive haemorrhage which had ruptured onto the surface of the dura. Severe hypoxia had been documented clinically in 27 of the 36 cases (75%). In a similar review of three infants presenting with classical 'shaken baby syndrome', intradural haemorrhage was also found, in addition to subdural bleeding, and we believe that our findings may have relevance to the pathogenesis of some infantile subdural haemorrhage. Recent work has shown that, in a proportion of infants with fatal head injury, there is little traumatic brain damage and that the significant finding is craniocervical injury, which causes respiratory abnormalities, severe global hypoxia and brain swelling, with raised intracranial pressure. We propose that, in such infants, a combination of severe hypoxia, brain swelling and raised central venous pressure causes blood to leak from intracranial veins into the subdural space, and that the cause of the subdural bleeding in some cases of infant head injury is therefore not traumatic rupture of bridging veins, but a phenomenon of immaturity. Hypoxia with brain swelling would also account for retinal haemorrhages, and so provide a unified hypothesis for the clinical and neuropathological findings in cases of infant head injury, without impact or considerable force being necessary. abstract_id: PUBMED:15542267 Inflicted head injury in infants. There is scant neuropathological information in the child abuse literature; even the best reviews include assumptions based on the findings of a few inadequate early studies. Our recent series of 53 fatal cases (Brain 124 (2001) 1290, 1299 [1,2]) demonstrated age-related patterns of brain injury and showed the substrate of severe encephalopathy in the infants to be hypoxic brain damage, not diffuse traumatic axonal injury ('DAI'), as had previously been thought. About one-third had craniocervical injuries, particularly in the brain stem, suggestive of stretch injury to the neuraxis. Our interpretation was that this finding implied a mechanism of injury--brain stem damage from stretch injury to the neck with resultant apnoea--that could account for the clinical scenario in many cases, and for which violence would not necessarily be required. Since publishing this study we have turned our attention to the subgroup of infants who die without objective signs of injury, such as skull fracture or impact, whose carers are accused of abuse, usually, "violent shaking", on the pathologic findings alone. Given the striking discrepancy that there often is in such cases between the relatively trivial findings in the brain and the accusations of violence, we have been looking at the pathogenesis of the typical intracranial bleeding. A histologic study of dura from 50 paediatric autopsies, none of whom had suffered a head injury, has led us to propose that the subdural and retinal bleeding in such cases may well have a physiological aetiology, rather than being caused directly by trauma. abstract_id: PUBMED:25869837 Mechanistic hypothesis for eye injury in infant shaking : An experimental and computational study. The terms abusive head injury and shaken baby syndrome are used to describe a unique pattern of nonaccidental traumatic injuries occurring in children that many clinicians and researchers have good reason to believe is caused by violent shaking. Typical injuries include severe brain injury, with intracranial and retinal hemorrhages, but the pathogenesis of injuries is poorly understood. A major paradox in head trauma in infants is that the injuries induced by a shaking event are much more severe than those caused by even very violent single-impact head trauma, despite the relatively low accelerations in shaking.We have developed a finite element computer model of the eye, orbit, and orbital bone and used it to simulate the effects of single-impact and oscillatory motion inputs. The model was informed by data from semiquantitative in vitro anatomical traction experiments on in situ rabbit eyes. The new results reported here strongly suggest that suction between the eye and its surrounding fat dominates the dynamical stability of the system composed of the eye, its socket, and the components and material supporting the eye. Computer simulations incorporating this functional anatomical relationship show that deceleration of the head generates pressure gradients inside and outside the eye; these could cause damaging shear stresses in structures such as the retina and blood vessels. Simulations also show that oscillating the bone of the orbit causes the eye to move anteriorly and posteriorly with an increasing amplitude, building up the stresses within the eye over time. This is the first time that any biomechanical mechanism has been identified that might explain the disproportionally severe injuries caused by an oscillatory mechanism such as violent shaking of an abused infant. However, further study is required and this conclusion is therefore preliminary and provisional. abstract_id: PUBMED:35689145 Animal models of pediatric abusive head trauma. Background: Abusive head trauma (AHT), previously known as the shaken baby syndrome, is a severe and potentially fatal form of traumatic brain injury in infant children who have been shaken, and sometimes also sustained an additional head impact. The clinical and autopsy findings in AHT are not pathognomonic and, due to frequent obfuscation by perpetrators, the circumstances surrounding the alleged abuse are often unclear. The concept has evolved that the finding of the combination of subdural hemorrhage, brain injury, and retinal hemorrhages ("the triad") is the result of shaking of an infant ("shaken baby syndrome") and has led to the ongoing controversy whether shaking alone is able to generate sufficient force to produce these lesions. Objective: In an attempt to investigate whether shaking can engender this lesion triad, animal models have been developed in laboratory rodents and domestic animal species. This review assesses the utility of these animal models to reliably reproduce human AHT pathology and evaluate the effects of shaking on the immature brain. Results: Due largely to irreconcilable anatomic species differences between these animal brains and human infants, and a lack of resemblance of the experimental head shaking induced by mechanical devices to real-world human neurotrauma, no animal model has been able to reliably reproduce the full range of neuropathologic AHT changes. Conclusion: Some animal models can simulate specific brain and ophthalmic lesions found in human AHT cases and provide useful information on their pathogenesis. Moreover, one animal model demonstrated that shaking of a freely mobile head, without an additional head impact, could be lethal, and produce significant brain pathology. Answer: The asymmetry of retinal hemorrhages in inflicted head injury may indeed provide clues to the pathogenesis of such injuries. A retrospective case series study found that the distribution of retinal hemorrhages after inflicted head injury correlates with acute and evolving regional cerebral parenchymal injury patterns (PUBMED:14571228). This suggests that the location and severity of brain injury may be related to the pattern of retinal hemorrhages observed. The widely accepted explanation for the pathogenesis of retinal hemorrhages in abusive head injury is vitreoretinal traction, which is supported by clinical and postmortem studies (PUBMED:20024631). This traction is thought to result from the repeated acceleration-deceleration forces characteristic of abusive head trauma. Further supporting the significance of retinal hemorrhage asymmetry, a study on the pathogenesis of retinal and subdural hemorrhage in non-accidental head injury in infancy assessed the conventional hypothesis and the Geddes hypothesis using Bradford Hill criteria. The study found that retinal hemorrhages and subdural hemorrhages are positively correlated with the severity of hypoxic ischemic encephalopathy but negatively correlated with objective evidence of trauma, challenging the conventional hypothesis that trauma independently tears veins to cause such hemorrhages (PUBMED:24139052). Additionally, a study on perimacular retinal folds in deliberately traumatized children implicated vitreoretinal traction in the pathogenesis of these folds, which may develop without direct ocular or head trauma and could be evidence supporting violent shaking (PUBMED:1891223). However, it is important to note that the pathogenesis of retinal hemorrhages in inflicted head injury remains a subject of debate, with some studies suggesting alternative explanations for the findings typically associated with inflicted head injury, such as hypoxic brain swelling secondary to cervicomedullary injury (PUBMED:15204569, PUBMED:15542267). In summary, retinal hemorrhage asymmetry in inflicted head injury may be a clue to pathogenesis, with the distribution of hemorrhages potentially reflecting the pattern of cerebral injury. The concept of vitreoretinal traction due to acceleration-deceleration forces is a key element in the current understanding of the pathogenesis of these hemorrhages.
Instruction: Do New Zealand children with non-cystic fibrosis bronchiectasis show disease progression? Abstracts: abstract_id: PUBMED:20717910 Do New Zealand children with non-cystic fibrosis bronchiectasis show disease progression? Background: There is minimal literature available on the long-term outcome of pediatric non-cystic fibrosis (CF) bronchiectasis. Aim: To document 5-year outcomes of children with chest computerized tomography (CT) scan diagnosed bronchiectasis from a tertiary New Zealand (NZ) respiratory clinic. Methods: Review of a clinical database identified 91 children. Demographics, clinical data, lung function, chest X-ray (CXR), sputum, presumed etiology, admission data, and the NZ deprivation index (NZDep) were collected. Univariate and multivariate regression were used to correlate clinical findings with lung function data and CXR scores using the Brasfield Scoring System. Results: Of the 91 children, 53 (59%) were Pacific Island, 22 (24%) Maori, 14 (15%) European, and 2 (2%) Other. The median follow-up period was 6.7 years (range 5.0-15.3 years) and median age at diagnosis was 7.3 years (range 11 months-16 years). Lung function data (n = 64) showed a mean decline of -1.6% predicted/year. In 30 children lung function declined (mean FEV(1) -4.4% predicted/year, range 1-17%), remained stable in 13 and improved in 21 children (mean FEV(1) of +3% predicted/year, range 1-15%). Reduced lung function was associated with male gender, chronic Haemophilus influenzae infection, longevity of disease, and Maori and Pacific Island ethnicity. There was a significant correlation with FEV(1) and CXR score at beginning (n = 47, r = 0.45, P = 0.001) and end (n = 26, r = 0.59, P = 0.002) of the follow-up period. The only variable consistently related to CXR score was chronic Haemophilus influenzae infection occurring in 27 (30%) (r(2) = 0.52, P = &lt;0.0001). Only four children were chronically infected with Pseudomonas species. Six children died. Conclusion: In our experience despite management in a tertiary multidisciplinary bronchiectasis clinic, progression of lung disease continues in a group of children and young adults. abstract_id: PUBMED:20854242 Chronic suppurative lung disease and bronchiectasis in children and adults in Australia and New Zealand. Consensus recommendations for managing chronic suppurative lung disease (CSLD) and bronchiectasis, based on systematic reviews, were developed for Australian and New Zealand children and adults during a multidisciplinary workshop. The diagnosis of bronchiectasis requires a high-resolution computed tomography scan of the chest. People with symptoms of bronchiectasis, but non-diagnostic scans, have CSLD, which may progress to radiological bronchiectasis. CSLD/bronchiectasis is suspected when chronic wet cough persists beyond 8 weeks. Initial assessment requires specialist expertise. Specialist referral is also required for children who have either two or more episodes of chronic (&gt; 4 weeks) wet cough per year that respond to antibiotics, or chest radiographic abnormalities persisting for at least 6 weeks after appropriate therapy. Intensive treatment seeks to improve symptom control, reduce frequency of acute pulmonary exacerbations, preserve lung function, and maintain a good quality of life. Antibiotic selection for acute infective episodes is based on results of lower airway culture, local antibiotic susceptibility patterns, clinical severity and patient tolerance. Patients whose condition does not respond promptly or adequately to oral antibiotics are hospitalised for more intensive treatments, including intravenous antibiotics. Ongoing treatment requires regular and coordinated primary health care and specialist review, including monitoring for complications and comorbidities. Chest physiotherapy and regular exercise should be encouraged, nutrition optimised, environmental pollutants (including tobacco smoke) avoided, and vaccines administered according to national immunisation schedules. Individualised long-term use of oral or nebulised antibiotics, corticosteroids, bronchodilators and mucoactive agents may provide a benefit, but are not recommended routinely. abstract_id: PUBMED:36222592 Bronchiectasis in children following kidney transplantation in New Zealand. Aim: Bronchiectasis is an acquired chronic respiratory condition with a relatively high incidence in New Zealand children. Bronchiectasis following kidney transplant has been reported internationally. This study aimed to identify the incidence rate of bronchiectasis following paediatric kidney transplantation. Secondary aims were to assess the impact on kidney allograft function and identify risk factors that might prompt earlier diagnosis. Methods: Case control study of children who developed bronchiectasis following kidney transplant in New Zealand. All children who were transplanted during the 16-year period from 2001 to 2016 were included. Each identified case was matched with two controls (children who did not develop bronchiectasis and received a kidney transplant within the closest time period to their matched case). Data were collected on baseline demographics, clinical variables, immunosuppression and allograft function. Results: Of 95 children who had a kidney transplant during the specified time period, eight (8.4%) developed bronchiectasis at a median of 4 years post-transplant. The mean incidence rate of bronchiectasis was 526 cases per 100 000 paediatric kidney transplant population per year. The majority of children were Māori or Pasifika ethnicity and lived in areas of greater socio-economic deprivation. Immunosuppression burden and allograft function were not significantly different between groups. Conclusions: The incidence rate of bronchiectasis following paediatric kidney transplantation is substantially higher than the baseline paediatric incidence rate in New Zealand. A high index of suspicion for bronchiectasis and prompt investigation of children post kidney transplantation with a history of recurrent lower respiratory tract infection or chronic cough are advised. abstract_id: PUBMED:36863703 Thoracic Society of Australia and New Zealand (TSANZ) position statement on chronic suppurative lung disease and bronchiectasis in children, adolescents and adults in Australia and New Zealand. This position statement, updated from the 2015 guidelines for managing Australian and New Zealand children/adolescents and adults with chronic suppurative lung disease (CSLD) and bronchiectasis, resulted from systematic literature searches by a multi-disciplinary team that included consumers. The main statements are: Diagnose CSLD and bronchiectasis early; this requires awareness of bronchiectasis symptoms and its co-existence with other respiratory diseases (e.g., asthma, chronic obstructive pulmonary disease). Confirm bronchiectasis with a chest computed-tomography scan, using age-appropriate protocols and criteria in children. Undertake a baseline panel of investigations. Assess baseline severity, and health impact, and develop individualized management plans that include a multi-disciplinary approach and coordinated care between healthcare providers. Employ intensive treatment to improve symptom control, reduce exacerbation frequency, preserve lung function, optimize quality-of-life and enhance survival. In children, treatment also aims to optimize lung growth and, when possible, reverse bronchiectasis. Individualize airway clearance techniques (ACTs) taught by respiratory physiotherapists, encourage regular exercise, optimize nutrition, avoid air pollutants and administer vaccines following national schedules. Treat exacerbations with 14-day antibiotic courses based upon lower airway culture results, local antibiotic susceptibility patterns, clinical severity and patient tolerance. Patients with severe exacerbations and/or not responding to outpatient therapy are hospitalized for further treatments, including intravenous antibiotics and intensive ACTs. Eradicate Pseudomonas aeruginosa when newly detected in lower airway cultures. Individualize therapy for long-term antibiotics, inhaled corticosteroids, bronchodilators and mucoactive agents. Ensure ongoing care with 6-monthly monitoring for complications and co-morbidities. Undertake optimal care of under-served peoples, and despite its challenges, delivering best-practice treatment remains the overriding aim. abstract_id: PUBMED:30231872 Health care experiences of mothers of children with bronchiectasis in Counties Manukau, Auckland, New Zealand. Background: Bronchiectasis is a worsening public health problem in New Zealand. This study aimed to explore the health care experiences of mothers of children with bronchiectasis in the Counties Manukau District Health Board area of Auckland, New Zealand. Methods: Semi-structured interviews were undertaken with ten mothers of children with bronchiectasis. Data were analysed using thematic analysis. Results: Five themes emerged: 1) Searching for answers, describing mothers' search for a diagnosis; 2) (Dis)empowerment, describing mothers' acquisition of knowledge, leading to empowerment; 3) Health care and relationships, describing the impact of relationships on the mother's health care experiences; 4) A juggling act, describing the challenges of juggling health care with school, work and family; 5) Making it work, describing how mothers overcome barriers to access health care for their child. Conclusions: The health provider-parent relationship was crucial for fostering positive health care experiences. Mothers' acquisition of knowledge facilitated empowerment within those relationships. Additionally, mothers' perceptions of the quality and benefit of health services motivated them to overcome barriers to accessing care. Study findings may help to improve health care experiences for parents of children with bronchiectasis if identified issues are addressed. abstract_id: PUBMED:12603799 Paediatric bronchiectasis in the twenty-first century: experience of a tertiary children's hospital in New Zealand. Objective: Despite its decline in developed countries, bronchiectasis appeared to be a common diagnosis in Auckland, New Zealand children. The aims of this study were: to document the number of children in Auckland with bronchiectasis, their severity, clinical characteristics and possible aetiologies; to assess whether there was a relationship between ethnicity and poverty; and to estimate a crude bronchiectasis prevalence rate for New Zealand. Methods: A retrospective review of the case histories of all children attending a tertiary children's hospital in Auckland with bronchiectasis diagnosed by high-resolution chest computed tomography (CT) scan, during the period 1998-2000 was undertaken. Data collected included patient demographics, number of hospitalizations pre- and post-diagnosis, lung function tests, radiology and investigations. The New Zealand deprivation 1996 index was applied to the data to obtain a measure of socio-economic status. Results: Bronchiectasis was found to be common, with an estimated prevalence of approximately one in 6000 in the Auckland paediatric population. It was disproportionately more common in the Pacific Island and Maori children. In Pacific Island children, bronchiectasis not caused by cystic fibrosis was nearly twice as common in the general population than cystic fibrosis. Socio-economic deprivation and low immunization rates may be significant contributing factors. The bronchiectasis seen was extensive. Ninety-three percent had bilateral disease and 64% had involvement of four or more lobes on chest CT scan. A wide range of comorbidities and underlying aetiologies were evident. Conclusions: Paediatric bronchiectasis in Auckland, New Zealand, is common but underresourced. Only the most severe cases are being recognized, providing a significant challenge for paediatric health professionals. abstract_id: PUBMED:16916011 Respiratory health and lung research in New Zealand. This article summarizes the recent history of respiratory research in New Zealand which was triggered by an epidemic of asthma mortality in the 1980s and which led to the rapid emergence of quality research groups. This unique event led to major initiatives to improve the standards of care for people with asthma in New Zealand. Perspectives on other lung health issues including tobacco control, the COPD epidemic and the emergence of bronchiectasis as a serious respiratory disease are also provided. abstract_id: PUBMED:31949117 Early markers of cystic fibrosis structural lung disease: follow-up of the ACFBAL cohort. Little is known about early predictors of later cystic fibrosis (CF) structural lung disease. This study examined early predictors of progressive structural lung abnormalities in children who completed the Australasian CF Bronchoalveolar Lavage (ACFBAL) clinical trial at age 5-years and participated in an observational follow-up study (CF-FAB).Eight Australian and New Zealand CF centres participated in CF-FAB and provided follow-up chest computed-tomography (CT) scans for children who had completed the ACFBAL study with baseline scans at age 5-years. CT scans were annotated using PRAGMA-CF scoring. Ordinal regression analysis and linear regression were used to investigate associations between PRAGMA-CF (Perth-Rotterdam Annotated Grid Morphometric Analysis for CF) outcomes at follow-up and variables measured during the ACFBAL study.99 out of 157 ACFBAL children (mean±sd age 13±1.5 years) participated in the CF-FAB study. The probability of bronchiectasis at follow-up increased with airway disease severity on the baseline CT scan. In multiple regression (retaining factors at p&lt;0.05) the extent of bronchiectasis at follow-up was associated with baseline atelectasis (OR 7.2, 95% CI 2.4-22; p≤ 0.001), bronchoalveolar lavage (BAL) log2 interleukin (IL)-8 (OR 1.2, 95% CI 1.05-1.5; p=0.010) and body mass index z-score (OR 0.49, 95% CI 0.24-1.00; p=0.05) at age 5 years. Percentage trapped air at follow-up was associated with BAL log2 IL-8 (coefficient 1.3, 95% CI 0.57-2.1; p&lt;0.001) at age 5 years.The extent of airway disease, atelectasis, airway inflammation and poor nutritional status in early childhood are risk factors for progressive structural lung disease in adolescence. abstract_id: PUBMED:28339144 Australian and New Zealand Pulmonary Rehabilitation Guidelines. Background And Objective: The aim of the Pulmonary Rehabilitation Guidelines (Guidelines) is to provide evidence-based recommendations for the practice of pulmonary rehabilitation (PR) specific to Australian and New Zealand healthcare contexts. Methods: The Guideline methodology adhered to the Appraisal of Guidelines for Research and Evaluation (AGREE) II criteria. Nine key questions were constructed in accordance with the PICO (Population, Intervention, Comparator, Outcome) format and reviewed by a COPD consumer group for appropriateness. Systematic reviews were undertaken for each question and recommendations made with the strength of each recommendation based on the GRADE (Gradings of Recommendations, Assessment, Development and Evaluation) criteria. The Guidelines were externally reviewed by a panel of experts. Results: The Guideline panel recommended that patients with mild-to-severe COPD should undergo PR to improve quality of life and exercise capacity and to reduce hospital admissions; that PR could be offered in hospital gyms, community centres or at home and could be provided irrespective of the availability of a structured education programme; that PR should be offered to patients with bronchiectasis, interstitial lung disease and pulmonary hypertension, with the latter in specialized centres. The Guideline panel was unable to make recommendations relating to PR programme length beyond 8 weeks, the optimal model for maintenance after PR, or the use of supplemental oxygen during exercise training. The strength of each recommendation and the quality of the evidence are presented in the summary. Conclusion: The Australian and New Zealand Pulmonary Rehabilitation Guidelines present an evaluation of the evidence for nine PICO questions, with recommendations to provide guidance for clinicians and policymakers. abstract_id: PUBMED:15871981 New Zealand national incidence of bronchiectasis "too high" for a developed country. Aims: To prospectively estimate the incidence of bronchiectasis among New Zealand (NZ) children, to consider aetiology and severity, and to evaluate regional and ethnic variation. Methodology: NZ paediatricians were surveyed monthly for new cases of bronchiectasis during 2001 and 2002 via the NZ paediatric surveillance unit (with coverage of &gt;94% of NZ paediatricians). Notified cases had their computed tomography scans reviewed and scored for severity. Confirmed cases were followed up by postal questionnaire one year after diagnosis. Demographic, aetiological, and severity data were collected. Results: Ninety nine notifications were received. Sixty five cases were confirmed. An overall incidence of 3.7 per 100,000 under 15 year old children per year was estimated. Incidence was highest in Pacific children at 17.8 compared with 4.8 in Maori, 1.5 in NZ European, and 2.4 other per 100,000 per year. Incidence varied significantly by region. The median age at diagnosis was 5.2 years; the majority had symptoms for more than two years. Eighty three per cent had bilateral disease, with a median of three lobes affected, mean FEV1 of 77% predicted, and modified Bhalla HRCT score of 18. Conclusions: The incidence of bronchiectasis is high in NZ children, nearly twice the rate for cystic fibrosis and seven times that of Finland, the only other country reporting a childhood national rate. Incidence varied substantially between ethnicities. Most cases developed disease in early childhood and had delayed diagnosis. Answer: Yes, New Zealand children with non-cystic fibrosis bronchiectasis do show disease progression. A study reviewing a clinical database of children with chest computerized tomography (CT) scan diagnosed bronchiectasis from a tertiary New Zealand respiratory clinic found that despite management in a tertiary multidisciplinary bronchiectasis clinic, progression of lung disease continues in a group of children and young adults. Lung function data showed a mean decline of -1.6% predicted/year, with 30 children experiencing a decline in lung function (mean FEV(1) -4.4% predicted/year), while it remained stable in 13 and improved in 21 children (mean FEV(1) of +3% predicted/year). Reduced lung function was associated with male gender, chronic Haemophilus influenzae infection, longevity of disease, and Maori and Pacific Island ethnicity. There was also a significant correlation between FEV(1) and chest X-ray score at the beginning and end of the follow-up period. Chronic Haemophilus influenzae infection was consistently related to chest X-ray score (PUBMED:20717910).
Instruction: Immunochemical faecal occult blood testing to screen for colorectal cancer: can the screening interval be extended? Abstracts: abstract_id: PUBMED:37469171 Interval cancers in a national colorectal screening programme based on faecal immunochemical testing: Implications for faecal haemoglobin concentration threshold and sex inequality. Objective: To compare interval cancer proportions (ICP) in the faecal immunochemical test (FIT)-based Scottish Bowel Screening Programme (SBoSP) with the former guaiac faecal occult blood test (gFOBT)-based SBoSP and investigate associations between interval cancer (IC) and faecal haemoglobin concentration (f-Hb) threshold, sex, age, deprivation, site, and stage. Methods: The ICP data from first year of the FIT-based SBoSP and the penultimate year of the gFOBT-based SBoSP were compared in a prospective cohort design. Results: With FIT, 801 colorectal cancers (CRCs) were screen detected (SDC), 802 were in non-participants, 548 were ICs, 39 were colonoscopy missed and 72 were diagnosed after incomplete screening; with gFOBT: 540, 904, 556, 45, and 13, respectively. FIT had a significantly higher proportion of SDC compared to IC than gFOBT. For FIT and gFOBT, ICP was significantly higher in women than men. As f-Hb threshold increased, ICP increased and, for any f-Hb threshold for men, a lower threshold was required for comparable ICP in women. In Scotland, the current threshold of ≥80 µg Hb/g faeces would have to be lowered to ≥40 µg Hb/g faeces for women to achieve sex equality for ICP. In the FIT-based SBoSP, there were four times as many stage I SDC than IC. This was reversed in advanced stages, with twice as many stage IV CRC diagnosed as IC versus SDC. Conclusions: Reducing the numbers of IC requires lowering the f-Hb threshold. Using different f-Hb thresholds for women and men could eliminate the sex disparity, but with additional colonoscopy. abstract_id: PUBMED:30644142 Intention to undergo faecal occult blood testing in an ethnic minority. Aim: To explore factors associated with the intention of an ethnic minority of Israeli Arabs to undergo faecal occult blood testing using the Health Belief Model. Methods: This was a quantitative correlational study that utilized a questionnaire constructed based on the Health Belief Model. The questionnaire was administered to a convenience sample of 200 Israeli Arabs aged 50 and over. Data were collected during January to March 2018. Findings: The research findings show that respondents had a high intention to undergo faecal occult blood testing in the future, although only 66.5% of the respondents had ever undergone it in the past. Physicians and nurses were the most significant sources of information regarding the faecal occult blood testing. The Health Belief Model explained 46% of variance in the intention to undergo the faecal occult blood testing among Israeli Arabs, with perceived benefits of testing being the most significant predictor of this intention. Conclusion: The Health Belief Model can serve as a basis for planning intervention programmes to be implemented by health care professionals to raise the intention of ethnic minorities to undergo faecal occult blood testing. abstract_id: PUBMED:28862521 Repeated faecal occult blood testing is associated with decreased advanced colorectal cancer risk: A population-based study. Objective: To evaluate the association between repeated faecal occult blood testing and advanced colorectal cancer risk at population level in Canada. Methods: A retrospective cohort study of all Ontario residents aged 56-74 diagnosed with colorectal cancer from 1 April 2007 to 31 March 2010, identified using health administrative data. The primary outcome was stage IV colorectal cancer, and primary exposure was faecal occult blood testing use within five years prior to colorectal cancer diagnosis. Patients were categorized into four mutually exclusive groups based on their exposure to faecal occult blood testing in the five years prior to colorectal cancer diagnosis: none, pre-diagnostic, repeated, and sporadic. Logistic regression was utilized to adjust for confounders. Results: Of 7753 patients (median age 66, interquartile range 61-70, 62% male) identified, 1694 (22%) presented with stage I, 2056 (27%) with stage II, 2428 (31%) with stage III, and 1575 (20%) with stage IV colorectal cancer. There were 4092 (53%) with no record of prior faecal occult blood testing, 1485 (19%) classified as pre-diagnostic, 1693 (22%) as sporadic, and 483 (6%) as repeated faecal occult blood testing. After adjusting for confounders, patients who had repeated faecal occult blood testing were significantly less likely to present with stage IV colorectal cancer at diagnosis (Odds ratio 0.46, 95% Confidence Interval 0.34-0.62) than those with no prior faecal occult blood testing. Conclusions: Repeated faecal occult blood testing is associated with a decreased risk of advanced colorectal cancer. Our findings support the use of organized screening programmes that employ repeated faecal occult blood testing to improve colorectal cancer outcomes at population level. abstract_id: PUBMED:32393153 Interval colorectal cancers after negative faecal immunochemical test in a 13-year screening programme. Objective: To assess faecal immunochemical test sensitivity for cancer in a very large population-based cohort followed up for six rounds with biennial faecal immunochemical test repetition. Methods: This study is based on interval colorectal cancers diagnosed in a cohort of subjects aged 50-69 undergoing repeated faecal immunochemical test screening (six rounds) from 2002 to 2015. Test sensitivity was calculated using both the Proportional Interval Cancer Rate and the Interval Cancer Proportion method. Results: Among 441,647 faecal immunochemical tests (123,347 individuals), 150 interval colorectal cancers were detected after a negative faecal immunochemical test. Interval colorectal cancer incidence rate was 1.87 per 10,000 person-years (95%CI: 1.60-2.20), and it was higher during the second interval year (rate ratio: 1.78; 95%CI: 1.28-2.47), for proximal locations (rate ratio: 3.00; 95%CI: 1.92-4.68), and among 60-71 year old subjects (rate ratio: 2.37; 95%CI: 1.61-3.50). The Proportional Interval Cancer Rate was 13.1%, with an overall faecal immunochemical test sensitivity of 86.9% (95%CI: 84.7-89.0). Sensitivity was lowest at the first round (81.5%; 95%CI: 75.6-86.2), and increased to 91.9% (95%CI: 83.9-96.5) for subsequent rounds. Applying the Interval Cancer Proportion method, sensitivity was 83.9% (95%CI: 81.3-86.2), and it was highest at the first round (89.0%; 95%CI: 85.7-91.6), ranging between 73% and 83.1% at subsequent rounds. Conclusions: A faecal immunochemical test sensitivity for cancer higher than 80% resulted in a low incidence of interval colorectal cancers, representing an accurate estimate of one of the major limits of screening programmes. Due to intrinsic biases, the Proportional Interval Cancer Rate and the Interval Cancer Proportion methods generated different trends in faecal immunochemical test sensitivity by screening round. abstract_id: PUBMED:33983066 Impact of changing from a guaiac faecal occult blood test to a faecal immunochemical test in a national screening programme: Results from a pilot study within the national bowel cancer screening programme in England. Objective: Testing for occult blood in faeces is widely used in bowel cancer screening around the world. In many programmes, the faecal immunochemical test (FIT) is replacing the traditional guiaic faecal occult blood test (gFOBT). There have been a number of reports on the clinical impact of making this change; yet, no-one has considered the pre-analytical and analytical impact of moving from a gFOBT to a FIT bowel cancer screening programme. Methods: We interrogated data obtained in a FIT pilot carried out in England in 2014 to assess the timeliness of specimen collection device return time and analysis for gFOBT and FIT, the impact of time to analysis on faecal haemoglobin (f-Hb) concentration, and any differences observed between analyses carried out at two different testing laboratories. Results: FIT kits were returned on average 5.6 days sooner than gFOBT. The time to analysis for FIT leads to an overall rise in f-Hb concentration within the manufacturer's stated 14-day stability period. Conclusion: Both these factors are important considerations for laboratories when considering setting up a bowel cancer screening programme, especially if transitioning from gFOBT to FIT. Our data also support previous evidence of males having a higher f-Hb than females and demonstrate that after adjusting for sex, age and screening hub, neither index of multiple deprivation nor screening episode significantly affected f-Hb. abstract_id: PUBMED:31690179 Randomized controlled trial: Flexible sigmoidoscopy as an adjunct to faecal occult blood testing in population screening. Objectives: Flexible sigmoidoscopy screening at around age 60 can reduce colorectal cancer incidence. Insufficient evidence exists on flexible sigmoidoscopy at age 60 in a population being offered biennial faecal occult blood test screening from age 50. This randomized controlled trial assessed if flexible sigmoidoscopy would be an effective adjunct to faecal occult blood test. Methods: In the Scottish Bowel Screening Programme between June 2014 and December 2015, 51,769 individuals were randomized to be offered flexible sigmoidoscopy instead of faecal occult blood test at age 60 or to continue faecal occult blood test. Those not accepting flexible sigmoidoscopy and those with normal flexible sigmoidoscopy were offered faecal occult blood test. All with flexible sigmoidoscopy-detected neoplasia or a positive faecal occult blood test result were offered colonoscopy. Results: Overall flexible sigmoidoscopy uptake was 17.8%, higher in men than women, and decreased with increasing deprivation (25.7% in the least to 9.2% in the most deprived quintile). In those who underwent flexible sigmoidoscopy, detection rate for colorectal cancer was 0.13%, for adenoma 7.27%, and for total neoplasia 7.40%. In those who underwent colonoscopy after a positive flexible sigmoidoscopy, detection rate for colorectal cancer was 0.28%, adenoma 8.66%, and total neoplasia 8.83%. On an intention to screen basis, there was no difference in colorectal cancer detection rate between the study and control groups. Adenoma and total neoplasia detection rate were significantly higher in the study group, with odds ratios of 5.95 (95%CI: 4.69-7.56) and 5.10 (95%CI: 4.09-6.35), respectively. Conclusions: In a single screening round at age 60, there was low uptake and neoplasia detection rate. Flexible sigmoidoscopy detected significantly more neoplasia than faecal occult blood test alone. abstract_id: PUBMED:29534613 Evaluation of sample stability for a quantitative faecal immunochemical test and comparison of two sample collection approaches. Background Faecal immunochemical testing is increasingly being used to triage symptomatic patients for suspected colorectal cancer. However, there are limited data on the effect of preanalytical factors on faecal haemoglobin when measured by faecal immunochemical testing. The aim of this work was to evaluate the stability of faecal haemoglobin in faeces and to compare two methods of faecal haemoglobin sampling for faecal immunochemical testing. Methods Six patients provided faeces for faecal haemoglobin measurement which were transferred into specialized collection devices at baseline and at 1, 2, 3 and 7 days after storage at either room temperature or 4°C. A total of 137 patients returned both faeces transferred into the specialized collection device and faeces in a standard collection pot. A quantitative immunoturbidometric method was used to measure faecal haemoglobin and results were compared categorically. Discrepant results were assessed against diagnosis. Results Faecal haemoglobin concentration declined rapidly within a day of storage at room temperature but results remained ≥10 μg Hb/g faeces in 5/6 patients after two days. A faecal haemoglobin result ≥10 μg Hb/g faeces was obtained in 4/6 patients after storage for seven days at 4°C. Results obtained when patients used specialized collection devices were significantly different from results obtained when faeces was transferred into the specialized collection device in the laboratory. Conclusion There is considerable heterogeneity in the sample stability of faecal haemoglobin; therefore, samples should be transferred rapidly into specialized collection devices to prevent false-negative results. Use of collection devices by patients can lead to false-positive results compared with their use in a laboratory. abstract_id: PUBMED:17227634 Colorectal cancer screening using immunochemical faecal occult blood testing in Japan. Japan has had a programme of colorectal cancer screening with immunochemical faecal occult blood testing since 1992. Participation rates are around 17%. Approximately 7% of subjects test positive and, of these, around 60% comply with the request to undergo further diagnostic testing. Colorectal cancer rates have been increasing rapidly in Japan in recent decades, and there is an urgent need to improve participation rates. abstract_id: PUBMED:26623043 Characteristics of colorectal tumours in asymptomatic patients with negative immunochemical faecal occult blood test results. The immunochemical faecal occult blood test (iFOBT) is a simple, non-invasive colorectal cancer (CRC) screening method for reducing CRC-related mortality. However, the sensitivity of iFOBT is imperfect and certain colonic neoplasms that require removal may be missed. The aim of this study was to investigate the incidence and characteristics of CRC in asymptomatic, iFOBT-negative patients who underwent opportunistic screening. A total of 919 subclinical patients (276 iFOBT-positive and 643 iFOBT-negative) in the health screening program of our hospital underwent total colonoscopy (TCS) within 2 years after iFOBT. The patients were divided into an iFOBT-positive and an iFOBT-negative group and the TCS findings were compared between the two groups. Although the incidence of advanced neoplasia (CRC, high-grade dysplasia, adenoma sized ≥10 mm and tubulovillous adenoma) was significantly higher in the iFOBT-positive group, these lesions were also found in 6.3% of iFOBT-negative patients. The lesions tended to be proximally located and non-protruding. In conclusion, screening with iFOBT remains clinically significant. However, colonoscopy is indispensable for reducing the incidence and mortality of CRC. abstract_id: PUBMED:27646731 The Effect of Educational Intervention on the Patient's Willingness to Carry out the Immunochemical Faecal Occult Blood Test for Colorectal Cancer. Background: There is now compelling evidence that screening for colorectal cancer may result in significantly reduced mortality. Screening tests for colorectal cancer are not systematically performed in Bulgaria. Aim: This article explores the effect of an educational intervention on the willingness of patients to participate in the screening for colorectal cancer with the immunochemical faecal occult blood test in the home setting. Materials And Methods: A before-after design study of the effects of educational intervention comprising distribution of a brochure and one-to-one discussion with a GP. A self-administered, original questionnaire was administered before and after the intervention to 600 randomly selected patients in 40 general practices (15 patients per practice) in Plovdiv district. Results: The intervention led to an increase with &gt;20% of the patient's knowledge of the importance of the test and on how to carry out the test. Statistical analysis indicated that there was an increase in knowledge after the educational intervention about the usefulness of the test (24.8% in males, 18.3% in females) and its performance (22.7% in males, 25.4% in females). Conclusion: The educational intervention has significantly influenced the patient's awareness about the test's usefulness and its self-administration. It improved the awareness by providing an easy access to information, thus fostering the active involvement of the patients. A strength of the intervention was the patient-centered approach in providing additional information through one-to-one discussions, and it ensured a higher quality of the preventive screening in the general practice. Answer: The provided abstracts do not directly address the question of whether the screening interval for immunochemical faecal occult blood testing (iFOBT) can be extended for colorectal cancer screening. However, some insights can be drawn from the data presented: 1. The study on interval cancers in a national colorectal screening programme (PUBMED:37469171) suggests that the faecal immunochemical test (FIT) has a higher proportion of screen-detected colorectal cancers (CRCs) compared to interval cancers (ICs) than the guaiac faecal occult blood test (gFOBT). This could imply that FIT is more sensitive and may potentially allow for longer intervals between screenings, but the study does not provide direct evidence for extending screening intervals. 2. The study on repeated faecal occult blood testing (PUBMED:28862521) found that repeated faecal occult blood testing is associated with a decreased risk of advanced colorectal cancer. This suggests that regular screening is important for early detection and that extending the interval might increase the risk of advanced disease. 3. The study on interval colorectal cancers after negative FIT in a 13-year screening programme (PUBMED:32393153) reported that FIT sensitivity for cancer was higher than 80%, resulting in a low incidence of interval colorectal cancers. This high sensitivity might support the possibility of extending screening intervals, but the study does not provide a recommendation on interval length. 4. The study on the impact of changing from gFOBT to FIT in a national screening programme (PUBMED:33983066) and the study on sample stability for a quantitative FIT (PUBMED:29534613) focus on the pre-analytical and analytical aspects of FIT and do not address screening intervals. 5. The study on colorectal cancer screening using immunochemical faecal occult blood testing in Japan (PUBMED:17227634) and the study on characteristics of colorectal tumours in asymptomatic patients with negative iFOBT results (PUBMED:26623043) provide information on the use of iFOBT in screening programs but do not discuss the optimal screening interval. 6. The study on educational intervention and the patient's willingness to carry out the immunochemical faecal occult blood test (PUBMED:27646731) focuses on patient education and does not provide data on screening intervals.
Instruction: Can an incomplete ossicular discontinuity be predicted by audiometric and clinical findings? Abstracts: abstract_id: PUBMED:23640088 Can an incomplete ossicular discontinuity be predicted by audiometric and clinical findings? Objective: To investigate a pathology of conductive hearing loss caused by an incomplete ossicular discontinuity. It can manifest as a triad of the following: 1) conductive hearing loss most prominent in the high frequencies (hfCHL), defined as [ABG for 4 kHz] &gt; [mean ABG for 0.25-0.5 kHz] + 10 dB or more; 2) fluctuating hearing loss; and 3) short-lasting improvement of hearing after Valsalva maneuver. Study Design: Retrospective clinical trial. Setting: Tertiary referral center. Patients: Fourteen patients with an incomplete ossicular discontinuity who underwent incus interposition were included. Intervention: Incus interposition, mathematical model. Main Outcome Measures: First, the prevalence of the triad was documented. Second, the hypothesis that mechanical ossicular compliance was responsible for the triad of symptoms was evaluated and simulated in a mathematical model. Finally, the postoperative hearing results with a follow-up of 12 months were analyzed and compared with those reported in the literature. Results: The presence of the triad of symptoms is a strong indicator for detecting patients with an incomplete ossicular discontinuity. High frequency conductive hearing loss was present in 93% (13/14 patients). Ten (71%) of the 14 patients presented with fluctuating hearing loss and improvement of hearing after Valsalva maneuver. The hfCHL could be simulated adequately in the mathematical model. Success rate for surgical intervention (ABG &lt; 20 dB; 0.5, 1, 2, and 3 kHz) was 93% and was comparable to the results reported in the literature. Conclusion: Patients with hfCHL, fluctuating hearing loss, and improvement of hearing after Valsalva maneuver are likely to have an incomplete ossicular discontinuity. A favorable postoperative hearing recovery by incus interposition can be expected. abstract_id: PUBMED:26510125 The Audiometric and Mechanical Effects of Partial Ossicular Discontinuity. Objectives: Ossicular discontinuity may be complete, with no contact between the disconnected ends, or partial, where normal contact at an ossicular joint or along a continuous bony segment of an ossicle is replaced by soft tissue or simply by contact of opposing bones. Complete ossicular discontinuity typically results in an audiometric pattern of a large, flat conductive hearing loss. In contrast, in cases where otomicroscopy reveals a normal external ear canal and tympanic membrane, high-frequency conductive hearing loss has been proposed as an indicator of partial ossicular discontinuity. Nevertheless, the diagnostic utility of high-frequency conductive hearing loss has been limited due to gaps in previous research on the subject, and clinicians often assume that an audiogram showing high-frequency conductive hearing loss is flawed. This study aims to improve the diagnostic utility of high-frequency conductive hearing loss in cases of partial ossicular discontinuity by (1) making use of a control population against which to compare the audiometry of partial ossicular discontinuity patients and (2) examining the correlation between high-frequency conductive hearing loss and partial ossicular discontinuity under controlled experimental conditions on fresh cadaveric temporal bones. Furthermore, ear-canal measurements of umbo velocity and wideband acoustic immittance measurements were investigated to determine the usefulness regarding diagnosis of ossicular discontinuity. Design: The authors analyzed audiograms from 66 patients with either form of surgically confirmed ossicular discontinuity and no confounding pathologies. The authors also analyzed umbo velocity (n = 29) and power reflectance (n = 12) measurements from a subset of these patients. Finally, the authors performed experiments on six fresh temporal bone specimens to study the differing mechanical effects of complete and partial discontinuity. The mechanical effects of these lesions were assessed via laser Doppler measurements of stapes velocity. In a subset of the specimen (n = 4), wideband acoustic immittance measurements were also collected. Results: (1) Calculations comparing the air-bone gap (ABG) at high and low frequencies show that when high-frequency ABGs are larger than low-frequency ABGs, the surgeon usually reported soft-tissue bands at the point of discontinuity. However, in cases with larger low-frequency ABGs and flat ABGs across frequencies, some partial discontinuities as well as complete discontinuities were reported. (2) Analysis of umbo velocity and power reflectance (calculated from wideband acoustic immittance) in patients reveal no significant difference across frequencies between the two types of ossicular discontinuities. (3) Temporal bone experiments reveal that partial discontinuity results in a greater loss in stapes velocity at high frequencies when compared with low frequencies, whereas with complete discontinuity, large losses in stapes velocity occur at all frequencies. Conclusion: The clinical and experimental findings suggest that when encountering larger ABGs at high frequencies when compared with low frequencies, partial ossicular discontinuity should be considered in the differential diagnosis. abstract_id: PUBMED:15825577 Ossicular discontinuity and exostoses in Proteus syndrome: a case report. Proteus syndrome (PS) is a rare hamartomatous disorder characterized by mosaic overgrowth of multiple tissues that manifests early in life and is progressive. The presence of unilateral external auditory canal exostoses in a patient who is not a swimmer or surfer is suggestive of PS. However, hearing loss is not a typical feature. Here, we describe exostoses and ossicular discontinuity with conductive hearing loss in a patient with PS. The treatment consisted of canalplasty and ossicular chain reconstruction. A postoperative reduction was demonstrated in the patient's air-bone gap, from 21 dB to 13 dB for the pure tone average (four frequencies) and from 41 dB to 15 dB in the high-frequency range (6,000 to 8,000 Hz). Causes of ossicular discontinuity are discussed. Routine annual audiometric and otolaryngological evaluation should be considered in all patients with temporal bone involvement of PS. abstract_id: PUBMED:28796085 Audiometric Outcomes Following Endoscopic Ossicular Chain Reconstruction. Objective: To evaluate the audiometric outcomes following endoscopic ossicular chain reconstruction (OCR). Study Design: Retrospective case series. Setting: Two tertiary referral centers. Patients: Sixty two ears with ossicular discontinuity. Intervention(s): Endoscopic and microscopic OCR in patients with ossicular discontinuity. Main Outcome Measures: Bone and air pure-tone averages (PTA), air-bone gap (ABG), and word recognition scores (WRS). Results: Sixty two ears were included for analysis. Patients that underwent ossiculoplasty were subdivided based on prosthesis type (total ossicular replacement prosthesis [TORP] and partial ossicular replacement prosthesis [PORP], primary and staged ossiculoplasties, and surgical approach [microscopic and total endoscopic]). Forty two ears required PORP reconstructions, while 20 ears required TORP reconstructions. The microscope was used to reconstruct the ossicular chain in 31 cases, while an exclusive endoscopic approach was used in the remaining 31 patients. Controlling for the prosthesis, there were no significant postoperative differences in bone PTA, air PTA, and ABG between primary and staged ossiculoplasties, or surgical approach. Conclusions: Controlling for the type of prosthesis, there were no significant differences in hearing outcomes with respect to staged ossicular chain reconstruction or whether the endoscope or microscope was used for visualization. Thus, in this series, endoscopic OCR yields similar audiometric outcomes when compared with microscopic OCR. abstract_id: PUBMED:33187789 Wideband absorbance pattern in adults with otosclerosis and ossicular chain discontinuity. Objectives: Evidence from previous literature had shown that the use of a single frequency probe tone is not sensitive enough to detect middle ear pathologies, especially related to the ossicles, which hinders accurate diagnosis. The goal of the present study was to compare the outcome of wideband absorbance (WBA) tympanometry and to determine the difference in WBA pattern in adults with otosclerosis and ossicular chain discontinuity. Materials And Methods: Estimated adult cases of otosclerosis (10 ears) and ossicular chain discontinuity (06 ears) along with healthy individuals (10 ears) in the age range of 24 to 48 years (mean age: 38.6 years) were considered for the study. WBA was measured at peak and ambient pressure along with resonance frequency and compared with the data obtained from the healthy individuals to determine the WBA pattern. Results: Data analysis revealed a distinct WBA pattern showing high absorbance at 750 Hz for ossicular chain discontinuity compared to healthy individuals, whereas the otosclerosis group showed reduced absorbance (p &lt; 0.05) at low frequencies (250 Hz to 1500 Hz). WBA measured at the peak and ambient pressure did not elicit any significant difference across the frequencies. Also, the average WBA tympanogram measured between 375 Hz and 2000 Hz showed a significant difference in ambient pressure only in the otosclerosis group. In comparison to healthy individuals (901 Hz), ossicular chain discontinuity showed a significant reduction in resonance frequency (674 Hz), whereas in cases with otosclerosis had higher resonance frequency (1445 Hz). Conclusions And Significance: The present study showed different WBA patterns between the groups and the absorbance values were significantly different at the low frequencies. This suggests that WBA has the potential to differentiate ossicles related pathologies from normal and also between the ear with otosclerosis and ossicular chain discontinuity. abstract_id: PUBMED:12544024 Relationship of preoperative findings and ossicular discontinuity in chronic otitis media. Background: Ossicular discontinuity may result from chronic otitis media, with or without cholesteatoma, and usually cannot be determined unless an operation is performed. If preoperative information can be used to determine whether or not the ossicular chain is intact, the patient can be better informed and the surgeon can be better prepared before surgery. Objective: To examine the correlation between preoperative findings and the status of the ossicular chain at surgery. Study Design: Retrospective. Setting: A tertiary referral center. Patients: Data from 190 patients (212 ears) who were undergoing their first operations for chronic otitis media were reviewed. Intervention: The operations for the cholesteatoma or noncholesteatoma ears were classified into different groups and analyzed separately. Ossicular discontinuity was identified only if the discontinuity of the ossicular chain was confirmed at the time of operation. Results: The cholesteatoma group consisted of 59 ears, and the noncholesteatoma group contained 153 ears. In the cholesteatoma group, two statistically significant preoperative parameters were observed to be significantly correlated with ossicular discontinuity: persistently draining ears (p = 0.026) and a cholesteatoma that extended to the tympanic sinus (p = 0.006). In the noncholesteatoma group, only one parameter was observed to be predictive of ossicular discontinuity. In this group of patients, perforation of the tympanic membrane with an edge adherent to the promontory occurred significantly more frequently in patients with ossicular discontinuity (p = 0.004) than in those with intact ossicular chains. Surprisingly, an air-bone gap was not significantly correlated with ossicular discontinuity. The cholesteatoma and granulation tissue in the middle ear cavity may serve as transmission bridges of acoustic energy, resulting in narrowed air-bone gaps. Conclusions: In the cholesteatoma group, both cholesteatoma extension into the tympanic sinus and persistently draining ears were predictive of ossicular discontinuity. In the noncholesteatoma group, perforation edges adhering to the promontory also revealed a higher incidence of ossicular discontinuity. Therefore, all these findings can be considered as potential preoperative predictors of ossicular discontinuity in chronic otitis media. abstract_id: PUBMED:8229436 Clinical views of cases with discontinuity of the ossicular chain It is well known that reconstruction for discontinuity of the ossicular chain, without inflammatory disease, has a good prognosis. Eleven cases of congenital ossicular chain malformation without ossicular fixation and 6 cases of ossicular chain injury due to the head trauma or an earpick , were treated surgically. The method of reconstruction and the prognoses, according to short-term and long-term follow up, were studied in these 17 cases. The following results were obtained: 1) In almost all cases, the pure tone audiograms were flat or slightly rising types, and the mean air conductive hearing loss was about 60 dB. Only 2 cases showed the falling type with conductive hearing loss. Connective tissue was observed at the incudo-stapedial joint intraoperatively in these two cases. 2) Mean hearing improvement was in 31.8 dB in the low frequency region (125, 250, 500 Hz), 22.7 dB in the middle frequency region (500, 1000, 2000 Hz), and 12.9 dB in the high frequency region (2000, 4000, 8000 Hz), within 3 weeks after operation, in all 17 cases. 3) No clear changes in hearing level were seen postoperatively, for the four periods evaluated; 1-3 weeks, 1-3 months, 4-6 months and over 7 months in all cases. 4) The cases in whom the reconstruction was performed between the incus and stapes, especially the foot plate of the stapes, showed poor hearing recovery. 5) There was no relation between hearing recovery and the prostheses used in reconstructions. However, it was thought that pieces of the patient's own cartilage or bone should be used whenever possible. abstract_id: PUBMED:37377024 Predictive Factors and Audiometric Outcome Comparison Between Titanium Prosthesis and Autologous Incus in Traumatic Ossicular Injury. Objectives: To investigate the etiology and ossicular pathology of traumatic ossicular injury in Taiwan and examine the hearing outcomes and predictive factors between the titanium prosthesis and autologous incus groups. Methods: We retrospectively analyzed patients with traumatic ossicular injury from 2011 to 2020 in Taiwan. Patients were divided into the titanium or autologous group according to the surgical materials used. The audiometric outcomes and predictive factors of ossiculoplasty were analyzed between groups. Results: Twenty patients with ossicular chain discontinuity were enrolled (8 in the titanium group and 12 in the autologous group). The postoperative hearing threshold (26.6 ± 8.9 dB) and air-bone gap (10.3 ± 5.6 dB) improved significantly compared with the preoperative hearing threshold (50.7 ± 13.3 dB) and air-bone gap (29.9 ± 11.0 dB). The improvements in the hearing threshold and air-bone gap were not significantly different between the titanium and autologous groups. Our patients presented an improvement in hearing restoration with 65% closure of the air-bone gap in 0 to 10 dB range and 30% in 11 to 20 dB range, without sensorineural hearing loss during surgery. Univariate regression analysis revealed that vertigo, benign paroxysmal positional vertigo, and temporal bone fracture may serve as negative factors influencing the air-bone gap gain. Conclusions: Ossiculoplasty with both titanium prosthesis and autologous materials demonstrated favorable hearing recovery in traumatic ossicular injury. Vertigo, benign paroxysmal positional vertigo, and temporal bone fracture may serve as negative predictive factors of the hearing benefit after surgery. abstract_id: PUBMED:37245325 Is pre-operative audiometry a reliable predictor of ossicular chain condition? Objective: Investigation of ossicular chain (OC) status before surgery is important for preoperative patient consultation. This research aimed to investigate the relationship between pre-operative audiometric values and intra-operative OC condition in a relatively large population of chronic otitis media (COM) surgeries. Methods: In this descriptive-analytic cross-sectional study, we evaluated 694 patients who underwent COM surgeries. We analyzed pre-operative audiometric data and intraoperative findings including ossicular anatomy, ossicular mobility, and the condition of middle ear mucosa. Results: The optimal cut-off values of pre-operative speech reception threshold (SRT), mean air-conduction (AC), and mean air-bone gap (ABG) for predicting OC discontinuity were 37.5 dB, 37.2 dB, and 28.4 dB, respectively. For the prediction of OC fixation, the optimal cut-off points of SRT, mean AC, and mean ABG were 37.5 dB, 40.3 dB, and 32.8 dB, respectively. The computing of Cohen's d (95 % confidence interval) demonstrated the greater mean ABG in ears with OC discontinuity in comparison with ears with normal ossicles in all types of pathologies. There was a descending trend of Cohen's d from cholesteatoma to tympanosclerosis and then to granulation tissue and hypertrophic mucosa. There was a substantial relation between the type of pathology and OC status (P &lt; 0.001). Ears with tympanosclerosis plaque had the most fixed OC among all types of pathologies (40 ears, 30.8 %), and ears with no pathology had the most normal OC (135 ears, 83.3 %). Conclusions: The results supported the view that pre-operative hearing is a key determining factor for the prediction of OC status. abstract_id: PUBMED:29267386 High-frequency conductive hearing loss as a diagnostic test for incomplete ossicular discontinuity in non-cholesteatomatous chronic suppurative otitis media. Chronic suppurative otitis media, with or without cholesteatoma, may lead to erosion of the ossicles and discontinuity of the ossicular chain. In incomplete ossicular discontinuity (IOD), partial erosion of the ossicles occurs, but some sound transmission is noted throughout the ossicular chain. High-frequency conductive hearing loss (HfCHL) has been considered a hallmark of incomplete ossicular discontinuity. This study aims to evaluate the use of HfCHL as a preoperative predictor of IOD in patients with non-cholesteatomatous chronic suppurative otitis media. The HfCHL test was defined as the preoperative air-bone gap (ABG) at 4 kHz minus the average of the ABG at 0.25 and 0.5 kHz. The test was applied in 328 patients before surgery and compared to intraoperative findings as the gold standard. At surgery, 201 (61.3%) patients had an intact ossicular chain, 44 (13.4%) had a complete ossicular discontinuity, and 83 (25.3%) exhibited an IOD. The best cutoff level was calculated as 10 dB. The HfCHL test to diagnose IOD had a sensitivity of 83% and a specificity of 92% with a post-test probability of 78% and a likelihood ratio of 10.2. We concluded that the HfCHL test is highly effective in predicting IOD in patients with non-cholesteatomatous chronic suppurative otitis media and that it should be used routinely as a screening test prior to surgery. Answer: Yes, an incomplete ossicular discontinuity can be predicted by audiometric and clinical findings. Studies have shown that a triad of symptoms, including high-frequency conductive hearing loss (hfCHL), fluctuating hearing loss, and short-lasting improvement of hearing after the Valsalva maneuver, is a strong indicator for detecting patients with an incomplete ossicular discontinuity (PUBMED:23640088). Additionally, when high-frequency air-bone gaps (ABGs) are larger than low-frequency ABGs, partial ossicular discontinuity should be considered in the differential diagnosis (PUBMED:26510125). Moreover, certain preoperative findings have been correlated with the status of the ossicular chain at surgery. For example, in chronic otitis media, parameters such as persistently draining ears and cholesteatoma extending to the tympanic sinus were predictive of ossicular discontinuity (PUBMED:12544024). In non-cholesteatoma cases, perforation of the tympanic membrane with an edge adherent to the promontory was also indicative of ossicular discontinuity. Furthermore, the high-frequency conductive hearing loss test, defined as the preoperative ABG at 4 kHz minus the average of the ABG at 0.25 and 0.5 kHz, has been found to be highly effective in predicting incomplete ossicular discontinuity in patients with non-cholesteatomatous chronic suppurative otitis media, with a sensitivity of 83% and a specificity of 92% (PUBMED:29267386). In addition to audiometric tests, wideband absorbance tympanometry has shown potential in differentiating ossicular chain pathologies, with distinct patterns observed in adults with otosclerosis and ossicular chain discontinuity (PUBMED:33187789). Pre-operative audiometry has also been considered a key determining factor for predicting ossicular chain status (PUBMED:37245325). Overall, these findings suggest that a combination of audiometric data and clinical symptoms can be used to predict incomplete ossicular discontinuity with a reasonable degree of accuracy.
Instruction: Clinical utility of apparent diffusion coefficient (ADC) values in patients with prostate cancer: can ADC values contribute to assess the aggressiveness of prostate cancer? Abstracts: abstract_id: PUBMED:21182135 Clinical utility of apparent diffusion coefficient (ADC) values in patients with prostate cancer: can ADC values contribute to assess the aggressiveness of prostate cancer? Purpose: To retrospectively evaluate the relationship between apparent diffusion coefficient (ADC) values and Gleason score (GS) in prostate cancer. Methods: A total of 60 patients who underwent radical prostatectomy for clinically localized prostate cancer were selected for this study. Diffusion-weighted magnetic resonance (MR) images were obtained using a 1.5 T system. ADC values were analyzed between three groups: GS of 6 or less (n = 7); GS of 7 (n = 37); and GS of 8 or higher (n = 16). ADC values of the three GS groups were statistically analyzed in order to determine the relationship with GS. In the 37 patients with GS = 7 the difference in ADC values between GS 3+4 and GS 4+3 was analyzed. Results: Median ADC values (10⁻³ mm² /s) of the three GS groups were 1.04 (GS = 6 or less), 0.867 (GS = 7), and 0.729 (GS = 8 or higher). Although there was considerable overlap among the groups, the differences in ADC were statistically significant (P &lt; 0.0001). There was a significant inverse correlation between GS and ADC values (z = -0.437, P &lt; 0.0005). Median ADC values (10⁻³ mm² /s) of GS 3+4 and GS 4+3 patients were 0.88 and 0.814, respectively (P &lt; 0.05). Conclusion: ADC values showed a negative correlation with GS. Pathologically, however, there was considerable intrasubject heterogeneity. abstract_id: PUBMED:27055916 Apparent diffusion coefficient values are superior to transrectal ultrasound-guided prostate biopsy for the assessment of prostate cancer aggressiveness. Background Few studies have focused on comparing the utility of diffusion-weighted imaging (DWI) and transrectal ultrasound (TRUS)-guided biopsy in predicting prostate cancer aggressiveness. Whether apparent diffusion coefficient (ADC) values can provide more information than TRUS-guided biopsy should be confirmed. Purpose To retrospectively assess the utility of ADC values in predicting prostate cancer aggressiveness, compared to the TRUS-guided prostate biopsy Gleason score (GS). Material and Methods The DW images of 54 patients with biopsy-proven prostate cancer were obtained using 1.5-T magnetic resonance (MR). The mean ADC values of cancerous areas and biopsy GS were correlated with prostatectomy GS and D'Amico clinical risk scores, respectively. Meanwhile, the utility of ADC values in identifying high-grade prostate cancer (with Gleason 4 and/or 5 components in prostatectomy) in patients with a biopsy GS ≤ 3 + 3 = 6 was also evaluated. Results A significant negative correlation was found between mean ADC values of cancerous areas and the prostatectomy GS ( P &lt; 0.001) and D'Amico clinical risk scores ( P &lt; 0.001). No significant correlation was found between biopsy GS and prostatectomy GS ( P = 0.140) and D'Amico clinical risk scores ( P = 0.342). Patients harboring Gleason 4 and/or 5 components in prostatectomy had significantly lower ADC values than those harboring no Gleason 4 and/or 5 components ( P = 0.004). Conclusion The ADC values of cancerous areas in the prostate are a better indicator than the biopsy GS in predicting prostate cancer aggressiveness. Moreover, the use of ADC values can help identify the presence of high-grade tumor in patients with a Gleason score ≤ 3 + 3 = 6 during biopsy. abstract_id: PUBMED:37763146 Quantitative ADC: An Additional Tool in the Evaluation of Prostate Cancer? Prostate cancer is one of the most common tumors among the male population. Magnetic resonance imaging (MRI), standardized by the PI-RADS version 2.1 scoring system, has a fundamental role in detecting prostate cancer and evaluating its aggressiveness. Diffusion-weighted imaging sequences and apparent diffusion coefficient values, in particular, are considered fundamental for the detection and characterization of lesions. In 2016 the International Society of Urological Pathology introduced a new anatomopathological 5-grade scoring system for prostate cancer. The aim of this study is to evaluate the correlation between quantitative apparent diffusion coefficient values (ADC) derived from diffusion-weighted imaging (DWI) sequences and the International Society of Urological Pathology (ISUP) and PI-RADS groups. Our retrospective study included 143 patients with 154 suspicious lesions, observed on prostate magnetic resonance imaging and compared with the histological results of the biopsy. We observed that ADC values can aid in discriminating between not clinically significant (ISUP 1) and clinically significant (ISUP 2-5) prostate cancers. In fact, ADC values were lower in ISUP 5 lesions than in negative lesions. We also found a correlation between ADC values and PI-RADS groups; we noted lower ADC values in the PI-RADS 5 and PI-RADS 4 groups than in the PI-RADS 3 group. In conclusion, quantitative apparent diffusion coefficient values can be useful to assess the aggressiveness of prostate cancer. abstract_id: PUBMED:31559188 Correlation between ADC values and Gleason score in evaluation of prostate cancer: multicentre experience and review of the literature. Prostate cancer (PCa) is one of the most common cancers in male population. Multiparametric prostate magnetic resonance imaging (mp-MRI) has assumed a primary role in the diagnosis of PCa, combining morphological and functional data. Among different sequences, functional diffusion weighted imaging (DWI) is a powerful clinical tool which provides information about tissue on a cellular level. However, there is a considerable overlap between either BPH (Benign Prostate Hypertrophy) and prostatic cancer condition, as a different DWI signal intensity could be shown in the normal architecture gland. Apparent diffusion coefficient (ADC) has shown an increasing accuracy in addition to the DWI analysis in detection and localization of PCa. Notably, ADC maps derived DWI sequences has shown an overall high correlation with Gleason score (GS), considering the importance of an accurate grading of focal lesion, as main predictor factor. Furthermore, beyond the comparative analysis with DWI, ADC values has proven to be an useful marker of tumor aggressiveness, providing quantitative information on tumor characteristics according with GS and Gleason pattern, even more strenuous data are needed in order to verify which ADC analysis is more accurate. abstract_id: PUBMED:34136043 The utility of ADC parameters in the diagnosis of clinically significant prostate cancer by 3.0-Tesla diffusion-weighted magnetic resonance imaging. Purpose: This study has focused on investigating the relationship between the exponential apparent diffusion coefficient (exp-ADC), selective apparent diffusion coefficient (sel-ADC) values, the ADC ratio (ADCr), and prostate cancer aggressiveness with transrectal ultrasound-guided prostate biopsy in patients with prostate cancer. Material And Methods: All patients underwent a multiparametric magnetic resonance imaging (mpMRI) including tri-planar T2-weighted (T2W), dynamic contrast-enhanced (DCE), diffusion-weighted sequences using a 3.0-Tesla MR scanner (Skyra, Siemens Medical Systems, Germany) with a dedicated 18-channel body coil and a spine coil underneath the pelvis, with the patient in the supine position. Exp-ADC, sel-ADC, and ADCr of defined lesions were evaluated using region-of-interest-based measurements. Exp-ADC, sel-ADC, and ADCr were correlated with the Gleason score obtained through transrectal ultrasound-guided biopsy. Results: Patients were divided into 2 groups. Group I is Gleason score ≥ 3 + 4, group II is Gleason score = 6. Sel-ADC and exp-ADC were statistically significant between 2 groups (0.014 and 0.012, respectively). However, the ADCr difference between nonclinical significant prostate cancer from clinically significant prostate cancer was not significant (p = 0.09). Conclusions: This study is the first to evaluate exp-ADC and sel-ADC values of prostate carcinoma with ADCr. One limitation of this study might be the limited number of patients. Exp-ADC and sel-ADC values in prostate MRI imaging improved the specificity, accuracy, and area under the curve (AUC) for detecting clinically relevant prostate carcinoma. Adding exp-ADC and sel-ADC values to ADCr can be used to increase the diagnostic accuracy of DWI. abstract_id: PUBMED:25484999 The Utility of Diffusion Weighted Imaging (DWI) Using Apparent Diffusion Coefficient (ADC) Values in Discriminating Between Prostate Cancer and Normal Tissue. Background: The aim of this study was to investigate the utility of diffusion weighted imaging (DWI) using Apparent Diffusion Coefficient (ADC) values in discriminating between patients with tumors and normal prostate tissue before the initial systematic core biopsy. The relationship between histological grade of prostate cancer and ADC values in the peripheral zone was also investigated. Material/methods: Our study included 62 patients who underwent magnetic resonance imaging (MRI) of the pelvis. The examinations were performed in T1-, T2-weighted, DWI and T1 after dynamic contrast administration sequences. In all patients there were abnormal foci within the peripheral zone determined in DWI/ADC. ADC values were compared with the Gleason score (GS) after core needle biopsy (CNB) in patients with low, medium and high stage tumors. Results: Within the examined group of patients, ADC was statistically higher for normal tissue than for cancerous tissue (p=0.00). Mean ADC values for patients with low, intermediate and high GS were 0.85±0.03, 0.72±0.03, and 0.61±0.04, respectively. Conclusions: DWI/ADC is useful in differentiating high-risk patients from those at low and intermediate risk, since there is a significant correlation between ADC values determined in patients included in three different groups according to their Gleason score. This information may be helpful in the assessment of prostate cancer aggressiveness. abstract_id: PUBMED:25892290 A literature review of the association between diffusion-weighted MRI derived apparent diffusion coefficient and tumour aggressiveness in pelvic cancer. Diffusion-weighted magnetic resonance imaging (DW-MRI) is used extensively to improve tumour detection and localization because it offers excellent soft tissue contrast between malignant and non-malignant tissues. It also provides a quantitative biomarker; the apparent diffusion coefficient (ADC) can be derived from DW-MRI sequences using multiple diffusion weightings. ADC reflects the tumour microenvironment, e.g. cell membrane integrity and cellularity and has potential for reporting on tumour aggressiveness. This review focuses on the use of the DW-MRI derived imaging biomarker ADC to reflect tumour aggressiveness and its potential impact in managing pelvic cancer patients. The clinical studies which evaluate the role of ADC in pelvic tumours (prostate, bladder, rectal, ovary, cervix and uterus) are summarized and the evidence linking ADC values with tumour aggressiveness is evaluated. abstract_id: PUBMED:27687831 Correlation between apparent diffusion coefficient value on diffusion-weighted MR imaging and Gleason score in prostate cancer. Objectives: To investigate whether diffusion-weighted imaging (DWI) apparent diffusion coefficient (ADC) correlates with prostate cancer aggressiveness and further to compare the diagnostic performance of ADC and normalized ADC (nADC: normalized to non-tumor tissue). Patients And Methods: Thirty pre-treatment patients (mean age, 69years; range: 59-78years) with prostate cancer underwent magnetic resonance imaging (MRI) examination, including DWI with three b values: 50, 400, and 800s/mm2. Both ADC and nADC were correlated with the Gleason score obtained through transrectal ultrasound-guided biopsy. Results: The tumor minimum ADC (ADCmin: the lowest ADC value within tumor) had an inverse correlation with the Gleason score (r=-0.43, P&lt;0.05), and it was lower in patients with Gleason score 3+4 than in those with Gleason score 3+3 (0.54±0.11×103mm2/s vs. 0.64±0.12×10-3mm2/s, P&lt;0.05). Both the nADCmin and nADCmean correlated with the Gleason score (r=-0.52 and r=-0.55, P&lt;0.01; respectively), and they were lower in patients with Gleason score 3+4 than those with Gleason score 3+3 (P&lt;0.01; respectively). Receiver operating characteristic (ROC) analysis showed that the area under the ROC curve was 0.765, 0.818, or 0.833 for the ADCmin, nADCmin, or nADCmean; respectively, in differentiating between Gleason score 3+4 and 3+3 tumors. Conclusion: Tumor ADCmin, nADCmin, and nADCmean are useful markers to predict the aggressiveness of prostate cancer. abstract_id: PUBMED:28161826 Assessment of prostate cancer aggressiveness using apparent diffusion coefficient values: impact of patient race and age. Purpose: To assess the impact of patient race and age on the performance of apparent diffusion coefficient (ADC) values for assessment of prostate cancer aggressiveness. Materials And Methods: 457 prostate cancer patients who underwent 3T phased-array coil prostate MRI including diffusion-weighted imaging (DWI; maximal b-value 1000 s/mm2) before prostatectomy were included. Mean ADC of a single dominant lesion was measured in each patient, using histopathologic findings from the prostatectomy specimen as reference. In subsets defined by race and age, ADC values were compared between Gleason score (GS) ≤ 3 + 4 and GS ≥ 4 + 3 tumors. Results: 81% of patients were Caucasian, 12% African-American, 7% Asian-American. 13% were &lt;55 years, 42% 55-64 years, 41% 65-74 years, and 4% ≥75 years. 63% were GS ≤ 3 + 4, 37% GS ≥ 4 + 3. ADC was significantly lower in GS ≥ 4 + 3 tumors than in GS ≤ 3 + 4 tumors in the entire cohort, as well as in Caucasian, African-American, and all four age groups (P ≤ 0.015). AUC for differentiation of GS ≤ 3 + 4 and GS ≥ 4 + 3 as well as optimal ADC threshold was Caucasian: 0.73/≤848; African-American: 0.76/≤780; Asian-American: 0.66/≤839: &lt;55 years, 0.73/≤830; 55-64 years, 0.71/≤800; 65-74 years, 0.74/≤872; ≥75 years, 0.79/≤880. A race-optimized ADC threshold resulted in higher specificity in African-American than Caucasian men (84.9% vs. 67.1%, P = 0.045); age-optimized ADC threshold resulted in higher sensitivity in patients aged ≥75 years than &lt;55 years or 55-64 years (100.0% vs. 53.6%-73.3%; P &lt; 0.001). Conclusion: Patients' race and age may impact the diagnostic performance and optimal threshold when applying ADC values for evaluation of prostate cancer aggressiveness. abstract_id: PUBMED:35368407 Do the variations in ROI placement technique have influence for prostate ADC measurements? Background: Prostate apparent diffusion coefficient (ADC) values calculated from diffusion-weighted imaging have been used for evaluating prostate cancer (PCa) aggressiveness. However, the way of measuring ADC values has varied depending on the study. Purpose: To investigate inter- and intra-reader variability and diagnostic performance of three kinds of shaped 2D regions of interests (ROIs) for tumor ADC measurements in PCa. Material And Methods: Seventy-four patients with PCa undergoing 3-T MRI before surgery were included. Histologic findings from radical prostatectomy specimens were reviewed to define each patient's dominant tumor. Three readers independently measured the tumor ADCs using three different ROI methods: freehand, large-circle, and small-circles ROIs. Readers repeated measurements after 3 weeks. Bland-Altman analysis was performed to evaluate the inter- and intra-reader variability. Receiver Operating Characteristic analysis was used for assessment of tumor aggressiveness for PCa. Results: For intra-reader and inter-reader variability, the mean coefficient of repeatability for freehand ROIs, large-circle ROIs, and small-circles ROIs were as follows: 13.7%, 12.4%, and 11.5%; 9.4%, 9.7%, and 9.5%. For differentiating Gleason score (GS) = 3 + 3 from GS ≥ 3 + 4 tumors, the area under the curves were 0.90 for freehand ROIs, 0.89 for large-circle ROIs, and 0.94 small-circles ROIs (p = 0.31). Conclusion: The variations in ROI method did not have a major influence on intra-reader or inter-reader reproducibility or diagnostic performance for prostate ADC measurements. Answer: Yes, apparent diffusion coefficient (ADC) values can contribute to assessing the aggressiveness of prostate cancer. Multiple studies have demonstrated a significant negative correlation between ADC values and prostate cancer aggressiveness, as determined by the Gleason score (GS) and other clinical risk scores. For instance, one study found that median ADC values were significantly lower in patients with higher Gleason scores, indicating a negative correlation between ADC values and GS. The study concluded that ADC values could be used to assess the aggressiveness of prostate cancer, despite some intrasubject heterogeneity (PUBMED:21182135). Another study supported these findings by showing that ADC values of cancerous areas in the prostate were better indicators of prostate cancer aggressiveness than biopsy GS. This study also suggested that ADC values could help identify the presence of high-grade tumors in patients with a biopsy GS ≤ 3 + 3 = 6 (PUBMED:27055916). Further research has indicated that quantitative ADC values can aid in discriminating between clinically significant and non-clinically significant prostate cancers, with lower ADC values observed in more aggressive lesions (PUBMED:37763146). A multicenter experience also confirmed a high correlation between ADC values and GS, emphasizing the importance of ADC as a marker of tumor aggressiveness (PUBMED:31559188). Additional studies have explored the utility of different ADC parameters, such as exponential ADC (exp-ADC), selective ADC (sel-ADC), and ADC ratio (ADCr), in diagnosing clinically significant prostate cancer. These studies found that certain ADC parameters were statistically significant in distinguishing between different Gleason score groups, thus improving the specificity and accuracy of prostate MRI imaging (PUBMED:34136043). Moreover, ADC values have been shown to be statistically higher for normal tissue compared to cancerous tissue, with a significant correlation between ADC values and different Gleason score stages, which may be helpful in assessing prostate cancer aggressiveness (PUBMED:25484999). In summary, ADC values derived from diffusion-weighted MRI have been consistently shown to be useful in evaluating the aggressiveness of prostate cancer, providing a non-invasive biomarker that correlates with traditional histopathological grading systems like the Gleason score.
Instruction: Is there truly an oncologic indication for interval appendectomy? Abstracts: abstract_id: PUBMED:25543294 Is there truly an oncologic indication for interval appendectomy? Background: The rate of recurrent appendicitis is low following nonoperative management of complicated appendicitis. However, recent data suggest an increased rate of neoplasms in these cases. Methods: The study was a retrospective review of patients with acute appendicitis at 2 university-affiliated community hospitals over a 12-year period. The primary outcome measure was the incidence of appendiceal neoplasm following interval appendectomy. Results: Six thousand thirty-eight patients presented with acute appendicitis. Appendectomy was performed in 5,851 (97%) patients at the index admission. Of the 188 patients treated with initial nonoperative management, 89 (47%) underwent interval appendectomy. Appendiceal neoplasms were identified in 11 of the 89 (12%) patients. These included mucinous neoplasms (n = 6), carcinoid tumors (n = 4), and adenocarcinoma (n = 1). The rate of neoplasm in patients over age 40 was 16%. Conclusions: There is a significant rate of neoplasms identified in patient over age 40 undergoing interval appendectomy. This should be considered following nonoperative management of complicated appendicitis. abstract_id: PUBMED:22988397 Interval appendectomy in adults: A necessary evil? The management of appendiceal mass remains a matter of major controversy in the current literature. Currently, initial nonoperative management followed by interval appendectomy is favored over immediate appendicectomy. However, the necessity of doing an interval appendectomy has been questioned - is it a necessary evil? The present review revisits the above controversy, evaluates the current literature, assesses the need for interval appendectomy in adults, and provides recommendations. abstract_id: PUBMED:34572243 The Histopathology of the Appendix in Children at Interval Appendectomy. Whilst most surgeons agree that conservative treatment of appendiceal abscess in children is an adequate treatment, the need for subsequent interval appendectomy is still controversial. We analyzed the histopathology in interval appendectomy in search of signs of inflammation. All patients admitted between 2010 and 2017 with appendiceal abscess and scheduled for interval appendectomy were reviewed. The specimens were evaluated for grade of inflammation, type and distribution of cellular infiltrate, presence of necrosis or hemorrhage and infiltrate in the serosa. Forty-two patients had appendiceal abscess and were treated conservatively. Seven underwent emergent appendectomy. Thirty-three out of 35 patients underwent elective interval appendectomy. Thirty-two specimens were revised. Carcinoid tumor or other malignant lesions were not found. All of them presented some amount of inflammation, grade 1 to 2 in 53%, grade 3 to 4 in 47%. Twenty-five percent of the specimens had signs of necrosis accompanied by hemorrhage and in more than the half (53%) the infiltrate extended to the serosa. Conclusions: Although the appendix was mostly found not macroscopically inflamed intraoperatively, histology confirmed a certain grade of inflammation even months after the conservative treatment. No correlation was found between histopathologic findings and lapse of time between abscess treatment and interval appendectomy. abstract_id: PUBMED:35637469 Periappendiceal fluid collection on preoperative computed tomography can be an indication for interval appendectomy: a retrospective study. Background: The treatment strategies for acute appendicitis, such as emergency appendectomy (EA), interval appendectomy (IA), and repeating nonoperative management (NOM), are controversial. In this study, we examined the preoperative factors that can be used to distinguish which patients should undergo IA. Methods: We retrospectively identified 902 patients who underwent surgery for appendicitis in our hospital from January 2010 to December 2021. Of these patients, 776 were included in this study. The patients were divided into two groups: those with a periappendiceal fluid collection (PAFC) on preoperative computed tomography (PAFC-positive group, n = 170) and those without a PAFC (PAFC-negative group, n = 606). In each group, we compared patients who underwent EA and IA. Results: In the PAFC-positive group, patients who underwent EA had a significantly higher postoperative complication rate than those who underwent IA (40.5% vs. 24.0%, p = 0.037). In the multivariate analysis, only the presence of PAFC was significantly associated with an increased risk of postoperative complications (odds ratio, 7.11; 95% confidence interval, 2.73-18.60; p &lt; 0.001). The presence of PAFC alone was not significantly associated with an increased risk of IA or NOM failure (odds ratio, 1.48; 95% confidence interval, 0.19-11.7; p = 0.71). The rate of neoplasia on pathologic examination was significantly higher in the PAFC-positive than PAFC-negative group (7.6% vs. 1.5%, p &lt; 0.001); the rate of carcinoma was also higher in the PAFC-positive group (2.4% vs. 0.17%, p = 0.02). Conclusions: The presence of PAFC on preoperative computed tomography was found to be a risk factor for postoperative complications but not IA or NOM failure. It was also correlated with neoplasia as the etiology of appendicitis. Therefore, PAFC positivity is useful as an indication for IA. abstract_id: PUBMED:32054244 Increased Risk of Neoplasms in Adult Patients Undergoing Interval Appendectomy. Purpose: The low rate of recurrent appendicitis after initial nonsurgical management of complicated appendicitis supports the recently implemented strategy of omitting routine interval appendectomy. However, several reports have suggested an increased incidence rate of neoplasms in these patients. We aimed to identify the risk of neoplasms in the population undergoing interval appendectomy. Methods: This study retrospectively analyzed consecutive cases of appendicitis that were treated surgically between January 2014 and December 2018 at a single tertiary referral center. Patients were divided into 2 groups depending on whether they underwent immediate or interval appendectomy. Demographics and perioperative clinical and pathologic parameters were analyzed. Results: All 2,013 adults included in the study underwent surgical treatment because of an initial diagnosis of acute appendicitis. Of these, 5.5% (111 of 2,013) underwent interval appendectomy. Appendiceal neoplasm was identified on pathologic analysis in 36 cases (1.8%). The incidence of neoplasm in the interval group was 12.6% (14 of 111), which was significantly higher than that of the immediate group (1.2% [22 of 1,902], P &lt; 0.001). Conclusion: The incidence rate of neoplasms was significantly higher in patients undergoing interval appendectomy. These findings should be considered when choosing treatment options after successful nonsurgical management of complicated appendicitis. abstract_id: PUBMED:34796402 Laparoscopic delayed and interval appendectomy in the workstyle reform era. Purpose: Excessive working hours have been reported to contribute to burnout among surgeons. In Japan, work-style reform is a problem that needs immediate attention. Acute appendectomy, which often occurs at nighttime, is one of the most common emergency surgeries. The feasibility of delayed and interval appendectomy remains to be investigated. Methods: Two hundred forty-five consecutive patients who underwent laparoscopic appendectomy in our hospital were enrolled. They were divided into three groups: emergency appendectomy (immediate surgery, soon after the diagnosis [EA group], n = 153), delayed appendectomy (surgery during daytime the following day [DA group], n = 38) and interval appendectomy (antibiotics treatment followed by selective surgery three to four months later [IA group], n = 54). The clinical background and surgical outcomes were compared. Next, the residents' excess working time per month was calculated. Results: The surgical outcomes (operation time, blood loss, length of hospital stay, postoperative complications) were similar between the EA and DA groups. However, no DA was performed during nighttime hours whereas 15.7% of EA was performed during nighttime hours (p = 0.0007). The surgical outcomes of the IA group were also comparable. The residents' excess working time declined following the introduction of DA and workstyle reform. Conclusion: Delayed and interval laparoscopic appendectomy are feasible, and can be performed to promote workstyle reform without impairing patient safety. abstract_id: PUBMED:36641313 Nationwide management of perforated pediatric appendicitis: Interval versus same-admission appendectomy. Purpose: Although conservative management followed by readmission for interval appendectomy is commonly used to manage perforated appendicitis, many studies are limited to individual or noncompeting pediatric hospitals. This study sought to compare national outcomes following interval or same-admission appendectomy in children with perforated appendicitis. Methods: The Nationwide Readmission Database was queried (2010-2014) for patients &lt;18 years old with perforated appendicitis who underwent appendectomy using ICD9-CM Diagnosis codes. A propensity score-matched analysis (PSMA) utilizing 33 covariates between those with (Interval Appendectomy) and without a prior admission (Same-Admission Appendectomy) was performed to examine postoperative outcomes. Results: There were 63,627 pediatric patients with perforated appendicitis. 1014 (1%) had a prior admission for perforated appendicitis within one calendar year undergoing interval appendectomy compared to 62,613 (99%) Same-Admission appendectomy patients. The Interval Appendectomy group was more likely to receive a laparoscopic (87% vs. 78% same-admission) than open (13% vs. 22% same-admission; p &lt; 0.001) operation. Patients receiving interval appendectomy were more likely to have their laparoscopic procedure converted to open (5% vs. 3%) and receive more concomitant procedures. PSMA demonstrated a higher rate of small bowel obstruction in those receiving Same-Admission appendectomy while all other complications were similar. Although those receiving Interval Appendectomy had a shorter index length of stay (LOS) and lower admission costs, they incurred an additional $8044 [$5341-$13,190] from their prior admission. Conclusion: Patients treated with interval appendectomy experienced more concomitant procedures and incurred higher combined hospitalization costs while still having a similar postoperative complication profile compared to those receiving same-admission appendectomy for perforated appendicitis. Level Of Evidence: III. Type Of Study: Retrospective Comparative Study. abstract_id: PUBMED:32676725 Is interval appendectomy really needed? A closer look at neoplasm rates in adult patients undergoing interval appendectomy after complicated appendicitis. Background: The treatment algorithm for appendicitis is evolving, with recent interest in non-operative management. However, the safety of non-operative management for patients with complicated appendicitis has been questioned due to concern for increased risk of occult appendiceal neoplasm in this patient population. Our study aims to determine the rate of neoplasms discovered during interval appendectomy for patients with complicated appendicitis and determine the necessity of interval appendectomy. Methods: A retrospective chart review was conducted on interval appendectomies performed in adult patients for complicated appendicitis at our institution over a 9-year period. Interval appendectomy was defined as appendectomy delayed from initial presentation with appendicitis. Complicated appendicitis was defined as perforation, phlegmon, and/or abscess as seen on computed tomography at time of presentation. Results: We identified 402 patients who underwent interval appendectomy for complicated appendicitis. A total of 36 appendiceal neoplasms were discovered on final pathology with an overall neoplasm rate of 9%. Patients with an appendiceal neoplasm were significantly older (56.6 years vs 45.1 years, p &lt; 0.01). No patients under the age of 30 had a neoplasm. The rate of appendiceal neoplasms in patients 30 years and older was 11%. The rate for patients 50 years and older was 16%. For patients 80 years and older, the rate of appendiceal neoplasm was 43%. Conclusion: The risk of occult appendiceal neoplasm is low in patients under the age of 30; however, there was an 11% rate of appendiceal neoplasm in patients 30 years and older. The risk increases with increased age, with a 16% risk in patients 50 years and older. Given these findings, we recommend consideration of interval appendectomy in all patients 30 years and older with complicated appendicitis. abstract_id: PUBMED:31822922 Intermediate-term evaluation of interval appendectomy in the pediatric population. Background: Initial non-operative management of children with complicated appendicitis has been well studied but when comparing emergency procedures the long-term effectiveness of interval appendectomy remains undefined. This study aimed to determine the effects of interval appendectomy from the perspective of long-term follow-up. Methods: A retrospective review of patients with complicated appendicitis between 2010 and 2017 was performed. The medical records of 471 patients with initial non-operative therapy and 377 patients treated with emergency appendectomy who served as controls were reviewed. Propensity score matching was performed to adjust for any potential selection bias in the two strategies. A comparison of the clinical outcomes, including short- and long-term postoperative complications, was conducted in the 348 matched patients. Results: On presentation, there were no differences in age, weight, sex distribution, white blood cell (WBC), or procalcitonin (PCT) between the two groups, except for days of symptoms. The patients undergoing emergency appendectomy had a high American Society of Anesthesiology (ASA) score (p = 0.002). The patients who underwent initial non-operative therapy had a lower complication rate, including surgical wound infection [odds ratio (OR), 3.77; 95% CI 2.59-5.50; p &lt; 0.001) and postoperative peritonitis or abscess (OR, 3.81; 95% CI 1.79-8.12; p &lt; 0.001) than those who underwent emergency appendectomy. Furthermore, the incidence of adhesive small bowel obstruction (ASBO) was lower in patients who underwent initial non-operative therapy than in patients who underwent emergency appendectomy (OR, 4.6; 95% CI 0.99-21.41; p = 0.032). Conclusions: Initial non-operative therapy with interval appendectomy was feasible for most patients with appendiceal abscesses and had advantages in terms of postoperative complications, especially regarding long-term obstruction events. Therefore, initial non-operative therapy with interval appendectomy should be considered the first treatment of choice for pediatric patients with complicated appendicitis. abstract_id: PUBMED:31175422 Predictors for interval appendectomy in non-operatively treated complicated appendicitis. Purpose: To determine the incidence rate and identify predictive factors for interval appendectomy after non-operatively treated complicated appendicitis. Methods: Single-center retrospective cohort study conducted between January 2008 and June 2017. Adult patients with acute appendicitis were identified. Patients with complicated appendicitis initially treated non-operatively were included. Outcomes included abscess rate on imaging, results of additional imaging during follow-up, incidence rate of and surgical indications for interval appendectomy, and outcomes of histological reports. Results: Of all adult patients with acute appendicitis (n = 1839), 9% (170/1839) was initially treated non-operatively. Median age of these patients was 55 years (IQR 42-65) and 48.8% (83/170) were men. In 36.4% (62/170) of the patients, an appendicular abscess was diagnosed. 62.4% (106/170) did not require subsequent surgery (no interval appendectomy group) and in 37.6% (64/170), an interval appendectomy was performed (interval appendectomy group). Median follow-up was 80 weeks (17-192) and 113 weeks (34-246), respectively. Most frequent reason to perform subsequent surgery was recurrent appendicitis (45% (29/64)). Increasing age was significantly associated with a lower risk of undergoing interval appendectomy (OR 0.7; CI 0.6-0.89); p = 0.002). In the interval appendectomy group, appendicular neoplasm was found in 11% (7/64) of the patients, in contrast to 1.5% (25/1669) of the patients that had acute surgery (p &lt; 0.001). Conclusions: One out of three patients non-operatively treated for complicated appendicitis required an interval appendectomy. The incidence of appendicular neoplasms was high in these patients compared with those that had acute surgery. Therefore, additional radiological imaging following non-operatively treated complicated appendicitis is recommended. Answer: The question of whether there is an oncologic indication for interval appendectomy is supported by several studies that have identified a significant rate of neoplasms in patients undergoing interval appendectomy, particularly in those over the age of 40. A retrospective review of patients with acute appendicitis found that 12% of patients who underwent interval appendectomy had appendiceal neoplasms, with the rate increasing to 16% in patients over age 40 (PUBMED:25543294). Another study reported an incidence rate of neoplasms of 12.6% in the interval appendectomy group, which was significantly higher than the 1.2% in the immediate appendectomy group (PUBMED:32054244). Furthermore, a study focusing on adult patients who underwent interval appendectomy after complicated appendicitis found an overall neoplasm rate of 9%, with the rate increasing with age, reaching 16% in patients 50 years and older (PUBMED:32676725). Additionally, a study that examined preoperative factors for interval appendectomy found that the presence of periappendiceal fluid collection (PAFC) on preoperative computed tomography was associated with a higher rate of neoplasia and carcinoma, suggesting that PAFC positivity could be useful as an indication for interval appendectomy (PUBMED:35637469). Another study also highlighted the increased incidence of appendicular neoplasms in patients who were non-operatively treated for complicated appendicitis and subsequently required interval appendectomy (PUBMED:31175422). These findings suggest that there is an oncologic indication for interval appendectomy, especially in older patients and those with certain radiological findings, due to the increased risk of discovering neoplasms in these cases. However, the necessity of interval appendectomy in all cases remains controversial, and the decision should be individualized based on patient factors such as age, clinical presentation, and radiological findings.
Instruction: Is there an association between leukoaraiosis volume and diabetes? Abstracts: abstract_id: PUBMED:26740385 Is there an association between leukoaraiosis volume and diabetes? Objectives: The relation between white matter loss (WML) and diabetes is still debated. The aim of this study was to investigate the correlation between typical WML- and diabetes-related magnetic resonance imaging (MRI) findings in a cohort of patients scheduled for carotid endarterectomy (CEA). Materials And Methods: Ninety-three consecutive patients (mean age 71±9years; male 71) were included in a single-centre retrospective study. All the patients underwent MRI as baseline evaluation prior to CEA. A neuroradiologist blinded to the presence of risk factors calculated WML volume and number of lesions on FLAIR images using a semi-automated segmentation technique. Receiver operating characteristics analysis was performed to search for any association between WML volume and the number of WML lesions. The Mann-Whitney tests were used to determine significant WML differences between diabetic and non-diabetic patients. Logistic regression analysis was performed to evaluate the potential association of other variables. Results: The prevalence of diabetes was 20.4% (n=19). WML volume and number of WML lesions were significantly associated with diabetes (P=0.001). A statistically significant difference in WML volume was found between diabetic and non-diabetic patients (P&lt;0.0001). Only diabetes, among all the investigated variables (WML volume, CAD status, age, smoking status, gender, hypertension, hyperlipidemia, diabetes) was significantly associated with WML (P=0.0001). Conclusion: Our results demonstrate a strong statistical correlation between diabetes and WML. Future scientific challenges could include the identification of potential therapeutic targets and the creation of dedicated screening protocols for WML in diabetic patients other than the simple measurement of leukoaraiosis total burden. abstract_id: PUBMED:31489777 Association of serum cystatin C with white matter abnormalities in patients with amnestic mild cognitive impairment. Aim: White matter hyperintensities (WMH) on MRI have been reported to be a risk factor for the conversion from mild cognitive impairment (MCI) to Alzheimer's disease, although the reason remains unclear. In the present study, we hence investigated the associations between WMH volumes and cognitive function, blood levels of various molecules, and the presence of lifestyle-associated diseases in patients with amnestic MCI. Methods: The initial data of 38 patients with amnestic MCI and 10 normal control individuals were analyzed. The volumes of periventricular hyperintensities (PVH) and deep WMH (DWMH) were measured on T2 fluid-attenuated inversion recovery using the imaging software, 3D Slicer; and the association between PVH/DWMH volumes and cognitive function, blood levels of molecules (such as cystatin C [CysC], 25-hydroxyvitamin D and homocysteine) and the presence of lifestyle-associated diseases (such as hypertension, hyperlipidemia and diabetes mellitus) were analyzed. Results: In the MCI group, the PVH volume : intracranial volume ratio significantly correlated with Trail Making Test-A/B scores and CysC level by Pearson's analysis, and the PVH volume : intracranial volume ratio significantly correlated with only CysC levels, whereas the DWMH volume : intracranial volume ratio did not correlate with any items at all by linear multiple regression analysis. Conclusions: PVH volume was closely associated with frontal lobe dysfunction, particularly with attention and executive dysfunction. Serum CysC level was associated with PVH volume, which suggests that CysC might be a useful marker for determining treatment strategies for white matter abnormalities in amnestic MCI. Geriatr Gerontol Int 2019; 19: 1036-1040. abstract_id: PUBMED:23868156 Automated white matter total lesion volume segmentation in diabetes. Background And Purpose: WM lesion segmentation is often performed with the use of subjective rating scales because manual methods are laborious and tedious; however, automated methods are now available. We compared the performance of total lesion volume grading computed by use of an automated WM lesion segmentation algorithm with that of subjective rating scales and expert manual segmentation in a cohort of subjects with type 2 diabetes. Materials And Methods: Structural T1 and FLAIR MR imaging data from 50 subjects with diabetes (age, 67.7 ± 7.2 years) and 50 nondiabetic sibling pairs (age, 67.5 ± 9.4 years) were evaluated in an institutional review board-approved study. WM lesion segmentation maps and total lesion volume were generated for each subject by means of the Statistical Parametric Mapping (SPM8) Lesion Segmentation Toolbox. Subjective WM lesion grade was determined by means of a 0-9 rating scale by 2 readers. Ground-truth total lesion volume was determined by means of manual segmentation by experienced readers. Correlation analyses compared manual segmentation total lesion volume with automated and subjective evaluation methods. Results: Correlation between average lesion segmentation and ground-truth total lesion volume was 0.84. Maximum correlation between the Lesion Segmentation Toolbox and ground-truth total lesion volume (ρ = 0.87) occurred at the segmentation threshold of k = 0.25, whereas maximum correlation between subjective lesion segmentation and the Lesion Segmentation Toolbox (ρ = 0.73) occurred at k = 0.15. The difference between the 2 correlation estimates with ground-truth was not statistically significant. The lower segmentation threshold (0.15 versus 0.25) suggests that subjective raters overestimate WM lesion burden. Conclusions: We validate the Lesion Segmentation Toolbox for determining total lesion volume in diabetes-enriched populations and compare it with a common subjective WM lesion rating scale. The Lesion Segmentation Toolbox is a readily available substitute for subjective WM lesion scoring in studies of diabetes and other populations with changes of leukoaraiosis. abstract_id: PUBMED:25040435 Sleep duration is associated with white matter hyperintensity volume in older adults: the Northern Manhattan Study. Self-reports of long or short sleep durations have indicated an association with cardiovascular morbidity and mortality, but there are limited data evaluating their association with white matter hyperintensity volume (WMHV), a marker of cerebral small vessel disease. We conducted a cross-sectional analysis of self-reported sleep duration to test for a correlation with white matter hyperintensities, measured by quantitative magnetic resonance imaging (MRI), in the Northern Manhattan Study. We used multivariable linear regression models to assess associations between both short (&lt;6 h) and long (≥9 h) sleep durations and log-transformed WMHV, adjusting for demographic, behavioural and vascular risk factors. A total of 1244 participants, mean age 70 ± 9 years, 61% women and 68% Hispanics were analysed with magnetic resonance brain imaging and self-reported sleep duration. Short sleep was reported by 23% (n = 293) and long sleep by 10% (n = 121) of the sample. Long sleep (β = 0.178; P = 0.035), but not short sleep (β = -0.053; P = 0.357), was associated with greater log-WMHV in fully adjusted models. We observed an interaction between sleep duration, diabetes mellitus and log-WMHV (P = 0.07). In fully adjusted models, stratified analysis showed that long sleep duration was associated with greater WMHV only in those with diabetes (β = 0.78; P = 0.0314), but not in those without diabetes (β = 0.022; P = 0.2), whereas short sleep was not associated with white matter hyperintensities in those with or without diabetes. In conclusion, long sleep duration was associated with a greater burden of white matter lesions in this stroke-free urban sample. The association was seen mainly among those with diabetes mellitus. abstract_id: PUBMED:35229016 Vascular risk profile and white matter hyperintensity volume among Mexican Americans and non-Hispanic Whites: The HABLE study. Introduction: Among vascular risk factors we hypothesized that an increased prevalence of diabetes in Hispanics would be associated with greater white matter hyperintensity (WMH) volume, which may contribute to cognitive decline. Methods: A total of 1318 participants (60% female; 49% Hispanic, 51% non-Hispanic White; age 66.2 ± 8.9 years) underwent clinical evaluation and brain magnetic resonance imaging (MRI). WMH volume associations were assessed with age, sex, and ethnicity and then with vascular risk factors in a selective regression model. Results: WMH volume was greater with older age (P &lt; .0001), Hispanic ethnicity (P = .02), and female sex (P = .049). WMH volume was best predicted by age, diastolic blood pressure, hypertension history, hemoglobin A1c (HbA1c), white blood cell count, and hematocrit (P &lt; .01 for all). Elevated HbA1c was associated with greater WMH volume among Hispanics (parameter estimate 0.08 ± 0.02, P &lt; .0001) but not non-Hispanic Whites (parameter estimate 0.02 ± 0.04, P = .5). Discussion: WMH volume was greater in Hispanics, which may be partly explained by increased WMH volume related to elevated HbA1c among Hispanics but not non-Hispanic Whites. abstract_id: PUBMED:34105230 Association of regional white matter hyperintensity volumes with cognitive dysfunction and vascular risk factors in patients with amnestic mild cognitive impairment. Aim: White matter hyperintensities (WMH) obtained by magnetic resonance imaging (MRI) have been reported to promote neurodegeneration and cognitive decline in patients with mild cognitive impairment (MCI). However, little is known about the association between regional WMH (rWMH) and cognitive dysfunction in MCI. We hence investigated the associations between rWMH volumes and cognitive dysfunction in MCI. Methods: Thirty-eight subjects with amnestic MCI were analysed. The volumes of periventricular hyperintensities (PVH) and deep WMH (DWMH) were measured on a T2-FLAIR MRI using a 3D-slicer, and regional PVH and DWMH (rPVH and rDWMH) volumes were calculated. The associations of rPVH and rDWMH volumes with cognition and blood levels of various molecules were investigated. Furthermore, rPVH and rDWMH volumes were compared between MCI with vascular risk factors, such as hypertension, diabetes mellitus (DM), and dyslipidemia, and those without these risk factors. Results: rPVH volume (bilateral cornu frontale, pars parietalis, and cornu occipitale) positively correlated with Trail Making Test-A/B scores and CysC level, whereas rDWMH volume did not correlate with any of the items. rPVH volumes (right cornu frontale, bilateral pars parietalis and cornu occipitale, and right pars temporalis) and rDWMH volumes (left frontal and parietal lobes) were significantly larger in MCI patients with DM than in those without. Conclusions: PVH volumes (bilateral areas of cornu frontale, pars parietalis, and cornu occipitale) were closely associated with attention and executive dysfunction. Serum CysC level and DM were associated with WMH volume, suggesting that CysC level and DM might be important markers for determining treatment strategies for white matter abnormalities in MCI. Geriatr Gerontol Int 2021; 21: 644-650. abstract_id: PUBMED:19467838 The relationship between leukoaraiosis volume and parameters of carotid artery duplex ultrasonographic scanning in asymptomatic diabetic patients. The significance of asymptomatic leukoaraiosis is unknown although cerebral microangiopathy is regarded as its pathology. To confirm the relationship between leukoaraiosis and cerebral microangiopathy, the pulsatility index (PI) at the cervical internal carotid artery (CA) which has been proposed as an indicator of microangiopathy, was evaluated in relation to leukoaraiosis. With 122 asymptomatic diabetic patients, leukoaraiosis volume was calculated with magnetic resonance imaging and its correlation with age, microalbuminuria and parameters of extracranial CA duplex ultrasonographic scanning were analyzed. The leukoaraiosis volume correlated with PI on the right side (p=0.027), and intima-media thickness on the left (p=0.017). However multivariate analysis indicated that age alone was a significant independent factor, positively correlating with leukoaraiosis volume on both sides (p&lt;0.0001 on both sides). The underlying pathology of leukoaraiosis should be multifactorial. Further exploration is necessary to distinguish "ischemic" and "non-ischemic" leukoaraiosis. abstract_id: PUBMED:33026847 Association of Genetic Variation in a Wnt Signaling Pathway Gene (β-Catenin) with Susceptibility to Leukoaraiosis. Aim: Blood-brain barrier (BBB) disruption is the primary initiating cause of cerebral small-vessel diseases including leukoaraiosis (LA). β-Catenin is a key regulator of the BBB and plays an important role in cell-cell adhesion at adherens junctions by interacting with cadherin molecules. Thus, β-Catenin may be a good candidate gene for LA. We performed a genetic analyses to investigate the association between β-catenin alleles and LA. Materials and Methods: A total of 339 LA cases and 203 controls were enrolled from individuals who underwent brain magnetic resonance imaging with obtainable vascular risk factors. Genotyping of β-catenin single nucleotide polymorphisms (SNPs), including rs1880481 C &gt; A, rs13072632 C &gt; T, and rs4135385 A &gt; G, was performed by real-time polymerase chain reaction using a LightCycler 2.0. Results: Two SNPs, rs1880481 and rs4135385, showed significant differences in their allelic frequencies between the control and LA groups and the combinatorial effects of the risk alleles for these two SNPs also significantly increased the risk of LA. The G-T-A, A-T-A, and A-T-G haplotypes for the three SNPs showed significant differences in both types of LA: LA-periventricular white matter and LA-deep white matter. However, the C-T-G haplotype was only significantly different for LA-PVWM, while the A-C-A was only significantly different for LA-DWM. The combination of diabetes mellitis, hypertension, and these risk alleles increased the likelihood of both types of LA. Conclusion: This study provides evidence that β-catenin polymorphisms and their associated haplotypes are associated with susceptibility to LA. abstract_id: PUBMED:33686727 Vitamin D and white matter hyperintensities: results of the population-based Heinz Nixdorf Recall Study and 1000BRAINS. Background And Purpose: Cross-sectional studies showed an inverse association between serum 25-hydroxyvitamin D (25OHD) and white matter hyperintensities (WMHs) whereas the few longitudinal studies did not. The association between baseline 25OHD and WMHs at 10-year follow-up in the Heinz Nixdorf Recall Study plus 1000BRAINS was investigated. Methods: Data of 505 participants (49% women, 56.2 ± 6.6 years) with 25OHD at baseline (2000-2003) and WMH volume and grade of WMHs using the Fazekas classification at 10-year follow-up were analysed. The association between deseasonalized 25OHD and the base-10 logarithm of WMH volume was evaluated by multiple linear regression, adjusted for age, sex, education, smoking, alcohol consumption, sports, diabetes mellitus, systolic blood pressure and total cholesterol. β-estimators were transformed back (10β ). Using multiple logistic regression, odds ratios (ORs) and 95% confidence intervals (95% CI) were calculated to evaluate the association between deseasonalized 25OHD and Fazekas grades (0, absence and 1, punctate foci vs. 2, beginning and 3, large confluence). Results: Mean 25OHD was 17.0 ± 8.2 ng/ml, and mean deseasonalized 25OHD was 16.9 ± 7.5 ng/ml. Mean WMH volume was 16.6 ± 17.4 ml, range 1-132 ml. Most grade 2-3 WMHs were found to be periventricular (39% of the participants), parietal (32%) and frontal (31%) (temporal 6%, occipital 3%). The linear regression showed an inverse association between 25OHD and WMH volume. On average, a 25OHD increase of 1 ng/ml was associated with a reduced WMH volume by a factor of 0.99 (95% CI 0.98; 1.00) (fully adjusted). There was also some indication for an inverse association between 25OHD and extent of periventricular (OR 0.98 [95% CI 0.96; 1.01]), frontal (0.99 [0.97; 1.02]) and parietal (0.98 [0.95; 1.00]) WMHs according to the Fazekas classification. Conclusions: Lower 25OHD may be a risk factor for the occurrence of WMHs. abstract_id: PUBMED:36002958 Different cardiovascular risk factors are related to distinct white matter hyperintensity MRI phenotypes in older adults. The underlying mechanisms of the association between cardiovascular risk factors and a higher white matter hyperintensity (WMH) burden are unknown. We investigated the association between cardiovascular risk factors and advanced WMH markers in 155 non-demented older adults (mean age: 71 ± 5 years). The association between cardiovascular risk factors and quantitative MRI-based WMH shape and volume markers were examined using linear regression analysis. Presence of hypertension was associated with a more irregular shape of periventricular/confluent WMH (convexity (B (95 % CI)): -0.12 (-0.22--0.03); concavity index: 0.06 (0.02-0.11)), but not with total WMH volume (0.22 (-0.15-0.59)). Presence of diabetes was associated with deep WMH volume (0.89 (0.15-1.63)). Body mass index or hyperlipidemia showed no association with WMH markers. In conclusion, different cardiovascular risk factors seem to be related to a distinct pattern of WMH shape markers in non-demented older adults. These findings may suggest that different underlying cardiovascular pathological mechanisms lead to different WMH MRI phenotypes, which may be valuable for early detection of individuals at risk for stroke and dementia. Answer: Yes, there is an association between leukoaraiosis volume and diabetes. Studies have demonstrated a significant correlation between diabetes and white matter lesions (WML), also known as leukoaraiosis. One study found that both the volume and number of WML lesions were significantly associated with diabetes, with a statistically significant difference in WML volume between diabetic and non-diabetic patients. Diabetes was the only variable among several investigated (including WML volume, coronary artery disease status, age, smoking status, gender, hypertension, hyperlipidemia) that was significantly associated with WML (PUBMED:26740385). Another study observed that long sleep duration was associated with a greater burden of white matter lesions, particularly among those with diabetes mellitus (PUBMED:25040435). Additionally, elevated hemoglobin A1c (HbA1c) levels were associated with greater WMH volume among Hispanics but not non-Hispanic Whites, suggesting that diabetes-related metabolic control may influence leukoaraiosis volume differently across ethnicities (PUBMED:35229016). Furthermore, patients with amnestic mild cognitive impairment (MCI) and diabetes mellitus had significantly larger regional periventricular hyperintensities (PVH) and deep white matter hyperintensities (DWMH) volumes compared to those without diabetes, indicating that diabetes is associated with WMH volume and could be an important marker for determining treatment strategies for white matter abnormalities in MCI (PUBMED:34105230). These findings collectively suggest that there is a notable association between leukoaraiosis volume and diabetes.
Instruction: Do the British public recognise differences in survival between three common cancers? Abstracts: abstract_id: PUBMED:22555400 Do the British public recognise differences in survival between three common cancers? Background: The recognition that cancer is not a single entity, rather that different cancers have different causes and trajectories, has been a key development in the scientific understanding of cancer. However, little is known about the British public's awareness of differences between cancers. This study examined differences in perceived survivability for three common cancers with widely disparate survival rates (breast, colorectal and lung). Method: In a population-based survey, using home interviews (N=2018), respondents answered a quantitative (numeric) question on 5-year survival and a qualitative (non-numeric) question on curability, for each of the three cancers. Results: British adults correctly recognised that 5-year survival for breast cancer was higher than for colorectal cancer (CRC), which in turn was recognised to be higher than for lung cancer. Similarly, curability was perceived to be higher for breast than CRC, and both were perceived to be more curable than lung cancer. Awareness of survival differences did not vary by sex, age or socioeconomic status. In terms of absolute values, there was a tendency to underestimate breast cancer survival and overestimate lung cancer survival. Conclusion: The British public appear to be aware that not all cancers are equally fatal. abstract_id: PUBMED:17696733 Ethnic differences in survival for female cancers of the breast, cervix and colorectum in British Columbia, Canada. Background: Chinese and South Asians are among the fastest growing minority populations in Canada; however little is known about the burden of cancer in these populations. Objective: The objective is to examine survival rates for breast, cervical and colorectal cancers in women within these two ethnic populations, as compared to the BC general population. Methods: Survival rates were calculated for three time periods in the Chinese, South Asian and BC general populations, using the BC cancer registry. Ethnicity within the registry was determined using surnames. Results: Survival rates for female breast, cervical and colorectal cancers have improved over time in all three population groups, however general differences were found among the groups. Chinese women had higher survival rates than both South Asians and all BC women for breast and cervical cancer, and intermediate survival rates between South Asians and all BC women for colorectal cancer. South Asian women had the highest survival rates for colorectal cancer, similar survival rates to all BC women for breast cancer, and lower survival rates for cervical cancer. Interpretation: Differences in the observed survival rates may be explained by variations in screening and early detection, treatment practices, and cancer biology. This is discussed more fully for each cancer site. abstract_id: PUBMED:19863791 Comparison of breast cancer survival in two populations: Ardabil, Iran and British Columbia, Canada. Background: Patterns in survival can provide information about the burden and severity of cancer, help uncover gaps in systemic policy and program delivery, and support the planning of enhanced cancer control systems. The aim of this paper is to describe the one-year survival rates for breast cancer in two populations using population-based cancer registries: Ardabil, Iran, and British Columbia (BC), Canada. Methods: All newly diagnosed cases of female breast cancer were identified in the Ardabil cancer registry from 2003 to 2005 and the BC cancer registry for 2003. The International Classification of Disease for Oncology (ICDO) was used for coding cancer morphology and topography. Survival time was determined from cancer diagnosis to death. Age-specific one-year survival rates, relative survival rates and weighted standard errors were calculated using life-tables for each country. Results: Breast cancer patients in BC had greater one-year survival rates than patients in Ardabil overall and for each age group under 60. Conclusion: These findings support the need for breast cancer screening programs (including regular clinical breast examinations and mammography), public education and awareness regarding early detection of breast cancer, and education of health care providers. abstract_id: PUBMED:23053792 Gender- and ethnicity-specific survival trends of oral cavity and oropharyngeal cancers in British Columbia. Introduction: A shift in etiology of oral cancers has been associated with a rise in incidence for oropharyngeal cancers (OPC) and decrease for oral cavity cancers (OCC); however, there is limited information about population-based survival trends. We report epidemiological transitions in survival for both OPC and OCC from a population-based cancer registry, focusing upon gender and ethnic differences. Methods: All primary oral cancers diagnosed between 1980 and 2005 were identified from the British Columbia Cancer Registry and regrouped into OPC and OCC by topographical subsites, time periods (1980-1993 and 1994-2005), stage at diagnosis, and ethnicity. Cases were then followed up to December 2009. Using gender-based analysis, actuarial life tables were used to calculate survival rates, which were compared using Kaplan-Meier curves and log-rank tests. Results: For OPC, survival improved, significant for tonsil and base of tongue in men and marginally significant at base of tongue in women. This improvement occurred in spite of an increase in late-stage diagnosis for OPC in both genders. Interestingly, there was no difference in survival for early- and late-stage disease for OPC in men. For OCC, there was a decrease in survival for floor of mouth cancers in both genders although significant in women only. South Asians had the poorest survival for OCC in both genders. Conclusion: Survival for OPC improved, more dramatically in men than women, in spite of late-stage diagnosis and increasing nodal involvement. Given the poor survival rates and need for early detection, targeted OCC screening programs are required for South Asians. abstract_id: PUBMED:30149161 Persistent Disparities in Survival for Patients with Glioblastoma. Background: Glioblastoma (GBM) is the most common malignant primary brain cancer in adults. Recent efforts have elucidated genetic features of tumor cells and thus enhanced our knowledge of GBM pathophysiology. The most recent clinical trials report median overall survival between 14 and 20 months. However, population level outcomes are quite variable and there is a paucity of such data within the literature. Methods: Three hundred seventy-two patients with GBM were diagnosed in the Canadian province of British Columbia between January 2013 and January 2015 and 278 patients had surgery. Of these, 268 had surgery in British Columbia and we have performed a retrospective review of their survival outcomes. Results: Our results indicate a median age of 61.8 years at time of diagnosis, with a slight preponderance of male patients. The median overall survival was 10 months for patients in our cohort. As expected, patients older than the age of 65 and those with worse initial Karnofsky Performance Status scores had a poorer prognosis. Moreover, we have found extent of resection, treatment strategies, and treatment location affect overall survival. Conclusions: The present study highlights factors that affect patient survival after surgery in British Columbia. Our data are gathered within a single-payer, high-resource setting which removes possible confounders in outcome analysis. We find persistent differences in overall survival when compared with clinical trials and the Surveillance, Epidemiology, and End Results database. Further efforts should ensure access to the gold standard of care. All neuro-oncology centers should analyze the real-world outcomes of their local glioblastoma treatment strategies. Knowledge of the variance from expected and comparative results are fundamental for appropriate patient care. abstract_id: PUBMED:28887646 Cancer in First Nations people living in British Columbia, Canada: an analysis of incidence and survival from 1993 to 2010. Background: For First Nations (FN) peoples living in British Columbia (BC), little is known regarding cancer in the population. The aim of this study was to explore cancer incidence and survival in the FN population of BC and compare it to the non-FN population. Methods: All new cancers diagnosed from 1993 to 2010 were linked to the First Nations Client File (FNCF). Age-standardized incidence rates (ASIR) and rate ratios, and 1- and 5-year cause-specific survival estimates and hazard ratios were calculated. Follow-up end date for survival was December 31, 2011 and follow-up time was censored at a maximum of 15 years. Results: ASIR of colorectal cancer (male SRR = 1.42, 95% CI 1.25-1.61; female SRR = 1.21, 95% CI 1.06-1.38) and cervical cancer (SRR = 1.84, 95% CI 1.45-2.33) were higher overall in FN residents in BC, compared to non-FN residents. Incidence rates of almost all other cancers were generally similar or lower in FN populations overall and by sex, age, and period categories, compared to non-FN residents. Trends in ASIR over time were similar except for lung (increasing for FN, decreasing for non-FN) and colorectal cancers (increasing for FN, decreasing for non-FN). Conversely, survival rates were generally lower for FN, with differences evident for some cancer sites at 1 year following diagnosis. Conclusion: FN people living in BC face unique cancer issues compared to non-FN people. Higher incidence and lower survival associated with certain cancer types require further research to look into the likely multifaceted basis for these findings. abstract_id: PUBMED:19036470 Improved survival from lung cancer in British Columbia compared to Scotland-are different treatment rates the whole story? To explore possible causes of the poor survival of Scottish lung cancer patients, a retrospective registry-based audit was conducted comparing demography, treatment and survival of 3833 Scottish patients and 2073 from British Columbia (BC). Patients from Scotland were older, had a lower rate of pathological confirmation (74% vs 89%, p&lt;0.001), but more squamous (51% vs 31%, p&lt;0.001) or small cell (SCLC) (18% vs 15%, p=0.005) cancers. Fewer Scottish patients received any treatment (57% vs 66%, p&lt;0.001) or treatment aimed at cure (14% vs 26%, p&lt;0.001). Survival was lower in Scotland (median 3.6 months vs 7.3 months; 5% vs 10% 5-year overall survival, p&lt;0.001), irrespective of treatment intent (potentially curative treatment median survival 20.9 months vs 34.0 months, 5-year overall survival 29% vs 34%, p&lt;0.001; palliative treatment 5.0 months vs 6.3 months (p&lt;0.001) and no treatment 1.4 months vs 2.5 months (p&lt;0.001)). With treatment intent included in a multivariate analysis, the hazard ratio for death for lung cancer patients in Scotland compared to British Columbia was 1.5. Relative survival was higher in BC (38% at 1 year and 12% at 5 years vs 22% and 6%, p&lt;0.001), indicating that life expectancy differences between the two countries was not the explanation. Reduced levels of treatment could only partially explain survival differences and other unknown factors related to lifestyle differences such as diet and smoking, co-morbid diseases, population genetics or cancer biology, may be important and warrant further exploration. abstract_id: PUBMED:27054698 Differences in Breast Cancer Survival between Public and Private Care in New Zealand: Which Factors Contribute? Background: Patients who received private health care appear to have better survival from breast cancer compared to those who received public care. This study investigated if this applied to New Zealand women and identified factors that could explain such disparities. Methods: This study involved all women who were diagnosed with primary breast cancer in two health regions in New Zealand, covering about 40% of the national population, between June 2000 and May 2013. Patients who received public care for primary treatment, mostly surgical treatment, were compared with those who received private care in terms of demographics, mode of presentation, disease factors, comorbidity index and treatment factors. Cox regression modelling was performed with stepwise adjustments, and hazards of breast cancer specific mortality associated with the type of health care received was assessed. Results: Of the 14,468 patients, 8,916 (61.6%) received public care. Compared to patients treated in private care facilities, they were older, more likely to be Māori, Pacifika or Asian and to reside in deprived neighbourhoods and rural areas, and less likely to be diagnosed with early staged cancer and to receive timely cancer treatments. They had a higher risk of mortality from breast cancer (hazard ratio: 1.95; 95% CI: 1.75, 2.17), of which 80% (95% CI: 63%, 100%) was explained by baseline differences, particularly related to ethnicity, stage at diagnosis and type of loco-regional therapy. After controlling for these demographic, disease and treatment factors, the risk of mortality was still 14% higher in the public sector patients. Conclusions: Ethnicity, stage at diagnosis and type of loco-regional therapy were the three key contributors to survival disparities between patients treated in public and private health care facilities in New Zealand. The findings underscore the need for more efforts to improve the quality, timeliness and equitability of public cancer care services. abstract_id: PUBMED:18299732 Incidence and survival for gastric and esophageal cancer diagnosed in British Columbia, 1990 to 1999. Background: Geographical variation and temporal trends in the incidence of esophageal and gastric cancers vary according to both tumour morphology and organ subsite. Both diseases are among the deadliest forms of cancer. The incidence and survival rates for gastric and esophageal carcinoma in British Columbia (BC) between 1990 and 1999 are described. Methods: Incidence data for the period 1990 to 1999 were obtained from the BC Cancer Registry. Age-adjusted incidence and survival rates were computed by anatomical subsite, histological type and sex. All rates were standardized to the 1996 Canadian population. The estimated annual percentage change (EAPC) was used to measure incidence changes over time. Kaplan-Meier curves were used to show survival rates, and log-rank tests were used to test for differences in the curves among various groups. Results: Between 1990 and 1999, 1741 esophageal cancer cases and 3431 gastric cancer cases were registered in BC. There was an increase in the incidence of adenocarcinoma of the esophagus over time (EAPC=9.6%) among men, and of gastric cardia cancer among both women (EAPC=9.2%) and men (EAPC=3.8%). Patients with proximal gastric (cardia) cancer had significantly better survival rates than patients with cancer in the lower one-third of the esophagus. Among gastric cancers, patients with distal tumours had a significantly better survival rate than patients with proximal tumours. Discussion: The incidences of proximal gastric cancer and esophageal adenocarcinoma are increasing, and their survival patterns are different. Examining these cancers together may elucidate new etiological and prognostic factors. abstract_id: PUBMED:29205301 Public health surveillance of cancer survival in the United States and worldwide: The contribution of the CONCORD programme. CONCORD is a programme for the global surveillance of cancer survival. In 2015, the second cycle of the program (CONCORD-2) established long-term surveillance of cancer survival worldwide, for the first time, in the largest cancer survival study published to date. CONCORD-2 provided cancer survival trends for 25,676,887 patients diagnosed during the 15-year period between 1995 and 2009 with 1 of 10 common cancers that collectively represented 63% of the global cancer burden in 2009. Herein, the authors summarize the past, describe the present, and outline the future of the CONCORD programme. They discuss the difference between population-based studies and clinical trials, and review the importance of international comparisons of population-based cancer survival. This study will focus on the United States. The authors explain why population-based survival estimates are crucial for driving effective cancer control strategies to reduce the wide and persistent disparities in cancer survival between white and black patients, which are likely to be attributable to differences in access to early diagnosis and optimal treatment. Cancer 2017;123:4977-81. Published 2017. This article is a U.S. Government work and is in the public domain in the USA. Answer: Yes, the British public appears to recognize differences in survival between three common cancers. According to a population-based survey conducted through home interviews with 2018 respondents, British adults correctly perceived that 5-year survival for breast cancer was higher than for colorectal cancer, which in turn was recognized to be higher than for lung cancer. Similarly, curability was perceived to be higher for breast cancer than colorectal cancer, and both were perceived to be more curable than lung cancer. This awareness of survival differences did not vary by sex, age, or socioeconomic status. However, there was a tendency for the public to underestimate breast cancer survival and overestimate lung cancer survival (PUBMED:22555400).
Instruction: Is obstructive sleep apnea syndrome a risk factor for auditory pathway? Abstracts: abstract_id: PUBMED:21479758 Is obstructive sleep apnea syndrome a risk factor for auditory pathway? Purpose: The transduction mechanism of the inner ear and the transmission of nerve impulses along the auditory way are highly dependent upon the cochlear oxygen supply. Several studies have considered the possibility that obstructive sleep apnea-hypopneas during sleep can interfere with these processes, and the results are not uniform. The aim of the study is to evaluate the auditory function in adult patients affected by severe obstructive sleep apnea syndrome (OSAS). Methods: Thirty-nine patients in this study were included and divided in OSAS group, with severe OSAS (Apnea-Hypopnea Index, AHI &gt; 30), and control group with snoring without OSAS (AHI &lt; 5). Each patient was subjected to pure-tone audiogram (PTA), otoacoustic emission (OAE), and brainstem auditory evoked potentials. Results: The OSAS group showed a PTA significantly higher than the control group (14.23 ± 6.25 vs. 7.45 ± 2.54; p &lt; 0.01), a lower TEOAE reproducibility (0.57 ± 0.10 vs. 0.92 ± 0.10; p &lt; 0.01) such as a lower signal-to-noise 0atio (p &lt; 0,01) and a lower DPOAE amplitude (5.96 ± 6.34; 13.18 ± 2.97; p &lt; 0.01). The mean latencies of waves I, III, and V were prolonged in OSAS group as compared to the healthy people, especially for wave V (p &lt; 0.05). The interpeak latency (IPL) of I-V was significantly higher (p &lt; 0.01) in the OSAS patients (5.84 ± 0.15) as compared to the control group (5.4 ± 0.12), such as IPLs I-III and III-V (p &lt; 0.05). Conclusions: Our data showed an auditory dysfunction in patients affected by severe OSAS, suggesting that severe OSAS could represent a risk factor for auditory pathway. abstract_id: PUBMED:28024827 Evaluation of peripheral auditory pathways and brainstem in obstructive sleep apnea. Introduction: Obstructive sleep apnea causes changes in normal sleep architecture, fragmenting it chronically with intermittent hypoxia, leading to serious health consequences in the long term. It is believed that the occurrence of respiratory events during sleep, such as apnea and hypopnea, can impair the transmission of nerve impulses along the auditory pathway that are highly dependent on the supply of oxygen. However, this association is not well established in the literature. Objective: To compare the evaluation of peripheral auditory pathway and brainstem among individuals with and without obstructive sleep apnea. Methods: The sample consisted of 38 adult males, mean age of 35.8 (±7.2), divided into four groups matched for age and Body Mass Index. The groups were classified based on polysomnography in: control (n=10), mild obstructive sleep apnea (n=11) moderate obstructive sleep apnea (n=8) and severe obstructive sleep apnea (n=9). All study subjects denied a history of risk for hearing loss and underwent audiometry, tympanometry, acoustic reflex and Brainstem Auditory Evoked Response. Statistical analyses were performed using three-factor ANOVA, 2-factor ANOVA, chi-square test, and Fisher's exact test. The significance level for all tests was 5%. Results: There was no difference between the groups for hearing thresholds, tympanometry and evaluated Brainstem Auditory Evoked Response parameters. An association was observed between the presence of obstructive sleep apnea and changes in absolute latency of wave V (p=0.03). There was an association between moderate obstructive sleep apnea and change of the latency of wave V (p=0.01). Conclusion: The presence of obstructive sleep apnea is associated with changes in nerve conduction of acoustic stimuli in the auditory pathway in the brainstem. The increase in obstructive sleep apnea severity does not promote worsening of responses assessed by audiometry, tympanometry and Brainstem Auditory Evoked Response. abstract_id: PUBMED:36170352 Pathway-Specific Polygenic Risk Scores Identify Obstructive Sleep Apnea-Related Pathways Differentially Moderating Genetic Susceptibility to Coronary Artery Disease. Background: Obstructive sleep apnea (OSA) and its features, such as chronic intermittent hypoxia, may differentially affect specific molecular pathways and processes in the pathogenesis of coronary artery disease (CAD) and influence the subsequent risk and severity of CAD events. In particular, competing adverse (eg, inflammatory) and protective (eg, increased coronary collateral blood flow) mechanisms may operate, but remain poorly understood. We hypothesize that common genetic variation in selected molecular pathways influences the likelihood of CAD events differently in individuals with and without OSA, in a pathway-dependent manner. Methods: We selected a cross-sectional sample of 471 877 participants from the UK Biobank, with 4974 ascertained to have OSA, 25 988 to have CAD, and 711 to have both. We calculated pathway-specific polygenic risk scores for CAD, based on 6.6 million common variants evaluated in the CARDIoGRAMplusC4D genome-wide association study (Coronary ARtery DIsease Genome wide Replication and Meta-analysis [CARDIoGRAM] plus The Coronary Artery Disease [C4D] Genetics), annotated to specific genes and pathways using functional genomics databases. Based on prior evidence of involvement with intermittent hypoxia and CAD, we tested pathway-specific polygenic risk scores for the HIF1 (hypoxia-inducible factor 1), VEGF (vascular endothelial growth factor), NFκB (nuclear factor kappa-light-chain-enhancer of activated B cells) and TNF (tumor necrosis factor) signaling pathways. Results: In a multivariable-adjusted logistic generalized additive model, elevated pathway-specific polygenic risk scores for the Kyoto Encyclopedia of Genes and Genomes VEGF pathway (39 genes) associated with protection for CAD in OSA (interaction odds ratio 0.86, P=6×10-4). By contrast, the genome-wide CAD PRS did not show evidence of statistical interaction with OSA. Conclusions: We find evidence that pathway-specific genetic risk of CAD differs between individuals with and without OSA in a qualitatively pathway-dependent manner. These results provide evidence that gene-by-environment interaction influences CAD risk in certain pathways among people with OSA, an effect that is not well-captured by the genome-wide PRS. This invites further study of how OSA interacts with genetic risk at the molecular level and suggests eventual personalization of OSA treatment to reduce CAD risk according to individual pathway-specific genetic risk profiles. abstract_id: PUBMED:30134647 Auditory Deficits in Patients With Mild and Moderate Obstructive Sleep Apnea Syndrome: A Speech Syllable Evoked Auditory Brainstem Response Study. Objectives: The energy consumption process of cochlea and neural signal transduction along the auditory pathway are highly dependent on blood oxygen supply. At present, it is under debate on whether the obstructive sleep apnea syndrome (OSAS) would affect the auditory function since the patients suffer from low oxygen saturation. Moreover, it is difficult to detect the functional state of auditory in less severe stage of OSAS. Recently, speech-evoked auditory brainstem response (speech-ABR) has been reported to be a new electrophysiological tool in characterizing the auditory dysfunction. The aim of the present study is to evaluate the auditory processes in adult patients with mild and moderate OSAS by speech-ABR. Methods: An experimental group of 31 patients with mild to moderate OSAS, and a control group without OSAS diagnosed by apnea hypopnea index in polysomnogram were recruited. All participants underwent otologic examinations and tests of pure-tone audiogram, distortion product otoacoustic emissions, click-evoked auditory brainstem response (click-ABR) and speech-ABR, respectively. Results: The results of pure-tone audiogram, distortion product otoacoustic emissions, and click-ABR in OSAS group showed no significant differences compared with the control group (P&gt;0.05). Speech-ABRs for OSAS participants and controls showed similar morphological waveforms and typical peak structures. There were significant group differences for the onset and offset transient peaks (P&lt;0.05), where OSAS group had longer latencies for peak V (6.69± 0.33 ms vs. 6.39±0.23 ms), peak C (13.48±0.30 ms vs. 13.31±0.23 ms), and peak O (48.27±0.39 ms vs. 47.60± 0.40 ms) compared to the control group. The latency of these peaks showed significant correlations with apnea hypopnea index for peak V (r=0.37, P=0.040), peak C (r=0.36, P=0.045), as well as peak O (r=0.55, P=0.001). Conclusion: These findings indicate that some auditory dysfunctions may be present in patients with mild and moderate OSAS, and the damages were aggravated with the severity of OSAS, which suggests that speech-ABR may be a potential biomarker in the diagnosis and evaluation at early stage of OSAS. abstract_id: PUBMED:28522104 Auditory behavior and auditory temporal resolution in children with sleep-disordered breathing. Objective: Intermittent hypoxia caused by obstructive sleep apnea syndrome (OSAS) may lead to damage in brain areas associated to auditory processing. The aim of this study was to compare children with OSAS or primary snoring (PS) to children without sleep-disordered breathing with regard to their performance on the Gaps-in-Noise (GIN) test and the Scale of Auditory Behaviors (SAB) questionnaire. Methods: Thirty-seven children (6-12 years old) were submitted to sleep anamnesis and in-lab night-long polysomnography. Three groups were organized according to clinical criteria: OSAS group (13 children), PS group (13 children), and control group (11 children). They were submitted to the GIN test and parents answered SAB questionnaire. The Kruskal-Wallis statistical test was used to compare the groups; p &lt; 0.05 was considered statistically significant. Results: The OSAS group performed significantly worse than PS (p = 0.011) and Control (p = 0.029) groups on gap detection percentage, while PS and Control groups showed no significant differences. The three groups showed similar gap detection thresholds. Regarding SAB questionnaire, PS group had significantly worse scores when compared to Control (p = 0.011), but not to OSAS (p = 0.101) groups. No statistical difference between OSAS and Control groups were found. Conclusion: Children with OSAS showed worse performance on GIN test in comparison to children with PS and children without sleep-disordered breathing. PS negatively affected auditory behavior in children. These findings suggest that sleep-disordered breathing may lead to auditory behavior impairment. abstract_id: PUBMED:16248502 The features of auditory brainstem response in patients with obstructive sleep apnea-hypopnea syndrome Objective: To investigate the effects and causes of obstructive sleep apnea-hypopnea syndrome (OSAHS) on auditory pathway, the changes of auditory brainstem response (ABR) were analyzed. Method: With the use of polysomnography,72 snorers were examined and divided into OSAHS and non- OSAHS groups. These two groups patients and a group of normal person were underwent ABR measuring. Result: There were statistically significant difference between the group of OSAHS and the non-OSAHS or the group of OSAHS and the normals on the delayed latency of wave I and wave V, the response liminal value of wave V rised. Conclusion: The results suggest that cochlear function may be affected in patient with OSAHS, the effected of brain stem is unknown. abstract_id: PUBMED:36292063 Obstructive Sleep Apnea and Auditory Dysfunction-Does Snoring Sound Play a Role? The objective of the study was to investigate the relationship between obstructive sleep apnea (OSA) and auditory dysfunction, and to clarify the role of snoring sounds in contributing to auditory dysfunction. A comprehensive assessment of OSA and the auditory system was performed, including overnight polysomnography, detection of the intra-ear canal snoring sound energy (SSE), pure tone average (PTA), tinnitus pitch matching, the tinnitus handicap inventory (THI), and the Epworth sleepiness scale (ESS). The patients were identified as having tinnitus if their THI score was higher than zero or their tinnitus pitches were matched to specific frequencies. The median age, body mass index, and apnea-hypopnea index score were 41 years, 26.4 kg/m2, and 29.9 events/h, respectively. Among the 50 participants, 46 (92%) had a normal PTA, and only 4 (8%) patients had mild hearing loss. There was no significant difference in PTA among OSA severities (p = 0.52). Among the 50 participants, 33 patients (66%) were identified as having tinnitus. In the tinnitus group (n = 33), the ESS score (p = 0.01) and intra-ear canal SSE of 851-1500 Hz (p = 0.04) were significantly higher than those in the non-tinnitus group (n = 17). OSA patients with a higher ESS score had a higher risk of tinnitus (odds ratio 1.22 [95% CI: 1.01-1.46]). OSA-related auditory dysfunction emerged in tinnitus rather than in hearing impairment. OSA patients with daytime sleepiness had a higher risk of tinnitus. High-frequency SSE can jeopardize cochlea and is a potential mechanism contributing to tinnitus. Detection of snoring sounds through an intra-ear canal device may be more precise in assessing acoustic trauma from snoring sounds to vulnerable auditory system and thus warrants further research. abstract_id: PUBMED:34089435 Influence of obstructive sleep apnea on auditory event-related potentials. Purpose: To evaluate the influence of obstructive sleep apnea (OSA) on the P300 response of auditory event-related potentials (ERPs) and to correlate the electrophysiological findings with OSA severity. Methods: Patients with no OSA and mild, moderate, and severe OSA according to polysomnography (PSG) with normal hearing and no comorbidities were studied. Individuals with a body mass index (BMI) ≥ 40 kg/m2, hypertension, diabetes, dyslipidemia, the use of chronic medications, and a risk of hearing loss were excluded. All patients underwent full PSG and auditory ERP measurement using the oddball paradigm with tone burst and speech stimuli. For P300 analysis (latencies and amplitudes), normal multiple linear regression models were adjusted with the groups (No OSA, Mild OSA, Moderate OSA, Severe OSA), age, BMI, and Epworth score as explanatory variables. Results: We studied 54 individuals (47 males) aged 35 ± 8 years with a BMI of 28.4 ± 4.3 kg/m2. Patients were divided according to the apnea-hypopnea index (AHI) derived from PSG into no OSA (n = 14), mild (n = 16), moderate (n = 12), and severe OSA (n = 12) groups. Patients with severe OSA presented prolonged P300 latencies with tone burst stimuli compared to patients with no OSA and those with mild and moderate OSA. Conclusion: Severe OSA is associated with impairment of the P300 response of auditory ERPs, suggesting a decrease in the processing speed of acoustic information that may be mediated by the level of somnolence. abstract_id: PUBMED:37659225 Evaluation of the auditory findings of patients with obstructive sleep apnea syndrome. Objectives: To investigate the effects of hypoxia occurring in patients with Obstructive Sleep Apnea Syndrome (OSAS) on Auditory evoked late latency, Auditory brainstem response, and the contralateral suppression of otoacoustic emissions. Materials And Methods: 46 patients diagnosed with OSAS were divided into groups as moderate and severe based on their Apnea Hipopnea Index (AHI) values. The control group consisted of 22 healthy individuals. All participants underwent an Auditory Brainstem Response (ABR) test, Auditory Evoked Late Latency Response (LLR), and Contralateral Suppression Otoacoustic Emission (CS-OAE). Findings: There was no statistical difference between the OSAS group and the control group regarding P1 latency, N1 latency, and P1 and N1 wave amplitude (p &gt; 0.05). In ABR, statistically significant differences were found between the control, moderate OSAS, and severe OSAS groups in wave I in the right and left ear (p &lt; 0.05). In the analyses performed for the otoacoustic emission frequencies with and without contralateral suppression of the right and left ear, suppression was not observed at some frequencies, and this was regarded as statistically significant (p &lt; 0.05). Conclusion: It is considered that OSAS does not have cortical effects but impacts the brainstem region and the cochlea. Bilateral impact, especially observed in wave I of ABR, is prominent on the auditory nerve. Considering that the medial olivo-cochlear (MOC) system is affected in patients with OSAS, it is thought that these patients are inadequate in suppressing noise, and this may cause various problems, particularly the inability to distinguish speech in noisy environments. abstract_id: PUBMED:29479935 The effect of obstructive sleep apnea syndrome on the central auditory system Background/aim: Obstructive sleep apnea syndrome (OSAS) is a disease characterized by repeated hypoxia attacks during sleep. The effect of hypoxia on the central nervous system is a well-known entity. In this study we aimed to investigate the effect of OSAS on the central auditory system. Materials and methods: Twenty-one OSAS patients diagnosed by polysomnography (PSG) and 10 control subjects were included in the study. After a thorough otorhinolaryngology examination, all subjects underwent pure tone audiometry (250 to 8000 Hz frequency). The subjects with normal otoscopic examination and hearing threshold were included in the study. All participants underwent speech discrimination analyses and auditory time processing and sequencing tests, i.e. frequency pattern test (FPT) and duration pattern test (DPT). Results: Although hearing was normal in the OSAS patients, significant loss was observed in the speech discrimination rates compared to the control group (P &lt; 0.05). Significant disruption was also detected in the FPT and SPT in the OSAS patients (P &lt; 0.05). Conclusion: Repeated hypoxic episodes in OSAS resulted in statistically significant impairments in the central auditory pathways, even if the hearing threshold was within normal limits. Answer: Yes, obstructive sleep apnea syndrome (OSAS) is considered a risk factor for auditory pathway dysfunction. Several studies have investigated the relationship between OSAS and auditory function, with varying results. One study found that patients with severe OSAS had significantly higher pure-tone audiogram (PTA) thresholds, lower transient evoked otoacoustic emission (TEOAE) reproducibility, lower signal-to-noise ratio, lower distortion product otoacoustic emission (DPOAE) amplitude, and prolonged latencies in brainstem auditory evoked potentials compared to a control group. These findings suggest that severe OSAS could represent a risk factor for auditory pathway dysfunction (PUBMED:21479758). Another study reported that the presence of OSAS is associated with changes in nerve conduction of acoustic stimuli in the auditory pathway in the brainstem, as evidenced by changes in the absolute latency of wave V in brainstem auditory evoked responses. However, the increase in OSAS severity did not promote worsening of responses assessed by audiometry, tympanometry, and brainstem auditory evoked response (PUBMED:28024827). Further research using speech-evoked auditory brainstem response (speech-ABR) indicated that patients with mild and moderate OSAS had longer latencies for certain peaks compared to a control group, suggesting that some auditory dysfunctions may be present in these patients and that the damages were aggravated with the severity of OSAS (PUBMED:30134647). Children with OSAS also showed worse performance on the Gaps-in-Noise (GIN) test compared to children with primary snoring and those without sleep-disordered breathing, indicating that OSAS may lead to auditory behavior impairment (PUBMED:28522104). Additionally, a study on auditory event-related potentials found that severe OSA is associated with impairment of the P300 response, suggesting a decrease in the processing speed of acoustic information (PUBMED:34089435). Overall, these studies support the notion that OSAS is a risk factor for auditory pathway dysfunction, affecting both peripheral and central auditory processing.
Instruction: Is tissue oxygen tension during esophagectomy a predictor of esophagogastric anastomotic healing? Abstracts: abstract_id: PUBMED:9587355 Is tissue oxygen tension during esophagectomy a predictor of esophagogastric anastomotic healing? Purpose: The genesis of anastomotic leakage and late stenosis of esophagogastrostomy is still unknown, although minimal blood flow and tissue hypoxia of the gastric tube are discussed as main reasons. However, the changes in tissue oxygen tension (PtO2) on esophagogastric anastomoses have not yet been evaluated in the perioperative course. Methods: Submucosal tissue oxygen tension (PtO2) was measured in 33 patients with cervical esophagogastrostomy during resection of esophageal carcinoma and reconstruction by a gastric tube. Measurements were taken close to the projected resection line and latter anastomosis using a Clark-type oxygen electrode. Results: Mean baseline PtO2 was 55.1 +/- 10.4 mmHg. Following the ligature of the vasa gastricae brevis and the left gastroepiploic artery (46.1 +/- 9.7 mmHg), the left gastric artery (34.8 +/- 9.8 mmHg), and the pull up of the gastric tube, PtO2 decreased to 25.8 +/- 9.4 mmHg. Anastomotic leakage occurred in 6 patients and late stenosis in 10 patients. During the operation there was no significant evidence of decreased PtO2 levels in these two groups. Postoperative PtO2 levels showed a significant increase in patients with anastomotic leakage. Conclusion: A disorder in oxygen consumption may cause a significant increase of PtO2 in anastomotic tissue, which is associated with anastomotic leakage. abstract_id: PUBMED:34123838 Arterial Infusion Chemotherapy for Neoplastic Esophagogastric Anastomotic Strictures After Esophagectomy. Background: Neoplastic esophagogastric anastomotic strictures after resection of esophageal cancer is a very difficult problem in clinical practice. We aim at to investigate the safety and feasibility of arterial infusion chemotherapy in treatment of neoplastic esophagogastric anastomotic strictures after esophagectomy. Methods: From October 2014 to December 2019, 50 patients with Neoplastic esophagogastric anastomotic strictures after resection of esophageal cancer were assessed retrospectively. Preoperative dysphagia was grade III in 34 cases and grade IV in 16 cases. Thirty-eight patients had different degrees of dyspnea before surgery Twenty-five patients had intolerable (grade IV) dyspnea and airway stenting was undertaken before surgery. Thirteen patients had tolerable dyspnea that did not require airway stenting, and preoperative dyspnea was grade III. Results: All patients were successfully treated with arterial infusion chemotherapy, no paraplegia or death occurred. The dysphagia grade of 50 patients after AIC was compared: one case had grade I, 40 cases had grade II, and nine cases had grade III. Thirteen patients had tolerable dyspnea that did not necessitate airway stenting. Dyspnea was classified as grade I in five cases and grade II in eight cases. After 1-3 courses of AIC, 50 patients were followed up for a complete response (eight cases), partial response (28) and stable disease (14 cases). Total objective effective rate (complete response+ partial response) and disease control rate(complete response + partial response + stable disease)were 72.0% and 100.0%, respectively. The median duration of follow-up was 8.5 months. One-year survival was 46.0%. Conclusion: Arterial infusion chemotherapy is safe and efficacious treatment for Neoplastic esophagogastric anastomotic strictures after esophagectomy. abstract_id: PUBMED:10466049 Esophagogastric anastomotic wound healing in rats. Esophagectomy with esophagogastric anastomosis is commonly complicated by anastomotic dehiscence. Although this is a major problem in clinical esophageal surgery, laboratory investigation of esophagogastric anastomotic wound healing has been hampered by the lack of a practical rodent model. Problems with aspiration pneumonia and anastomotic strictures hindered our previous studies in the rat. Other researchers have turned to large animal experiments, or used various upper gastrointestinal pseudoanastomotic techniques in rodents. None of these approaches has proved satisfactory. We developed a technique of side-to-side esophagogastric anastomosis in the rat, and then studied normal esophagogastric anastomotic healing in this model. Anastomoses were performed in 24 rats. Anastomotic breaking strength and hydroxyproline concentration were measured 1 and 2 weeks after surgery. Anastomotic breaking strength was 3.78 +/- 1.18 N at 1 week and 4.83 +/- 0.91 N after 2 weeks (p &lt; 0.03). Anastomotic tissue hydroxyproline concentration was 370.6 +/- 31.2 nmol/mg at 1 week and 462.1 +/- 69.7 nmol/mg after 2 weeks (p &lt; 0.001). Many of the problems encountered in esophagogastric anastomotic studies in the rat have been overcome using this new model. abstract_id: PUBMED:14994746 The effect of tension on esophagogastric anastomotic wound healing in rats. Aim: The effect of tension on the occurrence of esophagogastric anastomotic leaks has been controversial for along time. We used a rat model and investigated the effect of tension on the esophagogastric anastomotic wound healing. Methods: Eighty Sprague-Dawley rats were divided into 8 groups and each group has 10 rats. A side-to-side single layer esophagogastric anastomosis was constructed using interrupted sutures through laparotomy. Anastomoses were excised on day 0, 3, 5, 7, 14, mounted in an Instronmini-44 Tension meter and distracted at 10 mm/min to measure breaking strength. On the other hand, the increased anastomotic tensions were obtained by increasing the distance between esophageal incision and gastrotomy, and anastomotic leak rates in different anastomotic tension groups were investigated. Results: A total of 11 rats were excluded from the studies because of pulmonary complication. Breaking strength on day 0, 3, 5, 7, 14 were 3.35+/-0.89, 1.18+/-0.31, 1.92+/-0.46,3.47+/-0.83, 5.06+/-1.22 (Newton), respectively. The breaking strength is significantly different between day 0 and day 3, day 5 and day 7, day 7 and day 14 (p&lt;0.05). There is no difference between day 3 and day 5, day 0 and day 7(p&gt;0.05). Breaking strength is lowest on day 3 and close to culmination on day 14. Esophagogastric anastomotic leaks increased along with anastomotic tension increasing. Conclusion: Anastomotic tension could be a key factor resulting in esophagogastric anastomotic leaks. abstract_id: PUBMED:3677962 Tissue oxygen tension as a predictor of colonic anastomotic healing. Inadequate blood flow causing tissue hypoxia can result in failure of anastomotic healing. Tissue oxygen tension (ptO2) measurement has been used to predict anastomotic leakage in animals, but its use in humans has not been described previously. A Clark-type oxygen electrode was used to measure ptO2 on the colon of 50 patients undergoing colonic resection and anastomosis. Baseline ptO2 levels were lowest on the descending colon (31.8 +/- 7.4 mmHg, mean +/- SD) and tended to increase at all sites with increasing arterial paO2 (r greater than .76, P less than .001). Perianastomotic ptO2 levels were predictive of subsequent anastomotic leakage when they were less than either 20 mmHg; 50 percent of the preresection ptO2; 15 percent of the arterial paO2; or 40 percent of the ptO2 at a control site (ileum). It is concluded that perioperative ptO2 measurements are of value in the prediction of anastomotic leakage. abstract_id: PUBMED:34270974 A New Esophagogastric Anastomosis for McKeown Esophagectomy in Esophageal Cancer. For McKeown esophagectomy, hand-sewn and mechanical esophagogastric anastomosis techniques have been improved for constructing esophagogastrostomy. However, postoperative anastomosis-related complication rates remain high in patients undergoing cervical anastomosis. Here, we report an original and reliable hand-sewn cervical tunnel esophagogastric anastomosis technique to maximally reduce cervical anastomotic leakage and stricture rates after McKeown esophagectomy. The key features and innovations of cervical tunnel esophagogastric anastomosis are the right gastroepiploic artery as the center for the esophagogastric anastomosis to reduce ischemia, sufficient width of the anastomotic site for anastomosis without stricture, enfolding of the anastomotic site by the tunnel, and tension- and rotation-free anastomosis. abstract_id: PUBMED:27541591 Hybrid Ivor Lewis Esophagectomy for Esophageal Cancer. The last 25 years have witnessed a steady increase in the use of minimally invasive esophagectomy for the treatment of esophageal cancer. However, it is unclear which the optimal minimally invasive approach is: totally minimally invasive or hybrid (laparoscopic assisted or thoracoscopic assisted)? The current evidence from nonrandomized control trials suggests that hybrid laparoscopic-assisted esophagectomy couples the benefits of laparoscopy and the advantages of thoracotomy, leading to reduced surgical trauma without jeopardizing survival compared with open esophagectomy. Compromised blood supply and tension on the anastomosis are two of the main factors that lead to anastomotic leakage. Recent studies have shown that a side-to-side mechanical intrathoracic esophagogastric anastomosis is associated with low anastomotic complications. This article discusses surgical aspects and outcomes of hybrid laparoscopic-assisted esophagectomy for esophageal cancer. abstract_id: PUBMED:31986361 Systematic Review on the Influence of Tissue Oxygenation on Gut Microbiota and Anastomotic Healing. Background: Anastomotic leak rates have not improved over several decades despite improvements in surgical techniques and patient care. The gut microbiome has been implicated in the development of leaks. The exact mechanisms by which tissue oxygenation affects gut microbial composition and anastomotic healing physiology are unclear. Also, commonly used carbon dioxide (CO2) is a known vasodilator that improves tissue oxygen tension. We performed a systematic review to determine the influence of hyperoxia, hypoxia, and hypercapnia on the gut microbiome and anastomotic healing. Methods: A literature search was performed in MEDLINE, EMBASE, and COCHRANE to identify studies investigating the effects of hyperoxia, hypoxia, and hypercapnia on anastomotic healing and gut microbiota published between 1998 and 2018. Two reviewers screened the articles for eligibility and quality. Fifty-three articles underwent full text review, and a narrative synthesis was undertaken. Results: Hyperoxia is associated with better anastomotic healing, increased gastrointestinal oxygen tension, and may reduce gut anaerobes. Hypoxia is associated with poor healing and increased gut anaerobes. However, it is unclear if hypoxia is the most important predictor of anastomotic leaks. Low pressure CO2 pneumoperitoneum and mild systemic hypercapnia are both associated with increased gastrointestinal oxygen tension and may improve anastomotic healing. We found no studies which investigated the effect of hypercapnia on gut microbiota in the context of anastomotic healing. Conclusions: Tissue oxygenation influences gut anastomotic healing, but little evidence exists to demonstrate the influence on the gut microbiome in the context of healing. Further studies are needed to determine if anastomotic microbiome changes with altered tissue oxygenation and if this affects healing and leak rates. If confirmed, altering tissue oxygenation through hyperoxia or hypercapnia could be feasible means of altering the microbiome such that anastomotic leak rates reduce. abstract_id: PUBMED:36417036 Which Anastomotic Techniques Is the Best Choice for Cervical Esophagogastric Anastomosis in Esophagectomy? A Bayesian Network Meta-Analysis. Introduction: The optimal choice of anastomotic techniques for cervical esophagogastric anastomosis in esophagectomy remains unclear. Methods: An electronic literature search of PubMed, Embase, and Web of Science (data up to April 2022) was conducted and screened to compare hand sewn (HS), circular stapling (CS), side-to-side linear stapling (LS), and triangulating stapling (TS) for cervical esophagogastric anastomosis. Anastomotic leak, pulmonary complications, anastomotic stricture, and reflux esophagitis of the 4 anastomotic techniques were evaluated using a Bayesian network meta-analysis by R. Result: Twenty-nine studies were ultimately included, with a total of 5,020 patients from 9 randomized controlled trials, 7 prospect cohort studies, and 13 retrospective case-control studies in the meta-analysis. The present study demonstrates that the incidence of anastomotic leakage is lower in TS than HS and CS (TS vs. HS: odds ratio (OR) = 0.32, 95% CI: 0.1 to 0.9; TS vs. CS: OR = 0.37, 95% CI: 0.13 to 1.0), and the incidence of anastomotic stricture is lower in TS than in HS and CS (TS vs. HS: OR = 0.32, 95% CI: 0.11 to 0.86; TS vs. CS: OR = 0.23, 95% CI: 0.08 to 0.58). TS ranks best in terms of anastomotic leakage, pulmonary complication, anastomotic stricture, and reflux esophagitis. Conclusion: TS for cervical esophagogastric anastomosis of esophagectomy had a lower incidence of anastomotic leakage and stricture. TS should be preferentially recommended. Large-scale RCTs will be needed to provide more evidence in future studies. abstract_id: PUBMED:32650259 T-drain esophagostomy under thoracoscopy for intrathoracic esophagogastric anastomotic leakage following esophagectomy for esophagogastric junction cancer: A case report. Introduction: Intrathoracic esophagogastric anastomotic leakage is considered the most severe complication. We successfully performed T-drain esophagostomy under thoracoscopy for intrathoracic esophagogastric anastomotic leakage. Presentation Of Case: A 44-year-old man visited a local hospital because of vomiting during the night. Upon examination, the patient was diagnosed with c-T2N0M0 stage II adenocarcinoma in Barrett's esophagus. We performed laparoscopic proximal gastrectomy and lower esophagectomy and gastric conduit reconstruction using the posterior mediastinal route with intrathoracic anastomosis under thoracoscopy. The patient developed fever, chest pain and dyspnea on postoperative day 5. We diagnosed anastomotic leakage and performed reoperation via thoracoscopy. The perforation, which was approximately 8 mm in length, was found on the back side of the esophagogastric anastomosis. There was no clear finding of necrosis in the gastric tube or the esophagus. After sufficiently deterging the thoracic cavity, a T-drain was inserted through the perforation and fixed. After fistula formation, the T-drain was slowly phased out. The postoperative course was uneventful. Discussion: It is important to note that early treatment of severe leaks is mandatory to limit related mortality. However, current therapies for treating anastomotic leakage are still inefficient and controversial. Conclusion: T-drain esophagostomy under thoracoscopy for intrathoracic esophagogastric anastomotic leakage could be minimally invasive and effective. Answer: Tissue oxygen tension (PtO2) during esophagectomy has been investigated as a potential predictor of esophagogastric anastomotic healing. A study measuring submucosal tissue oxygen tension in patients undergoing cervical esophagogastrostomy found that mean baseline PtO2 decreased significantly following the ligature of the vasa gastricae brevis, the left gastroepiploic artery, and the left gastric artery, and further decreased after the pull-up of the gastric tube. However, during the operation, there was no significant evidence of decreased PtO2 levels in patients who later developed anastomotic leakage or late stenosis. Interestingly, postoperative PtO2 levels showed a significant increase in patients with anastomotic leakage, suggesting that a disorder in oxygen consumption may cause a significant increase of PtO2 in anastomotic tissue, which is associated with anastomotic leakage (PUBMED:9587355). In contrast, another study on colonic anastomotic healing found that perioperative PtO2 measurements were predictive of subsequent anastomotic leakage when they were below certain thresholds, indicating that inadequate blood flow and tissue hypoxia can result in failure of anastomotic healing (PUBMED:3677962). This suggests that while PtO2 may be a predictor of anastomotic healing in colonic surgery, its role in esophagogastric anastomoses is less clear. A systematic review also highlighted that tissue oxygenation influences gut anastomotic healing, although the exact mechanisms by which it affects gut microbial composition and anastomotic healing physiology are unclear (PUBMED:31986361). Overall, while tissue oxygen tension is a factor in anastomotic healing, its predictive value specifically for esophagogastric anastomotic healing during esophagectomy is not definitively established and may be influenced by other factors such as oxygen consumption disorders and changes in the gut microbiome. Further research is needed to clarify the relationship between tissue oxygen tension and esophagogastric anastomotic healing.