input
stringlengths
6.82k
29k
Instruction: Is there a case for a specialist forensic psychiatry service for the elderly? Abstracts: abstract_id: PUBMED:16683463 An audit of a specialist old age psychiatry liaison service to a medium and a high secure forensic psychiatry unit. The elderly prison population is increasing and there is a significant amount of unidentified psychiatric morbidity among elderly prisoners. A sizeable number of elderly subjects are referred to regional forensic units. These units are able to provide advice but are reluctant to admit frail, physically ill and vulnerable elderly into their unit because the ward environment is considered inappropriate for them. A sizeable number of inpatients in medium and high secure units are elderly. This paper is an exploratory audit of referrals from a medium and high secure forensic psychiatry unit to a specialist consultation-only liaison old age psychiatry service, which was specifically developed to service the forensic unit. The demographic and clinical characteristics of the patients seen by this specialist service were similar to elderly inpatients in medium and high secure units. All referrals were judged to be appropriate and new management advice was provided in all cases. Main reasons for referral included diagnostic advice, placement advice and treatment advice. Establishing a diagnosis of dementia was considered important because Alzheimer's disease and Lewy body dementia can be treated with cholinesterase inhibitors. Placement advice was the most difficult to provide because of paucity of residential and nursing homes able and willing to accept patients with a forensic history. There is a need for a comprehensive model of specialist forensic old age psychiatry service at a regional or supraregional level. A consensus needs to be reached on the exact configuration of such a service. abstract_id: PUBMED:15578672 Is there a case for a specialist forensic psychiatry service for the elderly? Background: The number of elderly offenders in England and Wales is increasing. There is, therefore, a concern that their needs may not be met by existing forensic services. However, there is a paucity of information on elderly patients referred to existing forensic psychiatric units. Method: Data on patients over the age of 65 years referred to a large medium secure forensic psychiatric unit in London were collected for a 13-year period using a retrospective design. The sample was divided into those who had first offended before the age of 65 and those who had offended after the age of 65. Data was also collected on victims of the offences. Results: 5477 referrals were made during the study period. Those aged over 65 years accounted for 78 (1.4%) of all referrals. These 78 referrals were for 55 patients. Forty-five of these had offended. Case notes of 42 patients from this group were screened. Sexual and violent offences accounted respectively for 20 (47%) and 15 (36%) of offences. 31% had no psychiatric disorder but organic disorders accounted for 21% of cases. Only eight (19%) required admission to the medium secure unit. Fourteen (33%) had first offended after the age of 65 while others were known to either the forensic services or criminal justice system before the age of 65. The two groups did not differ from each other. Conclusions: The elderly accounted for very few referrals to the medium secure forensic service, yet there is a high prevalence of psychiatric morbidity in both remand and sentenced elderly prisoners. Therefore, elderly offenders with psychiatric morbidity may benefit from specialist old age psychiatric forensic services, perhaps at a supraregional level. abstract_id: PUBMED:23015386 Specialist training in forensic psychiatry: the UK experience. Background: Training pathways and structures in forensic psychiatry differ between European Union countries, and perspective may differ between trainees and trainers. Aim: To describe the context of forensic psychiatry training in the UK, with a particular emphasis on England and Wales, as well as some details of core psychiatry and specialist forensic training. Information And Discussion: Forensic psychiatry in the UK takes place in the context of general medical training guidance and regulation. In the last decade, it has changed to be in line with the European working directive and to take account of opinions from outside medicine. The specialist component is typically during the last 3 years of post-graduate training and is based on an apprenticeship/competency model. abstract_id: PUBMED:15895642 Is there a need for elderly forensic psychiatric services? The population of the elderly in most developed nations is on the increase. Furthermore, the prevalence of mental disorder amongst elderly offenders is high. The true extent of 'elderly' crime is unknown because much of it goes undetected and unreported. This leads to a failure to detect mental illness in such offenders. Court diversion schemes may improve recognition of mental illness but these schemes usually tend to deal with the more severe crimes. This may result in an overestimation of the amount of serious crime committed by the elderly and a failure to detect mental illness amongst those who commit less serious crimes. Efforts to service this hidden morbidity call for multi-agency collaboration. Improved detection and reporting of crimes is essential if mental health difficulties in the elderly are not to go unnoticed. The needs of elderly mentally-disordered offenders are complex and fall within the expertise of old age and forensic psychiatry, without being adequately met by either one. Therefore, consideration should be given to the development of a tertiary specialist forensic old-age psychiatry service. abstract_id: PUBMED:12072127 Community forensic psychiatry: restoring some sanity to forensic psychiatric rehabilitation. Objective: To review clinical and legal paradigms of community forensic mental health care, with specific focus on New Zealand, and to develop a clinically based set of guiding principles for service development in this area. Method: The general principles of rehabilitating mentally disordered offenders, and assertive community care programmes were reviewed and applied to the law and policy in a New Zealand forensic mental health setting. Results: There is a need to develop comprehensive community treatment programmes for mentally disordered offenders. The limited available research supports assertive community treatment models, with specialist forensic input. Ten clinically based principles of care provision important to forensic mental health assertive community treatment were developed. Conclusion: Deinstitutionalization in forensic psychiatry lags behind the rest of psychiatry, but can only occur with well-supported systems in place to assess and manage risk in the community setting. The development of community-based forensic rehabilitation services in conjunction with general mental health is indicated. abstract_id: PUBMED:33561777 Establishing comprehensive forensic mental health services in Qatar. Qatar has experienced exponential economic and healthcare infrastructural development recently. Mental health has been prioritized for development by the state with the launching of the ambitious National Mental Health Strategy in 2013 which incorporates the development of specialist Forensic psychiatry services. Traditionally, some aspects of forensic psychiatry care were provided under general psychiatry, being largely reactive. The new strategy supported the development of specialist Forensic service with a focus on developing safer communities, preventing victimization and supporting the criminal justice system. We provide an overview of the development of this service in Qatar, one of the first of its kind in the region. abstract_id: PUBMED:38347724 The impact of COVID-19 on clinical forensic medicine and forensic psychiatry: A literature review. Purpose:The COVID-19 pandemic prompted significant challenges in clinical forensic medicine and forensic psychiatry and emphasised the need for sustainable service delivery. Both fields faced difficulties in maintaining services due to COVID-19 precautions and limited referrals. This review examined the changes in forensic practices, aiming to understand their impact and learn from them to inform future strategies. Methods: A search was conducted across Ovid Medline, Ovid Embase, Ovid Emcare and PubMed, and webpages of governments and other organisations. Studies were included if they assessed any change that occurred in clinical forensic medicine or forensic psychiatry during the pandemic. A narrative review with a systematic approach was undertaken. Results: A total of 27 articles were included. There was a notable decrease in the volume of forensic assessments in early 2020. The numbers gradually rebounded with the easing of restrictions yet have not fully returned to pre COVID-19 levels. Telemedicine and COVID-19 precautions were widely incorporated into forensic services, which were seen to disrupt the patient-doctor dynamics, restrict the setting of examinations and complicate work processes and sample handling steps. Conclusion: This is the first review that describes the pandemic-driven changes in clinical forensic medicine and forensic psychiatry in respect of case trends and incorporation of COVID-19 precautions. The pandemic emphasised the need for adaptability and adoption of new assessment approaches in forensic services. Technology, like telemedicine, emerged as a valuable tool, enhancing accessibility for individuals in remote or high-risk areas. It is necessary to review the changes and re-evaluate their impacts in the post COVID-19 era. abstract_id: PUBMED:7651108 Forensic psychiatry and elderly people--a retrospective review. Elderly offenders have rarely been studied. We describe a population of elderly offenders referred to a forensic psychiatric service over a 20-year period (1972-1992). Patients over 65 years charged with a criminal offence were eligible for inclusion. For the purpose of the study they were divided into four study groups. Demographic data, forensic and psychiatric history, alcohol abuse, charge and psychiatric diagnosis are reported. The possible reasons for the high proportion of sex offences are discussed. Difficulties were encountered in the transfer of longstay elderly offenders back to their local area. Finally, we recommend that most of the elderly patients referred to the forensic service for treatment would best be cared for in their local catchment area hospital rather than in a high security hospital. abstract_id: PUBMED:30368464 The Barriers and Benefits to Developing Forensic Rotations for Psychiatry Residents. Psychiatry residents' experiences in forensic psychiatry vary greatly across the country, and many psychiatry programs meet the Accreditation Council for Graduate Medical Education requirements for a forensic experience through general psychiatry rotations (e.g., on a consult-liaison service) or classroom-based activities. Forensic clinical experiences during psychiatry residency are important for preparing future general psychiatrists for practice with justice-involved patients, generating interest in forensic psychiatry, and easing the transition from "healer to evaluator" for future fellows. Unfortunately, residency programs interested in expanding their forensic training may face many challenges because of current regulatory frameworks, the nature of forensic practice, and competing demands within residency training programs. This article describes these challenges, and the experience of the authors at one institution with developing a novel forensic experience in a criminal justice diversion setting. The authors conclude with some practical considerations for educators interested in developing forensic experiences at their institutions. abstract_id: PUBMED:32434610 Forensic psychiatry and Covid-19: accelerating transformation in forensic psychiatry. Swift medically led scientifically informed responses to the Covid-19 epidemic nationally have been demonstrably superior to other, non-scientific approaches. In forensic psychiatry and across all psychiatric services, urgent and clinically led responses have underlined redundancies and confusions in the governance of mental health services and a vacuum in policy makers. For the future, a greater emphasis on services for patients with schizophrenia and other severe, enduring mental disorders must aim at reducing standardised mortality ratios, managing risk of violence and improving hard outcomes such as symptomatic remission, functional recovery and forensic recovery of autonomy. This will require more use of information technology at service level and at national level where Scandinavian-style population-based data linkage research must now become legally sanctioned and necessary. A national research and development centre for medical excellence in forensic psychiatry is urgently required and is complimentary to and different from quality management. Answer: Yes, there is a case for a specialist forensic psychiatry service for the elderly. The elderly prison population is increasing, and there is significant unidentified psychiatric morbidity among elderly prisoners (PUBMED:16683463). The number of elderly offenders in England and Wales is also on the rise, and their needs may not be met by existing forensic services, as there is a high prevalence of psychiatric morbidity in both remand and sentenced elderly prisoners (PUBMED:15578672). Furthermore, the population of the elderly in most developed nations is increasing, and the prevalence of mental disorder among elderly offenders is high, indicating a need for services that can address the complex needs of elderly mentally-disordered offenders (PUBMED:15895642). The existing forensic psychiatric units have seen a small percentage of referrals for the elderly, but the complexity of the cases, including the need for diagnostic advice, placement advice, and treatment advice, suggests that a specialist service could be beneficial (PUBMED:15578672). Establishing a diagnosis of dementia, for example, is important because conditions like Alzheimer's disease and Lewy body dementia can be treated with cholinesterase inhibitors (PUBMED:16683463). However, providing placement advice is challenging due to the scarcity of residential and nursing homes willing to accept patients with a forensic history (PUBMED:16683463). The need for specialist training in forensic psychiatry, as seen in the UK experience, also supports the idea that a specialized service could improve the management of elderly offenders with psychiatric conditions (PUBMED:23015386). Additionally, the development of comprehensive forensic mental health services in places like Qatar, which includes specialist forensic psychiatry services, reflects a growing recognition of the need for such services (PUBMED:33561777). In conclusion, the evidence suggests that there is a case for developing specialist forensic psychiatry services for the elderly to better meet their unique needs and improve outcomes for this vulnerable population.
Instruction: Intramedullary femoral nailing in sheep: does severe injury predispose to pulmonary dysfunction? Abstracts: abstract_id: PUBMED:26730068 The Intramedullary Nailing of Adult Femoral Shaft Fracture by the Way of Open Reduction is a Disadvantage or Not? The purpose of this study is that to evaluate superiority and results of open technique in the treatment of femoral shaft fracture with interlocking intramedullary nailing. The retrospective study is designed to evaluate results of our technique. In this study, the patients that were admitted to the Orthopedics and Traumatology Department of University Hospital that is third level of trauma center. We claim that open technique is not a disadvantage during union process over closed technique in treatment of femoral shaft fracture with interlocking nailing. In this study, 44 patients that were consulted for adult femoral shaft fracture between January 2008 to July 2010 were included. Patients with open fractures, gunshot wounds, neurovascular injuries, and patients that did not have isolated femoral diaphysis fractures were excluded from the study. Clinical and radiological results of the patients were checked periodically. The open interlocking intramedullary nailing was used in treatment. Complete union rate was 90.9 % in 40 patients who were treated with open interlocking intramedullary nailing for adult femoral shaft fracture, and nonunion rate was 9.1 % in four patients. Mean union time was 18.3 weeks (12-36 weeks). Evaluation of 44 patients according to Thoresen criteria was excellent in 22 patients, good in 6 patients, and bad in 4 patients. We think that open technique is an acceptable technique because all results of our study were similar to results of closed intramedullary nailing technique in literature and some advantages of open technique over closed technique. abstract_id: PUBMED:7599294 Intramedullary femoral nailing in sheep: does severe injury predispose to pulmonary dysfunction? Objective: To find out if intramedullary nailing affects lung function and microvascular permeability whether or not the lung is already injured; if so whether a different method of fixation would diminish the effect; and are the pathogenetic changes related to mechanisms known to precipitate adult respiratory distress syndrome? Design: Experimental study. Setting: University hospital, Germany. Material: 29 Adult female merino sheep. Interventions: Chronic lung lymph fistulas were created. Two of the three groups (1 and 3) underwent right sided lung contusion and haemorrhage to a mean blood pressure of 50 mm Hg for 2 hours. On day 3 groups 1 and 2 underwent intramedullary nailing, and group 3 had external fixators applied. Main Outcome Measures: Pulmonary arterial pressure, concentrations of triglycerides, chemiluminescence of isolated polymorphonuclear leucocytes (PMN), lymph flow, microvascular pressure, filtration coefficient, and permeability. Results: Intramedullary nailing caused a transient significant increase in pulmonary arterial pressure and triglycerides in groups 1 and 2. Chemiluminescence of isolated PMN decreased in group 1 and increased in group 2. Lymph flow increased 2.5 times in group 1 while microvascular pressure decreased; in group 2 the increase was less but microvascular pressure increased. Filtration coefficient in group 1 was five times that of group 2 and there was a twofold increase in permeability. There were no changes in group 3. Conclusion: Intramedullary nailing causes additional damage to lungs after lung damage and haemorrhagic shock in sheep. This can be avoided if (in the presence of additional injuries) alternative methods are used. abstract_id: PUBMED:6869667 Early intramedullary nailing of femoral shaft fractures: a cause of fat embolism syndrome. Results of our retrospective review of 100 consecutive patients show that early intramedullary nailing can be accomplished in severely injured patients without increasing the risk of fat embolism syndrome. No cases of fat embolism syndrome were seen after immediate (less than 24 hours) intramedullary nailing of femoral shaft fractures. In contrast, five cases of fat embolism syndrome were found in the patients treated in the traditional manner with initial balanced skeletal traction and delayed intramedullary nailing. There was a trend toward more pulmonary complications in the immediate group. This is attributed to the greater severity of injury present in the patients selected for immediate intramedullary nailing (injury severity scores 23.2 in immediate group; 12.4 in delayed group). The incidence of critical hypoxemia in the immediate group was equivalent to that in another group of injured patients who did not have fractures, but who did have similar injury severity scores. abstract_id: PUBMED:19411641 Intramedullary nailing of femoral shaft fractures: current concepts. Intramedullary nailing is the preferred method for treating fractures of the femoral shaft. The piriformis fossa and greater trochanter are viable starting points for antegrade nailing. Alternatively, retrograde nailing may be performed. Each option has relative advantages, disadvantages, and indications. Patient positioning can affect the relative ease of intramedullary nailing and the incidence of malalignment. The timing of femoral intramedullary nailing as well as the use of reaming must be tailored to each patient to avoid systemic complications. Associated comorbidities, the body habitus, and associated injuries should be considered when determining the starting point, optimal patient positioning for nailing, whether to use reduction aids as well as which to use, and any modifications of standard technique. Intramedullary nailing of diaphyseal femur fractures provides a stable fixation construct that can be applied using indirect reduction techniques. This method yields high union rates and low complication rates when vigilance is maintained during preoperative planning, the surgical procedure, and the postoperative period. abstract_id: PUBMED:31058233 Clinical outcomes of intramedullary nailing of femoral shaft fractures with third fragments: a retrospective analysis of risk factors for delayed union. Background: This study retrospectively evaluated the clinical outcomes of intramedullary nailing of femoral shaft fractures with third fragments and analyzed the risk factors for delayed union. Methods: Retrospective analyses involving 51 patients who underwent intramedullary nailing of femoral shaft fractures with third fragments (AO classification type B, 35 cases; type C, 16 cases) were conducted. Delayed union was defined as either more than 10 months required for callus formation in more than three of the four cortical bone surfaces observed in the frontal and lateral radiographic views or the requirement for additional surgery such as nail conversion or bone transplantation. Seventeen patients developed delayed union (D group). Thirty-four patients achieved bony union within 9 months (U group). The following background variables were compared between groups: age at the time of the injury; AO classification; ratio of open fracture; waiting period before surgery; rate of the infraisthmal fracture; diameter of the intramedullary nail; ratio of the intramedullary nail to the femur; length and displacement of the third fragment; and use of open reduction, poller screws, or dynamization. Results: Significant differences were found between the D and U groups for age (32.2±14.1 vs. 25.3±9.6 years), open fracture ratio (35.3% vs. 11.8%), and displacement of the third fragment (13.7±6.4 vs. 9±6.3 mm). Multiple logistic regression analysis only identified displacement of the third fragment as a risk factor for delayed union (p=0.03; OR 1.13; 95% CI 1.01 to 1.26). Discussion: Delayed union was observed in 17 cases (33.3%) after intramedullary nailing of femoral shaft fractures with third fragments. Displacement of the third fragment influenced delayed union. Level Of Evidence: Level III. abstract_id: PUBMED:35794570 Aseptic femoral nonunion treated with exchange locked nailing with intramedullary augmentation cancellous bone graft. Background: Closed reamed locked intramedullary nailing has been the treatment of choice for most of femoral shaft fractures. A high union rate with a low complication rate is generally predictable. For an aseptic femoral shaft nonunion with a prior inserted intramedullary nail, exchange nailing is one of favored surgical techniques for treatment. However, a greatly varied success rate of 72-100% has been reported. To improve the success rate of exchange femur nailing, a modified bone grafting technique was developed. The purpose of this retrospective study intended to evaluate outcomes of such a revised technique. Methods: From July 2011 to March 2019, 48 consecutive adult patients (average, 38 years; range, 19-67 years) with aseptic femoral shaft nonunions after intramedullary nailing treatment were studied. All femoral shaft fractures were initially caused by traffic accidents, which were treated by a closed or open intramedullary nailing technique at various hospitals. The current revision treatment was performed after an average of 2.2 years (range 1.1-6.2 years) from initial injuries. In the surgery, the prior nail was removed and the marrow cavity was reamed widely (at least 2 mm as possible). Sufficient cancellous bone grafts harvested on the trochanteric marrow wall from the inside were placed in the marrow cavity of the junction of nonunion fragments. A new 1-mm smaller size locked intramedullary nail was inserted. Whether the dynamic or static mode of nails were used mainly depended on the nonunion level. Postoperatively, protected weight bearing with crutches was allowed for all patients. Results: Forty-one patients were followed for an average of 2.8 years (85.4%; range, 1.9-4.5 years) and all fractures healed. The union rate was 100% (41/41, p < 0.001) with a union time of an average of 3.4 months (range, 2.5-5.0 months). There were no complications of deep infection, nonunions, malunions, implant failures or an avulsed trochanter tip fracture. The satisfactory knee function improved from 73.2% (30/41) preoperatively to 92.7% (38/41) at the latest follow-up (p = 0.019). Conclusions: The described modified bone grafting technique may effectively improve a union rate of exchange femur nailing while the surgical procedure is not complicated. It may therefore be used concomitantly in all aseptic femoral shaft nonunions when exchange nailing is performed. abstract_id: PUBMED:34645566 Initial definitive treatment of open femoral shaft fractures with retrograde nailing - is it safe? A retrospective analysis comparing antegrade to retrograde nailing. Introduction: Treatment of severe open femoral fractures with retrograde intramedullary nailing, raises concerns of septic arthritis of the knee due to its intra-articular entrance point. There is little evidence concerning the safety of retrograde femoral nailing (RFN) usage in these cases, and what evidence there is does not necessarily include severe open fractures. Furthermore, the outcome of ballistic injuries treated in this manner and a comparison with antegrade femoral nailing (AFN), is yet to be established. The aim of this study was to examine the outcome of primary retrograde femoral nailing in high-grade open fractures and compare our results with similar fractures treated with AFN. Patients And Methods: A retrospective analysis of treated high-grade open femoral fractures, was performed at an academic level one trauma centre between 2006 and 2016. Included were all open femur shaft fractures treated with either RFN or AFN as primary fixation. We analysed both patient groups and compared outcomes including the infection rate, non-union rate and secondary interventions. Presence of knee pain and long-term function, was evaluated using the Short-Form 12 (SF-12) questionnaire. Results: Forty-nine patients with open femoral shaft fractures whose primary treatment was intramedullary nailing (IMN), were included in the cohort. Thirty patients were treated with RFN and 19 with AFN. No septic arthritis of the knee was detected in either group. One patient in the RFN group and two in the AFN had a deep surgical wound infection. Excluding reoperations for locking screw removal due to local irritation, the reoperation rate was 22% with two patients treated for non-union in the RFN group and two in the AFN. When comparing outcome measures, including SF-12 scores and the reported knee pain rates of AFN and RFN groups, no significant differences were observed. Conclusion: Our study showed that retrograde femoral nailing as initial definitive treatment in high-grade open femoral shaft fractures, resulted neither in septic arthritis of the knee nor in an unacceptable infection rate. General outcomes regarding complications and reoperations, was similar to antegrade femoral nailing performed in our centre for similar injuries. abstract_id: PUBMED:31824873 Characteristics and Surgical Outcomes of Intertrochanteric or Subtrochanteric Fractures Associated with Ipsilateral Femoral Shaft Fractures Treated with Closed Intramedullary Nailing: A Review of 31 Consecutive Cases over Four Years at a Single Institution. Purpose: To evaluate the clinical characteristics of intertrochanteric or subtrochanteric fractures associated with ipsilateral femoral shaft fractures and assess the surgical outcomes of a novel, closed intramedullary nailing surgical approach designed to minimize fixation failure. Materials And Methods: Between May 2013 and April 2017, 31 patients with intertrochanteric or subtrochanteric fractures associated with ipsilateral femoral shaft fractures treated with closed intramedullary nailing or long proximal femoral nail antirotation (PFNA) were enrolled in this study. Preoperative data included age, sex, injury severity score, body mass index, location of shaft fracture, injury mechanism, accompanying traumatic injury, walking ability before injury, and surgical timing. Perioperative outcomes, including follow-up period, types of intramedullary nails, number of blocking screws used, operation time, and blood loss were assessed. Radiologic outcomes, including union rate, time from surgery to union, and femoral shortening, and clinical outcomes, including hip flexion, walking ability, and Harris hip score were also evaluated. Results: A total of 29 unions (93.5%) were achieved. The time to union was 16.8 months (range, 11-25 months) for hip fractures (15.7 weeks for intertrochanteric fractures and 21.7 weeks for subtrochanteric fractures) and 22.8 months for femoral shaft fractures. There were no significant differences in surgical outcomes between the two groups except for type of intramedullary nail. Conclusion: Closed intramedullary nailing in the treatment of intertrochanteric or subtrochanteric fractures associated with ipsilateral femoral shaft fractures may be a good surgical option. However, fixation of femoral shaft fractures might not be sufficient depending on the implant design. abstract_id: PUBMED:20954660 Treatment strategies for intramedullary nailing of femoral shaft fractures. Intramedullary nailing has become the gold standard to treat femoral shaft fractures. It is unknown which nailing technique orthopedic surgeons prefer. The goal of this study was to determine current techniques and perioperative complications of intramedullary nailing of diaphyseal femoral fractures. Fifty-one institutions in 26 countries participated in an international survey to assess detailed descriptions of preferred operative strategies and perioperative complications. Altogether, 517 cases of diaphyseal femoral fractures were collected. The Internet-based survey incorporated information about fracture classification, time to operation, Injury Severity Score, type of nail, and operative technique, as well as perioperative complications such as infection, femoral neck fracture, and hardware failure. The preferred position for implantation was supine (91.1%). Most surgeons used a traction table (57.1%) and an antegrade implantation technique (84.5%). Intraoperative fractures of the femoral neck occurred in 1.2% of cases when a traction table was used and in 0.2% if no traction table was used, but without statistical significance (P>.16). In 59.2% of the cases, an isolated femur fracture was present, while the rest sustained multiple injuries. In polytrauma patients and patients with severe thorax injuries, most surgeons chose a delayed treatment with intramedullary femoral nails. Interestingly, 38.0% of the patients with severe thorax injuries were treated on the first day with intramedullary femoral nails. The total rate of complications for intramedullary femoral nailing was low (4.9%), but a high rate of intraoperative femoral neck fractures was observed (1.4%). abstract_id: PUBMED:10912604 Intramedullary nailing of femoral fractures in adolescents. Fifty femoral shaft fractures were treated with reamed, interlocking, intramedullary nails utilizing a greater trochanteric starting point from 1991 through 1998. The 17 girls and 31 boys ranged in age from 10 to 16 years (mean, 13.2 years) at the time of injury. Average follow-up after fracture was 16 months. All fractures united, and the average leg-length discrepancy was 1 mm (range, -10 mm to +11 mm). No patient had angular or rotational deformities. No patients developed osseous necrosis of the femoral head. Despite intramedullary nailing through the greater trochanteric apophysis, articulotrochanteric distance measurements increased only 4.5 mm at the time of last follow-up. No patient developed significant proximal femoral deformity with some follow-up as far as 60 months from the time of surgery. Intramedullary nailing through a trochanteric starting point is a safe, effective method for treating femur fractures in this adolescent age group. Answer: Intramedullary femoral nailing in sheep does predispose to pulmonary dysfunction when there is pre-existing lung injury and hemorrhagic shock. An experimental study conducted on adult female merino sheep found that intramedullary nailing caused a transient significant increase in pulmonary arterial pressure and triglycerides. Additionally, there was an increase in lymph flow and a decrease in microvascular pressure in one group, while the other group showed a less pronounced increase in lymph flow but an increase in microvascular pressure. The filtration coefficient in one group was five times that of the other, and there was a twofold increase in permeability. These changes suggest that intramedullary nailing can exacerbate lung damage after lung damage and hemorrhagic shock in sheep. The study concluded that alternative methods of fixation should be used in the presence of additional injuries to avoid further lung damage (PUBMED:7599294).
Instruction: Can pretreatment CT predict local control of T2 glottic carcinomas treated with radiation therapy alone? Abstracts: abstract_id: PUBMED:16494910 Arsenite pretreatment enhances the cytotoxicity of mitomycin C in human cancer cell lines via increased NAD(P)H quinone oxidoreductase 1 expression. Arsenic is an effective therapeutic agent for the treatment of patients with refractory or relapsed acute promyelocytic leukemia. The use of arsenic for treating solid tumors, particularly in combination with other chemotherapeutic agents, has been extensively studied. Here, we report that arsenite-resistant human lung cancer CL3R15 cells constitutively overexpress NAD(P)H quinone oxidoreductase 1 (NQO1), an enzyme responsible for activation of mitomycin C (MMC), and are more susceptible to MMC cytotoxicity than parental CL3 cells. The effects of arsenite pretreatment on NQO1 induction were examined in CL3, H1299, H460, and MC-T2 cells. Arsenite pretreatment significantly enhanced the expression of NQO1 and susceptibility to MMC in CL3, H1299, and MC-T2 cells, but not in H460 cells that express high endogenous levels of NQO1. Alternatively, arsenic pretreatment reduced adriamycin sensitivity of CL3 cells. Arsenite-mediated MMC susceptibility was abrogated by dicumarol (DIC), an NQO1 inhibitor, indicating that NQO1 is one of the key regulators of arsenite-mediated MMC susceptibility. Various cancer cell lines showed different basal levels of NQO1 activity and a different capacity for NQO1 induction in response to arsenite treatment. However, overall, there was a positive correlation between induced NQO1 activity and MMC susceptibility in cells pretreated with various doses of arsenite. These results suggest that arsenite may increase NQO1 activity and thus enhance the antineoplastic activity of MMC. In addition, our results also showed that inhibition of NQO1 activity by DIC reversed the arsenite resistance of CL3R15 cells. abstract_id: PUBMED:10079371 Potential use of T cell receptor genes to modify hematopoietic stem cells for the gene therapy of cancer. The purpose of this review is to illustrate some of the technical and biological hurdles that need to be addressed when developing new gene therapy based clinical trials. Gene transfer approaches can be used to "mark" cells to monitor their persistence in vivo in patients, to protect cells from toxic chemotherapeutic agents, correct a genetic defect within the target cell, or to confer a novel function on the target cell. Selection of the most suitable vector for gene transfer depends upon a number of factors such as the target cell itself and whether gene expression needs to be sustained or transient. The TCR gene transfer approach described here represents one innovative strategy being pursued as a potential therapy for metastatic melanoma. Tumor reactive T cells can be isolated from the tumor infiltrating lymphocytes (TIL) of melanoma patients. A retroviral vector has been constructed containing the T cell receptor (TCR) alpha and beta chain genes from a MART-1-specific T cell clone (TIL 5). Jurkat cells transduced with this virus specifically release cytokine in response to MART-1 peptide pulsed T2 cells, showing that the virus can mediate expression of a functional TCR. HLA-A2 transgenic mice are being used to examine whether transduced bone marrow progenitor cells will differentiate in vivo into mature CD8+ T cells expressing the MART-1-specific TCR. Expression of the human TCR alpha and beta chain genes has been detected by RT-PCR in the peripheral blood of HLA-A2 transgenic mice reconstituted with transduced mouse bone marrow. Expression of the TIL 5 TCR genes in the peripheral blood of these mice was maintained for greater than 40 weeks after bone marrow reconstitution. TIL 5 TCR gene expression was also maintained following transfer of bone marrow from mice previously reconstituted with transduced bone marrow to secondary mouse recipients, suggesting that a pluripotent progenitor or lymphocyte progenitor cell has been transduced. abstract_id: PUBMED:36604465 Activated T cell therapy targeting glioblastoma cancer stem cells. Naïve T cells become effector T cells following stimulation by antigen-loaded dendritic cells (DCs) and sequential cytokine activation. We aimed to develop procedures to efficiently activate T cells with tumor-associated antigens (TAAs) to glioblastoma (GBM) stem cells. To remove antigen presentation outside of the immunosuppressive tumor milieu, three different glioma stem cell (GSC) specific antigen sources to load DCs were compared in their ability to stimulate lymphocytes. An activated T cell (ATC) protocol including cytokine activation and expansion in culture to target GSCs was generated and optimized for a planned phase I clinical trial. We compared three different antigen-loading methods on DCs to effectively activate T cells, which were GBM patient-derived GSC-lysate, acid-eluate of GSCs and synthetic peptides derived from proteins expressed in GSCs. DCs derived from HLA-A2 positive blood sample were loaded with TAAs. Autologous T cells were activated by co-culturing with loaded DCs. Efficiency and cytotoxicity of ATCs were evaluated by targeting TAA-pulsed DCs or T2 cells, GSCs, or autologous PHA-blasts. Characteristics of ATCs were evaluated by Flow Cytometry and ELISpot assay, which showed increased number of ATCs secreting IFN-γ targeting GSCs as compared with non-activated T cells and unloaded target cells. Neither GSC-lysate nor acid-eluate loading showed enhancement in response of ATCs but the synthetic peptide pool showed significantly increased IFN-γ secretion and increased cytotoxicity towards target cells. These results demonstrate that ATCs activated using a TAA synthetic peptide pool efficiently enhance cytotoxicity specifically to target cells including GSC. abstract_id: PUBMED:18434122 Interleukin-1beta and tumor necrosis factor-alpha have opposite effects on fibroblasts and epithelial cells during basement membrane formation. Interleukin-1beta (IL-1beta) and tumor necrosis factor-alpha (TNF-alpha) are typical proinflammatory cytokines that influence various cellular functions, including metabolism of the extracellular matrix. We examined the roles of IL-1beta and TNF-alpha in basement membrane formation in an in vitro model of alveolar epithelial tissue composed of alveolar epithelial cells and pulmonary fibroblasts. Formation of the basement membrane by immortalized rat alveolar type II epithelial (SV40-T2) cells, which ordinarily do not form a continuous basement membrane, was dose-dependently upregulated in the presence of 2 ng/ml IL-1beta or 5 ng/ml TNF-alpha. IL-1beta or TNF-alpha alone induced increased secretion of type IV collagen, laminin-1, and nidogen-1/entactin, all of which contributed to this upregulation. In contrast, while SV40-T2 cells cultured with a fibroblasts-embedded type I collagen gel were able to form a continuous basement membrane, they failed to form a continuous basement membrane in the presence of IL-1beta or TNF-alpha. Fibroblasts treated with IL-1beta or TNF-alpha secreted matrix metalloproteinase (MMP)-9 and MMP-2, and these MMPs inhibited basement membrane formation and degraded the basement membrane architecture. Neither IL-1beta- nor TNF-alpha-treated SV40-T2 cells increased the secretion of MMP-9 and MMP-2. These results suggest that IL-1beta participates in basement membrane formation in two ways. One is the induction of MMP-2 and MMP-9 secretion by fibroblasts, which inhibits basement membrane formation, and the other is induction of basement membrane component secretion from alveolar epithelial cells to enhance basement membrane formation. abstract_id: PUBMED:12529970 Novel system for generating cytotoxic effector lymphocytes using carcinoembryonic antigen (CEA) peptide and cultured dendritic cells. We have established a practical system for generating antitumor effector lymphocytes using the tumor antigen peptide CEA and cultured dendritic cells (DCs), and have also characterized effector cells. DCs were induced from the adherent cell population of autologous peripheral blood mononuclear cells (PBMCs) obtained from HLA-A0201 normal or tumor-bearing donors using IL-4 and GM-CSF. The cultured DCs were shown to express class I, class II, CD80 and CD86 molecules. The PBMCs were stimulated for 7 days with the DCs pulsed with the HLA-A0201-restricted CEA peptide CEA9 671 and then expanded in an anti-CD3 antibody (1 microgram/ml)-coated flask in the presence of a 80 U/ml IL-2 (IL-2/CD3 system). The effector cells, which were designated as CEA peptide-pulsed dendritic cell-activated killer (CEA-PDAK) cells, were preferentially CD3+CD8+, and capable of killing T2 cells pulsed with CEA peptide but not T2 cells alone. The CA-PDAK cells also lysed the gastric cancer cell line KATO III (HLA-A0201, CEA (+)), but not the WiDr (HLA-A2402, CEA(+)) cells. The cytotoxicity was abrogated when the CEA-PDAK cells were treated with anti-TCR alpha beta antibody or when the target cells were treated with the anti-class I antibody prior to the cytotoxicity assay. The CEA-PDAK cells exerted their cytotoxic activity even in the presence of a high amount of CEA protein at the effector phase, which mimicked the clinical setting. The CEA-PDAK cells showed approximately a hundred-fold expansion in total cell numbers yielded without any loss of the specific lysis, when stimulated with the IL-2/CD3 system compared to those stimulated with IL-2 alone. The TCR V beta gene analysis for the CEA-PDAK cells, conducted by means of RT-PCR-Southern blotting, demonstrated oligoclonal expression of TCR beta 7 and 12, and the latter was shown to be responsible for the killing activity. SSCP analysis indicated the clonotype of the TCR V beta 12 gene, indicating a selective expansion of lymphocytes bearing a limited TCR variable region by the stimulation with CEA peptide-pulsed DCs. Taken together, the effector lymphocytes reactive with the CEA antigen can be generated from PBMCs with the antigenic CEA peptide and cultured DCs. The IL-2/CD3 system is effective and practical in activating the effector cells for the clinical use of CEA-PDAK cells. Adoptive immunotherapy using this system may be promising for treating CEA-expressing tumors. abstract_id: PUBMED:35028614 TGF-β1 potentiates Vγ9Vδ2 T cell adoptive immunotherapy of cancer. Despite its role in cancer surveillance, adoptive immunotherapy using γδ T cells has achieved limited efficacy. To enhance trafficking to bone marrow, circulating Vγ9Vδ2 T cells are expanded in serum-free medium containing TGF-β1 and IL-2 (γδ[T2] cells) or medium containing IL-2 alone (γδ[2] cells, as the control). Unexpectedly, the yield and viability of γδ[T2] cells are also increased by TGF-β1, when compared to γδ[2] controls. γδ[T2] cells are less differentiated and yet display increased cytolytic activity, cytokine release, and antitumor activity in several leukemic and solid tumor models. Efficacy is further enhanced by cancer cell sensitization using aminobisphosphonates or Ara-C. A number of contributory effects of TGF-β are described, including prostaglandin E2 receptor downmodulation, TGF-β insensitivity, and upregulated integrin activity. Biological relevance is supported by the identification of a favorable γδ[T2] signature in acute myeloid leukemia (AML). Given their enhanced therapeutic activity and compatibility with allogeneic use, γδ[T2] cells warrant evaluation in cancer immunotherapy. abstract_id: PUBMED:34729377 Novel TCR-like CAR-T cells targeting an HLA∗0201-restricted SSX2 epitope display strong activity against acute myeloid leukemia. The synovial sarcoma X breakpoint 2 (SSX2) belongs to a multigene family of cancer-testis antigens and can be found overexpressed in multiple malignancies. Its restricted expression in immune-privileged normal tissues suggest that SSX2 may be a relevant target antigen for chimeric antigen receptor (CAR) therapy. We have developed a T cell receptor (TCR)-like antibody (Fab/3) that binds SSX2 peptide 41-49 (KASEKIFYV) in the context of HLA-A∗-0201. The sequence of Fab/3 was utilized to engineer a CAR with the CD3 zeta intra-cellular domain along with either a CD28 or 4-1BB costimulatory endodomain. Human T cells from HLA-A2+ donors were transduced to mediate anti-tumor activity against acute myeloid leukemia (AML) tumor cells. Upon challenge with HLA-A2+/SSX2+ AML tumor cells, CAR-expressing T cells released interferon-γ and eliminated the tumor cells in a long-term co-culture assay. Using the HLA-A2+ T2 cell line, we demonstrated a strong specificity of the single-chain variable fragment (scFv) for SSX2 p41-49 and the closely related SSX3 p41-49, with no response against the others SSX-homologous peptides or unrelated homologous peptides. Since SSX3 has not been observed in tumor cells and expression cannot be induced by pharmacological intervention, SSX241-49 represents an attractive target for CAR-based cellular therapy to treat multiple types of cancer. abstract_id: PUBMED:17695529 In vitro antitumor lymphocyte generation using dendritic cells and innate immunity mechanisms as tumor cell treatments. Dendritic cells play a central role in the initiation and regulation of acquired and innate immunity, playing an important role in immunosurveillance and antitumor reaction. This reaction is mediated by effector cells and soluble factors. We chose to investigate four dendritic cell loading methods by mimicking innate immunity mechanisms and using whole tumor cell treatments in order to stimulate lymphocytes: sodium hypochlorite, TNFalpha and IFNgamma and IgG opsonization. These methods were compared in an HLA.A2 model of healthy donors and with the M74 melanoma cell line. Treated tumor cell-loaded DC were able to increase proliferation of lymphocytes. Moreover, a CTL population was stimulated, as shown by their specific cytotoxicity against tumor cells (with w6/32 antibody assays), against MelanA/MART-1 loaded T2 cells and using MelanA/MART-1 tetramer. IgG opsonization seemed to be less efficient than other tumor cell treatments. These loaded DC, or the obtained effector cells, could be interesting for therapeutic applications in antitumor cell therapy. abstract_id: PUBMED:19555216 Identification of an HLA-A*0201-restrictive CTL epitope from MUC4 for applicable vaccine therapy. Recent research has indicated that MUC4 plays an important role in the development of many tumors and may prove useful as a novel cancer immunotherapy target. We aimed to identify HLA-A*0201-restrictive cytotoxic T lymphocyte (CTL) epitopes of the cancer-associated antigen MUC4. The MUC4 sequence was scanned for immunogenic peptides using HLA-binding prediction software. Dendritic cells (DCs) from peripheral blood mononuclear cells (PBMCs) were induced by cytokines. Five possible CTL epitopes were selected by software analysis, synthesized, and used to pulse mature DCs. The CD8(+) T cells from PBMCs from an HLA-A*0201 healthy donor were stimulated with autologous MUC4-peptide-loaded DCs and expanded in vitro. T cell activation was assessed by ELISPOT, and cytotoxicity was determined by (51)chromium ((51)Cr)-release assays. Our results show that CTLs induced by peptide P01204 could lyse T2 cells pulsed with peptide P01204 and HCT-116 cells (MUC4(+), HLA-A2(+)). Compared with a control peptide, P01204 increased the number of IFN-gamma producing T cells. Overall, these results suggest that P01204 is a novel HLA-A*0201-restrictive CTL epitope of the cancer-associated antigen MUC4. This will provide a foundation for the development of tumor-specific peptide vaccines. abstract_id: PUBMED:22469208 Potential antitumor applications of a monoclonal antibody specifically targeting human papilloma virus 16 E7 49-57 peptide. Our study aims to evaluate whether the approach of TCRm mAb has therapeutic potential against HPV-induced tumors. In the present study, we generated a murine IgG2a mAb 6C10 specifically recognizing HPV-16-E7(49-57) epitope (RAHYNIVTF) in the polypeptides and in complex with a MHC class I molecule. Analysis of the primary structure shows that the 6C10 Ab displays a novel sequence in the CDR of the heavy chain, compared to the sequences in the Kabat database, which suggests the Ab has completed its affinity maturation. The 6C10 Ab can specifically recognize E7 and Trx-E7(30-67) protein in ELISA, and can also specifically bind to T2 cell carrying HPV-16-E7(49-57) peptide. In the TC-1 cell tumor-bearing mouse model, 6C10 exhibits tumor suppression activity when compared to the isotype control Ab. 6C10 Ab has showed tumor-inhibition potency in a mouse model and this Ab may have the prospect of cancer therapy. Answer: The provided abstracts do not contain information relevant to the use of pretreatment CT (computed tomography) to predict local control of T2 glottic carcinomas treated with radiation therapy alone. The abstracts focus on various topics related to cancer treatment, such as the cytotoxicity of mitomycin C enhanced by arsenite pretreatment (PUBMED:16494910), gene therapy approaches using T cell receptor genes (PUBMED:10079371), activated T cell therapy targeting glioblastoma cancer stem cells (PUBMED:36604465), effects of cytokines on basement membrane formation (PUBMED:18434122), generation of cytotoxic effector lymphocytes using carcinoembryonic antigen and dendritic cells (PUBMED:12529970), TGF-β1 potentiation of Vγ9Vδ2 T cell adoptive immunotherapy (PUBMED:35028614), TCR-like CAR-T cells targeting an HLA-restricted SSX2 epitope (PUBMED:34729377), antitumor lymphocyte generation using dendritic cells and innate immunity mechanisms (PUBMED:17695529), identification of an HLA-A*0201-restrictive CTL epitope from MUC4 for vaccine therapy (PUBMED:19555216), and a monoclonal antibody targeting human papilloma virus 16 E7 peptide (PUBMED:22469208). To answer the question about the predictive value of pretreatment CT for local control of T2 glottic carcinomas treated with radiation therapy, one would need to consult clinical studies or reviews that specifically address the use of imaging modalities like CT scans in the context of treatment planning and outcome prediction for laryngeal cancer. Such information is not provided in the abstracts given.
Instruction: Does breast feeding influence liver biochemistry? Abstracts: abstract_id: PUBMED:14581797 Does breast feeding influence liver biochemistry? Objective: It is assumed that early feeding can affect liver biochemistry because breast-fed infants have a higher risk of hyperbilirubinemia than formula-fed infants. The authors sought to determine how feeding mode affected liver biochemistry in healthy term infants. Methods: Healthy term infants were followed up during infancy with a monthly questionnaire about feeding mode. Blood samples were obtained at 2, 6, and 9 months. Liver biochemistry (serum albumin, alkaline phosphatase, lactic dehydrogenase, aspartate aminotransferase [AST], and bilirubin), total insulin-like growth factor 1 (IGF-I), and insulin growth factor binding protein 3 (IGFBP-3) were determined at all ages. Results: Mean AST and bilirubin were significantly higher in breast-fed infants at 2 and 6 months. In addition, mean albumin levels were higher in breast-fed infants at 2 months. Alkaline phosphatase, IGF-I, IGFBP-3, and lactic dehydrogenase levels did not differ between the feeding groups. AST levels did not correlate significantly with bilirubin, albumin, alkaline phosphatase, or lactic dehydrogenase values. There was a strong positive association between AST and IGF-I at 2 months (r = 0.47, P = 0.004). Conclusion: Cytomegalovirus infection, vitamin K deficiency, and macromolecular forms of AST could be an explanation for a higher AST level among breast-fed infants. However, no other clinical or paraclinical sign of liver disease was seen, all infants were given oral vitamin K, and the AST did not rise to levels comparable to those seen in individuals with macromolecular AST. The authors speculate the most likely explanation of the elevated AST is induction of hepatocytes by factors in human milk. This is supported by the higher albumin levels in breast-fed infants and the positive association between AST and IGF-I. abstract_id: PUBMED:2605102 Weekly epirubicin for breast cancer with liver metastases and abnormal liver biochemistry. Thirty-six consecutive patients with breast cancer and liver metastases with abnormal liver biochemistry were treated with epirubicin 25 mg m-2 i.v. weekly. No dose modification was made for abnormal liver biochemistry, but dose intensity was adjusted by delaying treatment according to myelosuppression. The UICC overall response rate according to UICC criteria was 11/36 (30%) and median response duration was 27 weeks. Liver biochemistry improved in a further seven patients. Treatment was well tolerated. Epirubicin given in this way is effective in patients with breast cancer and liver metastases. An initial deterioration in liver biochemistry may occur before there is a response to epirubicin. abstract_id: PUBMED:3872949 Liver disease in alpha 1-antitrypsin deficiency. A retrospective analysis of the influence of early breast- vs bottle-feeding. We identified children with alpha 1-antitrypsin deficiency from the medical records of the Massachusetts General Hospital and Children's Hospital, Boston, and investigated their early feeding history. Between 1969 and 1983, forty children with the deficiency were seen at one or both hospitals. Clinical information was obtained from hospital records and from questionnaires mailed to the parents. Complete morbidity, mortality, and early feeding data were obtained for 32 of the children who were born at 38 to 42 weeks' gestation and whose weights were appropriate for gestational age. We compared the presence of severe liver disease and the death rate of those who had been exclusively breast-fed for one month with those who had been bottle-fed. Severe liver disease was present in eight (40%) of bottle-fed and one (8%) of breast-fed infants. Twenty-four of the 32 infants were still alive at the termination of the study; 12 had been breast-fed and 12 bottle-fed during their first month of life. All eight deceased infants had been bottle-fed. The mortality rate in the bottle-fed group was significantly greater than that of the breast-fed group. Our study suggests that breast-feeding may offer some protection against severe liver disease and early death in infants with alpha 1-antitrypsin deficiency. abstract_id: PUBMED:16047816 The influence of the "Friend of Children Hospital" campaign on breast-feeding The paper investigates the awareness of women who gave birth in a 'Friend of Children' Hospital (FCH) of the influence of this campaign on their decision to breast-feed. We conducted semi-structured interviews, focusing the maternal breast-feeding process, expectations, experiences, and the importance of the health staff. The data analysis was based on the "Ten Steps to Successful Maternal Breast-Feeding." We found out that in situations regarded by women as problematic and lacking in answers that could attenuate their discomfort and suffering in the face of their desire to breast-feed, they stopped breast-feeding. We believe that by maintaining the FCH support to women (so that they can face those difficulties), its influence can be decisive to the continuation of breast-feeding. abstract_id: PUBMED:7139482 Influence of cultural and environmental factors on breast-feeding. A study was undertaken at hôpital Sainte-Justine, Montreal, to determine the factors that influence the choice of feeding method for infants and the duration of breast-feeding. The factors that most clearly distinguished the mothers who breast-fed were lifestyle, education and conditioning to the parental role; those who nursed successfully for a long time were well motivated, well educated, supported by their family, especially the husband, and the La Leche League, and were of an economic status such that it was not necessary for them to return to work soon after delivery. Although the breast-fed babies were more fussy, they were healthier and received fewer drugs. abstract_id: PUBMED:30109844 The influence of a community-level breast-feeding promotion intervention programme on breast-feeding practices in Myanmar. Objective: Breast-feeding in the first 6 months of life is critical for ensuring both child health and well-being. Despite efforts to improve breast-feeding practices, recent studies have reported that Myanmar continues to have low rates of exclusive breast-feeding.Design/Setting/SubjectsA community-based breast-feeding promotion programme using trained community members was implemented for 1 year in hard-to-reach townships of Myanmar. The present study assessed the breast-feeding practices using a cross-sectional survey of 610 mothers of children under 2 years old: specifically, breast-feeding within 24 h, exclusive breast-feeding up to 6 months and breast-feeding duration. Results: Using Cox models for breast-feeding duration before 24 months, the hazard of breast-feeding cessation was lower in programme v. non-programme townships (hazard ratio (HR)=0·55; 95 % CI 0·32, 0·95). Mothers who worked as shop owners or ran a family business had lower hazard of breast-feeding cessation (HR=0·13, P<0·05) v. those who worked as supervisors, managers, self-employed and businesswomen. The hazard of breast-feeding cessation was higher in women in higher wealth quintiles v. those in the lowest quintile (lower quintile, HR=3·49, P<0·1; higher quintile, HR=3·50, P<0·1; highest quintile, HR=3·47, P<0·1). Conclusions: The intervention did not affect exclusive breast-feeding practices or breast-feeding within the first 24 h. Potential reasons include existing high levels of early initiation of breast-feeding due to ongoing government-led maternal and child health activities, and social and traditional practices related to complementary feeding. Community-based breast-feeding programmes should continue to promote exclusive breast-feeding and develop strategies to support working mothers. abstract_id: PUBMED:7195111 Factors, which may influence breast feeding (author's transl) The case histories of 2197 mothers, who delivered at the 1st Department of Obstetrics and Gynecology of the University of Vienna in the years 1976 and 1977 were evaluated by the computer. 14 different parameters, which could have an influence on the incidence and duration of breast feeding were investigated. Our results suggest, that the incidence of breast feeding is declining with advancing years of the mother. An early reintegration into professional life after delivery has an effect on the duration of nursing. The family status and parity of the mother had no influence on the incidence and duration of breast feeding. Foreigners, as well as non-Roman Catholic mothers, who are represented mainly by guest workers from Yugoslavia and Turkey had a distinctly higher frequency of breast feeding of 74,6%, respectively 78,5% than average (67%). Also the duration of breast feeding of foreign mothers was above average. Women of higher socioeconomic classes had a higher incidence of breast feeding, as well as women, who had abortion or interruption or those using oral contraceptives. Operative delivery under narcosis and premature birth were accompanied by a significantly lower frequency of breast feeding. The rate of mastitis is at 1,4% in the group of breast feeding mothers. abstract_id: PUBMED:12635980 Influence of breast-feeding on mental and psychomotor development. The purpose of this study was to analyze the effects of breast-feeding on cognitive and motor development. A prospective cohort study was carried out enrolling 249 babies from 2 rural and urban areas. Cognitive development was assessed at the age of 18 months. Results show that adjusted scores on the mental development index were 4.6 points higher in babies breast-fed for longer than 4 months than in those breast-fed for less time. No differences in psychomotor development were found as a function of type of feeding. A statistically-significant linear correlation was observed between the duration of breast-feeding and scores for Bayley's mental development index. These data suggest that breast-feeding for longer than 4 months has a positive effect on the baby's mental development measured at 18 months of age. Parental intelligence quotient also seems to have an influence on cognitive development. abstract_id: PUBMED:2082173 Adolescents and breast feeding The attitudes and future decisions of teenage girls towards infant feeding methods, breast-feeding or bottle feeding have been investigated. A self-conducted questionnaire was completed by 146 secondary school girls ranging in age from 16 to 18 years. Economic convenience, better nutritional value, deeper emotional link with the infant, own and friends past experience nearly always in favour of breast-feeding, represent for the girls the main advantages of breast-feeding. On the other hand, many girls apprehend breast-feeding for various possible physical, psychological and social implications. Some aspects of the problem studied in this paper show a significant correlation with the future choice of infant feeding. The girls would be interested in receiving information at school. Suitable information during school health lessons may positively influence teenage girls to adopt breast-feeding as the infant feeding method of choice. abstract_id: PUBMED:22185539 Breast-feeding and benign breast disease. Benign breast disease (BBD) is very common among women in their fertile age, but its correlation with breast reproductive function remains unclear. Our study aimed to investigate the relation between BBD and breast-feeding. We collected data on 105 women with BBD and 98 controls, focusing on their reproductive history and breast-feeding. We analysed data by R (version 2.12.1) considering p < 0.05 as significant. The results showed that fibroadenoma represented the most frequent BBD (55%), followed by fibrocystic changes (19%), intraductal papilloma (6%) and inflammatory breast disorders (5%). The mean age was 31.5 years (± 6.1), BMI 21.2 kg/m² (± 3.4) and age at menarche 13.0 years (± 1.5). Duration of breast-feeding was not significantly different between controls and BBD types (p = NS). Selecting women with fibroadenoma breast-feeding duration directly correlated with the number of benign lesions (p < 0.05), which remains significant also by multivariate analysis. It was concluded that there seemed to be no difference in breast-feeding among BBDs types, but lactation may influence the number of fibroadenomas. Moreover, prospective studies would better define the correlation between lactation and BBDs. Answer: Yes, breast feeding does influence liver biochemistry. A study found that mean aspartate aminotransferase (AST) and bilirubin levels were significantly higher in breast-fed infants at 2 and 6 months compared to those who were not breast-fed. Additionally, mean albumin levels were higher in breast-fed infants at 2 months. The study suggested that the elevated AST in breast-fed infants could be due to induction of hepatocytes by factors in human milk, as supported by the higher albumin levels and the positive association between AST and insulin-like growth factor 1 (IGF-I) (PUBMED:14581797). Moreover, another study on children with alpha 1-antitrypsin deficiency indicated that breast-feeding may offer some protection against severe liver disease and early death in infants with this condition. The study found that severe liver disease was present in 40% of bottle-fed and 8% of breast-fed infants, and all deceased infants had been bottle-fed, suggesting a protective effect of breast-feeding (PUBMED:3872949). These findings demonstrate that breast-feeding can have a significant impact on liver biochemistry in infants, potentially offering protective benefits in certain conditions.
Instruction: Should we feed back research results in the midst of a study? Abstracts: abstract_id: PUBMED:22974318 Should we feed back research results in the midst of a study? Background: This report is an introduction to a series of three research papers that describe the evolution of the approaches taken by the Translating Research in Elder Care (TREC) research team during its first four years to feed back the research findings to study participants. TREC is an observational multi-method health services research project underway in 36 nursing homes in the prairie provinces of Canada. TREC has actively involved decision makers from the sector in all stages from initial planning, through data collection to dissemination activities. However, it was not planned as a fully integrated knowledge translation project. These three papers describe our progress towards fully integrated knowledge translation--with respect to timely and requested feedback processes. The first paper reports on the process and outcomes of creating and evaluating the feedback of research findings to healthcare aides (unregulated health professionals). These aides provide over 80% of the direct care in our sample and actively requested the feedback as a condition of their continued cooperation in the data acquisition process. The second paper describes feedback from nursing home administrators on preliminary research findings (a facility annual report) and evaluation of the reports' utility. The third paper discusses an approach to providing a more in-depth form of feedback (expanded feedback report) at one of the TREC nursing homes. Findings: Survey and interview feedback from healthcare aides is presented in the first paper. Overall, healthcare aides' opinions about presentation of the feedback report and the understand ability, usability, and usefulness of the content were positive. The second paper describes the use of telephone interviews with facility administrators and indicates that the majority of contextual areas (e.g., staff job satisfaction) addressed in facility annual report to be useful, meaningful, and understandable. More than one-half of the administrators would have liked to have received information on additional areas. The third paper explores how a case study that examined how involvement with the TREC study influenced management and staff at one of the TREC nursing homes. The importance of understanding organizational routines and the impact of corporate restructuring were key themes emerging from the case study. In addition, the Director of Care suggested changes to the structure and format of the feedback report that would have improved its usefulness. Conclusions: We believe that these findings will inform others undertaking integrated knowledge translation activities and will encourage others to become more engaged in feedback processes. abstract_id: PUBMED:6455055 Application of the bio-feed-back-method in asthmatic patients (author's transl) Principle and research with the bio-feed-back-method, their relations to the techniques of relaxation and possibilities of the therapeutical use in the clinic were described. The practical importance of this method is based on the possibility to measure the ability to relax and their degree and to mediate the relaxation more rapidly. For this purpose a bio-feed-back-apparatus (electromyograph) was the best. Studies were carried out with a mini-suit-case-apparatus. The first experiences with this method were described. abstract_id: PUBMED:27207929 Translation Quality Assessment in Health Research: A Functionalist Alternative to Back-Translation. As international research studies become more commonplace, the importance of developing multilingual research instruments continues to increase and with it that of translated materials. It is therefore not unexpected that assessing the quality of translated materials (e.g., research instruments, questionnaires, etc.) has become essential to cross-cultural research, given that the reliability and validity of the research findings crucially depend on the translated instruments. In some fields (e.g., public health and medicine), the quality of translated instruments can also impact the effectiveness and success of interventions and public campaigns. Back-translation (BT) is a commonly used quality assessment tool in cross-cultural research. This quality assurance technique consists of (a) translation (target text [TT1]) of the source text (ST), (b) translation (TT2) of TT1 back into the source language, and (c) comparison of TT2 with ST to make sure there are no discrepancies. The accuracy of the BT with respect to the source is supposed to reflect equivalence/accuracy of the TT. This article shows how the use of BT as a translation quality assessment method can have a detrimental effect on a research study and proposes alternatives to BT. One alternative is illustrated on the basis of the translation and quality assessment methods used in a research study on hearing loss carried out in a border community in the southwest of the United States. abstract_id: PUBMED:18075014 Interventions to reduce low-back injury risk among youth who perform feed handling and scooping tasks on farms. Children and adolescents who perform farm chores are exposed to cumulative trauma injury risks, particularly to the low back. For example, they may routinely handle heavy materials and need to adopt awkward postures during farm chores. Two potential interventions aimed at reducing low-back injury risk were examined in the laboratory: the use of a rugged cart and proper orienting of feed bags to ease feed transport; and the use of a modified feed bin, intended to allow easier access to product scooped from the bin at different levels. A lumbar motion monitor device was used to quantify trunk movement and determine injury risk level. Fourteen male and female youth who regularly perform these farm chores participated in the study. The cart significantly reduced low-back injury risk by nearly 10%, compared with manual feed bag lifting and carrying. The modified feed bin did not significantly reduce low-back injury risk, compared with traditional scooping. Regardless of the method used, however, scooping feed from the top of the bin reduced lower back disorder risk by 50% or more compared to the two lower levels. This study showed that relatively simple and low-cost solutions can be applied to farm environments to help protect the low backs of youth who perform farm chores. abstract_id: PUBMED:24315150 Low back pain research--future directions. Low back pain is a challenge for clinicians and researchers, due to the large variability in clinical presentation, lack of consensus regarding diagnostic criteria or clinical classification; wide variation in course and prognosis, and limited success in identifying effective treatments. However, increasing research efforts has generated an expanding body of evidence on the epidemiology, prognosis and treatment of back pain. This paper presents four key developments in research and clinical practice, and describes how these can influence the future direction of back pain research: (1) the increasing awareness of the impact of low back pain on population health; (2) new approaches to describing and investigating course and prognosis of back pain; (3) the need to better understand the bio-psycho-social mechanisms or pathways that explain impact and long-term outcomes in order to inform intervention research; and (4) the potential for stratified models of care to improve patient outcomes and efficiency of care for back pain. abstract_id: PUBMED:17958038 Recommendations for good practice, consensus, feed-back The author brings up the different phases in the development of the Recommendations for Good Practice by the Scientific Societies of General Practice in Belgium (Société Scientifique de Médecine Générale and Domus Medica) and by the Belgian Antibiotic Policy Coordination Committee (BAPCOC). Are also discussed the methodology of the consensus conferences organized by the National Sickness and Disability Insurance Institute and the process of the prescription feed-back campaigns to some groups of prescribers considered by the National Board of Quality Promotion. abstract_id: PUBMED:20227640 Treatment-based subgroups of low back pain: a guide to appraisal of research studies and a summary of current evidence. There has been a recent increase in research evaluating treatment-based subgroups of non-specific low back pain. The aim of these sub-classification schemes is to identify subgroups of patients who will respond preferentially to one treatment as opposed to another. Our article provides accessible guidance on to how to interpret this research and determine its implications for clinical practice. We propose that studies evaluating treatment-based subgroups can be interpreted in the context of a three-stage process: (1) hypothesis generation-proposal of clinical features to define subgroups; (2) hypothesis testing-a randomised controlled trial (RCT) to test that subgroup membership modifies the effect of a treatment; and (3) replication-another RCT to confirm the results of stage 2 and ensure that findings hold beyond the specific original conditions. At this point, the bulk of research evidence in defining subgroups of patients with low back pain is in the hypothesis generation stage; no classification system is supported by sufficient evidence to recommend implementation into clinical practice. abstract_id: PUBMED:35797529 Back to the Future: A Report From the 16th International Forum for Back and Neck Pain Research in Primary Care and Updated Research Agenda. Study Design: The 16th meeting of the International Forum for Back and Neck Pain Research in Primary Care was held in Québec City in July 2019 under the theme of innovation. This paper addresses the state of research in the field. Objective: To ascertain the evolution of knowledge and clinical application in back and neck pain and identify shifting research priorities. Materials And Methods: After a brief presentation of the Forum and its history, the current state of the field was depicted from the scientific program and the recordings of the plenary and parallel oral and poster communications of Forum XVI. Research agendas established in 1995 and 1997 were updated from a survey of a multidisciplinary group of experts in the field. A discussion of the progress made and challenges ahead follows. Results: While much progress has been made at improving knowledge at managing back pain in the past 25 years, most research priorities from earlier decades are still pertinent. The need for integration of physical and psychological interventions represents a key challenge, as is the need to better understand the biological mechanisms underlying back and neck pain to develop more effective interventions. Stemming the tide of back and neck pain in low and middle-income countries and avoiding the adoption of low-value interventions appear particularly important. The Lancet Low Back Pain Series initiative, arising from the previous fora, and thoughts on implementing best practices were extensively discussed, recognizing the challenges to evidence-based knowledge and practice given competing interests and incentives. Conclusion: With the quantity and quality of research on back and neck pain increasing over the years, an update of research priorities helped to identify key issues in primary care. abstract_id: PUBMED:32089017 Translation and back-translation methodology in health research - a critique. Introduction: While back translation has been widely used in medical research surveying linguistically diverse populations, research literature often fails to document this complex translation process. Our study examines inadequacies in the use of back translation, suggests improvements, as well as suggesting where other translation strategies may be more appropriate.Areas covered: This paper cites numerous metastudies showing how back translation is often uncritically adopted in validation of research instruments, pointing to potential methodological failings, before examining the back-translation processes in an Australian study of non-English speaking cancer patients. Our study of back translation applied to patient self-report questionnaires demonstrates that appropriate renditions of items are critically dependent upon both translator and researcher awareness of item purpose, overall project specifications and identification of linguistic ambiguities in source test items. The poor implementation and documentation of back-translation processes in many studies indicate alternatives to back translation may be appropriate.Expert opinion: Where translations are used in research, translation processes need to be made explicit in research protocols and reports, and translation experts need to be part of the research team, with translation guidance and advice integrated into all stages of research design. abstract_id: PUBMED:9779536 Methodologic issues in low back pain research in primary care. Study Design: Narrative review and discussion of the selected literature. Objectives: To discuss some important methodologic challenges in low back pain research in primary care. Summary Of Background Data: Many methodologic problems must be confronted when conducting low back pain research. Some of these problems are back pain specific or specific to the primary care setting. Methods: Methodologic problems related to four research issues will be discussed: study designs, definition of low back pain, determinants of low back pain, and outcome assessment. Results: Two fundamentally different study designs are frequently used in low back pain research, namely observational studies and experimental studies. The definition of low back pain is typically restricted to a highly variables self-reported symptom, the sensation of pain in the back. There clearly is a need for an evidence-based classification system for low back pain. Because a tenable theoretical framework is lacking, it is difficult to know which determinants of low back pain should be quantified. Low back pain studies focus usually on health-related quality-of-life outcome parameters. The identification of the minimum clinically relevant changes for the most important outcome instruments needs further consideration. Conclusions: In years to come, low back pain researchers are challenged to overcome some of these (and other) problems to enhance the quality of low back pain research in primary care. Answer: The question of whether to feed back research results in the midst of a study is addressed in the context of the Translating Research in Elder Care (TREC) project, which is an observational multi-method health services research project underway in 36 nursing homes in the prairie provinces of Canada. The TREC project evolved its approach to feedback during its first four years, actively involving decision-makers from the sector at all stages, from planning through data collection to dissemination activities. Although not initially planned as a fully integrated knowledge translation project, TREC moved towards this model, particularly in terms of timely and requested feedback processes (PUBMED:22974318). Feedback was provided to healthcare aides, who constitute over 80% of the direct care workforce in the sample nursing homes and had requested feedback as a condition of their continued cooperation in the data acquisition process. The feedback was generally well-received, with positive opinions about the presentation, understandability, usability, and usefulness of the content. Nursing home administrators also found the preliminary research findings useful, meaningful, and understandable, with some expressing a desire for additional information in more areas. An in-depth feedback report provided at one of the TREC nursing homes highlighted the importance of understanding organizational routines and the impact of corporate restructuring. Suggestions were made to improve the structure and format of the feedback report to enhance its usefulness (PUBMED:22974318). The findings from the TREC project suggest that providing feedback during a study can be beneficial and inform integrated knowledge translation activities. It can also encourage more engagement in feedback processes. The positive reception of the feedback by healthcare aides and administrators indicates that such an approach can be valuable for participants and may improve the research process by fostering cooperation and enhancing the relevance and application of the research findings (PUBMED:22974318). In conclusion, while the decision to feed back research results during a study may depend on the specific context and goals of the research, the experience of the TREC project suggests that it can be a constructive practice that benefits both the research process and the participants involved.
Instruction: Does external stenting reduce porcine vein-graft occlusion via an action on vascular nerves? Abstracts: abstract_id: PUBMED:12643468 Does external stenting reduce porcine vein-graft occlusion via an action on vascular nerves? Background: Neural reorganization occurs in porcine vein grafts and placement of an external stent reduces graft occlusion. Aim Of The Study: To determine the effect of external stenting on the innervation of porcine vein grafts. Methods: Saphenous vein into carotid artery grafting (with and without external stents) was performed in 16 pigs. After one and six months, grafts were removed, nerves were counted, and neointima was assessed. Results: In vein graft compared to ungrafted vein, there was a significant (p < 0.05) decrease in medial perivascular nerves, but a dramatic increase in paravascular nerves in the adventitia (p < 0.05). In stented vein grafts there was also a reduction of perivascular nerves and the paravascular nerve proliferation observed in vein grafts at one month was inhibited (p < 0.05). Neointima formation and the appearance of large paravascular nerve bundles in the adventitia of vein grafts were abolished by external stenting. Conclusions: Neural reorganization plays a role in vein-graft failure, possibly through the local release of mitogens; the prevention of this reorganization contributes to the inhibitory effect of the external stent on neointima formation. abstract_id: PUBMED:33592294 External stenting and disease progression in vein grafts 1 year after open surgical repair of popliteal artery aneurysm. Objective: Open surgical repair remains the gold standard treatment for popliteal artery aneurysms (PAA). The objective of this study was to evaluate the safety of external stenting and its medium-term effect on vein graft disease after open PAA repair. Methods: Between December 2017 and September 2019, 12 consecutive patients with PAA underwent open surgical repair with externally stented saphenous vein grafts. Duplex ultrasound scanning of the grafts was performed at discharge and at 3, 6, and 12 months after the procedure to evaluate graft patency, average lumen diameter and lumen uniformity. Results: Eleven patients underwent aneurysm ligation and bypass grafting and one patient was treated with aneurysm exclusion and interposition of a venous segment. External stenting of the vein graft was successful in all patients. The mean follow-up time was 12 months (range, 7-17 months), with a primary patency rate of 100% and no graft revisions or reinterventions. The mean lumen diameters at 3, 6, and 12 months were 5.9 ± 1.2 mm, 5.7 ± 0.8 mm, and 5.7 ± 0.7 mm, respectively, with no significant changes between 3 and 6 (P = .34) and between 6 and 12 months (P = .34). The coefficient of variance at 3, 6, and 12 months was 8.2 ± 9.3, 9.4 ± 7.2, and 10.4 ± 8.9, respectively, with no significant change between 3 and 6 months (P = .78) or 6 and 12 months (P = .98). No mortality or amputations were recorded throughout the follow-up period. Conclusions: External stenting of vein grafts in open surgical repair of PAA is feasible and safe. This technique may potentially improve the outcomes of surgical repair in patients with PAA. abstract_id: PUBMED:11254903 Macro-porosity is necessary for the reduction of neointimal and medial thickening by external stenting of porcine saphenous vein bypass grafts. Background: placing external non-restrictive macro-porous stents around porcine vein grafts prevents neointima formation and medial thickening in both the short and long term. Whether the porosity of the stent material influences this effect, however, has not been determined. Therefore, the effect on neointimal and medial thickening of external macro-porous (polyester) and micro-porous (polytetrafluorethylene) stents of equal diameter were compared. The effect on expression of platelet-derived growth factor (PDGF), a potent mediator of vascular smooth muscle cell migration and proliferation and its receptors was also investigated. Methods And Results: saphenous vein-carotid artery interposition grafting was performed in Landrace pigs with external placement of 8 mm diameter macro- and micro-porous stents contralaterally. One month after surgery, graft wall dimensions, PDGF and PDGF receptor expression and cell proliferation using proliferating cell nuclear antigen (PCNA) were measured on histological sections. Macro-porous stents significantly reduced neointimal and medial thickening compared with micro-porous stents (0.1+/-0.02 vs. 0.25+/-0.03 mm, P<0.002, and 0.10+/-0.02 vs. 0.17+/-0.02 mm, P<0.014, respectively). Macro-porous stents significantly reduced the percentage of cells expressing PDGF and PCNA, compared with micro-porous stents (36+/-9 vs. 80+/-7, P < 0.002, and 11+/-3 vs. 21+/-2, P < 0.02, respectively). The percentage of cells expressing PDGF receptors was similar with both the stent types. Adventitial microvessel formation occurred across macro-porous stents but was markedly suppressed by micro-porous stents. Conclusions: porosity is crucial to the efficacy of external stents in reducing neointima formation in porcine vein grafts. Decreases in PDGF expression and cell proliferation accompany the reduction in neointima formation. In addition, macro-porous stents allow adventitial microvessels to connect with the vasculature outside the stent, thereby potentially improving oxygenation. Although external stenting is highly effective in reducing neointima formation after vein grafting, the properties of the stent material necessary for this effect have not been defined. This study establishes that macro-porosity is one essential feature required to reduce PDGF expression cell proliferation and neointima formation. abstract_id: PUBMED:15557918 Long-term reduction of medial and intimal thickening in porcine saphenous vein grafts with a polyglactin biodegradable external sheath. Objectives: The development of neointimal hyperplasia with subsequent atherosclerotic deposition has been proposed to cause most late vein graft failures. Our unit has previously demonstrated that placement of a macroporous, loose-fitting polyester external stent prevents neointimal thickening in porcine vein grafts, and has been proposed as a therapeutic strategy to prevent late vein graft failure. To reduce any potential long-term complications of the permanent polyester stent, a study was undertaken to investigate the effect of a biodegradable external stent on porcine vein graft thickening at 1 month and to identify its longer term effects at 6 months. Methods: Bilateral saphenous vein to common carotid artery interposition grafting was performed in Large White pigs (25-32 kg; n = 6 per time course group) according to UK Home Office guidelines. A commercially constructed loose-fitting 8-mm-diameter polyglactin stent was placed externally around the vein graft on one side, and the contralateral side remained unstented to serve as control. The external stent was designed to biodegrade and hence disappear within 90 days. Grafts were left in situ for 1 month in 1 group of animals, and for up to 6 months in the other group, before explantation. Graft morphometric features were assessed with computer-aided planimetry. Results: At 1 month the vein grafts fitted with the polyglactin stent demonstrated a statistically significant decrease in neointimal thickening (0.038 mm; interquartile range [IQR], 0.035-0.039 mm) compared with the unstented control grafts (0.13 mm; IQR; 0.11-0.19; P = .0012), and also in medial thickening (0.09 mm; IQR, 0.086-0.093) compared with unsheathed control grafts (0.302 mm; IQR, 0.272-0.414; P = .0012). The 6-month polyglactin stented grafts also demonstrated a statistically significant reduction in neointimal thickening (0.049 mm; IQR, 0.047-0.07; P = .0012) compared with control grafts (0.178 mm; IQR, 0.164-0.19), and also in medial thickening (0.105 mm; IQR, 0.095-0.143) compared with unstented grafts (0.421 mm; IQR, 0.35-0.44; P = .0012, Mann-Whitney U test). Conclusion: The loose-fitting biodegradable polyglactin external stent reduces porcine vein graft thickening at 1 month, which persists in the long term, even after degradation of the stent itself. This effective removal of the stent may therefore reduce the long-term risks for infection and mechanical complications associated with implanted prosthetic material while still eliciting the primary objective of preventing graft thickening over the long term. Biodegradable external stents therefore have potential advantages over permanent stent material in clinical application. Clinical Relevance: Arteriovenous bypass graft failure has a huge economic effect on health care resources, and a devastating effect o the patient. The attenuation of vein wall thickening, with subsequent luminal narrowing and occlusion, is a major goal in improving the longevity of the venous graft, to reduce secondary percutaneous and surgical interventions. The biodegradable external stent demonstrated in this study has possible clinical applications in bypass procedures with autogenous venous tissue, and represents a novel approach to ameliorating the problem of intimal hyperplasia that plagues these grafts. abstract_id: PUBMED:25886810 A Randomized Trial of External Stenting for Saphenous Vein Grafts in Coronary Artery Bypass Grafting. Background: External stents inhibit saphenous vein graft (SVG) intimal hyperplasia in animal studies. We investigated whether external stenting inhibits SVG diffuse intimal hyperplasia 1 year after coronary artery bypass graft surgery. Methods: Thirty patients with multivessel disease undergoing coronary artery bypass graft surgery were enrolled. In addition to an internal mammary artery graft, each patient received one external stent to a single SVG randomly allocated to either the right or left coronary territories; and one or more nonstented SVG served as the control. Graft patency was confirmed at the end of surgery in all patients. The primary endpoint was SVG intimal hyperplasia (mean area) assessed by intravascular ultrasonography at 1 year. Secondary endpoints were SVG failure, ectasia (>50% initial diameter), and overall uniformity as judged by Fitzgibbon classification. Results: One-year follow-up angiography was completed in 29 patients (96.6%). All internal mammary artery grafts were patent. Overall SVG failure rates did not differ significantly between the two groups (30% stented versus 28.2% nonstented SVG, p = 0.55). The SVG mean intimal hyperplasia area, assessed in 43 SVGs, was significantly reduced in the stented group (4.37 ± 1.40 mm(2)) versus nonstented group (5.12 ± 1.35 mm(2), p = 0.04). In addition, stented SVGs demonstrated marginally significant improvement in lumen uniformity (p = 0.08) and less ectasia (6.7% versus 28.2%, p = 0.05). There was some evidence that ligation of side branches with metallic clips increased SVG failure in the stented group. Conclusions: External stenting has the potential to improve SVG lumen uniformity and reduce diffuse intimal hyperplasia 1 year after coronary artery bypass graft surgery. abstract_id: PUBMED:32120449 External Stents for Vein Grafts in Coronary Artery Bypass Grafting: Targeting Intimal Hyperplasia. Introduction: Progressive saphenous vein graft (SVG) failure remains a key limitation to the long-term success of coronary artery bypass grafting (CABG). SVG disease after the first year is dominated by intimal hyperplasia, which predisposes the SVG to thrombosis and accelerated atherosclerosis. The objective of this study was to review and summarize the latest experimental and clinical data on the use of mechanical external stents for vein grafts. Methods: In January 2020, the PubMed database was searched using the terms "external stent", "CABG", "saphenous vein graft" and "intimal hyperplasia". The results were reviewed and only randomized experimental and clinical studies that analyzed the effect of external stenting on venous intimal hyperplasia were included in the analysis, together with studies that investigated the clinical benefit of external stenting. Results: Eight experimental and four clinical trials met the search criteria. Controlled trials in different large animal models concluded that external stenting significantly reduced intimal hyperplasia 3-6 months post implantation, and reduced both thrombosis rates and the development of lumen irregularities. Data from randomized controlled trials with a follow-up period of 1-4.5 years supported the pre-clinical findings and demonstrated that external stents significantly reduced vein graft disease. Conclusion: Strong evidence indicates that supporting the vein with external stents is safe and leads to clear advantages at both the anatomical and cellular levels. With the further accumulation of consistent positive results, external stenting of SVG may become the standard of care in future CABG. abstract_id: PUBMED:30453984 Long-term performance of an external stent for saphenous vein grafts: the VEST IV trial. Background: Externally stenting saphenous vein grafts reduces intimal hyperplasia, improves lumen uniformity and reduces oscillatory shear stress 1 year following surgery. The present study is the first to present the longer-term (4.5 years) performance and biomechanical effects of externally stented saphenous vein grafts. Methods: Thirty patients previously implanted with the VEST external stent in the randomized, within-patient-controlled VEST I study were followed up for adverse events; 21 of these were available to undergo coronary angiography and intravascular ultrasound. Results: Twenty-one stented and 29 nonstented saphenous vein grafts were evaluated by angiography and ultrasound at 4.5 ± 0.3 years. Vein graft failure rates were comparable between stented and nonstented grafts (30 and 23% respectively; p = 0.42). All failures were apparent at 1 year except for one additional nonstented failure at 4.5 years. In patent vein grafts, Fitzgibbon perfect patency remained significantly higher in the stented versus nonstented vein grafts (81 and 48% respectively, p = 0.002), while intimal hyperplasia area (4.27 mm2 ± 1.27 mm2 and 5.23 mm2 ± 1.83 mm2 respectively, p < 0.001) and thickness (0.36 mm ± 0.09 mm and 0.42 mm ± 0.11 mm respectively, p < 0.001) were significantly reduced. Intimal hyperplasia proliferation correlated with lumen uniformity and with the distance between the stent and the lumen (p = 0.04 and p < 0.001 respectively). Conclusions: External stenting mitigates saphenous vein graft remodeling and significantly reduces diffuse intimal hyperplasia and the development of lumen irregularities 4.5 years after coronary artery bypass surgery. Close conformity of the stent to the vessel wall appears to be an important factor. Trial Registration: NCT01415245 . Registered 11 August 2011. abstract_id: PUBMED:31471277 Outcomes of left renal vein stenting in patients with nutcracker syndrome. Background: Nutcracker syndrome (NCS) is a rare condition that can be manifested with hematuria, flank pain, pelvic varicosities, or chronic pelvic congestion related to left renal vein (LRV) compression. Open surgery, specifically LRV transposition, has been the mainstay of treatment, but in the past few years, LRV stenting has emerged as a less invasive alternative without sufficient evidence to support it. This study aimed to assess outcomes of renal vein stenting in the treatment of NCS. Methods: A retrospective chart review of patients with NCS who underwent LRV stenting between 2010 and 2018 was performed. End points were perioperative adverse outcomes, symptom relief, and stent patency. Symptom resolution was classified as complete, partial, and none on the basis of the interpretation of medical records on clinical follow-up. Standard descriptive statistics and survival analysis were used. Results: Eighteen patients (17 female; mean age, 38.1 ± 16.9 years) diagnosed with NCS and treated with LRV stenting were identified. Five of these had a prior LRV transposition that had failed within a mean of 7.0 ± 4.9 months, manifested by symptom recurrence (or no improvement) along with imaging evidence of persistently severe renal vein stenosis. Twelve patients had coexisting pelvic congestion syndrome treated with gonadal vein embolization. The most frequent sign and symptom were hematuria (10/18 patients) and flank pain (15/18 patients), respectively. All patients received self-expanding stents (mean diameter, 12.8 ± 1.6 mm), the smaller ones typically placed in the previously transposed LRVs. No perioperative complications occurred. Nine patients were discharged on the same day; the remaining patients stayed longer for pain control (mean hospital stay, 1.0 ± 1.3 days). At an average follow-up of 41.4 ± 26.6 months, 13 (72.2%) patients had symptoms resolved or improved (9 complete, 4 partial). Three of the five patients whose symptoms remained unchanged had previous LRV transposition surgery, and two of these three patients eventually required renal autotransplantation. Six of 10 patients who presented with hematuria had it resolved. Three patients underwent a stent reintervention at 5.8 months, 16.8 months, and 51.7 months because of symptom recurrence or stent restenosis. The two early ones required balloon venoplasty and the third one restenting. Two-year primary and primary assisted patency was 85.2% and 100%, respectively. No stent migration occurred. Conclusions: Endovascular treatment with renal vein stenting is safe, providing encouraging results with good midterm patency rates and symptom relief. Minimally invasive approaches may have a potential role in the treatment of NCS. Larger series and longer follow-up are needed to better assess the comparative performance against LRV transposition. abstract_id: PUBMED:38068447 External Stenting for Saphenous Vein Grafts in Coronary Surgery: A Systematic Review and Meta-Analysis. The external stenting of saphenous vein grafts (SVGs) during coronary artery bypass grafting (CABG) has been proven to reduce intimal hyperplasia (IH) in animal models, paving the way for human randomized controlled trials (RCTs) to be conducted. Herein, we performed a study-level meta-analysis to assess the impact of the Venous External SupporT (VEST) device, an external stent, on the outcomes of SVGs. A systematic search was conducted to identify all RCTs comparing VEST-stented to non-stented SVGs in patients undergoing CABG. The primary outcome was graft occlusion. The main secondary outcomes were repeat revascularization, SVG IH area, and intimal-medial thickness. Two RCTs totaling 407 patients were included. At a mean follow-up of 1.5 years, there was no difference in graft occlusion between groups (incidence rate ratio: 1.11; 95% confidence interval (CI): 0.80-1.53). The rate of repeat revascularization was also similar (odds ratio: 0.66; 95% CI: 0.27-1.64). The IH area (standardized mean difference (SMD): -0.45; 95% CI: -0.79 to -0.10) and intimal-medial thickness (SMD: -0.50; 95% CI: -0.90 to -0.10) were significantly reduced in the VEST group. Our findings show that significant reductions in the IH area and the intimal-medial thickness in VEST-stented SVGs do not currently translate into a lesser need for repeat revascularization or less graft occlusion events compared to non-stented SVGs at 1.5 years after CABG. abstract_id: PUBMED:21660272 Endovascular stenting of an extracranial-intracranial saphenous vein high-flow bypass graft: Technical case report. Background: The authors describe a case of endovascular stenting of an extracranial-intracranial saphenous vein high-flow bypass graft in the management of a complex bilateral carotid aneurysm case. Case Description: A 43-year-old woman was admitted with progressive visual field restriction and headache. Imaging studies revealed bilateral supraclinoid carotid aneurysms. The right carotid aneurysm was clipped and the left one was treated by an endovascular procedure, after performing an internal carotid artery-middle cerebral artery (ICA-MCA) saphenous vein bypass graft. A few months following the bypass procedure, a 70-80% stenosis of the graft was discovered and treated endovascularly with a stenting procedure. Follow-up at 36 months after the first operation showed the patency of the venous graft and no neurological deficits. Conclusions: Endovascular stenting of the extracranial-intracranial saphenous vein high-flow bypass graft is technically feasible when postoperative graft occlusion is discovered. Answer: Yes, external stenting does appear to reduce porcine vein graft occlusion via an action on vascular nerves. A study aimed at determining the effect of external stenting on the innervation of porcine vein grafts found that neural reorganization plays a role in vein-graft failure, possibly through the local release of mitogens. The prevention of this reorganization contributes to the inhibitory effect of the external stent on neointima formation. In vein grafts compared to ungrafted veins, there was a significant decrease in medial perivascular nerves but a dramatic increase in paravascular nerves in the adventitia. In stented vein grafts, the paravascular nerve proliferation observed in vein grafts at one month was inhibited, and neointima formation and the appearance of large paravascular nerve bundles in the adventitia of vein grafts were abolished by external stenting (PUBMED:12643468).
Instruction: Is leg-to-leg BIA valid for predicting minimum weight in wrestlers? Abstracts: abstract_id: PUBMED:15947734 Is leg-to-leg BIA valid for predicting minimum weight in wrestlers? Background: The National Collegiate Athletic Association (NCAA) and several state associations require prediction of minimum weight (MW) for collegiate and high school wrestlers. The rule requires assessment of body composition before the competitive season to minimize unhealthy weight-loss practices. Leg-to-leg bioelectrical impedance analysis (BIA) has been suggested for use with wrestlers. Purpose: To evaluate leg-to-leg BIA against a four-component (4C) criterion to determine whether leg-to-leg BIA predicted MW within acceptable limits for the sport of wrestling. Methods: Criterion MW was calculated by the 4C equation of Lohman (19) using independent measurement of body density (BD) by hydrostatic weighing, bone mineral content (BMC) by dual x-ray absorptiometry (DXA), and total body water (TBW) by deuterium dilution. Subjects were 57 wrestlers (mean +/- SD; age = 19.7 +/- 1.3 yr, height = 176.6 +/- 7.3 cm, weight = 77.7 +/- 12.4 kg). Hydration was confirmed by the NCAA guidelines. Accuracy, precision, and systematic bias were examined. Results: Comparable mean values (72.2 +/- 9.7 vs 72.2 +/- 10.3 kg), a high correlation (r = 0.94), and a regression line similar to the line of identity were found between BIA and 4C. However, large individual differences and systematic bias were seen across the range of MW. BIA predicted MW within 3.5 kg 68% of the time and within 7.0 kg 95% of the time. MW residuals ranged from -10.4 kg to +6.9 kg. When using 2.0 kg as an acceptable cutoff for error, only 40% of the BIA values were within 2.0 kg of the criterion. Conclusion: Large individual variation was seen, and, by definition, the precision was poor when estimating MW for individuals. In practical terms, the prediction error may span multiple weight classes, thus making leg-to-leg BIA unacceptable for prediction of MW in this sample under the conditions of the study. abstract_id: PUBMED:16118588 Evaluation of leg-to-leg BIA in assessing body composition of high-school wrestlers. Purpose: To evaluate the accuracy of leg-to-leg bioelectrical impedance analysis (BIA) in assessing fat-free mass (FFM) using the TBF-300WA in comparison with hydrostatic weighing (HW) and skinfolds (SK) in high-school (HS) wrestlers in a hydrated state. Methods: Body composition was determined by BIA, HW, and three-site SK in 129 HS wrestlers (mean +/- SD, age: 15.5 +/- 1.3, height 1.70 +/- 0.08 m, body mass 65.6 +/- 13.1 kg). For all methods, body density (Db) was converted to percent body fat (%BF) using the Brozek equation. Hydration state was quantified by evaluating urine specific gravity. Results: There were no significant differences for estimated FFM between BIA (56.9 +/- 8.4 kg) and HW (56.2 +/- 9.9 kg) or between SK (56.1 +/- 8.9 kg) and HW. The standard errors of estimate for FFM with HW as the reference method were 3.64 kg for BIA and 1.97 kg for SK. Significant correlations were found for FFM between HW and BIA (r = 0.93, P < 0.001), and between HW and SK (r = 0.98, P < 0.001). The FFM difference between BIA and HW was significantly correlated with the FFM average of the two methods (r = -0.39, P < 0.001). The FFM difference between SK and HW was also significantly correlated with the FFM average (r = -0.44, P < 0.001). Conclusions: This study demonstrates that SK should be considered as the preferred field method of estimating the FFM of HS wrestlers because of its greater individual precision than the leg-to-leg BIA method. Although the BIA is an attractive assessment tool and easy to use, it should be recommended as an alternative to the SK method only when trained skinfold technicians are not available. abstract_id: PUBMED:36963892 Measurement of body composition by DXA, BIA, Leg-to-leg BIA and near-infrared spectroscopy in breast cancer patients - comparison of the four methods. Background & Aims: Body composition plays a crucial role in therapy adherence and the prognosis of cancer patients. The aim of this work was to compare four measurement methods for determining body composition regarding their validity, reliability and practicability in order to be able to draft a practical recommendation as to which method is most suitable as a standard measurement method in oncology. Methods: Fat mass (FM) and fat-free mass (FFM) was estimated for 100 breast cancer patients with ages of 18-70 years during a defined 20-week inpatient and outpatient rehabilitation process after primary therapy or follow-up rehabilitation. The four methods used were dual energy X-ray absorptiometry (DXA), bioelectrical impedance analysis (BIA), leg-to-leg BIA and near-infrared spectroscopy (NIRS). At baseline (t0) and after 20 weeks (t20) the agreement between the four body composition analysis methods was quantified by pairwise method comparisons using Bland-Altman bias and limits of agreement estimates, t-tests and Lin's concordance correlation coefficients (CCCs). Results: CCCs and Bland-Altman plots indicated that DXA and BIA, DXA and NIRS as well as BIA and NIRS showed an excellent agreement concerning FM estimation at both time points (CCC>0.9). In contrast, no methods agreed with a CCC higher than 0.9 with respect to FFM estimation. However, most estimates were also significantly different between two methods, except for BIA and NIRS which yielded comparable FFM and FM estimates at both time points, albeit with large 95% limits of agreement intervals. The agreement between DXA and BIA was best in the lowest BMI tertile and worsened as BMI increased. Significant differences were also found for FFM changes measured with DXA versus BIA (mean difference -0.4 kg, p = 0.0049), DXA versus to Leg-to-leg BIA (-0.6 kg, p = 0.00073) and for FM changes measured with DXA versus Leg-to-leg BIA (0.5 kg, p = 0.0011). Conclusions: For accurate and valid body composition estimates, Leg-to-leg BIA cannot be recommended due to its significant underestimation of FM or significant overestimation of FFM, respectively. BIA and NIRS results showed good agreement with the gold standard DXA. Therefore both measurement methods appear to be very well suitable to assess body composition of oncological patients and should be used more frequently on a routine basis to monitor the body composition of breast cancer patients. abstract_id: PUBMED:11710398 A comparison of leg-to-leg bioelectrical impedance and skinfolds in assessing body fat in collegiate wrestlers. A comparison of the leg-to-leg bioelectrical impedance (BIA) system and skinfold analysis in estimating % body fat in a large number of National Collegiate Athletic Association (NCAA) collegiate wrestlers was conducted. A series of 5 cross-sectional assessments, including the NCAA Division I and III Championships, were completed throughout the 1998-1999 wrestling season with samples ranging from (N = 90-274). Body density was determined from the 3 skinfold measures using the Lohman prediction equation. BIA measurements were determined using the Tanita body fat analyzer, model 305. Significant correlations between methods ranging from (r = 0.67-0.83, p < 0.001) and low standard error of estimates (SEE) for % body fat ranging from 2.1-3.5% were found throughout the 5 assessment periods. This preliminary study demonstrated that the leg-to-leg bioelectrical impedance system accurately estimated % body fat when compared to skinfolds in a diverse collegiate wrestling population. abstract_id: PUBMED:27134197 Leg to leg bioelectrical impedance analysis of percentage fat mass in obese patients-Can it tell us more than we already know? Background: Bioelectrical impedance analysis (BIA) is well tolerated, inexpensive, and readily available, but can it be used to detect with clinical precision aberrant changes in the proportion of fat mass to fat-free mass during weight loss? Objectives: To assess the variance in percentage body fat mass explained by the readily available inputs and assess residual variance provided by leg-to-leg BIA scales. Methods: Using cross-sectional data from a cohort of 665 patients of Indian ethnicity presenting for bariatric surgery, we examine the determinants of percentage body fat as provided by leg-to-leg output from Tanita SC-330 BIA scales. Results: Four input factors-sex, weight, height, and age-contributed to provide 92% and 95% explanation in output variance for percentage fat mass (%FM) and actual fat mass, respectively, in 665 patients. Body mass index alone explained 89% and 81% of variance in %FM output for women and men, respectively. Neither weight distribution, as indicated by waist and hip circumference or waist to hip ratio, nor plasma lipids or markers of glucose metabolism contributed additional variance in %FM when controlled for the 4 key inputs. Conclusions: Simple, known input variables dominate the leg-to-leg BIA output of %FM, and this may compromise the detection of aberrant changes in %FM and fat-free mass with substantial weight loss. For clinical research, validated methods not largely dependent on known inputs should be used for evaluating changes in body composition after substantial weight loss. abstract_id: PUBMED:14707783 The effect of dehydration on wrestling minimum weight assessment. Unlabelled: Given that some wrestlers arrive for minimum weight (MW) testing in a dehydrated condition, it is important to understand the effects of dehydration on MW assessment methods. Purpose: To determine the effect of dehydration on the assessment of MW by three-site skinfolds with the Lohman formula (SF), leg-to-leg bioelectrical impedance analysis (BIA), and multifrequency bioelectrical impedance spectroscopy (BIS) compared with a four-component (4C) criterion. Methods: Twenty-two male collegiate wrestlers (mean +/- SD, age: 19.9 +/- 1.4 yr, height: 174.0 +/- 6.8 cm, body mass: 77.4 +/- 9.1 kg) had their body composition assessed by the 4C criterion, hydrostatic weighing (HW), SF, BIA, and BIS in euhydration (EUH) and dehydration (DEH). Subjects dehydrated 2-5% of body weight through fluid restriction and exercise in a hot environment. Results: In EUH, the total error (TE) for HW (1.75 kg) and SF (2.15 kg) were not significantly different, but the TE for HW and SF methods were significantly lower than the TE for both BIS (3.68 kg) and BIA (3.77 kg). In DEH, SF, BIA, and BIS methods had a TE approaching or exceeding 4 kg (8.8 lb). Dehydration increased the TE for SF and BIA through an artificial lowering of body weight and for BIS by an increased error in intracellular water prediction. Conclusion: Acute thermal dehydration violates assumptions necessary for the accurate and precise prediction of MW by SF, leg-to-leg BIA, and multifrequency BIS. abstract_id: PUBMED:16686564 Validity of leg-to-leg bioelectrical impedance measurement in highly active women. The aim of this study was to compare the validity of the leg-to-leg bioelectrical impedance analysis (BIA) method with that of anthropometry using hydrostatic weighing (HW) as the criterion test. A secondary objective was to cross-validate previously developed anthropometric regression equations as well as to develop a new regression equation formula based on the anthropometric data collected in this study. Three methods for assessing body composition (HW, BIA, and anthropometric) were applied to 60 women university athletes. The means and standard deviations of age, weight, height, and body mass index (BMI) of athletes were as follows: age, 20.70 +/- 1.43; weight, 56.19 +/- 7.83 kg; height, 163.33 +/- 6.11 cm; BMI, 21.01 +/- 2.63 kg x m(-2). Leg-to-leg BIA (11.82 +/- 2.39) has shown no statistical difference between percentage body fat determined by HW (11.63 +/- 2.42%) in highly active women (p > 0.05). This result suggests that the leg-to-leg BIA and HW methods were somewhat interchangeable in highly active women (R = 0.667; standard error of estimate [SEE] = 1.81). As a result of all cross-validation analyses, anthropometric and BIA plus anthropometric results have generally produced lower regression coefficients and higher SEEs for highly active women between the ages of 18 and 25 years. The regression coefficients (0.903, 0.926) and SEE (1.08, 0.96) for the new regression formulas developed from this study were better than the all the other formulas used in this study. abstract_id: PUBMED:10557029 Validation of a leg-to-leg bioimpedance analysis system in assessing body composition in postmenopausal women. Objectives: To evaluate the validity of a leg-to-leg bioimpedance analysis (BIA) system in predicting body composition as measured by dual-energy X-ray absorptiometry (DXA) in postmenopausal women. Subjects And Methods: Body fat mass (FM), %Fat and fat free mass (FFM) were measured in 124 postmenopausal women (age: 51-63 y, body mass index (BMI): 17-38 kg/m2) first by the leg-to-leg BIA system, and then by DXA as reference method. Bland-Altman analysis was used to determine the bias and 95% limits of agreement between the two methods for the assessment of the individual. Precision error (CV%) of the BIA system was obtained by repeated measurements with intermediate repositioning. Results: The leg-to-leg BIA system had a high reproducibility with within-day CVs being 0.6% for FFM and 1.1% for FM, and between-day CVs about twice that. The impedance index (Ht2/Z) obtained by the leg-to-leg BIA was moderately correlated to FFM measured by DXA (r=0.66). A significant, systematic bias was observed between the two methods. The BIA system overestimated FM by a mean of 3.1 kg, and underestimated FFM by 2.7 kg. The analysis of 95% limits of agreement showed that for most individuals, %Fat estimated by the BIA might differ from that measured by DXA by 12% below to 45% above, indicating the lack of agreement between the two methods for the assessment of the individual. Conclusions: The leg-to-leg BIA system can provide simple, rapid and highly reproducible measurements of body composition for groups, but it has limited accuracy for the assessment of the individual. Population-specific equations will be needed to improve its accuracy in estimating body composition in postmenopausal women. abstract_id: PUBMED:9107636 Bioimpedance analysis: evaluation of leg-to-leg system based on pressure contact footpad electrodes. Conventional single frequency bioimpedance analysis (BIA) systems require technician placement of arm and leg gel electrodes, a suitable location for recumbent measurements, and a separate measurement of body weight. The aim of this study was to evaluate a new single frequency 50 KHz leg-to-leg bioimpedance analysis (BIA) system combined with a digital scale that employs stainless steel pressure-contact foot pad electrodes for standing impedance and body weight measurements. Healthy adults were evaluated for 1) electrode validity and 2) potential for body component estimation. Pressure-contact foot-pad electrode measured impedance was highly correlated with (N = 9, r = 0.99, P < 0.001) impedance measured using conventional gel electrodes applied to the plantar surface of both lower extremities; mean (+/-SD) impedance was systematically higher by about 15 ohms for pressure contact electrodes (526 +/- 56 ohms vs 511 +/- 59 ohms; P < 0.001). Second, the relationship between stature-adjusted leg-to-leg impedance (H2/Z) measured by the new system and two body composition components (total body water by 3H2O dilution (N = 144); and fat-free body mass, by underwater weighing and dual x-ray absorptiometry (N = 231)) was modeled using multiple regression analysis. Correlation coefficients for H2/Z alone versus body composition components were lower for leg-to-leg BIA than for arm-to-leg BIA; correlation coefficients and SEEs became similar for the leg-to-leg and arm-to-leg BIA systems with addition of three covariates (age, gender, and waist/hip circumference ratio) to regression models. The leg-to-leg pressure contact electrode BIA system has overall performance characteristics for impedance measurement and body composition analysis similar to conventional arm-to-leg gel electrode BIA and offers the advantage of increased speed and ease of measurement. abstract_id: PUBMED:35149243 Does the way of weight-bearing matter? Single-leg and both-leg standing radiographic and pedobarographic differences in pediatric flatfoot. Background: An exact definition is lacking for the term "weight-bearing" or different standing modalities when implementing foot radiographs for children and adults; moreover, only few studies have investigated the relationship between radiographic and pedobarographic measurements. Research Question: We hypothesized that the method of weight-bearing in single-leg and both-leg standing positions could influence the measurement results in radiographs and the distribution of foot pressure. Methods: This prospective study evaluated 33 children (66 feet) with flexible flatfoot deformities scheduled for subtalar screw arthroereisis surgery. Radiographs in the lateral and anteroposterior (AP) views were assessed independently in the single-leg and both-leg standing positions. Static pedobarography was performed as that for measuring weight-bearing. Standardized radiographic angles and pedobarographic data were analysed and correlated. Results: There were differences in radiographic measurements between the single-leg and both-leg standing positions, including the AP talocalcaneal angle (p = 0.032), AP talus-first metatarsal base angle (p = 0.003), AP talus-first metatarsal angle (p = 0.003), lateral calcaneal pitch angle (p = 0.001), talus-first metatarsal index (p = 0.004), and talocalcaneal index (p = 0.029). Moreover, differences between these two standing modalities were found in most of the static pedobarographic data, including the contact area (p = 0001), maximal force (p = 0.001), and peak pressure (p = 0.007). Overall, medial foot pressure increased more in both-leg standing than in the single-leg standing position, whereas radiographic measurements showed a more pronounced flatfoot deformity in the single-leg standing position. The AP talus-first metatarsal angle was the only angle or index with a significant association to some pedobarographic measurements in both standing modalities. Significance: As there are significant differences between single-leg standing and both-leg standing radiographic and static pedobarographic values, observers have to be precise in the definition of "weight-bearing" to gain reproducible and comparable study values in children and adults. We recommend acquiring both-leg standing foot radiographs because children with flexible flatfeet can stand more steadily in this position than in the single-leg standing position. Answer: The validity of leg-to-leg bioelectrical impedance analysis (BIA) for predicting minimum weight in wrestlers has been investigated in several studies, with mixed results. One study found that while leg-to-leg BIA predicted minimum weight within 3.5 kg 68% of the time and within 7.0 kg 95% of the time, the prediction error could span multiple weight classes, making it unacceptable for prediction of minimum weight in wrestlers under the conditions of the study (PUBMED:15947734). Another study suggested that skinfolds should be considered the preferred field method for estimating fat-free mass of high school wrestlers due to its greater individual precision compared to leg-to-leg BIA (PUBMED:16118588). Furthermore, a study comparing different methods for assessing body composition in breast cancer patients found that leg-to-leg BIA significantly underestimated fat mass or overestimated fat-free mass, suggesting it cannot be recommended for accurate and valid body composition estimates (PUBMED:36963892). In contrast, another study found that leg-to-leg BIA accurately estimated percent body fat when compared to skinfolds in a diverse collegiate wrestling population (PUBMED:11710398). However, a study assessing the variance in percentage body fat mass explained by leg-to-leg BIA in obese patients found that simple, known input variables dominated the BIA output, which may compromise the detection of aberrant changes in body composition with substantial weight loss (PUBMED:27134197). Additionally, the effect of dehydration on wrestling minimum weight assessment showed that dehydration increased the total error for leg-to-leg BIA, making it less reliable (PUBMED:14707783). In conclusion, the validity of leg-to-leg BIA for predicting minimum weight in wrestlers appears to be limited, with concerns about its precision and accuracy, especially when individual variation and factors like hydration status are considered. Alternative methods, such as skinfolds, may offer greater precision for this purpose.
Instruction: Can dual-energy computed tomography improve visualization of hypoenhancing liver lesions in portal venous phase? Abstracts: abstract_id: PUBMED:27840263 Can dual-energy computed tomography improve visualization of hypoenhancing liver lesions in portal venous phase? Assessment of advanced image-based virtual monoenergetic images. Purpose: The purpose was to assess image quality of portal-venous phase dual-energy computed tomography (DECT) for liver lesions. Methods: We performed 120-kVp-equivalent linear-blended (LB) and monoenergetic reconstructions from 40 to 190 keV by standard (VMI) and advanced virtual monoenergetic (VMI+) methods. Diagnostic performance, and quantitative and qualitative image analyses were assessed and compared. Results: Liver contrast to noise ratio peaked at 40 keV_VMI+, while image quality and reader preference peaked at 50 keV_VMI+. 50 keV_VMI+ scored overall higher diagnostic performance: lesion sensitivity 95.4% vs. 83.3% for both 75 keV_VMI and LB. Conclusions: DECT improves assessment of hypoenhancing liver lesions on portal venous phase. 50 keV_VMI+ demonstrated the highest image quality and diagnostic performance over VMI and LB. abstract_id: PUBMED:28495460 Dual-energy computed tomography for the detection of focal liver lesions. Objective: To qualitatively and quantitatively explore the spectral study of focal liver lesions, comparing it with the usual polychromatic assessment with single-energy computed tomography. Material And Methods: We prospectively studied 50 patients with at least one focal liver lesion who were referred for abdominal multidetector computed tomography with intravenous contrast material. The portal phase was acquired with dual energy sources. The density of the lesions and of the surrounding liver parenchyma was measured both in the baseline polychromatic acquisition and in the posterior monochromatic reconstructions at 40 keV, 70 keV, and 140 keV. Spectral curves were traced and the dual-energy indices and contrast-to-noise ratio were calculated. Lastly, the quality of the images and the detectability of the lesions were assessed qualitatively. Results: Densitometric differences between the different types of lesions (avascular and vascularized) and the liver were greater at low energy levels (left side of the spectral curve) than in the polychromatic evaluation. In the subjective assessment, the 40keV energy level had the greatest lesion detectability. Conclusions: Monochromatic spectral study with dual-energy computed tomography provides better lesion detectability at 40keV compared to that provided by the ordinary polychromatic evaluation. abstract_id: PUBMED:35989041 Hepatobiliary Dual-Energy Computed Tomography. Dual-energy computed tomography (DECT) increases confidence in hepatobiliary computed tomography (CT) evaluation by boosting visible iodine enhancement and differentiating between materials based on relative attenuation of 2 different X-ray energy spectra. Image reconstructions from DECT scans improve the detection and characterization of focal liver lesions, allows for quantification of diffuse liver disease, and reveals gallstones that may be missed on standard CT imaging. Our article aims to illustrate the basic concepts of DECT and types of image reconstruction relevant for the assessment of hepatobiliary diseases. We then review literature on the use of DECT for evaluating focal and diffuse hepatobiliary diseases. abstract_id: PUBMED:24041437 Utility of dual phase liver CT for metastatic melanoma staging and surveillance. Rationale And Objectives: To evaluate the clinical utility of dual phase computed tomography (CT) for assessment of hepatic metastases in patients with metastatic melanoma. Materials And Methods: A retrospective case-control study of dual phase CT examinations consisting of late hepatic arterial and portal venous phases performed on patients with melanoma was undertaken. In 2010, 420 dual phase CT examinations were performed on 188 patients. Of these, 46 CT examinations on 24 patients with hepatic metastases were combined with 52 control studies for evaluation. Two blinded reviewers independently evaluated single portal venous phase alone and dual phase imaging on separate occasions. The presence of hepatic lesions, the conspicuity of the lesions, and the likelihood that the detected lesions were metastases was recorded. Agreement between readers, sensitivity and specificity was calculated. Results: In no case was hepatic metastatic disease only apparent on arterial phase imaging. Arterially enhancing hepatic lesions only visible on the arterial phase or much more conspicuous on the arterial phase were present in 10 studies (10%), all of which were benign. Liver metastases were rated as being more accurately assessed on the portal venous phase in up to 100%. In a per scan analysis dual phase and venous phase imaging had similar sensitivities of 96% (95%, CI: 86-100) and 98% (95%, CI: 89-100), respectively. Conclusion: Single portal venous phase imaging is adequate for staging and surveillance in patients with metastatic melanoma. abstract_id: PUBMED:29369944 Virtual Monoenergetic Images From a Novel Dual-Layer Spectral Detector Computed Tomography Scanner in Portal Venous Phase: Adjusted Window Settings Depending on Assessment Focus Are Essential for Image Interpretation. Objective: We aimed to determine optimal window settings for conventional polyenergetic (PolyE) and virtual monoenergetic images (MonoE) derived from abdominal portal venous phase computed tomography (CT) examinations on a novel dual-layer spectral-detector CT (SDCT). Methods: From 50 patients, SDCT data sets MonoE at 40 kiloelectron volt as well as PolyE were reconstructed and best individual window width and level values manually were assessed separately for evaluation of abdominal arteries as well as for liver lesions. Via regression analysis, optimized individual values were mathematically calculated. Subjective image quality parameters, vessel, and liver lesion diameters were measured to determine influences of different W/L settings. Results: Attenuation and contrast-to-noise values were significantly higher in MonoE compared with PolyE. Compared with standard settings, almost all adjusted W/L settings varied significantly and yielded higher subjective scoring. No differences were found between manually adjusted and mathematically calculated W/L settings. Conclusions: PolyE and MonoE from abdominal portal venous phase SDCT examinations require appropriate W/L settings depending on reconstruction technique and assessment focus. abstract_id: PUBMED:29708205 Simultaneous dual-contrast multi-phase liver imaging using spectral photon-counting computed tomography: a proof-of-concept study. Background: To assess the feasibility of dual-contrast spectral photon-counting computed tomography (SPCCT) for liver imaging. Methods: We present an SPCCT in-silico study for simultaneous mapping of the complementary distribution in the liver of two contrast agents (CAs) subsequently intravenously injected: a gadolinium-based contrast agent and an iodine-based contrast agent. Four types of simulated liver lesions with a characteristic arterial and portal venous pattern (haemangioma, hepatocellular carcinoma, cyst, and metastasis) are presented. A material decomposition was performed to reconstruct quantitative iodine and gadolinium maps. Finally, a multi-dimensional classification algorithm for automatic lesion detection is presented. Results: Our simulations showed that with a single-scan SPCCT and an adapted contrast injection protocol, it was possible to reconstruct contrast-enhanced images of the liver with arterial distribution of the iodine-based CA and portal venous phase of the gadolinium-based CA. The characteristic patterns of contrast enhancement were visible in all liver lesions. The approach allowed for an automatic detection and classification of liver lesions using a multi-dimensional analysis. Conclusions: Dual-contrast SPCCT should be able to visualise the characteristic arterial and portal venous enhancement with a single scan, allowing for an automatic lesion detection and characterisation, with a reduced radiation exposure. abstract_id: PUBMED:28270287 Correlation Between Dual-energy and Perfusion CT in Patients with Focal Liver Lesions Using Third-generation Dual-source CT Scanner. Objective To compare measurements of dual-energy CT iodine map parameters and liver perfusion CT parameters in patients with focal liver lesions using a third-generation dual-source CT scanner. Methods Between November 2015 and August 2016,33 patients with non-cystic focal lesions of liver were enrolled in this study. CT examinations were performed with a third-generation dual-source CT. The study CT protocol included a perfusion CT and dual-energy arterial and portal venous scans,with a time interval of 15 minutes. Iodine attenuation was measured at five region of interests including areas of high,medium,and low density within the lesion,as well as right and left liver parenchyma from the iodine map,while arterial liver perfusion (ALP),portal venous liver perfusion (PVP),and hepatic perfusion index (HPI) at the same location were measured from perfusion CT. The Pearson product-moment correlation coefficient was used to evaluate the relationship between iodine attenuation and perfusion parameters. Results The iodine attenuation at arterial phase showed significant intra-individual correlation with ALP (r=0.812,95% CI=0.728-0.885,P<0.001)and PVP (r=-0.209,95% CI=-0.323--0.073,P=0.007),but not significantly correlated with HPI (r=0.058,95% CI=0.046-0.498,P=0.461). The iodine attenuation at portal venous phase showed significant correlation with PVP (r=0.214,95% CI=0.072-0.361,P=0.005) but not with HPI(r=0.036,95% CI=-0.002-0.242,P=0.649). The mean effective dose of arterial phase and portal venous phase of dual-energy CT together [(3.53±1.17)mSv] was significantly lower than that of the perfusion CT [(14.53±0.45)mSv](t=25.212,P<0.001). Conclusion Iodine attenuation from arterial phase of dual energy CT demonstrates significant correlation with ALP and PVP,and iodine attenuation from portal venous phase demonstrates significant correlation with PVP. abstract_id: PUBMED:6618089 Portal architecture: a differential guide to fatty infiltration of the liver on computed tomography. Fatty infiltration of the liver, like fatty lesions elsewhere in the body, typically appears as low-density areas on computed tomography (CT). Fatty infiltration of the liver should be considered in the differential diagnosis of homogeneous low-density hepatic lesions regardless of distribution or size. As a physiologic rather than anatomic change, fatty infiltration of the liver characteristically leaves the portal venous architecture unaltered. Although fatty infiltration of the liver typically presents with complete or near complete involvement, other unusual patterns may be encountered. In these atypical cases, recognition of the normal portal structures on CT aids in correct diagnosis. abstract_id: PUBMED:34666645 A Case of Intratumoral and Hepatic Portal Venous Gas in Patient with Gastric Cancer Liver Metastases. Background: Hepatic portal venous gas [HPVG] is not a common finding in daily practice. It is usually associated with mesenteric ischemia and bowel necrosis in adults. Combination of intratumoral gas in metastatic liver lesions with HPVG is quite rare and thought to be associated with chemotherapy-induced necrosis and infection of the necrotized metastasis Objective: Here we present a case of gastric adenocarcinoma with portal venous and intratumoral gas in metastatic liver lesions due to the infected necrosis. Case Presentation: The patient was presented to the emergency room with severe abdominal pain and septic condition after the second round of chemotherapy. Hepatic portal venous and intratumoral gas in metastatic liver lesions due to the infected necrosis of liver metastasis was detected in computed tomography images. There were no findings of mesenteric ischemia both clinically and radiologically. Massive intratumoral infected necrosis in metastatic liver lesions and fistulization to the right portal vein branches were detected on abdominopelvic CT. Secondary infection of the necrotic metastases and fistulization to portal vein branches was believed to cause the air in metastatic liver masses and portal venous gas. Conclusion: Infected necrosis of metastatic liver lesions and fistulizations to the portal venous structures is extremely rare. Clinicians and radiologists should be aware of such a rare complication because early detection is crucial for patient management.. abstract_id: PUBMED:27235872 Virtual unenhanced imaging of the liver with third-generation dual-source dual-energy CT and advanced modeled iterative reconstruction. Objectives: To compare image quality and diagnostic accuracy for the detection of liver lesions of virtual unenhanced (VU) images based on third-generation dual-source dual- energy computed tomography (DECT) compared to conventional unenhanced (CU) images. Methods: Thirty patients underwent triphasic abdominal CT consisting of single-energy CU (120kV, 147 ref.mAs) and dual-energy CT arterial and portal-venous phase acquisitions (100/Sn150kV, 180/90 ref.mAs). VU images were generated from arterial (AVU) and portal venous (PVU) phases. CU, AVU and PVU datasets were reconstructed. Quantitative image quality analysis was performed and two abdominal radiologists independently analyzed all datasets to evaluate image quality and identify liver lesions. Radiation dose was recorded and potential radiation dose reduction was estimated. Results: Image quality was rated diagnostic in 100% of the VU datasets. The mean subjective image quality of the CU datasets was higher than that of VU images (p<0.0001). No significant difference was observed in the mean attenuation values of the liver parenchyma (p>0.99) and hypoattenuating liver lesions (p≥0.21) between CU, AVU and PVU. However, a significant reduction in the attenuation values of calcified lesions (p<0.0001), metallic clips (p<0.0001) and gallstones (p≤0.047) was observed in the AVU and PVU images compared with CU images. A total of 122 liver lesions were found in 25 patients. VU images were more sensitive than CU images for detection of small hypoattenuating liver lesions (≤1cm). However, CU images were more sensitive than VU for calcified liver lesions. The mean radiation dose reduction achievable by avoiding the unenhanced acquisition was 32.9%±1.1% (p<0.01). Conclusions: Third-generation DSCT VU images of the liver provide diagnostic image quality and improve small (≤1cm) liver lesion detection; however calcified liver lesions can be missed due to complete subtraction. Answer: Yes, dual-energy computed tomography (DECT) can improve the visualization of hypoenhancing liver lesions in the portal venous phase. Studies have shown that DECT, particularly with the use of advanced virtual monoenergetic image (VMI+) reconstructions, enhances the assessment of hypoenhancing liver lesions during the portal venous phase. Specifically, image quality and diagnostic performance were found to be highest at 50 keV_VMI+, with a lesion sensitivity of 95.4% compared to 83.3% for both 75 keV_VMI and linear-blended (LB) images (PUBMED:27840263). Additionally, monochromatic spectral study with DECT at 40 keV energy level demonstrated greater lesion detectability than the ordinary polychromatic evaluation (PUBMED:28495460). Furthermore, DECT has been reported to increase confidence in hepatobiliary CT evaluation by enhancing visible iodine enhancement and differentiating between materials based on relative attenuation of two different X-ray energy spectra, which improves the detection and characterization of focal liver lesions (PUBMED:35989041).
Instruction: Economic evaluations conducted by Canadian health technology assessment agencies: where do we stand? Abstracts: abstract_id: PUBMED:18828938 Economic evaluations conducted by Canadian health technology assessment agencies: where do we stand? Objectives: To examine the production of Health Technology Assessments (HTAs) with economic evaluations (EEs) conducted by Canadian HTA agencies. Methods: This research used a three-step approach: (i) the Web sites of five Canadian organizations promoting HTA were searched to identify HTA reports with EEs; (ii) HTA agencies were surveyed to verify that our information was complete with respect to HTA activities and to describe the factors that influence the HTA process in Canada (i.e., selection of HTA topics, execution, dissemination of results and future trends); (iii) HTAs with EEs were appraised in terms of study design, retrieval of clinical and economic evidence, resource utilization and costing, effectiveness measures, treatment of uncertainty as well as presence of a budget impact analysis (BIA), and policy recommendations. Results: Two hundred forty-nine HTA reports were identified of which 19 percent included EEs (n = 48). Decision analytic techniques were used in approximately 75 percent of the forty-eight EEs and probabilistic sensitivity analyses were commonly used by all agencies to deal with parameter uncertainty. BIAs or policy recommendations were given in 50 percent of the evaluations. Differences between agencies were observed in terms of selection of topics, focus of assessment and production of HTA (e.g., in-house activities). Major barriers to the conduct of HTAs with EEs were capacity, a lack of interest by decision makers and a lack of robust clinical information. Conclusions: The results of this research point to the need for increased HTA training, collaboration, evidence synthesis, and use of pragmatic "real world" evaluations. abstract_id: PUBMED:37171709 An introduction to health technology assessment and health economic evaluation: an online self-learning course. Cardiovascular diseases impose an enormous burden on patients and society. New health technologies promise to lower this burden; however, novel treatments often come at a high cost. In the Netherlands, health technology assessment (HTA) is increasingly being used to inform policy bodies about the optimal distribution of scarce healthcare resources and to guide decision-making about financing and reimbursement. In particular, economic evaluations, as one pillar of HTA, are frequently used to compare the costs and effects of different interventions. This paper aims to define HTA and its relevance to healthcare policy as well as providing a comprehensive overview of the methodology of economic evaluations targeting health professionals and researchers with limited prior knowledge of this subject. Accordingly, different types of economic evaluations are introduced, together with their respective costs and outcomes. Further, the results of economic evaluations are explained, along with techniques for performing them and methods for coping with uncertainty. In addition to this paper-based learning format, each chapter is complemented by a video lecture with further information and practical examples, helping to better understand and analyse health economic studies. abstract_id: PUBMED:38462223 Methods to include environmental impacts in health economic evaluations and health technology assessments: a scoping review. Objective: The environmental impacts of healthcare are an important factor that should be considered during health technology assessments. This study aims to summarise the evidence that exists about methods to include environmental impacts in health economic evaluations and health technology assessments. Methods: We identified records for screening using an existing scoping review and a systematic search of academic databases and grey literature up to September 2023. We screened the identified records for eligibility and extracted data using a narrative synthesis approach. The review was conducted following the JBI Manual for Evidence Synthesis and reported according to the PRISMA-ScR checklist. Results: We identified 2,898 records and assessed the full text of 114, of which 54 were included in this review. Ten methods were identified to include environmental impacts in health economic evaluations and health technology assessments. Methods included converting environmental impacts to dollars or disability adjusted life years and including them in a cost-effectiveness, cost-utility, or cost-benefit analysis; calculating an incremental carbon footprint effectiveness ratio or incremental carbon footprint cost ratio; incorporating impacts as one criteria of a multi-criteria decision analysis; and freely considering impacts during health technology assessment deliberation processes. Conclusions: Methods to include environmental impacts in health economic evaluations and health technology assessments exist but have not been tested for widespread use by health technology assessment agencies. Further research and implementation work is needed to determine which method can best aid decision makers to choose low environmental impact healthcare interventions. abstract_id: PUBMED:35266523 A systematic review of scope and quality of health economic evaluations conducted in Ethiopia. There has been an increased interest in health technology assessment and economic evaluations for health policy in Ethiopia over the last few years. In this systematic review, we examined the scope and quality of healthcare economic evaluation studies in Ethiopia. We searched seven electronic databases (PubMed/MEDLINE, EMBASE, PsycINFO, CINHAL, Econlit, York CRD databases and CEA Tufts) from inception to May 2021 to identify published full health economic evaluations of a health-related intervention or programme in Ethiopia. This was supplemented with forward and backward citation searches of included articles, manual search of key government websites, the Disease Control Priorities-Ethiopia project and WHO-CHOICE programme. The quality of reporting of economic evaluations was assessed using the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) checklist. The extracted data were grouped into subcategories based on the subject of the economic evaluation, organized into tables and reported narratively. This review identified 34 full economic evaluations conducted between 2009 and 2021. Around 14 (41%) of studies focussed on health service delivery, 8 (24%) on pharmaceuticals, vaccines and devices, and 4 (12%) on public-health programmes. The interventions were mostly preventive in nature and focussed on communicable diseases (n = 19; 56%) and maternal and child health (n = 6; 18%). Cost-effectiveness ratios varied widely from cost-saving to more than US $37 313 per life saved depending on the setting, perspectives, types of interventions and disease conditions. While the overall quality of included studies was judged as moderate (meeting 69% of CHEERS checklist), only four out of 27 cost-effectiveness studies characterized heterogeneity. There is a need for building local technical capacity to enhance the design, conduct and reporting of health economic evaluations in Ethiopia. abstract_id: PUBMED:32723809 Generating evidence to inform health technology assessment of treatments for SLE: a systematic review of decision-analytic model-based economic evaluations. This study aimed to understand and appraise the approaches taken to handle the complexities of a multisystem disease in published decision-analytic model-based economic evaluations of treatments for SLE. A systematic review was conducted to identify all published model-based economic evaluations of treatments for SLE. Treatments that were considered for inclusion comprised antimalarial agents, immunosuppressive therapies, and biologics including rituximab and belimumab. Medline and Embase were searched electronically from inception until September 2018. Titles and abstracts were screened against the inclusion criteria by two reviewers; agreement between reviewers was calculated according to Cohen's κ. Predefined data extraction tables were used to extract the key features, structural assumptions and data sources of input parameters from each economic evaluation. The completeness of reporting for the methods of each economic evaluation was appraised according to the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement. Six decision-analytic model-based economic evaluations were identified. The studies included azathioprine (n=4), mycophenolate mofetil (n=3), cyclophosphamide (n=2) and belimumab (n=1) as relevant comparator treatments; no economic evaluation estimated the relative cost-effectiveness of rituximab. Six items of the CHEERS statement were reported incompletely across the sample: target population, choice of comparators, measurement and valuation of preference-based outcomes, estimation of resource use and costs, choice of model, and the characterisation of heterogeneity. Complexity in the diagnosis, management and progression of disease can make decision-analytic model-based economic evaluations of treatments for SLE a challenge to undertake. The findings from this study can be used to improve the relevance of model-based economic evaluations in SLE and as an agenda for research to inform future health technology assessment and decision-making. abstract_id: PUBMED:20942986 Identifying economic evaluations for health technology assessment. Objectives And Methods: Health technology assessment (HTA) often requires the identification and review of economic evaluations and models. This study surveys the available specific and general resources to search to identify economic evaluations. It also provides information on efficient searching of those resources and comments on the current evidence-base. Results: Published checklists recommend searching for economic evaluations in specific information resources which collect economic evaluations such as NHS EED and HEED, followed by top-up searches of large biomedical bibliographic databases (such as MEDLINE and EMBASE). Other resources such as the HTA and DARE databases can yield reports of economic evaluations. Searches within NHS EED and HEED can be made more efficient by using database-specific search options. Searches within large biomedical databases such as MEDLINE and EMBASE require the use of economic search terms called search filters. Search filters are highly sensitive, retrieving most economic evaluations, but suffer from low precision returning many irrelevant records which need to be assessed. Conclusions: It is relatively easy to identify rapidly a high proportion of economic evaluations but more research is required to improve the efficiency of this process. There are key high yield resources to search but more evidence is required on their overlap and unique contribution to searches. The value of other resources, particularly those providing access to gray literature, should be explored. Research into efficient retrieval requires clear definitions of economic evaluations to allow comparison across studies. abstract_id: PUBMED:32981532 Assessing the capacity of Ghana to introduce health technology assessment: a systematic review of economic evaluations conducted in Ghana. Objectives: Ghana is in the process of formally introducing health technology assessment (HTA) for health decision making. Similar to other low- and middle-income countries, evidence suggests that the lack of data and human capacity is a major barrier to the conduct and use of HTA. This study assessed the current human and data capacity available in Ghana to undertake HTA. Methods: As economic evaluation (EE) forms an integral part of HTA, a systematic review of EE studies undertaken in Ghana was conducted to identify the quality and number of studies available, methods and source of data used, and local persons involved. The literature search was undertaken in EMBASE (including MEDLINE), PUBMED, and Google Scholar. The quality of studies was evaluated using the Consolidated Health Economics Evaluation Reporting Standards. The number of local Ghanaians who contributed to authorship were used as a proxy for assessing human capacity for HTA. Results: Thirty-one studies were included in the final review. Overall, studies were of good quality. Studies derived their effectiveness, resource utilization and cost data mainly from Ghana. The most common source of cost data was from the National Health Insurance Scheme pricing list for medicines and tariffs. Effectiveness data were mostly derived from either single study or intervention programs. Sixty out of 199 authors were Ghanaians (30 percent); these authors were mostly involved in data collection and study conceptualization. Conclusions: Human capacity for HTA in Ghana is limited. To introduce HTA successfully in Ghana, policy makers would need to develop more local capacity to undertake Ghanaian-specific HTA. abstract_id: PUBMED:19804434 Impact of the 1997 Canadian guidelines on the conduct of Canadian-based economic evaluations in the published literature. Objective: To assess the impact of the 1997 Canadian guidelines on the methods and presentation of economic evaluations conducted from a Canadian perspective in the published literature. Methods: A systematic literature review was conducted to identify health technology economic evaluations conducted from a Canadian perspective published in peer-reviewed journals between 2001 and 2006. To investigate the impact of the 1997 Canadian Coordinating Office of Health Technology Assessment guidelines, each included study was assessed against 17 of the 25 recommendations. Results: Of the 153 included studies, a base set of 9 methodological standards, as outlined by the 1997 guidelines, were followed by over 50% of the studies including: indications, outcomes for cost utility analysis, outcomes for cost benefit analysis, discounting future cost and outcomes, cost identification and valuation, evaluating uncertainty and disclosing funding relationships. Main divergences from the guidelines were found for analytic technique (38%), study perspective (23%), source of preferences (8%), equity (7%), and cost measurement (24%). Conclusion: The current assessment has shown that the 1997 Canadian guidelines have set a minimum methodological standard within the community of "doers" conducting economic analyses from a Canadian perspective. Although there was divergence from some of the recommendations, the majority were reflected as changes in the 2006 Canadian guidelines. abstract_id: PUBMED:35184733 Assessing transferability in systematic reviews of health economic evaluations - a review of methodological guidance. Objective: For assessing cost-effectiveness, Health Technology Assessment (HTA) organisations may use primary economic evaluations (P-HEs) or Systematic Reviews of Health Economic evaluations (SR-HEs). A prerequisite for meaningful results of SR-HEs is that the results from existing P-HEs are transferable to the decision context (e.g, HTA jurisdiction). A particularly pertinent issue is the high variability of costs and resource needs across jurisdictions. Our objective was to review the methods documents of HTA organisations and compare their recommendations on considering transferability in SR-HE. Methods: We systematically hand searched the webpages of 158 HTA organisations for relevant methods documents from 8th January to 31st March 2019. Two independent reviewers performed searches and selected documents according to pre-defined criteria. One reviewer extracted data in standardised and piloted tables and a second reviewer checked them for accuracy. We synthesised data using tabulations and in a narrative way. Results: We identified 155 potentially relevant documents from 63 HTA organisations. Of these, 7 were included in the synthesis. The included organisations have different aims when preparing a SR-HE (e.g. to determine the need for conducting their own P-HE). The recommendations vary regarding the underlying terminology (e.g. transferability/generalisability), the assessment approaches (e.g. structure), the assessment criteria and the integration in the review process. Conclusion: Only few HTA organisations address the assessment of transferability in their methodological recommendations for SR-HEs. Transferability considerations are related to different purposes. The assessment concepts and criteria are heterogeneous. Developing standards to consider transferability in SR-HEs is desirable. abstract_id: PUBMED:38174139 The Challenges in Conducting Economic Evaluations for Rehabilitation Technologies. Background: Health technology assessment (HTA) is an important evidentiary component in the decision-making process for the adoption of new healthcare technologies to the healthcare system. Economic evidence is an important consideration in HTAs. Recent systematic reviews in rehabilitation have shown a limited number of economic evaluations and high levels of uncertainty in the results. It is unclear whether there are challenges related to the field of rehabilitation and the technologies used in rehabilitation that inhibit the development of economic evidence. Methods: In this study, economic evaluations in rehabilitation were reviewed. This was followed by a summary of the latest evidence on the challenges of conducting HTA for medical devices and the relationship with rehabilitation technologies. Finally, several considerations are suggested to improve the HTA of technologies that target rehabilitation. A literature review of Google Scholar and PubMed was conducted to identify reviews in economic evaluations in rehabilitation. A recent review on the barriers to HTA of medical devices in general was also examined to identify similar concerns with rehabilitation technologies. Results: The challenges identified include the lack of high-quality studies, the interaction between the technology and the user, the short product life cycle, and estimation of efficacy in technologies with multiple target populations. Conclusion: Overall, many of the challenges in evaluating medical devices also apply to rehabilitation interventions. Further research and discussion on these issues are necessary to increase the clinical evidence for rehabilitation technologies, strengthen the development of HTAs, and facilitate the use of technologies to improve the health of individuals requiring rehabilitation. Answer: The production of Health Technology Assessments (HTAs) with economic evaluations (EEs) conducted by Canadian HTA agencies has been examined, revealing that out of 249 HTA reports identified, 19 percent included EEs (n = 48). Decision analytic techniques were used in approximately 75 percent of these EEs, and probabilistic sensitivity analyses were commonly employed by all agencies to address parameter uncertainty. However, only 50 percent of the evaluations included budget impact analyses (BIAs) or policy recommendations. There were observed differences between agencies in terms of topic selection, assessment focus, and production of HTA, such as in-house activities. Major barriers to conducting HTAs with EEs included capacity constraints, a lack of interest from decision-makers, and a lack of robust clinical information. The findings suggest a need for increased HTA training, collaboration, evidence synthesis, and the use of pragmatic "real world" evaluations (PUBMED:18828938). In the broader context, HTA and economic evaluations are increasingly used to inform healthcare policy and guide decision-making about financing and reimbursement, as seen in the Netherlands. Economic evaluations compare the costs and effects of different interventions, and various techniques and methods are employed to perform these evaluations and manage uncertainty (PUBMED:37171709). Additionally, there is a growing interest in including environmental impacts in health economic evaluations and HTAs, although methods for doing so have not been widely tested for use by HTA agencies (PUBMED:38462223). In Ethiopia, a systematic review of the scope and quality of health economic evaluations showed moderate overall quality, with a focus on preventive interventions for communicable diseases and maternal and child health. However, there is a need for building local technical capacity to enhance the design, conduct, and reporting of health economic evaluations (PUBMED:35266523). The impact of the 1997 Canadian guidelines on economic evaluations has set a minimum methodological standard within the community conducting economic analyses from a Canadian perspective, with most studies following a base set of methodological standards. However, there were divergences from some recommendations, which were reflected as changes in the 2006 Canadian guidelines (PUBMED:19804434). In summary, Canadian HTA agencies have incorporated economic evaluations in a portion of their reports, employing decision analytic techniques and sensitivity analyses. However, challenges such as capacity and interest from decision-makers, as well as the need for robust clinical data, highlight areas for improvement in the conduct of HTAs with EEs in Canada.
Instruction: Is There a Safe Lipoaspirate Volume? Abstracts: abstract_id: PUBMED:35214179 Lipoaspirate Shows In Vitro Potential for Wound Healing. Mesenchymal stem cells (MSCs) are a promising therapy in wound healing, although extensive time and manipulation are necessary for their use. In our previous study on cartilage regeneration, we demonstrated that lipoaspirate acts as a natural scaffold for MSCs and gives rise to their spontaneous outgrowth, together with a paracrine effect on resident cells that overcome the limitations connected to MSC use. In this study, we aimed to investigate in vitro whether the microfragmented adipose tissue (lipoaspirate), obtained with Lipogems® technology, could promote and accelerate wound healing. We showed the ability of resident cells to outgrow from the clusters of lipoaspirate encapsulated in a 3D collagen substrate as capability of repopulating a culture of human skin. Moreover, we demonstrated that the in vitro lipoaspirate paracrine effect on fibroblasts and keratinocytes proliferation, migration, and contraction rate is mediated by the release of trophic/reparative proteins. Finally, an analysis of the paracrine antibacterial effect of lipoaspirate proved its ability to secrete antibacterial factors and its ability to modulate their secretion in culture media based on a bacterial stimulus. The results suggest that lipoaspirate may be a promising approach in wound healing showing in vitro regenerative and antibacterial activities that could improve current therapeutic strategies. abstract_id: PUBMED:37425361 Lipoaspirate fluid derived factors and extracellular vesicles accelerate wound healing in a rat burn model. Background: The regenerative capabilities of derivatives derived from the fat layer of lipoaspirate have been demonstrated. However, the large volume of lipoaspirate fluid has not attracted extensive attention in clinical applications. In this study, we aimed to isolate the factors and extracellular vesicles from human lipoaspirate fluid and evaluate their potential therapeutic efficacy. Methods: Lipoaspirate fluid derived factors and extracellular vesicles (LF-FVs) were prepared from human lipoaspirate and characterized by nanoparticle tracking analysis, size-exclusion chromatography and adipokine antibody arrays. The therapeutic potential of LF-FVs was evaluated on fibroblasts in vitro and rat burn model in vivo. Wound healing process was recorded on days 2, 4, 8, 10, 12 and 16 post-treatment. The scar formation was analyzed by histology, immunofluorescent staining and scar-related gene expression at day 35 post-treatment. Results: The results of nanoparticle tracking analysis and size-exclusion chromatography indicated that LF-FVs were enriched with proteins and extracellular vesicles. Specific adipokines (adiponectin and IGF-1) were detected in LF-FVs. In vitro, LF-FVs augmented the proliferation and migration of fibroblasts in a dose-dependent manner. In vivo, the results showed that LF-FVs significantly accelerated burn wound healing. Moreover, LF-FVs improved the quality of wound healing, including regenerating cutaneous appendages (hair follicles and sebaceous glands) and decreasing scar formation in the healed skin. Conclusion: LF-FVs were successfully prepared from lipoaspirate liquid, which were cell-free and enriched with extracellular vesicles. Additionally, they were found to improve wound healing in a rat burn model, suggesting that LF-FVs could be potentially used for wound regeneration in clinical settings. abstract_id: PUBMED:35169913 Washing Lipoaspirate Improves Fat Graft Survival in Nude Mice. Background: The optimal fat processing technique of fat grafting has not been determined. We have proved the importance of washing lipoaspirate to remove blood, but the necessity of washing when there is no obvious bleeding during liposuction is not clear. Objectives: The purpose of this study is to further investigate the effect of washing on fat graft survival and the underlying mechanisms, from the perspective of inflammation, oxidative stress and apoptosis. Methods: To exclude the influence of blood, de-erythrocyte infranatant (dEI) isolated from lipoaspirate was obtained. Purified fat processed by cotton pad filtration mixed with dEIs after sedimentation (sedimentation group), washing (washing group) or phosphate buffer solution (control group) was transplanted to nude mice subcutaneously. Samples were harvested at 1 day and 1, 3, 8 weeks after transplantation. Volume and weight retention, histologic examination, immunostaining of perilipin-1, CD31, CD45 and Ly6g, mRNA expression of PPAR-γ, C/EBPα, VEGF, bFGF, IL-6, IL10, TNF-α, TGF-β, Bax and Bcl-2, and protein contents of 8-iso-PGF2α, IL-6, IL10, TNF-α and TGF-β were all compared among groups. Results: After transplantation, volume and weight retention, histologic scores, viable adipocytes and vascularization were all improved in the washing group, with increased expression of adipogenic and angiogenic genes. Compared with the sedimentation group, the washing group had milder inflammation, lower levels of oxidative stress and apoptosis. Conclusions: Washing lipoaspirate to eliminate mixed components can improve fat graft survival and promote adipogenesis and angiogenesis, possibly by relieving inflammation, reducing oxidative stress injury and inhibiting apoptosis. No Level Assigned: This journal requires that authors assign a level of evidence to each submission to which Evidence-Based Medicine rankings are applicable. This excludes Review Articles, Book Reviews, and manuscripts that concern Basic Science, Animal Studies, Cadaver Studies, and Experimental Studies. For a full description of 47 these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors http://www.springer.com/00266 . abstract_id: PUBMED:31234442 Immunophenotyping of a Stromal Vascular Fraction from Microfragmented Lipoaspirate Used in Osteoarthritis Cartilage Treatment and Its Lipoaspirate Counterpart. Osteoarthritis (OA) is a degenerative joint disease accompanied by pain and loss of function. Adipose tissue harbors mesenchymal stem/stromal cells (MSC), or medicinal signaling cells as suggested by Caplan (Caplan, 2017), used in autologous transplantation in many clinical settings. The aim of the study was to characterize a stromal vascular fraction from microfragmented lipoaspirate (SVF-MLA) applied for cartilage treatment in OA and compare it to that of autologous lipoaspirate (SVF-LA). Samples were first stained using a DuraClone SC prototype tube for the surface detection of CD31, CD34, CD45, CD73, CD90, CD105, CD146 and LIVE/DEAD Yellow Fixable Stain for dead cell detection, followed by DRAQ7 cell nuclear dye staining, and analyzed by flow cytometry. In SVF-LA and SVF-MLA samples, the following population phenotypes were identified within the CD45- fraction: CD31+CD34+CD73±CD90±CD105±CD146± endothelial progenitors (EP), CD31+CD34-CD73±CD90±CD105-CD146± mature endothelial cells, CD31-CD34-CD73±CD90+CD105-CD146+ pericytes, CD31-CD34+CD73±CD90+CD105-CD146+ transitional pericytes, and CD31-CD34+CD73highCD90+CD105-CD146- supra-adventitial-adipose stromal cells (SA-ASC). The immunophenotyping profile of SVF-MLA was dominated by a reduction of leukocytes and SA-ASC, and an increase in EP, evidencing a marked enrichment of this cell population in the course of adipose tissue microfragmentation. The role of EP in pericyte-primed MSC-mediated tissue healing, as well as the observed hormonal implication, is yet to be investigated. abstract_id: PUBMED:38140078 A Multivariate Meta-Analysis for Optimizing Cell Counts When Using the Mechanical Processing of Lipoaspirate for Regenerative Applications. Lipoaspirate has become the preferred source for regenerative cells. The mechanical processing of lipoaspirate has advantages over enzymatic processing but has a lower yield of regenerative cells. A review of the literature shows different techniques of extraction, but the ideal method or combination has not been determined. Methods: A comprehensive literature search was focused on the mechanical processing of lipoaspirate, without the use of enzymes. Data from the articles were integrated by utilizing a multivariate meta-analysis approach and used to create a statistical-based predictive model for a combination of multiple variables. Results: Starting with 10,000 titles, 159 articles were reviewed, and 6 met the criteria for inclusion and exclusion. The six studies included data on 117 patients. Sixteen factors were analyzed and six were identified as significant. The predictive profilers indicated that the optimal combination to maximize the cell yield was: a centrifuge force of 2000× g, a centrifuge time of 10 min, a cannula diameter of 2 mm, and an intra-syringe number of passes of 30. The optimal patient factors were a higher BMI and younger age. Conclusions: The novelty of the method used here was in combining data across different studies to understand the effect of the individual factors and in the optimization of their combination for mechanical lipoaspirate processing. abstract_id: PUBMED:38028938 Lipoaspirate stored at a constant low temperature by electric control suppresses intracellular metabolism and maintains high cell viability. Background: Cell therapy is a useful treatment method for wide spectrum of diseases which utilizes the immunosuppressive and regenerative abilities of administered cells. It is essential to build a transport system of tissues from which cells are harvested, because various external factors, such as temperature, time, air pressure, and vibration affect the cell functions isolated from body tissues. In particular, temperature is a critical factor which determines the viability of the cells and organs. In this study, we investigated the optimal temperature during the transportation of lipoaspirates from which adipose -derived stem cells (ASCs) were isolated. Method: Lipoaspirates obtained by liposuctions (lipomatic or vaser method) were transported in four different temperature zones (4, 20, 32, and 37 °C) in a transport container which is electrically controlled to maintain a constant temperature during transport. Stromal vascular fractions (SVFs) were harvested from the lipoaspirate, and the cell number, viability and proliferation rate and the yield of ASCs were examined. In addition, the metabolic state of the cells was examined. Results: ASCs from lipoaspirates transported at high temperature significantly decreased cell viability, while those at low temperature maintained high cell viability and showed good cell proliferation. In addition, transportation of lipoaspirates at low temperature resulted in a high level of NAD+/NADH, coenzymes involved in intracellular metabolism, and a low level of lactate in lipoaspirate suppressed the glycolytic system of intracellular metabolism, in ASCs. Conclusion: The lipoaspirate transported at 4 °C exhibited best results regarding live cell number, viability and cell proliferation in our experiments. This study offers a direction to build a transport system that connects laboratories and hospitals and achieve a beneficial therapy for patients. abstract_id: PUBMED:37323499 3D-bioprinted human lipoaspirate-derived cell-laden skin constructs for healing of full-thickness skin defects. 29Three-dimensional (3D)-printed bioactive scaffolds that can be produced rapidly could offer an individualized approach for treating full-thickness skin defects. Decellularized extracellular matrix (dECM) and mesenchymal stem cells have been proven to support wound healing. Adipose tissues obtained by liposuction are rich in adipose-derived dECM (adECM) and adipose-derived stem cells (ADSCs) and thus represent a natural source of bioactive materials for 3D bioprinting. Herein, ADSC-laden 3D-printed bioactive scaffolds consisting of gelatin methacryloyl (GelMA), hyaluronic acid methacryloyl (HAMA), and adECM were fabricated with dual properties of photocrosslinking in vitro and thermosensitive crosslinking in vivo. adECM was prepared by decellularization of human lipoaspirate and mixed as a bioactive material with GelMA and HAMA to form a bioink. Compared with the GelMA-HAMA bioink, the adECM-GelMA-HAMA bioink had better wettability, degradability, and cytocompatibility. Full-thickness skin defect healing in a nude mouse model showed that ADSC-laden adECM-GelMA-HAMA scaffolds accelerated wound healing by promoting faster neovascularization, collagen secretion, and remodeling. ADSCs and adECM collectively conferred bioactivity on the prepared bioink. This study represents a novel approach to enhancing the biological activity of 3D-bioprinted skin substitutes by adding adECM and ADSCs derived from human lipoaspirate and may provide a promising therapeutic option for full-thickness skin defects. abstract_id: PUBMED:35302901 Arthroscopic Rotator Cuff Repair Augmentation With Autologous Microfragmented Lipoaspirate Tissue Is Safe and Effectively Improves Short-term Clinical and Functional Results: A Prospective Randomized Controlled Trial With 24-Month Follow-up. Background: Autologous microfragmented lipoaspirate tissue has been recently introduced in orthopaedics as an easily available source of nonexpanded adipose-derived mesenchymal stem cells. Autologous microfragmented lipoaspirate tissue is expected to create a suitable microenvironment for tendon repair and regeneration. Rotator cuff tears show a high incidence of rerupture and represent an ideal target for nonexpanded mesenchymal stem cells. Purpose: To evaluate the safety and efficacy of autologous lipoaspirate tissue in arthroscopic rotator cuff repair. Study Design: Randomized controlled trial; Level of evidence, 2. Methods: Consecutive patients referring to the investigation center for surgical treatment of magnetic resonance imaging-confirmed degenerative posterosuperior rotator cuff tears were assessed for eligibility. Those who were included were randomized to receive a single-row arthroscopic rotator cuff repair, followed by intraoperative injection of autologous microfragmented adipose tissue processed with an enzyme-free technology (treatment group) or not (control group). Clinical follow-up was conducted at 3, 6, 12, 18, and 24 months; at 18 months after surgery, magnetic resonance imaging of the operated shoulder was obtained to assess tendon integrity and rerupture rate. Results: An overall 177 patients were screened, and 44 (22 per group) completed the 24-month follow-up. A statistically significant difference in favor of the treatment group in terms of Constant-Murley score emerged at the primary endpoint at 6-month follow-up (mean ± SD; control group, 76.66 ± 10.77 points; treatment group, 82.78 ± 7.00 points; P = .0050). No significant differences in clinical outcome measures were encountered at any of the other follow-up points. No significant differences emerged between the groups in terms of rerupture rate, complication rate, and number of adverse events. Conclusion: This prospective randomized controlled trial demonstrated that the intraoperative injection of autologous microfragmented adipose tissue is safe and effective in improving short-term clinical and functional results after single-row arthroscopic rotator cuff repair. Registration: NCT02783352 (ClinicalTrials.gov identifier). abstract_id: PUBMED:29256998 Is There a Limit? A Risk Assessment Model of Liposuction and Lipoaspirate Volume on Complications in Abdominoplasty. Background: Combining liposuction and abdominoplasty is controversial because of concerns for increased complications and potential for vascular compromise of the abdominoplasty flap. Also, the lipoaspirate volume in abdominoplasty is regulated in some areas to as little as 500 ml when performed with abdominoplasty. This study measures abdominoplasty complication rates when performed with and without trunk liposuction, and evaluates the effect of lipoaspirate volume on complications. Methods: Abdominoplasty and liposuction of the trunk procedures were identified in the Tracking Operations and Outcomes for Plastic Surgeons database. Multivariate regression models determined the effect of liposuction with abdominoplasty on complications compared with abdominoplasty alone and determined the effect of liposuction volume on complications. Results: Eleven thousand one hundred ninety-one patients were identified: 9638 (86.1 percent) having abdominoplasty with truncal liposuction and 1553 (13.9 percent) having abdominoplasty alone. Overall complication rates were 10.5 percent and 13.0 percent, respectively. Combined liposuction and abdominoplasty was independently associated with a reduced risk of both overall complications (p = 0.046) and seroma (p = 0.030). Given existing laws limiting liposuction volume to 500 or 1000 ml in combination with abdominoplasty, each of these thresholds was evaluated, with no effect on complications. Surprisingly, increasing liposuction volume was not independently associated with an increased risk of any complication. Conclusions: When done by board-certified plastic surgeons, abdominoplasty with truncal liposuction is safe, with fewer complications than abdominoplasty alone. Regulations governing liposuction volumes in abdominoplasty are arbitrary and do not reflect valid thresholds for increased complications. Clinical Question/level Of Evidence: Therapeutic, III. abstract_id: PUBMED:36330332 The therapeutic effect of adipose-derived lipoaspirate cells in femoral head necrosis by improving angiogenesis. Femoral head necrosis (FHN), one of the most popular joint diseases in the musculoskeletal system, is usually attributed to local ischemia of the femoral head. Thus, regenerating the vascularization capacity and restoring the local perfusion of the femoral head becomes an efficient therapeutic approach for FHN. We investigated the function of autologous lipoaspirate cells (LPCs) in regenerating circulation in FHN animal models and human subjects in this study. We also explored the mechanisms of why LPCs show a superior effect than that of the bone marrow-derived stem cells (BMSCs) in vascularization. Thirty-four FHN patients were recruited for the randomized clinical trial. Harris Hip Score (HHS) and digital subtraction arteriography (DSA) and interventional technique were used to compare the efficacy of LPCs treatment and vehicle therapy in improving femoral head circulation and hip joint function. Cellular mechanism that underlies the beneficial effect of LPCs in restoring blood supply and rescuing bone architecture was further explored using canine and mouse FHN animal models. We found that LPCs perfusion through the medial circumflex artery will promote the femoral head vascularization and bone structure significantly in both FHN patients and animal models. The HHS in LPCs treated patients was significantly improved relative to vehicle group. The levels of angiogenesis factor secreted by LPCs such as VEGF, FGF2, VEC, TGF-β, were significantly higher than that of BMSCs. As the result, LPCs showed a better effect in promoting the tube structure formation of human vascular endothelial cells (HUVEC) than that of BMSCs. Moreover, LPCs contains a unique CD44+CD34+CD31- population. The CD44+CD34+CD31- LPCs showed significantly higher angiogenesis potential as compared to that of BMSCs. Taken together, our results show that LPCs possess a superior vascularization capacity in both autonomous and paracrine manner, indicating that autologous LPCs perfusion via the medial circumflex artery is an effective therapy for FHN. Answer: The safety of lipoaspirate volume during procedures such as liposuction and abdominoplasty has been a topic of concern due to the potential for increased complications and vascular compromise of the surgical area. A study aimed at assessing the risk of complications in abdominoplasty when combined with liposuction found that performing liposuction with abdominoplasty was independently associated with a reduced risk of overall complications and seroma. This study also evaluated the effect of lipoaspirate volume on complications and found that increasing liposuction volume was not independently associated with an increased risk of any complication. The study concluded that abdominoplasty with truncal liposuction is safe when performed by board-certified plastic surgeons and that regulations governing liposuction volumes in abdominoplasty are arbitrary and do not reflect valid thresholds for increased complications (PUBMED:29256998). Therefore, based on the findings of this study, there does not appear to be a specific lipoaspirate volume limit that can be deemed unsafe. However, it is important to note that individual patient factors, surgical technique, and the expertise of the surgeon may all play a role in the safety and outcomes of such procedures. It is always essential to evaluate each case individually and proceed with caution, especially in complex or high-risk patients.
Instruction: Follicular lesions of the thyroid. Does frozen section evaluation alter operative management? Abstracts: abstract_id: PUBMED:7618962 Follicular lesions of the thyroid. Does frozen section evaluation alter operative management? Objective: The authors determined the usefulness of frozen section (FS) evaluation in the operative management of follicular lesions of the thyroid. Summary Background Data: Fine-needle aspiration (FNA) cannot reliably discriminate between benign and malignant follicular lesions of the thyroid. Accordingly, FS evaluation is used routinely to guide intraoperative management. Methods: One hundred twenty-five consecutive patients with follicular thyroid lesions who underwent surgical exploration at the Johns Hopkins Hospital were reviewed. Results: Frozen sections were categorized in 104 of 120 patients (87%) as "follicular lesion, defer to permanent section," rendering no useful clinical information. In only 4 of 120 patients (3.3%) did FS evaluation correctly modify the operative procedure. Notably, in six cases (5.0%), an incorrect FS evaluation misled the surgeon, resulting in four misguided operations. Conclusion: Frozen section evaluation is of minimal diagnostic value for follicular thyroid lesions, rendering no additional information 87% of the time; it prolongs the operation, increases costs, and leads to misguided interventions. Until a more definitive diagnostic tool exists for follicular thyroid lesions, FS evaluation could be omitted, resection of the lobe with the nodule could be performed, and the definitive operative management could be based on the final permanent histology. abstract_id: PUBMED:28990674 The futility of intraoperative frozen section in the evaluation of follicular thyroid lesions. Objective: Investigate the utility of intraoperative frozen section (iFS) in patients with follicular thyroid lesions following publication of the 2015 American Thyroid Association (ATA) guidelines. Study Design: Retrospective chart review. Methods: Patient demographics, preoperative cytology, frozen pathology, and final pathology were reviewed on patients undergoing thyroid surgery at a tertiary care hospital in which iFS was utilized over a 5-year period. The test performance of iFS and the frequency of indicated completion/total thyroidectomies pre- and postpublication of the 2015 ATA guidelines were calculated. Results: One hundred and one patients met inclusion criteria: 54 patients with follicular lesions of undetermined significance (FLUS) and 47 patients with a cytologic diagnosis of suspicious for follicular neoplasm/follicular neoplasm. The malignancy rate was 36%, but only 14% of malignancies were identified on iFS. A definitive benign or malignant diagnosis was given on iFS in only 21% of cases, and operative management was altered in two cases as a result of iFS. There was a statistically significant reduction in the frequency of indicated total/completion thyroidectomies based on high-risk features as a result of the 2015 ATA guidelines compared to prior recommendations (20.8% vs. 5.0%, P = < 0.001). None of these patients had findings on iFS that would have altered management intraoperatively. Conclusion: Intraoperative frozen section offers minimal diagnostic utility in the evaluation of follicular thyroid lesions. Updates in the 2015 ATA guidelines further diminish its potential to impact management intraoperatively. Significant improvements in its ability to identify malignancies would be needed to justify its use. Level Of Evidence: 4. Laryngoscope, 128:1501-1505, 2018. abstract_id: PUBMED:14699040 The utility of frozen section evaluation for follicular thyroid lesions. Background: Because fine-needle aspiration cannot reliably discriminate between benign and malignant follicular thyroid lesions, some surgeons use intraoperative frozen section (FS) to guide operative management. To determine the utility of FS for these lesions, we reviewed our institutional experience. Methods: Between 1994 and 2001, 152 patients underwent surgical resection for follicular neoplasms. Results: The mean age of the patients was 47 years, and 76% were female. Forty-one (32%) FSs were reported as benign, 5 (4%) as malignant, and 3 (2%) as indeterminate, and in 80 (62%), the diagnosis was "follicular lesion, deferred to permanent histology." On paraffin section, all patients with malignant FSs had thyroid cancer, and all 41 patients with benign FSs had benign lesions. Thus, FS for diagnosis of follicular thyroid cancer had a sensitivity, specificity, positive predictive value, and accuracy of 67%, 100%, 100%, and 96%, respectively. In most cases (64%), FS rendered no additional information at the time of operation. Therefore, the cost per useful FS was $7800, which is higher than the charge of a completion thyroidectomy (approximately $6000). Conclusions: FS analysis for follicular lesions seems to be highly specific and accurate. However, because of the low sensitivity, routine use of FS is not cost-effective in patients with follicular thyroid lesions. abstract_id: PUBMED:26793630 Evaluating the accuracy of fine needle aspiration and frozen section based on permanent histology in patients with follicular lesions. Background: Fine needle aspiration (FNA) has led to a decrease in unnecessary surgeries for thyroid nodules. This study was designed to compare diagnostic value of FNA and frozen section methods in the follicular lesions. Methods: This is a cross-sectional study based on the medical records of 42 patients who were referred to surgery clinic of Rasool-e-Akram hospital in Tehran with complaint of thyroid nodules. All FNAs were diagnosed as follicular lesions in pathologic evaluation. All the patients underwent thyroid surgery and their frozen section results were also assessed. Finally, diagnostic value of the two tests was compared based on final permanent histologic report. Results: Forty two patients with follicular thyroid nodules diagnosed with FNA were included. During the operation, using frozen sections, diagnosis of papillary carcinoma, non-papillary malignancies, benign lesion and intermediate cytology was made in 13 (31%), 3 (7%), 25 (59.5%), 1 (2.5%) patients, respectively. RESULTS of permanent histology showed that follicular adenoma is the most prevalent lesion which appeared in 25 (59.5%) cases. Papillary and follicular carcinomas were detected in 10 (23.8%) and 4 (9.5%) cases, respectively. Conclusion: RESULTS of the study demonstrate a 73% reduction in second surgery in patients with follicular thyroid lesions based on intraoperative frozen section results. abstract_id: PUBMED:16627916 Use and abuse of frozen section in the diagnosis of follicular thyroid lesions. The role of intraoperative frozen section in certain organ systems such as the thyroid continues to be problematic. In many cases, diagnoses are deferred or nonhelpful--"follicular lesion." In the modern era, the widespread use of preoperative aspiration biopsy has allowed for more careful selection of patients who undergo thyroid surgery. In many cases, the fine-needle-aspiration (FNA) biopsy diagnosis can be definitive or can guide the specific surgical procedure. The literature supports our approach, which is summarized as follows: Intraoperative consultation is not needed on the intrathyroidal nodule if a preoperative FNA was definitive for papillary carcinoma. Frozen section is of no value in the intraoperative diagnosis of lesions diagnosed on FNA as "follicular neoplasm" or "Hürthle cell neoplasm" because the characterization of these lesions requires detailed analysis of the tumor capsule for the demonstration of capsular and/or vascular invasion--an analysis that is not practical in the intraoperative setting. Finally, intraoperative consultation including frozen section and intraoperative cytologic examination is most useful in those cases that are diagnosed as suspicious for papillary carcinoma by FNA, because the assessment of nuclear features needed for the definitive diagnosis is possible with intraoperative techniques in a significant number of cases. abstract_id: PUBMED:33316682 Frozen section in thyroid gland follicular neoplasms: It's high time to abandon it! Thyroid nodules are a very common clinical condition. The 2015 American Thyroid Association (ATA) guidelines recommend surgical excision for Bethesda IV nodules. The use of intraoperative frozen section (FS) has been recommended as a strategy to tailor the extent of the initial surgery. We critically evaluated the literature that discusses the utility and cost-effectiveness of FS to make an intraoperative decision in patients with thyroid nodules classified as follicular neoplasm. FS should not be recommended as a routine intraoperative test to assess for malignancy in thyroid follicular patterned lesions due to its low performance; the high number of deferred results; the inability to adequately assess histologically defining features; the improvements in risk stratification guiding total thyroidectomy; and the low cost-effectiveness of FS. abstract_id: PUBMED:33731149 Diagnostic pitfall of thyroid fine-needle aspiration induced fibrosis: follicular adenoma mimicking medullary thyroid carcinoma in frozen section. Background: Fine-needle aspiration (FNA) is a frequently utilized method for the diagnosis of thyroid nodules. Although the technique has clear advantages, the injury caused by the aspiration needle can induce various histological alterations. Herein, we report a case of follicular adenoma showing histological alterations possibly caused by FNA biopsy. Furthermore, the histological appearance of the lesion mimicked those of medullary thyroid carcinoma, particularly in the frozen section. Case Presentation: Ultrasonography of a thyroid nodule in a 39-year-old man revealed a mass (2.2 cm in diameter) in the right thyroid lobe. FNA was performed three times on the mass, and the results of the cytology were atypia of undetermined significance. Thereafter, the patient underwent right hemithyroidectomy. The histological findings of the operative frozen section analysis indicated medullary thyroid carcinoma. However, after evaluation and immunohistochemical staining of the permanent section, the mass was diagnosed as follicular adenoma with extensive fibrosis. Conclusion: The histological alterations observed in the follicular adenoma are believed to have been caused by injury during the repeated FNA procedures. abstract_id: PUBMED:36387140 Case report: Intraoperative frozen section analysis of Thyroid paraganglioma. Paraganglioma (PGL) is a neuroendocrine tumor that arises from the sympathetic or parasympathetic paraganglia. Primary thyroid PGL is extremely rare. PGL may be difficult to diagnose on frozen sections because its histopathological features, such as polygonal tumor cells with eosinophilic cytoplasm arranged irregularly, overlap with those of thyroid follicular adenoma. We present a case of thyroid PGL in a female patient and provide a detailed description of the patient's clinicopathologic characteristics. Cervical computed tomography showed a left thyroid mass with uneven density. Intraoperative frozen section analysis showed an uneven fibrous septa and rich networks of delicate vessels surrounding tumor cell nests. The tumor cells were polygonal or epithelioid with eosinophilic cytoplasm, arranged in a nest, trabecular, or organoid pattern were and diagnosed as thyroid follicular adenoma. However, in postoperative immunohistochemistry, these were diagnosed as thyroid PGL. The postoperative recovery was uneventful. The patient showed no signs of tumor recurrence or metastasis until 16 months of follow-up. Herein, we summarize the characteristic features of thyroid PGL based on frozen section analysis. In the appropriate clinical context, its proper use as diagnostic and differential diagnostic management strategies is recommended. abstract_id: PUBMED:11323510 Randomized prospective evaluation of frozen-section analysis for follicular neoplasms of the thyroid. Objective: To evaluate the clinical utility of frozen section in patients with follicular neoplasms of the thyroid in a randomized prospective trial. Summary Background Data: The finding of a follicular neoplasm on fine-needle aspiration prompts many surgeons to perform intraoperative frozen section during thyroid lobectomy. However, the focal distribution of key diagnostic features of malignancy contributes to a high rate of noninformative frozen sections. Methods: The series comprised 68 consecutive patients with a solitary thyroid nodule in whom fine-needle aspiration showed a follicular neoplasm. Patients were excluded for bilateral or nodal disease, extrathyroidal extension, or a definitive fine-needle aspiration diagnosis. Final pathologic findings were compared with frozen sections, and cost analyses were performed. Results: Sixty-one patients met the inclusion criteria. Twenty-nine were randomized to the frozen-section group and 32 to the non-frozen-section group. In the non-frozen-section group, one patient was excluded when gross examination of the specimen was suggestive of malignancy and a directed frozen section was diagnostic of follicular carcinoma. Frozen-section analysis rendered a definitive diagnosis of malignancy in 1 of 29 (3.4%) patients, who then underwent a one-stage total thyroidectomy. In the remaining 28 patients, frozen section showed a "follicular or Hürthle cell neoplasm." Permanent histology demonstrated well-differentiated thyroid cancer in 6 of these 28 patients (21%). Of the 31 patients in the non-frozen-section group, 3 (10%) showed well-differentiated thyroid carcinoma on permanent histology. Complications were limited to one transient unilateral vocal cord dysfunction. All but one patient had a 1-day hospital stay. There were no significant differences between the groups in surgical time or total hospital charges; however, the charge per informative frozen section was approximately $12,470. Conclusions: For the vast majority of patients (96.4%) with follicular neoplasms of the thyroid, frozen section is neither informative nor cost-effective. abstract_id: PUBMED:12616215 Frozen section in a cytological diagnosis of thyroid follicular neoplasm. Objective/hypothesis: Fine-needle aspiration biopsy is the most accurate diagnostic test for thyroid nodules, its only limitation being the diagnosis of follicular neoplasm that does not distinguish between benign and malignant follicular lesions. Study Design: To determine the utility of intraoperative frozen-section analysis in cases of a cytological diagnosis of follicular neoplasm, a retrospective review of 66 patients with a solitary thyroid nodule and follicular neoplasm who underwent thyroid surgery was carried out. Methods: Fine-needle aspiration was classified following the Papanicolaou Society of Cytopathology Classification, and frozen section was defined as malignant or "deferred." If a malignant diagnosis was made by frozen-section analysis, a total thyroidectomy was carried out. The extension of thyroid surgery in the deferred cases was based on the definitive histological diagnosis. Results: Sixty-four cases were classified as deferred, and two as suspect for malignancy. Among the 64 deferred cases, 15 were malignant in the final pathological findings, and 49 were benign. The two suspect cases were papillary carcinoma. Frozen-section analysis classified 2 of 17 (11.7%) cases as follicular variant of papillary carcinoma that could not be diagnosed by cytological study. However, these two cases had a strong clinical evidence of malignancy. Conclusion: The routine use of frozen-section analysis is useless in cases of cytological diagnosis of follicular neoplasm on fine-needle aspiration biopsy, because of the low probability of achieving the diagnosis of follicular carcinoma and the inability to provide additional information apart from the clinical and the cytological data. Answer: Based on the provided abstracts, frozen section (FS) evaluation does not significantly alter operative management for follicular lesions of the thyroid. The majority of studies report that FS often defers to permanent section, rendering no useful clinical information in a high percentage of cases (87% in PUBMED:7618962, 62% in PUBMED:14699040). FS evaluation correctly modified the operative procedure in only a small fraction of patients (3.3% in PUBMED:7618962) and in some cases, it even misled the surgeon, resulting in misguided operations (5.0% in PUBMED:7618962). The utility of FS in the intraoperative evaluation of follicular thyroid lesions is minimal, with a low sensitivity and a high cost per useful FS (PUBMED:14699040). The malignancy rate in follicular thyroid lesions is significant (36% in PUBMED:28990674), but FS identified only a small percentage of malignancies (14% in PUBMED:28990674). Furthermore, the 2015 American Thyroid Association (ATA) guidelines have reduced the frequency of indicated total/completion thyroidectomies, further diminishing the potential impact of FS on intraoperative management (PUBMED:28990674). The literature suggests that FS should not be routinely recommended for intraoperative assessment of malignancy in thyroid follicular patterned lesions due to its low performance, high number of deferred results, and low cost-effectiveness (PUBMED:33316682). Additionally, FS can be misleading, as seen in cases where FNA-induced fibrosis in a follicular adenoma mimicked medullary thyroid carcinoma in FS (PUBMED:33731149) or where thyroid paraganglioma was misdiagnosed as follicular adenoma on FS (PUBMED:36387140). In conclusion, FS evaluation for follicular thyroid lesions is of limited diagnostic value and does not frequently alter operative management. It is not cost-effective and is associated with a high rate of noninformative results. Therefore, the definitive operative management should be based on final permanent histology rather than FS evaluation (PUBMED:7618962, PUBMED:14699040, PUBMED:33316682).
Instruction: Does prostate-specific antigen density alter decision making on biopsy? Abstracts: abstract_id: PUBMED:8821683 Does prostate-specific antigen density alter decision making on biopsy? Objective: The ability of prostate-specific antigen density (PSAD) to predict prostate cancer in biopsy specimens is evaluated in patients with benign digital rectal examination (DRE) and prostate-specific antigen (PSA) between 4.0 and 10.0 ng/ml. Material And Methods: 144 referred patients with a benign DRE and PSA &gt; 4.0 ng/ml were additionally evaluated by transrectal ultrasonography and transrectal biopsies. PSAD values were calculated and statistical analysis was performed. Results: The mean PSAD value was able to distinguish significantly between benign prostate conditions and prostate cancer in patients with PSA &gt; 4.0 ng/ml. However, in 73 patients with 4.0 &lt; PSA &lt; or = 10.0 ng/ml no significant stratification was obtained. At a PSAD value of 0.15 the pretest probability of 18% for positive biopsy was lowered to a posttest probability of 8.1% and PSAD appeared to be of limited value regarding sensitivity and specificity at different cutoff values (ROC curve). Applying age-specific reference ranges would have reduced diagnostic procedures for men between 60 and 79 years old with 7.0% without missing prostate cancer. Conclusions: In this study PSAD was shown to have only a moderate additive value in decision making to omit biopsy for the individual patient with benign DRE and PSA between 4.0 and 10.0 ng/ml. Age-specific reference ranges of PSA can prevent unnecessary diagnostic procedures. abstract_id: PUBMED:34193378 Improving prostate biopsy decision making in Mexican patients: Still a major public health concern. Background: Prostate cancer screening has reduced its mortality in 21%. However, this has also led to an increased number of biopsies in order to establish the diagnosis, many of them unnecessary. Current screening guidelines prioritize use of prostatic magnetic resonance and new biomarkers to reduce unnecessary biopsies, however, their implementation in developing countries screening programs is mainly limited by its costs. Objective: We aimed to evaluate Prostate Biopsy Risk Collaborative Group (PBCG) and Prostate Cancer Prevention Trial Risk Calculator (PCPTRC) 2.0 predictions accuracy in Mexican patients in order to guide prostate biopsy decision making and reduce unnecessary biopsies. Materials And Methods: We retrospectively analyzed patients between 55 and 90 years old who underwent prostate biopsy in a high-volume center in Mexico between January 2017 and June 2020. Clinical utility of PBCG and PCPTRC 2.0 to predict high-grade prostate cancer (HGPCa) biopsy outcomes was evaluated using decision curve analysis and compared to actual biopsy decision making. Receiver operating characteristics area under the curve (AUC) was used to measure discrimination and external validation. Results: From 687 patients eligible for prostate biopsy, 433 met selections criteria. One hundred and thirty-five (31.17%) patients were diagnosed with HGPCa, 63 (14.54%) with low-grade disease and 235 (54.27%) had a negative biopsy. PCPTRC 2.0 ≥15% threshold got a standardized net benefit (sNB) of 0.70, while PBCG ≥30% and ≥35% had a sNB of 0.27 and 0.15, respectively. Use of both models for guiding prostate biopsy decision resulted in no statistical difference for HGCPa detection rates, while achieved a significant difference in reducing total and unnecessary biopsies. However, this difference was lower (better) for PCPTRC 2.0, being statistically significative when compared against PBCG thresholds. Both models were well calibrated (AUC 0.79) and achieved external validation compared with international cohorts. Conclusions: Our study is the first to effectively validate both PCPTRC 2.0 and PBCG predictions for the Mexican population, proving that their use in daily practice improves biopsy decision making by accurately predicting HGPCa and limit unnecessary biopsies without representing additional costs to screening programs. abstract_id: PUBMED:36965817 Using IsoPSA With Prostate Imaging Reporting and Data System Score May Help Refine Biopsy Decision Making in Patients With Elevated PSA. Objective: To assess how IsoPSA, a structure-based serum assay which has been prospectively validated in detecting clinically significant prostate cancer (csPCa), can help the biopsy decision process when combined with the prostate imaging reporting and data systems (PI-RADS). Materials And Methods: This was a single-center retrospective review of prospectively collected data on patients receiving IsoPSA testing for elevated PSA (&gt;4.0ng/mL). Patients were included if they had received an IsoPSA test and prostate MRI within 1 year of IsoPSA testing, and subsequently underwent prostate biopsy. Multivariable logistic regression was used to identify predictors of (csPCa, ie, GG ≥ 2) on biopsy. Predictive probabilities for csPCa at biopsy were generated using IsoPSA and various PI-RADS scores. Results: Two hundred and 7 patients were included. Twenty-two percent had csPCa. Elevated IsoPSA ratio (defined as ≥6.0) (OR: 5.06, P = .015) and a PI-RADS 4-5 (OR: 6.37, P &lt;.001) were significant predictors of csPCa. The combination of elevated IsoPSA ratio and PI-RADS 4-5 lesion had the highest area under the curve (AUC) (AUC: 0.83, P &lt;.001). The predicted probability of csPCa when a patient had a negative or equivocal MRI (PI-RADS 1-3) and a low IsoPSA ratio (≤6) was &lt;5%. Conclusion: The combination of PI-RADS with IsoPSA ratios may help refine the biopsy decision-making process. In our cohort, a negative or equivocal MRI with a low IsoPSA may provide a low enough predicted probability to omit biopsy in such patients. abstract_id: PUBMED:31067560 Use of Prostate Specific Antigen Density Combined with Multiparametric Magnetic Resonance Imaging Improves Triage for Prostate Biopsy. Background: Multi-parametric magnetic resonance imaging (mpMRI)-directed biopsy for prostate cancer (PC) diagnosis improves the detection of clinically significant prostate cancer (CSPC) and decreases the rate of over-diagnosis of insignificant disease. The aim of this study was to investigate the value of mpMRI combined with prostate specific antigen density (PSAD) in the decision making related to the biopsy. Methods: mpMRI and mpMRI/transrectal ultrasound fusion targeted biopsies with subsequent systematic biopsies were performed in 397 patients (223 biopsy-naïve and 174 with a previous biopsy). Detection rates of (CSPC) and insignificant PC were stratified using the PIRADS score, and the number of avoidable biopsies and missed (CSPC) were plotted against PSAD values of 0.1-0.5 ng/mL2. Results: PIRADS &lt;3 and PSAD &lt;0.2 ng/mL2 were the safest criteria for not performing a biopsy. When applied to the biopsy-naïve group, 21.5% (48/223) of the biopsies could have been avoided and 3.7% (3/82) of CSPC would have been missed. In the repeat biopsy group, 12.6% (22/174) of biopsies could have been avoided and 6.9% (4/58) of (CSPC) would have been missed. Conclusions: A combination of mpMRI and PSAD might reduce the number of biopsies performed with the cost of missing &lt;4% of CSPC. abstract_id: PUBMED:38263825 Risk stratification of prostate cancer with MRI and prostate-specific antigen density-based tool for personalized decision making. Objectives: MRI is now established for initial prostate cancer diagnosis; however, there is no standardized pathway to avoid unnecessary biopsy in low-risk patients. Our study aimed to test previously proposed MRI-focussed and risk-adapted biopsy decision models on a real-world dataset. Methods: Single-centre retrospective study performed on 2055 biopsy naïve patients undergoing MRI. Diagnostic pathways included "biopsy all", "MRI-focussed" and two risk-based MRI-directed pathways. Risk thresholds were based on prostate-specific antigen (PSA) density as low (&lt;0.10 ng mL-2), intermediate (0.10-0.15 ng mL-2), high (0.15-0.20 ng mL-2), or very high-risk (&gt;0.20 ng mL-2). The outcome measures included rates of biopsy avoidance, detection of clinically significant prostate cancer (csPCa), missed csPCa, and overdiagnosis of insignificant prostate cancer (iPCa). Results: Overall cancer rate was 39.9% (819/2055), with csPCa (Grade-Group ≥2) detection of 30.3% (623/2055). In men with a negative MRI (Prostate Imaging-Reporting and Data System, PI-RADS 1-2), the risk of cancer was 1.2%, 2.6%, 9.0%, and 12.9% in the low, intermediate, high, and very high groups, respectively; for PI-RADS score 3 lesions, the rates were 10.5%, 14.3%, 25.0%, and 33.3%, respectively. MRI-guided pathway and risk-based pathway with a low threshold missed only 1.6% csPCa with a biopsy-avoidance rate of 54.4%, and the risk-based pathway with a higher threshold avoided 62.9% (1292/2055) of biopsies with 2.9% (61/2055) missed csPCa detection. Decision curve analysis found that the "risk-based low threshold" pathway has the highest net benefit for probability thresholds between 3.6% and 13.9%. Conclusion: Combined MRI and PSA-density risk-based pathways can be a helpful decision-making tool enabling high csPCa detection rates with the benefit of biopsy avoidance and reduced iPCa detection. Advances In Knowledge: This real-world dataset from a large UK-based cohort confirms that combining MRI scoring with PSA density for risk stratification enables safe biopsy avoidance and limits the over-diagnosis of insignificant cancers. abstract_id: PUBMED:8973704 Does free to total ratio of prostate-specific antigen alter decision-making on prostatic biopsy? Objectives: For patients with prostate specific antigen (PSA) values of 4-10 ng/mL, some urologists perform prostatic biopsies depending upon the findings of digital rectal examination (DRE) and transrectal ultrasonography (TRUS), and others perform biopsies on most of these men regardless of the findings of DRE and TRUS. The purpose of this study was to examine whether the information given by the measurement of the ratio of free to total (F/T) PSA can alter decision-making on prostatic biopsy. Methods: One hundred and two (102) men with PSA values between 4 and 10 ng/mL, were included in this study. All men were examined with DRE and TRUS; a F/T PSA ratio was also measured, and six prostatic biopsies were taken from each patient. Results: In 102 men who were biopsied, 22 (21.5%) prostatic carcinomas were identified. Among these 22 cancer patients, 13 had abnormal findings in DRE and/or TRUS and would have been biopsied and diagnosed anyway. If we use only the F/T PSA ratio (cut-off value 0.20) to decide whom to biopsy, we would have diagnosed 16/22 cancers; the difference between these two procedures was not statistically significant (P = 0.17). If we decide to biopsy those patients who have abnormal findings in DRE and/or TRUS and those who have a F/T PSA ratio &lt; 0.20, we would diagnose 20/22 cancers (P = 0.05) and at the same time, reduce the unnecessary biopsies from 80 to 41 (48%). With a PSA value between 4 and 10 ng/mL and no findings in DRE and TRUS and at the same time with a F/T PSA ratio &gt; or = 0.20, we would have to perform biopsies in 20.5 men to find one cancer. On the other hand, in patients with suspicious findings in DRE and/or TRUS and a F/T PSA ratio &lt; 0.20, in every two men that we biopsy we would find one cancer. Conclusion: We believe that among patients with PSA values between 4 and 10 ng/mL after performing DRE and TRUS, the additional information of F/T PSA ratio can help since it increases the number of cancers detected and reduces the number of unnecessary biopsies. abstract_id: PUBMED:24725488 Decision making and prostate cancer screening. This article presents an overview of the challenges that men encounter in making decisions about prostate cancer screening, including complex affective and cognitive factors and controversies in the interpretation of the evidence on prostate cancer screening. Shared decision making involving patient decision aids are discussed as approaches that can be used to improve the quality of prostate cancer screening decisions, including a close alignment between a man's values, goals, and preferences and his choice about screening. abstract_id: PUBMED:34589437 Modified Predictive Model and Nomogram by Incorporating Prebiopsy Biparametric Magnetic Resonance Imaging With Clinical Indicators for Prostate Biopsy Decision Making. Purpose: The purpose of this study is to explore the value of combining bpMRI and clinical indicators in the diagnosis of clinically significant prostate cancer (csPCa), and developing a prediction model and Nomogram to guide clinical decision-making. Methods: We retrospectively analyzed 530 patients who underwent prostate biopsy due to elevated serum prostate specific antigen (PSA) levels and/or suspicious digital rectal examination (DRE). Enrolled patients were randomly assigned to the training group (n = 371, 70%) and validation group (n = 159, 30%). All patients underwent prostate bpMRI examination, and T2-weighted imaging (T2WI) and diffusion-weighted imaging (DWI) sequences were collected before biopsy and were scored, which were respectively named T2WI score and DWI score according to Prostate Imaging Reporting and Data System version 2 (PI-RADS v.2) scoring protocol, and then PI-RADS scoring was performed. We defined a new bpMRI-based parameter named Total score (Total score = T2WI score + DWI score). PI-RADS score and Total score were separately included in the multivariate analysis of the training group to determine independent predictors for csPCa and establish prediction models. Then, prediction models and clinical indicators were compared by analyzing the area under the curve (AUC) and decision curves. A Nomogram for predicting csPCa was established using data from the training group. Results: In the training group, 160 (43.1%) patients had prostate cancer (PCa), including 128 (34.5%) with csPCa. Multivariate regression analysis showed that the PI-RADS score, Total score, f/tPSA, and PSA density (PSAD) were independent predictors of csPCa. The prediction model that was defined by Total score, f/tPSA, and PSAD had the highest discriminatory power of csPCa (AUC = 0.931), and the diagnostic sensitivity and specificity were 85.1% and 87.5%, respectively. Decision curve analysis (DCA) showed that the prediction model achieved an optimal overall net benefit in both the training group and the validation group. In addition, the Nomogram predicted csPCa revealed good estimation when compared with clinical indicators. Conclusion: The prediction model and Nomogram based on bpMRI and clinical indicators exhibit a satisfactory predictive value and improved risk stratification for csPCa, which could be used for clinical biopsy decision-making. abstract_id: PUBMED:35897274 PROSHADE Protocol: Designing and Evaluating a Decision Aid for Promoting Shared Decision Making in Opportunistic Screening for Prostate Cancer: A Mix-Method Study. Background: Opportunistic prostate-specific antigen (PSA) screening may reduce prostate cancer mortality risk but is associated with false positive results, biopsy complications and overdiagnosis. Although different organisations have emphasised the importance of shared decision making (SDM) to assist men in deciding whether to undergo prostate cancer screening, recent evaluations show that the available decision aids fail to facilitate SDM, mainly because they do not consider the patients' perspective in their design. We aim to systematically develop and test a patient decision aid to promote SDM in prostate cancer screening, following the Knowledge to Action framework. Methods: (1) Feasibility study: a quantitative survey evaluating the population and clinician (urologists and general practitioners) knowledge of the benefits and risks derived from PSA determination and the awareness of the available recommendations. Focus groups to explore the challenges patients and clinicians face when discussing prostate cancer screening, the relevance of a decision aid and how best to integrate it into practice. (2) Patient decision aid development: Based on this data, an evidence-based multicomponent SDM patient decision aid will be developed. (3) User-testing: an assessment of the prototype of the initial patient decision aid through a user-testing design based on mix-methods (questionnaire and semi-structured review). The decision aid will be refined through several iterative cycles of feedback and redesign. (4) Validation: an evaluation of the patient decision aid through a cluster-randomised controlled trial. Discussion: The designed patient decision aid will provide balanced information on screening benefits and risks and should help patients to consider their personal preferences and to take a more active role in decision making. Conclusions: The well-designed patient decision aid (PDA) will provide balanced information on screening benefits and risks and help patients consider their personal preferences. abstract_id: PUBMED:29346783 Does [-2]Pro-Prostate Specific Antigen Meet the Criteria to Justify Its Inclusion in the Clinical Decision-Making Process? Introduction: To assess whether [-2]pro-prostate-specific antigen (p2PSA) meets the criteria to justify its inclusion in a predictive model of prostate cancer (PCa) diagnosis and in the clinical decision-making process. Materials And Methods: A total 172 men with total prostate-specific antigen of 2-10 ng/mL underwent measurement of free PSA and p2PSA before prostate biopsy in an observational and prospective study. From these measurements, the Prostate Health Index (PHI) was calculated. Clinical and analytical predictive models were created incorporating PHI. Results: Of 172 men, 72 (42%) were diagnosed with PCa, 33 (46%) of whom were found to be with high-grade disease. PHI score was the most predictive of biopsy outcomes in terms of discriminative ability (area under the curve = 0.79), with an added gain in predictive accuracy of 17%. All the models that incorporated PHI worked better in terms of calibration close to 45° on the slope. In the decision curve analysis, at a threshold probability of 40% we could prevent 82 biopsies, missing only 16 tumors and 5 high-grade tumors. Conclusions: PHI score is a more discriminant biomarker, has superior calibration and superior net benefit, and provides a higher rate of avoided biopsies; thus, it can be useful for aiding in making a more informed decision for each patient. Answer: Prostate-specific antigen density (PSAD) has been evaluated as a tool to aid in the decision-making process for prostate biopsies, particularly in patients with benign digital rectal examination (DRE) and prostate-specific antigen (PSA) levels between 4.0 and 10.0 ng/ml. The studies have shown varying results regarding the utility of PSAD in this context. One study found that while the mean PSAD value could significantly distinguish between benign prostate conditions and prostate cancer in patients with PSA > 4.0 ng/ml, it did not provide significant stratification in patients with PSA levels between 4.0 and 10.0 ng/ml. The study concluded that PSAD had only moderate additive value in deciding whether to omit biopsy for individual patients with benign DRE and PSA in this range. Age-specific reference ranges of PSA could prevent unnecessary diagnostic procedures without missing prostate cancer cases (PUBMED:8821683). Another study suggested that combining PSAD with multiparametric magnetic resonance imaging (mpMRI) could improve triage for prostate biopsy. It was found that using PIRADS <3 and PSAD <0.2 ng/mL^2 as criteria for not performing a biopsy was the safest. In biopsy-naïve patients, this approach could have avoided 21.5% of biopsies while missing only 3.7% of clinically significant prostate cancers (CSPC). In the repeat biopsy group, 12.6% of biopsies could have been avoided, missing 6.9% of CSPC (PUBMED:31067560). A different study incorporated MRI and PSA density-based tools for personalized decision-making and found that combining MRI scoring with PSA density for risk stratification enabled safe biopsy avoidance and limited the over-diagnosis of insignificant cancers (PUBMED:38263825). In summary, PSAD can play a role in the decision-making process for prostate biopsies, particularly when combined with other diagnostic tools such as mpMRI. However, its value may be limited when used in isolation, and it should be considered as part of a broader assessment that includes age-specific PSA reference ranges and potentially other biomarkers or imaging results.
Instruction: Can we improve the management of acute coronary syndromes in primary care? Abstracts: abstract_id: PUBMED:37084014 Intensive nurse-led follow-up in primary care to improve self-management and compliance behaviour after myocardial infarction. Aims And Objectives: To assess the effects of intensive follow-up by primary care nurses on cardiovascular disease self-management and compliance behaviours after myocardial infarction. Background: Although cardiovascular disease prevention and cardiac rehabilitation take place in hospital settings, a nurse-led approach is necessary in primary care during the first few months after a myocardial infarction. Therefore, it is important to assess self-management of cardiovascular disease and levels of compliance with the prescribed diet, physical activity, and medication. Design: The study used a multicentre, quasi-experimental, pre-post design without a control group. Methods: Patients with acute coronary syndrome from 40 healthcare facilities were included in the study. A total of 212 patients participated in a programme including 11 interventions during the first 12-18 months after myocardial infarction. The following Nursing Outcomes Classification (NOC) outcomes were assessed at baseline and at the end of the intervention: Self-management: Cardiac Disease (1617) and Compliance Behaviour: Prescribed Diet (1622), Compliance Behaviour: Prescribed Activity (1632), and Compliance Behaviour: Prescribed Medication (1623). Marjory Gordon's functional health patterns and a self-care notebook were used in each intervention. Pre-post intervention means were compared using Student's t-tests for related samples. The results of the study are reported in compliance with the TREND Statement. Results: A total of 132 patients completed the intervention. The indicators for each NOC outcome and the variations in scores before and after the intensive follow-up showed a statistically significant improvement (p-value = 0.000). Compliance Behaviour: Prescribed Diet (pre = 3.7; post = 4.1); Compliance Behaviour: Prescribed Activity (pre = 3.9; post = 4.3); Compliance Behaviour: Prescribed Medication (pre = 3.9; post = 4.7). Conclusion: Intensive, immediate follow-up after myocardial infarction improves compliance behaviours and self-management of heart disease. A combined self-care and family care approach should be encouraged to empower post-myocardial infarction patients. To facilitate patients' self-efficacy, the use of health education tools such as a cardiovascular self-care notebook can also be helpful. Relevance To Clinical Practice: This study highlights the benefits of intensive, protocolised, comprehensive patient follow-up in primary care during the first few months after an acute myocardial infarction (AMI). Primary care nurses train patients in cardiovascular self-care. Patient Or Public Contribution: Patients were not involved in either the design or the carrying out of the study. However, at the end of the study, they participated in an evaluation process about the utility of the research study and their satisfaction with it. This process was carried out using an ad hoc survey consisting of 10 questions assessing the nursing care and follow-up inputs that were received. abstract_id: PUBMED:20417584 Can we improve the management of acute coronary syndromes in primary care? Objective: To evaluate a training programme in the primary care of acute coronary syndrome. Design: A before-during-after comparative cross-sectional study. Setting: Health areas of Lleida, High Pyrenees and Aran (Spain), from 2003-2006. Participants: Patients with acute coronary syndrome admitted to the intensive medicine unit and the coronary unit of the Arnau de Vilanova University Hospital of Lleida after being referred from home, general practitioner or health centre. Intervention: A training program was introduced to establish protocols of pre-hospital performance in acute coronary syndrome (the administration of acetylsalicylic acid, nitroglycerin and morphine chloride, the performing of an electrocardiogram, the insertion of an intravenous tube and to speed up care times). Main Measures: Linear trend of the three periods of the study was analyzed through prevalences ratio and linear trend test. Results: The intervention showed a statistically significant linear increase in the application of the aforementioned therapeutic procedures, with the exception of nitroglycerin, which started out with a higher baseline level, and an improvement of care times. The application of an electrocardiogram obtained almost optimal levels. Care times considerably improved. Conclusions: Training programs are a useful tool in improving treatment of acute coronary syndromes in primary care. Advance diagnosis and an early start to treatment almost certainly results in a decrease of its morbidity-mortality. abstract_id: PUBMED:29779296 The assessment and management of chest pain in primary care: A focus on acute coronary syndrome Background: Chest pain is a common presentation and diagnosis can be challenging. There are many causes for chest pain, including life-threatening conditions such as acute coronary syndrome (ACS), which can prove difficult to diagnose. Objective: This article focuses on diagnosis and early management of patients with possible ACS. Key differentials and essential primary care investigations and management are outlined. Hospital-based risk stratification and management are described, providing an outline of what patients can expect if referred to hospital. Discussion: In primary care, an electrocardiogram (ECG) is the only investigation required for most patients while referral is made to hospital. Troponin testing should rarely be requested to investigate patients with suspected ACS in the primary care setting. Initial treatment may include aspirin, glyceryl trinitrate and oxygen if required. If ACS is suspected as the cause of the symptoms, urgent referral for definitive risk stratification is required. abstract_id: PUBMED:27829379 Evidence-based interventions in primary care following acute coronary syndrome in Australia and New Zealand: a systematic scoping review. Background: Coronary artery disease has a significant disease burden, but there are many known barriers to management of acute coronary syndrome (ACS). General practitioners (GPs) bear considerable responsibility for post-discharge management of ACS in Australia and New Zealand (NZ), but knowledge about the extent and efficacy of such management is limited. This systematic review summarises published evidence from Australia and New Zealand regarding management in primary care after discharge following ACS. Methods: A search of PubMed, Scopus, CINAHL-Plus and PSYCINFO databases in August 2015 was supplemented by citation screening and hand-searching. Literature was selected based on specified criteria, and assessed for quality using the Mixed Methods Appraisal Tool (MMAT). Extracted data was related to evidence-based interventions specified by published guidelines. Results: The search yielded 19 publications, most of which reported on quantitative and observational studies from Australia. The majority of studies scored at least 75 % on the MMAT. Diverse aspects of management by GPs are presented according to categories of evidence-based guidelines. Data suggests that GPs are more likely to prescribe ACS medications than to assist in lifestyle or psychological management. GP referral to cardiac rehabilitation varied, and one study showed an improvement in the number of ACS patients with documented ACS management plans. Few studies described successful interventions to improve GP management, though some quality improvement efforts through education and integration of care with hospitals were beneficial. Limited data was published about interventions effective in rural, minority, and Indigenous populations. Conclusions: Research reflects room for improvement in GP post-discharge ACS management, but little is known about effective methods for improvement. Additional research, both observational and interventional, would assist GPs in improving the quality of post-discharge ACS care. abstract_id: PUBMED:32207352 Transition of care to prevent recurrence after acute coronary syndrome: the critical role of the primary care provider and pharmacist. Despite therapeutic advances, patients with acute coronary syndrome (ACS) are at an increased long-term risk of recurrent cardiovascular events. This risk continues to rise as the number of associated comorbidities, often observed in patients presenting with ACS, increases. Such a level of clinical complexity can lead to gaps in care and subsequently worse outcomes. Guidelines recommend providing an evidence-based post-discharge plan to prevent readmission and recurrent ACS, including cardiac rehabilitation, medication, patient/caregiver education, and ongoing follow-up. A patient-centric multidisciplinary approach is critical for the effective management of the transition of care from acute care in the hospital setting to the outpatient care setting in patients with ACS. Ongoing communication between in-hospital and outpatient healthcare providers ensures that the transition is smooth. Primary care providers and pharmacists have a pivotal role to play in the effective management of transitions of care in patients with ACS. Guideline recommendations regarding the post-discharge care of patients with ACS and the role of the primary care provider and the pharmacist in the management of transitions of care will be reviewed. abstract_id: PUBMED:24661903 Management of chest pain: a prospective study from Norwegian out-of-hours primary care. Background: Chest pain is a common diagnostic challenge in primary care and diagnostic measures are often aimed at confirming or ruling out acute ischaemic heart disease. The aim of this study was to investigate management of patients with chest pain out-of-hours, including the use of ECG and laboratory tests, assessment of severity of illness, and the physicians' decisions on treatment and admittance to hospital. Methods: Data were registered prospectively from four Norwegian casualty clinics. Data from structured telephone interviews with 100 physicians shortly after a consultation with a patient presenting at the casualty clinic with "chest pain" were analysed. Results: A total of 832 patients with chest pain were registered. The first 100 patients (corresponding doctor-patient pairs) were included in the study according to the predefined inclusion criteria. Median age of included patients was 46 years, men constituted 58%. An ECG was taken in 92 of the patients. Of the 24 patients categorised to acute level of response, 15 had a NACA-score indicating a potentially or definitely life-threatening medical situation. 50 of the patients were admitted to a hospital for further management, of which 43 were thought to have ischaemic heart disease. Musculoskeletal pain was the second most common cause of pain (n = 22). Otherwise the patients were thought to have a variety of conditions, most of them managed at a primary care level. Conclusions: Patients with chest pain presenting at out-of-hours services in Norway are investigated for acute heart disease, but less than half are admitted to hospital for probable acute coronary syndrome, and only a minority is given emergency treatment for acute coronary syndrome. A wide variety of other diagnoses are suggested by the doctors for patients presenting with chest pain. Deciding the appropriate level of response for such patients is a difficult task, and both over- and under-triage probably occur in out-of-hours primary care. abstract_id: PUBMED:25390501 The transition from hospital to primary care for patients with acute coronary syndrome: insights from registry data. Despite guidelines advocating routine engagement with proven secondary prevention strategies, current uptake and access remain suboptimal. Australian registries of acute coronary syndrome have predominantly focused on inpatient care and have highlighted under-referral to cardiac rehabilitation, gaps in medication prescription and limited commencement of lifestyle change before discharge from hospital. Opportunities to improve equity, access and uptake of secondary prevention include better transition between acute and primary care, systematic delivery of prevention in primary care, workforce strengthening and embracing new technologies. Adopting a structured framework for delivery of secondary prevention in primary care will enhance continuity of care and improve coordination of services after acute coronary syndrome. abstract_id: PUBMED:28985344 Point-of-care testing in primary care patients with acute cardiopulmonary symptoms: a systematic review. Background: Point-of-care tests (POCT) can assist general practitioners (GPs) in diagnosing and treating patients with acute cardiopulmonary symptoms, but it is currently unknown if POCT impact relevant clinical outcomes in these patients. Objective: To assess whether using POCT in primary care patients with acute cardiopulmonary symptoms leads to more accurate diagnosis and impacts clinical management. Methods: We performed a systematic review in four bibliographic databases. Articles published before February 2016 were screened by two reviewers. Studies evaluating the effect of GP use of POCT on clinical diagnostic accuracy and/or effect on treatment and referral rate in patients with cardiopulmonary symptoms were included. Results: Our search yielded nine papers describing data from seven studies, on the clinical diagnostic accuracy of POCT in a total of 2277 primary care patients with acute cardiopulmonary symptoms. Four papers showed data on GP use of D-dimer POCT in pulmonary embolism (two studies); two studies on Troponin T in acute coronary syndrome; one on heart-type fatty acid-binding protein (H-FABP) in acute coronary syndrome; one on B-type natriuretic peptide (BNP) in heart failure; one on 3-in-1 POCT (Troponin T, BNP, D-dimer) in acute coronary syndrome, heart failure and/or pulmonary embolism. Only one study assessed the effect of GP use of POCT on treatment initiation and one on actual referral rates. Conclusion: There is currently limited and inconclusive evidence that actual GP use of POCT in primary care patients with acute cardiopulmonary symptoms leads to more accurate diagnosis and affects clinical management. However, some studies show promising results, especially when a POCT is combined with a clinical decision rule. abstract_id: PUBMED:35819357 Frequency and management of emergencies in primary care offices: A cross-sectional study in northwestern Germany. Background: Little literature exists on emergencies within primary care offices. Objectives: We aimed to study the occurrence of emergencies and confidence in dealing with them among primary care physicians (PCPs) in Germany. Methods: We conducted a cross-sectional study among all PCPs with licences to practice with an own office (n = 915) in a northwestern region in Germany in 2019. Participants were asked to estimate the frequency and type of emergencies that occurred in the last 12 months in their office and about their confidence in managing emergency situations. Results: Answers from 375 PCPs could be analysed (response: 41.0%); 95.7% reported at least one emergency in their office within the last 12 months (mean 12.9). PCPs from rural offices reported more emergencies (on average 13.7 vs. 9.6). Acute coronary syndrome, cardiac arrhythmia and dyspnoea were the most common emergencies. A greater likelihood of feeling more confident in managing medical emergencies was found among male physicians, general internists, PCPs additionally qualified as emergency physicians and those with previous training in the emergency department and intensive care unit. In contrast, more general practitioners felt secure treating paediatric emergencies than general internists (highest level of confidence 22.1% vs. 16.3%). Conclusion: In Germany, emergencies in primary care offices occur on average once a month and more often in rural than urban areas. While most PCPs are confident in managing medical emergencies, some differences related to the training path became apparent. Ongoing training programmes may be tailored to improve emergency skills. abstract_id: PUBMED:31326246 Communicating acute coronary syndrome risk to women in primary care: A scoping review of the literature. Objectives: Delay from symptom onset to hospital arrival drives poor outcomes in acute coronary syndrome (ACS), particularly for women. Primary care clinicians can discuss ACS with high-risk women, potentially reducing delay. We conducted a scoping review to assess what is known about ACS risk communication to women in primary care. Methods: We used Arksey and O'Malley's framework. The PubMed, CINAHL, PsycINFO, and Embase databases were searched for relevant articles from inception through September, 2018. No restrictions on study methodology were applied. At least two reviewers assessed each article. Articles addressing risk communication, coronary heart disease, and ACS, related to primary care settings, and including women were retained. Results: Eleven articles met inclusion criteria. Cardiovascular disease (CVD) risk communication is common in primary care; however, ACS symptoms are rarely discussed. Structured risk calculators are used to frame discussions. Communication styles include patient-centered discussions, paternalistic orders, and "scare tactics;" no single style is more effective. Analysis of gender differences in risk communication is extremely limited. Conclusion: There is scant evidence that primary care clinicians communicate effectively about ACS risk, symptoms, and appropriate symptom response. Practice Implications: Interventions are needed to improve communication about ACS to at-risk women in the primary care setting. Answer: Yes, we can improve the management of acute coronary syndromes (ACS) in primary care. Several studies have demonstrated various strategies and interventions that can enhance the management of ACS in the primary care setting: 1. A study showed that intensive follow-up by primary care nurses after myocardial infarction significantly improved self-management and compliance behaviors related to diet, physical activity, and medication adherence (PUBMED:37084014). 2. A training program for primary care professionals on pre-hospital performance protocols for ACS, including the administration of medications, performing electrocardiograms, and improving care times, resulted in a statistically significant increase in the application of therapeutic procedures and improved care times (PUBMED:20417584). 3. In primary care, the use of an electrocardiogram (ECG) is essential for most patients with suspected ACS, and initial treatment may include aspirin, glyceryl trinitrate, and oxygen if required. Urgent referral for definitive risk stratification is necessary if ACS is suspected (PUBMED:29779296). 4. Research from Australia and New Zealand suggests that general practitioners (GPs) are more likely to prescribe medications for ACS than to assist in lifestyle or psychological management. There is room for improvement in post-discharge ACS management by GPs, and additional research could help improve the quality of care (PUBMED:27829379). 5. The transition of care from hospital to primary care is critical for preventing recurrence after ACS. Primary care providers and pharmacists play a pivotal role in managing transitions of care and ensuring adherence to evidence-based post-discharge plans (PUBMED:32207352). 6. Point-of-care testing (POCT) in primary care patients with acute cardiopulmonary symptoms has shown promising results, especially when combined with a clinical decision rule, although evidence is limited and inconclusive regarding its impact on diagnosis and clinical management (PUBMED:28985344). 7. Communication about ACS risk, symptoms, and appropriate symptom response to at-risk women in primary care is an area that requires improvement. Interventions are needed to enhance this aspect of care (PUBMED:31326246). In conclusion, improving the management of ACS in primary care involves a multifaceted approach that includes training programs, protocol development, effective use of diagnostic tools, transition of care coordination, and effective communication strategies. These interventions can lead to better outcomes for patients with ACS.
Instruction: Can severely impaired cortical transit predict which children with pelvi-ureteric junction stenosis detected antenatally might benefit from pyeloplasty? Abstracts: abstract_id: PUBMED:21178646 Can severely impaired cortical transit predict which children with pelvi-ureteric junction stenosis detected antenatally might benefit from pyeloplasty? Introduction: Until recently, renogram, performed in children with pelvi-ureteric junction stenosis detected antenatally, has not been able to predict the probability of function improvement after surgery or the risk of function deterioration in case of conservative attitude. Recently, Schlotmann et al. have suggested that cortical transit might have this predictive role. The aim of this study, focused on those kidneys with severely impaired cortical transit, was to verify this statement. Methods: All renograms performed in children during a 3-year period (n=729) were retrospectively reviewed and 19 pediatric patients were selected based on an antenatally detected unilateral pelvi-ureteric junction syndrome, the existence of at least two renograms during the follow-up and a severe cortical transit impairment, visually defined. Twenty-six pairs of data could be analyzed and allowed comparing preoperative and postoperative differential renal function (DRF), and the DRF changes during the conservative management. Results: Among the 16 patients who underwent pyeloplasty, 10 showed a significant DRF improvement. Among the 10 patients with conservative follow-up, four showed a significant DRF deterioration. Conclusion: Severely impaired cortical transit seems to be a valuable marker of those patients who could benefit from a pyeloplasty, either because of the high probability of postoperative DRF improvement, or because of DRF deterioration in case of an conservative approach. However, a normal cortical transit, as defined in this study, does not exclude the risk of DRF deterioration. Alternatively, the design of this study does not allow excluding the fact that DRF might improve after pyeloplasty despite a normal cortical transit. abstract_id: PUBMED:23602207 Cortical transit time as a predictive marker of the need for surgery in children with pelvi-ureteric junction stenosis: preliminary study. Introduction: Postnatal management of prenatally detected hydronephrosis remains controversial. It has been suggested that cortical transit time (CTT) could successfully predict deterioration in children with pelvi-ureteric junction (PUJ) obstruction. We decided to conduct a retrospective study in our hydronephrosis population to evaluate whether initial CTT was significantly correlated with the need for surgery. Patients And Method: We reviewed the charts of all our patients managed for significant PUJ obstruction (&gt;12 mm) between 2007 and 2010 and determined CTT retrospectively, on the first diuretic scan of each of these patients. We then determined the relationship between initial CTT and the need for surgery. Results: We identified 37 patients with hydronephrosis (pelvic size &gt;12 mm) of which 26 were diagnosed prenatally. Out of 22 patients with an initial abnormal CTT, 20 underwent surgery. Out of 15 children with a normal initial CTT, 4 underwent surgery (OR 27.5 (IC95%: 4.3-174.9)). Conclusion: Initial CTT could be a reliable prognostic factor for future evolution of renal function in children with hydronephrosis. CTT is easy to determine on diuretic renal scan. A prospective trial is being devised to confirm what role it could have in the management of children with hydronephrosis. abstract_id: PUBMED:23525768 The antenatally detected pelvi-ureteric junction stenosis: advances in renography and strategy of management. This review includes an analysis of new developments in the field of renography, the predictive factors suggesting the need for pyeloplasty in cases of pelvi-utereric stenosis detected antenatally and integration of the pelvi-ureteric junction stenosis within the framework of antenatally detected hydronephrosis. abstract_id: PUBMED:27843217 Robot-assisted pyeloplasty for pelvi-ureteric junction obstruction of lower moiety in partial duplex system: A technical challenge. Management of pelvi-ureteric junction obstruction (PUJO) in a duplex system is technically challenging as dissection at the pelvis may jeopardize the vascularity of the normal moiety ureter. Anastomosing the pelvis to the one single ureter will have a risk of future development of stricture which then will risk both the moieties. Robotic assistance enables appropriate tissue dissection; minimal handling of normal ureter and precision in suturing, overcoming the potential challenges involved in the minimally invasive management of such complex cases. We report the feasibility and efficacy of robot-assisted laparoscopic pyeloplasty in such case. abstract_id: PUBMED:25837703 Functional outcomes of early versus delayed pyeloplasty in prenatally diagnosed pelvi-ureteric junction obstruction. Unlabelled: Although initial conservative management is popular in the management of antenatally detected pelvi ureteric junction (PUJ) obstruction [1-3], several authors [4,5] have questioned this approach and expressed concern about failure to recover the function lost during expectant management following surgery. In this single center prospective study, we have compared the functional outcomes following early versus delayed pyeloplasty in SFU grade 3-4 PUJ obstruction. Methods: Among those children, who presented between 2004 and 2013, with prenatal diagnoses of unilateral PUJ obstruction (n = 886), those with SFU grade 1 or 2 hydronephrosis on USG (n = 533) were excluded. In the remaining 353 children with SFU grade 3 and 4 hydronephrosis, 243 had obstructive pattern on radionuclide scan. After excluding those with severely impaired or supranormal split renal function (SRF), palpable mass, single kidney status, bilateral disease and associated other urological anomalies a total of 126 children were included in the study group. Parents who were unwilling for a frequent follow-up underwent early pyeloplasty, (Group I: n = 62) while the remaining underwent initial conservative management, with 3 monthly USG and nuclear scans (Group II; n = 64). In this group pyeloplasty was performed whenever there was deterioration in SRF &gt;10%, or urine infection or pain during the follow-up. A standard open dismembered pyeloplasty was performed by the same surgeon in all patients. Radionuclide scan was performed at 1 year, at the same center using the same protocol, to assess final SRF and drainage. The functional outcomes were compared using student's t test and chi square test. Results: Group I comprised of 62 patients while Group II 64 patients. The mean age at pyeloplasty was 2.8 months in group I while 12.5 months in group II. There was no significant difference in the initial antero posterior diameter (APD) between the groups; 30.2 (±3.2) mm in group I and 29.6 (±3.7) mm in group II. At 1-year follow up after surgery, there was improvement in the APD, 16.8 (±4.2) mm in group I and 18.2 (±4.5) mm in group II, with no significant difference between them. In group I, the initial mean SRF was 34.1% (±6.4) and there was significant improvement (p = 0.01) in mean SRF to 37.2 (±7.1) at 1-year follow up after surgery. In group II, the mean SRF was 35.9 (5.7) initially and there was a deterioration to 32.6 (±5.5) before surgery (Figure). At 1-year follow up after surgery, there was a marginal improvement to 33.5 (5.6), however it was significantly lower compared to the initial SRF (p = 0.01). Compared to initial function, at 1-year follow up after pyeloplasty, SRF improved in significantly higher number of patients; 17/62 (27.4%) in group I while only 7/64 (10.9%) in group II (p = 0.03) (Table). There was significantly fewer patients with deterioration in final SRF at 8/62 (12.9%) in group I compared to 22/64 (34.4%) in group II (p = 0.03). Discussion: Although several publications [1-3] have reported functional recovery during initial conservative treatment of PUJ obstruction, in our study a large proportion of patients (80%) in Group II had loss of function during follow-up. This is probably because the study population included only SFU grade 3-4 with obstructive renogram. Several authors have expressed concern about irreversible loss of renal function during expectant management [4,5]. Findings of our study reveal that irrespective of initial SRF, early pyeloplasty in prenatally diagnosed SFU grade 3-4 PUJ obstruction leads to significant improvement of SRF, while delayed pyeloplasty leads to a marginal but, significant loss. This fact should be highlighted to parents so that informed decisions can be made regarding early versus delayed surgery. abstract_id: PUBMED:23726167 Predicting the clinical outcome of antenatally detected unilateral pelviureteric junction stenosis. Objective: To determine, in children with antenatally detected pelviureteric junction (PUJ) stenosis, what factors may be predictive for deterioration of differential renal function (DRF) in case of conservative treatment or improvement of DRF in case of pyeloplasty. Methods: This study analyzed and compared the initial level of hydronephrosis, DRF, quality of renal drainage, and cortical transit with the late DRF outcome. We reviewed the medical charts of 161 consecutive children with antenatally diagnosed PUJ stenosis during a 10-year period (between 1997 and 2007). From this cohort, we retained 81 children with unilateral PUJ and strictly normal contralateral kidney, with a median follow-up of 67 months. Repeated ultrasounds, voiding cystourethrography, and radionuclide renograms were performed in all children. Results: Fifty patients never underwent a surgical intervention (62%), whereas surgical repair (Anderson-Hynes dismembered pyeloplasty) was performed in 31 (38%). During conservative follow-up, DRF deterioration was observed in 11% of patients. After pyeloplasty, DRF improvement was observed in 25% of patients. Abnormal cortical transit was the only predictive factor of DRF deterioration in case of conservative approach, whereas the initial degree of hydronephrosis, or renal drainage, and the initial DRF level were not predictive. In children who were operated on, only impaired cortical transit was predictive of DRF improvement postoperatively. Conclusion: Conservative management of children with unilateral PUJ stenosis is a safe procedure. Impaired cortical transit although imperfect, seems the best criterion for identifying children for whom pyeloplasty is warranted. abstract_id: PUBMED:27286113 Ureterocalycostomy - final resort in the management of secondary pelvi-ureteric junction obstruction: our experience. Ureterocalycostomy can be performed in patients in whom desired methods of treating secondary PUJ (Pelvi-Ureteric Junction) obstructions either failed or could not be used. In our study, one child and two adults in whom one redo-ureterocalycostomy and two ureterocalycostomies were performed for severely scarred PUJ. The causes for secondary PUJ obstruction were post-pyelolithotomy in one case, post-pyeloplasty and ureterocalycostomy for PUJ obstruction in the second patient and the third patient had long upper ureteric stricture post-ureteropyeloplasty due to tuberculosis. In all these cases ureterocalycostomy proved to be salvage/final resort for preserving functional renal unit. abstract_id: PUBMED:28988673 Predictive value of cortical transit time on MAG3 for surgery in antenatally detected unilateral hydronephrosis caused by ureteropelvic junction stenosis. Background: In children with antenatally detected hydronephrosis caused by ureteropelvic junction (UPJ) stenosis, the main challenge is preserving renal function by identifying children who require early surgical intervention from those for whom watchful waiting may be appropriate because of the potential for spontaneous resolution without a significant loss of renal function. Objective: To assess the impact of initial cortical transit time (CTT) on technetium-99m mercaptoacetyltriglycerine (MAG3) diuretic renogram on the need for surgery in children with antenatally detected unilateral hydronephrosis caused by UPJ stenosis. Study Design: We retrospectively reviewed the medical records of 33 patients with antenatally detected unilateral hydronephrosis caused by UPJ stenosis who were managed at our institution between 2006 and 2014. Delayed CTT was defined as the absence of activity in the subcortical structures within 3 min of tracer injection on a MAG3 scan. The surgical indication includes symptomatic UPJ stenosis, seriously deteriorating hydronephrosis with parenchymal thinning on serial USG, split renal function &lt;40%, or progressive deterioration of split renal function (&gt;5%) on a MAG3 scan. This study analyzed and compared the initial level of Society for Fetal Urology grade, anteroposterior diameter (APD), split renal function, drainage pattern on a diuretic renogram, and CTT with the need for surgery. Results: Of the 33 children, 16 were classified into the delayed CTT group and 17 were placed in the normal CTT group. During the follow-up period (mean 31.8 months), surgery was needed in 75.0% (12/16) of patients in the delayed CTT group and in 5.9% (1/17) of those in the normal CTT group. Multivariate analysis showed that delayed CTT on initial MAG3 scan and APD on initial ultrasonography were independent predictive factors of the need for surgery. Discussion: In this era of conservative management of antenatally detected hydronephrosis caused by UPJ stenosis, it is critical to identify which measurement on an image study is the most reliable for predicting the need for surgery by reflecting functional deterioration. In this study, multivariate analysis revealed that CTT on the initial MAG3 scan was an independent predictive factor of the need for surgery. In addition, CTT showed high negative predictive value for surgical need in children with antenatally detected unilateral hydronephrosis caused by UPJ stenosis. Conclusion: CTT on an initial MAG3 scan may be a useful predictor of the need for surgery in children with antenatally detected unilateral hydronephrosis caused by UPJ stenosis. abstract_id: PUBMED:8777624 Retrograde balloon dilatation for primary pelvi-ureteric junction stenosis in children. Objective: To evaluate the efficacy of retrograde balloon dilatation for primary pelvi-ureteric junction (PUJ) stenosis (RBDP) in children. Patients And Methods: Sixteen children (12 boys, four girls; mean age 29 months) with primary PUJ stenosis initially underwent RBDP. A double-pigtail catheter was left in the ureter for 6 weeks to provide drainage. The initial diagnosis of PUJ stenosis and the subsequent post-operative assessment were both made by diuretic renography in almost all patients. All children were followed up for between 9 and 38 months (mean 25) after the procedure. Results: RBDP was unsuccessful in nine of 17 renal units. In three of these, failure was caused by inability to pass the catheter through the PUJ or vesico-ureteric junction (VUJ). In six cases, failure was caused by recurrent or persistent stenosis. In eight of 17 renal units, RBDP was successful in relieving the stenosis. The morbidity from the procedure was minimal. Conclusions: Long-term follow-up revealed a significant failure rate for RBDP, caused by the inability to pass the catheter through the VUJ or PUJ, or persistent and/or recurrent stenosis. Recurrent stenosis may result from excessive dilatation which traumatizes the PUJ and causes subsequent scarring. Failure to overcome the stenosis is thought to be caused by a large redundant renal pelvis, producing 'kinking' of the upper ureter. In these cases, only reduction pyeloplasty may be successful in relieving the obstruction. abstract_id: PUBMED:8944519 Antegrade endopyelotomy for pelvi-ureteric junction obstruction in children. Objective: To compare the results of repairing pelvi-ureteric junction (PUJ) obstruction by percutaneous endopyelotomy in children with a similar series carried out in adults. Patients And Methods: Nine children with primary (six) or secondary (three) PUJ obstruction were treated using a one-stage cold-knife percutaneous endopyelotomy. The success and morbidity rates were compared with a series of 61 adults with primary (46) or secondary (15) PUJ obstruction treated similarly. Results: In children, endopyelotomy was successful in five of six with primary and two of three with secondary PUJ obstruction, with a mean follow-up of 30 months (range 18-56). In the adults, endopyelotomy was successful in 38 of 46 (83%) with primary and 12 of 15 with secondary PUJ obstruction, an overall success rate of 82%, with a mean follow-up of 42 months (range 9-86). There was no statistical difference in the success rates with primary and secondary endopyelotomy between adults and children (P = 0.58). Failures were associated with high-grade hydronephrosis, a stenotic segment &gt; 1.5 cm long and technical problems. Morbidity occurred in one of nine children and 3.2% of the adults. Conclusion: This early experience suggests that percutaneous endopyelotomy can be performed safely and successfully in children with primary PUJ obstruction. However, in secondary stenosis, the results were less than optimal. Larger series should be analysed to form definitive conclusions on the role of endopyelotomy for the treatment of PUJ obstruction in children, given the high rate of success of open pyeloplasty and its minimal morbidity. Answer: Severely impaired cortical transit has been suggested as a valuable marker for identifying children with pelvi-ureteric junction stenosis detected antenatally who could benefit from pyeloplasty. Studies have shown that among patients with severely impaired cortical transit, a significant number demonstrated differential renal function (DRF) improvement following pyeloplasty. Conversely, a portion of patients managed conservatively without surgery showed significant DRF deterioration (PUBMED:21178646). This suggests that severely impaired cortical transit could be predictive of those who may benefit from surgical intervention. Additional research supports the predictive value of cortical transit time (CTT) in determining the need for surgery in children with pelvi-ureteric junction obstruction. A retrospective study found that an initial abnormal CTT was significantly correlated with the need for surgery, with a high proportion of patients with abnormal CTT undergoing surgery compared to those with normal CTT (PUBMED:23602207). Furthermore, a study analyzing predictive factors for deterioration of DRF in conservative treatment or improvement of DRF in case of pyeloplasty found that impaired cortical transit was predictive of DRF improvement postoperatively (PUBMED:23726167). Another study concluded that CTT on an initial MAG3 scan may be a useful predictor of the need for surgery in children with antenatally detected unilateral hydronephrosis caused by ureteropelvic junction stenosis (PUBMED:28988673). In summary, the evidence suggests that severely impaired cortical transit can indeed predict which children with antenatally detected pelvi-ureteric junction stenosis might benefit from pyeloplasty, as it is associated with a higher probability of postoperative DRF improvement and a risk of DRF deterioration with conservative management.
Instruction: Can a redesign of emergency pharmacist roles improve medication management? Abstracts: abstract_id: PUBMED:24210425 Can a redesign of emergency pharmacist roles improve medication management? A prospective study in three Australian hospitals. Background: Emergency departments (EDs) face increasing service demands and the imposition of treatment targets which has led to continual process redesign and changes in staff skill mix and functions. Objective: To identify extended ED pharmacist roles that could improve medication management and to implement and evaluate one such role change. Methods: A focus group of clinicians sought to redesign processes around ED medication management. Preparation of medication charts for patients admitted from ED was selected. Baseline data were obtained to define elements of existing medication charting processes. Suitably trained ED pharmacists' trialled the effectiveness of making therapeutic suggestions on a medication chart 'sticker' or by direct 'consultation' with medical staff. At the conclusion of the study focus groups at each site evaluated clinical staff perceptions of the change. Results: Focus group participants thought that ED pharmacists could undertake extended roles in analgesia, nausea control, antibiotic cover, addiction management and preparation of medication charts for admitted patients. In the pre-intervention audit (n = 140), 74% required at least one intervention by the ward pharmacists to address medication discrepancies. Ward pharmacists detected 292 medication discrepancies (median 1, IQR 0-3). In the 'sticker' intervention the ED pharmacist made 84 therapeutic suggestions of which 66 (78.6%) were accepted by medical staff. In the 'consultation' intervention 230 therapeutic suggestions were made of which 219 (95.2%) were accepted. The qualitative evaluation found that pharmacist-prepared medication charts within the processes established were deemed safe, timely, accurate, complete and legible. Conclusion: Support exists for ED pharmacists to expand their medication-related roles. ED pharmacists can safely prepare medication charts in a timely fashion and their therapeutic suggestions within a 'consultative' framework are more beneficial than written advice. However, issues relating to resourcing, hours of service, service focus, statutory restrictions and training support for extended roles require resolution. abstract_id: PUBMED:20554575 Medication reconciliation in the emergency department: opportunities for workflow redesign. Aim: To examine the role of workflow redesign to improve medication reconciliation at four Washington State community hospital emergency departments. Method: Lean redesign methodology was used for workflow process mapping and redesign workshops attended by emergency department staff. Observations were made about barriers to successful operation of current medication reconciliation workflows, and ideal future process maps were developed to improve the efficacy of creating a current, complete and accurate medication list for each patient seen in the emergency department. Conclusion: Ideas for an optimal workflow to generate a medication list include involving patients and utilising clerical staff to a greater extent in medication information gathering, identifying and flagging patients with missing medication information, and gathering only the medication information needed to make clinical decisions in the emergency department. abstract_id: PUBMED:28970432 Quality of Life and Medication Adherence of Independently Living Older Adults Enrolled in a Pharmacist-Based Medication Management Program. This study sought to understand the medication adherence and quality of life (QOL) of recipients of a pharmacist-based medication management program among independently living older adults. Using a cross-sectional, quasi-experimental study design, we compared older adults enrolled in the program to older adults not enrolled in the program. Data were collected via face-to-face interviews in independent-living facilities and in participants' homes. Independently living older adults who were enrolled in the medication management program (n = 38) were compared to older adults not enrolled in the program (control group (n = 41)). All participants were asked to complete questionnaires on health-related quality of life (QOL, using the SF-36) and medication adherence (using the four-item Morisky scale). The medication management program recipients reported significantly more prescribed medications (p &lt; 0.0001) and were more likely to report living alone (p = 0.01) than the control group. The medication management program recipients had a significantly lower SF-36 physical functioning score (p = 0.03) compared to the control group, although other SF-36 domains and self-reported medication adherence were similar between the groups. Despite taking more medications and more commonly living alone, independent living older adults enrolled in a pharmacist-based medication management program had similar QOL and self-reported medication adherence when compared to older adults not enrolled in the program. This study provides initial evidence for the characteristics of older adults receiving a pharmacist-based medication management program, which may contribute to prolonged independent living and positive health outcomes. abstract_id: PUBMED:36674208 Impact of Partnered Pharmacist Medication Charting (PPMC) on Medication Discrepancies and Errors: A Pragmatic Evaluation of an Emergency Department-Based Process Redesign. Medication errors are more prevalent in settings with acutely ill patients and heavy workloads, such as in an emergency department (ED). A pragmatic, controlled study compared partnered pharmacist medication charting (PPMC) (pharmacist-documented best-possible medication history [BPMH] followed by clinical discussion between a pharmacist and medical officer to co-develop a treatment plan and chart medications) with early BPMH (pharmacist-documented BPMH followed by medical officer-led traditional medication charting) and usual care (traditional medication charting approach without a pharmacist-collected BPMH in ED). Medication discrepancies were undocumented differences between medication charts and medication reconciliation. An expert panel assessed the discrepancies' clinical significance, with 'unintentional' discrepancies deemed 'errors'. Fewer patients in the PPMC group had at least one error (3.5%; 95% confidence interval [CI]: 1.1% to 5.8%) than in the early BPMH (49.4%; 95% CI: 42.5% to 56.3%) and usual care group (61.4%; 95% CI: 56.3% to 66.7%). The number of patients who need to be treated with PPMC to prevent at least one high/extreme error was 4.6 (95% CI: 3.4 to 6.9) and 4.0 (95% CI: 3.1 to 5.3) compared to the early BPMH and usual care group, respectively. PPMC within ED, incorporating interdisciplinary discussion, reduced clinically significant errors compared to early BPMH or usual care. abstract_id: PUBMED:38143495 Clinical and economic impact of partnered pharmacist medication charting in the emergency department. Introduction: Partnered pharmacist medication charting (PPMC), a process redesign hypothesised to improve medication safety and interdisciplinary collaboration, was trialed in a tertiary hospital's emergency department (ED). Objective: To evaluate the health-related impact and economic benefit of PPMC. Methods: A pragmatic, controlled study compared PPMC to usual care in the ED. PPMC included a pharmacist-documented best-possible medication history (BPMH), followed by a clinical conversation between a pharmacist and a medical officer to jointly develop a treatment plan and chart medications. Usual care included medical officer-led traditional medication charting in the ED, without a pharmacist-obtained BPMH or clinical conversation. Outcome measures, assessed after propensity score matching, were length of hospital or ED stay, relative stay index (RSI), in-hospital mortality, 30-day hospital readmissions or ED revisits, and cost. Results: A total of 309 matched pairs were analysed. The median RSI was reduced by 15.4% with PPMC (p = 0.029). There were no significant differences between the groups in the median length of ED stay (8 vs. 10 h, p = 0.52), in-hospital mortality (1.3% vs. 1.3%, p &gt; 0.99), 30-day readmission rates (21% vs. 17%; p = 0.35) and 30-day ED revisit rates (21% vs. 19%; p = 0.68). The hospital spent approximately $138.4 for the cost of PPMC care per patient to avert at least one medication error bearing high/extreme risk. PPMC saved approximately $1269 on the average cost of each admission. Conclusion: Implementing the ED-based PPMC model was associated with a significantly reduced RSI and admission costs, but did not affect clinical outcomes, noting that there was an additional focus on medication reconciliation in the usual care group relative to current practice at our study site. abstract_id: PUBMED:35801158 Pharmacist impact on medication reconciliation of behavioral health patients boarding in the emergency department. Introduction: The high demand for psychiatric services has exceeded the capacity of available resources for behavioral health patients, forcing these patients to seek mental health care in the emergency department. Average lengths of stay for behavioral health boarders commonly extend over multiple days and prior-to-admission (PTA) medication administration may be delayed, which could lead to further deterioration and longer inpatient lengths of stay. Addition of a pharmacist-led medication reconciliation process and pharmacist integration into daily emergency department psychiatry rounds may decrease time to initiation of PTA medications and improve outcomes in this population. Methods: This is a retrospective review of adult patients who required a psychiatric emergency evaluation in a large rural academic medical center emergency department. Objectives were to determine the number and type of medication discrepancies found with pharmacist intervention, and to compare time to initiation of PTA medications with a pharmacist versus a nonpharmacist completing medication reconciliation. Results: A total of 139 patients were identified, 85 patients in August 2019 (no pharmacist [NP]) and 54 in October 2019 (pharmacist involvement [PI]). Among 484 medications reviewed in the PI group, 298 discrepancies were identified. The most common types of discrepancies were no longer taking (n = 99, 33%) and omission (n = 94, 32%). Time to administration of PTA medications was similar between NP and PI groups (median hours, interquartile range: NP: 10.8, 7.8-16.57; PI: 11.49, 6.16-16; P = .179). Discussion: This study depicted one of the many values of pharmacists in the hospital setting, especially in the behavioral health patient population where continuation of accurate PTA medications may prevent further clinical deterioration. abstract_id: PUBMED:38026998 The impact of partnered pharmacist medication charting in the emergency department on the use of potentially inappropriate medications in older people. Introduction: A process redesign, partnered pharmacist medication charting (PPMC), was recently piloted in the emergency department (ED) of a tertiary hospital. The PPMC model was intended to improve medication safety and interdisciplinary collaboration by having pharmacists work closely with medical officers to review and chart medications for patients. This study, therefore, aimed to evaluate the impact of PPMC on potentially inappropriate medication (PIM) use. Methods: A pragmatic concurrent controlled study compared a PPMC group to both early best-possible medication history (BPMH) and usual care groups. In the PPMC group, pharmacists initially documented the BPMH and collaborated with medical officers to co-develop treatment plans and chart medications in ED. The early BPMH group included early BPMH documentation by pharmacists, followed by traditional medication charting by medical officers in ED. The usual care group followed the traditional charting approach by medical officers, without a pharmacist-collected BPMH or collaborative discussion in ED. Included were older people (≥65 years) presenting to the ED with at least one regular medication with subsequent admission to an acute medical unit. PIM outcomes (use of at least one PIM, PIMs per patient and PIMs per medication prescribed) were assessed at ED presentation, ED departure and hospital discharge using Beers criteria. Results: Use of at least one PIM on ED departure was significantly lower for the PPMC group than for the comparison groups (χ2, p = 0.040). However, PIM outcomes at hospital discharge were not statistically different between groups. PIM outcomes on ED departure or hospital discharge did not differ from baseline within the comparison groups. Discussion: In conclusion, PIM use on leaving ED, but not at hospital discharge, was reduced with PPMC. Close interprofessional collaboration, as in ED, needs to continue on the wards. abstract_id: PUBMED:26012888 Partnered medication review and charting between the pharmacist and medical officer in the Emergency Short Stay and General Medicine Unit. Objective: A partnered medication review and charting model involving a pharmacist and medical officer was implemented in the Emergency Short Stay Unit and General Medicine Unit of a major tertiary hospital. The aim of the study was to describe the safety and effectiveness of partnered medication charting in this setting. Methods: A partnered medication review and charting model was developed. Credentialed pharmacists charted pre-admission medications and venous thromboembolism prophylaxis in collaboration with the admitting medical officer. The pharmacist subsequently had a clinical discussion with the treating nurse regarding the medication management plan for the patient. A prospective audit was undertaken of all patients from the initiation of the service. Results: A total of 549 patients had medications charted by a pharmacist from the 14th of November 2012 to the 30th of April 2013. A total of 4765 medications were charted by pharmacists with 7 identified errors, corresponding to an error rate of 1.47 per 1000 medications charted. Conclusions: Partnered medication review and charting by a pharmacist in the Emergency Short Stay and General Medicine unit is achievable, safe and effective. Benefits from the model extend beyond the pharmacist charting the medications, with clinical value added to the admission process through early collaboration with the medical officer. Further research is required to provide evidence to further support this collaborative model. abstract_id: PUBMED:36510270 Pharmacist's interventions in factors contributing to medication errors reduces medication errors in self-management of patients in the rehabilitation ward. Background: The number of medications, number of administrations per day, dosing frequency on indicated day, and medication from multiple prescriptions are the medication factors prone to medication errors in self-management that have been previously reported. However, whether pharmacists actually intervene in medication factors that affect medication error occurrences in self-management is unclear. Therefore, we conducted this study to clarify these issues. Method: This study included patients who underwent self-management in the rehabilitation ward of Higashinagoya National Hospital. From April 2019 to March 2020, a one-pharmacist period existed, and from April 2020 to March 2021, a two-pharmacist period existed. The number of patient instructions and interventions were expected to increase with an increase in the number of pharmacists. Considering this to be an environment of differential interventions by pharmacists, a pre-post-test design was conducted with all self-managed patients in both the time periods. The primary and secondary endpoints were the proportion of medication error occurrences and proportion of pharmacist's interventions in medication factors, respectively. Result: The proportions of medication error occurrences during the one-pharmacist and two-pharmacist periods were 41% (71/173) and 28% (51/180) (relative risk 0.690, 95% confidential interval 0.515-0.925), respectively. The proportion of pharmacist's interventions in medication factors in the one-pharmacist period was 13% (22/173) and 22% (40/180) in the two-pharmacist period; there was an increase in the proportion of pharmacist's interventions in medication factors in the two-pharmacist period. Conclusion: The proportion of medication error occurrences was significantly lower in the two-pharmacist period than that in the one-pharmacist period. This can be attributed to the increase in the proportion of pharmacist's interventions in medication factors. Therefore, an environment in which pharmacists could intervene in the medication factors to prevent medication errors in advance is necessary. abstract_id: PUBMED:27566629 Use of a service evaluation and lean thinking transformation to redesign an NHS 111 refer to community Pharmacy for Emergency Repeat Medication Supply Service (PERMSS). Objectives: To demonstrate the contribution of community pharmacy from NHS 111 referrals out of hours (OOH) for emergency supply repeat medication requests via presentation of service activity, community pharmacist feedback and lean thinking transformation. Design: Descriptive service evaluation using routine service activity data over the pilot period; survey of community pharmacists, and service redesign through lean thinking transformation. Setting: North East of England NHS 111 provider and accredited community pharmacies across the North East of England. Participants: Patients calling the North East of England NHS 111 provider during OOH with emergency repeat medication supply requests. Interventions: NHS 111 referral to community pharmacies for assessment and if appropriate, supply of emergency repeat medication. Main Outcome Measures: Number of emergency repeat medication supply referrals, completion rates, reasons for rejections, time of request, reason for access, medication(s), pharmaceutical advice and services provided. Secondary outcomes were community pharmacist feedback and lean thinking transformation of the patient pathway. Results: NHS 111 referred 1468 patients to 114 community pharmacies (15/12/2014-7/4/2015). Most patients presented on Saturdays, with increased activity over national holidays. Community pharmacists completed 951 (64.8%) referrals providing 2297 medications; 412 were high risk. The most common reason for rejecting referrals was no medication in stock. Community pharmacists were positive about the provision of this service. The lean thinking transformation reduced the number of non-added value steps, waits and bottlenecks in the patient pathway. Conclusions: NHS 111 can redirect callers OOH from urgent and emergency care services to community pharmacy for management of emergency repeat medication supply. Existing IT and community pharmacy regulations allowed patients to receive a medication supply and pharmaceutical advice. Community pharmacists supported integration into the NHS OOH services. Adopting lean thinking provided a structured framework to evaluate and redesign the service with the aim to improve effectiveness and efficiency. Answer: Yes, a redesign of emergency pharmacist roles can improve medication management. Studies have shown that extended roles for emergency department (ED) pharmacists, such as preparation of medication charts for admitted patients, can lead to safer and more efficient medication management. For instance, in one study, ED pharmacists made therapeutic suggestions either on a medication chart 'sticker' or through direct 'consultation' with medical staff, resulting in a high acceptance rate of these suggestions (78.6% for 'sticker' and 95.2% for 'consultation') (PUBMED:24210425). Another study demonstrated that partnered pharmacist medication charting (PPMC), which involves a pharmacist-documented best-possible medication history followed by a clinical discussion with a medical officer to co-develop a treatment plan, significantly reduced medication discrepancies and errors compared to traditional medication charting approaches (PUBMED:36674208). Furthermore, PPMC was associated with a reduced relative stay index (RSI) and admission costs, indicating economic benefits alongside improvements in medication safety (PUBMED:38143495). In the context of behavioral health patients boarding in the ED, pharmacist-led medication reconciliation did not significantly decrease time to initiation of prior-to-admission medications but did identify a high number of discrepancies, highlighting the value of pharmacists in accurate medication management (PUBMED:35801158). Additionally, PPMC was found to reduce the use of potentially inappropriate medications (PIMs) in older people upon departure from the ED, suggesting that pharmacist involvement can enhance the appropriateness of medication use in vulnerable populations (PUBMED:38026998). In general medicine units, partnered medication review and charting between pharmacists and medical officers proved to be safe and effective, with a low error rate in medications charted by pharmacists (PUBMED:26012888). Moreover, increasing the number of pharmacists involved in medication management led to a significant reduction in medication error occurrences, emphasizing the importance of pharmacist interventions in medication factors (PUBMED:36510270). Lastly, service redesign using lean thinking transformation, as applied to an NHS 111 refer to community Pharmacy for Emergency Repeat Medication Supply Service (PERMSS), showed that community pharmacists could effectively manage emergency repeat medication supply requests, improving the efficiency of the service (PUBMED:27566629).
Instruction: Does the UKCAT predict performance on exit from medical school? Abstracts: abstract_id: PUBMED:27855088 Does the UKCAT predict performance on exit from medical school? A national cohort study. Objectives: Most UK medical programmes use aptitude tests during student selection, but large-scale studies of predictive validity are rare. This study assesses the UK Clinical Aptitude Test (UKCAT: http://www.ukcat.ac.uk), and 4 of its subscales, along with individual and contextual socioeconomic background factors, as predictors of performance during, and on exit from, medical school. Methods: This was an observational study of 6294 medical students from 30 UK medical programmes who took the UKCAT from 2006 to 2008, for whom selection data from the UK Foundation Programme (UKFPO), the next stage of UK medical education training, were available in 2013. We included candidate demographics, UKCAT (cognitive domains; total scores), UKFPO Educational Performance Measure (EPM) and national exit situational judgement test (SJT). Multilevel modelling was used to assess relationships between variables, adjusting for confounders. Results: The UKCAT-as a total score and in terms of the subtest scores-has significant predictive validity for performance on the UKFPO EPM and SJT. UKFPO performance was also affected positively by female gender, maturity, white ethnicity and coming from a higher social class area at the time of application to medical school An inverse pattern was seen for a contextual measure of school, with those attending fee-paying schools performing significantly more weakly on the EPM decile, the EPM total and the total UKFPO score, but not the SJT, than those attending other types of school. Conclusions: This large-scale study, the first to link 2 national databases-UKCAT and UKFPO, has shown that UKCAT is a predictor of medical school outcome. The data provide modest supportive evidence for the UKCAT's role in student selection. The conflicting relationships of socioeconomic contextual measures (area and school) with outcome adds to wider debates about the limitations of these measures, and indicates the need for further research. abstract_id: PUBMED:27704290 Do personality traits assessed on medical school admission predict exit performance? A UK-wide longitudinal cohort study. Traditional methods of assessing personality traits in medical school selection have been heavily criticised. To address this at the point of selection, "non-cognitive" tests were included in the UK Clinical Aptitude Test, the most widely-used aptitude test in UK medical education (UKCAT: http://www.ukcat.ac.uk/ ). We examined the predictive validity of these non-cognitive traits with performance during and on exit from medical school. We sampled all students graduating in 2013 from the 30 UKCAT consortium medical schools. Analysis included: candidate demographics, UKCAT non-cognitive scores, medical school performance data-the Educational Performance Measure (EPM) and national exit situational judgement test (SJT) outcomes. We examined the relationships between these variables and SJT and EPM scores. Multilevel modelling was used to assess the relationships adjusting for confounders. The 3343 students who had taken the UKCAT non-cognitive tests and had both EPM and SJT data were entered into the analysis. There were four types of non-cognitive test: (1) libertariancommunitarian, (2) NACE-narcissism, aloofness, confidence and empathy, (3) MEARS-self-esteem, optimism, control, self-discipline, emotional-nondefensiveness (END) and faking, (4) an abridged version of 1 and 2 combined. Multilevel regression showed that, after correcting for demographic factors, END predicted SJT and EPM decile. Aloofness and empathy in NACE were predictive of SJT score. This is the first large-scale study examining the relationship between performance on non-cognitive selection tests and medical school exit assessments. The predictive validity of these tests was limited, and the relationships revealed do not fit neatly with theoretical expectations. This study does not support their use in selection. abstract_id: PUBMED:33483439 Does the UKCAT predict performance in medical and dental school? A systematic review. Objectives: For the first time, this systematic review provides a summary of the literature exploring the relationship between performance in the UK Clinical Aptitude Test (UKCAT) and assessments in undergraduate medical and dental training. Design: In accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analysis, relevant studies were identified through systematic literature searches. Electronic searches were carried out on EBSCO, EMBASE, Educational Resources Information Centre, SCOPUS, Web of Knowledge. Studies which included the predictive validity of selection criteria including some element of the UKCAT were considered. Results: 22 papers were identified for inclusion in the study. Four studies describe outcomes from dental programmes with limited results reported. 18 studies reported on relationships between the UKCAT and performance in undergraduate medical training. Of these, 15 studies reported relationships between the UKCAT cognitive tests and undergraduate medical assessments. Weak relationships (r=0.00-0.29) were observed in 14 of these studies; four studies reported some moderate relationships (r=0.30-0.49). The strongest relationships with performance in medical school were observed for the UKCAT total score and the verbal reasoning subtest. Relationships with knowledge-based assessments scores were higher than those for assessments of skills as the outcome. Relationships observed in small (single and double centre studies) were larger than those observed in multicentre studies. Conclusion: The results indicate that UKCAT scores predict performance in medical school assessments. The relationship is generally weak, although noticeably stronger for both the UKCAT total score and the verbal reasoning subtest. There is some evidence that UKCAT continues to predict performance throughout medical school. We recommend more optimal approaches to future studies. This assessment of existing evidence should assist medical/dental schools in their evaluation of selection processes. abstract_id: PUBMED:27638740 Predictive validity of the UKCAT for medical school undergraduate performance: a national prospective cohort study. Background: The UK Clinical Aptitude Test (UKCAT) has been shown to have a modest but statistically significant ability to predict aspects of academic performance throughout medical school. Previously, this ability has been shown to be incremental to conventional measures of educational performance for the first year of medical school. This study evaluates whether this predictive ability extends throughout the whole of undergraduate medical study and explores the potential impact of using the test as a selection screening tool. Methods: This was an observational prospective study, linking UKCAT scores, prior educational attainment and sociodemographic variables with subsequent academic outcomes during the 5 years of UK medical undergraduate training. The participants were 6812 entrants to UK medical schools in 2007-8 using the UKCAT. The main outcome was academic performance at each year of medical school. A receiver operating characteristic (ROC) curve analysis was also conducted, treating the UKCAT as a screening test for a negative academic outcome (failing at least 1 year at first attempt). Results: All four of the UKCAT scale scores significantly predicted performance in theory- and skills-based exams. After adjustment for prior educational achievement, the UKCAT scale scores remained significantly predictive for most years. Findings from the ROC analysis suggested that, if used as a sole screening test, with the mean applicant UKCAT score as the cut-off, the test could be used to reject candidates at high risk of failing at least 1 year at first attempt. However, the 'number needed to reject' value would be high (at 1.18), with roughly one candidate who would have been likely to pass all years at first sitting being rejected for every higher risk candidate potentially declined entry on this basis. Conclusions: The UKCAT scores demonstrate a statistically significant but modest degree of incremental predictive validity throughout undergraduate training. Whilst the UKCAT could be considered a fairly crude screening tool for future academic performance, it may offer added value when used in conjunction with other selection measures. Future work should focus on the optimum role of such tests within the selection process and the prediction of post-graduate performance. abstract_id: PUBMED:19930512 Does the UKCAT predict Year 1 performance in medical school? Context: The need to identify the best applicants for medicine and to ensure that selection is fair and ethical has led to the development of alternative, or additional, selection tools. One such tool is the United Kingdom Clinical Aptitude Test, or UKCAT. To date there have been no studies of the predictive validity of the UKCAT. Objectives: This study set out to identify whether UKCAT total score and subtest scores predict Year 1 outcomes in medical school. Methods: Year 1 students starting in 2007 at the University of Aberdeen or University of Dundee medical schools were included. Data collected were: UKCAT scores; Universities and Colleges Admissions Service (UCAS) form scores; admission interview scores; final Year 1 degree examination scores, and records of re-sitting examinations and of withdrawing from a course. Correlations were used to select variables for multiple regression analysis to predict examination scores. Results: Data were available for 341 students. Examination scores did not correlate with UKCAT total or subtest scores. Neither UCAS form score nor admission interview score predicted outcomes. None of the UKCAT scores were reliably associated with withdrawals (P-values for all comparisons &gt; 0.05). Only the decision analysis subtest was associated with re-sits of examinations, but the difference in means was contrary to the direction anticipated (P = 0.025, 95% confidence interval = 6.1-89.7). Discussion: UKCAT scores did not predict Year 1 performance at the two medical schools. Although early prediction is arguably not the primary aim of the UKCAT, there is some cause for concern that the test failed to show even the small-to-moderate predictive power demonstrated by similar admissions tools. abstract_id: PUBMED:24002854 Validity of the UKCAT in applicant selection and predicting exam performance in UK dental students. The United Kingdom's Clinical Aptitude Test (UKCAT) aims to assess candidates' "natural talent" for dentistry. The aim of this study was to determine the validity of the UKCAT for dental school applicant selection. The relationship of the UKCAT with demographic and academic variables was examined, assessing if the likelihood of being offered a place at a UK dental school was predicted by demographic factors and academic selection tools (predicted grades and existing school results). Finally, the validity of these selection tools in predicting first-year dental exam performance was assessed. Correlational and regression analyses showed that females and poorer students were more likely to have lower UKCAT scores. Gender and social class did not, however, predict first-year dental exam performance. UKCAT scores predicted the likelihood of the candidate being offered a place in the dental course; however, they did not predict exam performance during the first year of the course. Indeed, the only predictor of dental exam performance was existing school results. These findings argue against the use of the UKCAT as the sole determinant in dental applicant selection, instead highlighting the value of using existing school results. abstract_id: PUBMED:32891164 UKCAT and medical student selection in the UK - what has changed since 2006? Background: The United Kingdom Clinical Aptitude Test (UKCAT) is an aptitude test used since 2006 within selection processes of a consortium of UK medical and dental schools. Since 2006, student numbers have increased in medical training and schools now have an increased focus on widening access. A growing evidence base has emerged around medical student selection (Patterson et al., Med Educ 50:36-60, 2016) leading to changes in practice. However, whilst some papers describe local selection processes, there has been no overview of trends in selection processes over time across Universities. This study reports on how the use of the UKCAT in medical student selection has changed and comments on other changes in selection processes. Methods: Telephone interviews were conducted annually with UKCAT Consortium medical schools. Use of the UKCAT was categorised and data analysed to identify trends over time. Results: The number of schools using the UKCAT to select applicants for interview has risen, with cognitive test results contributing significantly to outcomes at this stage at many universities. Where schools use different weighted criteria (Factor Method), the UKCAT has largely replaced the use of personal statements. Use of the test at offer stage has also increased; the most significant use being to discriminate between applicants at a decision borderline. A growing number of schools are using the UKCAT Situational Judgement Test (SJT) in selection. In 2018, all but seven (out of 26) schools made some adjustment to selection processes for widening access applicants. Multiple Mini Interviews (MMIs) are now used by the majority of schools. Whilst medical student numbers have increased over this time, the ratio of applicants to places has fallen. The probability of applicants being invited to interview or receiving an offer has increased. Conclusions: More medical schools are using the UKCAT in undergraduate selection processes in an increasing number of ways and with increasing weight compared with 2007. It has replaced the use of personal statements in all but a few Consortium medical schools. An increased focus on academic attainment and the UKCAT across medical schools may be leading to the need for schools to interview and make offers to more applicants. abstract_id: PUBMED:24919950 Predictive power of UKCAT and other pre-admission measures for performance in a medical school in Glasgow: a cohort study. Background: The UK Clinical Aptitude Test (UKCAT) and its four subtests are currently used by 24 Medical and Dental Schools in the UK for admissions. This longitudinal study examines the predictive validity of UKCAT for final performance in the undergraduate medical degree programme at one Medical School and compares this with the predictive validity of the selection measures available pre-UKCAT. Methods: This was a retrospective observational study of one cohort of students, admitted to Glasgow Medical School in 2007. We examined the associations which UKCAT scores, school science grades and pre-admissions interview scores had with performance indicators, particularly final composite scores that determine students' postgraduate training opportunities and overall ranking (Educational Performance Measure - EPM, and Honours and Commendation - H&amp;C). Analyses were conducted both with and without adjustment for potential socio-demographic confounders (gender, age, ethnicity and area deprivation). Results: Despite its predictive value declining as students progress through the course, UKCAT was associated with the final composite scores. In mutually adjusted analyses (also adjusted for socio-demographic confounders), only UKCAT total showed independent relationships with both EPM (p = 0.005) and H&amp;C (p = 0.004), school science achievements predicted EPM (p = 0.009), and pre-admissions interview score predicted neither. UKCAT showed less socio-demographic variation than did TSS. Conclusion: UKCAT has a modest predictive power for overall course performance at the University of Glasgow Medical School over and above that of school science achievements or pre-admission interview score and we conclude that UKCAT is the most useful predictor of final ranking. abstract_id: PUBMED:29724213 The ability of 'non-cognitive' traits to predict undergraduate performance in medical schools: a national linkage study. Background: In addition to the evaluation of educational attainment and intellectual ability there has been interest in the potential to select medical school applicants on non-academic qualities. Consequently, a battery of self-report measures concerned with assessing 'non-cognitive' traits was piloted as part of the UK Clinical Aptitude Test (UKCAT) administration to evaluate their potential to be used in selection. Methods: The four non-cognitive instruments piloted were: 1) the Libertarian-communitarian scale, (2) The NACE (narcissism, aloofness, confidence and empathy, (3) the MEARS (Managing emotions and resilience scale; self-esteem, optimism, control, self-discipline, emotional-nondefensiveness and faking, and (4) an abridged version of instruments (1) and (2) combined. Non-cognitive scores and sociodemographic characteristics were available for 14,387 applicants. A series of univariable and multivariable analyses were conducted in order to assess the ability of the non-cognitive scores to predict knowledge and skills-based performance, as well as the odds of passing each academic year at first attempt. Non-cognitive scores and medical performance were standardised within cohorts. Results: The scores on the non-cognitive scales showed only very small (magnitude of standardised betas&lt; 0.2), though sometimes statistically significant (p &lt; 0.01) univariable associations with subsequent performance on knowledge or skills-based assessments. The only statistically significant association between the non-cognitive scores and the probability of passing an academic year at first attempt was the narcissism score from one the abridged tests (OR 0.84,95% confidence intervals 0.71 to 0.97, p = 0.02). Conclusions: Our findings are consistent with previously published research. The tests had a very limited ability to predict undergraduate academic performance, though further research on identifying narcissism in medical students may be warranted. However, the validity of such self-report tools in high-stakes settings may be affected, making such instruments unlikely to add value within the selection process. abstract_id: PUBMED:29535485 Widening Participation To The Medical Course At Queens University Belfast. The United Kingdom Clinical Aptitude Test (UKCAT) was introduced to assist in identification of applicants from all levels of society with the appropriate characteristics to become good doctors. Evidence that the UKCAT has achieved such widened participation (WP) in applicants to medical school remains elusive. One of the limitations to WP investigation has been that data on socioeconomic status of applicants to medical schools has been obtained through voluntary submission on application to UKCAT and up to 30% of applications offered either none or only limited information. In this study of local applicants (451 from Northern Ireland) to Queens University Belfast (QUB) for 2012, socioeconomic data was ascertained through post code analysis. These data were utilized to investigate the relationship between affluence, application to the medical school and UKCAT score. Our study has shown that for NI applicants to QUB medical school, postcode /socioeconomic back ground accounts for only 3 percent of UK CAT score variation. We have also shown that our admissions process is largely independent of socioeconomic background. However we have demonstrated that the socioeconomic profile of applicants from Northern Ireland to QUB medical school is such that even if every applicant to QUB in 2012 were offered a place in the medical school the number of applicants from least affluent areas would increase by only 9. In conclusion efforts to achieve meaningful WP must be directed at raising aspirations for a career in Medicine within the community. Answer: Yes, the UKCAT has been shown to predict performance on exit from medical school. A national cohort study found that the UKCAT, including its total score and subtest scores, has significant predictive validity for performance on the UK Foundation Programme Educational Performance Measure (EPM) and the national exit situational judgement test (SJT) (PUBMED:27855088). Another study also found that the UKCAT scores demonstrate a statistically significant but modest degree of incremental predictive validity throughout undergraduate training (PUBMED:27638740). Additionally, a systematic review indicated that UKCAT scores predict performance in medical school assessments, with the relationship being generally weak but stronger for the UKCAT total score and the verbal reasoning subtest (PUBMED:33483439). However, it is important to note that the predictive validity of the UKCAT is modest, and the relationships revealed do not fit neatly with theoretical expectations (PUBMED:27704290). Moreover, the use of the UKCAT as a sole screening tool for future academic performance may not be optimal, as it could lead to rejecting candidates who might have passed all years at first sitting (PUBMED:27638740). The UKCAT's predictive power for overall course performance has also been described as modest over and above that of school science achievements or pre-admission interview scores (PUBMED:24919950). In summary, while the UKCAT does have some predictive power for performance on exit from medical school, its predictive validity is limited, and it should be used in conjunction with other selection measures rather than as the sole determinant in medical student selection (PUBMED:27638740, PUBMED:24919950).
Instruction: Could bystander first-aid prevent trauma deaths at the scene of injury? Abstracts: abstract_id: PUBMED:17448104 Could bystander first-aid prevent trauma deaths at the scene of injury? Objective: To identify potentially preventable prehospital deaths following traumatic cardiac arrest. Methods: Deaths following prehospital traumatic cardiac arrest during 2003 were reviewed in the state of Victoria, Australia. Possible survival with optimal bystander first-aid and shorter ambulance response times were identified. Injury Severity Scores (ISS) were calculated. Victims with an ISS &lt;50 and signs of life were reviewed for potentially preventable factors contributing to death including signs of airway obstruction, excessive bleeding and/or delayed ambulance response times. Results: We reviewed 112 cases that had full ambulance care records, hospital records and autopsy details in Victoria 2003. Most deaths involved road trauma and 55 victims had an ISS &lt;50. Twelve patients received first-aid from bystanders. Ambulance response times &gt;10 min might have contributed to five deaths with an ISS &lt;25. Conclusion: Five (4.5%) potentially preventable prehospital trauma deaths were identified. Three deaths potentially involved airway obstruction and two involved excessive bleeding. There is a case for increased awareness of the need for bystander first-aid at scene following major trauma. abstract_id: PUBMED:37013526 Assessing bystander first aid: development and validation of a First Aid Quality Assessment (FAQA) tool. Background: Injuries are one of the leading causes of death worldwide. Bystanders at the scene can perform first aid measures before the arrival of health services. The quality of first aid measures likely affects patient outcome. However, scientific evidence on its effect on patient outcome is limited. To properly assess bystander first aid quality, measure effect, and facilitate improvement, validated assessment tools are needed. The purpose of this study was to develop and validate a First Aid Quality Assessment (FAQA) tool. The FAQA tool focuses on first aid measures for injured patients based on the ABC-principle, as assessed by ambulance personnel arriving on scene. Methods: In phase 1, we drafted an initial version of the FAQA tool for assessment of airway management, control of external bleeding, recovery position and hypothermia prevention. A group of ambulance personnel aided presentation and wording of the tool. In phase 2 we made eight virtual reality (VR) films, each presenting an injury scenario where bystander performed first aid. In phase 3, an expert group discussed until consensus on how the FAQA tool should rate each scenario. Followingly, 19 respondents, all ambulance personnel, rated the eight films with the FAQA tool. We assessed concurrent validity and inter-rater agreement by visual inspection and Kendall's coefficient of concordance. Results: FAQA-scores by the expert group concurred with ± 1 of the median of the respondents on all first aid measures for all eight films except one case, where a deviation of 2 was seen. The inter-rater agreement was "very good" for three first aid measures, "good" for one, and "moderate" for the scoring of overall quality on first aid measures. Conclusion: Our findings show that it is feasible and acceptable for ambulance personnel to collect information on bystander first aid with the FAQA tool and will be of importance for future research on bystander first aid for injured patients. abstract_id: PUBMED:28262281 Prehospital deaths from trauma: Are injuries survivable and do bystanders help? Background And Objectives: Deaths from trauma occurring in the prehospital phase of care are typically excluded from analysis in trauma registries. A direct historical comparison with Hussain and Redmond's study on preventable prehospital trauma deaths has shown that, two decades on, the number of potentially preventable deaths remains high. Using updated methodology, we aimed to determine the current nature, injury severity and survivability of traumatic prehospital deaths and to ascertain the presence of bystanders and their role following the point of injury including the frequency of first-aid delivery. Methods: We examined the Coroners' inquest files for deaths from trauma, occurring in the prehospital phase, over a three-year period in the Cheshire and Manchester (City), subsequently referred to as Manchester, Coronial jurisdictions. Injuries were scored using the Abbreviated-Injury-Scale (AIS-2008), Injury Severity Score (ISS) calculated and probability of survival estimated using the Trauma Audit and Research Network's outcome prediction model. Results: One hundred and seventy-eight deaths were included in the study (one hundred and thirty-four Cheshire, forty-four Manchester). The World Health Organisation's recommendations consider those with a probability of survival between 25-50% as potentially preventable and those above 50% as preventable. The median ISS was 29 (Cheshire) and 27.5 (Manchester) with sixty-two (46%) and twenty-six (59%) respectively having a probability of survival in the potentially preventable and preventable ranges. Bystander presence during or immediately after the point of injury was 45% (Cheshire) and 39% (Manchester). Bystander intervention of any kind was 25% and 30% respectively. Excluding those found dead and those with a probability of survival less than 25%, bystanders were present immediately after the point of injury or "within minutes" in thirty-three of thirty-five (94%) Cheshire and ten of twelve (83%) Manchester. First aid of any form was attempted in fourteen of thirty-five (40%) and nine of twelve (75%) respectively. Conclusions: A high number of prehospital deaths from trauma occur with injuries that are potentially survivable, yet first aid intervention is infrequent. Following injury there is a potential window of opportunity for the provision of bystander assistance, particularly in the context of head injury, for simple first-aid manoeuvres to save lives. abstract_id: PUBMED:28363752 Are prehospital deaths from trauma and accidental injury preventable? A direct historical comparison to assess what has changed in two decades. Background & Objectives: In 1994, Hussain and Redmond revealed that up to 39% of prehospital deaths from accidental injury might have been preventable had basic first aid care been given. Since then there have been significant advances in trauma systems and care. The exclusion of prehospital deaths from the analysis of trauma registries, giv en the high rate of those, is a major limitation in prehospital research on preventable death. We have repeated the 1994 study to identify any changes over the years and potential developments to improve patient outcomes. Methods: We examined the full Coroner's inquest files for prehospital deaths from trauma and accidental injury over a three-year period in Cheshire. Injuries were scored using the Abbreviated-Injury-Scale (AIS-1990) and Injury Severity Score (ISS), and probability of survival estimated using Bull's probits to match the original protocol. Results: One hundred and thirty-four deaths met our inclusion criteria; 79% were male, average age at death was 53.6 years. Sixty-two were found dead (FD), fifty-eight died at scene (DAS) and fourteen were dead on arrival at hospital (DOA). The predominant mechanism of injury was fall (39%). The median ISS was 29 with 58 deaths (43%) having probability of survival of &gt;50%. Post-mortem evidence of head injury was present in 102 (76%) deaths. A bystander was on scene or present immediately after injury in 45% of cases and prior to the Emergency Medical Services (EMS) in 96%. In 93% of cases a bystander made the call for assistance, in those DAS or DOA, bystander intervention of any kind was 43%. Conclusions: The number of potentially preventable prehospital deaths remains high and unchanged. First aid intervention of any kind is infrequent. There is a potentially missed window of opportunity for bystander intervention prior to the arrival of the ambulance service, with simple first-aid manoeuvres to open the airway, preventing hypoxic brain injury and cardiac arrest. abstract_id: PUBMED:38185649 Effects of an advanced first aid course or real-time video communication with ambulance personnel on layperson first response for building-site severe injury events: a simulation study. Background: The risk of high-energy trauma injuries on construction sites is relatively high. A delayed response time could affect outcomes after severe injury. This study assessed if an advanced first aid course for first aid response for laypersons (employees or apprentices) in the construction industry or real-time video communication and support with ambulance personnel, or neither, together with access to an advanced medical kit, would have an effect on immediate layperson vital responses in a severe injury scenario. Method: This was a controlled simulation study. Employees or apprentices at a construction site were recruited and randomly allocated into a group with video support or not, and advanced first aid course or not, and where one group had both. The primary outcomes were correct behavior to recognize and manage an occluded airway and correct behavior to stop life-threatening bleeding from a lower extremity injury. Secondary outcomes included head-to-toe assessment performed, placement of a pelvic sling, and application of remote vital signs monitors. Results: Ninety participants were included in 10 groups of 3 for each of 4 exposures. One group was tested first as a baseline group, and then later after having done the training course. Live video support was effective in controlling bleeding. A first aid course given beforehand did not seem to be as effective on controlling bleeding. Video support and the first aid course previously given improved the ability of bystanders to manage the airway, the combination of the two being no better than each of the interventions taken in isolation. Course exposure and video support together were not superior to the course by itself or video by itself, except regarding placing the biosensors on the injured after video support. Secondary results showed an association between video support and completing a head-to-toe assessment. Both interventions were associated with applying a pelvic sling. Conclusion: These findings show that laypersons, here construction industry employees, can be supported to achieve good performance as first responders in a major injury scenario. Prior training, but especially live video support without prior training, improves layperson performance in this setting. abstract_id: PUBMED:32683059 Redefining Preventable Death-Potentially Survivable Motorcycle Scene Fatalities as a New Frontier. Background: The National Academies of Science, Engineering, and Medicine defined a roadmap to achieve zero preventable trauma deaths. In the United States, there are over 5000 motorcycle fatalities annually. Florida leads the nation in annual motorcycle crash (MCC) deaths and injuries. It is unknown how many are potentially preventable. We hypothesize that certain patterns of injuries in on-scene fatalities that are potentially survivable and aim to make recommendations to achieve the National Academies of Science, Engineering, and Medicine objective. Materials And Methods: Miami-Dade County medical examiner reports of MCC deaths pronounced on scene, and emergency medical service or law enforcement reports from 2010 to 2012 were reviewed by board-certified trauma surgeons. Causes of death were categorized into exsanguination, traumatic brain injury or decapitation, crushed chest, or airway complications. Determination of potentially survivable versus nonsurvivable injuries was based upon whether the riders had potentially survivable injuries and had they been transported immediately to a trauma center. Traumatic brain injury cases were reviewed by a board-certified neurosurgeon. Results: Sixty MCC scene deaths were analyzed. Ninety-five percent were men, 55% were helmeted, and 42% had positive toxicology. The median Injury Severity Score was 41 (Range 14-75, IQR 31-75). Nineteen (32%) deaths were potentially survivable, with death due to airway in 14 (23%) and exsanguination in 4 (7%) patients. Conclusions: One-third of on-scene urban motorcycle deaths are potentially survivable in a young patient population. ISS score comparison demonstrates the lower injury burden in those deemed potentially survivable. Automatic alert systems in motorcycles and first responder training to police are recommended to improve trauma system efficacy in reducing preventable deaths from MCCs. abstract_id: PUBMED:25805428 Practice of first aid in burn related injuries in a developing country. Introduction: First aid with cool running water reduces the severity of burn. Low level of knowledge of first aid in burns was shown in previous studies with few patients receiving first aid by water lavage. A study investigating the use of water lavage as first aid in patients presenting to hospital with burn in Lagos, Nigeria was carried out. Methods: Patients admitted to a University Teaching Hospital for treatment of burns were recruited for this prospective study. Data detailing demographics, scene and aetiology of burns, material used for first aid, who administered first aid, level of education and relationship of first-aider with patients, length of hospital stay, complications and outcome of treatment were collected and statistical analysis performed. Results: 168 patients; 73 (43.4%) children and 95 (56.6%) adults were seen. Burns were sustained at home in 95 (74.2%) cases and outside in 33 (25.8%). Water lavage was used in 49 (29.2%) cases, raw eggs in 21 (12.5%), pap in 16 (9.5%) and other materials in 48.8%. 40 (23.8%) patients had not received any form of first aid at presentation. Patients that received no water first aid had higher complication rate (35.3% versus 18.4%) compared with those that had water first aid. Conclusion And Recommendations: The use of water first aid in burns was shown to reduce complication rate in this study. People should be educated on the efficacy of water first aid in pre-hospital care of burns. abstract_id: PUBMED:22267326 First-aid algorithms in dental avulsion. Almost one fourth of traumatic dental injuries occur at schools or in their surroundings. Prevalence of tooth avulsion varies from 0.5% to 16% of all cases of dental trauma. Children with dental avulsion may seek help from school nurses so they should be able to provide first-aid treatment. However, many studies showed that the general level of knowledge of medical staff concerning tooth avulsion is unsatisfactory and that it could be improved by educational activities. This article attempts to give short algorithms of first-aid management of avulsed tooth. abstract_id: PUBMED:36935818 Symptoms of post-traumatic stress disorder among first aid providers. Background: Every year, large numbers of individuals are present or provide first aid in situations involving out-of-hospital cardiac arrest, injuries, or suicides. Little is known about the impact of providing first aid or witnessing a first aid situation, but research indicates that many first aid providers (FAP) experience persistent psychological difficulties. Here we aimed to assess the level of psychological impact of being a FAP. Methods: In this retrospective study, FAP attending follow-up were asked to complete the International Trauma Questionnaire (ITQ), which is a self-report diagnostic measure of post-traumatic stress disorder (PTSD) and complex post-traumatic stress disorder (CPTSD). We recorded endorsement of a symptom or functional impairment (score ≥ 2/4 on at least one of 18 items). Results: Of the 102 FAP in this study, 86 (84%) showed endorsement of a symptom or functional impairment. Common symptoms/functional impairments included being super-alert, watchful, or on guard; having powerful mental images; avoiding internal reminders or memories; and being affected in important parts of one's life. One-third had affected ability to work. Of the FAPs who attended follow-up more than one month after the incident (n = 32), 19% met the criteria for PTSD or CPTSD. Conclusions: The majority of FAPs have endorsement of a symptom or functional impairment. Some FAPs fulfil the criteria of PTSD. We suggest that follow-up should be offered by the EMS to all FAPs involved in incidents with an unconscious patient. abstract_id: PUBMED:31432195 The safety and efficacy of improvised tourniquets in life-threatening hemorrhage: a systematic review. Objective: The increased incidence of mass casualty incident (MCI) with penetrating injuries in the civilian setting creates a call for implementing devices, such as a tourniquet (TQ), in civilian first aid. Bystanders could act as immediate responders after an MCI in order to prevent a victim from exsanguination using direct pressure or commercial tourniquets (C-TQ). Reports have shown that immediate access to C-TQs was not available and bystanders used objects available at the trauma scene to make an improvised tourniquet (I-TQ). The aim of this systematic review of literature was to summarize the existing literature on designs, efficacy and safety of I-TQs. Methods: A systematic review of the literature was performed. Bibliographic databases PubMed, EMBASE.com and Cochrane Library were searched. All types of original studies about I-TQ's were included. Review studies, exempts from textbooks or studies with TQs applied during elective surgeries were excluded. Results: Twenty studies were included. In both simulated experiments and real-life situations, I-TQs outperformed commercial TQs (C-TQ) regarding success rate. Of the I-TQs, the band and windlass design performed most consistently. Although lacking any statistical analysis, there was no reported difference in adverse events between I-TQs and C-TQs. Conclusion: The use of- and training in I-TQ by civilian immediate responders is not recommended because of limited efficacy and safety concerns; direct pressure is a viable alternative. However, I-TQs may save lives when applied correctly with proper objects; therefore, future studies regarding the best design and training in application of effective and safe I-TQs should be encouraged. Answer: Yes, bystander first aid could potentially prevent trauma deaths at the scene of injury. Studies have identified that a significant number of prehospital deaths from trauma are potentially survivable, and bystander intervention, particularly in the form of first aid, could provide a critical window of opportunity to save lives (PUBMED:28262281). For instance, in Victoria, Australia, it was found that 4.5% of prehospital trauma deaths might have been preventable with optimal bystander first aid, especially in cases involving airway obstruction and excessive bleeding (PUBMED:17448104). Similarly, a historical comparison study revealed that the number of potentially preventable prehospital deaths remains high, and first aid intervention is infrequent, indicating a missed opportunity for bystander intervention to prevent hypoxic brain injury and cardiac arrest (PUBMED:28363752). Moreover, the development and validation of a First Aid Quality Assessment (FAQA) tool to assess the quality of bystander first aid based on the ABC-principle (airway, breathing, circulation) suggest that improving the quality of first aid delivered by bystanders could positively affect patient outcomes (PUBMED:37013526). In the context of motorcycle scene fatalities, it was hypothesized that certain patterns of injuries in on-scene fatalities are potentially survivable, and immediate transportation to a trauma center could have made a difference (PUBMED:32683059). Furthermore, a simulation study showed that live video support without prior training improves layperson performance as first responders in a major injury scenario, indicating that real-time guidance could enhance the effectiveness of bystander first aid (PUBMED:38185649). However, it is also noted that the use of improvised tourniquets (I-TQs) by civilian immediate responders is not recommended due to limited efficacy and safety concerns, and direct pressure is suggested as a viable alternative (PUBMED:31432195). In summary, bystander first aid has the potential to prevent trauma deaths at the scene of injury, and efforts to improve bystander intervention through training, awareness, and real-time support could be beneficial in reducing prehospital trauma mortality.
Instruction: Three-dimensional conformal radiation therapy for esophageal squamous cell carcinoma: is elective nodal irradiation necessary? Abstracts: abstract_id: PUBMED:20004527 Three-dimensional conformal radiation therapy for esophageal squamous cell carcinoma: is elective nodal irradiation necessary? Purpose: To evaluate the local control, survival, and toxicity associated with three-dimensional conformal radiotherapy (3D-CRT) for squamous cell carcinoma (SCC) of the esophagus, to determine the appropriate target volumes, and to determine whether elective nodal irradiation is necessary in these patients. Methods And Materials: A prospective study of 3D-CRT was undertaken in patients with esophageal SCC without distant metastases. Patients received 68.4 Gy in 41 fractions over 44 days using late-course accelerated hyperfractionated 3D-CRT. Only the primary tumor and positive lymph nodes were irradiated. Isolated out-of-field regional nodal recurrence was defined as a recurrence in an initially uninvolved regional lymph node. Results: All 53 patients who made up the study population tolerated the irradiation well. No acute or late Grade 4 or 5 toxicity was observed. The median survival time was 30 months (95% confidence interval, 17.7-41.8). The overall survival rate at 1, 2, and 3 years was 77%, 56%, and 41%, respectively. The local control rate at 1, 2, and 3 years was 83%, 74%, and 62%, respectively. Thirty-nine of the 53 patients (74%) showed treatment failure. Seventeen of the 39 (44%) developed an in-field recurrence, 18 (46%) distant metastasis with or without regional failure, and 3 (8%) an isolated out-of-field nodal recurrence only. One patient died of disease in an unknown location. Conclusions: In patients treated with 3D-CRT for esophageal SCC, the omission of elective nodal irradiation was not associated with a significant amount of failure in lymph node regions not included in the planning target volume. Local failure and distant metastases remained the predominant problems. abstract_id: PUBMED:23186308 Three-dimensional conformal radiation for esophageal squamous cell carcinoma with involved-field irradiation may deliver considerable doses of incidental nodal irradiation. Background: To quantify the incidental irradiation dose to esophageal lymph node stations when irradiating T1-4N0M0 thoracic esophageal squamous cell carcinoma (ESCC) patients with a dose of 60 Gy/30f. Methods: Thirty-nine patients with medically inoperable T1-4N0M0 thoracic ESCC were treated with three-dimensional conformal radiation (3DCRT) with involved-field radiation (IFI). The conformal clinical target volume (CTV) was re-created using a 3-cm margin in the proximal and distal direction beyond the barium esophagogram, endoscopic examination and CT scan defined the gross tumor volume (GTV) and a 0.5-cm margin in the lateral and anteroposterior directions of the CT scan-defined GTV. The PTV encompassed 1-cm proximal and distal margins and 0.5-cm radial margin based on the CTV. Nodal regions were delineated using the Japanese Society for Esophageal Diseases (JSED) guidelines and an EORTC-ROG expert opinion. The equivalent uniform dose (EUD) and other dosimetric parameters were calculated for each nodal station. Nodal regions with a metastasis rate greater than 5% were considered a high-risk lymph node subgroup. Results: Under a 60 Gy dosage, the median D mean and EUD was greater than 40 Gy in most high-risk nodal regions except for regions of 104, 106tb-R in upper-thoracic ESCC and 101, 104-R, 105, 106rec-L, 2, 3&amp;7 in middle-thoracic ESCC and 107, 3&amp;7 in lower-thoracic ESCC. In the regions with an EUD less than 40 Gy, most incidental irradiation doses were significantly associated with esophageal tumor length and location. Conclusions: Lymph node stations near ESCC receive considerable incidental irradiation doses with involved-field irradiation that may contribute to the elimination of subclinical lesions. abstract_id: PUBMED:31658709 Three-Dimensional Conformal Radiotherapy-Based or Intensity-Modulated Radiotherapy-Based Concurrent Chemoradiotherapy in Patients with Thoracic Esophageal Squamous Cell Carcinoma. Background: To date, intensity-modulated radiation therapy (IMRT) with concurrent chemoradiotherapy (CCRT) and CCRT with standard fractionation three-dimensional conformal radiation therapy (3D-CRT) have not been compared. In this study, the outcomes of IMRT-based concurrent CCRT and those of 3D-CRT-based CCRT were compared in patients with thoracic esophageal squamous cell carcinoma (TESCC). Methods: We enrolled 2062 patients with TESCC who had received CCRT and categorized them into two groups on the basis of their treatment modality: Group 1 (3D-CRT-based CCRT) and Group 2 (IMRT-based CCRT). Results: Multivariate Cox regression analysis indicated that the American Joint Committee on Cancer advanced stages (≥IIIA) and 3D-CRT were significant independent predictors of poor outcomes in patients with TESCC who received definitive CCRT. Moreover, receiving IMRT-based CCRT (adjusted hazard ratio [aHR]: 0.88, 95% confidence interval [CI]: 0.78-0.98) was a significant independent prognostic factor for overall survival (p = 0.0223). In Group 2, aHRs (95% CIs) for overall mortality at early (IA-IIB) and advanced clinical stages were 0.91 (0.67-1.25, p = 0.5746) and 0.88 (0.77-0.99, p = 0.0368), respectively. Conclusion: IMRT-based CCRT resulted in higher survival rates in patients with advanced clinical stages of TESCC (i.e., IIIA-IIIC), namely, clinical T3, clinical T4, or lymph node involvement. abstract_id: PUBMED:36567356 Incidental nodal irradiation in patients with esophageal cancer undergoing (chemo)radiation with 3D-CRT or VMAT. The extent of elective nodal irradiation (ENI) in patients undergoing definitive chemoradiotherapy (dCRT) for esophageal squamous cell carcinoma (ESCC) remains unclear. The aim of this dosimetric study was to evaluate the extent of incidental nodal irradiation using modern radiation techniques. A planning target volume (PTV) was generated for 30 patients with node-negative esophageal carcinoma (13 cervical/upper third, 7 middle third, 10 lower third/abdomen). Thereby, no elective nodal irradiation (ENI) was intended. Both three-dimensional conformal radiotherapy (3D-CRT) and volumetric-modulated arc therapy (VMAT) treatment plans (50 Gy in 25 fractions) were calculated for all patients. Fifteen nodal stations were contoured according to the definitions of the AJCC and investigated in regard to dosimetric parameters. Compared to 3D-CRT, VMAT was associated with lower dose distribution to the organs at risk (lower Dmean, V20 and V30 for the lungs and lower Dmean and V30 for the heart). For both techniques, the median Dmean surpassed 40 Gy in 12 of 15 (80%) nodal stations. However, VMAT resulted in significantly lower Dmeans and equivalent uniform doses (EUD) compared to 3D-CRT for eight nodal stations (1L, 2L, 2R, 4L, 7, 8L, 10L, 15), while differences did not reach significance for seven nodal station (1R, 4R, 8U, 8M, 10R, 16). For dCRT of ESCC, the use of VMAT was associated with significantly lower median (incidental) doses to eight of 15 regional lymph node areas compared to 3D-CRT. However, given the small absolute differences, these differences probably do not impair (regional) tumor control rates. abstract_id: PUBMED:32474127 Propensity score-based comparison of survival and radiation pneumonitis after definitive chemoradiation for esophageal cancer: Intensity-modulated radiotherapy versus three-dimensional conformal radiotherapy. Purpose: To compare survival outcomes and radiation pneumonitis (RP) between intensity-modulated radiotherapy (IMRT) and three-dimensional conformal radiotherapy (3DCRT) in patients with esophageal cancer (EC) who underwent definitive chemoradiation therapy (CRT). Methods: Clinical characteristics and dose-volume histogram parameters were collected for 388 EC patients who received definitive CRT with either IMRT (n = 297) or 3DCRT (n = 91) from 2010 through 2017. Dosimetric parameters, survival end-points, and symptomatic RP (grade ≥2) were compared between groups. Propensity score matching (PSM) was performed to balance potential confounding factors. Univariate and multivariate logistic regression analyses were applied to identify predictors of RP. Results: Compared with 3DCRT, IMRT was significantly associated with better overall survival (OS; P = 0.001), progression-free survival (PFS; P = 0.008), and distant metastasis-free survival (P = 0.011), but not with locoregional failure-free survival (P = 0.721). Moreover, IMRT demonstrated a remarkably lower risk of RP than 3DCRT (5.4% vs 23.1%, P &lt; 0.001). PSM analysis further confirmed the clinical benefit of IMRT. In the matched cohort, radiation modality was independently correlated with OS and PFS. On multivariate analysis, smoking history (odds ratio [OR]: 4.225, P = 0.002), primary tumor length (OR: 2.764, P = 0.049), radiation modality (OR: 10.760, P &lt; 0.001), planning target volume (OR: 1.004, P &lt; 0.001), and lung V20 (OR: 1.286, P = 0.002) were found to be significant predictors of RP. Conclusions: Compared with 3DCRT, IMRT was associated with more favorable survival and a reduced risk of RP after definitive CRT, supporting the routine use of IMRT for EC. abstract_id: PUBMED:36646953 Long-term results of chemoradiotherapy with elective nodal irradiation for resectable locally advanced esophageal cancer in three-dimensional planning system. Background: We evaluated the long-term results of definitive chemoradiotherapy (CRT) with elective nodal irradiation (ENI) using a three-dimensional (3D) planning system for resectable, locally advanced esophageal squamous cell carcinoma (LA-ESCC). Methods: This retrospective study included 65 patients with LA-ESCC who started CRT between 2006 and 2017. Patients with Stage I-IV LA-ESCC according to the Union for International Cancer Control TNM classification (eighth edition) were included. In stage IV, only supraclavicular lymph node (LN) metastasis was included. All patients received radiotherapy with ENI and concurrent chemotherapy with platinum and 5-fluorouracil. Results: The median age of the patients was 70 years (range 52-83 years). Stage I, II, III, and IV diseases were observed in 3 (5%), 28 (43%), 22 (34%), and 12 patients (18%), respectively. The median prescription dose was 66 Gy (range 50.4-66 Gy). The median follow-up period for the survivors was 71 months (range 8-175 months). The 5-year overall survival (OS) and progression-free survival rates were 54 and 43%, respectively. The 5-year OS rates for stages I-II and III-IV were 67 and 42%, respectively. Recurrence occurred in 29 patients (45%), and recurrence of regional LNs only occurred in 2 patients (3%). Grade 3 or higher late adverse events were observed in 8 patients (12%). Grade 5 heart failure occurred in two patients (3%); both had cardiovascular disease before treatment. Conclusion: The long-term results of definitive CRT with ENI for resectable LA-ESCC were favorable. ENI with a 3D planning system may reduce regional LN recurrence and late adverse events. abstract_id: PUBMED:24438695 Concurrent three-dimensional conformal radiotherapy and chemotherapy for postoperative recurrence of mediastinal lymph node metastases in patients with esophageal squamous cell carcinoma: a phase 2 single-institution study. Aim: The aim of this study was to evaluate the effects of radiotherapy plus concurrent weekly cisplatin chemotherapy on the postoperative recurrence of mediastinal lymph node metastases in esophageal cancer patients. Methods: Ninety-eight patients were randomly enrolled to receive either three-dimensional conformal radiotherapy alone (group A) or concurrent chemoradiotherapy (group B). A radiation dose of 62-70 Gy/31-35 fractions was delivered to the recurrent tumor. Furthermore, the patients in group B simultaneously received weekly doses of cisplatin (30 mg/m(2)), and the survival outcomes and toxic effects were compared. Results: The response rate of group B (91.8%) was significantly greater than that of group A (73.5%) (χ(2) = 5.765, P = 0.016). The 1- and 3-year survival rates of group B (85.7% and 46.9%, respectively) were also greater than those of group A (69.4% and 28.6%, respectively). However, there were no significant differences in the 5-year survival rates. The numbers of patients who died of distant metastases in groups A and B were 13 (26.5%) and 5 (10.2%), respectively (χ(2) = 4.356, P = 0.036). Acute radiation-related esophagitis and granulocytopenia in group B was frequent. However, intergroup differences in terms of late toxicity were not significant. Conclusions: Three-dimensional conformal radiotherapy (3DCRT) is a practical and feasible technique to treat the recurrence of mediastinal lymph node metastases of postoperative esophageal cancer. In addition, concurrent chemotherapy can increase local tumor control, decrease the distant metastasis rate, and increase the long-term survival rate. abstract_id: PUBMED:33123483 Stage III Esophageal Squamous Cell Carcinoma Patients With Three-Dimensional Conformal or Intensity-Modulated Radiotherapy: A Multicenter Retrospective Study. Aim: To evaluate long-term outcome and prognostic factors of stage III esophageal cancer after definitive radiotherapy using three dimensional conformal radiotherapy (3DCRT) or intensity-modulated radiotherapy (IMRT) techniques. Methods: Patients with T3N1M0/T4N0-1M0 esophageal squamous cell carcinoma (ESCC) treated with definitive radiotherapy from 2002 to 2016 in 10 Chinese medical centers were retrospectively analyzed. Overall survival (OS) and progression-free survival (PFS) rates were calculated. Prognostic factors were analyzed by Log-rank test and multivariable Cox model. Results: Survival data of 1,450 patients were retrospectively collected. With a median follow-up time of 65.9 months, 1-, 3-, and 5-year OS rates were 69.3, 36.7, and 27.7%, respectively, and PFS rates were 58.6, 32.7, and 27.4%, respectively. Univariable analyses showed that gender, age, lesion location, lesion length, largest tumor diameter, lymph node metastasis, gross tumor volume, EQD2, short-term response, and concurrent chemotherapy were prognostic factors for OS. Multivariable analyses showed that lesion location, T-classification, GTV size, EQD2, and short-term response to RT were independent prognostic factors for OS, and tumor diameter, GTV size, and short-term response were independent prognostic factors for PFS. Conclusions: This study demonstrated that definitive radiotherapy using 3DCRT and IMRT provides promising outcomes for locally advanced ESCC. abstract_id: PUBMED:19919886 Three-dimensional conformal radiation therapy for squamous cell carcinoma of the esophagus: a prospective phase I/II study. Purpose: A prospective phase I-II study was conducted to determine the tolerance and local control rate of three-dimensional conformal radiotherapy (3-DCRT) for esophageal squamous cell carcinoma (SCC). Methods And Materials: Thirty patients underwent 3-DCRT for thoracic esophageal SCC. PTV1 composed of a 1.2-1.5 cm margin lateral around GTV and 3.0 cm margin superior/inferior of GTV. PTV2 encompassed GTV with a margin of 0.5-0.7 cm. The dose for PTV1 was 50 Gy in 2 Gy daily fractions; PTV2 received a boost of 16 Gy in 2 Gy daily fractions to a total dose of 66 Gy. Results: Median follow-up time was 18 months. The most common acute toxicity was esophagitis in 63% of patients with RTOG grades 1-2, and in 3% with grade 3. RTOG grades 1-2 radiation pneumonitis developed in 27% of patients. One patient developed pulmonary fibrosis RTOG grade 2 and another patient experienced grade 3 pulmonary fibrosis. Two patients developed mild esophageal stricture requiring dilatation. Two-year overall survival, local disease progression-free rate, and distant metastasis-free rate were 69%, 36% and 56%, respectively. Conclusions: Although 3-DCRT to 66 Gy for esophageal SCC was well tolerated, the local control was disappointing. The result supports the use of chemoradiation as the standard care for esophageal SCC. abstract_id: PUBMED:26663710 Reduced toxicity with three-dimensional conformal radiotherapy or intensity-modulated radiotherapy compared with conventional two-dimensional radiotherapy for esophageal squamous cell carcinoma: a secondary analysis of data from four prospective clinical trials. We conducted a retrospective analysis to assess the toxicity and long-term survival of esophageal squamous cell carcinoma patients treated with three-dimensional conformal radiotherapy (3DCRT) or intensity-modulated radiotherapy (IMRT) versus conventional two-dimensional radiotherapy (2DRT). All data in the present study were based on four prospective clinical trials conducted at our institution from 1996 to 2004 and included 308 esophageal squamous cell carcinoma patients treated with 2DRT or 3DCRT/IMRT. Based on the inclusion and exclusion criteria, 254 patients were included in the analysis. Of these patients, 158 were treated with 2DRT, whereas 96 were treated with 3DCRT/IMRT. The rates of ≥Grade3 acute toxicity of the esophagus and lung were 11.5% versus 28.5% (P = 0.002) and 5.2% versus 10.8% (P = 0.127) in the 3DCRT/IMRT and 2DRT groups, respectively. The incidences of ≥Grade 3 late toxicity of the esophagus and lungs were 3.1% versus 10.7% (P = 0.028) and 3.1% versus 5.7% (P = 0.127) in the 3DCRT/IMRT and 2DRT groups, respectively. The 1-year, 3-year and 5-year estimated overall survival rates were 81%, 38% and 34% in the 3DCRT/IMRT group and 79%, 44% and 31% in the 2DRT group, respectively (P = 0.628). The 1-year, 3-year and 5-year local control rates were 88%, 71% and 66% in the 3DCRT/IMRT group and 84%, 66% and 60% in the 2DRT group, respectively (P = 0.412). Fewer incidences of acute and late toxicities were observed in esophageal squamous cell carcinoma patients treated with 3DCRT/IMRT compared with those treated with 2DRT. No significant survival benefit was observed with the use of 3DCRT/IMRT. Answer: The necessity of elective nodal irradiation (ENI) in the treatment of esophageal squamous cell carcinoma (ESCC) with three-dimensional conformal radiation therapy (3D-CRT) has been evaluated in several studies. According to a prospective study (PUBMED:20004527), the omission of ENI was not associated with a significant amount of failure in lymph node regions not included in the planning target volume. The study reported that local failure and distant metastases remained the predominant problems, with only a small percentage (8%) of patients developing isolated out-of-field nodal recurrence. Another study (PUBMED:36567356) found that with modern radiation techniques such as 3D-CRT and volumetric-modulated arc therapy (VMAT), the median dose to most nodal stations surpassed 40 Gy even without intentional ENI. This incidental nodal irradiation could potentially contribute to regional tumor control. However, the study also noted that VMAT was associated with significantly lower median doses to certain lymph node areas compared to 3D-CRT, which might not impair regional tumor control rates. A retrospective study (PUBMED:36646953) on definitive chemoradiotherapy (CRT) with ENI for resectable locally advanced ESCC showed favorable long-term results, suggesting that ENI with a 3D planning system may reduce regional lymph node recurrence and late adverse events. In summary, while some studies suggest that ENI may not be necessary due to the low rate of isolated nodal failures when omitted (PUBMED:20004527), other studies indicate that incidental nodal irradiation occurs with 3D-CRT and may contribute to control of subclinical disease (PUBMED:36567356). Additionally, the use of ENI in a 3D planning system has been associated with favorable outcomes and a reduction in regional lymph node recurrence (PUBMED:36646953). Therefore, the decision to use ENI in 3D-CRT for ESCC should be based on individual patient factors, tumor characteristics, and the potential benefits and risks of additional nodal irradiation.
Instruction: Dietary manipulation and energy compensation: does the intermittent use of low-fat items in the diet reduce total energy intake in free-feeding lean men? Abstracts: abstract_id: PUBMED:9806319 Dietary manipulation and energy compensation: does the intermittent use of low-fat items in the diet reduce total energy intake in free-feeding lean men? Objective: The effect of two low-fat treatments on ad libitum energy intake (EI) was investigated in five lean men living within a metabolic facility. Design: Diet was controlled over two consecutive periods of 12 d when either, i) all foods eaten or ii) only a single (lunch) meal, was manipulated to increase the fat content from 20, 40 to 60% of energy, and ad libitum EI measured. Results: All foods: EI increased from 8.6 (2.9 s.d.)-14.8 (3.1 s.d.) MJ/d and energy density (ED) from 4.1 (0.8 s.d.)-7.7 (1.6 s.d.) kJ/g as fat content increased from 20-60% (P &lt; 0.0001). There was no decrease in weight of food eaten across diets (P &gt; 0.05) and hence no energy compensation. Lunch meal: EI (20%:13.1 MJ/d, 40%:13.8 MJ/d, 60%:14.8 MJ/d) and ED (6.03 kJ/g, 5.89 kJ/g, 6.41 kJ/g) increased but not significantly across treatments (P &gt; 0.05). There was partial energy compensation on the low-fat 20% diet (due in part to compensatory increase in fat intake), but no compensation for the high-fat 60% diet. Conclusions: Changes in total dietary fat and ED result in concomitant changes in EI; low fat diets reducing EI. However, the dietary strategy of intermittent use of low- and high-fat items fail to significantly alter ED, and hence EI, in free-feeding lean men. Whilst there is a trend towards reduction in intake, manipulation of the fat content of a single meal may not be sufficient to induce significant long-term weight loss. abstract_id: PUBMED:10453438 A clinical trial to selectively change dietary fat and/or energy intake in women: the Women's Diet Study. Dietary fat and energy intake have been implicated in breast cancer etiology. To examine the relative importance of these dietary factors on markers of cancer risk in women, we designed an intervention trial to selectively decrease fat and/or energy intake in free-living, premenopausal women who were somewhat overweight. The study used a 2 x 2 factorial design to evaluate the independent and interactive effects of dietary fat and energy. The diets were nonintervention, low fat (15% of energy from fat, maintenance of energy intake), low energy (25% energy reduction), and combination low fat and low energy. We utilized an individualized counseling approach with self-selection of foods. Women on the low-fat and combination diets were asked to meet given daily goals for fat grams and food group exchanges, while women on the low-energy diet used only food group exchanges. Of the 113 premenopausal women randomized who were eligible for analysis, 43% were African-American. A total of 88 women completed the 12-week program, and adherence to the dietary goals was similar in both racial groups. Women on the low-fat diet were able to reduce dietary fat intake to 19% of energy by 4 weeks and to 17% by 12 weeks with a slight decrease in energy intake. Women on the low-energy diet met their energy reduction goals by four weeks while maintaining percentage of energy from fat. Women on the combination diet largely met their goals by four weeks as well. These data indicate that it is possible to selectively manipulate dietary fat and energy intake in women over a short period of time, which makes clinical studies on the relative effects of these two dietary variables on cancer risk biomarkers readily feasible. abstract_id: PUBMED:29030250 Fructo-oligosaccharides reduce energy intake but do not affect adiposity in rats fed a low-fat diet but increase energy intake and reduce fat mass in rats fed a high-fat diet. The ingestion of low or high lipid diets enriched with fructo-oligosaccharide (FOS) affects energy homeostasis. Ingesting protein diets also induces a depression of energy intake and decreases body weight. The goal of this study was to investigate the ability of FOS, combined or not with a high level of protein (P), to affect energy intake and body composition when included in diets containing different levels of lipids (L). We performed two studies of similar design over a period of 5weeks. During the first experiment (exp1), after a 3-week period of adaptation to a normal protein-low fat diet, the rats received one of the following four diets for 5weeks (6 rats per group): (i) normal protein (14% P/E (Energy) low fat (10% L/E) diet, (ii) normal protein, low fat diet supplemented with 10% FOS, (iii) high protein (55%P/E) low fat diet, and (iv) high protein, low fat diet supplemented with 10% FOS. In a second experiment (exp2) after the 3-week period of adaptation to a normal protein-high fat diet, the rats received one of the following 4 diets for 5weeks (6 rats per group): (i) normal protein, high fat diet (35% of fat), (ii) normal protein, high fat diet supplemented with 10% FOS, (iii) high protein high fat diet and (iv) high protein high fat diet supplemented with 10% FOS. In low-fat fed rats, FOS did not affect lean body mass (LBM) and fat mass but the protein level reduced fat mass and tended to reduce adiposity. In high-fat fed rats, FOS did not affect LBM but reduced fat mass and adiposity. No additive or antagonistic effects between FOS and the protein level were observed. FOS reduced energy intake in low-fat fed rats, did not affect energy intake in normal-protein high-fat fed rats but surprisingly, and significantly, increased energy intake in high-protein high-fat fed rats. The results thus showed that FOS added to a high-fat diet reduced body fat and body adiposity. abstract_id: PUBMED:32191158 Nutritional Intake and Energy Availability of Collegiate Distance Runners. Objective: Research investigating the dietary habits of distance runners has presented varying results. Proper dietary intake appears to enhance distance running performance and low dietary intake may impact health. The purpose of this investigation was to perform a comprehensive evaluation of nutrient intake of collegiate distance runners with comparison to recommendations for athletes.Methods: Twenty-one men (Age: 19.6 ± 1.2 years; height: 177.1 ± 5.7 cm; body mass: 65.7 ± 4.6 kg; body fat: 15.5 ± 2.2%) and 20 women (Age: 20.2 ± 1.7 years; height: 162.9 ± 6.6 cm; body mass: 53.7 ± 6.5 kg; body fat: 23.3 ± 3.6%) volunteered to participate in the investigation. Energy intake was derived from the Block Food Frequency Questionnaire. Energy availability was calculated by subtracting exercising energy expenditure from daily energy intake, divided by bone free lean mass and fat-free mass. Macronutrient and micronutrient consumption were compared with the appropriate dietary reference intake values, U.S. Dietary Guidelines, or standards recommended for endurance athletes.Results: Dietary intake for the men was 2,741.0 ± 815.2kilocalories and for the women was 1,927.7 ± 638.2kilocalories. A majority of the runners (73%) consumed less than recommended levels of carbohydrates. All men and 75% of women met or exceeded the recommended daily protein intake. Fifty percent of women and 24% of men did not meet the recommended daily allowance for calcium. Ninety-five percent of the runners did not meet the RDA for vitamin D. All the men and 75% of the women met the RDA for iron intake, with 24 of the runners taking an iron supplement. Eight men and 10 women did not meet the recommended intake for potassium.Conclusion: The dietary intake in this group of distance runners is below that necessary for the level of energy expended in their training. Carbohydrate intake is below the recommended amount for endurance athletes, and the calcium and vitamin D intake may not be favorable for bone health in this group of distance runners. abstract_id: PUBMED:9972184 Impact of adopting lower-fat food choices on energy and nutrient intakes of American adults. Objective: To evaluate the energy and nutrient intake of free-living men and women who choose foods consistent with different fat-reduction strategies. Design: For each year of the Continuing Survey of Food Intake by Individuals from 1989 through 1991, food codes were used to sort respondents by type of milk; type of meats; and type of cheese, yogurt, salad dressing, cake, and pudding (i.e., full-fat or fat-modified products) consumed. Subjects: A nationally representative sample of 3,313 men and 3,763 women who completed 3-day intake records and consumed either a reduced-fat or full-fat food from at least 1 of the 3 fat-reduction strategy categories. Statistical Analysis Performed: Analysis of variance with the Scheffé test was used to analyze differences in energy and nutrient intake between exclusive users, mixed users, and nonusers of each strategy or combined strategies. Results: Regardless of fat-reduction strategy, men and women who used them reported significantly lower intakes of total fat (up to 18 g lower), saturated fat (up to 12 g lower), cholesterol (up to 75 mg lower) and energy compared with nonusers. Exclusive users of single strategies met or approached recommendations of the National Cholesterol Education Program for total fat, saturated fat, and cholesterol intake; micronutrient intake varied depending on the strategy used. Skim milk users had the most favorable micronutrient intake, whereas lean meat users reported inadequate intake of zinc (men 63% and women 59% of the Recommended Dietary Allowances [RDAs]) and female users of fat-modified products reported inadequate intakes of vitamin E (64% of RDA) and zinc (65% of RDA). Multiple-strategy users achieved National Cholesterol Education Program goals and reported adequate micronutrient intakes and significantly lower energy intake. Mixed users of fat-modified products compared with nonusers of any fat-modified products had adequate micronutrient intake and lower intakes of total fat (32% vs 36% of energy for men and 32% vs 35% of energy for women) and saturated fat (11% vs 13% of energy for men and 11% vs 12% of energy for women). In addition, nonusers of any fat-modified strategy had the highest cholesterol and energy intake and the lowest intake of many micronutrients. Applications: A variety of fat-reduction strategies can be implemented to reduce energy, total fat, saturated fat, and cholesterol intake. Some of the strategies were associated with an inadequate micronutrient intake, so additional dietary guidance is needed to ensure that all nutrient requirements are met. Furthermore, people who do not use any fat-reduction strategy or those who exclusively use lean meats or fat-modified products would benefit from understanding how to balance their food choices. abstract_id: PUBMED:8737167 Energy density and its role in the control of food intake: evidence from metabolic and community studies. This review discusses the role that dietary composition and energy density play in the control of eating behaviour. The effect of dietary manipulations of fat and carbohydrate on energy intake remains controversial. We suggest this to be largely a consequence of different study designs. When low-fat foods are included in the diet and thus only some items manipulated subsequent food choice commonly ensures compensation and energy intake remains constant. However, when all items are manipulated and macronutrient composition fixed, an alteration in the energy density results in a parallel change in energy intake and there is no energy compensation. In addition, we hypothesise that hyperphagia on high-fat diets is a consequence of a high energy density rather than fat content per se. Independent of fat content, low energy dense diets generate greater satiety than high energy dense diets, suggesting that an important regulatory signal may be the weight or volume of food consumed. Epidemiological studies confirm that energy intake increases with energy density and thus weight loss may be best achieved on a low energy dense diet. Although the use of low-fat items may not reduce intake during covert manipulation, it may be successful during periods of deliberate dieting, providing that food-substitutions are not counter-balanced by other high density items. abstract_id: PUBMED:34748116 Long-term intake of total energy and fat in relation to subjective cognitive decline. Diet is one of the modifiable risk factors for cognitive decline. However, human studies on total energy intake and cognitive function have remained limited and studies on fat intake and cognitive decline have been inconclusive. We aimed to examine prospectively the associations between long-term intakes of total energy and fat with subsequent subjective cognitive decline (SCD). A total of 49,493 women from the Nurses' Health Study and 27,842 men from the Health Professionals Follow-up Study were followed for over 20 years. Average dietary intake was calculated based on repeated food frequency questionnaires (SFFQs), and Poisson regression was used to evaluate associations. Higher total energy intake was significantly associated with greater odds of SCD in both cohorts. Comparing the highest with lowest quintiles of total energy intake, the pooled multivariable-adjusted ORs (95% CIs) for a 3-unit increment in SCD, corresponding to poor versus normal SCD, was 2.77 (2.53, 2.94). Each 500 kcal/day greater intake of total energy was associated with 48% higher odds of SCD. Intakes of both total fat and total carbohydrate appeared to contribute to the positive association between total energy intake and SCD although for the same percent of energy, the association was stronger for total fat. In conclusion, higher intakes of total energy, total fat, and total carbohydrate were adversely associated with SCD. Whether these associations are causal is unclear and deserves further investigation. abstract_id: PUBMED:10232624 Energy density but not fat content of foods affected energy intake in lean and obese women. Background: Studies have shown that energy intake increases when both the fat content and energy density of the entire diet increases. When the fat content and energy density vary independently of one another, however, energy density, but not fat content, influences intake. Objective: The present study examined whether energy intake in lean and obese women is affected when either the energy density or the fat content of a portion of the diet is manipulated and palatability is held constant. Design: In a within-subjects design, 17 lean and 17 obese women consumed meals in the laboratory for four, 4-d test periods. In 3 of these test periods the energy density (4.4 and 6.7 kJ/g) or the fat content (16% and 36% of energy) of compulsory entrees representing 50% of each subject's usual energy intake was manipulated. Additional self-selected foods were consumed ad libitum at meals and as snacks. Results: There were no systematic differences in palatability of the manipulated foods across conditions. Obese and lean participants responded similarly to the dietary manipulations. Intake of self-selected foods at meals was reduced significantly by 16% for both lean and obese subjects in the low- compared with the high-energy-density condition. The fat content of the compulsory foods had no significant effect on energy intake. Ratings of hunger did not differ between diets. Conclusion: These results indicate that when a portion of the diet was manipulated, the energy density, but not the fat content, of the foods affected total energy intake at meals in both lean and obese women. abstract_id: PUBMED:27413124 Probable insomnia is associated with future total energy intake and diet quality in men. Background: Insomnia is associated with several adverse health outcomes. Small clinical studies have suggested that an inferior nutrition status is a potential explanation, but to our knowledge, this possibility has not been examined in a large-scale, population-based cohort study. Objective: We examined whether individuals with probable insomnia and individual insomnia symptoms had greater energy intake and a lower diet quality as assessed with the use of the Alternate Healthy Eating Index (AHEI) 2 y later. Design: A cohort study of 15,273 US men aged 58-93 y who were free of cancer, cardiovascular diseases, and diabetes and were participating in the Health Professionals Follow-Up Study reported information on insomnia symptoms in 2004. Dietary intake was assessed with the use of a food-frequency questionnaire in 2002 and 2006. We calculated the adjusted mean differences of total energy intake in 2006 and the AHEI-component scores and their 95% CIs between subjects with and without probable insomnia in 2004 and also across categories for each insomnia symptom while adjusting for related covariates. Results: After dietary intake in 2002, major chronic conditions, and other potential confounders were controlled for, men with probable insomnia had a mean higher consumption of 35.8 kcal/d (95% CI: 17.4, 54.1 kcal/d) and had lower scores in 3 individual AHEI components (trans fat, vegetables, and sodium), which denoted higher consumption of trans fat and sodium and lower intake of vegetables (P ≤ 0.01 for all). For individual insomnia symptoms, nonrestorative sleep and a difficulty maintaining sleep were associated with higher energy intake (P-trend ≤ 0.007 for both). A similar trend was observed in men who had difficulty initiating sleep (P-trend = 0.07). We also observed a significant association between the difficulty of initiating sleep and a lower AHEI score 2 y later (P-trend = 0.004). Conclusion: Probable insomnia is associated with higher intakes of total energy, trans fat, and sodium and lower intake of vegetables. abstract_id: PUBMED:2393005 Diet composition, energy intake, and exercise in relation to body fat in men and women. This study examined the relationships among body fat, diet composition, energy intake, and exercise in adults. Male (n = 107) and female (n = 109) adults aged 18-71 y (36.6 +/- 1.0 y, means +/- SEM) were hydrostatically weighed to determine body fat (5.7-49.0% of total weight). Diet and exercise behaviors were determined by use of a questionnaire. As body fat increased, percent of energy intake derived from fat increased (p less than 0.001) whereas the percent from carbohydrate decreased (p less than 0.001). There was no relationship between energy intake and adiposity although leanness and exercise were related (p less than 0.001). When subgroups of lean and obese subjects were compared, the lean subjects derived approximately 29% of their energy from fat and 53% from carbohydrate vs 35% and 46%, respectively, for the obese subjects. No differences were found between groups for energy intake but the lean individuals exercised more often than did the obese individuals. These data suggest that diet composition may play as important a role in fat deposition as do energy intake and lack of exercise. Answer: The study described in abstract PUBMED:9806319 investigated the effects of dietary manipulation on ad libitum energy intake in lean men. The study involved two low-fat treatments over consecutive 12-day periods, where either all foods eaten or only a single (lunch) meal was manipulated to increase the fat content from 20% to 60% of energy. The results showed that when all foods were manipulated, energy intake (EI) increased significantly as the fat content increased from 20% to 60%, with no decrease in the weight of food eaten and hence no energy compensation. When only the lunch meal was manipulated, there was partial energy compensation on the low-fat 20% diet, but no compensation for the high-fat 60% diet. The study concluded that changes in total dietary fat and energy density result in concomitant changes in EI, with low-fat diets reducing EI. However, the intermittent use of low- and high-fat items failed to significantly alter energy density and hence EI in free-feeding lean men. While there was a trend towards a reduction in intake, manipulation of the fat content of a single meal may not be sufficient to induce significant long-term weight loss. Therefore, the intermittent use of low-fat items in the diet does not appear to reduce total energy intake in free-feeding lean men (PUBMED:9806319).
Instruction: Diagnosing a patent foramen ovale in children: is transesophageal echocardiography necessary? Abstracts: abstract_id: PUBMED:21127297 Diagnosing a patent foramen ovale in children: is transesophageal echocardiography necessary? Background And Purpose: transesophageal echocardiography (TEE) is the gold standard for the diagnosis of a patent foramen ovale in adults. In children, acoustic windows on transthoracic echocardiography (TTE) are better than in adults; thus, an invasive TEE may not be necessary. Our goal was to assess the validity of TTE with agitated saline injection for the diagnosis of a patent foramen ovale in children using TEE as the gold standard. Methods: fifty consecutive pediatric patients &gt;1 year of age referred for TEE were prospectively enrolled. Imaging included 2-dimensional, color Doppler, and agitated saline contrast injections with and without Valsalva by TTE followed by TEE. Interpreters of the TTE were blinded to TEE results. Studies were categorized as "inconclusive" if the TTE images were inadequate for definitive diagnosis by the blinded interpreter. Results: TTE results were considered conclusive in 43 of 50 (86%) patients. Among the 43 conclusive studies, the 2 modalities disagreed in 1 patient. TTE had a positive predictive value of 100%, negative predictive value 97%, sensitivity of 88%, and specificity of 100% for detecting a patent foramen ovale. Conclusions: TTE with agitated saline injection is diagnostic for the assessment of atrial septal integrity in the majority of children. abstract_id: PUBMED:18569955 Is transesophageal echocardiography still necessary to exclude patent foramen ovale? Objectives: Current guidelines still recommend transesophageal echocardiography (TEE) as reference method to diagnose interatrial shunts. The aim was to test the accuracy of high-end transthoracal echocardiography (TTE) to exclude inter-atrial shunts. Methods: Prospective TTE and TEE study with second harmonic imaging to determine left-to-right shunt (L/R) by both colour Doppler or R/L by contrast echocardiography in patients with unexplained cerebrovascular incidents or newly detected valvular or myocardial disease. Results: An inter-atrial shunt was diagnosed in 200 of 438 analyzed patients (117 males). Colour Doppler echocardiography visualized a shunt in 67 patients (34%) on TTE vs. 84 (42%) patients on TEE (p &lt;0.0001). However injection of agitated blood with a valsalva maneuver detected 190 (95%) interatrial shunts by both TTE and TEE, but 10 shunts (5%) only by TTE. Conclusions: Our study shows that patent foramen ovale can safely be demonstrated with high-end transthoracic contrast- echocardiography. If additional studies confirm our results, TTE has the potential to become the method of choice in the diagnosis of PFO. abstract_id: PUBMED:28176181 Transthoracic contrast echocardiography using vitamin B6 and sodium bicarbonate as contrast agents for the diagnosis of patent foramen ovale. To evaluate the utility of transthoracic contrast echocardiography (cTTE) using vitamin B6 and sodium bicarbonate as contrast agents for diagnosing right-to-left shunt (RLS) caused by patent foramen ovale (PFO) compared to that of transesophageal echocardiography (TEE). We investigated 125 patients admitted to our neurology department with unexplained cerebral infarction and migraine. All patients underwent cTTE using vitamin B6 and sodium bicarbonate as contrast agents, after which they underwent transthoracic echocardiography. The Doppler signal was recorded during the Valsalva maneuver, and TEE examinations were performed. The feasibility, diagnostic sensitivity, and safety of cTTE and TEE for PFO recognition were compared. Evidence of PFO was found in 49 (39.20%) patients with cTTE, more than were detected with TEE (39, 31.20%) (χ2=5.0625, P=0.0244). cTTE had a sensitivity of 92.31% and a specificity of 84.88% for diagnosing PFO, showing high concordance with TEE for PFO recognition (κ=0.72). Further, results of a semi-quantitative evaluation of PFO-RLS by cTTE were better than those with TEE (Z=-2.011, P=0.044). No significant adverse reaction was discovered during cTTE examination. cTTE using vitamin B6 and sodium bicarbonate as contrast agents has relatively good sensitivity and specificity for diagnosing RLS caused by PFO when compared with those for TEE. Using vitamin B6 and sodium bicarbonate as contrast agents to perform cTTE is recommended for detecting and diagnosing the PFO due to its simplicity, non-invasive character, low cost, and high feasibility. abstract_id: PUBMED:10909499 Diagnosis of cerebral embolism by transesophageal echocardiography Cerebrovascular events have high mortality and morbidity, especially in the elderly. Ischemia is the main cause and 30% of the ischemic events are embolic and of cardiac origin. The clinical picture is not always typical of the type of stroke, but diagnosis of the mechanism of the event determines treatment. Transesophageal echocardiography (TEE) is a sensitive procedure more appropriate for diagnosing emboli of cardiac origin than transthoracic echocardiography (TTE). We therefore compared TEE and TTE in the determination of the source of emboli in 65 patients with ischemic stroke but without significant atherosclerotic changes in their carotid arteries, and compared these findings with those in 50 patients without stroke. 68% of the patients had potential sources of emboli according to TEE, compared to only 15% according to TTE. In the control group only 24% had potential sources of emboli by TEE. The findings were: clots in the left atrium, severe aortic atheroma, patent foramen ovale with paradoxical shunt, spontaneous echocardiography contrast, vegetations and mitral valve prolapse. The study showed that TEE is better than TTE in detecting the etiology of embolic stroke in those with normal carotid arteries, thus determining appropriate management. abstract_id: PUBMED:34075451 The role of cardiac CT throughout the full cardiac cycle in diagnosing patent foramen ovale in patients with acute stroke. Objectives: We explored the hypothesis that the diagnostic performance of cardiac computed tomography (CT) throughout the full cardiac cycle would be superior to single-phase CT and comparable to transesophageal echocardiography (TEE) in diagnosing patent foramen ovale (PFO). Methods And Results: From May 2011 to April 2015, 978 patients with stroke were diagnosed with PFO by TEE. In patients with stroke, cardiac CT was performed if the patients had more than two cardiovascular risk factors. We prospectively enrolled 70 patients with an indication for cardiac CT. Cardiac CT images were reconstructed at 10% increments of the R-R interval. The sensitivity of cardiac CT throughout the full cardiac cycle in diagnosing PFO was compared to that for TEE and single-phase cardiac CT. To evaluate the specificity of cardiac CT, we analyzed patients without PFO confirmed by TEE who underwent cardiac CT within 1 month of pre-cardiac surgery. Sixty-six patients with cardiac CT and TEE were included in the final analysis. Throughout the full cardiac cycle, cardiac CT had a sensitivity of 89.4% and a specificity of 92.3% in diagnosing PFO, compared to TEE as a reference. PFO was primarily detected in the 60% and 70% intervals in 10-phase reconstructed images. The sensitivity of PFO diagnosis with cardiac CT was 81.8% when analyzing both the 60% and 70% intervals instead of the full cardiac cycle. Conclusion: Cardiac CT throughout the full cardiac cycle outperforms single-phase cardiac CT in detecting PFO. Cardiac CT can be used as an alternative method to TEE for detecting PFO in stroke patients. Key Points: • Throughout the full cardiac cycle, cardiac computed tomography (CT) had a sensitivity of 89.4% and a specificity of 92.3% in diagnosing patent foramen ovale (PFO), compared to transesophageal echocardiography. • The sensitivity of diagnosing patent foramen ovale with cardiac CT was 81.8% when analyzing 60% and 70% R-R intervals instead of the full cardiac cycle. • Cardiac CT with retrospective electrocardiographic gating throughout the full cardiac cycle can increase the detectability of PFO, compared to single-phase cardiac CT. abstract_id: PUBMED:24028319 Determination of the optimum number of cardiac cycles to differentiate intra-pulmonary shunt and patent foramen ovale by saline contrast two- and three-dimensional echocardiography. Introduction: Patent foramen ovale (PFO) and intra-pulmonary shunt (IPS) are potential causes of stroke. The most optimum cardiac cycle cutoff for bubbles to appear in the left heart on saline contrast transthoracic echocardiography (TTE) as criteria to differentiate the 2 entities is unknown. Methods: Ninety-five adult patients had saline contrast transesophageal echocardiography (TEE), two-dimensional (2D) and 3DTTE. Sensitivity and specificity of each cardiac cycle as cutoff to differentiate a PFO and IPS were obtained. Results: Transesophageal echocardiography showed IPS in 28 and PFO in 15 patients. If bubbles appeared in the left heart within the first 4 cardiac cycles (the 4th cardiac cycle rule) as compared to alternate cutoffs, a PFO was most accurately diagnosed by both 2D and 3DTTE. Bubbles appearing at or after the 5th cardiac cycle most accurately determined an IPS. 3D versus 2DTTE had a trend for a higher sensitivity (61% vs. 36%, P = 0.06), but similar specificity (94% vs. 91%) for IPS. Accuracy of 3DTTE was 84% and 2DTTE was 75% (P = 0.08) for IPS. For PFO, 2DTTE sensitivity (87%) and specificity (98%) did not differ (P = NS) from that of 3DTTE sensitivity (73%) and specificity (100%). Conclusions: This study demonstrates for the first time that the 4th cardiac cycle rule differentiates PFO and IPS most optimally by 2D and 3DTTE. 3DTTE appears to have higher sensitivity for diagnosing IPS. These data suggest that 3DTTE is preferable when IPS is to be diagnosed. Both methods are similar for diagnosing PFO. abstract_id: PUBMED:9862095 Role of echocardiography in systemic arterial embolism. A review with recommendations. The ability of echocardiography to diagnose sources of embolism and the role of the examination in the prediction of thromboembolism are reviewed. In addition, the yield of transthoracic (TTE) and transoesophageal echocardiography (TEE) is analysed in patients with suspected embolism and guidelines are proposed for performing echocardiography in this setting. In general, echocardiography is reliable for diagnosing sources of embolism and this applies in particular to TEE in the case of atrial, valvular, and aortic abnormalities. However, the method is useful for predicting embolism in a few cases only. There is a substantial risk in the event of mobile or protruding thrombi, but screening for these and other markers of thromboembolism seems to be unproductive in most groups of risk patients. Yet, in the presence of atrial fibrillation, echocardiography may be helpful in defining patients with an otherwise normal heart and low risk of embolism--and in defining the relatively rare patient with a clinically low-risk profile but moderate-to-severe left ventricular systolic dysfunction and a high risk of embolism. TEE-guided conversion of atrial fibrillation without weeks of preceding anticoagulation may prove useful, after further investigation. The risk of embolism in relation to the size and mobility of valvular vegetations has remained controversial. In patients with suspected recent embolism, TTE results in less than 5% new therapeutic consequences. In those with a normal TTE, the yield of TEE seems to be equally low. We therefore recommend a selective strategy: TTE and TEE can be omitted when a cardiac source of embolism appears from the clinical setting and in most patients with an obvious predisposition to cerebrovascular disease. However, in the latter cases TTE should be performed if indicated by the clinical situation, e.g. in the presence of fever and murmur. TTE is also recommended when there are no obvious markers of primary vascular disease. To preclude very rare sources of embolism (e.g. atrial thrombi despite sinus rhythm), supplementary TEE is recommended in younger patients in whom primary vascular disease is very unlikely. The diagnosis by TEE of common conditions such as atrial septal aneurysms and patent foramen ovale cannot, however, be taken as proof of the mechanism of a systemic arterial occlusive event; thus it is difficult to change therapy on the basis of such diagnoses. abstract_id: PUBMED:12735591 Timing of transesophageal echocardiography in diagnosing patent foramen ovale in patients supported with left ventricular assist device. Left ventricular assist devices unload the left ventricle and decrease left atrial pressure. This hemodynamic change may cause a right to left atrial shunt and hypoxemia in patients with patent foramen ovale. We prospectively studied the best time for performing diagnostic transesophageal echocardiography in left ventricular assist device patients. Intraoperative transesophageal echocardiography was performed in 14 patients before cardiopulmonary bypass was initiated and after left ventricular assist device was implanted. No patent foramen ovale was detected when transesophageal echocardiography was done before bypass, but a patent foramen ovale was found in 3 patients when transesophageal echocardiography was performed after left ventricular assist device was activated. Patent foramen ovale was confirmed by inspection in all three patients and surgically closed during the same procedure. There were no patent foramen ovale closure-related complications. abstract_id: PUBMED:12664819 Role of intracardiac echocardiography in atrial septal abnormalities. The purpose of this review is to outline the feasibility of performing a comprehensive atrial septal examination from the internal confine of the right atrium and to evaluate the advantages resulting by intracardiac echocardiography (ICE) evaluation of atrial septal morphology as well as pathophysiology. In this setting, ICE indications have not yet been established because ICE is a relatively new technique that is still evolving. Notwithstanding, during catheter-based secundum atrial septal defect and patent foramen ovale closure, ICE seems useful for diagnosing cardiac abnormalities instantly, guiding and monitoring all stages of the procedures, and assessing proper selection and optimal device placement. Moreover, ICE provides solid anatomical criteria to diagnose fenestrated atrial septal aneurysm, interatrial communications such as ostium primum and sinus venosus defects, partial anomalous pulmonary venous connection, and lipomatous hypertrophy of atrial septum. abstract_id: PUBMED:3085620 Diagnosis of atrial septal defects by contrast echocardiography with sensitivity increased by coughing The aim of this study was to assess the sensitivity of contrast echocardiography in the diagnosis of permanent (atrial septal defects, ASD) and transient (patent foramen ovale, PFO) interatrial communications under basal conditions, after Valsalva manoeuvres and coughing. Sixty-four patients suspected of having defects of the interatrial septum were studied. The results of echocardiography were compared with those of cardiac catheterisation, the classical method of reference. The sensitivity of contrast echocardiography was significantly improved in the diagnosis of PFO by the cough test (100%) compared with contrast echocardiography during spontaneous respiration (55%). Similarly, the cough test increased the sensitivity of contrast echocardiography in the diagnosis of ASD (96% compared to 83%). The specificity was good in both cases, about 90%. The cough test was a better method (98%) than Valsalva manoeuvres (59%) for the potentiation of interatrial right-to-left shunts. The results of echocardiography and catheterisation explain the mechanism of the right-to-left shunt during spontaneous respiration, Valsalva manoeuvres and coughing. The passage of the microbubbles from the right to the left atrium occurs during early systole when the atrioventricular valves are closed. The right-to-left shunt is potentiated by provocative manoeuvres. The demonstration of a right-to-left shunt by contrast echocardiography therefore indicates the presence of an interatrial communication; our results show that this non-invasive technique is a reliable method of diagnosing ASD and PFO. Answer: Transesophageal echocardiography (TEE) is not always necessary for diagnosing a patent foramen ovale (PFO) in children. A study assessing the validity of transthoracic echocardiography (TTE) with agitated saline injection for the diagnosis of PFO in children found that TTE was diagnostic for the assessment of atrial septal integrity in the majority of children. The study reported that TTE had a positive predictive value of 100%, a negative predictive value of 97%, a sensitivity of 88%, and a specificity of 100% for detecting PFO when using TEE as the gold standard. This suggests that TTE with agitated saline injection is a reliable non-invasive alternative to TEE in the pediatric population (PUBMED:21127297). Furthermore, other studies have shown that high-end transthoracic contrast echocardiography can safely demonstrate PFO, suggesting that TTE has the potential to become the method of choice in the diagnosis of PFO (PUBMED:18569955). Additionally, transthoracic contrast echocardiography (cTTE) using vitamin B6 and sodium bicarbonate as contrast agents has been found to have relatively good sensitivity and specificity for diagnosing right-to-left shunt (RLS) caused by PFO when compared with TEE (PUBMED:28176181). In conclusion, while TEE has been the reference method for diagnosing PFO, recent studies indicate that TTE, particularly with the use of contrast agents, can be a valid and less invasive alternative for diagnosing PFO in children, potentially reducing the need for TEE in many cases.
Instruction: Particle repositioning manoeuvre in benign paroxysmal positional vertigo: is it really safe? Abstracts: abstract_id: PUBMED:29526580 Home particle repositioning maneuver to prevent the recurrence of posterior canal BPPV. Objective: To check the value of home particle repositioning maneuver in the prevention of the recurrence of posterior canal benign paroxysmal positional vertigo (pc-BPPV). Methods: In this study, patients diagnosed as unilateral posterior canal BPPV were selected following an accurate evaluation using video goggle VNG system. All patients were managed by particle repositioning maneuver (PRM). Patients were instructed to do home PRM once weekly for five years. Then, they were divided into two groups (according to choice of patient to do PRM). The first group (control group) consisted of 144 patients who did not do home PRM; whereas the second group (study group) included 165 patients who performed home PRM. All patients (control &amp; study groups) were followed up every four months for five years. Results: The study found out that the recurrence rate of pc-BPPV in control group was 33 patients in the first year (27.2%), 11 patients in second year (9%), 5 patients in third year (4%), 3 patients in fourth year (2.5%) and 3 patients in fifth year (2.5%). The recurrence of pc-BPPV in the treated side (study group) of patients was reported as 5 patients in the first year (3.5%), 3 patients in the second year (2%), 2 patients in the third year (1.4%), 2 patients in the fourth year (1.4%), and 1 patient in the fifth year (0.7%). There was statistically significant difference between the control and the study groups regarding the recurrence rates in the first year follow up which was the highest in first four months. Conclusion: Home particle repositioning maneuver has the capacity to prevent the recurrence of pc-BPPV. It proved to be more successful and functional in minimizing the recurrence of the disease in the study than in the control group. Hence, home particle repositioning maneuver is highly recommended for one year at least in pc-BPPV. abstract_id: PUBMED:36514424 An Effective Home-Based Particle Repositioning Procedure for Posterior Canal Benign Paroxysmal Positional Vertigo (BPPV). Patients with benign paroxysmal positional vertigo (BPPV) find it difficult to visit the hospital many times for a standard Epley's maneuver performed only by a specialist. The aim of this study was to compare the efficacy of a home-based particle repositioning procedure (HBPRP) with the standard Epley's maneuver in treating patients with posterior canal BPPV. A prospective non-blinded randomized controlled study was conducted. Patients were randomized into two groups, where one group received the standard treatment and other received a new HBPRP. The vertigo scale, duration of nystagmus during Dix-Hallpike test and frequency of vertigo, were documented on first, second and third visits, with complications noted during the second and third visits. These parameters were compared between both the groups following the treatment, during all visits. The patients were randomized into 2 arms with 15 each. Those belonging to group 1 received Epley's maneuver and group 2 received HBPRP. There was no significant difference in the baseline characteristics of patients in both groups. Both groups of patients had significant improvement of symptoms at the end of the study. A comparison of both groups at 2nd and 3rd visits showed no differences in frequency of vertigo, reduction in vertigo scale and duration of nystagmus following Dix-Hallpike test between both groups. HBPRP is a safe and effective procedure and can be taught as a home-based treatment for patients diagnosed with posterior canal BPPV. abstract_id: PUBMED:10718439 Modified particle repositioning procedure. Objectives: To evaluate the efficacy of modifications to traditional particle repositioning maneuvers in the treatment of benign paroxysmal positional vertigo. Study Design: Prospective trial of 118 patients with cupolocanalithiasis of the posterior canal treated with three different canal-repositioning techniques. Methods: Results were compared with the maneuvers employed and the statistical importance of rotating patients by 360 degrees along their longitudinal axis and head shaking on reaching each single position were evaluated. Results: Treatment of patients with our maneuver, which, in comparison with traditional repositioning maneuvers, was modified by breaking the procedure up into seven positions and rotating patients by 360 degrees along their longitudinal axis, gives a higher, but not statistically significant, number of treatment successes (84.5%) than the traditional Parnes maneuver (60%) (P = .154); treatment of a third group of patients with our modified particle repositioning maneuver with the addition of head-shaking on reaching each single position gives a higher (95.6%), statistically significant number of treatment successes than traditional Parnes maneuver (P = .00011). Conclusions: The success rates achieved from modified particle repositioning maneuvers are statistically significant. Onset or persistence of dizziness, which patients frequently complain of after liberatory maneuvers, affects only 5.6% of the patients treated. This low incidence is statistically correlated to head-shaking. abstract_id: PUBMED:29544670 Analysis of non-posterior canal benign paroxysmal positional vertigo in patients treated using the particle repositioning chair: A large, single-institution series. Purpose: Benign paroxysmal positional vertigo (BPPV) involving the horizontal and superior semicircular canals is difficult to study due to variability in diagnosis. We aim to compare disease, treatment, and outcome characteristics between patients with BPPV of non-posterior semicircular canals (NP-BPPV) and BPPV involving the posterior canal only (P-BPPV) using the particle repositioning chair as a diagnostic and therapeutic tool. Methods: Retrospective review of patients diagnosed with and treated for BPPV at a high volume otology institution using the particle repositioning chair. Results: A total of 610 patients with BPPV were identified, 19.0% of whom had NP-BPPV. Patients with NP-BPPV were more likely to have bilateral BPPV (52.6% vs. 27.6%, p &lt; 0.0005) and Meniere's disease (12.1% vs. 5.9%, p = 0.02) and were more likely to have caloric weakness (40.3% vs. 24.3%, p = 0.01). Patients with NP-BPPV required more treatments for BPPV (average 3.4 vs. 2.4, p = 0.01) but did not have a significantly different rate of resolution, rate of recurrence, or time to resolution or recurrence than patients with posterior canal BPPV. Conclusions: Comparison of NP-BPPV and P-BPPV is presented with reliable diagnosis by the particle repositioning chair. NP-BPPV affects 19% of patients with BPPV, and these patients are more likely to have bilateral BPPV and to require more treatment visits but have similar outcomes to those with P-BPPV. NP-BPPV is common and should be part of the differential diagnosis for patients presenting with positional vertigo. abstract_id: PUBMED:30282147 Demographic characteristics and canalith repositioning efficacy in 907 patients with typical benign paroxysmal positional vertigo AbstractObjective:To review the demographic characteristics and canalith repositioning efficacy in 907 patients with typical benign paroxysmal positional vertigo(BPPV). Method: The demographic characteristics of 907 patients with typical BPPV were statistically analyzed. According to the type of BPPV, patients were treated with the appropriate repositioning maneuver, and the clinical efficacy of repositioning maneuver was analyzed and summarized. Result: Nine hundred and seven patients of BPPV with typical nystagmus were elected in this study. 585 out of 907 were female and 322 out of were male, and the mean age was 53.10±14.25(13 to 89) years. The lesion located to the posterior semiCIrcular canal was 489 patients(53.9%), horizontal semiCIrcular canal was 312 patients(34.4%), anterior semiCIrcular was 63 patients(6.9%), and multiple semiCIrcular was 43 patients(4.8%). According to Kaplan-Meier survival analysis curve, the median cure time for the modified Semont and Epley repositioning maneuver in PC-BPPV groups was 3 days, and there was no significant difference in survival curves between the two repositioning maneuver. Meanwhile, the median cure time for Barbecue and Li horizontal canal quick repositioning maneuver groups was 3 days, and 1 day for Gufoni repositioning maneuver group in HC-BPPV groups. And there was no significant difference in survival curves among the three repositioning maneuver. In the AC-BPPV, 7 cases, 31 cases, 57 cases, 58 cases were cured with the Li anterior canal quick repositioning maneuver in the 1st day, the 3rd day, 1 week later, and 1 month later, and 5 cases lost to be follow-up. According to Kaplan-Meier survival analysis curve, the median cure time for the Li anterior canal quick repositioning maneuver in AC-BPPV groups was 3 days. Conclusion:Repositioning maneuver represents a simple, safe, rapid and effective approach to the treatment of BPPV. Therefore, repositioning maneuver should be choice for the BPPV treatment. abstract_id: PUBMED:15966476 Particle repositioning manoeuvre in benign paroxysmal positional vertigo: is it really safe? Objective: A prospective study to determine the safety of the particle repositioning manoeuvre (PRM) by analyzing the various complications of the procedure. Settings: Outpatient Department of Otorhinolaryngology, Nehru Hospital, Chandigarh, India. Methods: Thirty patients with the classic findings of benign paroxysmal positional vertigo (BPPV) were included in the study. Clinical symptoms prior to the procedure were noted. Twenty-nine of them were subjected to PRM, and postprocedural instructions were given to all patients. Various side effects during and following the procedure were recorded. They were classified into early and late based on the period and into major and minor based on severity. All patients were reviewed after 3 days, 7 days, and 1 month. Results: Of the 29 patients, 19 patients (65.52%) had heaviness in the head, with 11 each (37.93%) reporting nausea and imbalance and 9 (31.03%) reporting instability during the procedure. A major complication, asystole, was noted in one patient. The percentage of side effects remained more or less the same in the early phase following the procedure. Only 5 of 29 patients were entirely asymptomatic. Ninety percent were relieved of symptoms by the end of 7 days, with no major complication recorded. Conclusions: PRM is an easy, effective, and relatively safe procedure. The risk of major complications with PRM, such as arrhythmias and asystole, highlights the need to consider other management modalities for BPPV in certain medically unfit patients. abstract_id: PUBMED:19513312 Residual dizziness after successful repositioning treatment in patients with benign paroxysmal positional vertigo. Background And Purpose: Benign paroxysmal positional vertigo (BPPV) is the most common form of vertigo. Although the repositioning maneuver dramatically improves the vertigo, some patients complain of residual dizziness. We evaluated the incidence and characteristics of persistent dizziness after successful particle repositioning and the clinical factors associated with the residual dizziness. Methods: We performed a prospective investigation in 49 consecutive patients with confirmed BPPV. The patients were treated with a repositioning maneuver appropriate for the type of BPPV. Success was defined by the resolution of nystagmus and positional vertigo. All patients were followed up until complete resolution of all dizziness, for a maximum of 3 months. We collected data on the characteristics and duration of any residual dizziness and analyzed the clinical factors associated with the residual dizziness. Results: Of the 49 patients, 11 were men and 38 were women aged 60.4+/-13.0 years (mean +/-SD), and 30 (61%) of them complained of residual dizziness after successful repositioning treatment. There were two types of residual dizziness: continuous lightheadedness and short-lasting unsteadiness occurring during head movement, standing, or walking. The dizziness lasted for 16.4+/-17.6 days (range=2-80 days, median=10 days). A longer duration of BPPV before treatment was significantly associated with residual dizziness (p=0.04). Conclusions: Residual dizziness after successful repositioning was observed in two-thirds of the patients with BPPV and disappeared within 3 months without specific treatment in all cases. The results indicate that early successful repositioning can reduce the incidence of residual dizziness. abstract_id: PUBMED:8052079 Particle repositioning maneuver for benign paroxysmal positional vertigo. The recent demonstration of free-floating particles in the endolymph of the posterior semicircular canal in patients with benign paroxysmal positional vertigo (BPPV) has renewed interest in the physiology and treatment of this entity. The particle repositioning maneuver (PRM) relocates the free-floating particles from the posterior semicircular canal back into the utricle, relieving the patient of bothersome, often long-standing vertigo. This report represents a prospective study of 27 consecutive patients seen with a diagnosis of BPPV. Eighty-four percent of the patients treated with the particle repositioning maneuver who had no other associated pathology were cured or significantly improved with this new technique. Two patients who failed conservative management went on to surgical intervention with the posterior semicircular canal occlusion. The authors find the particle repositioning maneuver effective for many patients with benign positional vertigo and recommend it as the first-line treatment modality for BPPV. abstract_id: PUBMED:18525198 Head accelerations during particle repositioning manoeuvres. Benign paroxysmal positional vertigo (BPPV) due to canalithiasis can be treated with particle repositioning manoeuvres, which aim to evacuate trapped particles from the semicircular canals (SCC). The movement of particles within the SCC is affected by gravity as well as by the accelerations of the head during the manoeuvres. Moreover, as experienced by the particles, gravity is indistinguishable from an upward acceleration of the SCC in free space. We used a set of three orthogonal linear accelerometers to measure the net three-dimensional linear acceleration vector acting on the head during the Hallpike manoeuvre and Epley and Semont particle repositioning manoeuvres (which are used to treat posterior canal BPPV). The projection of the net acceleration vector onto the SCC planes showed that both the Epley and Semont manoeuvres approximated to stepwise, 360 degrees , backward rotations in the plane of the targeted posterior canal. Angular velocity measurements however showed that the rotational component during the central stages of these two manoeuvres is opposite in direction. A simple model of head rotations during particle repositioning manoeuvres was created which showed good agreement to the linear acceleration measurements. Analysis of modelled and measured data identified that speed of movement during the Semont manoeuvre should be critical to its clinical success. abstract_id: PUBMED:31446735 Long-term efficacy observation of BPPV repositioning maneuver Objective:The aim of this study is to observe and analyze the long-term follow-up of the recurrence of benign paroxysmal positional vertigo BPPV with repositioning maneuver, and to determine the long-term therapeutic effect of repositioning maneuver. Method:To collect 738 patients who were diagnosed with BBPV due to vertigo. Follow up by telephone, SMS and other contact methods to analyze and summarize the long-term follow up of the recurrence of patients who had been treated with the appropriate repositioning maneuver. Result:Of the 738 patients, 458 patients were followed up and 280 were lost to follow up. The total follow up rate was 62.06%. 458 patients were followed up with 314 patients without recurrence and 144 patients with recurrence. The total recurrence rate was 31.44%. Conclusion:Repositioning maneuver is the most commonly used way for the treatment of BPPV. It is safe, simple and effective. However, patients with BPPV who have been cured by manual reduction still have a risk of recurrence. Among patients with relapse, the proportion of patients who relapsed once after treatment was the highest, and the recurrence rate of women with BPPV was significantly higher than that of men. Answer: The particle repositioning maneuver (PRM) is generally considered a safe and effective treatment for benign paroxysmal positional vertigo (BPPV). However, there are some concerns regarding its safety due to potential complications. A prospective study aimed to determine the safety of PRM by analyzing various complications of the procedure. The study included 30 patients with classic findings of BPPV, and 29 of them underwent PRM. The results showed that during the procedure, 65.52% of patients experienced heaviness in the head, 37.93% reported nausea and imbalance, and 31.03% reported instability. A major complication, asystole, was noted in one patient. By the end of 7 days, 90% of patients were relieved of symptoms, with no major complications recorded (PUBMED:15966476). Despite these findings, the overall consensus in the literature is that PRM is a relatively safe procedure. For instance, a study of 27 consecutive patients with BPPV found that 84% of patients treated with PRM who had no other associated pathology were cured or significantly improved with this technique (PUBMED:8052079). Another study observed long-term efficacy and noted that while repositioning maneuver is safe, simple, and effective, there is still a risk of recurrence in patients with BPPV (PUBMED:31446735). Moreover, residual dizziness after successful repositioning treatment was observed in two-thirds of patients with BPPV in one study, but it disappeared within 3 months without specific treatment in all cases (PUBMED:19513312). This suggests that while PRM is effective, some patients may experience temporary discomfort or dizziness post-treatment. In conclusion, while PRM is widely regarded as a safe and effective treatment for BPPV, practitioners should be aware of potential complications and monitor patients accordingly. The risk of major complications, such as arrhythmias and asystole, although rare, indicates that alternative management strategies may be considered for certain medically unfit patients (PUBMED:15966476).
Instruction: Does a diagnosis of the metabolic syndrome provide additional prediction of cardiovascular disease and total mortality in the elderly? Abstracts: abstract_id: PUBMED:17437393 Does a diagnosis of the metabolic syndrome provide additional prediction of cardiovascular disease and total mortality in the elderly? The Dubbo Study. Objective: To assess whether a diagnosis of the metabolic syndrome (MetS) improves the prediction of cardiovascular disease or total mortality beyond that already provided by conventional risk factors. Design And Setting: A longitudinal cohort study conducted in Dubbo, New South Wales. Participants: 2805 men and women aged 60 years and older living in the community, first assessed in 1988-1989 and followed for 16 years. Main Outcome Measures: Coronary heart disease (CHD) events, ischaemic stroke events, and total mortality. Results: MetS was present in 31% of men and 34% of women. Crude CHD, ischaemic stroke, and total mortality rates were higher in the presence of MetS in men and women. In proportional hazards models that included conventional risk factors, but excluded variables used to define the presence of MetS, MetS was a significant predictor of CHD, stroke and total mortality. In men, the respective hazard ratios were 1.64 (95% CI, 1.37-1.96), 1.31 (95% CI, 0.97-1.77), and 1.53 (95% CI, 1.30-1.79). In women, the respective hazard ratios were 1.70 (95% CI, 1.43-2.02), 1.37 (95% CI, 1.04-1.82), and 1.35 (95% CI, 1.15-1.59). The use of MetS variables on an ordinal scale produced broadly similar conclusions. Conclusions: A diagnosis of MetS provides additional prediction of CHD events, stroke events, and total mortality beyond that provided by other conventional risk factors. abstract_id: PUBMED:29133071 Red blood cell distribution width is associated with mortality in elderly patients with sepsis. Introduction: RDW is a prognostic biomarker and associated with mortality in cardiovascular disease, stroke and metabolic syndrome. For elderly patients, malnutrition and multiple comorbidities exist, which could affect the discrimination ability of RDW in sepsis. The main purpose of our study was to evaluate the prognostic value of RDW in sepsis among elderly patients. Methods: This was a retrospective cohort study conducted in emergency department intensive care units (ED-ICU) between April 2015 and November 2015. Elderly patients (≥65years old) who were admitted to the ED-ICU with a diagnosis of severe sepsis and/or septic shock were included. The demographic data, biochemistry data, qSOFA, and APACHE II score were compared between survivors and nonsurvivors. Results: A total of 117 patients was included with mean age 81.5±8.3years old. The mean APACHE II score was 21.9±7.1. In the multivariate Cox proportional hazards model, RDW level was an independent variable for mortality (hazard ratio: 1.18 [1.03-1.35] for each 1% increase in RDW, p=0.019), after adjusting for CCI, any diagnosed malignancy, and eGFR. The AUC of RDW in predicting mortality was 0.63 (95% confidence interval [CI]: 0.52-0.74, p=0.025). In subgroup analysis, for qSOFA &lt;2, nonsurvivors had higher RDW levels than survivors (17.0±3.3 vs. 15.3±1.4%, p=0.044). Conclusions: In our study, RDW was an independent predictor of in-hospital mortality in elderly patients with sepsis. For qSOFA scores &lt;2, higher RDW levels were associated with poor prognosis. RDW could be a potential parameter used alongside the clinical prediction rules. abstract_id: PUBMED:35198606 Association of Metabolic Syndrome With Long-Term Cardiovascular Risks and All-Cause Mortality in Elderly Patients With Obstructive Sleep Apnea. Background: Evidence suggests that an increased risk of major adverse cardiac events (MACE) and all-cause mortality is associated with obstructive sleep apnea (OSA), particularly in the elderly. Metabolic syndrome (MetS) increases cardiovascular risk in the general population; however, less is known about its influence in patients with OSA. We aimed to assess whether MetS affected the risk of MACE and all-cause mortality in elderly patients with OSA. Methods: From January 2015 to October 2017, 1,157 patients with OSA, aged ≥60 years, no myocardial infarction (MI), and hospitalization for unstable angina or heart failure were enrolled at baseline and were followed up prospectively. OSA is defined as an apnea-hypopnea index of ≥5 events per hour, as recorded by polysomnography. Patients were classified on the basis of the presence of MetS, according to the definition of the National Cholesterol Education Program (NCEP). Incidence rates were expressed as cumulative incidence. Cox proportional hazards analysis was used to estimate the risk of all events. The primary outcomes were MACE, which included cardiovascular death, MI, and hospitalization for unstable angina or heart failure. Secondary outcomes were all-cause mortality, components of MACE, and a composite of all events. Results: MetS was present in 703 out of 1,157 (60.8%) elderly patients with OSA. During the median follow-up of 42 months, 119 (10.3%) patients experienced MACE. MetS conferred a cumulative incidence of MACE in elderly patients with OSA (log-rank, P &lt; 0.001). In addition, there was a trend for MACE incidence risk to gradually increase in individuals with ≥3 MetS components (P = 0.045). Multivariate analysis showed that MetS was associated with an incidence risk for MACE [adjusted hazard ratio (aHR), 1.86; 95% confidence interval (CI), 1.17-2.96; P = 0.009], a composite of all events (aHR, 1.54; 95% CI, 1.03-2.32; P = 0.036), and hospitalization for unstable angina (aHR, 2.01; 95% CI, 1.04-3.90; P = 0.039). No significant differences in the risk of all-cause mortality and other components of MACE between patients with and without MetS (P &gt; 0.05). Subgroup analysis demonstrated that males (aHR, 2.23; 95% CI, 1.28-3.91, P = 0.05), individuals aged &lt;70 years (aHR, 2.36; 95% CI, 1.27-4.39, P = 0.006), overweight and obese individuals (aHR, 2.32; 95% CI, 1.34-4.01, P = 0.003), and those with moderate-severe OSA (aHR, 1.81;95% CI: 1.05-3.12, P = 0.032) and concomitant MetS were at a higher risk for MACE. Conclusion: MetS is common in elderly patients with OSA in the absence of MI, hospitalization for unstable angina or heart failure. Further, it confers an independent, increased risk of MACE, a composite of all events, and hospitalization for unstable angina. Overweight and obese males, aged &lt;70 years with moderate-severe OSA combined with MetS presented a significantly higher MACE risk. abstract_id: PUBMED:18505405 The metabolic syndrome increases cardiovascular mortality in Taiwanese elderly. Background: The prevalence of the metabolic syndrome (MetS) is high among the elderly. However, evidence that mortality increases with MetS is rare. In this study, we investigated the relationship between MetS, cardiovascular disease (CVD) and all cause mortality in the elderly. Materials And Methods: A total 10 547 participants, aged 65 years and older, of baseline cohort were recruited from four nationwide Health Screening Centres in Taiwan from 1998 to 1999. The metabolic syndrome was defined according to the America Heart Association/National Heart Lung Blood Institute definition. Cox proportional hazards regression analyses were used to estimate the relative risks (RRs) of CVD and all cause mortality for those with MetS for up to 8 years of follow-up. Results: The baseline prevalence of MetS was 50.1% (45.6% in men and 54.4% in women, respectively). A total of 1312 participants died; of these, 300 participants died from CVD. Adjusted for age, gender, smoking, total cholesterol and estimated glomerular filtration rate, the RRs for CVD and all cause mortality among participants with MetS were 1.48 (95% confidence interval = 1.16-1.90) and 1.16 (1.03-1.30), respectively, for participants compared to those without MetS. The mean RRs for CVD, however, ranged from 1.21 to 5.31 among different combinations of MetS components. Conclusion: The elderly with MetS, compared to those without MetS, had a higher CVD and all cause mortality in Taiwan. Furthermore, different combinations of MetS components posed different risks to the mortality, which deserves further research in the future. abstract_id: PUBMED:27117940 The clinical value of metabolic syndrome and risks of cardiometabolic events and mortality in the elderly: the Rotterdam study. Background: To evaluate the clinical value of metabolic syndrome based on different definitions [American Heart Association/National Heart, Lung and Blood Institute (AHA/NHLBI), International Diabetes Federation (IDF) and European Group for the Study of Insulin Resistance (EGIR)] in middle-aged and elderly populations. Methods: We studied 8643 participants from the Rotterdam study (1990-2012; mean age 62.7; 57.6 % female), a large prospective population-based study with predominantly elderly participants. We performed cox-proportional hazards models for different definitions, triads within definitions and each separate component for the risk of incident type 2 diabetes mellitus, coronary heart disease, stroke, cardiovascular- and all-cause mortality. Results: In our population of 8643 subjects, metabolic syndrome was highly prevalent (prevalence between 19.4 and 42.4 %). Metabolic syndrome in general was associated with incident type 2 diabetes mellitus (median follow-up of 6.8 years, hazard ratios 3.13-3.78). The associations with coronary heart disease (median follow-up of 7.2 years, hazard ratios 1.08-1.32), stroke (median follow-up of 7.7 years, hazard ratios 0.98-1.32), cardiovascular mortality (median follow-up of 8.2 years, ratios 0.95-1.29) and all-cause mortality (median follow-up of 8.7 years, hazard ratios 1.05-1.10) were weaker. AHA/NHLBI- and IDF-definitions showed similar associations with clinical endpoints compared to the EGIR, which was only significantly associated with incident type 2 diabetes mellitus. All significant associations disappeared after correcting metabolic syndrome for its individual components. Conclusions: Large variability exists between and within definitions of the metabolic syndrome with respect to risk of clinical events and mortality. In a relatively old population the metabolic syndrome did not show an additional predictive value on top of its individual components. So, besides as a manner of easy identification of high risk patients, the metabolic syndrome does not seem to add any predictive value for clinical practice. abstract_id: PUBMED:23330616 Higher mortality in metabolically obese normal-weight people than in metabolically healthy obese subjects in elderly Koreans. Objective: The purpose of this study was to investigate the impact of body mass index (BMI) and the presence of metabolic syndrome (MetS) on all-cause and cardiovascular mortality in elderly Korean men and women, and especially to compare metabolically obese normal-weight (MONW) and metabolically healthy obese (MHO) subjects. Patients And Methods: A total of 2317 elderly people (over 60 years of age) were studied using follow-up data from the South-West Seoul (SWS) Study, a prospective cohort study. Mortality from all causes and cardiovascular disease (CVD) were evaluated according to the combination of the presence or absence of MetS and Asian-specific body mass index (BMI) criteria (BMI &lt;23 kg/m²; normal weight, BMI 23-24·9 kg/m²; overweight, BMI ≥25 kg/m²; obesity). Results: During a median follow-up of 10·3 years, 393 subjects died, including 126 from CVD. Among subjects with MetS, all-cause and CVD mortality were significantly higher in normal-weight subjects than overweight or obese individuals in Cox proportional-hazard models adjusted for confounding factors. Furthermore, among six groups with various MetS/BMI combinations, MONW individuals had the highest risk, whereas overweight subjects without MetS had the lowest risk of death from all causes and CVD [HR = 2·2 (95% CI = 1·4-3·4), HR = 3·0 (95% CI = 1·4-6·6) respectively]. Interestingly, all-cause mortality was significantly higher in MONW than MHO individuals. Conclusions: In contrast to MHO subjects, elderly individuals with the MONW phenotype exhibited greater all-cause mortality during 10 years of follow-up. abstract_id: PUBMED:30884605 Study of epidemiological characteristics of metabolic syndrome and influencing factors in elderly people in China Objective: To estimate the prevalence of MS in elderly people aged ≥60 years and its related factors in China and provide scientific evidence for prevention and control of MS in the elderly. Methods: Data used in this study were obtained from the 2013 Chinese Chronic Diseases and Risk Factor Surveillance Program. A total of 50 497 people aged ≥60 years were selected and interviewed through multistage stratified cluster sampling at 298 surveillance sites in 31 provinces. According to the Chinese MS diagnostic criteria proposed by the Chinese Medical Association Diabetes Branch in 2017, the prevalence rates of different MS forms were compared, and the main related factors were analyzed. Results: The prevalence rate of MS was 36.9% (95%CI: 35.4-38.5). The prevalence rate was higher in urban area than in the rural area, higher in females than in males, higher in eastern area than in western area. The prevalence rate of MS in elderly people aged ≥70 years was lower than that in those aged 60-69 years. The rate in the elderly with higher education and income levels was higher than that in the elderly with lower socioeconomic level. The comparison of the prevalence of the five forms of MS in the elderly showed that hypertension had the highest prevalence rate (72.8%), followed by hyperglycemia (41.7%) and central obesity (37.6%). The prevalence rates of hypertriglyceridemia and low HDL-C were 25.8% and 17.5%. The risk for MS in women was 1.20 times higher than that in men. Age, gender, education level, living area and urban or rural residence were the main factors influencing the prevalence of MS. Smoking, drinking and physical activity levels were correlated with MS. Conclusions: The risk for MS was higher in women than in men in China's elderly population, and the risk was related to socioeconomic level and life behaviors. It is recommended to carry out lifestyle interventions, such as increasing exercise and having reasonable diet for the elderly patients with MS. Hypertension and diabetes patients also need to be treated with drugs to reduce the risk of cardiovascular disease morbidity and mortality. abstract_id: PUBMED:25014070 Association of body mass index with all-cause and cardiovascular disease mortality in the elderly. Objectives: To evaluate the associations of body mass index (BMI) with all-cause, cardiovascular disease (CVD), and expanded CVD mortality in the elderly. Design: Observational cohort study. Setting: Annual physical examination program for the elderly from 2006 to 2010. Participants: We included 77,541 Taipei residents aged ≥ 65 years (39,365 men and 38,176 women). Measurements: BMI was categorized as underweight (BMI&lt;18.5), normal weight (18.5 ≤ BMI&lt;25), overweight (25 ≤ BMI&lt;30), grade 1 obesity (30 ≤ BMI&lt;35), or grade 2-3 obesity (BMI ≥ 35). Mortality was ascertained by national death files. Results: Underweight (hazard ratios [HRs] of all-cause, CVD, and expanded CVD mortality: 1.92, 1.74, and 1.77, respectively), grade 2-3 obesity (HRs: 1.59, 2.36, and 2.22, respectively), older age, male sex, smoking, and high fasting blood sugar were significant predictors of mortality. Meanwhile, being married/cohabitating, higher education, alcohol consumption, more regular exercise, and high total cholesterol were inversely associated with mortality. Multivariate stratified subgroup analyses verified smokers (HRs of all-cause, CVD, and expanded CVD mortality: 3.25, 10.71, and 7.86, respectively, for grade 2-3 obesity), the high triglyceride group (HRs: 5.82, 10.99, and 14.22, respectively for underweight), and patients with 3-4 factors related to metabolic syndrome (HRs: 4.86, 12.72, and 11.42, respectively, for underweight) were associated with mortality. Conclusion: The associations of BMI with all-cause, CVD, expanded CVD mortality in the elderly are represented by U-shaped curves, suggesting unilateral promotions or interventions in weight reduction in the elderly may be inappropriate. Heterogeneous effects of grades 1 and 2-3 obesity on mortality were observed and should be treated as different levels of obesity. abstract_id: PUBMED:19237305 Cardiometabolic determinants of mortality in a geriatric population: is there a "reverse metabolic syndrome"? Aims: Diabetes or insulin resistance, overweight, arterial hypertension, and dyslipidaemia are recognized risk factors for cardiovascular (CV) disease. However, their predictive value and hierarchy in elderly subjects remain uncertain. Methods: We investigated the impact of cardiometabolic risk factors on mortality in a prospective cohort study of 331 elderly high-risk subjects (mean age+/-SD: 85+/-7 years). Results: Two-year total mortality was predicted by age, diabetes, low BMI, low diastolic blood pressure (DBP), low total and HDL cholesterol, and previous CV events. The effect of diabetes was explained by previous CV events. In non-diabetic subjects, mortality was predicted by high insulin sensitivity, determined by HOMA-IR and QUICKI indices. In multivariate analyses, the strongest mortality predictors were low BMI, low HDL cholesterol and previous myocardial infarction. Albumin, a marker of malnutrition, was associated with blood pressure, total and HDL cholesterol, and HOMA-IR. The inflammation marker CRP was associated with low total and HDL cholesterol, and high HOMA-IR. Conclusion: In very old patients, low BMI, low DBP, low total and HDL cholesterol, and high insulin sensitivity predict total mortality, indicating a "reverse metabolic syndrome" that is probably attributable to malnutrition and/or chronic disorders. These inverse associations limit the relevance of conventional risk factors. Previous CV events and HDL cholesterol remain strong predictors of mortality. Future studies should determine if and when the prevention and treatment of malnutrition in the elderly should be incorporated into conventional CV prevention. abstract_id: PUBMED:26061328 Associations of Metabolic Syndrome and its Components With Mortality in the Elderly: A Cohort Study of 73,547 Taiwanese Adults. Available evidence shows that metabolic syndrome (Mets) has clear adverse effects for middle-aged and pre-elderly adults; however, the effect of Mets on mortality among elderly adults remains unclear. In addition, the comparative utility of Mets and its component for predicting mortality among the elderly has not been clearly established. Using data from a large Taiwanese cohort, we evaluated the effect of Mets and its components on subsequent all-cause and cause-specific mortality overtime among the elderly. A total of 73,547 elders (age ≥65 years) participated in the Taipei Elderly Health Examination Program from 2007 to 2010. Mets was diagnosed using the adult treatment panel III criteria, and mortality was ascertained by using national death records. Time-dependent analysis was used to evaluate associations of Mets and its components with all-cause mortality, cardiovascular disease (CVD) mortality, and expanded CVD mortality. This retrospective cohort study found that 42.6% of elders had Mets. During 194,057 person-years of follow-up, 2944 deaths were observed. After adjusting for sociodemographic characteristics and comorbidities, Mets was associated with increased risk of expanded CVD mortality (hazard ratio [HR], 1.27; 95% CI, 1.10-1.46) but not all-cause or CVD mortality. Among Mets components, decreased high-density lipoprotein cholesterol (HDL-C, HR 1.25, 95% CI 1.13-1.37) and hyperglycemia (HR 1.21, 95% CI 1.12-1.31) were associated with a significant increase in all-cause mortality. Hypertension and low HDL-C were predictors of CVD mortality and expanded CVD mortality, and, as compared with Mets, were associated with a higher risk of expanded CVD mortality. The present findings indicate that, in elderly adults, individual components of Mets are better predictors of all-cause and cause-specific mortality than is Mets as a whole. Our results suggest that future efforts should focus on preventing and managing individual risk factors (particularly hypertension, low HDL-C, and hyperglycemia) rather than on "diagnosing" Mets in elders. Answer: Yes, a diagnosis of the metabolic syndrome (MetS) does provide additional prediction of cardiovascular disease (CVD) and total mortality in the elderly. Studies have shown that MetS is a significant predictor of coronary heart disease (CHD) events, stroke events, and total mortality in elderly populations. For instance, in the Dubbo Study, MetS was present in a significant portion of the elderly population and was associated with higher crude CHD, ischaemic stroke, and total mortality rates in both men and women. Even after adjusting for conventional risk factors, MetS remained a significant predictor of these outcomes (PUBMED:17437393). Similarly, in Taiwanese elderly populations, MetS was associated with higher cardiovascular mortality compared to those without MetS. The study found that different combinations of MetS components posed varying risks to mortality, indicating the complexity of the syndrome's impact on health outcomes (PUBMED:18505405). In elderly patients with obstructive sleep apnea (OSA), MetS was common and conferred an independent increased risk of major adverse cardiac events (MACE) and hospitalization for unstable angina. The study also noted that overweight and obese males, aged less than 70 years with moderate-severe OSA combined with MetS, presented a significantly higher MACE risk (PUBMED:35198606). Furthermore, the Rotterdam study indicated that while MetS was associated with incident type 2 diabetes mellitus, its associations with CHD, stroke, cardiovascular mortality, and all-cause mortality were weaker. However, the study concluded that MetS did not show additional predictive value on top of its individual components in an elderly population (PUBMED:27117940). Lastly, a study of 73,547 Taiwanese adults found that individual components of MetS were better predictors of all-cause and cause-specific mortality than MetS as a whole, suggesting that managing individual risk factors may be more effective than focusing on a MetS diagnosis in elders (PUBMED:26061328). In summary, while MetS does provide additional predictive information for cardiovascular disease and total mortality in the elderly, the predictive value may vary depending on the population studied and the individual components of MetS present.
Instruction: Margin involvement at prostatectomy for clinically localized prostate cancer: does a low-risk group exist? Abstracts: abstract_id: PUBMED:25413418 Margin involvement at prostatectomy for clinically localized prostate cancer: does a low-risk group exist? Purpose: To determine whether additional pathology details may provide risk stratification for patients with involved surgical margins at radical prostatectomy (RP). Methods And Materials: Eligible patients underwent RP between 2003 and 2010. Patients with preoperative prostate-specific antigen (PSA) ≥20, follow-up &lt;12 months, lymph node or seminal vesicle involvement, or who received radiation therapy or hormone therapy prior to PSA relapse were excluded. Surgical specimens were reviewed by a study pathologist, blinded to outcomes. Survival analysis methods were employed to assess disease control and survival rates, as well as association of patient-, tumor-, and treatment-specific factors for endpoints. Results: Of 355 RP cases, 279 patients were eligible for the present analysis. At a median follow-up of 53 months (range, 16-127), 31/114 (27%) of patients with involved surgical margins experienced PSA relapse, as compared with 7/165 (4%) for negative margins (hazard ratio, 4.997; 95% confidence interval, 2.425-10.296; P &lt; .0001). Detailed pathology review demonstrated associations between PSA relapse and Gleason score at RP, extent of margin involvement (width), capsule penetration, and perineural invasion. Subgroup analysis identified low risk (4%) of 5-year PSA relapse for patients with Gleason ≤6 mm and margin width ≤4 mm (single maximal or cumulative). All subgroups with higher Gleason score or wider margin were associated with &gt;20% risk of PSA relapse at 5 years. Conclusions: Within the present study, Gleason score, 6 patients with margin width ≤4 mm appear to have low rates of early PSA relapse following RP. Low-grade cases with larger extent of margin involvement or higher risk Gleason score patients with any margin involvement have high rates of early PSA relapse. abstract_id: PUBMED:35083067 The association of a risk group with positive margin in the intraoperative and final pathology examination after robotic radical prostatectomy. Introduction: The aim of this study was to evaluate the risk of a positive margin in the intraoperative and final pathology depending on the risk group for biochemical recurrence in biopsy specimens after robot-assisted radical prostatectomy (RaRP) with sparing of the neurovascular bundles (NS). Material And Methods: The study was prospective and non-randomised. The intraoperative and final pathology examinations were performed in 65 consecutive patients treated with RaRP between 11.2019-08.2020. In the intraoperative examination, the site of the dissected neurovascular bundles and any suspicious places were examined. Patients were divided into 3 risk groups [according to the European Association of Urology (EAU) biochemical recurrence-risk stratification]. Due to the uncertain prognostic value of microscopic positive margins, 3 groups were identified: R0, Rmicro and R1. Results: In the intraoperative examination, the distribution of risk groups in R0, Rmicro and R1 groups is similar (p = 0.132). In the postoperative study, the distribution of risk groups in each margin group is different, and is statistically significant (p &lt;0.001). It has been shown that an increase in the risk group is an indicator of the occurrence of a positive margin in the final histopathological result regardless of the inclusion of Rmicro into R1 or into R0 by 2.68 and 6.52 times, respectively. Conclusions: The preoperative risk group is an important factor for the occurrence of a positive margin, but only in the final examination and not in the intraoperative one. An intraoperative examination of the neurovascular bundles only is pointless and should be extended to the examination of the apex. abstract_id: PUBMED:2368227 Incidence of surgical margin involvement in various forms of radical prostatectomy. The pathologic specimens of 64 patients who underwent radical prostatectomy for clinical Stage A or B carcinoma of the prostate were reviewed retrospectively for surgical margin involvement with cancer. Fourteen of the operations were performed by the radical transperineal method, 30 by the standard radical retropubic approach, and 20 by the nerve-sparing radical retropubic technique. Seventy-eight percent of the radical transperineal group had resection margin involvement, as opposed to 30 percent of the standard radical retropubic cases, and 45 percent of the nerve-sparing radical retropubic cases. The average tumor burden of the transperineal group was larger than that of the other two groups. Resection margin involvement in all groups was associated with a higher Gleason histologic score. No significant difference was noted between the two retropubic groups in terms of resection margin involvement (P = 0.28), suggesting that nerve-sparing radical retropubic prostatectomy does not compromise the surgical goal of radical prostatectomy for carcinoma over that of the standard radical retropubic prostatectomy. abstract_id: PUBMED:35411637 Impact of nerve sparing in robot-assisted radical prostatectomy on the risk of positive surgical margin and biochemical recurrence. Objectives: Nerve sparing may increase positive surgical margin rate during radical prostatectomy. Our objective was to analyze the positive surgical margin rate and location as well as its impact on biochemical recurrence according to nerve sparing procedure in robot-assisted radical prostatectomy. Methods: We included 814 patients treated with robot-assisted radical prostatectomy between 2009 and 2021, and evaluated the impact of nerve sparing on positive surgical margin and biochemical recurrence using logistic regression and Cox models. Results: Unilateral nerve sparing and bilateral nerve sparing were performed in 152 (18.6%) cases and 118 (14.5%) cases, respectively. On multivariable analysis, in addition to nerve sparing, bilateral nerve sparing, but not unilateral nerve sparing was associated with an increased risk of positive surgical margin compared with non-nerve sparing. Positive surgical margin at any location increased the risk of biochemical recurrence. During unilateral nerve sparing, positive surgical margin in nerve sparing side, but not in non-nerve sparing side was associated with increased risk of biochemical recurrence on multivariate analysis. Conclusions: Taken together, surgeons need to notice an increased risk of biochemical recurrence associated with positive surgical margin when performing nerve sparing in robot-assisted radical prostatectomy, and then need to choose the patients suitable for nerve sparing. abstract_id: PUBMED:37401692 Modern methods for determining the positive surgical margin during radical prostatectomy Determining and evaluation of predictors of biochemical recurrence (BCR) is one of the essential aim, which may help to achieve the most effective treatment of prostate cancer. Obviously, positive surgical margins represent an independent risk factor for BR following radical prostatectomy. The development of methods determining the status of the surgical margin during surgery is an important direction which can upgrade the effectiveness of prostate cancer treatment.Moreoverit is relevant to review modern methods for diagnosing the status of the surgical margin during radical prosta-tectomy. This article presents a systematic review carried out at the Department of Urology and Andrology of Pirogov Russian National Research Medical University. In September 2021, we performed a PubMed/ Web of Science search to include articles published in 1995-2020 evaluating the key words "prostate cancer", "surgical margin", "radical prostatectomy", "biochemical recurrence", "methods for determining the surgical margin". Nowadays the following technologies have been developed and being actively studied: the usage of aminolevulinic acid, optical coherence tomography, optical spectroscopy, confocal laser microscopy, 3D augmented reality, 3D modeling, the study of frozen samples. abstract_id: PUBMED:25512161 Surgical margin length and location affect recurrence rates after robotic prostatectomy. Background: Robotic-assisted laparoscopic radical prostatectomy is a current standard treatment for localized prostate cancer, with treatment failure defined by biochemical recurrence (BCR). Open radical prostatectomy series have identified the presence of a positive surgical margin (PSM) as a predictor of long-term recurrence, a measure that is affected by the surgeon׳s skill. We evaluate the effect of PSM parameters on BCR rates from robotic-assisted laparoscopic radical prostatectomy, across 3 high-volume institutions. Methods: De-identifiable clinicopathological and histopathological data were prospectively collected for 4,001 patients with at least 3 years of follow-up. Kaplan-Meier plots and 3 statistical models were used to evaluate the effect of margin parameters on BCR, via crude rates, traditional multivariable Cox regression, and a propensity-adjusted Cox regression model. Results: Overall, 37% of men with a PSM developed BCR compared with 10% of men with negative margins (hazard ratio [HR] = 1.81, 95% CI: 1.47-2.22). Length ≥3 mm or a multifocal positive margin was associated with a higher risk of BCR compared with negative margin cases. On multivariable Cox regression analysis of the positive margin cohort, only apical margins significantly predicted BCR relative to basal margins (HR = 2.03, 95% CI: 1.01-4.09), whereas there was no significant difference in BCR rates for posterolateral margins relative to basal margins (HR = 1.62, 95% CI: 0.84-3.11). Propensity-adjusted modeling confirmed a greater effect of apical compared with posterolateral PSM. Conclusions: A PSM length ≥3 mm is predictive of BCR, as is to a lesser extent multiple positive margins. In contrast to open prostatectomy series, posterolateral margins carry a smaller risk of BCR compared with apical margins. abstract_id: PUBMED:37006215 The prevalence and risk factors of upgrading of Gleason grade group between transrectal ultrasound prostate biopsy and prostatectomy specimens. Background: The risk stratification of prostate cancer using Gleason grade group (GG), serum prostate-specific antigen (PSA), and T staging has an important role for appropriate treatment. In fact, the GG of biopsy was not the same as the prostatectomy specimen. The upgrading of GG has a significant risk of delay treatment. The study aims to evaluate the concordance of GG between biopsy and prostatectomy specimens and the factors of upgrading GG. Materials And Methods: Retrospectively reviewed data from January 2010 to December 2019, 137 patients underwent prostate biopsy and followed by prostatectomy. Patients' data include pathological reports, imaging reports, serum PSA, PSA density (PSAD), and free PSA were analyzed by univariate and multivariate analysis. Results: The concordance between the pathology was found in 54 specimens (39.4%) with the upgrading of GG in the prostatectomy was 57 specimens (41.6%). Furthermore, the downgrading was 26 specimens (18.9%). Serum PSA &gt;10 ng/ml (P 0.003), PSAD &gt;0.2 ng/ml/cm3 (P 0.002), free/total PSA ratio (P 0.003), margin positive for malignancy (P 0.033), and extraprostatic involvement (P 0.039) were significantly related with upgrading at the univariate analysis. Only a PSAD &gt;0.2 (P 0.014) was found to be an independent factor that is predictive of upstaging in multivariate analysis. Conclusions: The prevalence of upgrading of GG from prostate biopsy to radical prostatectomy is as high as the other study. The factor that related to upstaging of GG was PSAD. Therefore, additional tools for biopsy were required to enhance the accurate diagnosis and staging of prostate cancer. abstract_id: PUBMED:19913824 Impact of positive surgical margins after radical prostatectomy differs by disease risk group. Purpose: Positive surgical margins have a negative impact on disease outcomes after radical prostatectomy, yet their prognostic value may vary depending on specific pathological characteristics. We examined the relationship of positive surgical margins to biochemical progression according to several clinicopathological features. Materials And Methods: We analyzed data from 1,268 patients who underwent radical prostatectomy for clinically localized prostate cancer at our center between 1992 and 2008, and did not receive any neoadjuvant or adjuvant treatment. We examined the relation of age, pretreatment prostate specific antigen, pathological T stage, radical prostatectomy Gleason score, disease risk group and surgical margin status to biochemical progression-free survival. Results: The overall positive surgical margin rate was 20.8% and median followup was 79 months. The impact of positive surgical margins was dependent on risk group. Biochemical progression-free survival was 99.6% for the negative surgical margin group vs 94.9% for the positive surgical margin group in low risk disease (log rank p = 0.53), 93.5% for the negative surgical margin group vs 83% for the positive surgical margin group in intermediate risk disease (log rank p &lt;0.001) and 78.5% for the negative surgical margin group vs 57.1% for the positive surgical margin group in high risk disease (log rank p = 0.003). These differences remained significant in a multivariate Cox regression model adjusting for other clinicopathological features. Conclusions: Positive surgical margins are an independent predictor of biochemical progression in patients with intermediate and high risk prostate cancer. Patients with low risk disease have a favorable long-term outcome regardless of margin status and may be candidates for expectant management even with positive surgical margins, sparing them the side effects and costs of treatment. abstract_id: PUBMED:35387469 Risk of biochemical recurrence in patients with a positive surgical margin after radical prostatectomy. Introduction: Serum prostate specific antigen (PSA) is an irreplaceable marker in the detection and follow-up of patients with prostate cancer. In our analysis we addressed factors that could indicate the likelihood of biochemical recurrence (BCR) early after radical prostatectomy. We mainly focused on the positive surgical margin (R1). Methods: Retrospective evaluation and analysis of the database of patients with prostate cancer after radical prostatectomy from 2001 to 2019. In total 1529 patients were enrolled in the study. The median follow-up was 48 months. The age of the patients ranged from 49 to 76 years. We used pre-operative PSA values, and the monitoring of the dynamics of 3rd generation PSA progression (detection limit 0.003 ng/ml) at month 1 and month 3 after surgery and then in 3-month intervals. We monitored the surgical margin positivity (R0 negative, R1 positive) and the Gleason score (GS) based on histological samples and we analysed the relationship to biochemical recurrence of the disease. Results: The pre-operative PSA value did not show a direct relationship to the R1 risk. Patient values in the groups R1 and R0 differed only by 1.159 ng/ml (p=NS). The 3rd generation PSA value at month 1 after surgery was 50.82% higher in R1 patients (p&amp;gt;0.001). 50% of patients with R1 (29.5% patients of the total) did develop BCR during the follow-up period, while in patients with R0 (70.5% patients of the total) this proportion was 30% (p&amp;gt;0.001). Among those with GS 67, 47% developed BCR. The GS 810 group relapsed in 75% of the cases (p&amp;gt;0.001). Conclusion: According to our analysis 33% of the patients reached the stage of biochemical recurrence. We demonstrated a direct dependency between the risk of recurrence and the final Gleason score. The presence of R1 should not be viewed as a direct indication for adjuvant radiotherapy. abstract_id: PUBMED:18640061 Robotic-assisted laparoscopic and radical retropubic prostatectomy generate similar positive margin rates in low and intermediate risk patients. Objective: Robotic-assisted laparoscopic prostatectomy (RALP) is being increasingly utilized. To assess the efficacy of the operation, we compared apical and overall margin status for RALP with radical retropubic prostatectomy (RRP) in a group of contemporary patients. Patients And Methods: We retrospectively reviewed 98 consecutive RRPs and then 94 RALPs from a single institution. Groups were analyzed and matched with regard to preoperative prostate-specific antigen (PSA), cancer grade, pathologic stage, and tumor volume. Surgical margins were quantitated. Results: Clinicopathologic parameters were compared and additional high risk patients were observed in the RRP vs. RALP group. To risk-adjust these patient groups, those meeting preoperative high risk criteria were excluded from further positive margin analysis. Postoperatively, the average tumor volume was 13% in both groups. Pathologic stage pT3 was similar between RRP (14%) and RALP (11%). A positive surgical margin (PSM) was found in 12 cases (14%) after RRP and 11 cases (13%) after RALP including apical margins. Positive margins at the apex, non-apex, and both were statistically similar between groups. Conclusions: In this study, no differences were seen between robotic prostatectomy with regard to apical or overall margin status compared with open prostatectomy in lower risk patients. This suggests that despite improved visualization, RALP generates a similar margin status as RRP. Answer: Yes, a low-risk group for margin involvement at prostatectomy for clinically localized prostate cancer does exist. According to the study by PUBMED:25413418, a subgroup of patients with Gleason score ≤6 and margin width ≤4 mm (single maximal or cumulative) demonstrated a low risk (4%) of 5-year PSA relapse following radical prostatectomy (RP). This suggests that within this specific subgroup, the rates of early PSA relapse are low, indicating a lower risk group for margin involvement. However, it is important to note that patients with higher Gleason scores or wider margin involvement are associated with a greater than 20% risk of PSA relapse at 5 years, indicating a higher risk group. Additionally, the study by PUBMED:19913824 found that positive surgical margins are an independent predictor of biochemical progression in patients with intermediate and high-risk prostate cancer. However, patients with low-risk disease have a favorable long-term outcome regardless of margin status and may be candidates for expectant management even with positive surgical margins, sparing them the side effects and costs of treatment. This further supports the existence of a low-risk group for whom margin involvement may not significantly impact disease outcomes.
Instruction: Does insulin glargine increase the risk of cancer compared with other basal insulins? Abstracts: abstract_id: PUBMED:26979594 Tumour Risk with Once-Weekly Glucagon-Like Peptide-1 Receptor Agonists in Type 2 Diabetes Mellitus Patients: A Systematic Review. Background And Objective: Once-weekly glucagon-like peptide-1 receptor agonists (GLP-1RAs) are a novel class of injectable antidiabetic drugs. Previous studies indicated that GLP-1RAs (exenatide and liraglutide) might increase the incidence of pancreatitis and pancreatic cancer. Here, we evaluated the clinical safety of once-weekly GLP-1RAs with respect to tumour risk. Methods: Relevant studies were selected from ClinicalTrials.gov. Randomized controlled trials that reported the incidences of neoplasms were included in our research. Outcomes were calculated as the risk ratio using the Mantel-Haenszel method and fixed-effects model. Results: Our analysis included 26 randomized controlled trials with 16,090 patients. Once-weekly GLP-1RAs did not increase the risk for tumours compared with other antidiabetic drugs [risk ratio (RR), 1.02; 95 % confidence interval (CI), 0.74-1.41; p = 0.91]; this finding was independent of the type of GLP-1RA administered (albiglutide, exenatide extended-release and dulaglutide) and duration of the trials (limited to ≥52 weeks). Subgroup analyses revealed that once-weekly GLP-1RAs did not increase tumour risk compared with placebos, exenatide and liraglutide, insulin or oral drugs. Additionally, once-weekly GLP-1RAs did not increase tumour risk in any tissue. Conclusions: Compared with other antidiabetic drugs, once-weekly GLP-1RAs did not increase the risk for any tumour, and this finding was independent of the type of GLP-1RA administered and treatment duration. However, our study had many limitations, and further longer term trials with larger samples should be conducted in future to confirm our results. abstract_id: PUBMED:22019750 Diabetes mellitus and the risk of cancer. Although diabetes has been known to increase the risk of cancer for over a century, it was not until recently when this area gained momentum and generated a lot of interest. That is in- part because of the rising global diabetes epidemic and the wide spread use of insulin analogues, metformin and other anti-diabetic agents, providing hypothesis generating data on the cancer risk in the diabetic population. Type 2 diabetes is associated with increased risk of breast, colon, pancreatic and other types of cancer, while type 1 diabetes is associated with increase in stomach, pancreatic, endometrial and cervical cancer. Mechanisms postulated for increased cancer risk in diabetes include hyperglycemia, hyperinsulinemia with stimulation of IGF-1 axis, obesity that serves as a common soil hypothesis for both cancer and diabetes as well as other factors such as increased cytokine production. More recently some antidiabetic agents have been thought to increase cancer risk such as insulin glargine, while metformin appears to lower cancer risk. In this review, we present the evidence for the link between diabetes and cancer highlighting the general mechanisms proposed for such a link as well as specific hypotheses for individual cancer. We will also discuss the role of insulin, metformin and other antidiabetic agents in cancer risk. abstract_id: PUBMED:32404443 Is Gliclazide Associated with a Lower Obesity-Related Cancer Risk Compared to Other Sulfonylureas? A Long-term Prospective Cohort Study. Background: Gliclazide has been suspected to be associated with a lower obesity-related cancer risk; however, current evidence is limited by important methodologic shortcomings. This study aimed to evaluate whether gliclazide is preferred over other sulfonylureas regarding obesity-related cancer risk. Methods: In this prospective cohort study, an annual benchmarking database in Dutch primary care (Zwolle Outpatient Diabetes project Integrating Available CareZODIAC, 1998-2014) was linked to the Netherlands Cancer Registry and the Dutch Personal Record Database. Of the 71,648 patients with type 2 diabetes, we included 26,207 who used sulfonylureas and had no history of cancer or insulin use at baseline. Obesity-related cancer was defined using the latest definition of the World Cancer Research Fund. Cox regression analyses were used to estimate HRs, with both baseline sulfonylurea and cumulative exposure modeled and corrected for baseline covariates. Results: During follow-up for 167,692 person-years, there were 1,111 obesity-related cancer events. For males, the adjusted HRs [95% confidence interval (CI)] for baseline sulfonylurea compared with gliclazide were as follows: glibenclamide, 1.10 (0.92-2.69); glimepiride, 1.13 (0.68-1.84); and tolbutamide, 0.93 (0.59-1.48). For females, these were as follows: glibenclamide, 1.49 (0.72-3.13); glimepiride, 0.96 (0.59-1.54); and tolbutamide, 0.84 (0.54-1.28). The adjusted HRs (95% CI) for one more year of cumulative exposure compared with gliclazide were as follows: glibenclamide, 0.90 (0.71-1.14); glimepiride, 0.96 (0.87-1.06); and tolbutamide, 1.00 (0.92-1.09). For females, these were as follows: glibenclamide, 0.93 (0.77-1.13); glimepiride, 0.99 (0.90-1.10); and tolbutamide, 1.04 (0.96-1.13). Conclusions: Obesity-related cancer risk was comparable between gliclazide and other sulfonylureas. Impact: Gliclazide is not preferred over other sulfonylureas regarding obesity-related cancer risk. abstract_id: PUBMED:22966091 Does insulin glargine increase the risk of cancer compared with other basal insulins?: A French nationwide cohort study based on national administrative databases. Objective: To explore in France the relationship between insulin glargine use and overall and specific cancer risks in type 2 diabetic patients compared with other basal insulins. Research Design And Methods: Data were extracted from French health insurance information system (Système National d'Information Inter-Régimes de l'Assurance Maladie) linked with data from the French Hospital Discharge database (Programme de Médicalisation des Systèmes d'Information). Included were 70,027 patients aged 40-79 years who started a basal insulin in 2007-2009. Cox proportional hazards models with age as time-scale were used to calculate multivariate-adjusted hazard ratios for associations between type of basal insulin and risk of overall cancer, breast cancer, and seven other cancer sites. Results: The median follow-up was 2.67 years in patients exposed to insulin glargine. Absolute event rates for all cancer in patients exposed to glargine versus other basal insulin users were 1,622 and 1,643 per 100,000 person-years, respectively. No significant association was observed between glargine exposure and overall cancer incidence after adjustment for sex, with a hazard ratio of 0.97 (95% CI 0.87-1.07), or after additional adjustment for any other hypoglycemic agent use and duration of diabetes. No increased risk of breast cancer was observed for glargine users compared with other basal insulins users, with a fully adjusted hazard ratio of 1.08 (0.72-1.62). Conclusions: In a large cohort of patients newly treated by basal insulin, no increased risk of any cancer was observed in insulin glargine users compared with other basal insulin users. Because follow-up did not exceed 4 years, longer-term studies are needed. abstract_id: PUBMED:21898383 Increase in circulating levels of IGF-1 and IGF-1/IGFBP-3 molar ratio over a decade is associated with colorectal adenomatous polyps. High levels of circulating insulin-like growth factor-1 (IGF-1) have been associated with increased risk of several cancers. Regarding colorectal cancer, these associations are generally weak. We hypothesized that an increase in IGF-1 over time would be a stronger risk factor for cancer-related outcomes than the actual levels. In this analysis we utilized existing data from the Insulin Resistance and Atherosclerosis Study (IRAS). Circulating IGF-1 levels and molar ratios of IGF-1 to IGF binding protein 3 (IGFBP-3) were measured at three time points, within a 10-year follow-up period. We examined the associations of increase of the two variables with the presence of colorectal adenoma at the end of follow-up among participants with normal glucose tolerance at baseline. This included 143 individuals, from which 24 were diagnosed with adenomatous polyps. Although the mean levels of IGF-1 and IGF-1/IGFBP-3 decline with age, ~ 30% of the participants showed an increase of at least fifteen percent ("ever increase") in one or both of these variables, compared to baseline. We found a positive association between "ever increase" in IGF-1 or IGF-1/IGFBP-3 and the presence of colorectal adenoma: ORs were 3.81 (95% CI: 1.30-10.8) and 2.83 (95% CI: 1.00-8.22), respectively. No association was found when analyzing the actual levels of both variables at any time point. Our data suggest that an increase in circulating IGF-1 or IGF-1/IGFBP-3 may represent a disturbed GH/IGF1 homeostasis, which could favor the development of precancerous lesions such as colorectal adenoma. abstract_id: PUBMED:32913460 High serum Androgen and Insulin concentrations increase the tendency of Endometrial Carcinoma. Purpose: The objective of the study was to evaluate the important role played by androgen and insulin in the development of endometrial carcinoma (EC), and their combined effect on EC risk. Methods: We enrolled 510 type I EC patients and 510 age-, time-, and nationality-matched subjects into this study. Metabolic and hormonal parameters of enrolled subjects were examined. Univariate and multivariate logistic regression analyses for EC and control subjects were performed. Type I EC risk was evaluated with respect to testosterone, androstenedione, and insulin levels based on odds ratios (ORs) using stratified data. Results: EC risk was positively associated with C-peptide, estrone, androgen (including testosterone and androstenedione) and insulin levels, BMI, WHR, family history of cancer, nulliparity, irregular menstruation, diabetes, and hypertension. In multivariate logistic regression models, high C-peptide and testosterone levels, diabetes, and hypertension were independent risk factors after adjustment for BMI, WHR, family history of cancer, high serum insulin, and estrone levels. Increased serum total testosterone and insulin levels were positively correlated with EC risk in total, premenopausal, and postmenopausal women. Androstenedione was correlated with EC in total and postmenopausal, but not in premenopausal subjects. Compared with higher testosterone and insulin, odds ratios (ORs) for higher testosterone with lower insulin and lower testosterone with higher insulin were decreased in total, premenopausal, and postmenopausal women. Similarly, compared to both higher FAI and insulin, ORs for higher FAI with lower insulin and lower FAI with higher insulin were decreased in all three groups. Coordinately, ORs for higher androstenedione with lower insulin and lower androstenedione with higher insulin were decreased in total and postmenopausal, but not premenopausal subjects. Conclusions: These findings suggested that androgen and insulin were risk factors of type I EC, and relatively high levels of both testosterone and insulin synergistically affected EC risk. abstract_id: PUBMED:20947488 Metformin and cancer risk in diabetic patients: a systematic review and meta-analysis. Metformin, an insulin-lowering agent, has been associated with decreased cancer risk in epidemiologic studies in diabetic patients. We performed a comprehensive literature search and meta-analysis of epidemiologic studies to assess the effect of metformin on cancer incidence and mortality in diabetic patients, using Pubmed, ISI Web of Science, Embase, and the Cochrane library until May 2009, with no language or time restrictions. Independent reports with sufficient information to allow risk estimation of cancer risk/mortality and a measure of uncertainty were reviewed and cross-checked independently by three investigators. Eleven studies were selected for relevance in terms of intervention, population studied, independence, and reporting of cancer incidence or mortality data, reporting 4,042 cancer events and 529 cancer deaths. A 31% reduction in overall summary relative risk (0.69; 95% confidence interval, 0.61-0.79) was found in subjects taking metformin compared with other antidiabetic drugs. The inverse association was significant for pancreatic and hepatocellular cancer, and nonsignificant for colon, breast, and prostate cancer. A trend to a dose-response relationship was noted. Metformin is associated with a decreased risk of cancer incidence compared with other treatments among diabetic patients. Given the retrospective nature of most studies and the possibility that the control treatments increase risk, phase II trials are needed before large cancer prevention trials are launched. abstract_id: PUBMED:30968228 Obesity and the Risk of Gastrointestinal Cancers. Obesity is a risk factor for all major gastrointestinal cancers. With the rapid increase in the prevalence of obesity worldwide, this link could lead to an elevated burden of cancers of the digestive system. Currently, three main mechanisms explaining the link between excess adiposity and gastrointestinal cancer risk are being considered, including altered insulin signaling, obesity-associated chronic low-grade inflammation, and altered sex hormone metabolism, although new potential mechanisms emerge. This review is aimed to present our current knowledge on biological mechanisms involved in adiposity-related gastrointestinal carcinogenesis supported by results collected in epidemiological studies. abstract_id: PUBMED:24466131 Human insulin does not increase bladder cancer risk. Background: Whether human insulin can induce bladder cancer is rarely studied. Methods: The reimbursement databases of all Taiwanese diabetic patients from 1996 to 2009 were retrieved from the National Health Insurance. An entry date was set at 1 January 2004 and a total of 785,234 patients with type 2 diabetes were followed up for bladder cancer incidence until the end of 2009. Users of pioglitazone were excluded and the period since the initiation of insulin glargine (marketed after the entry date in Taiwan) was not included in the calculation of follow-up. Incidences for ever-users, never-users and subgroups of human insulin exposure (using tertile cutoffs of time since starting insulin, duration of therapy and cumulative dose) were calculated and the hazard ratios were estimated by Cox regression. Results: There were 87,940 ever-users and 697,294 never-users, with respective numbers of incident bladder cancer of 454 (0.52%) and 3,330 (0.48%), and respective incidence of 120.49 and 94.74 per 100,000 person-years. The overall hazard ratios (95% confidence intervals) indicated a significant association with insulin in the age-sex-adjusted models [1.238 (1.122-1.366)], but not in the model adjusted for all covariates [1.063 (0.951-1.187)]. There was also a significant trend for the hazard ratios for the different categories of the dose-response parameters in the age-sex-adjusted models, which became insignificant when all covariates were adjusted. Conclusions: This study relieves the concern of a bladder cancer risk associated with human insulin. Appropriate adjustment for confounders is important in the evaluation of cancer risk associated with a medication. abstract_id: PUBMED:24267900 Social disparities, health risk behaviors, and cancer. Background: Overall cancer incidence rates decreased in the most recent time period in both men and women, largely due to improvements in surgical therapeutic approaches (tertiary prevention) and screening programs (secondary prevention), but differences in cancer incidence and survival according to socioeconomic status are documented worldwide. Health risk behaviors, defined as habits or practices that increase an individual's likelihood of harmful health outcomes, are thought to mediate such inequalities. Discussion: Obesity has been related with increased cancer incidence and mortality due to imbalance of leptin and adiponectin which are connected to activation of PI3K, MAPK, and STAT3 pathways and decreasing insulin/insulin-like growth factor (IGF)-1 and mTOR signaling via activation of 5 AMP-activated protein kinase (AMPK), respectively. Physical activity has been associated to prevent cancer by the aforementioned obesity-related mechanisms, but also increasing level of circulating vitamin D, which has been related to lower risk of several cancers, and increasing prostaglandin F2a and reducing prostaglandin E2, which are both related with cancer prevention and promotion, respectively. A large number of different substances may induce themselves a direct cytotoxicity and mutagenic action on cells by smoking, whereas alcohol promote immune suppression, the delay of DNA repair, inhibition of the detoxification of carcinogens, the production of acetaldehyde, and the contribution to abnormal DNA methylation. The combined smoking and alcohol drinking habits have been shown to increase cancer risk by smoke action of increasing the acetaldehyde burden following alcohol consumption and alcohol action of enhancing the activation of various procarcinogens contained in tobacco smoke. Conclusions: Interventions at the social level may be done to increase awareness about cancer risks and promote changing in unhealthy behaviors. Answer: Based on the information provided in the abstracts, insulin glargine does not appear to increase the risk of cancer compared with other basal insulins. A French nationwide cohort study based on national administrative databases explored the relationship between insulin glargine use and overall and specific cancer risks in type 2 diabetic patients compared with other basal insulins. The study included 70,027 patients aged 40-79 years who started a basal insulin in 2007-2009. The results showed no significant association between glargine exposure and overall cancer incidence after adjustment for sex, with a hazard ratio of 0.97 (95% CI 0.87-1.07), or after additional adjustment for any other hypoglycemic agent use and duration of diabetes. Furthermore, no increased risk of breast cancer was observed for glargine users compared with other basal insulins users, with a fully adjusted hazard ratio of 1.08 (0.72-1.62). The study concluded that in a large cohort of patients newly treated by basal insulin, no increased risk of any cancer was observed in insulin glargine users compared with other basal insulin users (PUBMED:22966091). Additionally, another study specifically investigating whether human insulin can induce bladder cancer found that there was no significant association with insulin in the model adjusted for all covariates, with a hazard ratio of 1.063 (0.951-1.187). This study relieved the concern of a bladder cancer risk associated with human insulin (PUBMED:24466131). Therefore, based on the findings from these studies, insulin glargine does not increase the risk of cancer compared to other basal insulins.
Instruction: Are independent mobility and territorial range associated with park visitation among youth? Abstracts: abstract_id: PUBMED:24909862 Are independent mobility and territorial range associated with park visitation among youth? Background: Parks are important for providing opportunities for physical activity among youth. Apart from engaging in physical activity whilst visiting a park, active transportation (e.g. walking or cycling) to parks is potentially an additional source of physical activity. Previous research has shown that a major barrier to young people visiting parks is their inability to visit parks unaccompanied by an adult. It is not known; however, whether young people who have greater independent mobility and territorial range (ability to move around their neighbourhood alone or with friends, unaccompanied by an adult) are more likely to visit parks. This study examined park visitation and travel mode to parks and whether independent mobility and territorial range were associated with park visitation among youth living in disadvantaged areas of Victoria, Australia. Methods: In 2010-11, 311 youth aged 8-16 years self-reported their park use, active transport, independent mobility to parks, and territorial range. Logistic regression models determined the odds of park visitation (once per week or more) according to independent mobility and territorial range, adjusting for key covariates. Results: Overall, 75% of participants reported visiting parks, and 37% visited their 'usual' park at least once per week. Of those who reported visiting parks, 87% travelled to the park they usually visited using active transport: 57% walked, 22% cycled, and 8% used a scooter/skateboard. Just 15% and 13% of youth regularly walked or cycled alone to parks/playgrounds respectively, and 25% and 19% regularly walked or cycled with friends or siblings (no adults) respectively. For the 84% who reported having parks/playgrounds within walking distance from home, those who regularly walked alone to parks (OR 3.61; CI=1.67, 7.80), and regularly walked (OR 2.27; CI=1.14, 4.55) or cycled (OR 3.38; CI=1.73, 6.62) with friends to parks, were significantly more likely to visit a park at least once per week, compared to others. Conclusions: This study showed that active transport is frequently used by this sample of young people to travel to parks. Findings also highlight the potential importance of providing opportunities for youth aged 8-16 years to visit local parks independent of an adult. abstract_id: PUBMED:23933224 Is park visitation associated with leisure-time and transportation physical activity? Objective: The aim of this study was to examine whether frequency of park visitation was associated with time spent in various domains of physical activity among adults living in a disadvantaged neighbourhood of Victoria, Australia. Methods: In 2009, participants (n=319) self-reported park visitation and physical activity including: walking and cycling for transport, leisure-time walking, leisure-time moderate- to vigorous-intensity physical activity, and total physical activity. Results: The mean number of park visits per week was 3.3 (SD=3.8). Park visitation was associated with greater odds of engaging in high (as compared to low) amounts of transportation physical activity, leisure-time walking, leisure-time moderate- to vigorous-intensity physical activity (MVPA) and total physical activity. Each additional park visit per week was associated with 23% greater odds of being in the high category for transportation physical activity, 26% greater odds of engaging in high amounts of leisure-time walking, 11% greater odds of engaging in MVPA, and 40% greater odds of high total physical activity. Conclusions: Acknowledging the cross-sectional study design, the findings suggest that park visitation may be an important predictor and/or destination for transportation and leisure-time walking and physical activity. Findings highlight the potentially important role of parks for physical activity. abstract_id: PUBMED:31466293 Roaming the Neighbourhood: Influences of Independent Mobility Parenting Practices and Parental Perceived Environment on Children's Territorial Range. Children's independent mobility (IM), their freedom to move about their neighbourhood without supervision by adults, has been in steady decline in recent decades. Previous research has linked perceptions of the environment with various measures of IM, but recently concerns have been raised regarding inconsistency in measuring IM. This study used various measures of IM and aimed to address how parental perceptions of the neighbourhood environment are associated with children's territorial range (actual IM), as well as how this relationship is mediated by IM parenting practices (allowed IM). A sample of 105 child/parent dyads from Vancouver, Canada participated in this study. Children (age 10-13) wore a global positioning system (GPS) watch and an accelerometer and completed an activity diary for seven days to assess their territorial range. Parents completed a questionnaire that assessed perceptions of their neighbourhood environment and IM parenting practices-license for IM and roaming allowance. Path analyses were used to address the research aims. License for IM and roaming allowance mediated the relationship between perceived walking facilities, crime safety, and neighbourhood relations and children's territorial range. Findings suggest that future interventions to increase children's territorial range should focus primarily on attitude and behaviour change among parents to grant children more freedom. abstract_id: PUBMED:29368610 The REVAMP natural experiment study: the impact of a play-scape installation on park visitation and park-based physical activity. Background: Designing parks that optimise visitation and support visitors to be active is important for public health. Yet there is very little evidence about whether playground refurbishment achieves these objectives. This study examined the impact of the installation of a play-scape in a large metropolitan park in Melbourne, Australia. Methods: Natural experiment study (intervention vs control). At both parks, park visitation and physical activity were assessed before (T1, 2013) and after the intervention at 12 (T2, 2014) and 24 months (T3, 2015). At each time point, measures included: observations of park visitors using the System for Observing Play and Recreation in Communities on four weekdays and four weekend days, objective monitors to record usage of the walking paths and the number of cars entering the park; and intercept surveys with adult park visitors. Cross-sectional surveys were conducted with local residents at T1 and T3. Results: The observational data showed a 176% increase in park visitor counts from T1 to T2 (Incidence Rate Ratio (IRR) = 2.76, 95% CI = 1.04-7.33), at the intervention park relative to the control park. The intervention park had a 119% increase in counts of visitors observed engaging in MVPA from T1 to T2 (IRR = 2.19, 95% CI = 1.14-4.20), and a 128% increase from T1 to T3 (IRR = 2.28, 95% CI = 1.19-4.38), relative to the control park. The relative increases in visitation at the intervention park play-scape compared with the control park playground were highly statistically significant from both T1 to T2 (IRR = 18.12, 95% CI = 5.51-59.59) and T1 to T3 (IRR = 15.05, 95% CI = 4.61-49.16). Similarly, there was a significant interaction between time and park with regard to the number of visitors observed engaging in MVPA in the play-scape/playground areas. The intercept survey data showed an increased odds of children's regular visitation to the intervention park at T2 (OR = 2.67, 95% CI = 1.08, 6.64), compared with T1, relative to the control park. The remaining results from the intercept survey, objective monitors and resident surveys showed no significant differences in visitation between the two parks. Conclusions: These findings confirm that a well-designed play-scape installation has the potential to increase park visitation and encourage visitors to be physically active. Trial Registration: Current controlled trial ISRCTN50745547 . abstract_id: PUBMED:30483200 Independent Mobility and Social Affordances of Places for Urban Neighborhoods: A Youth-Friendly Perspective. Meaning of place is usually approached as slow social cognitive construction. However, grounded on the theory of affordances, it may also stem from direct perception-action processes, which enable the formation of immediate perceived functional, social or symbolic meaning of place (Raymond et al., 2017b). In the present study, affordances of places, which are perceived by a specific perceiver in a specific place, were mapped using a web-map survey. Each place offers opportunities for interaction, behavior, use, feeling or meaning, which is directly perceived and actualized there. This paper aims at identifying the degree of youth-friendliness of urban neighborhoods using the hypothetical intertwined model of independent mobility and actualized affordances (Kyttä, 2004) combined with place use and meaning (Broberg et al., 2013a). SoftGISchildren methodology (Broberg et al., 2013a) was adopted to carry out a cross-sectional research, involving 145 sixth to ninth graders of the Great Lisbon area. SoftGIS integrates Public Participation Geographic Information Systems (PPGIS), enabling collection and place mapping of daily subjective environmental experiences in the physical environment, consequently allowing for individuals to be actively engaged in public processes of participation (Brown and Kyttä, 2014). A place based web survey called "Ideal City: a game of graphic imagination" was adopted. Participants were asked to map their home place, select and mark social, functional, leisure and emotional place transactions, and report actual and ideal mobility to these places. Findings on mobility showed that shorter distances to meaningful affordances of places promote active and independent travel; ideally, youths would like to be more frequently active and more frequently autonomous. As for meaningful places, a total of 1632 affordances were localized, with a higher number on social category. Neighborhood area (500 m around home place) was assessed as youth-friendly, where active and independent travel occurred more frequently, and social affordances were the most expressive type. Relational and affectional experience in the neighborhood places was meaningful for youth. Municipalities should consider these features when planning, designing and managing residential areas aiming for the well-being and health of young citizens; and include youths as specialists of space (spatialists) in planning participatory processes (PPGIS). abstract_id: PUBMED:35751247 Assessing the validity of mobile device data for estimating visitor demographics and visitation patterns in Yellowstone National Park. Monitoring visitor demographics and temporal visitation patterns can help national park managers understand their visitors and allocate resources more effectively. Traditional approaches, such as visitor surveys or vehicle counts, are limited by time, space, labor, and financial resources. More recently, mobile device data have been adopted for monitoring visitors in park-related or tourism research. However, few studies validated mobile device data with traditional visitor surveys or count data. Combining mobile device data with the American Community Survey (ACS), this study assessed mobile device data's validity in a national park context with three approaches: Points of Interest (POIs), visitor demographics, and temporal visitation patterns. The results revealed that only half of the POIs inside Yellowstone National Park are valid. Compared to traditional visitor surveys, mobile device data are limited due to platform bias and the exclusion of international visitors, resulting in discrepancies in visitor demographics, such as education and income levels. Conversely, mobile device data have strong correlations with count data regarding monthly and daily visitation patterns. The results suggest that with careful consideration, mobile device data can serve as an additional and complementary source of information to traditional survey data for understanding visitor demographics and temporal visitation patterns. abstract_id: PUBMED:33204057 Impacts of COVID-19 pandemic on urban park visitation: a global analysis. The COVID-19 pandemic has resulted in over 33 million confirmed cases and over 1 million deaths globally, as of 1 October 2020. During the lockdown and restrictions placed on public activities and gatherings, green spaces have become one of the only sources of resilience amidst the coronavirus pandemic, in part because of their positive effects on psychological, physical and social cohesion and spiritual wellness. This study analyzes the impacts of COVID-19 and government response policies to the pandemic on park visitation at global, regional and national levels and assesses the importance of parks during this global pandemic. The data we collected primarily from Google's Community Mobility Reports and the Oxford Coronavirus Government Response Tracker. The results for most countries included in the analysis show that park visitation has increased since February 16th, 2020 compared to visitor numbers prior to the COVID-19 pandemic. Restrictions on social gathering, movement, and the closure of workplace and indoor recreational places, are correlated with more visits to parks. Stay-at-home restrictions and government stringency index are negatively associated with park visits at a global scale. Demand from residents for parks and outdoor green spaces has increased since the outbreak began, and highlights the important role and benefits provided by parks, especially urban and community parks, under the COVID-19 pandemic. We provide recommendations for park managers and other decision-makers in terms of park management and planning during health crises, as well as for park design and development. In particular, parks could be utilized during pandemics to increase the physical and mental health and social well-being of individuals. abstract_id: PUBMED:35937569 Transnational youth mobility: new categories for migrant youth research. Large-scale research on migrant youth categorises youth along two lines: ethnicity and generation. Yet insights from smaller-scale qualitative studies indicate that it is important to experiment with categories based on mobility. While these studies have shown that young people's mobility affects their identities, educational resilience, sense of belonging and sense of self, findings have not led to new thinking about categories used in large-scale migrant youth research. Given this lacuna, we investigate young people's mobility, understood here as long or short trips to countries other than where they reside, based on a large-scale survey in three European countries (N = 2019). We find that travels are common amongst secondary school pupils of both migrant and non-migrant background and that youth with a migration background primarily travel to their or their parents' 'home' country. While lower socio-economic status is associated with less frequent travel for the general population, it is not linked to the frequency of travel of youth with a migration background. In today's globalised world, where there are important distinctions between those who can travel and those who cannot, our findings call for putting the mobility of young people at the heart of analytical categories. abstract_id: PUBMED:31082755 Bringing forecasting into the future: Using Google to predict visitation in U.S. national parks. In recent years, visitation to U.S. National Parks has been increasing, with the majority of this increase occurring in a subset of parks. As a result, managers in these parks must respond quickly to increasing visitor-related challenges. Improved visitation forecasting would allow managers to more proactively plan for such increases. In this study, we leverage internet search data that is freely available through Google Trends to create a forecasting model. We compare this Google Trends model to a traditional autoregressive forecasting model. Overall, our Google Trends model accurately predicted 97% of the total visitation variation to all parks one year in advance from 2013 to 2017 and outperformed the autoregressive model by all metrics. While our Google Trends model performs better overall, this was not the case for each park unit individually; the accuracy of this model varied significantly from park to park. We hypothesized that park attributes related to trip planning would correlate with the accuracy of our Google Trends model, but none of the variables tested produced overly compelling results. Future research can continue exploring the utility of Google Trends to forecast visitor use in protected areas, or use methods demonstrated in this paper to explore alternative data sources to improve visitation forecasting in U.S. National Parks. abstract_id: PUBMED:34246081 Important park features for encouraging park visitation, physical activity and social interaction among adolescents: A conjoint analysis. Little is known about what park features adolescents perceive as most important for encouraging them to visit and be active and social in parks. This study examined the relative importance of park features for encouraging these behaviours among adolescents and explored differences according to gender and frequency of visitation. Adolescents were recruited from secondary schools in diverse areas of Melbourne, Australia. In Phase 1 (2018-2019), participants [n = 222; 14.0 years (SD 1.1); 46% female] rated photographs of 40 park features in an online survey, indicating the degree to which each feature encouraged them to visit, and be active and social in the park. In Phase 2 (2019-2020), the top ten-rated features from Phase 1 for each outcome (overall and by gender and frequency of visitation) were presented in Adaptive Choice-Based Conjoint tasks to another sample of participants [n = 244; 14.7 years (SD 1.3); 46% female]. The most important feature for encouraging visitation was large swings; for physical activity was sports courts; and for social interaction was a café. Grassy open space was important for all behaviours, and preferred park features varied among sub-groups. The results highlight which park features park planners and designers should prioritise to support active and social park use among adolescents. Answer: Yes, independent mobility and territorial range are associated with park visitation among youth. A study examining park visitation and travel mode to parks among youth living in disadvantaged areas of Victoria, Australia, found that those who regularly walked alone to parks or walked or cycled with friends to parks were significantly more likely to visit a park at least once per week compared to others (PUBMED:24909862). This suggests that providing opportunities for youth aged 8-16 years to visit local parks independent of an adult may be important. Additionally, the study "Roaming the Neighbourhood" found that independent mobility (IM) parenting practices and parental perceived environment influenced children's territorial range, which is a measure of actual independent mobility (PUBMED:31466293). The study concluded that interventions to increase children's territorial range should focus on changing parents' attitudes and behaviors to grant more freedom to their children.
Instruction: Does cardiac denervation affect the short-term blood pressure variability in humans? Abstracts: abstract_id: PUBMED:7706700 Does cardiac denervation affect the short-term blood pressure variability in humans? Objective: To explore the repercussion of cardiac denervation on the short-term blood pressure variability in humans, in order to assess the extent to which the variability of blood pressure is linked to the variability of heart rate. Methods: Beat-to-beat blood pressure and RR interval time were recorded in 16 heart-transplanted patients and were compared with those of 10 healthy control subjects in the resting supine, sitting and standing positions. Blood pressure and RR interval variabilities were assessed by spectral analysis. Results: The total blood pressure power and the sitting very low-frequency, low-frequency, low-frequency and high-frequency blood pressure variability were similar in the heart-transplanted patients and in the controls, despite a marked reduction in the RR interval variability in the heart-transplanted patients. However, the heart-transplanted patients had lower standing low-frequency blood pressure variability than the control subjects. Moreover, very low-frequency and low-frequency RR interval variabilities reappeared in the long-term heart-transplanted patients but not in the short-term heart-transplanted patients (range of time after transplantation 53-124 and 3-25 months, respectively). Conclusions: Short-term RR interval fluctuations are not mandatory for the maintenance of normal blood pressure variability in the supine and sitting positions, but may contribute to the increase in the low-frequency blood pressure variability which occurs normally in the standing position. Moreover, the long-term heart-transplanted patients had increased RR interval variability, which may have been caused by the reappearance of limited autonomic cardiac modulation. However, this increased RR interval variability did not affect the corresponding blood pressure variability. abstract_id: PUBMED:28445207 Effect of renal sympathetic denervation on short-term blood pressure variability in resistant hypertension: a meta-analysis. Background/objectives: Short-term blood pressure variability (BPV) is affected by multiple factors including the sympathetic nervous system drive. Regarding the latter, the novel interventional technology of renal denervation (RDN), by modulating the sympathetic system activation, could have a beneficial impact on BPV. The aim of the current study is to review and meta-analyze the available evidence on the effect of RDN on short-term BPV. Methods: We searched Medline/PubMed database until October 2016 for studies with eligible content. We performed random-effect meta-analyses for 12 outcomes of interest: the standard deviation (SD) of SBP (24 h, daytime and night-time) and DBP (24 h, daytime and night-time), the weighted SD of SBP and DBP across 24 h, the coefficient of variation of SBP and DBP across 24 h and the average real variability of SBP and DBP across 24 h. Results: RDN reduced the SD of SBP across 24 h [mean change: -1.212 (95% confidence intervals (CIs): -2.354/-0.071), P = 0.037] and decreased the SD of systolic daytime BP [mean difference: -1.617 (95% CIs: -3.21/-0.026), P = 0.046] and diastolic daytime BP (marginally) [mean difference: -2.605 (95% CIs: -5.21/-0.003), P = 0.05]. The effect of RDN in reducing SD of SBP across 24 h or DBP across daytime was not influenced by absolute or relative reduction in SBP and DBP indices. (P &gt; 0.1 for all). Conclusion: Catheter-based RDN in resistant hypertensive patients can favorably affect short-term BPV, independent of the level of BP reduction. Further investigation of the effect of RDN on BPV is needed with large randomized trials. abstract_id: PUBMED:19875896 Large blood pressure variability and hypertensive cardiac remodeling--role of cardiac inflammation. An increase in short-term blood pressure (BP) variability is a characteristic feature of hypertensive patients, especially in elderly patients. There is increasing evidence that large BP variability aggravates hypertensive target organ damage and is an independent risk factor for the cardiovascular events in elderly hypertensive patients. However, little is known about the underlying mechanism. We have created a rat model of a combination of hypertension and large BP variability by performing sinoaortic denervation (SAD) in spontaneously hypertensive rats (SHRs). SAD aggravates left ventricular (LV)/myocyte hypertrophy and myocardial fibrosis to a greater extent and impairs LV systolic function without changing mean BP in SHR. SAD upregulates cardiac monocyte chemoattractant protein-1 and transforming growth factor-beta, and induces macrophage infiltration. Cardiac angiotensinogen expression is increased and the angiotensin II type 1 receptor is activated by SAD. A subdepressor dose of angiotensin receptor blocker abolishes SAD-induced inflammatory changes and cardiac remodeling and subsequently prevents systolic dysfunction in SHR+SAD. Accordingly, it is suggested that cardiac inflammation via activation of the cardiac angiotensin II system would play a role in the aggravation of cardiac remodeling and dysfunction in hypertensives with large BP variability. abstract_id: PUBMED:25463375 Impact of multi-electrode renal sympathetic denervation on short-term blood pressure variability in patients with drug-resistant hypertension. Insights from the EnligHTN I study. Background: Transluminal renal sympathetic denervation (RDN) has been shown to reduce blood pressure (BP) in patients with treatment-resistant hypertension. Methods: We assessed the effect of multi-electrode RDN on short-term BP variability indexes in resistant hypertensives. Thirty-one patients with drug-resistant uncontrolled hypertension, participants in the EnligHTN I study, underwent ambulatory BP measurements at baseline and 6months after RDN using the EnligHTN ablation catheter (St. Jude Medical). Twelve resistant hypertensives matched for office BP served as control group. Results: At 6months post-RDN, office BP and 24-hour BP were reduced by 25.6/10.3mmHg and by 10.2/6mmHg (p&lt;0.001 for all cases), respectively. No significant changes were observed 6months post-RDN in standard short-term BP variability indexes including 24-hour systolic and diastolic average real variability. The rates of systolic and diastolic 24-hour BP variation were decreased 6months after RDN, (from 0.40/0.30 to 0.34/0.24, p=0.030/0.006, respectively), especially in the responders group (n=23, 74.2%). No significant differences in BP and BP variability parameters in the control group were detected. ROC analysis revealed an area under the curve for prediction of response to RDN by systolic time rate of 66.8% (95% CI: 46.7% to 87%; p=0.16) and by diastolic time rate of 76.1% (95% CI: 58.2% to 93.9%; p=0.030). Conclusions: Although standard BP variability indexes remained unchanged, the rate of systolic and diastolic BP variation was significantly decreased 6months after RDN in patients with drug-resistant hypertension. These novel indexes might also be useful as predictors of response. abstract_id: PUBMED:4006233 Mechanisms of blood pressure variability in man. This paper will touch on three points derived from the studies our group has performed by means of continuous 24 hour blood pressure recording in humans. The first point will concern a description of the factors involved in the production of spontaneous blood pressure variability and in particular of our evidence that the magnitude of this phenomenon depends on arterial baroreflexes but also, and perhaps to a larger extent, on central modulation of the cardiovascular system. The second point will deal with the differences in blood pressure variability we have observed in subjects with normal and high blood pressure. Finally, the third point will briefly address the changes in blood pressure variability that take place with ageing. abstract_id: PUBMED:1442131 Power spectral analysis of heart rate and blood pressure variability in anaesthetized dogs. Short-term oscillation of heart rate and blood pressure are mainly regulated by the automatic nervous system. It has been proposed that non-neural factors, such as changes in intrathoracic pressure, can strongly modulate this rhythmicity. Our aim was to evaluate the effect of changing intrathoracic pressure and central autonomic nervous activity on heart rate and blood pressure variability. Evaluation was performed by using spectral analysis techniques with autoregressive modelling. The variability in heart rate and blood pressure remained in animals with open chest or paralysed respiratory muscles. After vagotomy, the variability in heart rate decreased, but not that of blood pressure. Total spinal anaesthesia elicited a decrease in the variability in blood pressure. The pharmacological blockade of alpha- and beta-receptors further decreased both variabilities. It was concluded that in anaesthetized dogs heart rate and blood pressure variability are mainly of central origin and non-neural factors have only minor effect on these central rhythms. High (&gt; 0.15 Hz), medium (0.07-0.15 Hz) and, obviously low (0.00-0.07 Hz) frequency variations in heart rate are mostly mediated vagally. In blood pressure, medium and obviously low frequency variations are modulated by sympathetic nervous system, whereas high frequency variations are secondary to the heart rate variation. abstract_id: PUBMED:22709469 Circadian rhythm of blood pressure reflects the severity of cardiac impairment in familial amyloid polyneuropathy. Background: Cardiac amyloidosis due to familial amyloid polyneuropathy (FAP) includes restrictive cardiomyopathy, thickened cardiac walls, conduction disorders and cardiac denervation. Impaired blood pressure variability has been documented in FAP related to the Val30Met mutation. Aims: To document blood pressure variability in FAP patients with various mutation types and its relationship to the severity of cardiac involvement. Methods: Blood pressure variability was analysed in 49 consecutive FAP patients and was compared with a matched control population. Cardiac evaluation included echocardiography, right heart catheterization, electrophysiological study, Holter electrocardiogram and metaiodobenzylguanidine (MIBG) scintigraphy. Results: A non-dipping pattern was found in 80% of FAP patients and in 35% of control patients (P&lt;0.0001); this was due to a significantly lower diurnal blood pressure in FAP patients (FAP group, 113 ± 21 mmHg; control group, 124 ± 8 mmHg; P&lt;0.0001), whereas nocturnal blood pressures were similar. Among FAP patients, a non-dipping pattern was significantly associated with haemodynamic involvement, cardiac thickening or conduction disorders. These associations did not depend on the average blood pressure levels. Impaired blood pressure variability was more frequent and more pronounced in patients with multiple criteria for severe cardiac amyloidosis. Conclusion: Low blood pressure variability is common in cardiac amyloidosis due to FAP. A non-dipping pattern was more frequently observed in FAP patients with haemodynamic impairment, cardiac thickening or conduction disorders. It is suggested that impairment of circadian rhythm of blood pressure reflects the severity of cardiac amyloidosis due to FAP. abstract_id: PUBMED:25544288 Molecular mechanism of aggravation of hypertensive organ damages by short-term blood pressure variability. There is increasing evidence that not only the elevation of systolic and diastolic blood pressure (BP) but also the increase in BP variability (or fluctuation) are associated with hypertensive organ damages and the morbidity and mortality of cerebrovascular and cardiovascular events. However, the molecular mechanism whereby the increase in BP variability aggravates hypertensive organ damages remains unknown. Thus, we created a rat chronic model of a combination of hypertension and large BP variability by performing bilateral sino-aortic denervation in spontaneously hypertensive rat. A series of our studies using this model revealed that large BP variability induces chronic myocardial inflammation by activating local angiotensin II and mineralocorticoid receptor systems and thereby aggravates cardiac hypertrophy and myocardial fibrosis, leading to systolic dysfunction, in hypertensive hearts. In addition, large BP variability induces the aggravation of arteriolosclerotic changes and ischemic cortical fibrosis in hypertensive kidney via local angiotensin II system. abstract_id: PUBMED:11901044 Long-term effects of carotid sinus denervation on arterial blood pressure in humans. Background: After experimental carotid sinus denervation in animals, blood pressure (BP) level and variability increase markedly but normalize to preoperative levels within 10 to 14 days. We investigated the course of arterial BP level and variability after bilateral denervation of the carotid sinus baroreceptors in humans. Methods And Results: We studied 4 women (age 41 to 63 years) who were referred for evaluation of arterial baroreflex function because of clinical suspicion of carotid sinus denervation attributable to bilateral carotid body tumor resection. The course of BP level and variability was assessed from repeated office and 24-hour ambulatory measurements (Spacelabs/Portapres) during 1 to 10 years of (retrospective) follow-up. Rapid cardiovascular reflex adjustments to active standing and Valsalva's maneuver were assessed. Office BP level increased from 132/86 mm Hg (range, 118 to 148/80 to 92 mm Hg) before bilateral surgery to 160/105 mm Hg (range, 143 to 194/90 to 116 mm Hg) 1 to 10 years after surgery. During continuous 24-hour noninvasive BP recording (Portapres), a marked BP variability was apparent in all 4 patients. Initial symptomatic hypotension on change to the upright posture and abnormal responses to Valsalva's maneuver were observed. Conclusions: Acute carotid sinus denervation, as a result of bilateral carotid body tumor resection, has a long-term effect on the level, variability, and rapid reflex control of arterial BP. Therefore, in contrast to earlier experimental observations, the compensatory ability of the baroreceptor areas outside the carotid sinus seems to be of limited importance in the regulation of BP in humans. abstract_id: PUBMED:23364340 Acute effects of third generation β-blockers on short-term and beat-to-beat blood pressure variability in sinoaortic-denervated rats. An increase in blood pressure variability (BPV) contributes to the development of target organ damage associated with hypertension. Treatment with conventional β-blockers, such as atenolol, has been associated with an increase in BPV; however, the extrapolation of these results to third generation β-blockers with pleiotropic effects seems to be inappropriate. The cardiovascular effects of third generation β-blockers, carvedilol and nebivolol, were assessed in sinoaortic-denervated rats (SAD) and compared with the second generation β-blocker atenolol and the calcium channel blocker verapamil, with a special focus on short-term BPV. Male SAD rats were acutely treated with carvedilol, nebivolol, atenolol or verapamil at two different doses, and the effects on blood pressure and BPV were recorded. Short-term BPV was assessed by the s.d. of BP recordings. Beat-to-beat BPV was studied using spectral analysis to assess the vascular sympatholytic activity of carvedilol and nebivolol by estimating the effects of these drugs on the ratio of low frequency (LF) to high frequency (HF) BPV (LF/HF ratio). Nebivolol, carvedilol and the calcium channel blocker verapamil significantly attenuated short-term BPV at both doses in SAD animals, and there were no differences between the drugs. Conversely, atenolol did not modify baseline s.d. values at either dose. Carvedilol and nebivolol significantly reduced the LF/HF ratio in SAD rats compared with the effects of atenolol and verapamil, suggesting the ability of the third generation β-blockers to reduce vascular sympathetic activity. In conclusion, third generation β-blockers induce a marked reduction in short-term BPV in SAD rats compared to atenolol. Moreover, the ability of carvedilol and nebivolol to reduce short-term BPV in SAD rats is equivalent to that of verapamil, suggesting that these β-blockers may have an additional beneficial effect through their control of short-term variability to a similar extent to calcium channel blockers. Answer: Cardiac denervation does affect short-term blood pressure variability in humans, but the extent of this effect can vary depending on the position of the body and the time elapsed since the denervation procedure. A study on heart-transplanted patients, who effectively underwent cardiac denervation, showed that while the total blood pressure power and variability in the supine and sitting positions were similar to those of healthy controls, there was a reduction in standing low-frequency blood pressure variability in the heart-transplanted patients. This suggests that short-term RR interval fluctuations are not essential for maintaining normal blood pressure variability when supine or sitting, but they may contribute to the increase in low-frequency blood pressure variability normally seen in the standing position. Interestingly, long-term heart-transplanted patients showed increased RR interval variability, which could indicate some reestablishment of autonomic cardiac modulation, but this did not affect corresponding blood pressure variability (PUBMED:7706700). In the context of renal sympathetic denervation, a procedure that indirectly affects cardiac function by modulating sympathetic nervous system activation, a meta-analysis found that this intervention can favorably affect short-term blood pressure variability in patients with resistant hypertension, independent of the level of blood pressure reduction (PUBMED:28445207). Furthermore, studies on patients with familial amyloid polyneuropathy, a condition that includes cardiac denervation, found that impaired blood pressure variability was common and that a non-dipping pattern of blood pressure was more frequently observed in patients with severe cardiac amyloidosis, suggesting that cardiac denervation can indeed affect blood pressure variability (PUBMED:22709469). In summary, cardiac denervation can influence short-term blood pressure variability in humans, with the effects being more pronounced in certain body positions and potentially changing over time as some autonomic function may recover (PUBMED:7706700). Additionally, interventions that affect the sympathetic nervous system, such as renal sympathetic denervation, can also modulate short-term blood pressure variability (PUBMED:28445207).
Instruction: Psychosocial risks for low back pain: are these related to work? Abstracts: abstract_id: PUBMED:22973487 Evaluation of Work-related Psychosocial and Ergonomics Factors in Relation to Low Back Discomfort in Emergency Unit Nurses. Background And Aim: High prevalence of low back pain is one of the most common problems among nurses. The aim of this study was to evaluate the relation of the intensity of low back discomfort to two low back pain contributor factors (Ergonomics risk factors and psychosocial factors). Methods: This cross-sectional survey was conducted on 120 emergency unit nurses in Esfahan. Job content, ergonomics hazards and nordic questionnaire were used in that order for daily assessment of Psychosocial and Ergonomics factors and the intensity of low back discomfort. Nurses were questioned during a 5-week period, at the end of each shift work. The final results were analyzed with SPSS software18/PASW by using Spearman, Mann-Whitney and Kolmogorov-Smirnove test. Results: There was a significant relationship between work demand, job content, social support and intensity of low back discomfort (P value &lt;0.05). But, there was not any link between intensity of low back discomfort and job control. Also, there was significant relationship between intensity of low back discomfort and ergonomics risk factors. Conclusion: This study showed an indirect relationship between the intensity of low back discomfort and social support. This study also confirmed a direct relationship between the intensity of low back discomfort and work demand, job content, ergonomics factors (Awkward Postures (rotating and bending), manual patient handling and repetitiveness, standing continuously more than 30 min). So, to decrease work related low back discomfort, psychosocial factors should be attended in addition to ergonomics factors. abstract_id: PUBMED:36463146 Psychosocial work characteristics and low back pain in daycare (nursery) workers in Japan: a prospective cohort study. Background: Low back pain (LBP) is one of the most common musculoskeletal problems affecting daycare (nursery) workers. We aimed to identify the psychosocial factors influencing LBP in daycare workers. Methods: We conducted a prospective cohort study with a one-year observation period. The baseline sample was a convenience sample of 444 daycare workers from 34 daycare facilities in Nagoya, Japan, and its suburbs. All the data were collected through a questionnaire survey. The question "Where are you currently feeling LBP?" was used to determine whether the subjects suffered from LBP. We examined the prospective relationships of the psychosocial work characteristics, i.e., high job strain, low social support, effort-reward imbalance, and overcommitment, at baseline and LBP after one year. We used multiple logistic regression analyses to calculate the odds ratios of psychosocial work characteristics for the persistence and onset of LBP, adjusted for age, sex, body mass index, smoking, employment status, occupation, and working schedule. Results: At baseline, 270 (60.8%) subjects suffered from LBP. Of 208 who also gave information on LBP one year later, 176 (84.6%) suffered from the persistence of LBP. Low social support at baseline was significantly related to persistent LBP one year later. The incidence of persistent LBP was 89.9% and 80.0% among those with and without low social support at baseline, respectively. The adjusted odds ratio (95% confidence interval) of low social support at baseline for the persistence of LBP was 2.43 (1.01-5.87). Of 150 who were without LBP at baseline and provided information on LBP one year later, 45 (30.0%) suffered from the onset of LBP. None of the psychosocial work characteristics showed significant relationships with the onset of LBP one year later. Conclusion: Low social support was related to the persistence, but not to the onset of LBP in a prospective cohort analysis among daycare workers in Japan. High job strain, ERI, or overcommitment did not show a significant prospective effect on LBP. abstract_id: PUBMED:28524238 Psychosocial work factors and low back pain in taxi drivers. Introduction: Taxi drivers are at high risk for low back pain (LBP). Aim: Identify the association between psychosocial-work factors (Job strain, Iso-strain, effort-reward imbalance [ERI], unfairness, and mental exertion) and LBP in taxi drivers. Methods: A cross-sectional study was done with 129 taxi drivers. Results: Approximately 63% reported LBP in the prior 12 months. Chi square or t-test analyses identified the associations between demographic, work, health, and psychosocial work factors, and self-report of LBP in the prior 12 months. Depression, perceived physical exertion, dispatcher and manager support, unfair treatment at work, and unfair treatment due to nationality were significantly associated with LBP in bivariate analyses. Multivariate logistic regression was done to identify the predictors of LBP. High dispatcher support remained the sole significant predictor for lower prevalence of LBP (OR = 0.66, P = 0.017). Conclusion: Greater understanding of psychosocial work factors may aid in developing interventions to prevent LBP in taxi drivers. abstract_id: PUBMED:21629595 Psychosocial treatment techniques to augment the impact of physiotherapy interventions for low back pain. Purpose: The present study examined the profile of physical and psychosocial changes that occur in physiotherapy intervention when patients also participate in a psychosocial intervention. The psychosocial intervention, delivered by physiotherapists, was designed to target catastrophic thinking, fear of pain, perceived disability, and depression. Methods: The study sample consisted of 48 individuals referred for the rehabilitation treatment of disabling back pain. Half the sample was enrolled in a physiotherapy intervention only; the other half was enrolled in a psychosocial intervention in addition to receiving a physiotherapy intervention. Results: At post-treatment, the two treatment groups did not differ significantly on measures of pain severity, physical function, or self-reported disability. Patients who participated in the psychosocial intervention in addition to physiotherapy showed significantly greater reductions in pain catastrophizing, fear of movement, and depression than patients who received only the physiotherapy intervention. Reductions in psychosocial risk factors contributed to reduced use of the health care system, reduced use of pain medication, and improved return-to-work outcomes. Conclusions: The findings of the present study suggest that a psychosocial intervention provided by physiotherapists can lead to meaningful reductions in psychosocial risk factors for pain and disability and may contribute to more positive rehabilitation outcomes. abstract_id: PUBMED:34033963 Lack of Consensus Across Clinical Guidelines Regarding the Role of Psychosocial Factors Within Low Back Pain Care: A Systematic Review. It is widely accepted that psychosocial prognostic factors should be addressed by clinicians in their assessment and management of patient suffering from low back pain (LBP). On the other hand, an overview is missing how these factors are addressed in clinical LBP guidelines. Therefore, our objective was to summarize and compare recommendations regarding the assessment and management of psychosocial prognostic factors for LBP chronicity, as reported in clinical LBP guidelines. We performed a systematic search of clinical LBP guidelines (PROSPERO registration number 154730). This search consisted of a combination of previously published systematic review articles and a new systematic search in medical or guideline-related databases. From the included guidelines, we extracted recommendations regarding the assessment and management of LBP which addressed psychosocial prognostic factors (ie, psychological factors ["yellow flags"], perceptions about the relationship between work and health, ["blue flags"], system or contextual obstacles ["black flags") and psychiatric symptoms ["orange flags"]). In addition, we evaluated the level or quality of evidence of these recommendations. In total, we included 15 guidelines. Psychosocial prognostic factors were addressed in 13 of 15 guidelines regarding their assessment and in 14 of 15 guidelines regarding their management. Recommendations addressing psychosocial factors almost exclusively concerned "yellow" or "black flags," and varied widely across guidelines. The supporting evidence was generally of very low quality. We conclude that in general, clinical LBP guidelines do not provide clinicians with clear instructions about how to incorporate psychosocial factors in LBP care and should be optimized in this respect. More specifically, clinical guidelines vary widely in whether and how they address psychosocial factors, and recommendations regarding these factors generally require better evidence support. This emphasizes a need for a stronger evidence-base underlying the role of psychosocial risk factors within LBP care, and a need for uniformity in methodology and terminology across guidelines. PERSPECTIVE: This systematic review summarized clinical guidelines on low back pain (LBP) on how they addressed the identification and management of psychosocial factors. This review revealed a large amount of variety across guidelines in whether and how psychosocial factors were addressed. Moreover, recommendations generally lacked details and were based on low quality evidence. abstract_id: PUBMED:25716659 Associations among pain, disability and psychosocial factors and the predictive value of expectations on returning to work in patients who undergo lumbar disc surgery. Purpose: The aim of this study was to describe the associations among pain, disability and psychosocial factors preoperatively as well as 3 and 24 months later for patients who undergo first time discectomy and to analyse the predictive value of psychosocial factors on the outcome 24 months after surgery. Methods: Fifty-nine patients, 41 % women, with a mean age of 40 years and without comorbidities were included, of whom 56 responded to the 24-month follow-up; at that point, they were divided into patients with complaints (C, n = 36) and patients without complaints (NC, n = 20). Correlations among the pain intensity, disability and psychosocial factors were analysed preoperatively, 3 and 24 months after discectomy, and regression analyses of psychosocial factors on the outcome at 24 months were performed. Results: Psychosocial variables were weakly correlated with the pain intensity and disability preoperatively. High expectations on the return to work were predictive of both pain intensity (β = 8.0, p = 0.03) and disability (β = 9.1, p &lt; 0.001) at 24 months. Associations between psychosocial variables and outcome variables were strengthened at the 3-month follow-up in the C group, and this association remained 24 months after surgery. Fear of movement was most strongly correlated with leg pain intensity (r (s) 0.64, p &lt; 0.001) and the ability to decrease pain was the most correlated with disability (r (s) 0.78, p &lt; 0.001). Conclusion: Having high expectations on the return to work after surgery was the strongest predictor for a favourable outcome. Therefore, low preoperative expectations on return to work convey an important prognostic signal. abstract_id: PUBMED:31878915 Health, work and demographic factors associated with a lower risk of work disability and unemployment in employees with lower back, neck and shoulder pain. Background: Chronic musculoskeletal pain affects over 20% of the adult population and is one of the most common reasons for sick leave in Sweden. The aim of this study was to investigate which demographic, health and psychosocial work environment factors are of importance for a lower risk of future work disability and unemployment among workers with low back pain (LBP) and/or neck shoulder pain (NSP), and if familial factors influence these associations. Methods: All 5556 persons that reported having LBP and/or NSP in a web-based questionnaire study in 2004-2006 were included. They were followed up for work disability (sick leave &gt; 90 days or disability pension), and unemployment (&gt; 180 days in a year) until 31 December 2013. Hazard ratios (HR) with 95% confidence intervals were calculated using cox proportional hazard models of the whole sample, adjusting for covariates. In addition, co-twin analyses of outcome discordant twin pairs were conducted to assess the impact of familial confounding on the associations. Results: Being male, 19-28 years old, having higher education, only NSP, no history of depression or anxiety, good self-rated health, low job demands and high job control were associated with a lower risk of work disability (adjusted HR ranging between 0.29-0.85). No history of anxiety and depression and high job control was associated with a lower risk of unemployment (adjusted HR ranging from 0.53 and 0.67). Familial factors were found to affect the association between education and work disability, but none of the other associations investigated. Conclusions: Among those with LBP or NSP, good health in terms of mental- and self-rated health, few pain sites, as well as good psychosocial working conditions seem to indicate a lower risk for work disability. abstract_id: PUBMED:36420950 Psychosocial risk factors for low back pain in US workers: Data from the 2002-2018 quality of work life survey. Background: Examining workplace psychosocial risk factors for back pain becomes increasingly important because of the changing nature of work and rising healthcare costs. Some psychosocial risk factors for back pain, such as work and family imbalance, exposure to a hostile work environment, and job insecurity, are understudied for the working population in the United States. Methods: Data used in this study came from the Quality of Work Life Survey (QWL), a supplementary module of the General Social Survey conducted in the United States. Data from the 2002, 2006, 2010, 2014, and 2018 QWL surveys were used in these analyses, giving a total sample size of 6661. Five domains of workplace psychosocial risk factors for back pain were examined, including job strain, low social support, work-family imbalance, exposure to a hostile work environment (harassment and discrimination), and job insecurity. The adjusted odds ratio (aOR) of each psychosocial risk factor for back pain with 95% confidence intervals (CI) was estimated using a multivariable logistic regression model after controlling for job physical factors, occupation, and demographic and socioeconomic characteristics. Results: Significant associations were found between back pain and several psychosocial factors including job strain (aOR 1.19; CI 1.00,1.41), work-family imbalance (aOR,1.42; CI 1.22,1.64), harassment (aOR 1.40; CI 1.15,1.71), and discrimination (aOR 1.20 CI 1.00,1.44). Conclusion: This study contributes to the understanding of the relationship between a variety of workplace psychosocial factors and back pain. Our findings suggest directions in future longitudinal research to examine emerging workplace psychosocial factors for back pain. abstract_id: PUBMED:9383868 Back injury and work loss. Biomechanical and psychosocial influences. The exponential increase in occupational low back pain disability is a problem that is not being addressed adequately in clinical practice. The notion of achieving primary control through ergonomic intervention, based on biomechanics principles, has so far been unhelpful. The traditional secondary prevention strategies of rest and return to restricted work duties are seemingly suboptimal. Biomechanics/ergonomic considerations may be related to the first onset of low back pain, but there is little evidence that secondary control based solely on these principles will influence the risk of recurrence or progression to chronic disability. More promising in this respect are programs that take account of the psychosocial influences surrounding disability. Work organizational issues are clearly important, but so also is the behavior of clinicians. The balance of the available evidence suggests that clinicians generally should adopt a proactive approach to rehabilitation by recommending, whenever possible, early return to normal rather than restricted duties as well as complementary psychosocial advice if the issue of chronic disability is to be successfully tackled. abstract_id: PUBMED:28366960 Psychological detachment as moderator between psychosocial work conditions and low back pain development. Objectives: Recovery processes in leisure time influence the effect of psychosocial work factors on health issues. However, this function of recovery has been neglected in research regarding the influence of work-related risk factors on low back pain (LBP) development. The aim of this prospective study was to examine the function of psychological detachment - a relevant recovery experience - concerning the influence of psychosocial work factors on LBP development. A moderating function of detachment for the interplay of work factors and LBP was assumed. Material And Methods: Sixty pain-free administrative employees of German universities completed an online survey 3 times during a 6-month period. Generalized estimating equations were used to estimate risk-factors of LBP. Results: Analyses revealed an increased chance of LBP development for smokers and a decreasing chance when work resources were high. Detachment had no direct influence on LBP development, although it moderated the influence of work stressors and work resources on LBP. On the one hand, high detachment values seem to protect against an increased chance of LBP development when employees were confronted with high work stressors, while on the other hand high detachment values enhance the protective effect of high work resources. Conclusions: The results indicated a moderating role of detachment concerning the influence of psychosocial work factors on LBP development. Therefore, it is necessary to include recovery processes in future research regarding LBP development and consequently in LBP prevention concepts. Int J Occup Med Environ Health 2017;30(2):313-327. Answer: Yes, psychosocial risks for low back pain are related to work. Several studies have found associations between work-related psychosocial factors and the development or persistence of low back pain among different occupational groups. For instance, a study on emergency unit nurses found a significant relationship between work demand, job content, social support, and the intensity of low back discomfort (PUBMED:22973487). Similarly, a prospective cohort study among daycare workers in Japan identified low social support at baseline as significantly related to persistent low back pain one year later (PUBMED:36463146). Taxi drivers also showed an association between psychosocial work factors, such as dispatcher support and unfair treatment at work, and low back pain (PUBMED:28524238). Moreover, psychosocial treatment techniques have been shown to augment the impact of physiotherapy interventions for low back pain, suggesting that addressing psychosocial factors can lead to better rehabilitation outcomes (PUBMED:21629595). However, there is a lack of consensus across clinical guidelines regarding the role of psychosocial factors within low back pain care, indicating a need for more uniformity and evidence-based recommendations (PUBMED:34033963). Expectations regarding returning to work after lumbar disc surgery were also found to be predictive of pain intensity and disability outcomes, highlighting the importance of psychosocial factors in recovery (PUBMED:25716659). Additionally, demographic, health, and psychosocial work environment factors were found to be important for a lower risk of future work disability and unemployment among workers with low back pain (PUBMED:31878915). Furthermore, a study using data from the Quality of Work Life Survey in the United States found significant associations between back pain and several psychosocial factors, including job strain, work-family imbalance, harassment, and discrimination (PUBMED:36420950). Lastly, psychological detachment was identified as a moderator between psychosocial work conditions and low back pain development, suggesting that recovery processes in leisure time can influence the effect of work-related risk factors on low back pain (PUBMED:28366960). In summary, the evidence suggests that psychosocial risks for low back pain are indeed related to work, and addressing these factors may be important for preventing and managing low back pain in the workplace.
Instruction: Dynamic contrast-enhanced MR imaging in screening detected microcalcification lesions of the breast: is there any value? Abstracts: abstract_id: PUBMED:17063274 Dynamic contrast-enhanced MR imaging in screening detected microcalcification lesions of the breast: is there any value? Purpose: To prospectively evaluate whether dynamic contrast-enhanced magnetic resonance (MR) imaging findings can help predict the presence of malignancy when screening detected microcalcification lesions, and its contribution to patient management of stereotactic vacuum-assisted breast biopsy (SVAB). Materials And Methods: Dynamic contrast-enhanced breast MR imaging was performed when screening 100 detected microcalcification lesions not visualized by ultrasonography with 11-gauge SVAB. Definitive surgery was performed on all patients with the biopsy resulting in the diagnosis of breast cancer or atypical ductal hyperplasia (ADH). Positive predictive values (PPVs) and negative predictive values (NPVs) were calculated on the basis of a BI-RADS (Breast Imaging Reporting and Data System) category and the absence or presence of contrast uptake in the area of microcalcification. Results: The BI-RADS mammography category correlated with the diagnosis of breast cancer (ADH excluded): category 3 = 7% (4/55); category 4 = 48% (13/27); category 5 = 94% (17/18). After dynamic contrast-enhanced MR imaging, three of four malignancies with BI-RADS mammography category 3 were diagnosed as true positive. Therefore, the PPV of BI-RADS mammography category 3 with MR imaging was 1.8% (1/55). The PPV of contrast uptake of MR imaging was 86% (32/37), significantly higher than the 67% (30/45) PPV of BI-RADS mammography 4 and 5 (P = 0.033). The NPV of BI-RADS mammography 3 was 93% (51/55) versus 97% (61/63) NPV of MR imaging (P = 0.167). Conclusion: In the evaluation of screening detected microcalcification lesions, dynamic contrast-enhanced breast MR imaging provides additional information with high PPV and NPV, and may therefore offer an alternative to SVAB for women who do not want to undergo SVAB with equivocal findings following full diagnostic mammographic assessment, but breast MR imaging with imperfect PPV and NPV cannot replace SVAB. Clinical Relevance: Dynamic contrast-enhanced breast MR imaging can demonstrate malignant microcalcifications detected by screening mammography and can be recommended in the evaluation of equivocal microcalcifications prior to SVAB. abstract_id: PUBMED:30219186 MR Imaging of Joint Infection and Inflammation with Emphasis on Dynamic Contrast-Enhanced MR Imaging. Contrast-enhanced MR imaging (CE-MR imaging) is recommended for diagnosis and monitoring of infectious and most inflammatory joint diseases. CE-MR imaging clearly differentiates soft and bony tissue from fluid collections and infectious debris. To improve imaging information, a dynamic CE-MR imaging sequence (DCE-MR imaging) sequence can be applied using fast T1-weighted sequential image acquisition during contrast injection. Use of DCE-MR imaging allows robust extraction of quantitative information regarding blood flow and capillary permeability, especially when dedicated analysis methods and software are used to analyze contrast kinetics. This article describes principles of DCE-MR imaging for the assessment of infectious and inflammatory joint diseases. abstract_id: PUBMED:35169420 A case of pleomorphic adenoma of the breast on dynamic contrast-enhanced MR imaging. Pleomorphic adenoma commonly develops in the salivary gland, but rarely in the breast. The dynamic contrast-enhanced MR imaging findings of pleomorphic adenoma of the breast have not been well described. We report a 43-year-old woman with pleomorphic adenoma of the left breast. The imaging findings, including those on dynamic contrast-enhanced MR imaging, included an oval mass with a smooth margin, which consisted of solid and cystic components. The solid component was hypo-intense on T1-weighted imaging, hyper-intense on short tau inversion recovery imaging, with no apparent restricted diffusion, and had heterogeneous enhancement with dark internal septation and a fast/plateau dynamic contrast enhancement pattern. The cystic component was slightly hyper-intense on T1-weighted imaging, slightly hypo-intense on short tau inversion recovery imaging and had no apparent restricted diffusion or contrast enhancement. Together with its rarity, the similarities of imaging findings and the pathologic findings of pleomorphic adenoma of the breast to those of other tumors make accurate preoperative diagnosis difficult. Therefore, through this case report, awareness of pleomorphic adenoma of the breast on dynamic contrast-enhanced MR imaging will facilitate appropriate surgery and postoperative observation based on an accurate diagnosis. abstract_id: PUBMED:36359419 The Frequency and Causes of Not-Detected Breast Malignancy in Dynamic Contrast-Enhanced MRI. Breast MR is the most sensitive imaging modality, but there are cases of malignant tumors that are not detected in MR. This study evaluated the frequency and main causes of malignant breast lesions not detected in dynamic contrast-enhanced (DCE) MR. A total of 1707 cases of preoperative breast MR performed between 2020 and 2021 were included. Three radiologists individually reviewed the DCE MRs and found not-detected malignancy cases in the MRs. The final cases were decided through consensus. For the selected cases, images other than DCE MRIs, such as mammography, ultrasounds, diffusion-weighted MRs, and, if possible, contrast-enhanced chest CTs, were analyzed. In the final sample, 12 cases were not detected in DCE MR, and the frequency was 0.7% (12/1707). Six cases were not detected due to known non-enhancing histologic features. In four cases, tumors were located in the breast periphery and showed no enhancement in MR. In the remaining two cases, malignant lesions were not identified due to underlying marked levels of BPE. The frequency of not-detected malignancy in DCE MR is rare. Knowing the causes of each case and correlating it with other imaging modalities could be helpful in the diagnosis of breast malignancy in DCE MR. abstract_id: PUBMED:29128001 Dynamic Contrast-Enhanced MR Imaging in Head and Neck Cancer. Dynamic contrast-enhanced (DCE) MR imaging uses rapid sequential MR image acquisition before, during, and after intravenous contrast administration to elucidate information on the microvascular biologic function of tissues. The derived pharmacokinetic parameters provide useful information on tissue perfusion and permeability that may help to evaluate entities that otherwise appear similar by conventional imaging. When specifically applied to the evaluation of head and neck cancer, DCE-MR imaging may provide valuable information to help predict treatment response, discriminate between posttreatment changes and residual tumor, and discriminate between various head and neck neoplasms. abstract_id: PUBMED:37193163 Diagnostic efficacy of ultrasound elastography and dynamic contrast-enhanced MR in benign and malignant breast masses. Objective: To analyze the diagnostic efficacy of ultrasound elastography (UE) and dynamic contrast-enhanced MR in benign and malignant breast masses. Methods: From August 2016 to May 2019, the medical records of 98 patients with breast masses in the Zhuji Sixth People's Hospital were retrospectively analyzed, including 45 cases of benign tumor and 53 cases of malignancy diagnosed by pathology. All patients were examined by UE and dynamic contrast-enhanced MR imaging. The pathologic results were used as the gold standard, and the detection results of benign and malignant masses under different examinations were observed and compared with pathology to analyze the specificity and sensitivity. Results: The specificity and sensitivity of diagnosis by UE were 94.44% and 86.89% respectively. The specificity and sensitivity of diagnosis by dynamic contrast-enhanced MR imaging were 96.30% and 91.80%, respectively. The specificity and sensitivity of joint diagnosis were 98.36% and 90.74%, respectively. Conclusion: Joint diagnosis can improve the sensitivity in the diagnosis of benign and malignant breast masses. This improves the diagnostic value for breast tumors. abstract_id: PUBMED:29094174 Contrast-enhanced ultrasound of malignant liver lesions. Contrast-enhanced ultrasound (CEUS) is a safe, relatively inexpensive, and widely available imaging technique using dedicated imaging ultrasound sequences and FDA-approved contrast microbubbles that allow detection and characterization of malignant focal liver lesions with high diagnostic accuracy. CEUS provides dynamic real-time imaging with high spatial and temporal capability, allowing for unique contributions to the already established protocols for diagnosing focal liver lesions using CT and MR imaging. In patients with lesions indeterminate on CT and MRI, CEUS is a helpful problem-solving complementary tool that improves patient management. Furthermore, CEUS assists guidance of liver biopsies and local treatment. Variations of CEUS such as DCE-US and ultrasound molecular imaging are emerging for quantitative monitoring of treatment effects and possible earlier detection of cancer. In this review, basic principles of CEUS techniques and ultrasound contrast agents along with a description of the enhancement patterns of malignant liver lesions are summarized. Also, a discussion of the role of CEUS for treatment guidance and monitoring, intraoperative CEUS, and an outlook on emerging applications is provided. abstract_id: PUBMED:36339058 Alternative Evaluation of the Right Axillary Lymphatic Pathway by Using Dynamic Contrast-enhanced MR Lymphangiography. The lymphatic system plays an integral part in fluid homeostasis. Disturbances in lymphatic pathways are congenital, posttraumatic, or posttreatment related, such as after Fontan palliation. Lymphatic pathway evaluation is challenging because of the difficulty in introducing contrast material into the lymphatics. Intranodal, intramesenteric, and intrahepatic dynamic contrast-enhanced MR lymphangiography (DCMRL) offer better visualization of major lymphatic pathways. However, these techniques exclude pathways outside the central conduction system, preventing the visualization of abnormalities and, thus, administration of treatment. The authors describe alternative imaging of an axillary pathway via DCMRL in a patient with a symptomatic chylous effusion not previously assessed with current techniques. Keywords: Lymphatic, MR-Dynamic Contrast Enhanced, Pediatrics, Thorax, Pleura Supplemental material is available for this article. © RSNA, 2022. abstract_id: PUBMED:22229077 Dynamic contrast-enhanced MR imaging findings of bone metastasis in patients with prostate cancer. Aim: To evaluate the dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) findings of bone metastasis in prostate cancer patients. Methods: Sixteen men with a diagnosis of metastatic prostate cancer to bones were examined with DCE-MRI at 1.5 Tesla. The mean contrast agent concentration vs time curves for bone metastasis and normal bone were calculated and K(trans) and ve values were estimated and compared. Results: An early significant enhancement (wash-out: n = 6, plateau: n = 8 and persistent: n = 2) was detected in all bone metastases (n = 16). Bone metastasis from prostate cancer showed significant enhancement and high K(trans) and ve values compared to normal bone which does not enhance in the elderly population. The mean K(trans) was 0.101/min and 0.0051/min (P &lt; 0.001), the mean ve was 0.141 and 0.0038 (P &lt; 0.001), for bone metastases and normal bone, respectively. Conclusion: DCE-MRI and its quantitative perfusion parameters may have a role in improving the detection of skeletal metastasis in prostate cancer patients. abstract_id: PUBMED:33237017 Dynamic Glucose-Enhanced MR Imaging. Conventional medical imaging techniques use contrast agents that are chemically labeled, for example, iodine in the case of computed tomography, radioisotopes in the case of PET, or gadolinium in the case of MR imaging to create or enhance signal contrast and to visualize tissue compartments and features. Dynamic glucose-enhanced MR imaging represents a novel technique that uses natural, unlabeled d-glucose as a nontoxic biodegradable contrast agent in chemical exchange-sensitive MR imaging approaches. Answer: Dynamic contrast-enhanced MR imaging (DCE-MRI) has been evaluated for its utility in screening detected microcalcification lesions of the breast. A study aimed to determine whether DCE-MRI findings could predict the presence of malignancy in such lesions and its contribution to patient management of stereotactic vacuum-assisted breast biopsy (SVAB) (PUBMED:17063274). The study found that DCE-MRI provides additional information with high positive predictive values (PPVs) and negative predictive values (NPVs), which may offer an alternative to SVAB for women with equivocal findings following full diagnostic mammographic assessment. Specifically, the PPV of contrast uptake of MR imaging was 86%, significantly higher than the 67% PPV of BI-RADS mammography categories 4 and 5. However, the study concluded that while DCE-MRI can demonstrate malignant microcalcifications detected by screening mammography and can be recommended in the evaluation of equivocal microcalcifications prior to SVAB, it cannot replace SVAB due to its imperfect PPV and NPV (PUBMED:17063274). In summary, DCE-MRI has value in the evaluation of screening detected microcalcification lesions of the breast, particularly in providing additional information that may influence patient management decisions. However, it is not a replacement for SVAB, and its predictive values, while high, are not perfect.
Instruction: Does the completeness of revascularization affect early survival after coronary artery bypass grafting in elderly patients? Abstracts: abstract_id: PUBMED:11423284 Does the completeness of revascularization affect early survival after coronary artery bypass grafting in elderly patients? Objective: Usefulness and risks of incomplete versus complete revascularization are still matters of ongoing discussions. Because an increasing number of elderly patients are undergoing coronary artery bypass grafting (CABG), the question arises whether a less extensive surgical approach is more prudent than complete revascularization. Methods: Of 6531 patients undergoing isolated CABG, 859 were 75 and older at the time of operation. Mean age of the 859 patients was 77+/-2.7 years (median: 76 years); 65% were men. Follow-up enquiry by questionnaire was performed at the 180th postoperative day with a completeness of 95.6%. Assessment of the impact of incomplete revascularization utilized both multivariable analysis and propensity score matching to account for selection factors. Results: Incomplete revascularization was performed in 133 patients (16%). The most common reasons for incomplete revascularization were small vessels (55%) and massive calcification (32%). Mortality until 180 days after CABG was higher (n=32; 24%) after incomplete than after complete revascularization (n=105; 15%; P=0.005). By logistic multivariable regression, incomplete revascularization was identified as an independent risk factor for death (Odds ratio, 1.8; P=0.015). By time-related analysis, incomplete revascularization predominantly affected the early period after CABG (P=0.001). Aortic cross clamping time was only slightly shorter for the group with incomplete (59+/-27 min (median: 55 min) vs. 63+/-26 min (median: 58 min); P=0.1). Conclusions: Incomplete revascularization increases the early risk of death after CABG in patients aged 75 years and older. The potential compensating benefit of the shorter aortic cross clamping time does not outweigh the advantages of complete revascularization. Thus, in the era of high-volume interventional approaches and minimally invasive techniques, the advantages of complete revascularization need to be considered. abstract_id: PUBMED:11849851 The effect of completeness of revascularization on event-free survival at one year in the ARTS trial. Objectives: We sought to assess the relationship between completeness of revascularization and adverse events at one year in the ARTS (Arterial Revascularization Therapies Study) trial. Background: There is uncertainty to what extent degree of completeness of revascularization, using up-to-date techniques, influences medium-term outcome. Methods: After consensus between surgeon and cardiologist regarding the potential for equivalence in the completeness of revascularization, 1,205 patients with multivessel disease were randomly assigned to either bypass surgery or stent implantation. All baseline and procedural angiograms and surgical case-record forms were centrally assessed for completeness of revascularization. Results: Of 1,205 patients randomized, 1,172 underwent the assigned treatment. Complete data for review were available in 1,143 patients (97.5%). Complete revascularization was achieved in 84.1% of the surgically treated patients and 70.5% of the angioplasty patients (p &lt; 0.001). After one year, the stented angioplasty patients with incomplete revascularization showed a significantly lower event-free survival than stented patients with complete revascularization (i.e., freedom from death, myocardial infarction, cerebrovascular accident and repeat revascularization) (69.4% vs. 76.6%; p &lt; 0.05). This difference was due to a higher incidence of subsequent bypass procedures (10.0% vs. 2.0%; p &lt; 0.05). Conversely, at one year, bypass surgery patients with incomplete revascularization showed only a marginally lower event-free survival rate than those with complete revascularization (87.8% vs. 89.9%). Conclusions: Complete revascularization was more frequently accomplished by bypass surgery than by stent implantation. One year after bypass, there was no significant difference in event-free survival between surgically treated patients with complete revascularization and those with incomplete revascularization, but patients randomized to stenting with incomplete revascularization had a greater need for subsequent bypass surgery. abstract_id: PUBMED:11745063 Does the completeness of revascularization contribute to an improved early survival in patients up to 70 years of age? Objective: In the era of a renewal of incomplete revascularization approaches, the controversy reappears as to whether the approach for complete revascularization is of prognostic value. The clear advantage of complete revascularization in elderly patients has recently been published. However, for the younger patient group, there is no conclusive information available so far. The aim of our study was to investigate the effect of complete vs. incomplete revascularization in patients up to 70 years of age. Patients And Methods: 6531 patients underwent isolated CABG. 5003 of these patients were aged up to 70 years at the time of operation. Results: Incomplete revascularization was performed in 534 (10.7 %) patients. The most common reasons for incomplete revascularization were small vessels and massive calcification. The differences in mortality up to the 180(th) day after CABG are statistically significant. By Kaplan-Meier analysis, the time relationship between incomplete revascularization and death affects predominantly the very early period after CABG. By logistical regression, incomplete revascularization was found to be an independent risk factor for death after CABG. Conclusion: Incomplete revascularization affects the early outcome after CABG in patients up to 70 years of age as an independent risk factor for death. In view of recent approaches for primarily incomplete CABG, our results indicate the necessity to reconsider the advantages of complete revascularization. abstract_id: PUBMED:22480392 Completeness of revascularization and survival among octogenarians with triple-vessel disease. Background: We sought to determine the impact of the completeness of surgical revascularization among octogenarians with triple-vessel disease. Methods: Between 1992 and 2008, 476 consecutive patients aged 80 years or more who underwent primary isolated coronary artery bypass grafting (CABG) procedures were identified. Early and late survival were compared among patients who underwent complete revascularization (CR, n=391) and incomplete revascularization (IR, n=85). IR was present when 1 or more of the 3 main coronary arteries with 50% or greater stenosis that were identified preoperatively as a surgical target by the operating surgeon were not grafted. The mean follow-up was 5.4±3.0 years (maximum 15.3 years). Results: Baseline risk was similar between the 2 groups of patients. IR was more frequent in off-pump compared with on-pump CABG (34.9% versus 16.2%, respectively; p=0.002). The most common reason for IR was small or severely diseased arteries (87%). The incidence of postoperative myocardial infarction (MI) was similar in both groups (CR, 18.4% versus IR, 17.3%; p=0.81). In-hospital mortality was 7.2% among patients with CR and 4.7% among patients with IR (p=0.60). Three, 5-, and 8-year freedom from all-cause mortality among patients who underwent CR were 89.2%, 74.1%, and 54.3%, respectively, and were not significantly different from those patients who underwent IR (86.6%, 74.5%, and 49.4%, respectively) (p=0.40). Conclusions: In octogenarians with triple-vessel disease, a strategy of incomplete revascularization during CABG does not negatively impact early or long-term survival. abstract_id: PUBMED:30216551 The effect of completeness of revascularization during CABG with single versus multiple arterial grafts. Introduction: Incomplete coronary revascularization is associated with suboptimal outcomes. We investigated the long-term effects of Incomplete, Complete, and Supra-complete revascularization and whether these effects differed in the setting of single-arterial and multi-arterial coronary artery bypass graft (CABG). Methods: We analyzed 15-year mortality in 7157 CABG patients (64.1 ± 10.5 years; 30% women). All patients received a left internal thoracic artery to left anterior descending coronary artery graft with additional venous grafts only (single-arterial) or with at least one additional arterial graft (multi-arterial) and were grouped based on a completeness of revascularization index (CRI = number of grafts minus the number of diseased principal coronary arteries): Incomplete (CRI ≤ -1 [N = 320;4.5%]); Complete (CRI = 0 [N = 2882;40.3%]; reference group); and two Supra-complete categories (CRI = +1[N = 3050; 42.6%]; CRI ≥ + 2 [N = 905; 12.6%]). Risk-adjusted mortality hazard ratios (AHR) were calculated using comprehensive propensity score adjustment by Cox regression. Results: Incomplete revascularization was rare (4.5%) but associated with increased mortality in all patients (AHR [95% confidence interval] = 1.53 [1.29-1.80]), those undergoing single-arterial CABG (AHR = 1.27 [1.04-1.54]) and multi-arterial CABG (AHR = 2.18 [1.60-2.99]), as well as in patients with 3-Vessel (AHR = 1.37 [1.16-1.62]) and, to a lesser degree, with 2-Vessel (AHR = 1.67 [0.53-5.23]) coronary disease. Supra-complete revascularization was generally associated with incrementally decreased mortality in all patients (AHR [CRI = +1] = 0.94 [0.87-1.03]); AHR [CRI ≥ +2] = 0.74 [0.64-0.85]), and was driven by a significantly decreased mortality risk in single-arterial CABG (AHR [CRI = +1] = 0.90 [0.81-0.99]; AHR [CRI ≥ +2] = 0.64 [0.53-0.78]); and 3-Vessel disease patients (AHR [CRI = +1] = 0.94 [0.86-1.04]; and AHR [CRI ≥ +2] = 0.75 [0.63-0.88]) with no impact in multi-arterial CABG (AHR [CRI = +1] = 1.07 [0.91-1.26]; AHR [CRI ≥ +2] = 0.93 [0.73-1.17]). Conclusions: Incomplete revascularization is associated with decreased late survival, irrespective of grafting strategy. Alternatively, supra-complete revascularization is associated with improved survival in patients with 3-Vessel CAD, and in single-arterial but not multi-arterial CABG. abstract_id: PUBMED:12021224 Survival after coronary revascularization in the elderly. Background: Elderly patients with ischemic heart disease are increasingly referred for coronary artery bypass grafting (CABG) or percutaneous coronary intervention (PCI). However, reports of poor outcomes in the elderly have led to questions about the benefit of these strategies. We studied survival by prescribed treatment (CABG, PCI, or medical therapy) for patients in 3 age categories: &lt;70 years, 70 to 79 years, and &gt; or =80 years of age. Methods And Results: The Alberta Provincial Project for Outcomes Assessment in Coronary Heart Disease (APPROACH) is a clinical data collection and outcome monitoring initiative capturing all patients undergoing cardiac catheterization and revascularization in the province of Alberta, Canada, since 1995. Characteristics and long-term outcomes of a cohort of &gt;6000 elderly patients with ischemic heart disease were compared with younger patients. In 15 392 patients &gt;70 years of age, 4-year adjusted actuarial survival rates for CABG, PCI, and medical therapy were 95.0%, 93.8%, and 90.5%, respectively. In 5198 patients 70 to 79 years of age, survival rates were 87.3%, 83.9%, and 79.1%, respectively. In 983 patients &gt; or = 80 years of age, survival was 77.4% for CABG, 71.6% for PCI, and 60.3% for medical therapy. Absolute risk differences in comparison to medical therapy for CABG (17.0%) and PCI (11.3%) were greater for patients &gt; or =80 years of age than for younger patients. Conclusions: Elderly patients paradoxically have greater absolute risk reductions associated with surgical or percutaneous revascularization than do younger patients. The combination of these results with a recent randomized trial suggests that the benefits of aggressive revascularization therapies may extend to subsets of patients in older age groups. abstract_id: PUBMED:22496082 Impact of completeness of revascularization on long-term cardiovascular outcomes in patients with type 2 diabetes mellitus: results from the Bypass Angioplasty Revascularization Investigation 2 Diabetes (BARI 2D). Background: Patients with diabetes have more extensive coronary disease than those without diabetes, resulting in more challenging percutaneous coronary intervention or surgical (coronary artery bypass graft) revascularization and more residual jeopardized myocardium. The Bypass Angioplasty Revascularization Investigation 2 Diabetes (BARI 2D) trial provided an opportunity to examine the long-term clinical impact of completeness of revascularization in patients with diabetes. Methods And Results: This is a post hoc, nonrandomized analysis of the completeness of revascularization in 751 patients who were randomly assigned to early revascularization, of whom 264 underwent coronary artery bypass graft surgery and 487 underwent percutaneous coronary intervention. The completeness of revascularization was determined by the residual postprocedure myocardial jeopardy index (RMJI). RMJI is a ratio of the number of myocardial territories supplied by a significantly diseased epicardial coronary artery or branch that was not successfully revascularized, divided by the total number of myocardial territories. Mean follow-up for mortality was 5.3 years. Complete revascularization (RMJI=0) was achieved in 37.9% of patients, mildly incomplete revascularization (RMJI &gt;0≤33) in 46.6%, and moderately to severely incomplete revascularization (RMJI &gt;33) in 15.4%. Adjusted event-free survival was higher in patients with more complete revascularization (hazard ratio, 1.14; P=0.0018). Conclusions: Patients with type 2 diabetes mellitus and less complete revascularization had more long-term cardiovascular events. Clinical Trial Registration: URL: http://www.clinicaltrials.gov. Unique identifier: NCT00006305. abstract_id: PUBMED:24521971 Completeness of coronary revascularization and survival: Impact of age and off-pump surgery. Objectives: We conducted a multicenter observational cohort study of the effect of completeness of revascularization on long-term survival after coronary artery bypass grafting. We also investigated the impact of age and off-pump surgery. Methods: The Veterans Affairs Continuous Improvement in Cardiac Surgery Program was used to identify all patients (N=41,139) with left main and 3-vessel coronary artery disease who underwent nonemergency coronary artery bypass grafting from October 1997 to April 2011. The primary outcome measure, all-cause mortality, was compared between patients with complete revascularization and patients with incomplete revascularization. Survival functions were estimated with the Kaplan-Meier method and compared by using the log-rank test. Propensity scores calculated for each patient were used to match 5509 patients undergoing complete revascularization to 5509 patients undergoing incomplete revascularization. A subgroup analysis was performed in patients aged at least 70 years and patients who underwent off-pump coronary artery bypass grafting. Results: In the unmatched groups, several risk factors were more common in the incomplete revascularization group, as was off-pump coronary artery bypass grafting. In the matched groups, risk-adjusted mortality was higher in the incomplete revascularization group than in the complete revascularization group at 1 year (6.96% vs 5.97%; risk ratio [RR], 1.17; 95% confidence interval [CI], 1.01-1.34), 5 years (18.50% vs 15.96%; RR, 1.16; 95% CI, 1.07-1.26), and 10 years (32.12% vs 27.40%; RR, 1.17; 95% CI, 1.11-1.24), with an overall hazard ratio of 1.18 (95% CI, 1.09-1.28; P&lt;.0001). The hazard ratio for patients aged 70 years or more was 1.125 (95% CI, 1.001-1.263; P=.048). The hazard ratio was 1.47 (95% CI, 1.303-1.655) for the unmatched off-pump coronary artery bypass grafting group and 1.156 (95% CI, 1.000-1.335) for the matched off-pump coronary artery bypass grafting group. Conclusions: Incomplete revascularization is associated with decreased long-term survival, even in elderly patients. Surgeons should consider these findings when choosing a revascularization strategy, particularly if off-pump coronary artery bypass grafting is contemplated. abstract_id: PUBMED:37921898 CCTA-based CABG SYNTAX Score: a tool to evaluate completeness of coronary segment revascularization after bypass surgery. To describe the updated coronary computed tomographic angiography (CCTA)-based coronary artery bypass graft (CABG) anatomic SYNTAX Score (aSS) and assess its utility and reproducibility for assessing the completeness of revascularization after CABG. The CCTA-CABG aSS is a visual assessment using CCTA post-CABG which quantifies the failure in effectively grafting stenotic coronary segments, and therefore assesses the completeness of surgical revascularization. It is calculated by subtracting the aSS of successfully anastomosed coronary segments from the aSS of the native coronary tree. The inter-observer reproducibility of the CCTA-CABG aSS was evaluated in 45 consecutive patients with three-vessel disease with or without left main disease who underwent a CCTA 30 days (± 7 days) after CABG. The CCTA-CABG aSS was evaluated in 45 consecutive patients with 117 bypass grafts and 152 anastomoses. The median native coronary aSS was 35.0 [interquartile range (IQR) 27.0-41.0], whilst the median CCTA-CABG aSS was 13.0 (IQR 9.0-20.5). The inter-observer level of agreement for the native coronary aSS and the CCTA-CABG aSS were both substantial with respective Kappas of 0.67 and 0.61. The CCTA-CABG aSS was feasible in all patients who underwent CABG for complex coronary artery disease with substantial inter-observer reproducibility, and therefore can be used to quantify the completeness of revascularization after CABG. abstract_id: PUBMED:32005574 Priorities in coronary artery bypass grafting: Is midterm survival more dependent on completeness of revascularization or multiple arterial grafts? Objective: Both completeness of revascularization and multiple arterial grafts (multiple arterial coronary artery bypass grafting) have been associated with increased midterm survival after coronary artery bypass grafting. The purpose of this study was to evaluate the relative impact of completeness of revascularization and multiple arterial coronary artery bypass grafting on midterm survival after coronary artery bypass grafting. Methods: A retrospective review of 17,411 isolated, primary coronary artery bypass grafting operations from January 2002 to June 2016 at a US academic institution was performed. Patients were divided into groups based on complete or incomplete revascularization and number of arterial grafts. Inverse probability of treatment weighting based on the generalized propensity score was performed to minimize imbalance in preoperative characteristics. Between-group differences in outcomes were assessed using multivariable logistic and Cox regression analyses, incorporating the propensity score weights. Results: Patients undergoing multiple arterial coronary artery bypass grafting in this study were younger, had fewer comorbid conditions, and had lower incidence of left main stenosis compared with patients undergoing single-arterial coronary artery bypass grafting. Short-term perioperative outcomes were similar between groups once propensity score weighting was used to minimize between-group differences in preoperative variables. Median follow-up in the entire population was 630 days, but was 1366 days in the cohort with data available from the Social Security Death Index. Multiple arterial coronary artery bypass grafting was protective for midterm survival compared with single arterial coronary artery bypass grafting, regardless of complete or incomplete revascularization or strategy (multiple arterial complete revascularization vs single-arterial complete revascularization: hazard ratio, 0.82; 95% confidence interval, 0.69-0.97; P = .02; multiple arterial incomplete revascularization vs single-arterial incomplete revascularization: hazard ratio, 0.70; 95% confidence interval, 0.53-0.90; P = .007). Conclusions: After controlling for preoperative comorbidities, multiple arterial coronary artery bypass grafting provides a modest midterm survival benefit over single-arterial coronary artery bypass grafting irrespective of completeness of revascularization, suggesting that when forced to choose, surgeons may elect to pursue multiple arterial conduits. Answer: Yes, the completeness of revascularization does affect early survival after coronary artery bypass grafting (CABG) in elderly patients. A study focusing on patients aged 75 years and older found that incomplete revascularization increased the early risk of death after CABG. Mortality until 180 days after CABG was higher in patients who had incomplete revascularization compared to those who had complete revascularization (24% vs. 15%; P=0.005). Incomplete revascularization was identified as an independent risk factor for death, with an odds ratio of 1.8 (P=0.015). The study concluded that the potential benefit of shorter aortic cross clamping time in incomplete revascularization does not outweigh the advantages of complete revascularization (PUBMED:11423284). Furthermore, another study on octogenarians with triple-vessel disease showed that while incomplete revascularization during CABG did not negatively impact early or long-term survival, the general consensus across multiple studies suggests that complete revascularization is associated with better outcomes (PUBMED:22480392). In the context of the ARTS trial, complete revascularization was more frequently accomplished by bypass surgery than by stent implantation. One year after bypass, there was no significant difference in event-free survival between surgically treated patients with complete revascularization and those with incomplete revascularization. However, patients randomized to stenting with incomplete revascularization had a greater need for subsequent bypass surgery (PUBMED:11849851). Overall, the evidence suggests that complete revascularization is generally associated with better early survival and outcomes in elderly patients undergoing CABG.
Instruction: Lung transplantation in patients 70 years old or older: have outcomes changed after implementation of the lung allocation score? Abstracts: abstract_id: PUBMED:22944081 Lung transplantation in patients 70 years old or older: have outcomes changed after implementation of the lung allocation score? Objective: The objective of the present study was to evaluate whether the outcomes of lung transplantation in patients aged 70 years or older have changed after implementation of the lung allocation score in May 2005. Methods: Patients aged 70 years or older undergoing primary lung transplantation from 1995 to 2009 were identified from the United Network for Organ Sharing registry. The primary stratification was the pre-lung allocation score era versus lung allocation score era. Risk-adjusted multivariate Cox regression and Kaplan-Meier analyses were conducted to evaluate the effect of age 70 years or older on 1-year post-transplant mortality compared with a reference cohort of patients aged 60 to 69 years. Results: Of the overall 15,726 adult lung transplantation patients in the study period, 225 (1.4%) were 70 years old or older and 4634 (29.5%) were 60 to 69 years old. The patients aged 70 years or older were a larger cohort of overall lung transplantation patients in the lung allocation score era compared with before the lung allocation score era (3.1% vs 0.3%, P &lt; .001). In the risk-adjusted Cox analysis, age 70 years or older was a significant risk factor for 1-year post-lung transplantation mortality in the pre-lung allocation score era (hazard ratio, 2.00; 95% confidence interval, 1.10-3.62, P = .02) but not in the lung allocation score era (hazard ratio, 1.02; 95% confidence interval, 0.71-1.46; P = .92). Similarly, Kaplan-Meier 1-year survival was significantly reduced in patients 70 years old or older versus 60 to 69 years old in the pre-lung allocation score era (56.7% vs 76.3%, P = .006) but not in the lung allocation score era (79.0% vs 80.0%, P = .72). Conclusions: Recipients aged 70 years or older were a larger proportion of overall lung transplantation patients after implementation of the lung allocation score. Although associated with significantly increased post-lung transplantation mortality in the pre-lung allocation score era, age 70 years or older is currently associated with outcomes comparable to those of patients aged 60 to 69 years. Therefore, age 70 years or older should not serve as an absolute contraindication to lung transplantation in the lung allocation score era. abstract_id: PUBMED:26922274 Effect of the lung allocation score on lung transplantation in the United States. Background: On May 4, 2005, the system for allocation of deceased donor lungs for transplant in the United States changed from allocation based on waiting time to allocation based on the lung allocation score (LAS). We sought to determine the effect of the LAS on lung transplantation in the United States. Methods: Organ Procurement and Transplantation Network data on listed and transplanted patients were analyzed for 5 calendar years before implementation of the LAS (2000-2004), and compared with data from 6 calendar years after implementation (2006-2011). Counts were compared between eras using the Wilcoxon rank sum test. The rates of transplant increase within each era were compared using an F-test. Survival rates computed using the Kaplan-Meier method were compared using the log-rank test. Results: After introduction of the LAS, waitlist deaths decreased significantly, from 500/year to 300/year; the number of lung transplants increased, with double the annual increase in rate of lung transplants, despite no increase in donors; the distribution of recipient diagnoses changed dramatically, with significantly more patients with fibrotic lung disease receiving transplants; age of recipients increased significantly; and 1-year survival had a small but significant increase. Conclusions: Allocating lungs for transplant based on urgency and benefit instead of waiting time was associated with fewer waitlist deaths, more transplants performed, and a change in distribution of recipient diagnoses to patients more likely to die on the waiting list. abstract_id: PUBMED:24854023 Introduction of the lung allocation score in Germany. The aim of this study was to assess performance of the new lung allocation system in Germany based on lung allocation score (LAS). Retrospective analysis of waitlist (WL) outflow, lung transplantation (LTx) activity and 3-month outcomes comparing 1-year pre- and post-LAS introduction on December 10, 2011 was performed. Following LAS introduction, WL registrations remained constant, while WL mortality fell by 23% (p = 0.04). Reductions in WL mortality occurred in patients with cystic fibrosis (CF; -52%), emphysema (chronic obstructive pulmonary disease [COPD]; -49%) and pulmonary hypertension (PH; -67%), but not idiopathic pulmonary fibrosis (IPF; +48%). LTx activity increased by 9% (p = 0.146). Compared to pre-LAS, more patients with IPF (32% vs. 29%) and CF (20% vs. 18%) underwent transplantation and comparatively fewer with COPD (30% vs. 39%). Median LAS among transplant recipients was highest in PH (53) and IPF (49) and lowest in COPD (34). Transplantation under invasive respiratory support increased to 13% (in CF 28%, +85%, p = 0.017). Three-month survival remained unchanged (pre: 96.1% and post: 94.9%, p = 0.94). Following LAS implementation in Germany, reductions in waiting list size and WL mortality were observed. Composition of transplant recipients changed, with fewer COPD and more IPF recipients. Transplantation under invasive respiratory support increased. Reductions in WL mortality were most pronounced among CF and PH patients. abstract_id: PUBMED:29804306 Lung transplantation in the Lung Allocation Score era: Medium-term analysis from a single center. In 2005, the Lung Allocation Score (LAS) was implemented as the allocation system for lungs in the US. We sought to compare 5-year lung transplant outcomes before and after the institution of the LAS. Between 2000 and 2011, 501 adult patients were identified, with 132 from January 2000 to April 2005 (Pre-LAS era) and 369 from May 2005 to December 2011 (Post-LAS era). Kruskal-Wallis or chi-squared test was used to determine significance between groups. Survival was censored at 5 years. Overall, the post-LAS era was associated with more restrictive lung disease, higher LAS scores, shorter wait-list times, more preoperative immunosuppression, and more single lung transplantation. In addition, post-LAS patients had higher O2 requirements with greater preoperative pulmonary impairment. Postoperatively, 30-day mortality improved in post-LAS era (1.6% vs 5.3%, P = .048). During the pre- and post-LAS eras, 5-year survival was 52.3% and 55.3%, respectively (P = .414). The adjusted risk of mortality was not different in the post-LAS era (P = .139). Freedom from chronic lung allograft dysfunction was significantly higher in the post-LAS era (P = .002). In this single-center report, implementation of the LAS score has led to allocation to sicker patients without decrement in short- or medium-term outcomes. Freedom from CLAD at 5 years is improving after LAS implementation. abstract_id: PUBMED:27866928 Improved waitlist and transplant outcomes for pediatric lung transplantation after implementation of the lung allocation score. Background: Although the lung allocation score (LAS) has not been considered valid for lung allocation to children, several additional policy changes for pediatric lung allocation have been adopted since its implementation. We compared changes in waitlist and transplant outcomes for pediatric and adult lung transplant candidates since LAS implementation. Methods: The United Network for Organ Sharing database was reviewed for all lung transplant listings during the period 1995 to June 2014. Outcomes were analyzed based on date of listing (pre-LAS vs post-LAS) and candidate age at listing (adults &gt;18 years, adolescents 12 to 17 years, children 0 to 11 years). Results: Of the 39,962 total listings, 2,096 (5%) were for pediatric candidates. Median waiting time decreased after LAS implementation for all age groups (adults: 379 vs 83 days; adolescents: 414 vs 104 days; children: 211 vs 109 days; p &lt; 0.001). The proportion of candidates reaching transplant increased after LAS (adults: 52.6% vs 71.6%, p &lt; 0.001; adolescents: 40.3% vs 61.6%, p &lt; 0.001; children: 42.4% vs 50.9%, p = 0.014), whereas deaths on the waitlist decreased (adults: 28.0% vs 14.4%, p &lt; 0.001; adolescents: 33.1% vs 20.9%, p &lt; 0.001; children: 32.2% vs 25.0%; p = 0.025), despite more critically ill candidates in all groups. Median recipient survival increased after LAS for adults and children (adults: 5.1 vs 5.5 years, p &lt; 0.001; children: 6.5 vs 7.6 years, p = 0.047), but not for adolescents (3.6 vs 4.3 years, p = 0.295). Conclusions: Improvements in waiting time, mortality and post-transplant survival have occurred in children after LAS implementation. Continued refinement of urgency-based allocation to children and broader sharing of pediatric donor lungs may help to maximize these benefits. abstract_id: PUBMED:36774168 The Past, Present, and Near Future of Lung Allocation in the United States. The first official donor lung allocation system in the United States was initiated by the United Network of Organ Sharing in 1990. The initial policy for lung allocation was simple with donor lungs allocated based on ABO match and the amount of time the candidates accrued on the waiting list. Donor offers were first given to candidates' donor service area. In March 2005, the implementation of the lung allocation score (LAS) was the major change in organ allocation. International adoption of the LAS-based allocation system can be seen worldwide. abstract_id: PUBMED:25430425 Candidacy for lung transplant and lung allocation. This article provides a summary of the changes in lung transplantation after implementation of the Lung Allocation Score in 2005. Specific issues that are addressed include impact of diagnosis group, age, critical illness, and geographic disparities in transplant. abstract_id: PUBMED:35016781 Lung transplantation for chronic obstructive pulmonary disease: A call to modify the lung allocation score to decrease waitlist mortality. Objective: Approximately 40% of lung transplants for chronic obstructive pulmonary disease (COPD) in the lung allocation score era are single lung transplantations (SLTs). We hypothesized that double lung transplantation (DLT) results in superior survival, but that mortality on the waitlist may compel clinicians to perform SLT. We investigated both waitlist mortality in COPD patients with restricted versus unrestricted listing preferences and posttransplant survival in SLT versus DLT to identify key predictors of mortality. Methods: A retrospective analysis of waitlist mortality and posttransplant survival in patients with COPD was conducted using post-lung allocation score data from the United Network for Organ Sharing database between 2005 and 2018. Results: Of 6740 patients with COPD on the waitlist, 328 (4.87%) died and 320 (4.75%) were removed due to clinical deterioration. Median survival on the waitlist was significantly worse in patients listed as restricted for DLT (4.39 vs 6.09 years; P = .002) compared with patients listed as unrestricted (hazard ratio, 1.34; 95% CI, 1.13-1.57). Factors that increase waitlist mortality include female sex, increased pulmonary artery pressure, and increased wait time. Median posttransplant survival was 5.3 years in SLT versus 6.5 years in DLT (P &lt; .001). DLT recipients are younger, male patients with a higher lung allocation score. The survival advantage of DLT persisted in adjusted analysis (hazard ratio, 0.819; 95% CI, 0.741-0.905). Conclusions: Restricted listing preference is associated with increased waitlist mortality, but DLT recipients have superior posttransplant survival. Because the lung allocation score does not prioritize COPD, concern for increased waitlist mortality with restricted listing preference may drive continued use of SLT despite better posttransplant survival in DLT. abstract_id: PUBMED:30664859 Lung Allocation Score Thresholds Prioritize Survival After Lung Transplantation. Background: The lung allocation score (LAS) prioritizes lung transplant (LTx) candidates with poor transplant-free survival and expected survival benefit from LTx. Although patients with the highest LAS have the shortest waiting time, mortality benefit is unclear in this group, raising criticism that the LAS inappropriately prioritizes critically ill candidates. We aim to identify a threshold above which increasing LAS values do not predict increasing survival benefit. Methods: The United Network for Organ Sharing Registry was queried for first-time adult LTx candidates with LAS ≥ 30 between May 2005 and December 2016. Survival was tracked from the time of listing through the posttransplant period and compared with survival while remaining on the waitlist, using proportional hazards regression. The survival benefit of LTx was modeled as a piecewise-constant time-dependent covariate, moderated by candidate LAS. Results: Of the overall cohort (N = 21,157), LTx was particularly protective for 365 patients with an initial LAS of 70 to 79 (hazard ratio of death after undergoing LTx relative to remaining on the waitlist, 0.2; 95% CI, 0.1-0.3). However, the survival benefit of LTx did not meaningfully increase for 1,042 patients listed with even higher LAS. Among patients with cystic fibrosis, the survival benefit of LTx was constant above an LAS of approximately 50. Conclusions: Consistent survival benefit of LTx was observed among patients with an initial LAS of 70 and greater. This result supports equalizing priority for donor lung allocation for patients with LAS ≥ 70. A lower LAS threshold for maximum priority is indicated in patients with cystic fibrosis. abstract_id: PUBMED:32271967 Clinical judgment versus lung allocation score in predicting lung transplant waitlist mortality. Canadian lung transplant centers currently use a subjective and dichotomous "Status" ranking to prioritize waitlisted patients for lung transplantation. The lung allocation score (LAS) is an objective composite score derived from clinical parameters associated with both waitlist and post-transplant survival. We performed a retrospective cohort study to determine whether clinical judgment (Status) or LAS better predicted waitlist mortality. All adult patients listed for lung transplantation between 2007 and 2012 at three Canadian lung transplant programs were included. Status and LAS were compared in their ability to predict waitlist mortality using Cox proportional hazards models and C-statistics. Status and LAS were available for 1122 patients. Status 2 patients had a higher LAS compared to Status 1 patients (mean 40.8 (4.4) vs 34.6 (12.5), P = .0001). Higher LAS was associated with higher risk of waitlist mortality (HR 1.06 per unit LAS, 95% CI 1.05, 1.07, P &lt; .001). LAS predicted waitlist mortality better than Status (C-statistic 0.689 vs 0.674). Patients classified as Status 2 and LAS ≥ 37 had the worst survival awaiting transplant, HR of 8.94 (95% CI 5.97, 13.37). LAS predicted waitlist mortality better than Status; however, the best predictor of waitlist mortality may be a combination of both LAS and clinical judgment. Answer: Yes, outcomes of lung transplantation in patients 70 years old or older have changed after the implementation of the lung allocation score (LAS). A study evaluating the effect of age on post-transplant mortality found that before the LAS era, being 70 years or older was a significant risk factor for 1-year post-lung transplantation mortality. However, in the LAS era, age 70 years or older is not associated with increased post-transplant mortality, and the outcomes are comparable to those of patients aged 60 to 69 years. This suggests that age should not be an absolute contraindication to lung transplantation in the LAS era (PUBMED:22944081). The LAS was introduced in the United States on May 4, 2005, to allocate lungs based on urgency and benefit rather than waiting time. This change has been associated with fewer waitlist deaths, more transplants performed, and a change in the distribution of recipient diagnoses to patients more likely to die on the waiting list (PUBMED:26922274). The introduction of the LAS in Germany also showed reductions in waiting list mortality, particularly among patients with cystic fibrosis and pulmonary hypertension, and an increase in lung transplantation activity (PUBMED:24854023). In the United States, the LAS era has been associated with more restrictive lung disease, higher LAS scores, shorter wait-list times, and improved postoperative outcomes, including 30-day mortality and freedom from chronic lung allograft dysfunction, without a decrement in short- or medium-term outcomes (PUBMED:29804306). Additionally, improvements in waiting time, mortality, and post-transplant survival have been observed in children after LAS implementation (PUBMED:27866928). Overall, the implementation of the LAS has led to a more effective and equitable allocation of lungs for transplantation, improving outcomes for various age groups, including those 70 years or older (PUBMED:36774168; PUBMED:25430425).
Instruction: Is outpatient care associated with lower use of inpatient and emergency care? Abstracts: abstract_id: PUBMED:14597499 Is outpatient care associated with lower use of inpatient and emergency care? An analysis of persons with HIV disease. Objectives: The authors use data from the AIDS Costs and Service Utilization Survey (ACSUS) to investigate the extent to which use of ambulatory medical care is associated with inpatient and emergency department use among HIV-infected persons. Methods: Parameter estimates were derived from simultaneous, multiequation models. Results: Higher use of ambulatory medical services is not significantly associated with lower probability of inpatient admissions or emergency department (ED) visits. For the subgroup of patients who received an AIDS diagnosis during the study period, however, the number of ambulatory visits had significant negative effects on hospitalizations and ED use. Conclusions: Outpatient care may offset inpatient and ED services at particular points in the disease course. abstract_id: PUBMED:35505398 Use of outpatient and inpatient health care services by occupation-a register study of employees in Oulu, Finland. Background: The aim of this study was to examine how the use of outpatient and inpatient health services differs by occupational groups, and whether the differences are explained by sociodemographic factors and health status. Methods: We used register-based data on 25-64-year-old employees living in the city of Oulu, Finland, in 2018 (N = 61,848). Use of outpatient health care services (public, private and occupational health care) among men and women was analysed with negative binomial regression models, and use of inpatient health care with logistic regression models, using two occupational classifications: occupational group (1-digit level) and more detailed occupation (2-digit level). Adjusted covariates were age, education, income, marital status, special reimbursement entitlements for medicines, and sickness absence. Results: Examined at the level of larger occupational groups, the use of outpatient and inpatient health care was less common than average among managers, professionals and skilled agricultural, forestry and fishery workers; in women also among craft and related trades workers. Controlling for covariates explained only part of the differences, more among women than among men. Analysed at the level of more detailed occupations, the adjusted use of outpatient and inpatient care was more common among health associate professionals and stationary plant and machine operators, both among men and women. Furthermore, the use of outpatient care was common among male personal care workers, protective service workers and metal, machinery and related trades workers as well as among labourers in mining, construction, manufacturing and transport, and female customer services clerks and sales workers. Conclusion: The use of health care services differs by occupation, and the differences are not fully explained by sociodemographic factors and health status. High occupational risks, attitudes and knowledge may explain the more frequent use of health services. Furthermore, explanations may be sought from lack of access to occupational health care or healthier working conditions and behavior. abstract_id: PUBMED:28808952 Effect of reducing cost sharing for outpatient care on children's inpatient services in Japan. Background: Assessing the impact of cost sharing on healthcare utilization is a critical issue in health economics and health policy. It may affect the utilization of different services, but is yet to be well understood. Objective: This paper investigates the effects of reducing cost sharing for outpatient services on hospital admissions by exploring a subsidy policy for children's outpatient services in Japan. Methods: Data were extracted from the Japanese Diagnosis Procedure Combination database for 2012 and 2013. A total of 366,566 inpatients from 1390 municipalities were identified. The impact of expanding outpatient care subsidy on the volume of inpatient care for 1390 Japanese municipalities was investigated using the generalized linear model with fixed effects. Results: A decrease in cost sharing for outpatient care has no significant effect on overall hospital admissions, although this effect varies by region. The subsidy reduces the number of overall admissions in low-income areas, but increases it in high-income areas. In addition, the results for admissions by type show that admissions for diagnosis increase particularly in high-income areas, but emergency admissions and ambulatory-care-sensitive-condition admissions decrease in low-income areas. Conclusions: These results suggest that outpatient and inpatient services are substitutes in low-income areas but complements in high-income ones. Although the subsidy for children's healthcare would increase medical costs, it would not improve the health status in high-income areas. Nevertheless, it could lead to some health improvements in low-income areas and, to some extent, offset costs by reducing admissions in these regions. abstract_id: PUBMED:31631824 Emergency Management for the Outpatient Health Care Environment. Utilization of ambulatory and outpatient services for primary, specialty, and surgical care has risen in the United States over the last decade, in parallel with the evolution of health care emergency management. Regulatory and accreditation authorities; legislature and policies; and real-life events such as hurricanes, tornadoes, and wildfires throughout the country have caused health care systems to take a more all-hazards approach for emergency management. While health care emergency management has grown tremendously in significance, outpatient settings have yet to see the same growth. However, concepts of comprehensive emergency management and the incident command system are important and valuable across all health care system settings, including outpatient facilities. The purpose of this article is to summarize regulatory requirements for outpatient health care emergency management, describe nuances of outpatient settings, and provide recommendations for how to successfully incorporate outpatient and ambulatory locations into the "Enterprise" model for comprehensive health care emergency management. abstract_id: PUBMED:38429199 Cost analysis of orthognathic surgery: outpatient care versus inpatient care. With limited healthcare resources, it is important to provide the right level and form of care. The aim of this study was to determine whether selected single-jaw orthognathic surgery in outpatient care (OPC) generates lower healthcare costs than in inpatient care (IPC). The costs of surgically assisted rapid maxillary expansion (SARME), Le Fort I osteotomy (LFI), and bilateral sagittal split osteotomy (BSSO) were calculated for 165 patients, 107 treated in OPC and 58 in IPC. Additionally, costs for revisits, emergency visits, emergency phone calls, re-operations, and plate removal during the first 12 months postoperatively were recorded. The total mean costs of the different operations including revisits, emergency visits, and phone calls were 34.2-48.8% lower in OPC than in IPC at 12 months postoperatively. Operation costs were lower for LFI in OPC (P = 0.009) and for SARME in IPC (P = 0.007). Anaesthesia costs were lower for LFI (P &lt; 0.001) and BSSO (P &lt; 0.001) in OPC, and there were fewer revisits (P = 0.001) and lower costs (P = 0.002) after LFI in OPC compared to IPC. This study showed that selected single-jaw orthognathic surgeries in outpatient care are associated with lower healthcare costs compared to inpatient care. abstract_id: PUBMED:36737059 Associations between outpatient treatment and the use of intensive psychiatric healthcare services. The current manuscript examines concurrent and longitudinal associations between the utilization of outpatient and intensive psychiatric services among Medicaid-enrolled youth. Using an administrative dataset of Medicaid claims from 2007 to 2017, youth were included if they were between the ages of 10-18 (M = 13.4, SD = 2.6) and had a psychiatric Medicaid claim (N = 33,590). Psychiatric services were coded as outpatient, emergency department (ED), inpatient, or residential based on Medicaid codes. Logistic regression analyses indicated that the receipt of even one outpatient visit significantly reduced the odds of having an ED, inpatient, and residential visit within 60-, 90-, and 120-day windows. Survival analyses indicated most youth did not have any ED, inpatient, or residential visit following their first outpatient visit. For remaining youth, having an outpatient visit significantly increased the risk of having an ED, inpatient, and residential visit following their initial appointment, which may suggest these youth are being triaged to a more appropriate level of care. Classification accuracy analyses indicated a cutoff of 2 outpatient visits yielded maximum accuracy in determining youth with ED, inpatient, and residential visits. Findings highlight use of outpatient-level services in reducing risk of more intensive service utilization. abstract_id: PUBMED:34224059 Effects of establishing a financing scheme for outpatient care on inpatient services: empirical evidence from a quasi-experiment in China. The relationship between outpatient and inpatient care is central to the current healthcare reform debate especially in developing countries. Despite the importance of this relationship to health policy makers, empirical evidence, particularly evidence that can be interpreted as causal is limited and inconclusive. This paper examines the effects of establishing a financing scheme for outpatient care on inpatient utilization and expenditure in China's Urban Employee Basic Medical Insurance scheme. Under a quasi-experimental design, we use a unique administrative insurance claim dataset and conduct a difference-in-differences analysis. Our results indicate that after the policy change, total number of admissions and total inpatient expenditure of the enrollees decreased by 0.47% and 6.05% respectively, which imply outpatient and inpatient care are substitutes, and the reduction in cost-sharing can release the underuse of the outpatient care, so as to reduce those excessive demands for inpatient care. Moreover, we present evidence that the effects on the admissions of Ambulatory Care Sensitive Conditions which should be sensitive to outpatient care intervention are relatively limited because of the lower reimbursement cap, inadequate capacity of the local primary care providers and stickiness in patients' healthcare-seeking behaviors. While the enrollees aged over 55 and retirees are more vulnerable to the medical prices, and the enrollees living in the central districts are more responsive because of the better and more accessible primary care. abstract_id: PUBMED:22013140 Costs associated with outpatient, emergency room and inpatient care for migraine in the USA. Background: Data on the average US costs of an outpatient visit, emergency room (ER) visit or hospitalization for migraine are scant, with the most recent available values based on healthcare charges reported from 1994 data. Methods: We estimated healthcare costs associated with outpatient and ER visits and inpatient hospitalizations related to migraine retrospectively obtained from the 2007 Medstat MarketScan Commercial Claims &amp; Encounters database. Tabulated costs reflected payments from insurers, patients and other sources. All costs were adjusted to 2010 US dollars. Results: The estimated mean cost (95% CI) for migraine-related care per outpatient visit (N = 680,946) was $139.88 ($139.35-140.41); per ER visit (N = 88,128) was $775.09 ($768.10-782.09); and per inpatient hospitalization (N = 5516) was $7317.07 ($7134.96-7499.17). The most frequently coded procedures at outpatient and ER visits were subcutaneous or intra-muscular injection, and for hospitalizations was computed tomography. Estimated annual US healthcare costs in 2010 for migraine associated with: outpatient visits were $3.2 billion, ER visits were $700 million, and inpatient hospitalizations were $375 million. Conclusions: Direct healthcare costs associated with patient visits and hospitalizations for migraine headaches have increased since previously published estimates. Further research is needed to understand the current overall healthcare cost burden per patient and within the US population. abstract_id: PUBMED:23152695 Comparing common reasons for inpatient and outpatient visits between commercially-insured duloxetine or pregabalin initiators with fibromyalgia. Background: The purpose of this study was to examine the main reasons for inpatient or outpatient visits after initiating duloxetine or pregabalin. Methods: Commercially insured patients with fibromyalgia and aged 18-64 years who initiated duloxetine or pregabalin in 2006 with 12-month continuous enrollment before and after initiation were identified. Duloxetine and pregabalin cohorts with similar demographics, pre-index clinical and economic characteristics, and pre-index treatment patterns were constructed via propensity scoring stratification. Reasons for inpatient admissions, physician office visits, outpatient hospital visits, emergency room visits, and primary or specialty care visits over the 12 months post-index period were examined and compared. Logistic regression was used to assess the contribution of duloxetine versus pregabalin initiation to the most common reasons for visits, controlling for cross-cohort differences. Results: Per the study design, the duloxetine (n = 3711) and pregabalin (n = 4111) cohorts had similar demographics (mean age 51 years, 83% female) and health care costs over the 12-month pre-index period. Total health care costs during the 12-month post-index period were significantly lower for duloxetine patients than for pregabalin patients ($19,378 versus $27,045, P &lt; 0.05). Eight of the 10 most common reasons for inpatient admissions and outpatient hospital (physician office, emergency room, primary or specialty care) visits were the same for both groups. Controlling for cross-cohort differences, duloxetine patients were less likely to be hospitalized due to an intervertebral disc disorder or major depressive disorder, to have a physician office visit due to nonspecific backache/other back/neck pain (NB/OB/NP) disorder, or to go to specialty care due to a soft tissue, NB/OP/NP, or intervertebral disc disorder. However, duloxetine patients were more likely to have a primary care visit due to a soft tissue disorder, essential hypertension, or other general symptoms. Conclusion: Among similar commercially insured patients with fibromyalgia who initiated duloxetine or pregabalin, duloxetine patients had significantly lower health care costs over the 12-month post-index period. The leading reasons for inpatient or outpatient visits were also somewhat different. abstract_id: PUBMED:30786291 Developments in Emergency Care - Analysis of Emergency Cases in In- and Outpatient Care from 2009 To 2015 in Germany Background: Increasing numbers of emergency cases, different stakeholders involved (physicians' practices vs. hospitals providing ambulatory and inpatient care) and changing patient utilization patterns lead to changes in health care needs. This study aimed at analyzing changes in patient characteristics as well as indications for in- and outpatient emergency care between 2009 and 2015 and their potential reasons. Methods: Based on in- and outpatient routine data, we descriptively analyzed changes in emergency diagnosis, population-based emergency prevalence as well as regional differences and their changes over time. Using generalized linear models (GLM), we examined regional shifts in emergency cases being treated in ambulatory and inpatient settings. Results: Between 2009 and 2014 the number of cross-sectoral ambulatory emergency cases increased by 4 % (in emergency departments an increase by 42% with the highest incidence rate ratio (IRR) in the 20-34-year-old age group). Inpatient emergency cases increased by 20% with older patients representing the largest group. The ICD chapter "Diseases of the circulatory system", responsible for most inpatient hospital cases, had the second highest probability of hospital addmissions (64.7%). There were larger variations in indications for outpatient care. Regression analysis showed that there was greater use of ambulatory emergency services by the healthier (IRR 1.15 [KI 95%: 1.13; 1.16]) and urban population (IRR 1.14 [KI 95%: 1.13; 1.15]). Conclusions: The first time cross-sector analysis of indication-specific emergencies based on nationwide inpatient and outpatient billing data from 2009-2015 provides insightsinto healthcare provision at the interface between the sectors. Indications that are treated in physician practices and emergency outpatient clinics and those that lead to hospital admissions point out the potential for managing patient care appropriately. Patient behaviors in healthcare utilization can be addressed by interventions for specific patient subgroups. However, a prerequisite for the development of such measures is the inclusion of a cross-sectoral perspective in the system of emergency care. Answer: The relationship between outpatient care and the use of inpatient and emergency care is complex and varies depending on several factors, including the patient population, disease stage, and socioeconomic context. For HIV-infected persons, higher use of ambulatory medical services was not significantly associated with a lower probability of inpatient admissions or emergency department (ED) visits overall. However, for the subgroup of patients who received an AIDS diagnosis during the study period, the number of ambulatory visits had significant negative effects on hospitalizations and ED use, suggesting that outpatient care may offset inpatient and ED services at particular points in the disease course (PUBMED:14597499). In Japan, reducing cost sharing for outpatient services for children did not significantly affect overall hospital admissions, but the effects varied by region. The subsidy reduced the number of overall admissions in low-income areas but increased it in high-income areas, indicating that outpatient and inpatient services may act as substitutes in low-income areas but as complements in high-income ones (PUBMED:28808952). A study of Medicaid-enrolled youth found that receiving even one outpatient visit significantly reduced the odds of having an ED, inpatient, and residential visit within certain time windows. This suggests that the use of outpatient-level services can reduce the risk of more intensive service utilization (PUBMED:36737059). In China, establishing a financing scheme for outpatient care led to a decrease in the total number of admissions and total inpatient expenditure, implying that outpatient and inpatient care are substitutes. The reduction in cost-sharing for outpatient care could reduce excessive demands for inpatient care (PUBMED:34224059). However, the use of health care services, including outpatient and inpatient care, differs by occupation, and the differences are not fully explained by sociodemographic factors and health status. This indicates that other factors such as occupational risks, attitudes, knowledge, and access to care may influence the frequency of health service use (PUBMED:35505398). In summary, outpatient care can be associated with lower use of inpatient and emergency care in certain contexts, particularly among specific patient subgroups or in regions with particular socioeconomic characteristics. However, this association is not uniform across all populations and may be influenced by a variety of factors.
Instruction: Laparoscopy-assisted distal gastrectomy for early gastric cancer: is it beneficial for patients of heavier weight? Abstracts: abstract_id: PUBMED:24767611 Triple-incision laparoscopic distal gastrectomy for the resection of gastric cancer: comparison with conventional laparoscopy-assisted distal gastrectomy. Introduction: Reduced port surgery and single-port surgery are currently in the spotlight as next-generation, minimally invasive surgical techniques. We performed a triple-incision laparoscopic distal gastrectomy (TIL-DG) for gastric cancer as a reduced port surgery. Method: A total of 76 patients underwent a TIL-DG. A D1+ or D2 lymph node dissection was performed, and the Roux-en-Y method was used for reconstruction. The short-term patient outcomes of the TIL-DG group were compared with those of the conventional laparoscopy-assisted distal gastrectomy group (59 cases) to evaluate the feasibility of TIL-DG. Results: No significant differences were observed between the TIL-DG group and the laparoscopy-assisted distal gastrectomy group in terms of mean operative time, blood loss, and the length of the postoperative hospital stay. The mean number of retrieved regional lymph nodes in the TIL-DG group was slightly higher than that in the laparoscopy-assisted distal gastrectomy group. Conclusion: A triple-incision laparoscopic distal gastrectomy is a feasible and safe procedure. abstract_id: PUBMED:23346495 Laparoscopy-assisted distal gastrectomy for early gastric cancer in the elderly. Purpose: With an increase in life expectancy, more elderly patients are presenting with gastric cancer. As a result it is yet be resolved whether laparoscopy assisted distal gastrectomy is a suitable treatment for elderly patients with early gastric cancer. This study retrospectively compared surgical outcomes of laparoscopy assisted distal gastrectomy between elderly and nonelderly patients with gastric cancer. Materials And Methods: The study group was comprised of 316 patients who underwent laparoscopy assisted distal gastrectomy between April 2005 and December 2010. Of these patients, 93 patients whose ages were 65 years or more were compared with 223 patients who were younger. Results: There were no differences in the short term outcome or minor complication rate between the elderly patients and the nonelderly patients. The hospital stay was significantly longer and the major complication rate was significantly higher for the elderly patients compared with nonelderly patients. Conclusions: Laparoscopy assisted distal gastrectomy for early gastric cancer in the elderly patients had comparable operation time and blood loss with the nonelderly group. However, the safety and advantage of laparoscopy assisted distal gastrectomy in the elderly patients need to be further studied in higher volume trials. abstract_id: PUBMED:30625964 Occurrence of iatrogenic pneumothorax during laparoscopy-assisted distal gastrectomy: A case report. The occurrence of a pneumothorax during laparoscopy-assisted distal gastrectomy (LADG) is rare. A pneumothorax was developed during a LADG under general anesthesia in a 67-year-old woman with gastric cancer. About 140 minutes after CO2 insufflation, sudden hemodynamic collapse occurred. A defect was noted in the diaphragm. After immediate repair under laparoscopy, hemodynamic stability was achieved within several minutes. In the anesthetic management of a LADG, the anesthesia provider should be aware of the possible occurrence of a pneumothorax. abstract_id: PUBMED:26715017 Laparoscopy-assisted versus open distal gastrectomy for gastric cancer in elderly patients: a retrospective comparative study. Background: With the current increased longevity in elderly population, surgeons can expect to operate more frequently on elderly patients with both malignancies and comorbid medical conditions. This study aimed to compare the surgical and early postoperative outcomes of laparoscopy-assisted distal gastrectomy (LADG) with those of open distal gastrectomy (ODG) for gastric cancer in patients 70 years of age or older. Methods: Retrospective analysis based on a prospectively collected database of elderly patients who underwent laparoscopy-assisted distal gastrectomy or ODG from February 2013 to January 2014. Preoperative patient baseline parameters, surgical and oncological outcomes, postoperative complications and pathologic results were analyzed in this report. Results: Distal gastrectomy was performed for 50 patients with the age of 70 years or older, using laparoscopic surgery for 23 patients (LADG group) and open surgery for 27 patients (ODG group). The mean age of LADG group was 76.6 years and ODG group 80.0 years. The comparison between the two groups revealed statistically similar results regarding age, gender, BMI, ASA class, history of previous surgeries, CCI and pathologic characteristics. The LADG group was characterized by less intraoperative blood loss (LADG group 100 mL vs. ODG group 250 mL, P &lt; 0.001), less narcotic use (LADG group 1 day vs. ODG group 3 days, P &lt; 0.001), faster bowel function recovery (time to first flatus: LADG group 51.6 h vs. ODG group 67.2 h, P &lt; 0.001; days to oral intake: LADG group 6.1 days vs. ODG group 7.9 days, P = 0.002) and shorter postoperative hospital stay (LADG group 12 days vs. ODG group 16 days, P &lt; 0.001). There was no significant difference in postoperative complication rate (overall complication rate: LADG group 21.7 % vs. ODG group 25.9 %, P = 0.730), survival rate (P = 0.719), postoperative recurrence and metastasis rate between the patients who underwent LADG and ODG. Conclusions: LADG for gastric cancer is feasible, efficacious and safe in elderly patients and may be superior to conventional open resection as regards some surgical outcomes. abstract_id: PUBMED:31945189 Postoperative quality of life after laparoscopy-assisted pylorus-preserving gastrectomy compared with laparoscopy-assisted distal gastrectomy for early gastric cancer. Background And Aim: This study aimed to investigate the postoperative quality of life (QOL) between laparoscopy-assisted pylorus-preserving gastrectomy (LAPPG) and laparoscopy-assisted distal gastrectomy with Billroth I anastomosis (LADGBI) in patients with middle-third early gastric cancers (EGC). Methods: From January 2015 to August 2017, a total 91 patients with EGC underwent LAPPG or LADGBI procedure with complete QLQ-C30 and QLQ-STO22 in Ren Ji Hospital. Not only do surgical and oncological safety as well as clinicopathologic characteristics analyze, but also chronological changes of QOL and nutritional status were compared for the evaluation of functional advantages. Results: There was no significant difference in clinicopathologic characteristics and perioperative recovery between LAPPG and LADGBI. During postoperative 2-year follow up, LAPPG showed significant advantages over LADGBI in emotional functioning, insomnia, appetite loss, reflux, and taste problem and presented larger area in global and functional scales and less area in symptom scales, which means that LAPPG tended to present better improvement and less symptoms than LADGBI for QOL 2 years after surgery. Furthermore, LAPPG could bring about significant improvement in total protein and hemoglobin compared to LADGBI at postoperative 2 years. Conclusion: LAPPG obtains QOL as well as total protein and hemoglobin superiority and could be recommended to patients with EGC whose tumor located in the middle third of the stomach. abstract_id: PUBMED:27751530 Postoperative quality of life after laparoscopy-assisted pylorus-preserving gastrectomy compared With laparoscopy-assisted distal gastrectomy: A cross-sectional postal questionnaire survey. Background: Little is known about postgastrectomy syndrome and quality of life (QOL after laparoscopy-assisted pylorus-preserving gastrectomy (LAPPG). The aim of this study was to assess postgastrectomy syndrome and QOL after LAPPG as compared with laparoscopy-assisted distal gastrectomy with Billroth-I reconstruction (LADGBI). Methods: Postgastrectomy Syndrome Assessment Scale-45 (PGSAS-45) questionnaires were sent by mail to 167 patients. To balance the characteristics of the groups, propensity score matching was performed. Results: Of the 167 patients sent questionnaires, 112 (67%) responded, including 47 who underwent LAPPG and 65 who underwent LADGBI. After propensity score matching, the LAPPG group scored significantly better on the diarrhea and dumping subscales. Multiple regression analysis showed that female sex and LADGBI were independent factors predicting dumping. Evaluation of outcome measures for singular symptom showed that the LAPPG group scored significantly worse on the acid regurgitation subscale, but significantly better on the lower abdominal pain and early dumping abdominal subscales. Conclusion: LAPPG is superior to LADGBI for ameliorating postgastrectomy syndrome and maintaining QOL. LAPPG is recommended for patients with cT1N0 middle third gastric cancer. abstract_id: PUBMED:27739194 Laparoscopy-assisted distal gastrectomy in a case of gastric cancer with situs inversus totalis: a case report. Situs inversus totalis (SIT) is a rare congenital condition in which abdominal and thoracic organs are transposed from normal positions. Laparoscopy-assisted distal gastrectomy for situs inversus totalis is technically difficult and has rarely been reported. Here, we report the case of man in his 40s with situs inversus totalis and a preoperative diagnosis of stage IA gastric cancer (cT1b, cN0, cM0). We successfully performed laparoscopy-assisted distal gastrectomy with D1+ lymph node dissection and Billroth I reconstruction. To ensure a safe procedure, we evaluated the vessels preoperatively with 3-D CT angiography. Furthermore, we performed the surgery by reversing our surgical positions, using a two-monitor method, and then reconstructing under direct vision through the incision. The surgery was performed with minimal blood loss, and no severe postoperative complications were observed. Histopathological examination revealed poorly to moderately differentiated stage IA (pT1b, pN0, pM0) adenocarcinoma. No recurrence has been observed as of 2 years postoperatively. abstract_id: PUBMED:29776645 Effectiveness and safety of total laparoscopic distal gastrectomy versus laparoscopy-assisted distal gastrectomy for gastric cancer: A retrospective cohort study. Aim: To compare the results of total laparoscopic distal gastrectomy (TLDG) and laparoscopy-assisted distal gastrectomy (LADG) and explore the safety and feasibility of TLDG. Methods: Data were collected and analyzed from patients underwent TLDG and LADG from January 2009 to December 2011 at our institution. Results: 127 LADG cases and 104 TLDG cases were included and balanced for age, sex, BMI, ASA scores, and CEA level in this study. A decrease in postoperative pain (P &lt; 0.001), wound infection rate (P = 0.013), and hospitalization time after surgery (P &lt; 0.001) was found in the TLDG group. Compared with the LADG group, there was no increase in operative time (P = 0.084), intraoperative blood loss (P = 0.061), or anastomotic fistula rate (P = 0.473). Statistical differences did not exist in recurrence and (or) metastasis (P = 0.204), 5-years disease-free survival (DFS) rate and overall survival (OS) (P = 0.570 and 0.560, respectively). Conclusion: As long as it follows the surgical principles of malignant tumor, TLDG can achieve the same therapeutic effect as LADG does. TLDG is safe and feasible for gastric cancer patients though further studies are needed. abstract_id: PUBMED:14696510 The benefit of laparoscopy-assisted distal gastrectomy compared with conventional open distal gastrectomy: a case-matched control study. Background/aims: Laparoscopy-assisted distal gastrectomy is a surgical procedure which is safe, useful, and technically feasible for patients with early gastric cancer. The aim of this study is to determine whether laparoscopy-assisted distal gastrectomy is also superior to conventional open distal gastrectomy. Methodology: The study comprised 27 consecutive patients with early gastric cancer who were admitted to our hospital from 1997 to 2000 and underwent distal gastrectomy. Ten patients underwent laparoscopy-assisted distal gastrectomy; and 17 conventional open distal gastrectomy. Results: Estimated blood loss was 17.7 +/- 12.1 g during laparoscopy-assisted distal gastrectomy, and 250.0 +/- 160.8 g during conventional open distal gastrectomy. The difference was statistically significant (p &lt; 0.001). The day of ambulation (2.7 +/- 0.7 vs. 1.0 +/- 0.0), and start of liquid diet (6.9 +/- 1.2 vs. 4.3 +/- 0.5) were significantly delayed in conventional open distal gastrectomy compared with laparoscopy-assisted distal gastrectomy (p &lt; 0.001, p &lt; 0.05). At the 3rd postoperative day, the serum C-reactive protein level in laparoscopy-assisted distal gastrectomy decreased significantly more than that in conventional open distal gastrectomy (4.2 +/- 1.7 vs. 9.4 +/- 2.5: p &lt; 0.05). No postoperative complication was found in laparaoscopy-assisted distal gastrectomy. Conclusions: Laparoscopy-assisted distal gastrectomy is a safe and useful operation for most early gastric cancers. Laparoscopy-assisted distal gastrectomy has been superior to conventional open distal gastrectomy. abstract_id: PUBMED:20838052 Disadvantage of operation cost in laparoscopy-assisted distal gastrectomy under the national health insurance system in Japan. Background: The utility and problems including the socioeconomic aspect of laparoscopy-assisted distal gastrectomy for gastric cancer have not been fully evaluated. Subjects And Methods: We compared open distal gastrectomy and laparoscopy-assisted distal gastrectomy for the clinical benefit, quality of life, and problems of operation cost by the reference documents in which the difference between open distal gastrectomy and laparoscopy-assisted distal gastrectomy was examined in detail. The reference documents retrieved by the key words 'gastric, cancer, laparoscopic, surgery' were 22 in PubMed with the following limits activated: Humans, Clinical Trial, Meta-Analysis, Randomized Controlled Trial, Review, English, Core clinical journals, published in the last 10 years. Results: The operation time of laparoscopy-assisted distal gastrectomy is longer than that of open distal gastrectomy. However, if skilled, the blood loss of laparoscopy-assisted distal gastrectomy is less, the hospitalization days and the duration of fasting after laparoscopy-assisted distal gastrectomy are shorter than those after open distal gastrectomy. The number of excised lymph nodes and the incidence of postoperative complications were similar between laparoscopy-assisted distal gastrectomy and open distal gastrectomy. On the other hand, in the national health insurance system, the operation fee of open distal gastrectomy was USD 6,637 as compared to USD 7,586 for laparoscopy-assisted distal gastrectomy. In spite of the USD 949 difference in the operation fee, the use of disposable instruments for laparoscopy-assisted distal gastrectomy results in a deficit of USD 1,500 over open distal gastrectomy. Conclusion: In spite of the medical superiority of laparoscopy-assisted distal gastrectomy over open distal gastrectomy (if a skilled surgeon operates) as less invasive surgery, laparoscopy-assisted distal gastrectomy is associated with less financial benefit to the hospital as compared to open distal gastrectomy in the current Japanese health insurance system. Answer: The provided abstracts do not directly address the question of whether laparoscopy-assisted distal gastrectomy (LADG) for early gastric cancer is specifically beneficial for patients of heavier weight. The abstracts focus on various aspects of LADG, including comparisons with other surgical methods, outcomes in elderly patients, rare complications, quality of life post-surgery, and economic considerations, but none of the abstracts mention the impact of patient weight on the outcomes or benefits of LADG for early gastric cancer. To answer the question, clinical studies specifically investigating the outcomes of LADG in patients with different body mass indices (BMIs) would be required. Such studies would need to compare the surgical outcomes, complication rates, recovery times, and long-term benefits of LADG in patients of heavier weight versus those of normal weight. Without such data, it is not possible to conclude from the provided abstracts whether LADG is particularly beneficial for patients of heavier weight with early gastric cancer.
Instruction: The use of the detrusorrhaphy for vesico-ureteric reflux: the way forward? Abstracts: abstract_id: PUBMED:3962031 Use of bowel in undiversion. Urinary undiversion is becoming an uncommon procedure because fewer patients are being diverted. The lessons learned from undiversion, however, have made possible early primary reconstructions in patients who would previously have been candidates for diversion. Intermittent clean catheterization and the use of bowel to augment or form a compliant storage vessel for urine have permitted reconstruction in patients with abnormal bladder and urethral function. Thus patients with neurogenic bladder or with severe lower urinary tract abnormalities can look forward to healthy kidneys and urinary continence without the use of appliances. Certainly there are yet many lessons to be learned, but the potential is just starting to be appreciated. abstract_id: PUBMED:9202568 The use of the detrusorrhaphy for vesico-ureteric reflux: the way forward? Objective: To assess the outcome of all detrusorrhaphies carried out for simple single-system vesico-ureteric reflux (VUR) in children at one institution over a 4-year period. Patients And Methods: Twenty-nine patients (11 boys, 18 girls, 43 renal units; mean age at presentation 23 months; range, antenatal to 72 months) were analysed in two groups. Group 1 comprised those undergoing asynchronous bilateral procedures (two patients, four units) and unilateral procedures (15 patients, 15 units), and group 2, those undergoing synchronous bilateral procedures (12 patients, 24 units). The mean (range) follow-up was 17 months (6-39) and 15 months (7-24), respectively. Results: Three patients had antenatal hydronephrosis related to VUR post-natally and 26 had urinary tract infections (13 recurrent, one with haematuria, seven 'breakthrough', one with calculi and four with enuresis). The mean age at operation was 54 months (range 14-167). The mean (range) duration of anaesthesia was 69 min (40-120) in group 1 and 80 min (65-120) in group 2. All patients were catheterized urethrally for a mean (range) duration of 3 days (2-4) in group 1 and 5 days (2-15) in group 2 and the mean hospital stay was 3 days (2-6) and 6 days (4-16), respectively. Post-operative complications occurred in 14 patients including one bladder spasm, five urinary tract infections, two with urinary retention, three with haematuria and one each of pneumonia, epididymo-orchitis, anuria, failure to stent and conversion to Cohen reimplantation. Operative success and clinical success were similar within groups but differed between groups, although not significantly (15 of 17 in group 1 and eight of 12 in group 2). Conclusion: Unilateral but not synchronous bilateral detrusorrhaphy seems an appropriate surgical treatment for VUR. abstract_id: PUBMED:18076937 Vesicoureteral reflux: the RIVUR study and the way forward. N/A abstract_id: PUBMED:11832701 The use of fibrin sealant in urology. Purpose: Fibrin sealant has been increasingly applied in various surgical fields, including urological surgery, in the last 2 decades. We determined the safety and efficacy of fibrin sealant in urological surgery and identified areas that need further clinical investigation. Materials And Methods: A MEDLINE search of all available literature regarding the use of fibrin sealant was performed. All articles, including experimental animal studies, prospective and retrospective studies, case series and case reports of fibrin sealant for hemostasis and/or other urological applications, were identified and reviewed. Results: Prospective randomized studies in the field of thoracic and trauma surgery show the efficacy and safety of fibrin sealant for hemostasis. Based on these data fibrin sealant has been used successfully for hemostasis during partial nephrectomy and traumatic renal reconstruction. A number of experimental animal studies, case series and case reports show the efficacy of fibrin sealant for ureteral anastomosis, microsurgical vasal anastomosis, fistula repair, circumcision and orchiopexy as well as it use as an adjunct in other areas of reconstruction. Conclusions: Fibrin sealant is an effective and safe topical agent for controlling surface bleeding during elective and trauma related urological procedures. Using fibrin sealant as an adhesive for reconstruction requires further prospective studies. The introduction of laparoscopic procedures in urology may expand the indications for fibrin sealant as an alternative method of tissue reapproximation. Limiting the routine use of fibrin sealant to procedures with demonstrable benefits is desirable and would lead to a cost saving approach. abstract_id: PUBMED:31890416 The Use of Contrast-enhanced Ultrasound in Pediatrics: A Case Series. As the use of contrast-enhanced ultrasound (CEUS) is still relatively novel but quickly expanding, we would like to present a series of clinical cases where CEUS was used in diagnostics of pediatric patients. The presented cases include kidney trauma, non-Hodgkin's lymphoma with secondary liver deposits, renal cortical cysts, retropharyngeal abscess, necrotizing pancreatitis, and vesicoureteral reflux. The wide range of the presented cases demonstrates CEUS' multipurpose use, which, together with its other useful attributes, especially its favorable safety profile, makes it an excellent diagnostic tool. abstract_id: PUBMED:25811682 Anti-Reflux Ureteral Stent with Polymeric Flap Valve Using Three-Dimensional Printing: An In Vitro Study. Purpose: This article aims to describe the design of an anti-reflux ureteral stent with a polymeric flap valve and the fabrication methods using three-dimensional (3D) printing. The stent effectively prevents backward flow with a negligible reduction in forward flow. Fabrication of miniaturized valves was easy with high precision and rapid prototyping. Materials And Methods: The proposed stent comprised a 7F Double-J (DJ) stent and a polymeric flap valve. The valve was made of Tango Plus FLX980 and was fabricated using a 3D printer. Two types of stent were prepared for in vitro tests: DJ stents with (1) an uncoated valve (UCV) stent and (2) a parylene C coated valve (PCV) stent for enhanced biocompatibility. The flow characteristics of each stent were evaluated considering flow direction, parylene coating, and stent side holes, and were compared to the intact DJ stent. Results: The forward flow rate for the distal portion of the UCV and PCV stents was 9.8 mL/min and 7.8 mL/min at applied pressure of 15 cm H2O (normal anterograde pressure in patients with stents), respectively. Backward flow rate for the distal portion of the UCV and PCV stents was decreased by 28 times and 8 times at applied pressure of 50 cm H2O (maximum bladder pressure), respectively, compared with the distal portion of the intact DJ stent. Forward flow rates of whole stents were 22.2 mL/min (UCV stent) and 20.0 mL/min (PCV stent) at applied pressure of 15 cm H2O, and backward flow rates of whole UCV and PCV stents were decreased by 8.3 times and 4.0 times at applied pressure of 50 cm H2O, respectively, compared with the intact DJ stent. Conclusions: The anti-reflux ureteral stent was successfully designed and fabricated using a 3D printer. In vitro studies showed that the stent effectively prevented backward flow while minimizing reduction in forward flow. abstract_id: PUBMED:15175890 The use of ureteral stents and suprapubic catheter in vesicoureteric reflux surgery. The use of ureteric stents in reimplantation surgery is important. The younger the patient, the more important the stenting of ureters post reimplantation becomes, because even minimal oedema following surgery will produce ureteric obstruction unless stents are in place. JJ stents are now the preferred method of choice in ureteric reimplantation surgery, but in the past the patient required another admission to hospital and general anaesthetic to have the stents removed endoscopically. We describe a technique whereby the stents are attached to the suprapubic catheter and are therefore removed prior to the patient's discharge from hospital, thus obviating the need for a second admission and second anaesthetic for the stent removal. We have studied 23 patients with this technique and find that it is a reliable and safe method to use. abstract_id: PUBMED:23466044 Practice patterns among pediatric urologists in the use of Deflux® for vesicoureteral reflux: a survey. Objectives: This study aims to assess pediatric urology practice patterns and factors which influence the use of Deflux(®) in the management of vesicoureteral reflux among pediatric urologists. Methods: A 11-question survey was sent out to 476 pediatric urologists who are members of the Society for Pediatric Urology. Results: 23.7% of pediatric urologists use Deflux(®) as first line therapy for Grade III reflux or higher. The presence of renal scarring is not a deterrent to the use of Deflux(®). 17.7% would use Deflux(®) before a trial of observation with or without chemoprophylaxis. In children who are on observation, 20.3% would perform Deflux(®) when they are at an age considered appropriate for surgery as opposed to continued observation. The majority of pediatric urologists cite Deflux(®) success rates of &gt;70% to &gt;80% for Grades II-III and &gt;50% to &gt;60% for Grades IV-V. 23.3% of respondents indicated that new evidence citing low long-term success rates at one year decreased their use of Deflux(®). 59.8% of respondents indicated they would perform a second injection after an initial failure. Ultrasound and VCUG are used as follow-up in 86.9% and 65.4% respectively after Deflux(®); the majority are performed within the first 3 months, rarely at one year. Conclusion: The use of Deflux(®) is growing and whether it surpasses open reimplantation and chemoprophylaxis as first-line therapy remains to be seen. With new literature showing lower success rates, long-term follow-up with repeat imaging may be required. abstract_id: PUBMED:9393307 The use of a multipurpose stent in children. Objectives: To assess the use of a multipurpose stent (the 'Blue stent', Angiomed Urosoft Pyeloplasty Stent, Bard, UK) in children undergoing pyeloplasty and ureteric reimplantation. Patients And Methods: Between August 1994 and August 1996, the Blue stent was used in 50 renal units in 46 children aged 2 months to 12 years and 6 months. Twenty-five children underwent pyeloplasty, 11 had ureteric reimplantation for vesico-ureteric reflux (VUR), eight had ureteric reimplantation with remodelling for obstructed megaureters and in two patients it was used during the removal of stones. The mean follow-up was 18 months (range 6-30 months). Results: After pyeloplasty, 22 patients (88%) had improved renal function and drainage with a decrease in hydronephrosis; two patients had a decrease in hydronephrosis only, one had an anastomotic leak and needed a repeat pyeloplasty and four developed a urinary tract infection (UTI). After ureteric reimplantation, VUR was not detected in any patient. Two patients had no change in drainage after remodelling and reimplantation of a megaureter, one was later diagnosed as having a neuropathic bladder and one child developed a UTI after ureteric reimplantation. The hospital stay was 3 days after pyeloplasty and 5 days after reimplantation. Conclusion: The design of the multipurpose Blue stent provides versatility; it can be used as a stent, and both an internal and external drain. Its use does not prolong the hospital stay. Insertion causes minimal trauma to the renal parenchyma, and removal is easy, pain-free and requires no anaesthesia. The complication rates in the present series compare favourably with other reported series. abstract_id: PUBMED:28866466 Impact of Clinical Guidelines on Voiding Cystourethrogram Use and Vesicoureteral Reflux Incidence. Purpose: To prevent over diagnosis and overtreatment of vesicoureteral reflux the 2007 NICE (National Institute for Health and Care Excellence) and 2011 AAP (American Academy of Pediatrics) guidelines recommended against routine voiding cystourethrograms in children presenting with first febrile urinary tract infections. The impact of these guidelines on clinical practice is unknown. Materials And Methods: Using an administrative claims database (Clinformatics™ Data Mart) children who underwent voiding cystourethrogram studies or had a diagnosis of vesicoureteral reflux between 2001 and 2015 were identified. The cohort was divided into children age 0 to 2 and 3 to 10 years. Single and multiple group interrupted time series analyses (difference-in-difference) were performed with the guidelines as intervention points. The incidence of vesicoureteral reflux was compared across each period. Results: Of the 51,649 children who underwent voiding cystourethrograms 19,422 (38%) were diagnosed with vesicoureteral reflux. In children 0 to 2 years old voiding cystourethrogram use did not decrease after the 2007 NICE guidelines were announced (-0.37, 95% CI -1.50 to 0.77, p = 0.52) but did decrease significantly after the 2011 AAP guidelines were announced (-2.00, 95% CI -3.35 to -0.65, p = 0.004). Among children 3 to 10 years old voiding cystourethrogram use decreased during the entire study period. There was a decrease in the incidence of vesicoureteral reflux in both groups that mirrored patterns of voiding cystourethrogram use. Conclusions: The 2011 AAP guidelines led to a concurrent decrease in voiding cystourethrogram use and incidence of vesicoureteral reflux among children 0 to 2 years old. Further studies are needed to assess the risks and benefits of reducing the diagnosis of vesicoureteral reflux in young children. Answer: The abstract from PUBMED:9202568 discusses the use of detrusorrhaphy for vesico-ureteric reflux (VUR) in children, assessing the outcomes of the procedure over a 4-year period at a single institution. The study included 29 patients and analyzed the results in two groups based on whether they underwent asynchronous bilateral procedures, unilateral procedures, or synchronous bilateral procedures. The study found that unilateral detrusorrhaphy seemed to be an appropriate surgical treatment for VUR, but synchronous bilateral detrusorrhaphy did not show the same level of appropriateness. Post-operative complications occurred in 14 patients, including urinary tract infections, urinary retention, and haematuria, among others. The study concluded that unilateral detrusorrhaphy could be considered a suitable surgical treatment for VUR. While the abstract does not explicitly state that detrusorrhaphy is "the way forward," it suggests that unilateral detrusorrhaphy is a viable option for treating VUR. However, it is important to note that the study did not find synchronous bilateral detrusorrhaphy to be as appropriate, indicating that the approach to detrusorrhaphy may need to be tailored to the individual patient's condition and needs. Further research and comparison with other treatment modalities would be necessary to determine if detrusorrhaphy should be considered the preferred approach for VUR treatment moving forward.
Instruction: Are the current MRI criteria using the DWI-FLAIR mismatch concept for selection of patients with wake-up stroke to thrombolysis excluding too many patients? Abstracts: abstract_id: PUBMED:25888410 Are the current MRI criteria using the DWI-FLAIR mismatch concept for selection of patients with wake-up stroke to thrombolysis excluding too many patients? Background: Up to 25% of stroke patients wake up with a neurological deficit, so called wake-up stroke (WUS). Different imaging approaches that may aid in the selection of patients likely to benefit from reperfusion therapy are currently under investigation. The magnetic resonance imaging (MRI) diffusion weighted imaging - fluid attenuated inversion recovery (DWI-FLAIR) mismatch concept is one proposed method for identifying patients presenting within 4.5 hours of the ischemic event. Purpose: To report our experience with the DWI-FLAIR mismatch concept for selection of wake-up stroke patients to be thrombolysed at our centre. Material And Methods: Patients treated with off label intravenous thrombolysis (IVT) for WUS at our centre during a 6.5-month period were included. We performed MRI including DWI and FLAIR in all patients at admission. Each MRI examination was rated as either DWI-FLAIR mismatch or match. National Institutes of Health Stroke Scale (NIHSS) and modified Rankin Scale were used to measure clinical outcome. Cerebral computed tomography (CT) or MRI was performed within 24 hours after thrombolysis to determine the presence of any intracranial haemorrhage (ICH). Results: Ten patients treated with IVT for WUS were included. Four patients had a DWI-FLAIR mismatch and after IVT treatment the mean reduction in NIHSS in the DWI-FLAIR mismatch group was 4.0. In the DWI-FLAIR match group the mean reduction in NIHSS after IVT therapy was 4.8. None of the ten patients had any signs of ICH on follow-up imaging. Conclusions: In this small series DWI-FLAIR mismatch was not associated with worse outcome or ICH. This suggests that selecting WUS patients using DWI-FLAIR mismatch in clinical trials may exclude a large group of patients who might benefit. abstract_id: PUBMED:34189656 Implementation of emergent MRI for wake-up stroke: a single-center experience. Purpose: Recent updates in national guidelines for management of acute ischemic stroke in patients of unknown time of symptom onset ("wake-up" strokes) incorporate, for the first time, use of emergent MRI. In this retrospective case series, we analyze our experience at a Comprehensive Stroke Center implementing a new workflow including MRI in this clinical setting. This study also describes "DWI-FLAIR" mismatch, a critical concept for the interpretation of emergent brain MRIs performed for wake-up strokes. Methods: Over a 14-month period, all brain MRIs for wake-up stroke were identified. The imaging was analyzed by two board-certified, fellowship-trained neuroradiologists, and a diagnosis of DWI-FLAIR mismatch was made by consensus. Process metrics assessed included interval between last known well time and brain imaging, interval between CT and MRI, and interval between brain MRI and interpretation. Results: Sixteen patients with a history of "wake-up stroke" were identified. Thirteen of the 16 patients (81.3%) were found to have a DWI-FLAIR mismatch, suggesting infarct &lt; 4.5 h old. The mean time between last known well and MRI was 7.89 h with mean interval between CT and MRI of 1.83 h. Forty-six percent of patients with DWI-FLAIR mismatch received intravenous thrombolysis. Conclusion: In this "real world" assessment of incorporation of emergent MRI for wake-up strokes, there were several key factors to successful implementation of this new workflow, including effective and accurate description of MRI findings; close collaboration amongst stakeholders; 24/7 availability of MRI; and 24/7 onsite coverage by neurology and radiology physicians. abstract_id: PUBMED:30415393 Modified DWI-FLAIR mismatch guided thrombolysis in unknown onset stroke. DWI-FLAIR mismatch has been recently proven to identify patients with unknown onset stroke (UOS) eligible for thrombolysis. However, this concept may exclude patients from thrombolysis who may eventually benefit as well. We aimed to examine the feasibility, safety and potential efficacy of thrombolysis in wake-up stroke (WUS) and UOS patients using a modified DWI-FLAIR mismatch allowing for partial FLAIR positivity. WUS/UOS patients fulfilling the modified DWI-FLAIR mismatch and treated with intravenous thrombolysis (IVT) were compared to propensity score matched WUS/UOS patients excluded from IVT due to FLAIR positivity. The primary endpoint was a symptomatic intracranial hemorrhage (SICH), the secondary endpoints were improvement of ≥ 4 in NIHSS score and mRS score at 3 months. 64 IVT-treated patients (median NIHSS 9) and 64 controls (median NIHSS 8) entered the analysis (p = 0.2). No significant difference in SICH was found between the IVT group and the controls (3.1% vs. 1.6%, p = 0.9). An improvement of ≥ 4 NIHSS points was more frequent in IVT patients as compared to controls (40.6% vs. 18.8%, p = 0.01). 23.4% of IVT patients achieved a mRS score of 0-1 at 3 months as compared to 18.8% of the controls (p = 0.8). SICH, improvement of NIHSS ≥ 4 and mRS 0-1 at 3 months were comparable in thrombolyzed patients with negative FLAIR images versus those thrombolyzed with partial positive FLAIR images (3% vs. 3%, p = 0.9; 40% vs. 41%, p = 0.9; 19% vs. 22%, p = 0.8). Our study signalizes that thrombolysis may be feasible in selected WUS/UOS patients with partial FLAIR signal positivity. abstract_id: PUBMED:37371006 Artificial Intelligence for Automated DWI/FLAIR Mismatch Assessment on Magnetic Resonance Imaging in Stroke: A Systematic Review. We conducted this Systematic Review to create an overview of the currently existing Artificial Intelligence (AI) methods for Magnetic Resonance Diffusion-Weighted Imaging (DWI)/Fluid-Attenuated Inversion Recovery (FLAIR)-mismatch assessment and to determine how well DWI/FLAIR mismatch algorithms perform compared to domain experts. We searched PubMed Medline, Ovid Embase, Scopus, Web of Science, Cochrane, and IEEE Xplore literature databases for relevant studies published between 1 January 2017 and 20 November 2022, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. We assessed the included studies using the Quality Assessment of Diagnostic Accuracy Studies 2 tool. Five studies fit the scope of this review. The area under the curve ranged from 0.74 to 0.90. The sensitivity and specificity ranged from 0.70 to 0.85 and 0.74 to 0.84, respectively. Negative predictive value, positive predictive value, and accuracy ranged from 0.55 to 0.82, 0.74 to 0.91, and 0.73 to 0.83, respectively. In a binary classification of ±4.5 h from stroke onset, the surveyed AI methods performed equivalent to or even better than domain experts. However, using the relation between time since stroke onset (TSS) and increasing visibility of FLAIR hyperintensity lesions is not recommended for the determination of TSS within the first 4.5 h. An AI algorithm on DWI/FLAIR mismatch assessment focused on treatment eligibility, outcome prediction, and consideration of patient-specific data could potentially increase the proportion of stroke patients with unknown onset who could be treated with thrombolysis. abstract_id: PUBMED:29356273 Unknown-onset strokes with anterior circulation occlusion treated by thrombectomy after DWI-FLAIR mismatch selection. Background And Purpose: The DAWN trial recently showed compelling evidence in treating late window and wake-up stroke patients with thrombectomy using a clinical-imaging mismatch. The aim was to evaluate the results of thrombectomy for unknown-onset strokes (UOS) treated in our centres after a diffusion weighted imaging/fluid attenuated inversion recovery (DWI-FLAIR) mismatch based selection. Methods: A multicentre, cohort study was performed of consecutive UOS treated by thrombectomy between 2012 and 2016. UOS with proximal anterior circulation occlusion discovered beyond 6 h from 'last seen normal' were compared with known-onset strokes (KOS) for whom thrombectomy was started within 6 h from onset. Time intervals were recorded from first time found abnormal. Results were adjusted for age, diabetes, hypertension, National Institutes of Health Stroke Scale, site of occlusion, DWI Alberta Stroke Programme Early CT Score, intravenous thrombolysis and use of general anaesthesia. Results: Amongst 1246 strokes with anterior circulation occlusion treated by thrombectomy, 277 were UOS, with a 'last time seen well' beyond 6 h and DWI-FLAIR mismatch, and 865 were KOS who underwent groin puncture within 6 h. Favourable outcome was achieved less often in UOS than KOS patients (45.2% vs. 53.9%, P = 0.022). After pre-specified adjustment, this difference was not significant (adjusted relative risk 0.91; 95% confidence interval 0.80-1.04; P = 0.17). No differences were found in secondary outcomes. Time intervals from first found abnormal were significantly longer in UOS. Conclusion: Thrombectomy of UOS with anterior circulation occlusion and DWI-FLAIR mismatch appears to be as safe and efficient as thrombectomy of KOS within 6 h from onset. This pattern of imaging could be used for patient selection when time of onset is unknown. abstract_id: PUBMED:24723908 Wake-up stroke: clinical characteristics, imaging findings, and treatment option - an update. About 25% of all strokes occur during sleep, i.e., without knowledge of exact time of symptom onset. According to licensing criteria, this large group of patients is excluded from treatment with received tissue-plasminogen activator, the only specific stroke treatment proven effective in large randomized trials. This paper reviews clinical and imaging characteristics of wake-up stroke and gives an update on treatment options for these patients. From clinical and imaging studies, there is evidence suggesting that many wake-up strokes occur close to awakening and thus, patients might be within the approved time-window of thrombolysis when presenting to the emergency department. Several imaging approaches are suggested to identify wake-up stroke patients likely to benefit from thrombolysis, including non-contrast CT, CT-perfusion, penumbral MRI, and the recent concept of diffusion weighted imaging-fluid attenuated inversion recovery (DWI-FLAIR). A number of small case series and observational studies report results of thrombolysis in wake-up stroke, and no safety concerns have occurred, while conclusions on efficacy cannot be drawn from these studies. To this end, there are ongoing clinical trials enrolling wake-up stroke patients based on imaging findings, i.e., the DWI-FLAIR-mismatch (WAKE-UP) or penumbral imaging (EXTEND). The results of these trials will provide evidence to guide thrombolysis in wake-up stroke and thus, expand treatment options for this large group of stroke patients. abstract_id: PUBMED:37084592 FLAIR vascular hyperintensity-DWI mismatch and DWI-FLAIR mismatch ASPECTS for prediction of good outcome after recanalization in anterior circulation stroke; multicenter observational study. Introduction: Acute ischemic stroke is a medical emergency caused by decreased blood flow to the brain, leading cause of long-term disability. Recanalization, one of the most concerning difficulties linked with intracranial arterial occlusion, has been used to reduce mortality in ischemic stroke treatment. The mismatch concepts MR PWI-DWI or DWI-FLAIR can help identify patients for thrombolysis. Purpose: This paper introduces a novel method of predicting revascularization using the value of fluid-attenuated inversion recovery vascular hyperintensity FVH-DWI mismatch and DWI-FLAIR mismatch, which releases anterior circulation large vessel occlusion (LVO) after endovascular thrombectomy (EVT). Moreover, we present a new scoring system following anatomical region distributed for MCA territory called a DWI-FLAIR MISMATCH ASPECTS. Result: Statistical analysis was performed to predict revascularization and functional outcome with 110 patients with anterior circulation LVO treated with EVT. We found that FVH-DWI mismatch was present in 71 patients (89.9 %) with complete revascularization and present in 8 patients (10.1 %) with no/partial revascularization, which had no significant difference (p = 0.12), and there was no significant difference between good functional outcome and poor functional outcome. Moreover, in 76 patients with DWI-FLAIR mismatch ASPECTS of &gt; 6 point-group, present FVH-DWI mismatch in 57 patients (83.8 %) with complete revascularization had a significant difference as compared to 11 patients (16.2 %) with absent FVH-DWI mismatch (p &lt; 0.05). The clinical outcome in complete revascularization is better than no/partial revascularization, and complete revascularization is independently associated with good functional outcomes (p &lt; 0.05). Conclusion: FVH-DWI mismatch paired with DWI-FLAIR mismatch ASPECTS &gt; 6 points may be possible to predict revascularization in patients with anterior circulation LVO. abstract_id: PUBMED:36506404 DWI-FLAIR mismatch guided thrombolysis in patients without large-vessel occlusion: real-world data from a comprehensive stroke centre. Introduction: A significant proportion of ischaemic stroke patients present with unknown symptom onset time. DWI-FLAIR mismatch on MRI can help to identify those eligible for thrombolysis. We set out to analyse the short-term efficacy and safety of thrombolysis in a real-world setting. Methods: A retrospective single-centre observational study was conducted. We collected data between January 2017 and April 2020. Patients with a large vessel occlusion (LVO) were excluded. Outcomes were compared between thrombolysed patients and those who did not receive alteplase due to lack of DWI-FLAIR mismatch or other contraindications. We analysed baseline and discharge NIHSS scores for efficacy and defined good outcome as any neurological improvement (ANI) on the NIHSS. In terms of safety, the presence and severity of intracerebral haemorrhage on follow-up imaging was analysed, and mortality at 90 days assessed. Results: Seventy-one patients were included in this study, of whom 29 received thrombolysis. Significantly more patients had ANI in the thrombolysed group (OR, 3.16; 95% CI, 1.178-8.479; p = 0.020). In a multivariable logistic regression analysis, only thrombolysis correlated with ANI (OR, 3.051; 95% CI, 1.135-8.206; p = 0.027). Two thrombolysed patients suffered intracerebral haemorrhage (6.90%), of whom one was symptomatic and eventually fatal. We did not find a significant difference in 90-day mortality between the two groups (OR, 0.81, 95% CI, 0.134-4.856; p = 1.000). Conclusions: Our real-world data demonstrate that thrombolysis based on DWI-FLAIR mismatch in patients without LVO has an early beneficial effect. The rate of intracerebral haemorrhage was similar to this complication reported in large thrombolysis trials with known onset times. abstract_id: PUBMED:33598555 The impact of the DWI-FLAIR-mismatch in the ECASS-4 trial - A post hoc analysis. Introduction: To investigate the impact of a mismatch between diffusion-weighted imaging (DWI) and fluid attenuated inversion recovery (FLAIR) on functional outcome in patients with acute stroke in a prolonged time window or unknown time of symptom onset randomized to intravenous thrombolysis or placebo. Patients And Methods: We performed a post-hoc analysis of the European Cooperative Acute Stroke Study-4 (ECASS-4) trial. ECASS-4 was an investigator driven, phase 3, multi-center, double-blind, placebo-controlled study which randomized ischemic stroke patients presenting within 4.5 and 9h of stroke onset or unknown time-window to either rt-PA or placebo after MR-imaging. Two subgroups "no mismatch" (nMM) and "any mismatch" (aMM) were created by applying a DWI-FLAIR-mismatch criterion. We calculated frequency of nMM and aMM and performed a univariate analysis (Fisher's Test) for excellent clinical outcome (mRS 0-1) and mortality (mRS=6). Results: MR-Imaging of n=111/119 (93.2%) patients was suitable for this analysis. DWI-FLAIR mismatch was found in 49 patients (44.1%). Proportions of mismatch nMM and aMM were comparable in treatment-groups (aMM: Placebo 46.3%, Alteplase 42.1%; p=0.70). Patients with nMM showed no benefit of rt-PA-treatment (OR (95%CI) mRS 0-1: 0.95 (0.29-3.17)). Patients with aMM showed a point estimate of the odds ratio in favour of a treatment benefit of rt-PA (mRS 0-1: OR (95%CI) 2.62 (0.68-11.1)). Mortality within 90 days was not different in patients treated with rt-PA if nMM (15.2%) or aMM (12.5%) was present. Discussion: In this analysis no significant evidence, but subtle indication towards patients treated with rt-PA in a prolonged time window reaching an excellent clinical outcome if a DWI-FLAIR-mismatch is present on initial stroke MR-imaging. Conclusion: A DWI-FLAIR mismatch in the region of ischemia as imaging based surrogate parameter for patient selection for i.v. rt-PA should be strongly pursued. abstract_id: PUBMED:35228801 Use of DWI-FLAIR Mismatch to Estimate the Onset Time in Wake-Up Strokes. Purpose: To compare the MRI characteristics of patients with wake-up ischemic stroke (WUS) and with ischemic stroke with known onset time (clear-onset-time stroke, COS) to clarify the role of diffusion-weighted imaging-fluid-attenuated inversion recovery (DWI-FLAIR) mismatch in estimating the onset time of WUS patients. Patients And Methods: Two hundred patients with acute ischemic stroke were selected for complete brain MRI within six hours of symptom onset, including DWI and FLAIR sequences. The patients were divided into WUS (n = 78) and COS (n = 122) groups, based on whether the time of onset was known. The general conditions and imaging characteristics were collected to compare the DWI-FLAIR mismatch features between the two groups at different time intervals. Results: There was no significant difference in the DWI-FLAIR mismatch on MRI within 2 hour after the first found abnormality between the two groups (50.0% vs 71.8%, p = 0.180). With increasing time, the DWI-FLAIR mismatch decreased substantially in the WUS group, while a higher DWI-FLAIR mismatch presence persisted in the COS group within a four-hour interval from the onset of symptoms to the MRI. The DWI-FLAIR mismatch was significantly lower in the WUS group than in the COS group from symptom identification to MRI at 2-3 h, 3-4 h, and 4-5 h intervals (15% vs 60%, 10.5% vs 48%, 6.7% vs 45.4%; p &lt; 0.01). Conclusion: Our results suggest that the presence of DWI-FLAIR mismatch within 2 h of the first found abnormality was not significantly different between WUS and COS. Therefore, Patients with WUS within 2 hours after the first detected abnormality may be suitable for intravenous thrombolysis. Answer: The current MRI criteria using the DWI-FLAIR mismatch concept for the selection of patients with wake-up stroke (WUS) for thrombolysis may indeed be excluding a significant number of patients who could potentially benefit from treatment. Studies have suggested that the DWI-FLAIR mismatch approach might be too restrictive and could lead to the exclusion of patients who might still have a favorable response to thrombolysis. For instance, one study reported that in a small series, DWI-FLAIR mismatch was not associated with worse outcomes or intracranial hemorrhage (ICH), suggesting that using this criterion in clinical trials may exclude many patients who could benefit from thrombolysis (PUBMED:25888410). Another study found that using a modified DWI-FLAIR mismatch allowing for partial FLAIR positivity did not result in a significant difference in symptomatic intracranial hemorrhage (SICH) compared to controls, and thrombolysis may be feasible in selected WUS/unknown onset stroke (UOS) patients with partial FLAIR signal positivity (PUBMED:30415393). Moreover, a systematic review of Artificial Intelligence (AI) methods for DWI/FLAIR mismatch assessment indicated that AI algorithms could potentially increase the proportion of stroke patients with unknown onset who could be treated with thrombolysis (PUBMED:37371006). Additionally, a multicenter observational study suggested that the FLAIR vascular hyperintensity-DWI mismatch and DWI-FLAIR mismatch ASPECTS could predict good outcomes after recanalization in anterior circulation stroke (PUBMED:37084592). Furthermore, a retrospective single-center observational study demonstrated that thrombolysis based on DWI-FLAIR mismatch in patients without large vessel occlusion (LVO) has an early beneficial effect, with a rate of intracerebral hemorrhage similar to that reported in large thrombolysis trials with known onset times (PUBMED:36506404). Lastly, a post-hoc analysis of the ECASS-4 trial indicated a subtle indication towards patients treated with rt-PA in a prolonged time window reaching an excellent clinical outcome if a DWI-FLAIR mismatch is present on initial stroke MR-imaging (PUBMED:33598555).
Instruction: Does pregnancy and/or shifting positions create more room in a woman's pelvis? Abstracts: abstract_id: PUBMED:24949546 Does pregnancy and/or shifting positions create more room in a woman's pelvis? Objective: The purpose of this study was to assess the impact of different positions on pelvic diameters by comparing pregnant and nonpregnant women who assumed a dorsal supine and kneeling squat position. Study Design: In this cohort study from a tertiary referral center in Germany, we enrolled 50 pregnant women and 50 nonpregnant women. Pelvic measurements were obtained with obstetric magnetic resonance imaging pelvimetry with the use of a 1.5-T scanner. We compared measurements of the depth (anteroposterior (AP) and width (transverse diameters) of the pelvis between the 2 positions. Results: The most striking finding was a significant 0.9-1.9 cm increase (7-15%) in the average transverse diameters in the kneeling squat position in both pregnant and nonpregnant groups. The average bispinous diameter in the pregnant group increased from 12.6 cm ± 0.65 cm in the supine dorsal to 14.5 cm ± 0.64 cm (P &lt; .0001) in the kneeling squat; in the nonpregnant group the increase was from 12 cm ± 0.76 cm to 13.9 cm ± 1.04 cm (P &lt; .0001). The average bituberous diameter in the pregnant group increased from 13.6 cm ± 0.93 cm in the supine dorsal to 14.5 cm ± 0.83 cm (P &lt; .0001) in the kneeling squat position; in the nonpregnant women the increase was from 12.6 cm ± 0.92 cm to 13.5 cm ± 0.88 cm (P &lt; .0001). Conclusion: A kneeling squat position significantly increases the bony transverse and anteroposterior dimension in the mid pelvic plane and the pelvic outlet. Because this indicates that pelvic diameters change when women change positions, the potential for facilitation of delivery of the fetal head suggests further research that will compare maternal delivery positions is warranted. abstract_id: PUBMED:33816030 Dynamic External Pelvimetry Test in Third Trimester Pregnant Women: Shifting Positions Affect Pelvic Biomechanics and Create More Room in Obstetric Diameters. Dystocia in labor is still a clinical challenge. The "contracted pelvis" is the absence of pelvic mobility, which leads to fetal-pelvic disproportion, obstructed labor, and operative delivery. Maternal pelvis biomechanics studies by high technological techniques have shown that maternal shifting positions during pregnancy and labor can create more room in the pelvis for safe delivery. The external and internal pelvic diameters are related. The present study aims to evaluate the external obstetric pelvic diameters in shifting positions using a clinical technique suitable for daily practice in every clinical setting: the dynamic external pelvimetry test (DEP test). Seventy pregnant women were recruited, and the obstetric external pelvic diameters were measured, moving the position from kneeling standing to "hands-and-knees" to kneeling squat position. Results showed modification of the pelvic diameters in shifting position: the transverse and longitudinal diameters of Michaelis sacral area, the inter-tuberosities diameter, the bi-trochanters diameter, and the external conjugate widened; the bi-crestal iliac diameter, the bi-spinous iliac diameter, and the base of the Trillat's triangle decreased. The test showed good reproducibility and reliability. Linear correlations were found between diameters and between the range of motion of the diameters. The maternal pelvis is confirmed to modify the diameters changing its tridimensional shape. The pelvic inlet edge's inclination is inferred to be modified, facilitating the fetal descend. The pelvic outlet enlarged the transverse diameter, facilitating birth. The DEP test estimates the pelvic diameters' modification with postural changes, as magnetic resonance (MR) and computational biomechanics studies have demonstrated. abstract_id: PUBMED:33207288 Measurement of pubic symphysis width in different birthing positions using ultrasound. Studies have shown that some vertical birthing positions, such as squatting and standing, may influence labour and childbirth. However, a rigorous biomechanical understanding of how different positions affect a woman's pelvis during labour is currently lacking. It was hypothesized that the position of pubic bones is affected in different positions, and thus the pubic symphysis width changes. Therefore, we measured pubic symphysis width on eleven women in their seventh to ninth months of pregnancy in different positions through three-dimensional (3D) reconstruction of pubic symphysis ultrasound images. Positions studies included those used in clinics like supine and lithotomy positions, in addition to squatting and standing, which are recommended by midwives. The results showed that the average superior and inferior pubic symphysis widths in lithotomy, squatting and standing positions were approximately 1 mm larger than those in the supine position. However, no significant difference between lithotomy, squatting and standing positions was found. The results were consistent with those of two comparable studies. A significant difference was only found between standing and squatting, standing and lithotomy in the rotation angle about the medio-lateral axis. The results suggest that positions with thigh hyperflexion and gravity effect make a difference in pubic symphysis width and might have a positive effect in childbirth. However, due to the small sample size, no definite conclusion can be drawn. abstract_id: PUBMED:2239585 Transitional cell carcinoma of the renal pelvis in a woman discovered at childbirth We report a case of transitional cell carcinoma of a renal pelvis in a woman discovered after childbirth. A 38-year-old woman, who delivered an immature male infant in cesarean section 29 days prior to hospitalization, was admitted complaining of asymptomatic gross hematuria. Excretory urography and retrograde pyelography showed a filling defect of the right renal pelvis, Spontaneous urine cytology indicated class 5. Renal computed tomographic scan demonstrated a mass lesion in the right kidney. Right total nephroureterectomy and partial cystectomy was performed for diagnosis and treatment. Pathological diagnosis was papillary transitional cell carcinoma (grade 2). This is the first case of transitional cell carcinoma of the renal pelvis occurring in a childbearing woman in Japan. abstract_id: PUBMED:28403977 New approaches for assessing childbirth positions. Background: An overview of labor based only on epidemiological data cannot identify or explain the mechanisms involved in childbirth. Data about the position that women should take in giving birth are discordant. None of the studies of birth positions adequately define or describe them or their biomechanical impact (pelvic orientation, position of the back). The measurement of the effect of one position relative to that of another requires precise definitions of each position and of their maternal biomechanical consequences, as well as safe measurement methods. Methodology: We have developed a system to analyze the position of labor by quantifying the posture of the woman's body parts (including thighs, trunk, and pelvis), using an optoelectronic motion capture device (Vicon™, Oxford Metrics) widely used in human movement analysis and a system for measuring the lumbar curve (Epionics spine system). A specific body model has also been created to conduct this biomechanical analysis, which is based on external markers. With this methodology and model, it should be possible to define: (1) the hip joint angles (flexion/extension, abduction/adduction, internal/external rotation); (2) the ante/retroversion of the pelvis; (3) the lumbar curve. Discussion: This methodology could become a reference for assessing delivery postures, one that makes it possible to describe the relation between the postures used in the delivery room and their impact on the pelvis and the spine in an integrated and comprehensive model. Trial Registration: No. Eudract 2013-A01203-42. abstract_id: PUBMED:37223195 Alternative Birthing Positions Compared to the Conventional Position in the Second Stage of Labor: A Review. The position in which the woman delivers has a lot of impact on the ease of delivery. Women's satisfaction with their birthing experience and the care they receive is significantly impacted by the fact that giving birth is frequently a challenging experience. Birthing positions refer to various postures which can be assumed at the time of delivery by a pregnant woman. Currently, the majority of women give birth either while lying flat on their backs or in a semi-sitting position. Upright positions, which include standing, sitting, or squatting along with side-lying and hands-and-knees, are less common birth positions. Doctors, nurses, and midwives are among the most important healthcare professionals, having a significant influence in deciding which position the woman will give birth in and on the physiological and psychological effects of the experience of a woman in labor. There isn't much research to back up the best position for mothers during the second stage of labor. This review article aims to review and compare the advantages and risks of common birthing positions and know about the knowledge of alternative birthing positions among pregnant women. abstract_id: PUBMED:12389678 Rupture of the renal pelvis associated with intravenous fluid bolus. Previously reported cases of spontaneous rupture of the kidney or ureter reflect underlying renal pathology and have been reported both spontaneously and in relationship to delivery, but not in relation to other operative procedures. A 27-year-old woman at 19 weeks' gestation developed severe right flank pain in the operating room immediately prior to cerclage placement. Postoperative renal ultrasound examination and intravenous pyelogram performed to evaluate persistent flank pain demonstrated renal pelvis rupture. No other renal abnormality was present. We postulated that increased urine flow from the fluid bolus for the spinal anesthetic precipitated the rupture. abstract_id: PUBMED:37546496 Examining the impact of upright and recumbent positions on labor outcomes in Saudi Arabia: A quasi-experiment. Background: Helping the woman to adopt a comfortable position during childbirth significantly affects labor changes and pain management. However, there is a lack of consensus on the impacts of different childbirth positions on labor outcomes. In addition, a scarce of studies have assessed the effects of the upright and recumbent positions on delivery outcomes, especially in the Kingdom of Saudi Arabia. Objective: This study aimed to examine and compare the influence of upright and recumbent childbirth positions on birth outcomes. Methods: The setting was the childbirth unit at East Jeddah Hospital from November 2020 to March 2021. The research design was quasi-experimental, including 300 women in labor under 18-45 years. The sample includes two equal groups of 150 women: upright (experiment) and recumbent (comparison) position groups. Three tools were used to collect data: a structured interviewing questionnaire, the modified WHO partograph, and the Wong-Baker FACES® pain rating scale. Descriptive statistics, chi-square, independent t-test, and paired t-test were used for data analysis. Results: Women in the recumbent position spent a longer duration in the first, second, and third stages of childbirth and had higher pain scores and less satisfaction with the assumed position than women in the upright position, with a highly significant difference (p &lt;0.001). Conclusion: Laboring women in upright positions experienced faster progress of labor, shorter duration of childbirth, less pain, and higher satisfaction than those assumed recumbent positions. This study serves as an input for midwives and nurses to enable them to offer appropriate advice to improve intrapartum care. In addition, educational programs targeting pregnant women about the positive impacts of upright position on women's birth experience are encouraged. abstract_id: PUBMED:560740 Duplication of the renal pelvis and blind-ending bifid ureter in twins. Blind-ending bifid ureter in a woman with urinary tract infection and a duplicated renal pelvis in her twin sister are reported. The blind-ending bifid ureter terminated as a fibrous cord with a terminal mass of immature renal tissue. This type of ureter may be a transitional form between histologically blind-ending bifid ureter and duplication of the renal pelvis and/or kidneys. Findings in twins support this opinion. abstract_id: PUBMED:18580844 Pregnancy in a woman with treated bladder extrophy, split pelvis and hypoplasia of ischial bones. Case report. We present a case of a pregnancy in a 24 year old woman who was born with ectopia vesicae, split pelvis and hypoplasia of ischial bones. From childhood to adulthood she had undergone reconstructive surgeries of the abdomen and perineum, as well as urine diversion surgery. During pregnancy she experienced recurrent urinary tract infections which were treated with antibiotics. In spite of tocolysis, she delivered by cesarean section a premature baby boy at 35th week of gestation due to premature uterine contractions. The postoperative period was uneventful and they were discharged from the hospital in a good general condition. Answer: Yes, pregnancy and shifting positions can create more room in a woman's pelvis. Studies have shown that different positions assumed by pregnant women can significantly impact the dimensions of the pelvic diameters. For instance, a study found that a kneeling squat position significantly increases the bony transverse and anteroposterior dimension in the mid pelvic plane and the pelvic outlet in both pregnant and nonpregnant women (PUBMED:24949546). Another study using the dynamic external pelvimetry test (DEP test) confirmed that shifting positions during pregnancy and labor can modify the external obstetric pelvic diameters, creating more room in the pelvis for safe delivery (PUBMED:33816030). Additionally, measurements of pubic symphysis width using ultrasound in different birthing positions indicated that positions with thigh hyperflexion and gravity effect, such as squatting and standing, might have a positive effect in childbirth by increasing the pubic symphysis width (PUBMED:33207288). Furthermore, research has suggested that adopting upright positions during labor can lead to faster progress of labor, shorter duration of childbirth, less pain, and higher satisfaction compared to recumbent positions (PUBMED:37546496). Overall, these findings support the notion that both pregnancy and shifting positions can indeed create more room in a woman's pelvis, which can facilitate childbirth.
Instruction: Patients' view on intraoperative diagnosis of sentinel nodes in breast cancer: is it an automatic choice? Abstracts: abstract_id: PUBMED:17448968 Patients' view on intraoperative diagnosis of sentinel nodes in breast cancer: is it an automatic choice? Unlabelled: Intraoperative detection of sentinel node metastases in breast cancer enables immediate axillary lymph node dissection. This approach, however, introduces uncertainty for patients as to the extent of surgery. Waking to find a surgical drain implies more extensive surgery and worse prognosis disease. False negative diagnoses may cause disappointment. Aim: To evaluate patients' views and preferences on intraoperative diagnosis of sentinel nodes in breast cancer. Methods: Questionnaire based survey of 100 patients who had previously undergone sentinel node biopsy with intraoperative diagnosis using touch imprint cytology (TIC). Patients were encouraged to add free text comments. Results: Sixty-four patients responded to the questionnaire. Patients rated the information provided and their understanding of the procedure highly. Fifty-nine percent of respondents overestimated the sensitivity of TIC. Ninety-five percent of patients would choose to undergo intraoperative diagnosis in future if required. Five percent of patients would choose not to undergo intraoperative diagnosis, citing the resultant uncertainty, disappointment on waking and needing time to come in terms with the diagnosis of metastases as reasons. Conclusion: Given the choice, most patients would choose intraoperative diagnosis, though a minority would explicitly not, due to the adverse psychological effect thereof. Despite a good understanding of the procedure, the majority of patients overestimate the sensitivity of intraoperative diagnosis of sentinel nodes, which may heighten disappointment when a false negative diagnosis occurs. Intraoperative diagnosis should not be the automatic choice and patients should be actively involved in this decision making process. abstract_id: PUBMED:17986799 Intraoperative imprint cytology for the diagnosis of sentinel node metastases in breast cancer. Accurate intraoperative diagnosis of sentinel node metastases enables the selection of patients for axillary lymph node dissection, thus avoiding an unnecessary additional operation in patients with false-negative results. The main reason for false-negative results of imprint cytology was poor quality of the imprint samples because of sampling error. False-positive imprint cytology results that lead to an unnecessary axillary lymph node dissection are very rare. Imprint cytology can be recommended for the intraoperative examination of sentinel nodes in breast cancer patients. abstract_id: PUBMED:18438122 Accuracy of imprint cytology for intraoperative diagnosis of sentinel node metastases in breast cancer. Objective: The aim of this study was to determine the accuracy of imprint cytology for the intraoperative diagnosis of sentinel node metastases in patients with breast cancer. Summary Background Data: Accurate intraoperative diagnosis of sentinel node metastases enables the selection of patients who need axillary lymph node dissection during the same operation. Methods: This study included 631 patients with clinical T1 breast cancer with clinically negative nodes. Sentinel nodes were serially sectioned at 2-mm intervals. Imprint cytologic samples were made from all cut surfaces and stained with Papanicolaou staining. Patients with positive sentinel nodes underwent immediate axillary lymph node dissection. Those with negative sentinel nodes diagnosed by intraoperative imprint cytology, but positive by final pathologic results, underwent subsequent axillary lymph node dissection. The results of imprint cytology were compared with the final pathologic results using hematoxylin and eosin staining and immunohistochemistry. Results: In 110 of 130 patients with positive sentinel nodes diagnosed by final pathology, imprint cytology of at least one sentinel node was positive. In 17 of 501 patients with tumor-negative sentinel nodes diagnosed by final pathology, imprint cytology of at least one sentinel node was positive. The sensitivity, specificity, and overall accuracy of imprint cytology for the diagnosis of sentinel node metastases were 84.6%, 96.6%, and 94.1%, respectively. Only 20 (3.2%) patients required a second axillary operation in the present study. Conclusion: Intraoperative imprint cytology is a useful method for evaluating sentinel node metastasis in patients with breast cancer. abstract_id: PUBMED:14502779 The feasibility of intraoperative diagnosis of sentinel lymph node metastases in breast cancer. Background And Objectives: The aim of the study was to analyse in detail the feasibility of intraoperative assessment of sentinel lymph nodes in breast cancer. Methods: Altogether 139 consecutive breast cancer patients with metastases in axillary sentinel nodes were included in a prospective study. A combination of imprint cytology and frozen section was used as the method of intraoperative diagnosis of sentinel node metastases. The definite postoperative evaluation of the sentinel nodes was taken as the gold standard. Results: The overall sensitivity of intraoperative diagnosis was 83%, reaching 81% if the intraoperative assessment had been limited to the two first retrieved sentinel nodes. False negative (FN) findings were more common in connection with invasive lobular carcinoma (28%) than with invasive ductal carcinoma (8%) (P &lt; 0.01) as well as in connection with micro-metastases, in 38% of the cases, compared to the larger metastases, 6% (P &lt; 0.00005). Conclusions: Intraoperative examination of sentinel lymph nodes enables breast surgery, axillary staging, and treatment in the same operation in a substantial proportion of breast cancer patients. Hospital costs as well as workload in the pathology laboratory may be reduced, limiting the intraoperative assessment to the two first retrieved nodes. abstract_id: PUBMED:17462851 Sentinel node biopsy with intraoperative diagnosis in patients undergoing skin-sparing mastectomy and immediate breast reconstruction. Aims: False negative cases in the intraoperative assessment of sentinel node (SN) metastases in breast cancer prompt for a secondary axillary lymph node dissection (ALND). Such ALND is technically demanding and prone to complications in patients with immediate breast reconstruction (IBR) if there is a microvascular anastomosis or the thoracodorsal pedicle of a latissimus dorsi flap in the axilla. This study aims to evaluate the feasibility of the intraoperative diagnosis of sentinel node biopsy (SNB) in breast cancer patients undergoing IBR. Methods: Sixty-two consecutive breast cancer patients undergoing SNB with the intraoperative diagnosis of SN metastases simultaneously with mastectomy and IBR between 2004 and 2006 were included in this study. Results of the SNB and especially the false negative cases in the intraoperative diagnosis were evaluated. Results: Eleven patients had tumor positive SN. Nine of these cases were detected intraoperatively. The two false negative cases in the intraoperative diagnosis constituted of isolated tumor cells only. Conclusions: Our present study suggests that SNB with intraoperative diagnosis of SN metastases is feasible in patients undergoing IBR if the risk of nodal metastasis is low and the sensitivity of intraoperative SNB diagnosis is high. abstract_id: PUBMED:24231723 Intraoperative injection of radiocolloid for sentinel node biopsy in breast cancer. Unlabelled: Preoperative injection of radiocolloid for axillary sentinel node biopsy in breast cancer is uncomfortable for patients. This study evaluated the reliability of radiocolloid injection as determined by sentinel node identification rate and positive sentinel node biopsy rate, after the patient has been anesthetized in the operating room as compared with preoperative injection. Methods: After institutional review board approval, a retrospective cohort of patients with breast cancer who underwent sentinel node biopsy from January 2005 through December 2010 was evaluated for analysis. Patients who received intraoperative injection of radiocolloid were compared with those who received their injection preoperatively. Patients with incomplete pathologic staging or having received neoadjuvant chemotherapy were excluded. All patients received radiocolloid injections into the retroareolar tissue; some also received intradermal injection directly over the tumor. All injections contained 37 MBq (1 mCi) in 0.5 mL of filtered (0.22-μm) (99m)Tc-sulfur colloid. Results: Over the 6-y study period there were 904 sentinel node biopsy procedures, and 165 patients were excluded from analysis. Of the 739 sentinel node biopsies that were analyzed, 647 had preoperative injection of radiocolloid and 92 had intraoperative injection. The overall sentinel lymph node identification rates were similar for the 2 groups: 93.5% for the preoperative injection group and 94.6% for the intraoperative injection group (not statistically significant). The identification rates remained comparable for both groups when analyzed by T stage of the tumor. The average number of sentinel lymph nodes removed was similar between preoperative and intraoperative injection: 2.60 and 2.70 nodes, respectively. The overall rates of positive sentinel nodes were comparable for the 2 groups: 25.6% for the preoperative injection group and 26.4% for the intraoperative injection group (not statistically significant). When analyzed by T stage, the positive sentinel node rates remained similar between the 2 groups. Conclusion: The sentinel lymph node identification rate of 94.6% for the intraoperative injection group was similar to other published sentinel lymph node identification rates (96%-100%). The positive sentinel lymph node rate was also comparable to that of published series. Intraoperative injection of radiocolloid for axillary sentinel node biopsy appears equivalent to preoperative injection and is a less painful experience for breast cancer patients. abstract_id: PUBMED:12296899 Intraoperative scrape cytology of the sentinel lymph node in patients with breast cancer. Background: Intraoperative examination of the sentinel lymph node (SLN) may detect metastatic disease, allowing immediate axillary lymph node dissection and therefore avoiding the need for reoperation. The aim of this study was to evaluate the accuracy of scrape cytology of the SLN in patients with early breast cancer. Methods: Sentinel node biopsy was performed in 148 patients with clinical T1-2 N0 breast cancer. After harvesting, the SLN was bisected and cells from both halves were scraped with a scalpel blade on to a slide and stained with a Papanicolaou and Giemsa stain. Scrape cytology results were compared with the results of paraffin sections stained with haematoxylin and eosin and with immunohistochemistry. Results: The intraoperative diagnosis was correct in 126 patients (85 per cent). Sensitivity and specificity were 67 and 98 per cent respectively; positive and negative predictive values were 95 and 81 per cent. Conclusion: Scrape cytology is a useful method for intraoperative evaluation of the SLN in patients with breast cancer. abstract_id: PUBMED:15929120 Rapid immunohistochemistry enhances the intraoperative diagnosis of sentinel lymph node metastases in invasive lobular breast carcinoma. Background: The sensitivity of the intraoperative diagnosis of sentinel lymph node (SLN) micrometastases and the metastases of invasive lobular carcinoma (ILC) is low. The goal of the current study was to assess whether the use of intraoperative, rapid immunohistochistochemistry (IHC) enhances the intraoperative detection of micrometastases and metastases of ILC. Methods: The sensitivity of the intraoperative diagnosis of SLN metastasis was evaluated in 438 patients when using rapid IHC with a cytokeratin biomarker. The results were compared with those obtained for 557 patients without rapid IHC but with conventional staining. Results: For patients with ILC, the sensitivity of the intraoperative diagnosis was 87% (45 of 52) in the IHC group and 66% (39 of 59) in the non-IHC group (P = 0.02). The sensitivity of the intraoperative diagnosis was similar for patients with other types of invasive cancer regardless of the use of rapid IHC. However, rapid IHC enhanced marginally the intraoperative diagnosis of the smallest micrometastases, isolated tumor cells (P = 0.06). Conclusions: Rapid IHC with cytokeratin labeling enhanced the intraoperative diagnosis of SLN metastases in patients with ILC. It may also improve the intraoperative diagnosis of micrometastases. abstract_id: PUBMED:19317265 Intraoperative estimation of sentinel lymph nodes in breast cancer by imprint cytology. Background: Frozen section biopsy has been widely used for intraoperative diagnosis and evaluation of sentinel lymph nodes, so a decision can be made regarding whether to perform axillary clearance during primary surgery. This study aims to discuss the reliability of a simpler and faster method - touch imprint cytology - in the interpretation of metastasis from breast cancer. Methods: A retrospective review of 41 sentinel lymph node biopsies from patients with breast cancer were examined by intraoperative imprint cytology using rapid Diff-Quick staining. Paraffin-embedded permanent sections were examined using hematoxylin and eosin stained sections from the sentinel lymph nodes in collaboration with the employment of an anti-cytokeratin antibody. Results: Sixteen of all sentinel nodes harbored metastases in the paraffin sections, of which all 16 were identified by imprint cytology (sensitivity 93%). Conclusion: Touch imprint cytology is a fast and reliable alternative for intraoperative evaluation of sentinel lymph nodes in breast cancer patients. abstract_id: PUBMED:26998984 Pre and intraoperative scintigraphic detection of sentinel nodes in breast cancer Background: Sentinel node detection localizes the first node that drains a malignant lesion aiming to detect tumor dissemination. Aim: To assess the yield of sentinel node detection in breast cancer, using pre or intraoperative scintigraphy. Material And Methods: Review of medical records of patients with breast cancer who had a scintigraphic detection of sentinel nodes. Lymph node scintigraphy and surgery were performed in the same day. Results: We studied 174 women aged 53 ± 13 years, operated with a diagnosis of breast cancer, including six highly suspicious lesions in the contralateral breast (totaling 180 studied breasts). Preoperative scintigraphy showed a sentinel node in 174 of 180 breasts (97%). Intraoperative gamma probe confirmed the presence of the sentinel node in the same 174 breasts and detected an additional one reaching a detection yield of 97%. Four patients in whom a sentinel node was not detected in the preoperative scintigraphy, had macrometastases. Frozen section biopsies were available in 177 of 180 breasts. Metastases were informed in 45 patients who underwent axillary lymph node dissection, plus one additional patient with a suspicious lesion. Conclusions: A high rate of sentinel node detection in the preoperative scintigraphy was observed. Most sentinel nodes not detected with nuclear medicine had macrometastases. In 71% of patients, the detection of sentinel node avoided axillary lymph node dissection. Answer: The intraoperative diagnosis of sentinel nodes in breast cancer is not an automatic choice from the patients' perspective. A study evaluating patients' views and preferences on intraoperative diagnosis of sentinel nodes using touch imprint cytology (TIC) found that while most patients (95%) would choose to undergo intraoperative diagnosis if required in the future, a minority (5%) would opt not to. The reasons cited by the latter group included the uncertainty and disappointment associated with waking up to find a surgical drain, which implies more extensive surgery and a worse prognosis, as well as needing time to come to terms with the diagnosis of metastases (PUBMED:17448968). Despite a good understanding of the procedure, 59% of respondents overestimated the sensitivity of TIC, which could lead to heightened disappointment in the event of a false negative diagnosis. The study concluded that intraoperative diagnosis should not be the automatic choice and that patients should be actively involved in the decision-making process (PUBMED:17448968). Other studies have focused on the technical aspects and accuracy of intraoperative diagnosis methods such as imprint cytology and rapid immunohistochemistry. These studies have generally found that intraoperative diagnosis methods like imprint cytology are useful and have high sensitivity and specificity (PUBMED:18438122, PUBMED:14502779, PUBMED:12296899, PUBMED:19317265). However, false negatives can occur, particularly with certain types of cancer like invasive lobular carcinoma or with micrometastases (PUBMED:14502779, PUBMED:15929120). The feasibility of intraoperative diagnosis in specific scenarios, such as patients undergoing immediate breast reconstruction, has also been demonstrated, with the caveat that the risk of nodal metastasis should be low and the sensitivity of intraoperative diagnosis should be high (PUBMED:17462851). In summary, while intraoperative diagnosis of sentinel nodes in breast cancer is generally favored by patients and can be accurate and feasible, it is not an automatic choice for all patients. Some may decline the option due to the psychological impact of the associated uncertainty and the potential for false negative results. It is important for patients to be well-informed and involved in the decision-making process regarding their surgical options.
Instruction: Can a tibial tunnel in ACL surgery be placed anatomically without impinging on the femoral notch? Abstracts: abstract_id: PUBMED:23338664 Can a tibial tunnel in ACL surgery be placed anatomically without impinging on the femoral notch? A risk factor analysis. Purpose: To analyze anatomical risk factors and surgical technique dependent variables, which determine the risk for femoral notch impingement in anatomically correct placed tibial tunnels for anterior cruciate ligament (ACL) surgery. Methods: Twenty fresh frozen adult human knee specimens under the age of 65 years were used. Digital templates mimicking a tibial tunnel aperture at the tibia plateau were designed for different tibial tunnel diameters and different drill-guide angles. The centres of these templates were placed over the geometric centre of the native tibial ACL footprint. The distances between the anterior borders of the templates and the anterior borders of the footprints (graft free zone) were measured and compared. Furthermore, anatomic risk factors for femoral notch impingement were determined. Results: The graft free zone was statistically significantly longer for larger drill-guide angles compared to smaller drill-guide angles (p &lt; 0.00001). Furthermore, 8 mm diameter tibial tunnels had a statistically significant larger graft free zone compared to 10-mm-diameter tibial tunnels (p &lt; 0.00001). For the 10 mm diameter tibial tunnels with drill-guide angle of 45°, 9 out of 20 knees (45 %) were "at risk" for notching and 4 out of 20 knees (20 %) had "definite" notching. For 10-mm tunnels with drill-guide angle of 45°, a risk for notching was associated with smaller tibial ACL footprint (p &lt; 0.05). Conclusion: If a perfect centrally positioned tibial tunnel is drilled, a real risk for femoral notch impingement exists depending on the size of the tibial ACL footprint and surgery-related factors. Therefore, in anatomical tibial tunnel placement in single bundle ACL reconstruction surgery, particular attention should be paid to size of the tunnel and drill-guide angle to minimize the risk of femoral notch impingement. abstract_id: PUBMED:28721590 High non-anatomic tunnel position rates in ACL reconstruction failure using both transtibial and anteromedial tunnel drilling techniques. Introduction: Although it is well known from cadaveric and biomechanical studies that transtibial femoral tunnel (TT) positioning techniques are associated with non-anatomic tunnel positions, controversial data exist as so far no clinical differences could have been found, comparing transtibial with anteromedial techniques (AM). The purpose of the study was to analyze if graft failure following TT ACL reconstruction was more commonly associated with non-anatomic tunnel position in comparison with the AM technique. We hypothesized that, compared to AM techniques, non-anatomic tunnel positions correlate with TT tunnel positioning techniques. Materials And Methods: A total of 147 cases of ACL revision surgery were analyzed retrospectively. Primary ACL reconstructions were analyzed regarding the femoral tunnel drilling technique. Femoral and tibial tunnel positions were determined on CT scans using validated radiographic measurement methods. Correlation analysis was performed to determine differences between TT and AM techniques. Results: A total of 101 cases were included, of whom 64 (63.4%) underwent the TT technique and 37 (36.6%) the AM technique for primary ACL reconstruction. Non-anatomic femoral tunnel positions were found in 77.2% and non-anatomical tibial tunnel positions in 40.1%. No correlations were found comparing tunnel positions in TT and AM techniques, revealing non-anatomic femoral tunnel positions in 79.7 and 73% and non-anatomic tibial tunnel positions in 43.7 and 35.1%, respectively (p &gt; 0.05). Conclusions: Considerable rates of non-anatomic femoral and tibial tunnel positions were found in ACL revisions with both transtibial and anteromedial femoral drilling techniques. Despite the potential of placing tunnels more anatomically using an additional AM portal, this technique does not ensure anatomic tunnel positioning. Consequently, the data highlight the importance of anatomic tunnel positioning in primary ACL reconstruction, regardless of the applied drilling technique. abstract_id: PUBMED:35201373 Lower anatomical femoral ACL tunnel can be created in the large volume of femoral intercondylar notch. Purpose: The purpose of this study was to investigate the correlation between femoral intercondylar notch volume and the characteristics of femoral tunnels in anatomical single bundle anterior cruciate ligament (ACL) reconstruction. Methods: Fifty-one subjects (24 male and 27 female: median age 27: range 15-49), were included in this study. Anatomical single bundle ACL reconstruction was performed in all subjects using a trans-portal technique. Femoral tunnel length was measured intra-operatively. Three-dimensional computed tomography (3D-CT) was taken at pre and post-surgery. The intercondylar notch volume was calculated with a truncated-pyramid shape simulation using the pre-operative 3D-CT image. In the post-operative 3D-CT, the modified quadrant method was used to measure femoral ACL tunnel placement. Results: Femoral tunnel placement was 47.6 ± 10.5% in the high-low (proximal-distal) direction, and 22.6 ± 5.4% in the shallow-deep (anterior-posterior) direction. Femoral tunnel length was 35.3 ± 4.4 cm. Femoral intercondylar notch volume was 8.6 ± 2.1cm3. A significant correlation was found between femoral intercondylar notch volume and high-low (proximal-distal) femoral tunnel placement (Pearson's coefficient correlation: 0.469, p = 0.003). Conclusion: Femoral ACL tunnel placement at a significantly lower level was found in knees with large femoral intercondylar notch volume in the trans-portal technique. For the clinical relevance, although the sample size of this study was limited, surgeons can create femoral ACL tunnel low (distal) in the notch where close to the anatomical ACL footprint in the knees with large femoral intercondylar notch volume. Level Of Evidence: III. abstract_id: PUBMED:29564331 Arthroscopic Viewing Position Affects Anterior Cruciate Ligament Reconstruction Femoral Tunnel Length Measurements. Purpose: To purpose of this study was to compare arthroscopic anterior cruciate ligament (ACL) reconstruction femoral tunnel length measurements from the anterolateral portal between the standard notch view using a 30° arthroscope versus a "top-down" view utilizing a 70° arthroscope to visual the far side of the femoral tunnel aperture. Methods: Arthroscopic femoral tunnel length measurements using calibrated reamers from the standard notch versus the "top-down" view were obtained and reviewed in 54 skeletally mature patients undergoing ACL reconstruction with no prior bony knee surgery. Patient age, height, weight, sex, and surgery laterality were also recorded. Measurements of femoral tunnel length were repeated using both views for inter-observer and intra-observer correlation. Results: Inter-observer and intra-observer intra-class correlation coefficients for the standard notch view and "top-down" views were excellent, with higher reliability values appreciated using the "top down" view. Mean overall femoral tunnel length measurements obtained using the standard notch view were significantly longer than measurements from the "top-down" view (p &lt; 0.001). Conclusions: The standard notch view provides significantly longer femoral tunnel length measurements in comparison to the "top-down" view. abstract_id: PUBMED:28825030 Bone Tunnel Enlargement after ACL Reconstruction with Hamstring Autograft Is Dependent on Original Bone Tunnel Diameter. Background Bone tunnel enlargement is a well-established phenomenon following anterior cruciate ligament (ACL) reconstruction, and is related to soft tissue grafts, suspension fixation devices, and absorbable implants. Severe tunnel enlargement can lead to reconstruction failure. The correlation between bone tunnel enlargement following ACL reconstruction and original bone tunnel diameter has not been elucidated. Purpose To determine whether bone tunnel enlargement after ACL reconstruction with hamstring autograft is dependent on original tunnel diameter established during primary ACL reconstruction. Materials and Methods A retrospective review was conducted on 56 patients scheduled for ACL revision surgery who had undergone computed tomography (CT) scanning as part of their preoperative evaluation. All patients had undergone previous hamstring ACL reconstruction. Original femoral and tibial bone tunnel diameters were extracted from operative reports, and femoral and tibial bone tunnel enlargement was assessed on CT serial sections. The correlation between original tunnel diameter and bone tunnel enlargement was investigated using regression analysis. Results Mean tibial bone tunnel enlargement was significantly and inversely dependent on the original tibial bone tunnel diameter with a correlation coefficient of -0.55 per unit (7 mm = +1.93 mm, 8 mm = +1.43 mm, 9 mm = 0.83 mm, p = 0.007). Thus, every additional increase (mm) in diameter of the original tibial bone tunnel reduces the extend of tunnel widening by 0.55 mm. Conclusions The results of this study indicate that tibial bone tunnel enlargement following ACL reconstruction is dependent on original tibial bone tunnel diameter with smaller diameter tunnels developing more tunnel enlargement than larger tunnels. The contributing factors remain unclear and need to be further investigated. abstract_id: PUBMED:28589158 Three-Dimensional CT Evaluation of Tunnel Positioning in ACL Reconstruction Using the Single Anteromedial Bundle Biological Augmentation (SAMBBA) Technique. Background: Remnant preservation may confer important advantages in the anterior cruciate ligament (ACL)-reconstructed knee. However, the presence of a large remnant may obscure visualization and impair the ability to correctly place tunnels during surgery. Purpose: To determine whether tunnel placement during anatomic ACL reconstruction using the single anteromedial bundle biological augmentation (SAMBBA) technique is consistent and precise when a large native remnant is preserved. Study Design: Case series; Level of evidence, 4. Methods: Included in this study were 99 patients undergoing an ACL reconstruction during which at least 50% of the native ACL was preserved. The femoral tunnel was created using an outside-in specific guide. The tibial tunnel was positioned in the anteromedial region of the ACL footprint, and the remnant was carefully preserved while drilling and passing the semitendinosus graft through it. Postoperatively, 3-dimensional computed tomography (3D CT) was used to evaluate tunnel placement. The mean tunnel locations were calculated and the standard deviation was used to evaluate precision of positioning. Inter- and intrareader agreement were determined to assess reliability of evaluation of tunnel position. Results: The center of the femoral tunnel was positioned at a mean 19.4% (SD, 2%) of the depth of the notch and a mean 23.1% (SD, 3.5%) of the lateral wall height. The center of the tibial tunnel was positioned at a mean 36.3% (SD, 3.8%) of the anteroposterior length of the tibial plateau and at a mean 47.0% (SD, 2.7%) of the mediolateral width. The small standard deviations demonstrate that this technique allows precise tunnel placement. The tunnel positions achieved were consistent with previous anatomic studies of femoral and tibial anteromedial bundle insertion. Intra- and interobserver reliability were high. Conclusion: Three-dimensional CT evaluation demonstrated that despite the presence of a large remnant, placement of femoral and tibial tunnels for anatomic ACL reconstruction using the SAMBBA technique is consistent and precise. abstract_id: PUBMED:23467279 Three-dimensional reconstruction computed tomography evaluation of tunnel location during single-bundle anterior cruciate ligament reconstruction: a comparison of transtibial and 2-incision tibial tunnel-independent techniques. Background: Anatomic tunnel positioning is important in anterior cruciate ligament (ACL) reconstructive surgery. Recent studies have suggested the limitations of a traditional transtibial technique to place the ACL graft within the anatomic tunnel position of the ACL on the femur. The purpose of this study is to determine if the 2-incision tibial tunnel-independent technique can place femoral tunnel to native ACL center when compared with the transtibial technique, as the placement with the tibial tunnel-independent technique is unconstrained by tibial tunnel. Methods: In sixty-nine patients, single-bundle ACL reconstruction with preservation of remnant bundle using hamstring tendon autograft was performed. Femoral tunnel locations were measured with quadrant methods on the medial to lateral view of the lateral femoral condyle. Tibial tunnel locations were measured in the anatomical coordinates axis on the top view of the proximal tibia. These measurements were compared with reference data on anatomical tunnel position. Results: With the quadrant method, the femoral tunnel centers of the transtibial technique and tibial tunnel-independent technique were located. The mean (± standard deviation) was 36.49% ± 7.65% and 24.71% ± 4.90%, respectively, from the over-the-top, along the notch roof (parallel to the Blumensaat line); and at 7.71% ± 7.25% and 27.08% ± 7.05%, from the notch roof (perpendicular to the Blumensaat line). The tibial tunnel centers of the transtibial technique and tibial tunnel-independent technique were located at 39.83% ± 8.20% and 36.32% ± 8.10%, respectively, of the anterior to posterior tibial plateau depth; and at 49.13% ± 4.02% and 47.75% ± 4.04%, of the medial to lateral tibial plateau width. There was no statistical difference between the two techniques in tibial tunnel position. The tibial tunnel-independent technique used in this study placed femoral tunnel closer to the anatomical ACL anteromedial bundle center. In contrast, the transtibial technique placed the femoral tunnel more shallow and higher from the anatomical position, resulting in more vertical grafts. Conclusions: After single-bundle ACL reconstruction, three-dimensional computed tomography showed that the tibial tunnel-independent technique allows for the placement of the graft closer to the anatomical femoral tunnel position when compared with the traditional transtibial technique. abstract_id: PUBMED:37006732 Low Femoral Tunnel Widening Incidence Rate After ACL Reconstruction Using Patellar Tendon Graft with Press-Fit Fixation. Background: Femoral tunnel widening after ACL reconstruction is a common phenomenon. We hypothesized that using a patellar tendon graft with a press-fit fixation technique without any fixation device reduces the incidence of femoral tunnel widening. Methods: This study was conducted on 467 patients with ACL surgery between 2003 and 2015. Two hundred and nineteen of them had an ACL surgery with patellar tendon (PT) graft, and two hundred and forty-eight of them with hamstring tendon (HS). Exclusion criteria were history of previous ACL reconstruction of either knee, multiple ligament injury, or evidence of osteoarthritis on radiographs. The femoral tunnels were measured on the anteroposterior (ap) and lateral radiographs 6 months after the operation. Two independent orthopedic surgeons measured all radiographs twice and recorded the tunnel widenings. We hypothesized that using an implant-free press-fit technique with PT graft can reduce the femoral tunnel widening incidence rate. Results: The mean incidence rate of the tunnel widening in the HS group was, on the AP and the lateral femoral views, 88% (n = 217) and 83% (n = 205), while in the PT group, 17% (n = 37) and 2% (n = 4), respectively. There was a significant difference both on AP and lateral radiographs (HS vs. PT: fem. AP: 89% vs. 17% p &lt; 0.001; HS vs. PT: fem. lat: 84% vs. 2% p &lt; 0.001). Conclusion: The femoral tunnel widening incidence rate during an ACL reconstruction is significantly less when using PT tendon with femoral press-fit fixation than when using HT tendon with suspensory fixation method. abstract_id: PUBMED:34646899 Rate of Tibial Tunnel Malposition Is Not Changed by Drilling Entirely Within the Stump of Preserved Remnants During ACL Reconstruction: A Prospective Comparative 3D-CT Study. Background: Remnant preservation during anterior cruciate ligament (ACL) reconstruction (ACLR) is controversial, and it is unclear whether the stump aids or obscures tibial tunnel positioning. Purpose/hypothesis: The aim of this study was to determine whether the rate of tibial tunnel malposition is influenced by remnant preservation. The hypothesis was that using a remnant-preserving technique to drill entirely within the tibial stump would result in a significant reduction in tibial tunnel malposition as determined by postoperative 3-dimensional computed tomography (3D-CT). Study Design: Cohort study; Level of evidence, 2. Methods: Patients undergoing ACLR between October 2018 and December 2019 underwent surgery with a remnant-preserving technique (RP group) if they had a large stump present (&gt;50% of the native ACL length) or if there was no remnant or if it was &lt;50% of the native length of the ACL, they underwent remnant ablation (RA group) and use of standard landmarks for tunnel positioning. The postoperative tunnel location was reported as a percentage of the overall anteroposterior (AP) and mediolateral (ML) dimensions of the tibia on axial 3D-CT. The tunnel was classified as anatomically placed if the center lay between 30% and 55% of the AP length and between 40% and 51% of the ML length. Results: Overall, 52 patients were included in the study (26 in each group). The mean tunnel positions were 36.8% ± 5.5% AP and 46.7% ± 2.9% ML in the RP group and 35.6% ± 4.8% AP and 47.3% ± 2.3% ML in the RA group. There were no significant differences in the mean AP (P = .134) and ML (P = .098) tunnel positions between the groups. Inter- and intraobserver reliability varied between fair to excellent and good to excellent, respectively. There was no significant difference in the rate of malposition between groups (RP group, 7.7%; RA group, 11.5%; P ≥ .999). Conclusion: Drilling entirely within the ACL tibial stump using a remnant-preserving reconstruction technique did not significantly change the rate of tunnel malposition when compared with stump ablation and utilization of standard landmarks. abstract_id: PUBMED:34499210 ACL graft with extra-cortical fixation rotates around the femoral tunnel aperture during knee flexion. Purpose: An understanding of the behavior of a new ACL graft in the femoral tunnel during knee motion and external loading can provide information pertinent to graft healing, tunnel enlargement, and graft failure. The purpose of the study was to measure the percentage of the tunnel filled by the graft and determine the amount and location of the graft-tunnel contact with knee motion and under external knee loads. Methods: Single bundle anatomical ACL reconstruction was performed on six cadaveric knees. Specimens were positioned with a robotic testing system under: (1) passive flexion-extension, (2) 89-N anterior and posterior tibial loads, (3) 5-N m internal and external torques, and (4) 7-N m valgus moment. The knees were then dissected, repositioned by the robot and the geometry of the femoral tunnel and graft were digitized by laser scanning. The percentage of tunnel filled and the contact region between graft and tunnel at the femoral tunnel aperture were calculated. Results: The graft occupies approximately 70% of the femoral tunnel aperture and anterior tibial loading tended to reduce this value. The graft contacted about 60% of the tunnel circumference and the location of the graft-tunnel contact changed significantly with knee flexion. Conclusion: This study found that the graft tends to rotate around the tunnel circumference during knee flexion-extension and contract under knee loading. The "windshield-wiper" and "bungee cord" effect may contribute to femoral tunnel enlargement, affect graft healing, and lead to graft failure. There can be a considerable motion of the graft in the tunnel after surgery and appropriate rehabilitation time should be allowed for graft-tunnel healing to occur. To reduce graft motion, consideration should be given to interference screw fixation or a graft with bone blocks, which may allow an earlier return to activity. Answer: Yes, a tibial tunnel in ACL surgery can be placed anatomically without impinging on the femoral notch, but it requires careful consideration of anatomical risk factors and surgical technique variables. A study analyzing the risk for femoral notch impingement in anatomically correct placed tibial tunnels for ACL surgery found that the risk of impingement depends on the size of the tibial ACL footprint and surgery-related factors such as the size of the tunnel and the drill-guide angle. Larger drill-guide angles and smaller diameter tibial tunnels (8 mm versus 10 mm) were associated with a statistically significant larger graft free zone, reducing the risk of notching. For 10 mm diameter tibial tunnels with a drill-guide angle of 45°, there was a 45% risk for notching in the study sample. Therefore, to minimize the risk of femoral notch impingement, attention should be paid to the size of the tunnel and the drill-guide angle during anatomical tibial tunnel placement in single bundle ACL reconstruction surgery (PUBMED:23338664).
Instruction: Does being a "SunSmart School" influence hat-wearing compliance? Abstracts: abstract_id: PUBMED:24382297 Does being a "SunSmart School" influence hat-wearing compliance? An ecological study of hat-wearing rates at Australian primary schools in a region of high sun exposure. Background: Childhood sun exposure is an important risk factor for skin cancer. Anecdotal evidence suggests that hats are under-utilized by Australian primary school students. Methods: The proportion of students and adult role-models wearing hats was observed at 36 primary schools (63.9% SunSmart schools [SSS]) in Townsville (latitude 19.3°S; high to extreme maximum daily UV-index year round), Queensland, Australia, from 2009 to 2011. Results: Overall, 52.2% of 28,775 students and 47.9% of 2954 adults were observed wearing a hat. Hat use (all styles) among SSS and non-SunSmart school (NSSS) students was similar before (24.2% vs 20.5%; p=0.701), after (25.4% vs 21.7%; p=0.775) and during school-hours (93.0% vs 89.2%; p=0.649) except SSS students wore gold-standard (broad-brim/bucket/legionnaire) hats during school play-breaks more often in the warmer months (October-March) than NSSS students (54.7% vs 37.4%; p=0.02). Although the proportion of adults who wore hats (all styles) was similar at SSS and NSSS (48.2% vs 46.8%; p=0.974), fewer adults at SSS wore them before school (3.7% vs 10.2%; p=0.035). Conclusions: SunSmart status is not consistently associated with better hat-wearing behavior. The protective nature of hats and the proportion of school students and adult role-models wearing them could be improved, possibly by offering incentives to schools that promote sun-safety. abstract_id: PUBMED:37208977 The formative evaluation and pre-testing of a SunSmart policy implementation support toolkit targeting sun safe hat-wearing in NSW primary schools. Issue Addressed: Strategies that support the implementation of the SunSmart program in primary schools are essential to optimising policy uptake. Evidence outlining the type of support required, however, is lacking. This project explored the usefulness of an implementation support approach to sun safe hat-wearing in schools. Methods: Formative research was undertaken with 16 primary schools in Greater Western Sydney to explore current sun protection behaviours and practices, perceived barriers and motivators of sun safe hat-wearing, and resource needs. Based on these insights, a resource toolkit was developed and tested in 14 demonstration sites. Follow-up interviews evaluated the usefulness of the toolkit and implementation support approach. Results: Sun safe hat-wearing practices varied among schools. Commonly reported motivators included school policies, role modelling, incentives and knowledge. Commonly reported barriers included negative social norms, forgetfulness, cost and lack of understanding. Formative insights informed the development of the 'Motivation, Access, Triggers' Model and a 23-resource toolkit. Following toolkit rollout, champions reported that the ability to select resources according to local needs was useful, and most found that the toolkit supported their school with sun safe hat-wearing. Conclusions: A toolkit with local champions and leadership buy-in has the potential to improve policy implementation. The opportunity to prioritise resource selection enables schools to address their specific needs in line with their sun protection policy. SO WHAT?: The provision of policy implementation support can help schools overcome the challenges of transforming a written SunSmart policy into practice. abstract_id: PUBMED:28664586 Australian primary school communities' understandings of SunSmart: a qualitative study. Objectives: Skin cancer represents a major health issue for Australia. Childhood sun exposure is an important risk factor and evidence suggests the use of sun protection measures by Australian school children could be improved. This study examines how the SunSmart Program, a school-based skin cancer prevention resource, can be supported to further increase sun protection behaviours to assist in lowering skin cancer incidence. Methods: The Health Promoting Schools (HPS) framework was adopted to select key stakeholders from a convenience sample of five school communities. Students, teaching staff and parents participated in semi-structured focus group and individual interviews. A thematic analysis was used to extract key themes from the data. Results: Although these school communities were aware of sun protection practices and the risks associated with sun exposure, their understandings of the SunSmart Program were limited. Sun protection policy implementation was inconsistent and students were unlikely to engage in sun protection practices beyond the school setting. Conclusion And Implications: School communities require additional support and engagement to holistically enforce the principles of the SunSmart Program. abstract_id: PUBMED:24650947 School sun-protection policies-does being SunSmart make a difference? Evaluate the comprehensiveness of primary school sun-protection policies in tropical North Queensland, Australia. Pre-determined criteria were used to assess publicly available sun-protection policies from primary schools in Townsville (latitude 19.3°S; n = 43), Cairns (16.9°S; n = 46) and the Atherton Tablelands (17.3°S; n = 23) during 2009-2012. Total scores determined policy comprehensiveness. The relationship between policy score, SunSmart status and demographic characteristics was explored. At least 96.6% of primary schools sampled had a sun-protection policy. Although policies of Cancer Council accredited 'SunSmart' schools addressed more environmental, curriculum and review-related criteria than those of 'non-SunSmart' schools, the overall median score for both groups was low at 2 from a possible 12 (48.5% of SunSmart schools [SSSs]: inter-quartile range [IQR = 2.0-9.0] versus 65.9% of non-SSSs: [IQR = 2.0-3.0], P = 0.008). Most policies addressed hat wearing, while criteria related to shade provision at outdoor events, regular policy review and using the policy to plan outdoor events were poorly addressed. Although most primary schools in skin cancer-prone North Queensland have written sun-protection policies, the comprehensiveness of these policies could be vastly improved. These schools may require further support and advice to improve the comprehensive of their policies and incentives to continually implement them to achieve and maintain exemplary sun-protection compliance. abstract_id: PUBMED:36727371 Non-compliance for wearing spectacles: Prevalence and determinants in school-going children in South India. Purpose: To study the prevalence and determinants of compliance with spectacle wear among school-age children in South India who were given spectacles free of charge under a school vision screening program. Methods: A cross-sectional, descriptive study was performed. The participants were recruited from the school from Tirunelveli and Tuticorin districts of Tamil Nadu, where a school vision screening camp was conducted between January 2012 and October 2012. The school screening was performed under. “lavelle Pediatric Eye Care Project”: . Of the 129,720 children examined of 249 schools, 4253 of children had refractive error and 3333 were prescribed and dispensed spectacles. A total of 683 children were interviewed, who were not wearing glasses on follow-up. Results: About 20% of the participants (683/3333) were not wearing their spectacles at examination. The most common reasons given for non-wear were lost (44.9%) or broken (35.3%) spectacles. There was no gender preference on compliance to spectacles. Conclusion: Compliance with spectacle wear is less even when spectacles are provided free of cost, particularly in children among the age group of 5-15 years. Because factors such as type of school, area (urban or rural), parent's income, occupation, and their education were not significantly affecting the compliance of spectacle wear, proper encouragement and counseling of the parents and the child become extremely important. abstract_id: PUBMED:30972900 Protect your skin and let the fun begin: The results of an intervention to improve NSW primary schools' implementation of the SunSmart Program. Issue Addressed: The SunSmart Policy Support and Intervention Study (SSPSIS) (ACTRN12614000926639) investigated the feasibility of improving schools' implementation of the SunSmart Program, which is a resource for primary school communities to support their development of a comprehensive sun protection policy. Methods: A cluster randomised controlled trial (RCT) was used to evaluate the SSPSIS, which was conducted in NSW SunSmart schools (n = 20). Objective measurements of students' sun-safe hat-wearing behaviours and sunscreen application, and teachers' role-modelling behaviours, were collected for baseline, post-test and follow-up data. Interviews with school community stakeholders, including students (n = 103), parents (n = 31), teachers (n = 11) and executive staff (n = 4), were conducted to inform the intervention design, which was implemented following baseline data collection. Results: The results of baseline observations and interviews have been published previously. The intervention design aimed to combat negative perceptions of hat-wearing policy and create a trigger for sunscreen application by rewarding students practising these sun protection behaviours with play-based incentives. Although this intervention had no significant effect on the wearing of sun-safe hats among students or teachers, it did have a large effect on the consumption of sunscreen. Conclusions: Associating sunscreen and play-based incentives can create an effective trigger for students' sunscreen application behaviours. However, further evidence is needed to investigate how students' and teacher role models' hat-wearing behaviours could be increased. SO WHAT?: While combining a play-based incentive with a trigger for behaviour can promptly increase students' sunscreen application, it was unable to increase students' or teacher role models' hat-wearing behaviours. abstract_id: PUBMED:38383360 Towards prevention of new COVID-19 infections in institutions of higher education: factors influencing compliance with mask-wearing among public university students in Ghana. Background: Ghana's mask-wearing compliance with COVID-19 prevention protocols has not been as impressive among the general population. In this study, we examined factors influencing compliance with mask-wearing among public university students in Ghana to make recommendations for the prevention of new COVID-19 infections in public universities. Methods: We conducted this Census in a public university in Ghana between January and December 2022. The study design was an exploratory-cross-sectional and online survey. Structured questionnaires developed by the authors were used to collect data from 3,272 students. Data were analyzed with Jeffreys's Amazing Statistics Program (JASP). Frequency distributions were used to summarize the data into tables and graphs whilst logistic regression analysis was done to examine the factors influencing compliance with mask-wearing among participants as well as their mask-wearing behaviors in school. Results: Compliance with mask-wearing measures was high with 85.9% of the students wearing a nose mask always or often. Agreeing that the reusable masks do not last long was associated with a reduced chance of non-compliance (OR = 0.70, 95% CI = 0.57, 0.86). However, agreeing with some challenges was associated with increased chances of non-compliance. These included concerns that it is boring to mask after wearing makeup or having a haircut (OR = 1.71, 95% CI = 1.37, 2.14), and that masking is burdensome because it has to be removed when talking (OR = 1.26, 95% CI 1.01, 1.57), and that it is difficult to hear while masked (OR = 1.36, 95% CI = 1.04, 1.79). Conclusion: Cost-benefit analyses, opinions about one's look, and communication difficulties are the key factors influencing students' non-compliance with mask-wearing regulations. To encourage student compliance with mask-wearing regulations at Ghana's public universities, we recommend innovation in nose mask manufacture. abstract_id: PUBMED:15020549 Creating SunSmart schools. Kidskin was a sun-protection intervention study involving 1776 children attending 33 primary schools in Perth, Western Australia. There were three study groups: a control group, a moderate intervention group and a high intervention group. In addition to receiving a specially designed curricular intervention (1995-1998), the moderate and high intervention groups received an environmental intervention aimed at creating SunSmart schools (1996-1998). The environmental intervention focused on encouraging implementation of 'No hat, no play' policies and reducing sun exposure at lunchtime. In 1995 and 1998, observational methods were used to measure children's lunchtime sun exposure (i.e. polysulfone film badges) and hat wearing (i.e. video-taping of children). The proportion of children wearing broad-brimmed hats or legionnaire caps increased in seven of the eight high intervention schools between 1995 and 1998. In three schools, however, the impact was very positive with almost all children wearing these hats in 1998. There was no improvement in wearing these types of hats in either the moderate intervention group or the control group. In terms of sun exposure, there were only small non-significant differences among the three groups in terms of lunchtime sun exposure. The Kidskin program had a positive effect on hat wearing in the playground, but did not change children's use of shade at lunchtime. In this study, disseminating policy guidelines to schools using a mail-only strategy was ineffective, even when combined with an awards program. More information on 'champions' who bring about change in schools is required. abstract_id: PUBMED:35283678 Contextual Factors of Face Mask Wearing During the COVID-19 Pandemic and Recommendations to Increase Compliance Rate. Community-wide face mask wearing is recognised as an effective non-pharmaceutical defence against infection by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the causative virus of coronavirus disease 2019 (COVID-19) pandemic. However, few studies have identified contextual factors of face mask wearing during the COVID-19 pandemic. This study aims to identify relationships between demographic factors, personal hygiene factors, online behavioural factors and face mask wearing by Malaysian adults during the COVID-19 pandemic. Data were collected via an online survey questionnaire and analysed with Statistical Package for Social Sciences version 26. Non-availability of personal protective equipment (PPE) as well as fewer social media hours and fewer hours of browsing information related to the COVID-19 pandemic were identified as factors related to low compliance rate of face mask wearing by some Malaysian adults. This study advances contextual understanding of face mask wearing by specific groups during the COVID-19 pandemic and puts forth several recommendations to increase face mask wearing compliance rate. abstract_id: PUBMED:32665864 SunSmart Accreditation and Use of a Professional Policy Drafting Service: Both Positively and Independently Associated with High Sun Protective Hat Scores Derived from Primary School Policies. Background: The head and neck are exposed to the highest solar ultraviolet radiation levels and experience a disproportionate skin cancer burden. Sun protective hats can provide an effective barrier. Since early life exposure contributes to skin cancer risk, the World Health Organisation recommends prevention programmes in schools. The New Zealand SunSmart Schools programme is one example. Two criteria concern wearing hat outdoors: students are required to wear a hat providing protection for the face, neck, and ears; if a suitable hat is not worn, students must play in shaded areas. Objectives: To investigate two internationally relevant interventions as plausible statistical predictors of hat policy strength: (1) skin cancer primary prevention programme membership, (2) use of a professional policy drafting service. Methods: Of 1,242 (62%) eligible schools participating in a 2017 national survey, 1,137 reported a sun protection policy and 842 were available for categorising and allocating protective scores (0-3). Results: In multinomial (polytomous) logistic regression models of cross-sectional association, adjusted for school characteristics, SunSmart accredited schools and those utilising a policy drafting service were independently significantly more likely than their counterparts to obtain the most protective compared to the least protective hat score (respectively, RRR 6.48: 95% CI 3.66, 11.47; 7.47: 3.67, and 15.20). For the dichotomous shade measure, similar associations were found using adjusted logistic regression (OR 3.28: 95% CI 2.11, 5.09; 2.70: 1.54, 4.74). Conclusions: Our findings provide support for two plausible interventions that could potentially be implemented beneficially in primary schools via established infrastructure in any jurisdiction, internationally. Answer: Being a "SunSmart School" does not consistently influence hat-wearing compliance according to the evidence from the provided abstracts. While some studies suggest that SunSmart status can lead to better hat-wearing behavior in certain contexts, the overall findings are mixed. For instance, one study found that the overall hat use among SunSmart School (SSS) and non-SunSmart School (NSSS) students was similar, except that SSS students wore gold-standard hats during school play-breaks more often in the warmer months than NSSS students (PUBMED:24382297). However, the same study concluded that SunSmart status is not consistently associated with better hat-wearing behavior and suggested that incentives might be needed to improve compliance. Another study that developed and tested a resource toolkit to support SunSmart policy implementation found that the toolkit was useful and supported schools with sun safe hat-wearing (PUBMED:37208977). This suggests that additional support and resources can potentially improve policy implementation and compliance. A qualitative study highlighted that school communities' understandings of the SunSmart Program were limited and that sun protection policy implementation was inconsistent, indicating a need for additional support and engagement to enforce the principles of the SunSmart Program (PUBMED:28664586). An evaluation of sun-protection policies in primary schools in North Queensland found that policies of SunSmart accredited schools addressed more criteria than those of non-SunSmart schools, but the overall comprehensiveness of policies was low for both groups (PUBMED:24650947). In summary, while SunSmart accreditation and related interventions can be associated with improved hat-wearing compliance in some cases, the relationship is not consistent across all studies. Additional measures, such as providing implementation support, resources, and possibly incentives, may be necessary to enhance compliance with hat-wearing policies in SunSmart Schools (PUBMED:37208977, PUBMED:28664586, PUBMED:24650947, PUBMED:32665864).
Instruction: Does fecal calprotectin predict relapse in patients with Crohn's disease and ulcerative colitis? Abstracts: abstract_id: PUBMED:26811644 How to predict clinical relapse in inflammatory bowel disease patients. Inflammatory bowel diseases have a natural course characterized by alternating periods of remission and relapse. Disease flares occur in a random way and are currently unpredictable for the most part. Predictors of benign or unfavourable clinical course are required to facilitate treatment decisions and to avoid overtreatment. The present article provides a literature review of the current evidence on the main clinical, genetic, endoscopic, histologic, serologic and fecal markers to predict aggressiveness of inflammatory bowel disease and discuss their prognostic role, both in Crohn's disease and ulcerative colitis. No single marker seems to be reliable alone as a flare predictor, even in light of promising evidence regarding the role of fecal markers, in particular fecal calprotectin, which has reported good results recently. In order to improve our daily clinical practice, validated prognostic scores should be elaborated, integrating clinical and biological markers of prognosis. Finally, we propose an algorithm considering clinical history and biological markers to intercept patients with high risk of clinical relapse. abstract_id: PUBMED:33392214 Global Studies of Using Fecal Biomarkers in Predicting Relapse in Inflammatory Bowel Disease. Inflammatory bowel disease (IBD) is a chronic inflammatory condition of the gastrointestinal tract mainly comprising two forms including Crohn's disease (CD) and ulcerative colitis (UC). IBD is a lifelong relapsing remitting disease and relapses occur at random patterns which are unpredictable. Fecal biomarkers have been increasingly used to assess disease activity in IBD due to their positive correlations with intestinal inflammation. Recent studies have also assessed the use of fecal biomarkers in predicting relapse and post-operative recurrence. This review provides information from global studies of using fecal calprotectin, lactoferrin and S100A12 to predict relapse in IBD. Strategies for further studies and the use of these fecal biomarkers for personalized management in IBD are also discussed. abstract_id: PUBMED:21122498 Does fecal calprotectin predict relapse in patients with Crohn's disease and ulcerative colitis? Background And Aims: An evaluation is made of the utility of fecal calprotectin in predicting relapse in patients with inflammatory bowel disease (IBD). The possible differences in its predictive capacity in Crohn's disease (CD) versus ulcerative colitis (UC), and the different phenotypes, are also examined. Methods: This is a prospective study with 135 patients diagnosed with IBD in clinical remission for at least 3 months. The patients submitted a stool sample within 24 hours after the baseline visit, for the measurement of fecal calprotectin. All patients were followed-up on for one year. Results: Sixty-six patients had CD and 69 UC. Thirty-nine (30%) suffered from relapse. The fecal calprotectin concentration was higher among the patients with relapse than in those that remained in remission: 444 µg/g (95% CI 34-983) versus 112 µg/g (95% CI 22-996); p&lt;0.01. Patients with CD and calprotectin&gt;200 µg/g relapsed 4 times more often than those with lower marker concentrations. In UC, calprotectin&gt;120 µg/g was associated with a 6-fold increase in the probability of disease activity outbreak. The predictive value was similar in UC and CD with colon involvement and inflammatory pattern. In this group, calprotectin&gt;120 µg/g predicted relapse risk with a sensitivity of 80% and a specificity of 60%. Relapse predictive capacity was lower in patients with ileal disease. Conclusions: Fecal calprotectin may be a useful marker for predicting relapse in patients with IBD. Its predictive value is greater in UC and CD with colon involvement and inflammatory pattern, compared with ileal CD. abstract_id: PUBMED:25052347 Does fecal calprotectin predict short-term relapse after stopping TNFα-blocking agents in inflammatory bowel disease patients in deep remission? Background And Aims: This prospective multicenter study examined whether elevated fecal calprotec tin (FC) concentrations after stopping TNFα-blocking therapy can predict clinical or endoscopic relapse. In addition, we evaluated the impact of histological remission on the relapse risk. Methods: We enrolled inflammatory bowel disease (IBD) patients who were in clinical, endoscopic, and FC-based (&lt; 100 μg/g) remission after a minimum 11 months of TNFα-blocking therapy. The patients were followed-up for 12 months after the discontinuation of TNFα-blocking therapy. FC was collected monthly for the first 6 months and thereafter every second month. Ileocolonoscopy was performed at inclusion, at 4 months, at the study end, and at the time of clinical relapse. Results: Of 52 enrolled patients, 49 (16 Crohn's disease, 33 ulcerative colitis/IBD unclassified) provided the stool samples requested and comprised the study group. During the follow-up, 15/49 (31%) relapsed, whereas 34 (69%) remained in remission. Patients relapsing showed constantly elevated FC levels for a median of 94 (13-317) days before the relapse. Significant increase in median FC levels was seen 2 (p = 0.0014), 4 (p = 0.0056), and 6 (p = 0.0029) months before endoscopic relapse. Constantly normal FC concentrations during the follow-up were highly predictive for clinical and endoscopic remission. Normal FC concentrations in patients with remission were associated with histological remission. Conclusion: FC seems to increase and remain elevated before clinical or endoscopic relapse, suggesting that it can be used as a surrogate marker for predicting and identifying patients requiring close follow-up in clinical practice. abstract_id: PUBMED:19291780 Fecal calprotectin and lactoferrin for the prediction of inflammatory bowel disease relapse. Background: The purpose of the study was to determine the role of fecal calprotectin and lactoferrin in the prediction of inflammatory bowel disease relapses, both in patients with ulcerative colitis (UC) and Crohn's disease (CD), in a large, long-term, follow-up study. Methods: The prospective multicenter study included CD and UC patients who had been in clinical remission for 6 months. At baseline, patients provided a single stool sample for calprotectin and lactoferrin determination. Follow-up was 12 months in patients showing no relapse and until activity flare in relapsing patients. Results: In all, 163 patients (89 CD, 74 UC) were included. Twenty-six patients (16%) relapsed during follow-up. Calprotectin concentrations in patients who suffered a relapse were higher than in nonrelapsing patients (239 +/- 150 versus 136 +/- 158 microg/g; P &lt; 0.001). Relapse risk was higher in patients having high (&gt;150 microg/g) calprotectin concentrations (30% versus 7.8%; P &lt; 0.001) or positive lactoferrin (25% versus 10%; P &lt; 0.05). Fecal calprotectin (&gt;150 microg/g) sensitivity and specificity to predict relapse were 69% and 69%, respectively. Corresponding values for lactoferrin were 62% and 65%, respectively. The area under the receiver operating characteristic curve to predict relapse using calprotectin determination was 0.73 (0.69 for UC and 0.77 for CD). Better results were obtained when only colonic CD disease or only relapses during the first 3 months were considered (100% sensitivity). High fecal calprotectin levels or lactoferrin positivity was associated with clinical relapse in Kaplan-Meier survival analysis, and both fecal tests were associated with relapse in the multivariate analysis. Conclusions: Fecal calprotectin and lactoferrin determination may be useful in predicting impending clinical relapse-especially during the following 3 months-in both CD and UC patients. abstract_id: PUBMED:25811118 Fecal Calprotectin as Predictor of Relapse in Patients With Inflammatory Bowel Disease Under Maintenance Infliximab Therapy. Background And Goals: Predicting relapse in Inflammatory Bowel Disease (IBD) could allow for early changes of treatment. Close monitoring of fecal calprotectin (FC) could be useful to predict relapse in IBD. Aim of the study was to evaluate the predictive value of a rapid FC test to predict flares in patients with IBD under maintenance therapy with Infliximab. Study: A prospective observational cohort study was designed. IBD patients in clinical remission under maintenance Infliximab therapy were included. FC was measured using a rapid test on a stool sample obtained within 24 hours before Infliximab infusion. Clinical examination was performed 2 months after that infusion. Results: Fifty-three patients were included (52.8% female). Thirty-three patients (62.3%) had Crohn's disease and 20 (37.7%) had ulcerative colitis. All patients were in remission at inclusion. After 2 months, 41 patients (77.4%) remained in clinical remission and 12 (22.6%) presented a relapse. FC (mean±SD) in relapsing and not-relapsing disease was 332±168 and 110±163 µg/g, respectively (P&lt;0.005). A FC concentration&gt;160 µg/g had a sensitivity of 91.7%, and specificity of 82.9% to predict relapse. Conclusions: In IBD patients under Infliximab maintenance therapy, high FC levels allow predicting relapse within the following 2 months. Long-term remission is associated with low calprotectin levels. Further studies are required to confirm these results. abstract_id: PUBMED:30873890 Prediction of treatment outcome and relapse in inflammatory bowel disease. Introduction: Prediction of treatment outcome and clinical relapse in patients with inflammatory bowel disease (IBD), either ulcerative colitis (UC) or Crohn's disease (CD), is particularly important because therapeutics for IBD are not always effective and patients in remission could frequently relapse. Because undergoing endoscopy for the purpose is sometimes invasive and burdensome to patients, the performance of surrogate biomarkers has been investigated. Areas covered: We particularly featured the performance of patient symptoms, blood markers including C-reactive protein (CRP), fecal markers including fecal calprotectin (Fcal) and fecal immunochemical test (FIT) for prediction of endoscopic mucosal healing (MH) and prediction of relapse. Studies of other modalities and therapeutic drug monitoring (TDM) have also been explored. Expert opinion: Meticulous evaluation of patient symptoms could be predictive for MH in UC. CRP and Fcal may be accurate in prediction of MH of CD when MH is evaluated throughout the entire intestine including the small bowel. Repeated measurements of fecal markers including Fcal and FIT in patients with clinical remission would raise predictability of relapse. Prediction of treatment outcome by monitoring with blood markers including CRP, fecal markers including Fcal, and TDM has frequently been performed in recent clinical trials and shown to be effective. abstract_id: PUBMED:22275341 Role of fecal calprotectin testing to predict relapse in teenagers with inflammatory bowel disease who report full disease control. Background: Teenagers with inflammatory bowel disease undergo regular follow-up visits to watch for symptoms that may indicate relapse. Current disease activity is frequently estimated with the use of the Pediatric Ulcerative Colitis Activity Index (PUCAI) and the Pediatric Crohn's Disease Activity Index (PCDAI). We examined the capacity of fecal calprotectin and C-reactive protein (CRP) to predict relapse in teenagers who report no symptoms. Second, we examined whether calprotectin and CRP as an "add-on test" improve the specificity of PUCAI or PCDAI to predict relapse. Methods: We collected data of 62 consecutive teenagers (31 with Crohn's disease and 31 with ulcerative colitis) who scored their degree of disease control between 90 and 100% on two successive outpatient clinic visits. Calprotectin, PUCAI or PCDAI, and CRP were measured at baseline. Primary outcome was symptomatic relapse within 3 months of baseline, necessitating the introduction of steroids, exclusive enteral nutrition, or an aminosalicylate dose escalation. Results: Fifteen teenagers (24%) developed symptomatic relapse within 3 months of baseline. Based on the receiver operating characteristic curve, the optimum calprotectin cutpoint to differentiate high from low risk patients was 500 μg/g. The PUCAI or PCDAI predicted relapse in 42% (11/26) of teenagers with a positive result (score ≥ 10 points), while a negative PUCAI or PCDAI result reduced the risk of relapse to 11% (4/36). Teenagers with a positive calprotectin test had a 53% (10/19) risk of progressing to symptomatic relapse within 3 months, whereas a negative calprotectin result gave a 12% (5/43) risk of symptomatic relapse. A positive CRP result (cutoff 10 mg/L) gave a 50% (4/8) risk of relapse, whereas a negative CRP result hardly reduced the risk compared with the pretest probability (from 24% to 21% (11/53)). As an add-on test after PUCAI or PCDAI, the calprotectin test limited the number of false positives and thus increased the specificity to detect gastrointestinal inflammation: 60% (9/15) of teenagers with positive concordant test results progressed to symptomatic relapse. Negative concordance reduced the risk of relapse to 10% (3/32). CRP contributed little as add-on test after PUCAI or PCDAI: two of five teenagers with positive concordant tests progressed to symptomatic relapse (40%). Conclusions: Unlike CRP, fecal calprotectin as an add-on test after PUCAI or PCDAI facilitates recognition of preclinical relapse. This could help to identify teenagers who require treatment intensification at the time of minimal disease rather than at the time of clinically overt relapse. Further studies are warranted to determine the impact of fecal calprotectin testing on treatment management and outcome. abstract_id: PUBMED:27984399 Accuracy of Consecutive Fecal Calprotectin Measurements to Predict Relapse in Inflammatory Bowel Disease Patients Under Maintenance With Anti-TNF Therapy: A Prospective Longitudinal Cohort Study. Background And Goal: Predicting relapse in inflammatory bowel disease (IBD) patients could allow early changes in therapy. We aimed at evaluating the accuracy of consecutive fecal calprotectin (FC) measurements to predict flares in IBD patients under maintenance treatment with anti-tumor necrosis factor (TNF) drugs. Study: A prospective longitudinal cohort study with 16-month follow-up period was designed. IBD patients in clinical remission for at least 6 months under anti-TNF therapy were included. FC was quantified at 4-month intervals for 1 year, and patients were clinically evaluated for relapse at 2-month intervals. Diagnostic accuracy of FC for predicting relapse was evaluated by receiver-operating characteristic curve analysis. Results: In total, 95 of 106 included patients finalized the study and were analyzed (median age 44 y, 50.5% female, 75% with Crohn's disease). A total of 30 patients (31.6%) had a relapse over follow-up. FC concentration was significantly higher in patients who relapsed (477 μg/g) than in patients who maintained in remission (65 μg/g) (P&lt;0.005). The optimal cutoff to predict remission was 130 μg/g (negative predictive value of 100%), and 300 μg/g to predict relapse (positive predictive value of 78.3%). Conclusions: FC is a good predictor of clinical relapse and a particularly good predictor of remission over the following 4 months in patients with IBD on maintenance therapy with anti-TNF drugs. FC levels &lt;130 μg/g is consistently associated with maintained disease remission, whereas concentrations &gt;300 μg/g allow predicting relapse with a high probability at any time over the following 4 months. abstract_id: PUBMED:26884729 Fecal immunochemical test as a biomarker for inflammatory bowel diseases: can it rival fecal calprotectin? Accurate evaluation of disease activity is essential for choosing an appropriate treatment and follow-up plan for patients with inflammatory bowel disease (IBD). Endoscopy is required for accurately evaluating disease activity, but the procedures are sometimes invasive and burdensome to patients. Therefore, alternative non-invasive methods for evaluating or predicting disease activity including mucosal status are desirable. Fecal calprotectin (Fcal) is the most widely used fecal marker for IBD, and many articles have described the performance of the marker in predicting disease activity, mucosal healing (MH), treatment efficacy, and risk of relapse. Fecal immunochemical test (FIT) can quantify the concentration of hemoglobin in stool and was originally used for the screening of colorectal cancer. We recently reported that FIT is also a useful biomarker for IBD. A direct comparison between the use of Fcal and FIT showed that both methods predicted MH in ulcerative colitis equally well. However, in the case of Crohn's disease, FIT was less sensitive to lesions in the small intestine, compared to Fcal. FIT holds several advantages over Fcal in regards to user-friendliness, including a lower cost, easy and clean handling, and the ability to make rapid measurements by using an automated measurement system. However, there is insufficient data to support the application of FIT in IBD. Further studies into the use of FIT for evaluating the inflammatory status of IBD are warranted. Answer: Yes, fecal calprotectin (FC) has been shown to predict relapse in patients with Crohn's disease (CD) and ulcerative colitis (UC). Studies have found that elevated levels of FC are associated with an increased risk of relapse in patients with IBD. For instance, patients with CD and FC levels greater than 200 µg/g relapsed four times more often than those with lower marker concentrations, and in UC, FC levels greater than 120 µg/g were associated with a six-fold increase in the probability of disease activity outbreak (PUBMED:21122498). Additionally, consistently elevated FC levels have been observed for a median of 94 days before clinical or endoscopic relapse, suggesting that FC can be used as a surrogate marker for predicting and identifying patients requiring close follow-up in clinical practice (PUBMED:25052347). Furthermore, fecal calprotectin and lactoferrin determination may be useful in predicting impending clinical relapse, especially during the following 3 months, in both CD and UC patients (PUBMED:19291780). In patients under Infliximab maintenance therapy, high FC levels allow predicting relapse within the following 2 months, and long-term remission is associated with low calprotectin levels (PUBMED:25811118). Consecutive FC measurements have also been shown to accurately predict clinical relapse and particularly good prediction of remission over the following 4 months in patients with IBD on maintenance therapy with anti-TNF drugs (PUBMED:27984399). In summary, while no single marker is completely reliable alone as a flare predictor, fecal calprotectin has been demonstrated to be a useful marker for predicting relapse in patients with IBD, including those with Crohn's disease and ulcerative colitis (PUBMED:26811644; PUBMED:33392214).
Instruction: Should we adjust health education methodology to low-educated employees needs? Abstracts: abstract_id: PUBMED:24820031 Should we adjust health education methodology to low-educated employees needs? Findings from Latvia, Poland, Slovenia and Spain. Objectives: The presented study explored health beliefs and experiences as well as health education needs of low-educated employees (LEEs) (incomplete primary, primary, lower secondary and basic vocational education) in comparison to those with higher education (secondary and tertiary education) in four European countries: Latvia, Poland, Slovenia and Spain. The main aim was to identify a specificity of low-educated employees (LEEs) by capturing their opinions, experiences, attitudes and needs concerning health education. Material And Methods: The sample consisted of 1691 individuals with the status of an employee (approximately 400 respondents in each of 4 countries participating in the project). The respondents were aged 25-54 (both the control group and the target group consisted in 1/3 of the following age groups: 25-34, 35-44 and 45-54). The respondents were interviewed during the years 2009 and 2010 with a structured questionnaire concerning their health, health behaviours as well as educational needs concerning health education. Results: The study revealed substantial differences in the attitudes of people from this group concerning methodology of health education. LEEs prefer more competitions and campaigns and less written educational materials in comparison to those with higher education. Additionally, they more often perceive a fee, longer time, necessity to take part in a knowledge test and a concern that their health will be checked as factors that can discourage them from taking part in a health training. On the other hand, LEEs can by encouraged to take part in such a training by a media broadcast concerning the event, snacks or lottery during the training, or financial incentives. Conclusions: The results of the study proved the need for specific health education guidelines to conduct health education for low-educated employees. These guidelines should take in account the sources of health education preferred by LEEs as well as the factors that can encourage/discourage their participation in trainings concerning health. abstract_id: PUBMED:34187445 Using intervention mapping to develop 'Healthy HR' aimed at improving sustainable employability of low-educated employees. Background: The perspectives of low-educated employees are often neglected when designing sustainable employability (SE) interventions. As a result, the interventions offered by the employer do often not align with the needs of low-educated employees. This particular group should therefore be actively involved in the process of developing and implementing SE interventions in their work organizations. The current paper describes the development process of a web-based intervention for HR managers and direct supervisors aimed at improving the SE of low-educated employees. This intervention is specifically designed to involve low-educated employees. Methods: The first four steps of the Intervention Mapping (IM) approach were used to systematically develop the intervention with the active involvement of stakeholders. Step 1 comprised a needs assessment including a literature review, empirical evidence, scoping search and several focus group interviews with employees and with representatives of employers. Step 2 formulated the intervention objective. During step 3, suitable theoretical methods were selected and translated to practical applications. Step 4 involved the development of a web-based intervention by integrating all information from the preceding steps. Results: The needs assessment indicated that the employees' active involvement and employees-employer genuine dialogue should be essential characteristics of an SE intervention for low-educated employees. The online toolkit 'Healthy HR' (HHR) was developed, which contains eight steps. Each step consists of one or more tasks helping the employer and employees with developing and implementing SE interventions themselves. One or more dialogue-based tools support each task. The leading principle providing structure within HHR was Adapted Intervention Mapping. Conclusion: Principles of IM appeared to be useful to develop the intervention HHR systematically. This development process resulted in a practical online toolkit that supports employers in the development and implementation of local SE interventions tailored to the needs of low-educated employees. These employees should be actively involved in the process through a dialogue-based approach. By using IM principles, HHR is expected to increase the effectiveness in bettering the health and well-being of low-educated employees. abstract_id: PUBMED:33014964 Does Dialogue Improve the Sustainable Employability of Low-Educated Employees? A Study Protocol for an Effect and Process Evaluation of "Healthy HR". Background: There is a need to develop sustainable employability (SE) interventions that are better aligned to the needs of low-educated employees. This group needs to get a voice in intervention development and implementation. In this study, a dialogue-based approach is proposed consisting of an online step-by-step support toolkit for employers, "Healthy Human Resources" (HHR). When intervening, this toolkit enables and stimulates employers to have a continuous dialogue with their low-educated employees. By improving the employees' job control, HHR is aimed at cost-beneficially improving SE. This paper describes the protocol of the evaluation study to evaluate the effectiveness and implementation process of HHR on the SE of low-educated employees. Methods: The protocol of the evaluation study consists of: (1) an effect evaluation with a pretest-posttest design with a 1-year follow-up in five work organizations in the Netherlands deploying low-educated employees and with SE as the primary outcome and job control as the secondary outcome. The effect evaluation is expanded with a budget impact analysis; (2) a mixed-method process evaluation at 6 and 12 months after the start of HHR to evaluate the whole implementation process of HHR. This includes the experiences with HHR of various stakeholders, such as employees, human resource managers, and line managers. Discussion: The effect evaluation will give insight into the effects of HHR on the SE of low-educated employees. The process evaluation will provide insight into the underlying mechanisms of the (in) effectiveness of HHR. By improving dialogue, we hypothesize that HHR, through enhancing job control, will strengthen the SE of low-educated employees. Also for helping with tackling the socioeconomic health gap, if proven effective, the implementation of HHR on a wider scale can be recommended. abstract_id: PUBMED:22774464 Employees health education--challenges according to the educational level Background: Article addresses the problem of increasing Polish employees health education effectiveness according to the differences in educational level. Materials And Methods: Research model assume that effective method of developing recommendations improving the health education will synthesise scientific findings regarding methodology of conducting such education and knowledge about needs of two target groups: low and high educated employees. Educational solutions were searched in publications related to: health education, andragogy, propaganda and direct marketing. The empirical material used to characterize two target groups came from four research (qualitative and quantitative) conducted by the National Centre for Workplace Health Promotion (Nofer Institute of Occupational Medicine) in 2007-2010. Results: Low educated employees' health education should be focused on increasing responsibility for health and strengthening their self-confidence according to the introduction of healthy lifestyle changes. To achieve these goals, important issue is to build their motivation to develop knowledge about taking care of health. In providing such information we should avoid the methods associated with school. Another important issue is creating an appropriate infrastructure and conditions facilitating the change of harmful behaviors undertaken at home and in the workplace. According to high-educated employees a challenge is to support taking health behaviors--although they are convinced it is important for their health, such behaviors are perceived as a difficult and freedom restriction. Promoting behavior change techniques, avoiding prohibitions in the educational messages and creating favorable climate for taking care of health in groups they participate are needed. abstract_id: PUBMED:29947578 Health Literacy Deficits Found Among Educated, Insured University Employees. Evidence has consistently shown that low/limited health literacy (HL) is associated with negative health consequences and higher costs for individuals and society. To generate internal data for employee training and health/wellness programming, an HL assessment of 120 university employees was conducted using the Newest Vital Sign (NVS), a valid and reliable clinical screening tool that asks individuals to interpret a nutrition label. Sociodemographic data were collected and time to administer the NVS tool was also measured. Even in this employed, well-educated sample (mean years of formal education was 16.6 years), 17% had scores indicating limited or possibly limited HL. Findings have implications for occupational training and health providers and programs. Even a well-educated workforce benefits from addressing HL challenges or situational issues with universal strategies. This project supports initiatives to assist employees better navigate, understand, and use health information and services to improve their health. abstract_id: PUBMED:13677040 Needs assessment for continuing education and health promotion training for Senegalese chief nurses This study, whose purpose is to contribute to improving chief nurses' (ICP, French acronym) performance and practice in the realm of health promotion, was conducted in the medical region of Kaolak in Senegal. The objective is to identify the needs for ICP's continuing education and health promotion training and to delineate their priorities. This is a descriptive study characterised by a combined methodology which integrates a qualitative phase and a quantitative phase, in which six continuing education and training experts and 74 ICPs working in the region participated respectively. The method initially allowed the team to identify what are in theory the different types of health promotion skills that ICPs have, to validate this typology and it's contents through an expert panel and to adjust this to the practice of nursing at the community level in the Senegalese national context. Second, the range of training needs were measured by the ICPs and the classification of abilities was established in order of priority. This study allows for a comprehensive and detailed listing of needs for continuing education and training among Senegalese ICPs based on consensus on what their abilities are. The study also suggests that nurses' initial education and training be adapted and continuing education and training be established. abstract_id: PUBMED:27466267 Smoking Education for Low-Educated Adolescents: Comparing Print and Audiovisual Messages. This study aims to provide insight into which modality is most effective for educating low-educated adolescents about smoking. It compares the persuasive effects of print and audiovisual smoking education materials. We conducted a field experiment with two conditions (print vs. video) and three measurement times (Time 1, Time 2, and Time 3). A total of 221 high school students in the second year of the lowest levels of education in the Netherlands participated at all three time points of the study. Results showed that participants in both conditions had more negative beliefs about smoking after being exposed to the smoking education than before, but there were no differences between the print and video version in this effect. However, the video version did make the attitude toward smoking more negative at Time 3 compared to baseline, whereas the text version did not, which suggests that the video version was more effective for educating low-educated adolescents about smoking. abstract_id: PUBMED:31288632 Oral Health Status and Treatment Needs of Government Employees in New Delhi. Oral diseases, including dental caries, periodontitis, and edentulism (toothlessness), affect about 3.5 billion people worldwide. A cross-sectional study was conducted to assess the oral health status and treatment needs of government employees employed at an organization in New Delhi. Oral health information was recorded using the standard World Health Organization's (WHO) Oral Health Assessment Tool. A total of 476 employees at various levels of administration were screened at their worksite. Dental caries and periodontal disease were present in more than half of the participants. Around 56% had decayed teeth, 20% had missing teeth, and 16% had filled teeth. Bleeding from gums was seen in 71% and periodontitis in 59% of participants. Preventive treatment and oral health promotion was required in at least 41% of the screened individuals. Annual workplace oral examinations may help in decreasing the oral disease burden and create awareness on the oral health among employees. abstract_id: PUBMED:12236661 Maternal and child health graduate and continuing education needs: a national assessment. Objectives: The purpose of this report is to describe the methodology and results of a recent national assessment of long-term graduate and short-term continuing education needs of public health and health care professionals who serve or are administratively responsible for the U.S. maternal and child health population and also to offer recommendations for future training initiatives. Methods: The target of this needs assessment was all directors of state MCH, CSHCN and Medicaid agencies, as well as a 20% random sample of local public health departments. A 7-page needs assessment form was used to assess the importance of and need for supporting graduate and continuing education training in specific skill and content areas. The needs assessment also addressed barriers to pursuing graduate and continuing education. Respondents (n = 274) were asked to indicate the capacity of their agency for providing continuing education as well as their preferred modalities for training. Results: Regardless of agency type, i.e., state MCH, CSHCN, Medicaid or local health department, having employees with a graduate education in MCH was perceived to be of benefit by more than 70% of the respondents. Leadership, systems development, management, administration, analytic, policy and advocacy skills, as well as genetics, dentistry, nutrition and nursing, were all identified as critical unmet needs areas for professionals with graduate training. Education costs, loss of income, and time constraints were the identified barriers to graduate education. More than 90% of respondents from each agency viewed continuing education as a benefit for their staff, although the respondents indicated that their agencies have limited capacity to either provide such training or to assess their staffs need for continuing education. Program managers and staff were perceived in greatest need of continuing education and core public health skills, leadership, and administration were among the most frequently listed topics to receive continuing education training support dollars. Time away from work, lack of staff to cover functions, and cost were the top barriers to receiving continuing education. While attending on-site, in-state, small conferences was the continuing education modality of first preference, there was also considerable interest expressed in web-based training. Conclusions: Six recommendations were developed on the basis of the findings and address the following areas: the ongoing need for continued support of both graduate and continuing education efforts; the development of a national MCH training policy analysis center; the incorporation of routine assessments of training needs by states as part of their annual needs assessments; the promotion of alternative modalities for training, i.e., web-based; and, the sponsorship of academic/practice partnerships for cross-training. abstract_id: PUBMED:36093655 Return to work of employees with low levels of education: The employers' role and perspective. Background: To achieve adequate return to work (RTW) after sickness absence, Dutch legislation prescribes cooperation between absent employees and employers. Yet, we lack insight into how employees with low levels of education exercise influence over (i.e. self-direct) RTW. Objective: This study aimed to enhance our understanding of: (A) the role that employers play in the self-direction of employees with low levels of education over their RTW; (B) how employers perceive these employees' efforts (or lack thereof) to self-direct their own RTW; and (C) how employers understand and interpret the behaviours of these employees. Social cognitive theory served as a framework. Methods: A qualitative study was conducted with 13 employer representatives using semi-structured interviews. Data were analysed in NVivo12 using a template approach. Results: Employers tend to play a guiding, directive role in employees' RTW. According to employers, employees generally comply with the employers' decisions and suggestions, whether or not they have tried to realise their own preferences regarding mode and timing of RTW. Employers interpret such employee behaviours from the perspective of environmental (e.g. financial pressures to RTW) and person-related factors (e.g. sickness and RTW perceptions). Conclusions: Employers, rather than employees direct the employees' RTW. Employers should give voice to employees and enable them to have more control over their RTW. Future research should acquire more insight in the employees' perspective. Answer: Yes, we should adjust health education methodology to the needs of low-educated employees (LEEs). The study presented in abstract PUBMED:24820031 found that LEEs have specific preferences for health education, favoring more interactive and engaging methods such as competitions and campaigns, and less written material. They are also more discouraged by factors such as fees, time commitments, knowledge tests, and health checks, but can be encouraged by media broadcasts, snacks or lotteries during training, and financial incentives. This indicates the need for tailored health education guidelines that consider the preferred sources and motivational factors for LEEs. Additionally, the development of the 'Healthy HR' web-based intervention, as described in abstract PUBMED:34187445, was based on the active involvement of LEEs and aimed at improving their sustainable employability (SE). The intervention mapping approach used in the development process highlighted the importance of genuine dialogue and active involvement of LEEs in creating SE interventions. The study protocol described in abstract PUBMED:33014964 further supports the idea of a dialogue-based approach to improve the SE of LEEs. The 'Healthy Human Resources' (HHR) toolkit was designed to stimulate continuous dialogue between employers and their low-educated employees, enhancing job control and, consequently, SE. Abstract PUBMED:22774464 discusses the challenges in health education for employees with different educational levels, emphasizing the need for a focus on building motivation and confidence among LEEs, and avoiding traditional school-like methods. In summary, the evidence suggests that health education for LEEs should be adjusted to their specific needs and preferences, involving them actively in the process and utilizing methods that are interactive, engaging, and supportive of their circumstances (PUBMED:24820031, PUBMED:34187445, PUBMED:33014964, PUBMED:22774464).
Instruction: The effect of measurement error in risk factors that change over time in cohort studies: do simple methods overcorrect for 'regression dilution'? Abstracts: abstract_id: PUBMED:16051613 The effect of measurement error in risk factors that change over time in cohort studies: do simple methods overcorrect for 'regression dilution'? Background: The attenuation of the relationship between disease and a risk factor subject to error through 'regression dilution' is well recognized, and researchers often make attempts to adjust for its effects. However, the adjustment methods most often adopted in cohort studies make an implicit assumption that the relationship is driven exclusively by current error-free levels of the risk factor and not by past levels. Here we investigate the bias that is introduced if this assumption is invalid. Methods: We model disease risk at a particular time in terms of error-free levels of the risk factor at that time and in past periods, and summarize the 'life-course' risk factor-disease relationship using crude current level, history adjusted current level and lifetime level associations. Using systolic blood pressure data from the Framingham Heart Study we show the impact of measurement error on these associations and investigate the biases that can occur with simple correction methods. Results: A simple 'ratio of ranges' type correction factor overestimates the lifetime level association by 29% in the presence of a relatively modest dependency of current risk on past levels (levels 5 years ago half as predictive of current risk as current levels). Conclusions: Simple methods of correction for regression dilution bias can lead to substantial overcorrection if the risk factor-disease relationship is not short term. abstract_id: PUBMED:21300627 Comparing measurement error correction methods for rate-of-change exposure variables in survival analysis. In this article we focus on comparing measurement error correction methods for rate-of-change exposure variables in survival analysis, when longitudinal data are observed prior to the follow-up time. Motivational examples include the analysis of the association between changes in cardiovascular risk factors and subsequent onset of coronary events. We derive a measurement error model for the rate of change, estimated through subject-specific linear regression, assuming an additive measurement error model for the time-specific measurements. The rate of change is then included as a time-invariant variable in a Cox proportional hazards model, adjusting for the first time-specific measurement (baseline) and an error-free covariate. In a simulation study, we compared bias, standard deviation and mean squared error (MSE) for the regression calibration (RC) and the simulation-extrapolation (SIMEX) estimators. Our findings indicate that when the amount of measurement error is substantial, RC should be the preferred method, since it has smaller MSE for estimating the coefficients of the rate of change and of the variable measured without error. However, when the amount of measurement error is small, the choice of the method should take into account the event rate in the population and the effect size to be estimated. An application to an observational study, as well as examples of published studies where our model could have been applied, are also provided. abstract_id: PUBMED:12188008 Random errors in the measurement of 10 cardiovascular risk factors. Random errors in the measurement of 10 commonly investigated cardiovascular risk factors (systolic and diastolic blood pressure, blood cholesterol, blood glucose, pulse rate, body mass index (BMI), cigarette consumption, passive smoking, alcohol intake and physical exercise) were assessed in a general population cohort (n = 2517) and a workforce cohort (n = 8008). Random errors were estimated from regression dilution ratios (lower ratios imply greater random error, and a ratio of one implies no random error). All of the risk factors, except for BMI (which had regression dilution ratios of 0.93 and 0.98 in the two cohorts), were measured with substantial levels of random error. Particularly low regression dilution ratios were observed for physical exercise (0.28 and 0.39) and pulse rate (0.47 and 0.56). For each of these risk factors, with the possible exception of BMI, associations with long-term average values could be importantly biased toward the null unless appropriate corrections are made. abstract_id: PUBMED:26175288 Regression dilution bias in blood pressure and body mass index in a longitudinal population-based cohort study. Background: Use of single measurement of risk factors can distort their estimated effects, due to random error in measurements. The aim of this study was to examine the extent of underestimation in the estimated effect of common variables in physical exam i.e. systolic and diastolic blood pressure (SBP, DBP) and body mass index (BMI) on cardiovascular diseases in Tehran Lipid and Glucose Study (TLGS). Methods: A subsample (1167 men and 1786 women) of the original cohort, who had replicate measures of the variables in triennial interval, was used to calculate the regression dilution ratios (RDRs) in men and women. RDRs were determined by parametric and nonparametric methods. Hazard ratios (HR) of risk factors, per one standard deviation change, were corrected for regression dilution bias. Results: The estimated RDRs by parametric method in men and women were 45% and 35% for SBP and 54% and 64% for DBP respectively. There were 26% and 25% underestimation in HR of SBP and 23% and 33% in HR of DBP in men and women. The corresponding underestimation for BMI was about 8%. RDRs of men and women and in age groups by both methods were fairly similar. They were relatively constant during the 10-year follow-up for SBP and BMI. Conclusions: Using baseline measurements of blood pressure underestimate its real association with CVD events and the estimated HRs. The underestimations are independent of age and sex, and it can be fairly constant in short to moderate time intervals. abstract_id: PUBMED:25139469 Covariate measurement error correction methods in mediation analysis with failure time data. Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. abstract_id: PUBMED:11750238 A regression model for longitudinal change in the presence of measurement error. Purpose: The analysis of change in measured variables has become quite popular in studies where data are collected repeatedly over time. The authors describe some of the potential pitfalls in the analysis of change when the variable for change is measured with error. They show that regression analysis is often biased, possibly leading to erroneous results. Methods: A simple method to correct for measurement error bias in regression models that model change is presented. Results And Conclusions: The two examples illustrate how measurement error can adversely affect an analysis. The bias-corrected approach yields valid results. abstract_id: PUBMED:22401135 Regression dilution bias: tools for correction methods and sample size calculation. Background: Random errors in measurement of a risk factor will introduce downward bias of an estimated association to a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. Aims And Methods: In this article we give a non-technical description of designs of reliability studies with emphasis on selection of individuals for a repeated measurement, assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is a continuous variable. Also, we describe situations where correction for regression dilution bias is not appropriate. Results: The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. Also, we supply programs for estimation of the number of individuals needed in the reliability study and for choice of its design. Conclusions: Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected. abstract_id: PUBMED:23918853 Within-person variability in calculated risk factors: comparing the aetiological association of adiposity ratios with risk of coronary heart disease. Background: Within-person variability in measured values of a risk factor can bias its association with disease. We investigated the extent of regression dilution bias in calculated variables and its implications for comparing the aetiological associations of risk factors. Methods: Using a numerical illustration and repeats from 42,300 individuals (12 cohorts), we estimated regression dilution ratios (RDRs) in calculated risk factors [body-mass index (BMI), waist-to-hip ratio (WHR), and waist-to-height ratio (WHtR)] and in their components (height, weight, waist circumference, and hip circumference), assuming the long-term average exposure to be of interest. Error-corrected hazard ratios (HRs) for risk of coronary heart disease (CHD) were compared across adiposity measures per standard-deviation (SD) change in: (i) baseline and (ii) error-corrected levels. Results: RDRs in calculated risk factors depend strongly on the RDRs, correlation, and comparative distributions of the components of these risk factors. For measures of adiposity, the RDR was lower for WHR [RDR: 0.72 (95% confidence interval 0.65-0.80)] than for either of its components [waist circumference: 0.87 (0.85-0.90); hip circumference: 0.90 (0.86-0.93) or for BMI: 0.96 (0.93-0.98) and WHtR: 0.87 (0.85-0.90)], predominantly because of the stronger correlation and more similar distributions observed between waist circumference and hip circumference than between height and weight or between waist circumference and height. Error-corrected HRs for BMI, waist circumference, WHR, and WHtR, were respectively 1.24, 1.30, 1.44, and 1.32 per SD change in baseline levels of these variables, and 1.24, 1.27, 1.35, and 1.30 per SD change in error-corrected levels. Conclusions: The extent of within-person variability relative to between-person variability in calculated risk factors can be considerably larger (or smaller) than in its components. Aetiological associations of risk factors should be compared through the use of error-corrected HRs per SD change in error-corrected levels of these risk factors. abstract_id: PUBMED:38508503 Hierarchical models that address measurement error are needed to evaluate the correlation between treatment effect and control group event rate. Objective: To apply a hierarchical model (HM) that addresses measurement error in regression of the treatment effect on the control group event rate (CR). We compare HM to weighted linear regression (WLR) which is subject to measurement error and mathematical coupling. Study Design And Setting: We reviewed published hierarchical models that address measurement error and implemented a Bayesian version in open-source code to facilitate adoption by meta-analysts. We compared WLR and HM across a very large convenience sample of meta-analyses published in the Cochrane Database of Systematic Reviews. Results: We applied both approaches (WLR and a HM that addresses measurement error) to 3,193 meta-analyses that included 33,071 studies (average 10.28 studies per meta-analysis). A statistically significant slope suggesting an association between the treatment effect and CR was demonstrated with both approaches in 568 (17.19%) meta-analyses, with neither approach in 2,036 (63.77%) meta-analyses, only with WLS in 229 (7.17%) and only with HM in 360 (11.28%) meta-analyses. The majority of slopes were negative (WLR 85%, HM 83%). In the majority of cases, HM had wider confidence intervals (72.53%) and slopes farther from the null (64.77%). Conclusion: Approximately 28% of meta-analyses demonstrate a significant association between the treatment effect and CR when HM is used to address measurement error. User-friendly open-source code is provided to meta-analysts interested in exploring this association. abstract_id: PUBMED:9465995 Adjustment for regression dilution in epidemiological regression analyses. Purpose: The term "regression dilution" describes the dilution/attenuation in a regression coefficient that occurs when a single measured value of a covariate is used instead of the usual or average value over a period of time. This paper reviews the current knowledge concerning a simple method of adjusting for regression dilution in single and multiple covariate situations and illustrates the adjustment procedure. Methods: Formulation of the regression dilution problem as a measurement error problem allows existing measurement error theory to be applied to developing methods of adjustment for regression dilution. This theory leads to a precise method of adjustment for linear regression and approximate methods for logistic and Cox proportional hazards regression. The method involves obtaining the naive estimates of coefficients by assuming that covariates are not measured with error, and then adjusting these coefficients using reliability estimates for the covariates. Methods for estimating the reliability of covariates from the reliability and main study data and a method for the calculation of standard errors and confidence intervals for adjusted coefficients are described. Results: An illustration involving logistic regression analysis of risk factors for death from cardiovascular disease based on cohort and reliability data from the Busselton Health Study shows that the different methods for estimating the adjustment factors give very similar adjusted estimates of coefficients, that univariate adjustment procedures may lead to inappropriate adjustments in multiple covariate situations, whether or not other covariates have intra-individual variation, and when the reliability study is moderate to large, the precision of the estimates of reliability coefficients has little impact on the standard errors of adjusted regression coefficients. Conclusions: The simple method of adjusting regression coefficients for "regression dilution" that arises out of measurement error theory is applicable to many epidemiological settings and is easily implemented. The choice of method to estimate the reliability coefficient has little impact on the results. The practice of applying univariate adjustments in multiple covariate situations is not recommended. Answer: The effect of measurement error in risk factors that change over time in cohort studies, particularly in relation to 'regression dilution', is a recognized issue that can attenuate the relationship between disease and a risk factor. Simple methods often used to correct for regression dilution bias in cohort studies typically assume that the relationship between a risk factor and disease is driven exclusively by current, error-free levels of the risk factor, without considering past levels. However, this assumption may not always be valid, and when it is not, simple correction methods can introduce substantial bias (PUBMED:16051613). For instance, a study using systolic blood pressure data from the Framingham Heart Study demonstrated that a simple 'ratio of ranges' type correction factor overestimated the lifetime level association by 29% when there was a modest dependency of current risk on past levels (PUBMED:16051613). This suggests that simple methods of correction for regression dilution bias can lead to overcorrection if the risk factor-disease relationship is not short term. Therefore, it is important to consider the potential biases introduced by simple correction methods when dealing with risk factors that change over time. Researchers should be cautious and consider more sophisticated methods of correction that take into account the life-course relationship between risk factors and disease, rather than relying solely on simple methods that may not be appropriate for all situations.
Instruction: Does curcumin protect against renal dysfunction following reversible unilateral ureteric obstruction in the rat? Abstracts: abstract_id: PUBMED:21474940 Does curcumin protect against renal dysfunction following reversible unilateral ureteric obstruction in the rat? Aims: To investigate the effect of curcumin on the alterations in renal functional parameters following reversible unilateral ureteric obstruction in the rat. Methods: Wistar rats underwent reversible left ureteric obstruction for 72 h. The group Cm (n = 7) received oral curcumin (200 mg/kg/day), whereas the Vx group (n = 8) had only a vehicle. Results: Ureteric obstruction caused a significant increase in the serum tumour necrosis factor α in both groups. However, the post-obstruction level in the Cm group was significantly lower than in the Vx group. In the Vx group, the glomerular filtration rate, renal blood flow, urine volume and urinary sodium excretion in the left obstructed kidney were significantly lower than those in the right kidney, but the fractional excretion of sodium was comparable in the 2 groups. The left kidney in the Cm group behaved similar to that in the Vx group. Moreover, there was no difference in any variable when comparing the right and left kidneys among the groups. Conclusion: Curcumin appears to have no significant protective effect on the haemodynamic or tubular glomerular functions when measured as early as 3 days following reversible ureteric obstruction despite the amelioration in some of the indicators of renal injury. abstract_id: PUBMED:37108528 The Effect of Hypertension on the Recovery of Renal Dysfunction following Reversal of Unilateral Ureteral Obstruction in the Rat. Both ureteral obstruction (UO) and hypertension are common conditions that affect kidney functions. Hypertension and chronic kidney disease are closely associated with an overlapping and intermingled cause-and-effect relationship. The effect of hypertension on the renal dysfunction following reversible UO has not been studied previously. To study this effect, spontaneously hypertensive (G-HT, n = 10) and normotensive Wistar (G-NT, n = 10) rats underwent 48-h reversible left unilateral UO (UUO), and the effect of UUO was studied 96 h following UUO reversal. The glomerular filtration rate, renal blood flow, and renal tubular functions such as the fractional excretion of sodium in the post-obstructed left kidney (POK) in both groups were significantly altered compared with the non-obstructed right kidney (NOK). However, the alterations in the G-HT were significantly more exaggerated when compared with the G-NT. Similar findings were observed with the histological features, gene expression of kidney injury markers, pro-inflammatory, pro-fibrotic and pro-apoptotic cytokines, and pro-collagen, as well as tissue levels of apoptotic markers. We conclude that hypertension has significantly exaggerated the alterations in renal functions and other parameters of renal injury associated with UUO. abstract_id: PUBMED:35619661 The long-term renal effects of short periods of unilateral ureteral obstruction. The response of the kidney and its recovery following unilateral ureteral obstruction (UUO) depend on several factors including the duration of obstruction, the species involved and the age of the individual. In neonates, there is compelling evidence to indicate that even short periods of reversible UUO might lead to long-term renal impairment. In adults, the glomerular filtration rate returns to baseline values soon after the release of short periods of UUO. Despite this return to normal, experimental data have demonstrated that short periods of reversible UUO could lead to long-term renal functional alterations including tubular atrophy, interstitial fibrosis and urinary albumin leakage in addition to alterations in pro-inflammatory and pro-fibrotic cytokines. The concentrating ability of the kidney and its response to stimuli such as renal nerve stimulation and physiological doses of angiotensin-II were also shown to be affected at least in the intermediate-term following UUO reversal. In humans, epidemiological studies have also demonstrated a clear association between long-term renal impairment and ureteral obstruction. However, in clinical studies, it is usually difficult to precisely determine the degree and the time of onset of ureteral obstruction and more studies are required in this field. In conclusion, the available experimental and clinical data indicate that even short periods of UUO can cause long-term renal dysfunction. These findings might have clinical implications related to the early intervention following acute onset of UUO and to the need for long-term monitoring of renal functions particularly in patients with underlying chronic renal disease. abstract_id: PUBMED:30592154 Effect of diabetes mellitus on the recovery of changes in renal functions and glomerular permeability following reversible 24-hour unilateral ureteral obstruction. Background: Following reversal of short periods of ureteral obstruction (UO), glomerular and tubular renal dysfunction recovers with time. Diabetes mellitus (DM) affects glomerular function; thus, the ability of diabetic kidneys to recover from UO may be impaired. This study investigated the effects of long-term DM on the recovery of glomerular and tubular function, as well as permeability of the glomerular filtration barrier (GFB), after unilateral UO (UUO) reversal. Methods: Diabetes mellitus was induced in Wistar rats by intraperitoneal streptozotocin. All diabetic and age-matched control rats underwent reversible 24-hour left UUO. The renal function of both kidneys was measured using clearance techniques 3 hours and 7 and 30 days after UUO reversal. Glomerular permeability was assessed by measuring the glomerular sieving coefficients for fluorescein isothiocyanate-conjugated Ficoll (molecular radius: 20-90 Å). Results: Unilateral UO induced transient changes in the size selectivity of GFB small pores. However, the size selectivity function of large pores had not returned to baseline even 30 days after UUO reversal. Diabetes mellitus caused exaggerated early alterations in glomerular hemodynamic and tubular function, as well as size selectivity dysfunction of both small and large pores. At 30 days after UUO reversal, despite glomerular hemodynamic and tubular function and the size selectivity of small pores returning to normal in both diabetic and non-diabetic rats, the residual size selectivity dysfunction of large pores was more severe in diabetic rats. Conclusion: Unilateral UO caused long-term dysfunction in the size selectivity of large pores of the GFB. In addition, DM significantly exaggerated this dysfunction, indicating a more ominous outcome in diabetic kidneys following UUO. abstract_id: PUBMED:27570581 The effect of aliskiren on the renal dysfunction following unilateral ureteral obstruction in the rat. Purpose: To investigate the effect of blocking renin-angiotensin system by direct renin inhibition using aliskiren on the renal dysfunction following reversible unilateral ureteral obstruction (UO). Methods: Wistar rats underwent reversible left UO for 72 hours. Group-Alsk (n=12) received aliskiren (30 mg/kg/day) dissolved in water starting one day before creating UO and continued until the terminal experiment five days post reversal when renal functions were measured using clearance techniques. Group-Vx (n=12) underwent similar protocol but had water only. Gene expression analysis of some markers of kidney injury was measured using PCR technique. Results: In Group-Vx, renal blood flow (RBF) and glomerular filtration rate (GFR) in the left kidney were significantly lower than the right kidney (1.82±0.12 vs. 3.19±0.40, P=0.001 and 0.81±0.08 vs. 1.44±0.09, P=0.004, respectively). However, left fractional excretion of sodium (FENa) was higher than the right FENa (0.80±0.15 vs. 0.55±0.04, P=0.05). Comparing the left obstructed kidney in Group-Alsk vs. Group-Vx, RBF and GFR were higher in Group-Alsk (2.44±0.30 vs. 1.82±0.12, P=0.049 and 1.02±0.11 vs. 0.81±0.08, P=0.07, respectively). The left renal FENa was lower in Group-Alsk but did not reach statistical significance (0.54±0.07 vs. 0.80±0.15, P=0.07). Aliskiren also decreased the gene expressions of NGAL, KIM-1 and p53. Conclusion: Direct renin inhibition by aliskiren appears to have protective effect on the renal dysfunction and on the markers of renal injury following UO indicating a potential clinical benefit of this agent. Further, this data and the previous studies indicate that blocking renin-angiotensin system at any level has a protective effect in obstructive nephropathy. abstract_id: PUBMED:30070029 Nigella sativa extract is a potent therapeutic agent for renal inflammation, apoptosis, and oxidative stress in a rat model of unilateral ureteral obstruction. Unilateral ureteral obstruction (UUO) is a well-established experimental model to evaluate renal interstitial fibrosis. Current study is aimed to investigate the effects of Nigella sativa (NS) extract and renin-angiotensin system (RAS) blockade against kidney damage following UUO in rats. In this study, the rats received intraperitoneal injection of losartan (15 mg/kg), captopril (30 mg/kg), and two doses of NS extract (200 and 400 mg/kg) for 18 consecutive days. At the fourth day of the experiment, laparotomy was performed, and the left ureter was ligated. Sham-operated animals received saline as vehicle, and laparotomy without ureteral ligation was done. UUO was associated with significant increase in the expression of renal angiotensin II and monocyte chemoattractant protein-1, concentration of malondialdehyde and tumor necrosis factor-α, and the number of apoptotic cells when compared with sham group. Renal total thiol content and the activity of antioxidant enzymes were significantly reduced as compared with the sham group. However, treatment of obstructed rats with losartan, captopril, and NS extract significantly improved these renal impairments when compared with UUO group. Thus, NS extract, a potent antioxidant and anti-inflammatory herb, is a therapeutic agent to treat the UUO-induced kidney damage comparable with the well-known RAS inhibitors captopril and losartan. abstract_id: PUBMED:34630952 Verbascoside alleviates renal fibrosis in unilateral ureteral obstruction rats by inhibiting macrophage infiltration. Objectives: To explore the effect of verbascoside on renal fibrosis in unilateral ureteral obstruction (UUO) rats. Materials And Methods: Twenty Sprague-Dawley rats were randomly distributed into sham-operated, UUO, and UUO+Verbascoside groups. After two weeks of rat model construction, urine and blood samples were collected for biochemical analysis while kidney tissues were harvested for hematoxylin and eosin (H&amp;E), Masson's Trichrome, and immunohistochemistry staining. Pearson coefficient was used to analyze the correlation between the two proteins. Results: Verbascoside improved UUO-induced renal dysfunction as detected by decreased serum creatinine, urea nitrogen, and urinary protein excretion rate. In UUO rats, H&amp;E staining result revealed increased total nucleated cell number, and Masson's Trichrome staining results showed tubular interstitial fibrosis with the deposition of collagen fibrils. Besides, expressions of fibrosis-related proteins including collagen type I (COL-I), α-smooth muscle actin (a-SMA), and tissue inhibitor of metalloproteinase 2 (TIMP2) expressed higher in the UUO group. Moreover, macrophage infiltration-related factors such as CD68+, F4/80+ cells, and suppressor of cytokine signaling-3 (SOCS3) were significantly higher in the UUO group than in sham-operated rats. However, after administration with verbascoside, the accumulation of collagen fibrils and total nucleated cell numbers were mitigated. Likewise, macrophage infiltration was extenuated and fibrosis-related proteins were down-regulated in the UUO+Verbascoside rats. Correlation analysis indicated that macrophage infiltration-related markers were related to fibrosis-related factors. Conclusion: Verbascoside could alleviate renal fibrosis in UUO rats probably through ameliorating macrophage infiltration. abstract_id: PUBMED:29242399 Functional Renal Imaging with 2-Deoxy-2-18F-Fluorosorbitol PET in Rat Models of Renal Disorders. Precise regional quantitative assessment of renal function is limited with conventional 99mTc-labeled renal radiotracers. A recent study reported that the PET radiotracer 2-deoxy-2-18F-fluorosorbitol (18F-FDS) has ideal pharmacokinetics for functional renal imaging. Furthermore, 18F-FDS is available via simple reduction from routinely used 18F-FDG. We aimed to further investigate the potential of 18F-FDS PET as a functional renal imaging agent using rat models of kidney disease. Methods: Two different rat models of renal impairment were investigated: induction of acute renal failure by intramuscular administration of glycerol in the hind legs, and induction of unilateral ureteral obstruction by ligation of the left ureter. At 24 h after these procedures, dynamic 30-min 18F-FDS PET data were acquired using a dedicated small-animal PET system. Urine 18F-FDS radioactivity 30 min after radiotracer injection was measured together with coinjected 99mTc-diethylenetriaminepentaacetic acid urine activity. Results: Dynamic PET imaging demonstrated rapid 18F-FDS accumulation in the renal cortex and rapid radiotracer excretion via the kidneys in healthy control rats. On the other hand, significantly delayed renal radiotracer uptake (continuous slow uptake) was observed in acute renal failure rats and unilateral ureteral obstruction kidneys. Measured urine radiotracer concentrations of 18F-FDS and 99mTc-diethylenetriaminepentaacetic acid correlated well with each other (R = 0.84, P &lt; 0.05). Conclusion:18F-FDS PET demonstrated favorable kinetics for functional renal imaging in rat models of kidney diseases. 18F-FDS PET imaging, with its advantages of high spatiotemporal resolution and simple tracer production, could potentially complement or replace conventional renal scintigraphy in select cases and significantly improve the diagnostic performance of renal functional imaging. abstract_id: PUBMED:21268102 L-carnitine improves oxidative stress and suppressed energy metabolism but not renal dysfunction following release of acute unilateral ureteral obstruction in rat. Background And Aim: We recently showed that L-carnitine reduced oxidative stress and suppressed energy metabolism, while α-tocopherol only prevented redox imbalance, in the obstructed kidney of rats subjected to 24-hr of unilateral ureteral obstruction (UUO). The present study was undertaken to investigate the effects of both compounds on disturbed renal hemodynamics, solutes-excretion, and urine-concentrating ability as well as renal oxidative stress and suppressed metabolism at early hours following release of 24-hr UUO. Methods: UUO was induced in anaesthetized rats that received L-carnitine, α-tocopherol or their vehicles in four different groups. Each rat was re-anaesthetized, prepared for renal functional measurements, and its ureteral obstruction was released at 24-hr. Then, urines of both kidneys were separately collected during 30-90 min of post-release, with taking blood samples at beginning and end of it. Finally, both kidneys were removed and preserved at -80°C for future measuring their levels of ATP and ADP as well as malondialdehyde (MDA) and ferric reducing/antioxidant power (FRAP). There were also sham and control groups. Results: Post-obstructed kidney (POK) of vehicle-treated groups compared to equivalent kidney of sham group had lower ATP, ATP/ADP, FRAP, creatinine clearance, absolute Na(+)- and K(+)-excretion, and effective free-water reabsorption, but higher MDA and ADP. L-carnitine could improve oxidative stress and suppressed energy metabolism and α-tocopherol normalized redox state, but both compounds did not have any effects on altered functional variables of the POK. Conclusion: Oxidative stress and suppressed energy metabolism may not be involved in the development of renal dysfunction during acute ureteral obstruction. abstract_id: PUBMED:31803050 Upregulation of Cortical Renin and Downregulation of Medullary (Pro)Renin Receptor in Unilateral Ureteral Obstruction. Chronic kidney disease (CKD) is characterized by renal dysfunction, which is a common feature of other major diseases, such as hypertension and diabetes. Unilateral ureteral obstruction (UUO) has been used as a model of CKD in experimental animals and consists of total obstruction of one kidney ureter. The UUO decreases renal blood flow, which promotes the synthesis of renin in the juxtaglomerular apparatus, the first step in renin-angiotensin system (RAS) cascade. RAS induces inflammation and remodeling, along with reduced renal function. However, it remains unknown whether intrarenal RAS (iRAS) is activated in early stages of CKD. Our objective was to characterize different iRAS components in the renal cortex and in the medulla in an early phase of UUO. Male C57BL/6 mice (8-12 weeks old) were subjected to UUO in the left kidney, or to sham surgery, and were euthanized after 7 days (n = 5/group). Renal function, renal inflammatory/remodeling processes, and iRAS expression were evaluated. UUO increased plasma creatinine, right renal hypertrophy (9.08 ± 0.31, P &lt; 0.05 vs. Sham), and tubular dilatation in the left kidney cortex (42.42 ± 8.19µm, P &lt; 0.05 vs. Sham). This correlated with the increased mRNA of IL-1β (1.73 ± 0.14, P &lt; 0.01 vs. Sham, a pro-inflammatory cytokine) and TGF-β1 (1.76 ± 0.10, P &lt; 0.001 vs. Sham, a pro-fibrotic marker). In the renal cortex of the left kidney, UUO increased the mRNA and protein levels of renin (in 35% and 28%, respectively, P &lt; 0.05 vs. Sham). UUO decreased mRNA and protein levels for the (pro)renin receptor in the renal medulla (0.67 ± 0.036 and 0.88 ± 0.028, respectively, P &lt; 0.05 vs. Sham). Our results suggest that modulation of iRAS components depends on renal localization and occurs in parallel with remodeling and pro-inflammatory/pro-fibrotic mechanisms. Answer: Based on the provided abstract (PUBMED:21474940), curcumin does not appear to have a significant protective effect on the hemodynamic or tubular glomerular functions when measured as early as 3 days following reversible ureteric obstruction, despite the amelioration in some indicators of renal injury. The study found that while curcumin led to a significantly lower post-obstruction level of serum tumor necrosis factor α compared to the vehicle group, it did not improve the glomerular filtration rate, renal blood flow, urine volume, or urinary sodium excretion in the left obstructed kidney. Therefore, the conclusion is that curcumin does not provide significant protection against renal dysfunction in this context.
Instruction: The generalist role of specialty physicians: is there a hidden system of primary care? Abstracts: abstract_id: PUBMED:9582044 The generalist role of specialty physicians: is there a hidden system of primary care? Context: Despite increased emphasis on primary care in the United States, most care continues to be provided by specialists. The extent to which specialists incorporate elements of primary care in their approach to ambulatory patients is unknown. Objectives: To examine the extent to which selected medical and surgical subspecialties provide generalist care to Medicare patients, and to compare patterns of care between specialists and generalists. Design: A cross-sectional study of all ambulatory care recorded in Part B of the Washington State Medicare Claims Database in 1994 and 1995. Setting: Ambulatory practices in Washington State. Patients: Medicare beneficiaries 65 years or older who made office visits to the study physicians. Main Outcome Measures: The extent to which individual specialties accounted for the majority of visits made by patients to physicians (a measure of continuity), provided care outside the traditional domain of their specialty (a measure of comprehensiveness), and provided influenza immunization. Results: A total of 373 505 patients constituted the sample. Patients had an average of 7.48 outpatient visits per year; 9.6% saw only generalists, while 14.7% saw only specialists. The practices of general internists and family physicians differ systematically from the practices of most specialists. Approximately half (49.8%) of all ambulatory visits to general internists and family physicians are made by patients for whom they provide the majority of outpatient care, compared with 21.0% of medical specialist and 11.7% of surgical specialist visits. The rate of influenza immunization was 55.4% for patients who received the majority of their care from generalists, 47.7% from medical specialists, and 39.6% from surgical specialists. Pulmonologists, general surgeons, and gynecologists were more likely than other specialists to provide services outside their specialty. Conclusions: Most specialists do not assume the principal care responsibility for elderly patients, although a substantial proportion of patients see only specialists for their care. Selected specialties assume the generalist role more often, particularly when they provide the majority of outpatient care for an individual patient. abstract_id: PUBMED:32291713 Use of Primary Care and Specialty Providers: Findings from the Medical Expenditure Panel Survey. Background: A comprehensive picture of how the US population engages in specialty care use is lacking, even though redesign models focused on specialty care are becoming more popular. Objective: To describe the type of provider, primary care or specialist, most often seen by individuals, to test associations between type of provider most often seen and insurance coverage, and to test associations between the number of generalist and specialist visits and insurance coverage. Design: Cross-sectional analysis of 2013-2016 Medicaid Expenditure Panel Survey. Logistic and negative binomial models were used in multivariate regression modeling. Participants: Depending on the analysis, the study samples include between 71,402 and 79,518 US residents. Main Measures: Individuals' provider type most often seen, primary care visits, and specialist visits were reported. Key Results: More than half of the sample (55%) predominantly visited primary care providers (or generalists), and 36% predominantly visited specialists. Among individuals primarily visiting generalists, 80% visited only one type of primary care provider, and 24% also visited one or more specialists. Among individuals primarily visiting specialists, 48% visited only one type of specialist, and 47% did not visit any generalists in the year. Among Medicare enrollees, 50% predominantly visited specialists, and 40% predominantly visited generalists. Medicare enrollment was associated with greater odds of predominantly visiting specialists (p &lt; 0.05), and Medicare-Medicaid enrollment and having no insurance were associated with lower odds of predominantly visiting specialists (p &lt; 0.05). Medicare enrollment was associated with 13% more generalist visits and 35% more specialist visits, and Medicare-Medicaid enrollment was associated with 38% more generalist visits and 15% more specialist visits (all p &lt; 0.05). Conclusions: Given the overall frequency of specialty care use and the reliance on multiple specialists in any given year, particularly among Medicare enrollees, public payers are uniquely positioned to promote specialty care redesign and champion improved coordination between specialists. abstract_id: PUBMED:32698658 Nurse Practitioner Role and Practice Environment in Primary and in Nonprimary Care in California. Between 2008 and 2016, there was an increase in nurse practitioners in specialty care. This study explores some differences in role and practice environment between primary care and nonprimary care nurse practitioners in the domains of time spent on activities, barriers to providing care, working to scope of practice, full skill utilization, and satisfaction. This cross-sectional quantitative study, based on data from the 2017 Survey of California Nurse Practitioners and Certified Nurse Midwives, found that nurse practitioners in nonprimary care practices have lower odds of reporting time as a barrier to practice, lower odds of reporting practice to full scope, and higher odds of reporting a hierarchical or supervisory relationship with the physician. Future exploration of these differences may shed light on ways to promote nonprimary care practice environments to foster more effective collaboration and fewer barriers to providing care. abstract_id: PUBMED:30572898 A categorisation of problems and solutions to improve patient referrals from primary to specialty care. Background: Improving access to specialty care has been identified as a critical issue in the delivery of health services, especially given an increasing burden of chronic disease. Identifying and addressing problems that impact access to specialty care for patients referred to speciality care for non-emergent procedures and how these deficiencies can be managed via health system delivery interventions is important to improve care for patients with chronic conditions. However, the primary-specialty care interface is complex and may be impacted by a variety of potential health services delivery deficiencies; with an equal range of interventions developed to correct them. Consequently, the literature is also diverse and difficult to navigate. We present a narrative review to identify existing literature, and provide a conceptual map that categorizes problems at the primary-specialty care interface with linkages to corresponding interventions aimed at ensuring that patient transitions across the primary-specialty care interface are necessary, appropriate, timely and well communicated. Methods: We searched MEDLINE and EMBASE databases from January 1, 2005 until Dec 31, 2014, grey literature and reference lists to identify articles that report on interventions implemented to improve the primary-specialty care interface. Selected articles were categorized to describe: 1) the intervention context, including the deficiency addressed, and the objective of the intervention 2) intervention activities, and 3) intervention outcomes. Results: We identified 106 articles, producing four categories of health services delivery deficiencies based in: 1) clinical decision making; 2) information management; 3) the system level management of patient flows between primary and secondary care; and 4) quality-of-care monitoring. Interventions were divided into seven categories and fourteen sub-categories based on the deficiencies addressed and the intervention strategies used. Potential synergies and trade-offs among interventions are discussed. Little evidence exists regarding the synergistic and antagonistic interactions of alternative intervention strategies. Conclusion: The categorization acts as an aid in identifying why the primary-specialty care interface may be failing and which interventions may produce improvements. Overlap and interconnectedness between interventions creates potential synergies and conflicts among co-implemented interventions. abstract_id: PUBMED:24475347 Examining the practice of generalist expertise: a qualitative study identifying constraints and solutions. Objectives: Provision of person-centred generalist care is a core component of quality primary care systems. The World Health Organisation believes that a lack of generalist primary care is contributing to inefficiency, ineffectiveness and inequity in healthcare. In UK primary care, General Practitioners (GPs) are the largest group of practising generalists. Yet GPs fulfil multiple roles and the pressures of delivering these roles along with wider contextual changes create real challenges to generalist practice. Our study aimed to explore GP perceptions of enablers and constraints for expert generalist care, in order to identify what is needed to ensure health systems are designed to support the generalist role. Design: Qualitative study in General Practice. Setting: UK primary care. Main Outcome Measures: A qualitative study - interviews, surveys and focus groups with GPs and GP trainees. Data collection and analysis was informed by Normalisation Process Theory. Design And Setting: Qualitative study in General Practice. We conducted interviews, surveys and focus groups with GPs and GP trainees based mainly, but not exclusively, in the UK. Data collection and analysis were informed by Normalization Process Theory. Participants: UK based GPs (interview and surveys); European GP trainees (focus groups). Results: Our findings highlight key gaps in current training and service design which may limit development and implementation of expert generalist practice (EGP). These include the lack of a consistent and universal understanding of the distinct expertise of EGP, competing priorities inhibiting the delivery of EGP, lack of the consistent development of skills in interpretive practice and a lack of resources for monitoring EGP. Conclusions: WE DESCRIBE FOUR AREAS FOR CHANGE: Translating EGP, Priority setting for EGP, Trusting EGP and Identifying the impact of EGP. We outline proposals for work needed in each area to help enhance the expert generalist role. abstract_id: PUBMED:7779328 Physiatry as a primary care specialty. The ongoing changes in health care seek to create an affordable, accessible, and high-quality health care delivery system for all Americans. In response to this, Physical Medicine and Rehabilitation (PM&amp;R) is re-examining its role in the provision of health care to the disabled, particularly the severely affected. We surveyed 100 physiatrists and 100 PM&amp;R residents across the country regarding their opinions on the competence, qualifications, and desire of physiatrists to provide primary care for the disabled. Fifty-five physiatrists and 51 residents responded (55% overall). Thirty-nine percent of all respondents agreed that PM&amp;R should be designated as a primary care specialty, and 53% believed that physiatrists should provide primary care to the severely disabled (e.g., spinal cord and head injury). Overall, 53% felt that physiatrists are competent in general medical care, but 38% are convinced that the current 4-yr PM&amp;R residency training sufficiently prepares physiatrists to assume the role of a primary care provider. Of the patient populations considered, the respondents thought that physiatrists should manage the care of spinal cord injury (60%) and head injury (51%), but only 40% were willing to assume primary care responsibilities. abstract_id: PUBMED:35103529 Generalist versus Specialist Palliative Medicine. The issue of generalist versus specialist palliative care is on the minds of healthcare leaders everywhere. We are amid changing demographics of physicians. The industrialization of medicine is well underway in the US and around the developing world. Is it important to identify patients who benefit the most from specialist palliative care, given that it is currently a limited resource? Should we step out of standard practice and redesign palliative care using principles of population management? The COVID pandemic rapidly introduced virtual palliative care consults. Is it a better way to promote wide access to specialty palliative care? Looking forward, should we promote ways to advance primary palliative care and reserve specialty palliative care to patients who will benefit most from this level of care? These questions, and others, are considered in this transcribed discussion between leading physicians in the field. abstract_id: PUBMED:18055672 Parents as information intermediaries between primary care and specialty physicians. Objective: In this study we assessed the views of parents of children referred to specialty care and the views of the children's primary care and specialty physicians about parents' roles as information intermediaries. Methods: We enrolled 179 patients who were newly referred from primary care pediatricians in 22 practices to 15 pediatric subspecialists in 5 specialties in a study of primary care pediatrician-specialist communication. Parents, primary care pediatricians, and specialists completed questionnaires by mail or telephone at the first visit and 6 months later. Questions included perceived responsibilities of parents as information conduits between primary care pediatricians and specialists. Opinions of parents, primary care pediatricians, and specialists about parents' roles were compared for the sample as a whole, as well as for individual cases. Agreement between parents and providers was assessed. Demographic and clinical determinants of parents reporting themselves as "comfortable with" or "acting" as primary intermediaries were assessed using logistic regression. Results: More parents (44%) than primary care physicians (30%) felt comfortable with parents acting as primary communicators between their children's physicians; 31% of parents who reported that they were the primary communicators felt uncomfortable in that role, and there was no agreement between parents and physicians about the role of parents in individual cases. Although no demographic characteristics of children or parents were associated with parent comfort as the primary communicator, parents of children who saw the same specialist more than once during the 6-month period felt more comfortable in this role. The presence of a chronic condition was not associated with parent comfort. Conclusions: Although parents report more comfort with their own ability as information intermediaries than do their children's physicians, the role in which they feel comfortable is highly variable. Physicians should discuss with parents the roles they feel comfortable in assuming when specialty referrals are initiated. abstract_id: PUBMED:24232853 Association of primary care physicians' exercise habits and their age, specialty, and workplace. Background: In this study, we investigated primary care physicians' exercise habits, and the association of this variable with their age, specialty, and workplace. Methods: The population of this cross-sectional study comprised 3,310 medical doctors who graduated from Jichi Medical University in Japan between 1978 and 2012. The study instrument was a self-administered questionnaire mailed in August 2012 to investigate primary care physicians' exercise habits, age, specialty, and workplace. Results: The 896 available primary care physicians' responses to the self-administered questionnaire were analyzed. Their exercise frequency was as follows: daily, 104 (11.6%); at least 2-3 times per week, 235 (26.2%); no more than once a week, 225 (25.1%); no more than once a month, 278 (31.0%); and other, 52 (5.8%). Their exercise intensity was as follows: high (≥6 Mets), 264 (29.5%); moderate (4-6 Mets), 199 (22.2%); mild, (3-4 Mets), 295 (32.9%); very mild (&lt;3 Mets), 68 (7.6%); none, 64 (7.1%); and other, 6 (0.7%). Their exercise volume was calculated to represent their exercise habits by multiplying score for exercise frequency by score for intensity. Multivariate linear regression analyses showed that the primary care physicians' exercise volumes were associated with their age (P&lt;0.01) and workplace (P&lt;0.01), but not with their specialty (P=0.37). Primary care physicians in the older age group were more likely to have a higher exercise volume than those in the younger age groups (50-60 years &gt; older than 60 years &gt;40-50 years &gt;30-40 years &gt;24-30 years). Primary care physicians working in a clinic were more likely to have a higher exercise volume than those working in a university hospital, polyclinic hospital, or hospital. Conclusion: Primary care physicians' exercise habits were associated with their age and workplace, but not with their specialty. abstract_id: PUBMED:29625232 Shared Care: Using an Electronic Consult Form to Facilitate Primary Care Provider-Specialty Care Coordination. Objective: The quality of children's health is compromised by poor care coordination between primary care providers (PCPs) and specialists. Our objective was to determine how an electronic consultation and referral system impacts referral patterns and PCP-specialist communication. Methods: The primary care clinic at Boston Children's Hospital piloted an electronic referral and consultation system with the neurology and gastroenterology departments from April 1, 2014, to October 31, 2016. PCPs completed an electronic consult form, and if needed, specialists replied with advice or facilitated expedited appointments. Specialist response times, referral rates, wait times, and completion rates for specialty visits were tracked. PCPs and specialists also completed a survey to evaluate feasibility and satisfaction. Results: A total of 82 PCPs placed 510 consults during the pilot period. Specialists responded to 88% of requests within 3 business days. Eighteen percent of specialty visits were deferred and 21% were expedited. Wait times for specialty appointments to both departments significantly decreased, from 48 to 34 days (P &lt; .001), and completion rates improved from 58% to 70% (P &lt; .01), but referral volumes remained stable (25 per month to 23 per month; P = .29). Most PCPs said the Shared Care system facilitated better communication with specialists (89%) and enabled them to provide superior patient care (92%). Specialists reported that the system required a minimal amount of time and enabled them to educate PCPs and triage referrals. Conclusions: Implementation of an electronic referral and consultation system was feasible and provided timely access to specialty care, but did not affect referral volume. This system could serve as a model for other health care organizations and specialties. Answer: The concept of specialty physicians taking on a generalist role in primary care is supported by several studies that suggest a substantial proportion of patients rely exclusively on specialists for their care, and certain specialties are more likely to provide services outside their traditional domain. According to a cross-sectional study of Medicare patients in Washington State, while most specialists do not assume principal care responsibility for elderly patients, a significant number of patients see only specialists for their care. Some specialties, such as pulmonologists, general surgeons, and gynecologists, are more likely to provide generalist care, particularly when they provide the majority of outpatient care for an individual patient (PUBMED:9582044). Furthermore, findings from the Medical Expenditure Panel Survey indicate that a considerable portion of the US population predominantly visits specialists, with 36% primarily seeing specialists and 55% primarily visiting generalists. Among Medicare enrollees, 50% predominantly visited specialists. This suggests that specialists play a significant role in providing primary care services, especially for Medicare patients (PUBMED:32291713). The role of nurse practitioners also reflects the blurring of lines between primary and specialty care. Between 2008 and 2016, there was an increase in nurse practitioners in specialty care, and those in nonprimary care practices reported fewer barriers to practice, suggesting that nurse practitioners are also contributing to the provision of generalist care in specialty settings (PUBMED:32698658). In summary, while the traditional healthcare model delineates clear roles for generalists and specialists, the reality is that there is a "hidden system of primary care" where specialists often provide generalist care, especially for certain patient populations such as the elderly and those with Medicare. This indicates a need for improved coordination and potentially a redesign of care models to better accommodate the generalist role played by specialists (PUBMED:9582044, PUBMED:32291713).
Instruction: Is prophylactic aortic valve replacement indicated during mitral valve surgery for mild to moderate aortic valve disease? Abstracts: abstract_id: PUBMED:12400754 Is prophylactic aortic valve replacement indicated during mitral valve surgery for mild to moderate aortic valve disease? Background: Determining the need for surgical treatment of coexisting mild to moderate aortic valve disease in patients referred for mitral valve surgery is often difficult. The purpose of this study was to assess long-term clinical outcome and the need for subsequent aortic valve replacement in patients with mild to moderate rheumatic aortic valve disease at the time of mitral valve surgery. Methods: A total of 275 patients (90 men and 185 women, mean age 43 years) with rheumatic disease who underwent mitral valve surgery were followed up for an average of 9 years. Patients were classified into two groups: those with coexisting mild to moderate aortic valve disease at the time of mitral valve surgery (141 patients, group A) and those without (134 patients, group B). Primary outcomes (death and subsequent aortic valve surgery) were compared between the two groups. Results: At the time of mitral valve surgery, 104 patients (74%) in group A had mild aortic regurgitation, 37 (26%) had moderate aortic regurgitation, 5 had (4%) mild aortic stenosis, and 2 (1%) had moderate aortic stenosis. At the end of follow-up, no patient had severe aortic valve disease. In all, 12 patients (5%) in group A had primary events (eight deaths and four subsequent aortic valve replacements), and 12 patients (9%) in group B had such events (12 deaths). According to Kaplan-Meier analysis, neither the survival rate nor the event-free survival rate differed significantly over the follow-up period between the two groups. Conclusions: In most patients who have mild to moderate rheumatic aortic valve disease at the time of mitral valve surgery, the long-term outcome is comparable to that of subjects without aortic valve disease at the time of mitral valve surgery. Subsequent aortic valve replacement is rarely needed after a long follow-up period. abstract_id: PUBMED:35671345 Fate of moderate secondary mitral regurgitation in patients undergoing aortic valve replacement for severe aortic regurgitation. Objectives: Moderate secondary mitral regurgitation is common in patients with severe aortic regurgitation, but whether it has to be addressed at the time of aortic valve surgery remains unclear. With this study, we evaluated the long-term fate of moderate secondary mitral regurgitation in this specific scenario. Methods: Between January 2004 and January 2018, in 154 patients admitted to our institution for treatment of severe aortic regurgitation, a moderate secondary mitral regurgitation was diagnosed. Ninety-four patients underwent isolated aortic valve replacement (group 1) and 60 patients underwent also concomitant mitral valve annuloplasty (group 2). Results: One death (1.1%) occurred in group 1, whereas two deaths (3.3%) occurred in group 2 (p = .561). At 11 years, the cumulative incidence function of cardiac death, with noncardiac death as a competing risk was 11.5 ± 5.11% in group 1 and 8.3 ± 5.15% in group 2 (p = .731). The cumulative incidence function of mitral valve reintervention, with death as a competing risk, was 3.7 ± 2.61% in group 1 and 4.5 ± 4.35% in group 2 (p = .620) at 11 years. Secondary mitral regurgitation improved to ≤mild in 66% and 76% of the survivors of group 1 and group 2, respectively (p = .67). Conclusions: In our experience, in patients with moderate secondary mitral regurgitation undergoing aortic valve replacement for severe aortic regurgitation, concomitant mitral valve annuloplasty did not improve the long-term survival, the incidence of cardiac death and mitral valve reoperation or the evolution of the mitral valve disease. abstract_id: PUBMED:24087829 Mitral valve repair versus replacement for moderate-to-severe mitral regurgitation in patients undergoing concomitant aortic valve replacement. Objectives: Whether to repair or replace the mitral valve for patients with significant mitral regurgitation undergoing aortic valve replacement is still controversial. Methods: From January 1990 to December 2011, a total of 663 patients underwent aortic valve replacement combined with mitral valve surgery. Among these, 253 patients (mean age 55.9 ± 14.5 years, 91 females) with moderate-to-severe mitral regurgitation were enrolled to compare the outcomes between double valve replacement (DVR group, n = 158) and aortic valve replacement plus mitral valve repair (AVR plus MVr group, n = 95). Survival and valve-related events were compared by the inverse-probability-treatment-weighted method using propensity scores to reduce treatment selection bias. Results: Early mortality was similar between the groups (1.9% in the DVR group when compared with 3.2% in the AVR plus MVr group, P = 0.55). During the mean follow-up period of 72.1 ± 56.7 months, 45 patients died (28 in DVR and 17 in AVR plus MVr) and 31 experienced valve-related events including valve reoperation in 11, anticoagulation-related bleeding in 14, thromboembolism complications in 9 and infective endocarditis in 3. After adjustment for baseline risk profiles, the DVR group showed no difference with regard to risks of death (hazard ratio [HR], 1.79; 95% confidence interval [CI], 0.79-4.01; P = 0.16) and valve-related events (HR, 1.15; 95% CI, 0.40-3.30; P = 0.80) compared with the AVR plus MVr group. Conclusions: Although the outcomes of either mitral valve repair or replacement for moderate-to-severe mitral regurgitation in patients undergoing concomitant aortic valve replacement show no statistical significance in terms of long-term survival and valve-related event rates, DVR seems more hazardous than AVR plus MVr based on the estimated HR in terms of survival. abstract_id: PUBMED:31221174 Echocardiographic evaluation of non-surgically treated mild-to-moderate mitral dysfunction in patients undergoing aortic valve replacement. Background: Surgical management of the mitral valve (MV) in patients with mild-to-moderate mitral dysfunction undergoing aortic valve replacement is still controversial. We investigated the echocardiographic data from patients with mild-to-moderate mitral dysfunction who did not undergo MV surgery. Materials And Methods: From January 1989 to June 2012, a total of 2731 patients underwent aortic valve replacement. Among these, 560 patients with mild-to-moderate mitral dysfunction were screened. Of these, 292 patients (61.9 ± 13.0 years; 113 females) who had not undergone MV surgery formed our study cohort. Survival, valve-related complication, and echocardiographic data were evaluated. Results: There were three early deaths. During the mean follow-up period of 56.9 ± 46.5 months, there were 23 late deaths and 28 valve-related complications. Valve-related event-free survival at 5 years was 85.9% ± 2.4%. In serial postoperative echocardiographic evaluations (mean follow-up duration: 40.8 ± 44.5 months), 21 patients experienced a progression in late mitral dysfunction. At 5 years, 88.8% ± 2.7% of patients did not suffer from late mitral dysfunction. Based on multivariate analysis, rheumatic pathology of MV (Hazard Ratio: 3.88, 95% confidence intervals 1.60-9.39, p = 0.003) was an independent predictor of late mitral dysfunction. Conclusions: Conservatively treated patients with mild-to-moderate mitral dysfunction exhibited acceptable clinical outcomes. Rheumatic pathology of MV is associated with a higher risk of progressive native MV dysfunction. abstract_id: PUBMED:28882144 Sutureless aortic valve replacement in a calcified homograft combined with mitral valve replacement. Background: Aortic valve replacement in a patient with an aortic homograft can be very challenging, especially when concomitant mitral valve surgery needs to be performed. Case Presentation: We report a case of implantation of a sutureless aortic valve bioprosthesis combined with mitral valve replacement in a patient with a severely calcified aortic homograft where conventional valve replacement was technically unfeasible. Conclusions: We believe that sutureless AVR is a viable option especially for young patients with a high surgical risk where conventional valve replacement cannot be achieved. abstract_id: PUBMED:26215358 Paravalvular regurgitation after conventional aortic and mitral valve replacement: A benchmark for alternative approaches. Objective: Paravalvular regurgitation is a known complication after transcatheter and sutureless aortic valve replacement. Paravalvular regurgitation also may develop in patients undergoing percutaneous mitral valve replacement. There are few studies on contemporary surgical valve replacement for comparison. We sought to determine the contemporary occurrence of paravalvular regurgitation after conventional surgical valve replacement. Methods: We performed a single-center retrospective database review involving 1774 patients who underwent valve replacement surgery from April 2004 to December 2012: aortic in 1244, mitral in 386, and combined aortic and mitral in 144. Follow-up echocardiography was performed in 73% of patients. Patients with endocarditis were analyzed separately from noninfectious paravalvular leaks. Statistical comparisons were performed to determine differences in paravalvular regurgitation incidence and survival. Results: During follow-up, 1+ or greater (mild or more) paravalvular regurgitation occurred in 2.2% of aortic cases and 2.9% of mitral cases. There was 2+ or greater (moderate or more) paravalvular regurgitation in 0.9% of aortic and 2.2% of mitral cases (P = .10). After excluding endocarditis, late noninfectious regurgitation 2+ or greater was detected in 0.5% of aortic and 0.4% of mitral cases (P = .93); there were no reoperations or percutaneous closures for noninfectious paravalvular regurgitation. Conclusions: In an academic medical center, the overall rate of paravalvular regurgitation is low, and late clinically significant noninfectious paravalvular regurgitation is rare. The benchmark for paravalvular regurgitation after conventional valve replacement is high and should be considered when evaluating patients for transcatheter or sutureless valve replacement. abstract_id: PUBMED:8507964 Aortic valve replacement and mitral valve repair with allograft. Techniques of repair of defects in the anterior leaflet of the mitral valve and replacement of the aortic valve using allograft are presented. The case history and operative procedure of a reconstructive operation that did not require anticoagulant therapy after surgery are described for three adult patients. Mitral valve defects were repaired using the anterior leaflet of the mitral valve of the allograft. The aortic valve or entire root was replaced with the aortic allograft. The aortic/mitral allograft should be considered as an alternative to replacement of the aortic and mitral valves with prostheses in selected patients. abstract_id: PUBMED:36071759 Surgical treatment of patients with aortic valve disease complicated with moderate functional mitral regurgitation and heart failure with midrange ejection fraction: a cohort study. Background: Controversies exist on the treatment of moderate functional mitral regurgitation (FMR) in patients with severe aortic valve disease undergoing the aortic valve replacement (AVR). While a substantial proportion of these patients can be complicated with heart failure with midrange ejection fraction (HFmrEF), established studies show that the latter might compromise the patient outcome. This study was aimed to evaluate the prognostic value of concomitant mitral valve surgery during AVR in patients with severe aortic valve disease followed by moderate FMR and HFmrEF. Methods: A total of 78 consecutive patients were retrospectively recruited. Patients were divided into control (isolated AVR) and treatment (AVR + mitral valve surgery) groups. Follow-up outcomes were compared by Kaplan-Meier method, followed by multiple adjustment with inverse probability treatment weighting (IPTW) analysis. The primary outcome was the occurrence of major adverse cardiovascular and cerebrovascular events (MACCE). Results: Thirty-six patients received isolated AVR, while 42 received AVR with mitral valve repair or replacement. The median follow-up time was 28.7 months. Unadjusted analysis showed that there was no significant difference in the rate of MACCE between the two groups [hazard ratio (HR): 1.14, 95% confidence interval (CI): 0.48-2.69, Plogrank=0.770], which was sustained in IPTW analysis (HR: 1.64, 95% CI: 0.59-4.55, Plogrank=0.342). In addition, while concomitant mitral valve surgery improved follow-up FMR more completely (P=0.026) in the IPTW analysis, the ejection fraction was comparable between the two groups (P=0.276). Furthermore, IPTW analysis also showed that mitral valve surgery was associated with the increased risk of postoperative acute kidney injury (P=0.007). Conclusions: In patients with aortic valve disease followed by moderate FMR and HFmrEF, mitral valve surgery concomitant to AVR may not bring extra benefit in the MACCE-free survival and the improvement of HFmrEF. However, while concomitant mitral valve surgery has priority on the complete improvement of FMR, it might increase the risk of postoperative acute kidney injury. abstract_id: PUBMED:34717719 Totally endoscopic aortic valve replacement with concomitant trans-aortic mitral valve repair for mitral regurgitation. Background: Minimally invasive aortic valve procedures through a hemi-sternotomy or a right anterior mini-thoracotomy have gained popularity over the last several years. Totally endoscopic aortic valve replacement (TEAVR) is an innovative and a less invasive (incision-wise) surgical aortic valve replacement technique. The operative steps of TEAVR have been reported previously from our group. Mitral regurgitation (MR) frequently accompanies aortic valve disease that at times may also require repair. Totally endoscopic surgery in such cases has not been tested. Presentation Of The Technique: We present a surgical technique for a totally endoscopic approach to aortic valve replacement and concomitant mitral valve repair for primary and secondary MR. An aortotomy incision was used avoiding an atriotomy, which results in an increase in cross-clamp (XC) and cardiopulmonary bypass (CPB) times that could be associated with higher mortality and morbidity. Neochords (artificial chordae tendineae) were used for primary MR and an edge-to-edge approach for secondary MR. Conclusion: TEAVR and concomitant mitral valve repair can be performed successfully with reasonable XC and CPB times with excellent short-term results. abstract_id: PUBMED:34059886 Long-term fate of moderate aortic regurgitation left untreated at the time of mitral valve surgery. Objectives: The appropriateness of moderate aortic regurgitation treatment during mitral valve (MV) surgery remains unclear. The goal of this study was to evaluate the immediate and long-term outcomes of patients with moderate aortic regurgitation at the time of MV surgery. Methods: We included 183 patients admitted to our institution for elective treatment of MV disease between 2004 and 2018, in whom moderate aortic regurgitation was diagnosed during preoperative evaluation. One hundred and twenty-two patients underwent isolated MV surgery (study group) whereas 61 patients underwent concomitant MV surgery and aortic valve replacement (control group). Results: One death (0.8%) occurred in the study group, and 3 deaths (4.8%) occurred in the control group (P = 0.52). The rate of the most common postoperative complication was similar between the 2 groups. At 12 years, the cumulative incidence function of cardiac death, with non-cardiac death as a competing risk, was 4.7 ± 2.8% in the study group; no cardiac deaths were observed in the control group (P = 0.078). At 6 and 12 years, in the study group, the cumulative incidence function of aortic valve reintervention, with death as a competing risk, was 2.5 ± 1.85% and 19 ± 7.1%, respectively. Conclusions: The appropriate management of moderate aortic regurgitation at the time of MV surgery deserves a careful evaluation by balancing the reintervention rate with the age, the operative risk and the life expectancy of the patient. Our findings suggest that a patient-tailored approach is the key to achieving the best clinical outcome for each individual patient. Answer: Based on the information provided in the abstracts, prophylactic aortic valve replacement during mitral valve surgery for mild to moderate aortic valve disease does not appear to be routinely indicated. A study assessing long-term clinical outcomes in patients with mild to moderate rheumatic aortic valve disease at the time of mitral valve surgery found that most patients did not require subsequent aortic valve replacement after a long follow-up period, suggesting that the long-term outcome is comparable to that of subjects without aortic valve disease at the time of mitral valve surgery (PUBMED:12400754). Another study evaluating the fate of moderate secondary mitral regurgitation in patients undergoing aortic valve replacement for severe aortic regurgitation concluded that concomitant mitral valve annuloplasty did not improve long-term survival, incidence of cardiac death, mitral valve reoperation, or the evolution of the mitral valve disease (PUBMED:35671345). Furthermore, a study comparing mitral valve repair versus replacement for moderate-to-severe mitral regurgitation in patients undergoing concomitant aortic valve replacement found no statistical significance in terms of long-term survival and valve-related event rates between the two approaches, although double valve replacement seemed more hazardous than aortic valve replacement plus mitral valve repair based on the estimated hazard ratio in terms of survival (PUBMED:24087829). Additionally, an echocardiographic evaluation of non-surgically treated mild-to-moderate mitral dysfunction in patients undergoing aortic valve replacement indicated that conservatively treated patients exhibited acceptable clinical outcomes, with rheumatic pathology of the mitral valve being an independent predictor of late mitral dysfunction (PUBMED:31221174). In summary, the evidence suggests that prophylactic aortic valve replacement is not routinely required during mitral valve surgery for patients with mild to moderate aortic valve disease, as long-term outcomes are generally favorable without the need for subsequent aortic valve replacement. However, individual patient factors and the underlying pathology should be considered when making treatment decisions.
Instruction: Do perceptions of friends' behaviors affect age at first sex? Abstracts: abstract_id: PUBMED:16982393 Do perceptions of friends' behaviors affect age at first sex? Evidence from Cebu, Philippines. Purpose: To explore the effect of perceptions of friends' romantic and physical behaviors on adolescent sexual behavior and the relative influence of friends' sexual behaviors and maternal attitudes on age at first sex. Methods: Using longitudinal data collected from 1943 adolescents who have never had sex at baseline in Cebu, Philippines, logistic regression assessed whether adolescents' perceptions of friends' sexual behaviors, measured at ages 14 to 16 years, were associated with age at first intercourse. Survival analysis assessed the hazard of having first intercourse by ages 17 to 19 years. Final models adjusted for sociodemographics, mothers'/caretakers' disapproval of premarital sex, living with a caretaker, and the adolescents' own behaviors at baseline. Results: Boys and girls, who at ages 14 to 16 years, perceived that their friends had ever had boyfriends/girlfriends, dated, held hands, kissed, petted or had sex were significantly more likely to have experienced that behavior by ages 17 to 19 years. For each additional behavior an adolescent perceived his or her friends to have experienced, the hazard of having sex at a younger age increased by 1.15 (p = .02) among boys and 1.19 (p &lt; .002) among girls, after adjusting for mothers'/caretakers' disapproval of premarital sex and other factors. Conclusions: These results demonstrate the important role of peers in light of competing influences in adolescents' lives. They provide support for asking at early ages what romantic and physical behaviors adolescents think their friends have experienced. Interventions can use such information to better prepare adolescents to make responsible and informed decisions about sexuality. abstract_id: PUBMED:27705541 Perceived Cross-Orientation Infidelity: Heterosexual Perceptions of Same-Sex Cheating in Exclusive Relationships. For individuals in exclusive romantic relationships, the dynamics of sexual experimentation are nuanced. Extradyadic behavior outside of a relationship may be perceived as cheating or infidelity, with much of those perceptions driven by the biological sex of the perceiver. This study significantly reframes seminal research on perceptions of cheating with third-party friends by Kruger et al. (2013), to further nuance an evolutionary threat-based model. In doing so, this furthers our understanding of the associated perceptions of individuals in heterosexual relationships when confronted by partners' cheating with their same-sex cross-orientation friends. Results indicate that perceptions of same-sex infidelity vary widely depending on the nature of the behaviors, with decreasing attribution given to sexual and erotic behaviors, close relational behaviors, and casual social interaction behaviors, respectively. Implications are discussed for a variety of sexual communities, as well as the impact of gender and relational status on perceptions of infidelity. abstract_id: PUBMED:35623042 College students' attitudes about ways family, friends, significant others and media affect their eating and exercise behaviors and weight perceptions. Objective: This study examined college students' perceptions of how parents, family, friends, significant others, and the media influenced eating and exercise behaviors and weight perceptions. Participants: Forty-one college students, mostly female, participated in interviews. Methods: A Grounded Theory approach, using open coding and memoing, was used to uncover key themes. Results: Healthy cooking and exercise role models at home were viewed as positive, encouraging healthy eating and exercise. Criticism was perceived as negative for healthy habits and weight perceptions. Friends and significant others who practiced positive health habits and were body accepting were uplifting. Cultural transmission of the thin ideal could occur through the media. Some noted that media messages were becoming more positive. Conclusions: Using peers, especially friends, as collaborators in interventions, and discussing parental influences on eating, exercise, and weight perceptions may positively impact obesity prevention programs and interventions for college students. abstract_id: PUBMED:29373568 Current Age, Age at First Sex, Age at First Homelessness, and HIV Risk Perceptions Predict Sexual Risk Behaviors among Sexually Active Homeless Adults. While HIV disproportionately impacts homeless individuals, little is known about the prevalence of HIV risk behaviors in the southwest and how age factors and HIV risk perceptions influence sexual risk behaviors. We conducted a secondary data analysis (n = 460) on sexually active homeless adults from a cross-sectional study of participants (n = 610) recruited from homeless service locations, such as shelters and drop-in centers, in an understudied region of the southwest. Covariate-adjusted logistic regressions were used to assess the impact of age at homelessness onset, current age, age at first sex, and HIV risk perceptions on having condomless sex, new sexual partner(s), and multiple sexual partners (≥4 sexual partners) in the past 12 months. Individuals who first experienced homelessness by age 24 were significantly more likely to report condomless sex and multiple sexual partners in the past year than those who had a later onset of their first episode of homelessness. Individuals who were currently 24 years or younger were more likely to have had condomless sex, new sexual partners, and multiple sexual partners in the past 12 months than those who were 25 years or older. Those who had low perceived HIV risk had lower odds of all three sexual risk behaviors. Social service and healthcare providers should consider a younger age at homelessness onset when targeting HIV prevention services to youth experiencing homelessness. abstract_id: PUBMED:34303935 Do emerging adults know what their friends are doing and does it really matter? Methodologic challenges and associations of perceived and actual friend behaviors with emerging adults' disordered eating and muscle building behaviors. Disordered eating and muscle building behaviors are common among emerging adults, and friends may be a particularly salient social influence. Epidemiologic research often includes questions about participants' perceptions of their friends. A less common approach, with greater logistical challenges, is to ask for friend nominations and then survey friends about their actual behaviors. The comparability of these different approaches is unknown. This study addresses the following research questions: 1) What is the feasibility of collecting data from emerging adults' friends in epidemiologic research? 2) Do perceptions of friends' weight- and shape-related behaviors align with friends' actual behaviors? and 3) Are perceptions or friends' actual behaviors more strongly and consistently associated with emerging adults' behaviors? Participants (N = 2383) in the EAT (Eating and Activity over Time)-2018 study in Minnesota, USA, were asked to nominate up to three friends and provide data about those friends' weight- and shape-related behaviors (i.e. perceptions); nominated friends were invited to complete an abbreviated survey and report on their own same behaviors (i.e. actual). Among the invited friends, 191 responded and were linked to the 152 EAT 2018 participants who nominated them. Descriptive statistics, Spearman's correlations, and logistic regression were used to address the research questions. The response rate for nominated friends was very low (9.9%), suggesting this approach may have low feasibility for epidemiologic studies of emerging adults. Emerging adults' perceptions of their nominated friends' weight and shape-related behaviors generally did not align well with the behaviors reported by those friends. Furthermore, analytic models found different associations between friends' behavior and EAT 2018 participant behaviors, depending on the measure of friends' behavior used (perceived or actual). Careful consideration of the pros and cons of each study design is essential to build an evidence base and support interventions regarding emerging adults' weight- and shape-related health. abstract_id: PUBMED:34054645 Boldness Personality Traits Are Associated With Reduced Risk Perceptions and Adoption of Protective Behaviors During the First COVID-19 Outbreak. The containment measures imposed during the first COVID-19 outbreak required economic, social, and behavioral changes to minimize the spread of the coronavirus. Some studies have focused on how personality predicts distinct patterns of adherence to protective measures with psychopathic and antisocial traits predicting reduced engagement in such measures. In this study we extended previous findings by analyzing how boldness, meanness, and disinhibition psychopathic traits relate with both risk perceptions and protective behaviors during the first COVID-19 outbreak. A sample of 194 individuals (24% male) engaged in the survey, were assessed for psychopathic traits with the Triarchic Psychopathy Measure, and completed a COVID-19 survey targeting risk perceptions (spread, risk of becoming infected, state anxiety toward the COVID-19, and perceived risk of specific behaviors) and frequency of protective behaviors (e.g., not engaging in social distancing). Overall results show that boldness predicts reduced estimate of COVID-19 spread, reduced perceived risk of becoming infected, reduced state anxiety toward COVID-19, and reduced frequency of protective behaviors. Exploratory mediation models suggest that risk perceptions are not significant mediators of the association between psychopathic traits and reduced engagement in protective behaviors. Our results unveil that psychopathic traits affect risk perceptions and the propensity to engage in protective measures, emphasizing the need to accommodate these personality features in the public health strategy to control the COVID-19 spread. abstract_id: PUBMED:35072522 Moderators of Sexual Interest in Opposite-sex Friends. The fact that men and women experience sexual attraction toward their opposite-sex friends has been evidenced in various studies. It has also been shown that there is a close parallel between preferences for opposite-sex friends and mate preferences, i.e., that men prioritize physical attractiveness of their OSFs, while women prioritize their male friends' ability to provide protection and economic resources. Although this mating activation hypothesis has been validated to an extent, there is hardly any research that points to moderating factors which would define the boundary conditions for these effects. We present two studies that involved heterosexual participants who were in a committed relationship and at the same time had a heterosexual opposite-sex friend. We investigated how both the qualities of one's current partner and the qualities of one's opposite-sex friend shape sexual interest in opposite-sex friends for men and women. Results mostly support the mating activation hypothesis. We show that within actual cross-sex friendships: 1) physical attractiveness of opposite-sex friends predicts sexual interest toward them, and this effect is stronger for men than women, 2) current partner's attractiveness, provided support, and relationship satisfaction moderate this effect only for women, and not men, 3) perceived financial resources of opposite-sex friends predict sexual interest toward them for highly sexually unrestricted women, and, surprisingly, for those who are in committed relationships with high-income men. The results reaffirm previous evidence indicating that perceptions of opposite-sex friends can be viewed as a manifestation of evolved human mating strategies. abstract_id: PUBMED:16554267 Friends' influence on adolescents' first sexual intercourse. Context: Social-psychological theories of health behavior suggest that adolescents' sexual behaviors are influenced by the sexual attitudes and behaviors of their friends. Methods: Data on 2,436 participants in the National Longitudinal Study of Adolescent Health (Add Health) who were sexually inexperienced at Wave 1 (1994-1995) were analyzed to examine whether friend-related variables predicted initiation of vaginal intercourse by Wave 2 (1996). Analyses also assessed whether predictive relationships varied by level of involvement with friends. Odds ratios were generated by logistic regression analysis. Results: In the 9-18 months between Waves 1 and 2, 18% of participants initiated intercourse. In analyses controlling for gender, family structure and romantic relationships, the higher the proportion of a youth's friends who were sexually experienced, the greater the odds of sexual debut (odds ratio, 1.01); the odds also were elevated among youth who believed that they would gain their friends' respect by having sex (1.2). Relationships between friend variables and sexual initiation did not vary by level of involvement with friends. Conclusions: To maximize the likelihood of success, programs focused on delaying teenage sexual intercourse should address norms for sexual behavior among adolescents' close friends as well as the perceptions, skills and behaviors of individual youth. abstract_id: PUBMED:31518821 The reciprocal relations between facebook relationship maintenance behaviors and adolescents' closeness to friends. Introduction: Scholars suggest that social networking sites such as Facebook offer adolescents an ideal setting for engaging in relationship maintenance behaviors. Despite these suggestions, it remains an open question whether adolescents' online relationship maintenance behaviors can improve aspects of positive friendship quality, such as feelings of closeness. Additionally, it is unclear whether adolescents' feelings of closeness can motivate them to engage in online relationship maintenance behaviors. The aim of this two-wave longitudinal panel study was to investigate the reciprocal relations between Facebook relationship maintenance behaviors (FRMB) and adolescents' closeness to friends. Method: A sample of 12- to 18-year-old Flemish adolescents (NTime1 = 1840) filled out a paper-and-pencil survey twice within a six month-interval. We estimated cross-lagged structural equation models in AMOS to test the reciprocal relations between FRMB, closeness to friends, and receiving positive Facebook reactions. To test whether the hypothesized relations were moderated by gender and/or age, we conducted multiple group comparison tests. Results: Findings revealed that FRMB and adolescents' closeness to friends were reciprocally related over time: FRMB positively predicted adolescents' closeness to friends and closeness to friends positively predicted FRMB. In addition, receiving positive Facebook reactions mediated these reciprocal relations. Conclusion: By revealing that receiving positive Facebook reactions mediates the reciprocal relationships between FRMB and adolescents' closeness to friends, we now better understand how FRMB can increase adolescents' closeness to friends and how increased closeness to friends can enhance adolescents' FRMB. The discussion focuses on the understanding of these findings, directions for future research, and key limitations. abstract_id: PUBMED:23828725 Anticipating early fatality: friends', schoolmates' and individual perceptions of fatality on adolescent risk behaviors. Past research indicates that anticipating adverse outcomes, such as early death (fatalism), is associated positively with adolescents' likelihood of engaging in risky behaviors. Health researchers and criminologists have argued that fatalism influences present risk taking in part by informing individuals' motivation for delaying gratification for the promise of future benefits. While past findings highlight the association between the anticipation of early death and a number of developmental outcomes, no known research has assessed the impact of location in a context characterized by high perceptions of fatality. Using data from Add Health and a sample of 9,584 adolescents (51% female and 71% white) nested in 113 schools, our study builds upon prior research by examining the association between friends', school mates', and individual perceptions of early fatality and adolescent risk behaviors. We test whether friends' anticipation of being killed prior to age 21 or location in a school where a high proportion of the student body subscribes to attitudes of high fatality, is associated with risky behaviors. Results indicate that friends' fatalism is positively associated with engaging in violent delinquency, non-violent delinquency, and drug use after controlling for individual covariates and prior individual risk-taking. Although friends' delinquency accounts for much of the effect of friends' fatalism on violence, none of the potential intervening variables fully explain the effect of friends' fatalism on youth involvement in non-violent delinquency and drug use. Our results underscore the importance of friendship contextual effects in shaping adolescent risk-taking behavior and the very serious consequences perceptions of fatality have for adolescents' involvement in delinquency and drug use. Answer: Yes, perceptions of friends' behaviors do affect the age at first sex. A study conducted with adolescents in Cebu, Philippines, found that boys and girls who perceived their friends to have engaged in romantic and physical behaviors such as dating, holding hands, kissing, petting, or having sex were significantly more likely to have experienced that behavior themselves by ages 17 to 19 years. The study showed that for each additional behavior an adolescent perceived their friends to have experienced, the hazard of having sex at a younger age increased, even after adjusting for mothers'/caretakers' disapproval of premarital sex and other factors (PUBMED:16982393). This suggests that peers play an important role in adolescents' sexual behavior and that interventions could use such information to help adolescents make responsible and informed decisions about sexuality.
Instruction: Selective cerebral perfusion using moderate flow in complex cardiac surgery provides sufficient neuroprotection. Are children young adults? Abstracts: abstract_id: PUBMED:22843513 Selective cerebral perfusion using moderate flow in complex cardiac surgery provides sufficient neuroprotection. Are children young adults? Objective: Selective cerebral perfusion (SCP) is commonly applied during the correction of complex congenital cardiac defects. In this study, we assessed the impact of different flow levels of SCP on potential brain ischaemia. Methods: Fifteen piglets (7-10 kg, age 3-4 weeks) received SCP via the right common carotid artery during cardiopulmonary bypass at 25°C for 90 min. Regular brain perfusion (1 ml/g brain weight/min), moderate hypoperfusion (0.5 ml/g/min) and extensive hypoperfusion (0.25 ml/g/min) were evaluated. Clinical parameters and tissue oxygenation index (TOI) were registered online until 3 h of reperfusion. Hematoxylin and eosin (HE) staining and immunohistological analyses for apoptosis inducing factor (AIF) and nitrotyrosine (NO-Tyr) were performed on sections of the hippocampus. Results: Intracerebral pressure remained stable throughout the study. Haemodynamic parameters, blood gas and lactate measurements were stable until the end of the study. Extensive hypoperfusion led to a moderate reduction of TOI. NO-Tyr immuno-positive cells were 15.7% at regular cerebral perfusion, 23.9% at moderate hypoperfusion (P = n.s.) and 46.1% at extensive hypoperfusion (P &lt; 0.05). AIF immuno-positive nuclei were present in 8.3% of the hippocampus cells after regular perfusion, in 10.8% after moderate hypoperfusion (P = n.s.) and in 17.9% after extensive hypoperfusion (P &lt; 0.05). Conclusions: SCP using a moderate SCP flow regime demonstrates comparable results to normal brain perfusion while after extensive hypoperfusion significant morphological brain injury could be found. Thus moderate, but not extensive, hypoperfusion might have the potential to prevent perfusion-related cerebral oedema and an increasing risk of brain injury. abstract_id: PUBMED:23977601 Selective cerebral perfusion for cerebral protection: what we do know. Selective antegrade cerebral perfusion (SACP) for aortic arch surgery has evolved considerably since it was first reported. Various pressure rates have been investigated through animal models, as has the effect of warmer perfusate temperatures and hematocrit. Clinical research into pH management, the role of unilateral and bilateral perfusion, and core temperatures have further refined the procedure. We recommend the following protocol for SACP: perfusion pressure between 40-60 mmHg, flow rates between 6-10 mL/kg/min, and perfusate temperature of 20-28 °C; core cooling to 18-30 °C contingent on duration of arrest; alpha-stat pH management; hematocrit between 25-30%; near infrared spectroscopy to monitor cerebral perfusion; and bilateral perfusion when prolonged durations of SACP is anticipated. abstract_id: PUBMED:35644556 Neuroprotection during Open Aortic Arch Surgery: Cerebral Perfusion Methods and Temperature. Neuroprotection is important in open aortic arch surgery because of the dependence of brain tissues on cerebral perfusion. Therefore, several techniques have been developed to reduce cerebral ischemia and improve outcomes in open aortic arch surgery. In this review, I describe various neuroprotective strategies, such as profound and deep hypothermic circulatory arrest, selective antegrade cerebral perfusion, retrograde cerebral perfusion, and lower body circulatory arrest; compare their advantages and disadvantages, and discuss their evolution and current status by reviewing relevant literature. abstract_id: PUBMED:35096996 Nitric Oxide in Selective Cerebral Perfusion Could Enhance Neuroprotection During Aortic Arch Surgery. Background: Hypothermic circulatory arrest (HCA) in aortic arch surgery has a significant risk of neurological injury despite the newest protective techniques and strategies. Nitric oxide (NO) could exert a protective role, reduce infarct area and increase cerebral perfusion. This study aims to investigate the possible neuroprotective effects of NO administered in the oxygenator of selective antegrade cerebral perfusion (SCP) during HCA. Methods: Thirty male SD adult rats (450-550 g) underwent cardiopulmonary bypass (CPB), cooling to 22°C body core temperature followed by 30 min of HCA. Rats were randomized to receive SCP or SCP added with NO (20 ppm) administered through the oxygenator (SCP-NO). All animals underwent CPB-assisted rewarming to a target temperature of 35°C in 60 min. At the end of the experiment, rats were sacrificed, and brain collected. Immunofluorescence analysis was performed in blind conditions. Results: Neuroinflammation assessed by allograft inflammatory factor 1 or ionized calcium-binding adapter molecule 1 expression, a microglia activation marker was lower in SCP-NO compared to SCP (4.11 ± 0.59 vs. 6.02 ± 0.18%; p &lt; 0.05). Oxidative stress measured by 8oxodG, was reduced in SCP-NO (0.37 ± 0.01 vs. 1.03 ± 0.16%; p &lt; 0.05). Brain hypoxic area extent, analyzed by thiols oxidation was attenuated in SCP-NO (1.85 ± 0.10 vs. 2.74 ± 0.19%; p &lt; 0.05). Furthermore, the apoptotic marker caspases 3 was significantly reduced in SCP-NO (10.64 ± 0.37 vs. 12.61 ± 0.88%; p &lt; 0.05). Conclusions: Nitric oxide administration in the oxygenator during SCP and HCA improves neuroprotection by decreasing neuroinflammation, optimizing oxygen delivery by reducing oxidative stress and hypoxic areas, finally decreasing apoptosis. abstract_id: PUBMED:25550987 Selective antegrade cerebral perfusion reduces brain injury following deep hypothermic circulatory arrest in the piglets' model by decreasing the levels of protein SUMO2/3-ylation. Selective antegrade cerebral perfusion (SACP), which was adopted by many surgical groups for complex neonatal cardiac surgery, especially aortic arch repair, is a proven adjunct for neuroprotection during deep hypothermic circulatory arrest (DHCA). Several recent studies suggest that SUMO2/3 modification of proteins is markedly activated during deep hypothermia and believed to be an endogenous neuroprotective stress response. Here, we report that SACP reduces the increasing degree of SUMO2/3 conjugation following DHCA. Piglets were subjected to 1 h SACP and/or 1 h DHCA. DHCA was sufficient to markedly increase in protein SUMOylation by SUMO2/3 both in the hippocampus and cerebral cortex. SACP, especially at flow rate of 50 ml/kg/min, reduces the increasing degree of SUMO2/3 conjugation and also reduces levels of pro-apoptotic factors, Bax and Caspase 3, and increases levels of antiapoptotic factors, Bcl-2, following DHCA both in the hippocampus and cerebral cortex. This suggests that SACP at flow rate of 50 ml/kg/min is more appropriate for neuroprotection during DHCA in the pig model and level of protein SUMO2/3-ylation maybe an indicator of the degree of brain injury. abstract_id: PUBMED:36762841 Determination of selective antegrade perfusion flow rate in aortic arch surgery to restore baseline cerebral near-infrared spectroscopy values: a single-centre observational study. Objectives: Neuroprotection during aortic arch surgery involves selective antegrade cerebral perfusion. The parameters of cerebral perfusion, e.g. flow rate, are inconsistent across centres and are subject of debate. The aim of this study was to determine the cerebral perfusion flow rate during hypothermic circulatory arrest required to meet preoperative awake baseline regional cerebral oxygen saturation (rSO2). Methods: Patients scheduled for aortic arch surgery with hypothermic circulatory arrest were enrolled in this prospective observational study. After initiation of hypothermic circulatory arrest, bilateral selective antegrade cerebral perfusion was established and cerebral flow rate was continuously increased. The primary end point was the difference of cerebral saturation from baseline during cerebral perfusion flow rates of 6, 8 and 10 ml/kg/min. Results: A total of 40 patients were included. During antegrade cerebral perfusion, rSO2 was significantly lower than the baseline at 6 ml/kg/min [-7.3, 95% confidence interval (CI): -1.7, -12.9; P = 0.0015]. In contrast, flow rates of 8 and 10 ml/kg/min resulted in rSO2 that did not significantly differ from the baseline (-2; 95% CI: -4.3, 8.3; P &gt; 0.99 and 1.8; 95% CI: -8.5%, 4.8%; P &gt; 0.99). Cerebral saturation was significantly more likely to meet baseline values during selective antegrade cerebral perfusion with 8 ml/kg/min than at 6 ml/kg/min (44.1%; 95% CI: 27.4%, 60.8% vs 11.8%; 95% CI: 0.9%, 22.6%; P = 0.0001). Conclusions: At 8 ml/kg/min cerebral flow rate during selective antegrade cerebral perfusion, regional cerebral oximetry baseline values are significantly more likely to be achieved than at 6 ml/kg/min. Further increasing the cerebral flow rate to 10 ml/kg/min does not significantly improve rSO2. abstract_id: PUBMED:32513600 Neurological changes and outcomes of paediatric surgery of the aortic arch using selective cerebral perfusion Introduction: The aims of this article are to analyse the neuropsychological changes in the medium-term in children subjected to aortic arch surgery using selective cerebral perfusion (SCP), as well as to detect any modifiable factors in the surgical technique that may contribute to minimising the subsequent neurological involvement. Material And Methods: Inclusion criteria were established as: aortic arch disease operated on using SCP during the first year of life, between 10 August 2004 and 24 May 2016, biventricular physiology, and gestational age greater than 31 weeks. In the absence of a chromosomal disease, they were classified, from a neurological point of view, using the Rankin score. Children over 4-years of age were subjected to intelligence studies, including attention level, development, and psycho-lingual skills. Results: The study included a total of 82 patients with a mean age of 1.8 months. The mean SCP flow was 32ml/kg/min. The mean time of SCP was 31minutes. The overall mortality of the series was 14.8%. Neurological dysfunction was observed in 35.9% of patients, and the following were detected as risk factors: surgery in patients less than 10-days-old, duration of SCP greater than 40minutes, and the time required for the cooling down and/or warming-up. Attention deficit was diagnosed in 35.2% of patients greater than 5-years-old. Conclusions: Patients operated on using SCP in in the first year of life required a neuropsychological follow-up, and there are modifiable surgical factors that may have an influence on neurological development. abstract_id: PUBMED:23977578 Total arch replacement using selective antegrade cerebral perfusion as the neuroprotection strategy. Background: Optimal neuro-protection strategy in aortic arch surgery is a controversial issue. The present study reported surgical outcomes of total arch replacement using selective antegrade cerebral perfusion (SACP). Methods: From January 2002 to December 2012, 438 consecutive patients (mean age 69.1±13.4 years) underwent total arch replacement using SACP through a median sternotomy. Acute aortic dissection was present in 86 patients (18.3; 80 type A, 6 type B) and shaggy aorta in 36 (8.2%). Emergent/urgent surgery was required in 144 (32.9%). Our current approach included: (I) meticulous selection of arterial cannulation site and type of arterial cannula; (II) selective antegrade cerebral perfusion; (III) maintenance of minimal tympanic temperature between 20 and 23 °C; (IV) early re-warming after distal anastomosis; and (V) maintaining fluid balance below 1,000 mL during cardiopulmonary bypass. A woven Dacron four branch graft was used in all patients. Results: Overall hospital mortality was 4.6% (20/438). Hospital mortality was 9.7% (14/144) in urgent/emergent surgery and 2.0% (6/294) in elective cases. Permanent neurological deficit occurred in 5.3% (23/438) of patients. Prolonged ventilation was necessary in 58 patients (13.2%). Multivariate analysis demonstrated that risk factors for hospital mortality were octogenarian (OR 4.45, P=0.03), brain malperfusion (OR 23.52, P=0.002) and cardiopulmonary bypass time (OR 1.07, P=0.04). The follow-up was completed in 97.9% with mean follow up of 2.3±2.3 years. Survival at 5 and 10 years after surgery was 79.6±3.3% and 71.2±5.0% respectively. In the acute type A dissection group, 10-year survival was 96.8±2.9%, while in the elective non-dissection group 5- and 10-year survival were 81.4±7.2% and 77.0±5.9% respectively. Conclusions: Our current approach for total aortic arch replacement utilizing SACP was associated with low hospital mortality and morbidities leading to favorable long-term outcome. abstract_id: PUBMED:18374757 Optimizing selective cerebral perfusion: deleterious effects of high perfusion pressures. Objective: Selective cerebral perfusion is a proven adjunct to hypothermia for neuroprotection in complex aortic surgery. The ideal conditions for the provision of selective cerebral perfusion, however, including optimal perfusion pressure, remain unknown. We investigated the effects of various perfusion pressures during selective cerebral perfusion on cerebral physiology and outcome in a long-term porcine model. Methods: Thirty piglets (26.3 +/- 1.4 kg), cooled to 20 degrees C on cardiopulmonary bypass with alpha-stat pH management (mean hematocrit 23.6%), were randomly assigned to 90 minutes of selective cerebral perfusion at a pressure of 50 (group A), 70 (group B), or 90 (group C) mm Hg. With fluorescent microspheres and sagittal sinus sampling, cerebral blood flow and cerebral oxygen metabolism were assessed at baseline, after cooling, at two points during selective cerebral perfusion, and for 2 hours after cardiopulmonary bypass. Visual evoked potentials were monitored during recovery. Neurobehavioral scores were assessed blindly from standardized videotaped sessions for 7 postoperative days. Results: Cerebral blood flow during selective cerebral perfusion was significantly increased by higher-pressure perfusion (P = .04), although all groups sustained similar levels of cerebral oxygen metabolism during selective cerebral perfusion (P = .88). After the end of cardiopulmonary bypass, the cerebral oxygen metabolism increased to above baseline in all groups, with the highest levels seen in group C (P = .06). Intracranial pressure was significantly higher during selective cerebral perfusion in group C (P = .0002); visual evoked potentials did not differ among groups. Neurobehavioral scores were significantly better in group A (P = .0002). Conclusion: Selective cerebral perfusion at 50 mm Hg provides neuroprotection superior to that at higher pressures. The increased cerebral blood flow with higher-pressure selective cerebral perfusion is associated with cerebral injury, reflected by high post-cardiopulmonary bypass cerebral oxygen metabolism and poorer neurobehavioral recovery. abstract_id: PUBMED:16798306 Intermittent pressure augmentation during retrograde cerebral perfusion under moderate hypothermia provides adequate neuroprotection: an experimental study. Objective: For cerebral protection during aortic surgery, we introduced a novel retrograde cerebral perfusion method with intermittent pressure augmentation. We then assessed whether this novel method provides benefits similar to those provided by antegrade selective cerebral perfusion. Methods: Eighteen dogs were randomly divided into 3 groups: the RCP-INT group, intermittent-retrograde cerebral perfusion at 15 mm Hg with intermittent pressure augmentation to 45 mm Hg (n = 6); the ASCP group, antegrade selective cerebral perfusion at a flow rate of 10 mL x kg(-1) x min(-1) (n = 6); and the sham group, no circulatory arrest (n = 6). Cooling (26 degrees C) with cardiopulmonary bypass and 60 minutes of circulatory arrest were performed in the RCP-INT and ASCP groups. The levels of tau protein in the cerebrospinal fluid and the diameters of the retinal vessels were measured. The neurologic deficit scores and the histopathologic damage scores of the brains were determined. Results: The total postoperative tau protein levels (calculated as the area under the curve) did not differ significantly between the RCP-INT and ASCP groups (203 +/- 87 pg x mL(-1) x h vs 154 +/- 69 pg x mL(-1) x h, P = .95). The retinal vessels were effectively dilated at an augmented pressure of 45 mm Hg in the RCP-INT group. The total neurologic deficit score (0 = normal, 500 = brain death) and histopathologic damage score (0 = normal, 40 = worst) were not significantly different between the RCP-INT and ASCP groups (neurologic deficit score: 75 +/- 21 vs 70 +/- 21, P = .98; histopathologic damage score: 13.5 +/- 1.5 vs 14.2 +/- 1.3, P = .84). Conclusions: Intermittent augmented pressure dilated the cerebral vessels, allowing adequate blood supply without injuring the brain. This retrograde cerebral perfusion method provides adequate neuroprotection during moderate hypothermia. Answer: Selective cerebral perfusion (SCP) is a technique used during complex cardiac surgery to provide neuroprotection by maintaining blood flow to the brain when circulation is interrupted. The question "Are children young adults?" in the context of SCP implies whether the same principles and outcomes of SCP apply to both children and young adults. The study on piglets, which are often used as a model for human pediatric physiology, showed that moderate SCP flow regimes provided comparable results to normal brain perfusion, suggesting that moderate hypoperfusion might prevent perfusion-related cerebral edema and an increasing risk of brain injury (PUBMED:22843513). This indicates that, at least in this animal model, a moderate flow of SCP can provide sufficient neuroprotection, which could be extrapolated to suggest that children might benefit from similar SCP flow rates as young adults. However, it is important to note that children are not simply small adults; they have distinct physiological characteristics that can affect the outcomes of medical interventions. For instance, the study on pediatric surgery of the aortic arch using SCP found that there were modifiable surgical factors that could influence neurological development, and patients operated on using SCP in the first year of life required neuropsychological follow-up (PUBMED:32513600). This suggests that while the principles of SCP might be similar, the application and outcomes could differ between children and young adults due to developmental factors. Moreover, the determination of the optimal SCP flow rate in aortic arch surgery to restore baseline cerebral near-infrared spectroscopy values showed that at 8 ml/kg/min cerebral flow rate during SCP, regional cerebral oximetry baseline values are significantly more likely to be achieved than at 6 ml/kg/min (PUBMED:36762841). This study, although not specifically focused on children, indicates that the flow rate can be critical in achieving desired neuroprotection outcomes. In conclusion, while moderate SCP flow can provide sufficient neuroprotection in complex cardiac surgery, the specific flow rates and protocols may need to be adjusted for children due to their unique physiological characteristics. Further research and clinical studies are necessary to optimize SCP techniques for pediatric patients.
Instruction: Nurse and resident satisfaction in magnet long-term care organizations: do high involvement approaches matter? Abstracts: abstract_id: PUBMED:16600014 Nurse and resident satisfaction in magnet long-term care organizations: do high involvement approaches matter? Aim: This study examines the association of high involvement nursing work practices with employer-of-choice (magnet) status in a sample of Canadian nursing homes. Background: In response to a severe shortage of registered nursing personnel, it is imperative for health care organizations to more effectively recruit and retain nursing personnel. Some long-term care organizations are developing employee-centred cultures that allow them to effectively enhance nurse and resident satisfaction. At the same time, many nursing homes have adopted progressive nursing workplace practices (high involvement work practices) that emphasize greater employee empowerment, participation and commitment. Method: A mail survey was sent to the director of nursing in 300 nursing homes in western Canada. In total, 125 useable questionnaires were returned and constituted the data set for this study. Separate ordinary least squares regressions are performed with magnet strength, nurse satisfaction and resident satisfaction used as dependent variables. Results: Nursing homes that demonstrate strong magnet (employer-of-choice) characteristics are more likely to have higher levels of nurse and patient satisfaction, even after controlling for a number of significant factors at the establishment level. Magnet nursing homes are more likely to have progressive participatory decision-making cultures and much more likely to spend considerable resources on job-related training for their nursing staff. The presence of high involvement work practices is not found to be a significant predictor in magnet strength, nurse or resident satisfaction. Conclusion: Merely adopting more high involvement nursing work practices may be insufficient for nursing homes, which desire to become 'employers-of-choice' in their marketplaces, especially if these practices are adopted without a concomitant investment in nurse training or an enhanced commitment to establishing a more democratic and participatory decision-making style involving all nursing staff. abstract_id: PUBMED:34560527 Nursing assistants and resident satisfaction in long-term care: A systematic review. Background: The number of nursing assistants (NAs) in the long-term care industry is on the rise, helping to service the needs of an increasingly aging population. Understanding influences on NAs service qualities and resident satisfaction is important to sustainable long-term care services. To date, the research evidence about NAs factors and resident satisfaction has not been synthesized. We aimed to address this gap in the evidence. Research Design And Methods: Utilizing a mixed-methods systematic review, we searched PubMed, PsycINFO, AgeLine, MEDLINE, Scopus, and Google Scholar databases for journal articles with primary data published from database inception to March 2021. Both qualitative and quantitative literature were considered and a narrative summary provided. Study reporting quality was assessed using the Joanna Briggs Institute Critical Appraisal Checklists and Critical Appraisal Skills Programme Checklist. Results: A total of 25 articles, spanning 9 countries and regions, were included in the review. Study reporting quality was good. A positive relationship between aged-care resident satisfaction and NAs job satisfaction (n = 8) was identified. Resident satisfaction was enhanced through NA training programs (n = 7), quality of daily interaction with NAs (n = 6), and the overall contribution of NAs in promoting residents` experiences (n = 4) DISCUSSION: The current findings highlight the important role of NAs in improving resident care in long-term care settings. Nursing home administrators need to better understand the value of NAs and to provide supports for growth and development in their work. Continuing the professional development of NAs can not only improve their skills but also benefit resident satisfaction and organizational sustainability. abstract_id: PUBMED:35144081 Validation of the Chinese version of the resident satisfaction in long-term care facilities. Background: Accurate measurements of resident satisfaction in long-term care settings can provide administrators with valuable information to improve the quality of care. However, such assessment has been insufficient in long-term care facilities of China due to limited validated measuring instruments. We aim to translate and validate a Chinese version of the resident satisfaction assessment based upon the Ohio Long-term Care Resident Satisfaction Survey (OLCRSS). Method: An external specialist team assessed the translated items of OLCRSS and provided content validation scores (CVI). Criterion-related validity was determined by measuring the correlation between the Chinese version OLCRSS, a global satisfaction, a quality-of-life questionnaire, and a depression measurement. The reliability was assessed by Cronbach alpha and intra-class correlation coefficients. We conducted an exploratory factor analysis to examine the structure of the Chinese version OLCRSS. A total of 172 older adults recruited from Shanghai, China, participated in this study. Results: The Chinese version OLCRSS demonstrated excellent validity, with the CVI = 1.0, ICC = .96, p &lt; .001, and a Cronbach alpha = .96. The Chinese version OLCRSS was significantly correlated with quality of life (r=.267 p &lt; 0.01), with the global satisfaction (r=.309, p &lt; 0.01), and had negative correlation with depressive symptoms (r = -.044, P = .498). The exploratory factor analysis implied a slightly different structural relationship between items. Possible explanations were discussed in the discussion section. Conclusions: The Chinese version OLCRSS is a valid and potentially useful instrument for assessing resident satisfaction in long-term care facilities among the older Chinese population and Chinese contexts. abstract_id: PUBMED:27573220 Outcomes related to effective nurse supervision in long-term care homes: an integrative review. Aim: The aim of this integrative review was to describe the organisational, unregulated nurse, and resident outcomes associated with effective supervisory performance of regulated nurses (registered nurses or registered practical nurses) in long-term care homes. Background: While there are data on the influence of regulated nurse staffing levels on resident outcomes, the influence of effective supervisory performance of regulated nurses on resident and organisational outcomes, nursing assistant outcomes have yet to be comprehensively explored. Evaluation: A search of six databases was made for articles dating from 2000 to 2015. Twenty-four articles were selected and an integrative review was performed. Results: Effective nurse supervision had statistically significant positive associations (P &lt; 0.05, P &lt; 0.000) with six different organisational, unregulated nurse and resident outcomes: nurse assistant job satisfaction, turnover/intention to leave, effectiveness, decision making, job stress and consumer satisfaction. Qualitative analyses corroborate these findings. Conclusions: There appear to be some associations between effective supervisory performance of regulated nurses with positive organisational, unregulated nursing and resident outcomes. Implications For Nursing Management: Nursing managers and leaders in long-term care may promote improvements in effective nurse supervision performance as a way of reducing turnover and improving resident outcomes. abstract_id: PUBMED:9524551 Improving certified nurse aide retention. A long-term care management challenge. In the long-term care industry, the turnover rate among nurse aides is extremely high. This adversely affects resident satisfaction, resident care, morale, and finances. It presents a challenge to long-term care administration. Refusing to accept high turnover as an impossible situation allows changes to be made. The authors describe how the staff at one intermediate care facility identified its problems, assessed the causes, and implemented corrective action. abstract_id: PUBMED:24889786 Beyond family satisfaction: Family-perceived involvement in residential care. Aim: To explore perceived family involvement and its relationship with satisfaction and facility impressions. Method: A questionnaire was posted to residents' next of kin from four South Australian residential aged care facilities. Results: One hundred and fifty next of kin participated in the survey. Family-perceived involvement was significantly and positively correlated with satisfaction and facility impressions. Conclusion: The findings of this study add to the limited body of research into family involvement in long-term residential care. Feedback from the family regarding particular aspects of involvement may also improve the experience of long-term care for both family and resident, and assist with the identification of specific issues towards which organisations may target their quality improvement efforts. abstract_id: PUBMED:34536909 Individual and organizational factors of nurses' job satisfaction in long-term care: A systematic review. Background: In long-term care facilities, nurses' job satisfaction predicts staff turnover, which adversely affects resident outcomes. Thus, it is important to develop a comprehensive understanding of factors affecting nurses' job satisfaction in long-term care facilities. Objectives: To analyze factors associated with job satisfaction among nurses in nursing homes from individual and organizational perspectives utilizing a deductive approach. Design: Systematic literature review SETTING: Nursing homes. Participants: Registered nurses and licensed practical nurses in nursing homes. Methods: A systematic literature review of seven online databases (EMBASE, CINAHL, PsychINFO, MEDLINE, PubMed, Scopus, and Web of Science) to July 23, 2020 was conducted. Studies were included if they examined factors associated job satisfaction in the target population and setting. Decision rules on how to determine factors important to nurse job satisfaction were developed a priori. Two team members independently screened the publications for inclusion, extracted data, and assessed included publications for methodological quality; conflicts were resolved through a consensus process and consultation of the third senior team member when needed. Results: Twenty-eight studies were included. Of these, 20 studies were quantitative, 6 were qualitative, and 2 were mixed methods. Factors associated with job satisfaction were grouped into two categories: individual and organizational. Individual factors significantly associated with job satisfaction were age, health status, self-determination/autonomy, psychological empowerment, job involvement, work exhaustion, and work stress. Individual factors identified as not important or equivocal were gender and experience as a nurse/in aged care. No organizational factors were identified as important for nurses' job satisfaction. Facility ownership, supervisor/manager support, resources, staffing level, and social relationships were identified as equivocal or not important. Findings from qualitative studies identified relationship with residents as an important factor for job satisfaction. Conclusions/implications: Factors identified as important to nurses' job satisfaction differ from those reported among care aides in nursing homes and nurses employed in acute care settings, suggesting that there is a need for unique approaches to enhance nurses' job satisfaction in nursing homes. abstract_id: PUBMED:28945639 Associations Among Health Care Workplace Safety, Resident Satisfaction, and Quality of Care in Long-Term Care Facilities. Objective: We performed an integrated cross-sectional analysis of relationships between long-term care work environments, employee and resident satisfaction, and quality of patient care. Methods: Facility-level data came from a network of 203 skilled nursing facilities in 13 states in the eastern United States owned or managed by one company. K-means cluster analysis was applied to investigate clustered associations between safe resident handling program (SRHP) performance, resident care outcomes, employee satisfaction, rates of workers' compensation claims, and resident satisfaction. Results: Facilities in the better-performing cluster were found to have better patient care outcomes and resident satisfaction; lower rates of workers compensation claims; better SRHP performance; higher employee retention; and greater worker job satisfaction and engagement. Conclusion: The observed clustered relationships support the utility of integrated performance assessment in long-term care facilities. abstract_id: PUBMED:17987827 Effects of director of care support on job stress and job satisfaction among long-term care nurse supervisors. The provision of care for frail older adults in Long-term care settings is challenging. It requires not only specialized knowledge and skills, but also supportive commitment on the part of directors of care to their nurse supervisors (registered nurses and registered practical nurses) and unregulated healthcare staff. In these complex work environments, communication and leadership are critical to staff job satisfaction. Therefore, it is essential that directors of care represent a source of support for their nurse supervisors. The purpose of this multi-site study was to examine the relationships among perceived support from directors of care, and nurse supervisors' job stress and job satisfaction. Forty-five per cent of the total variance in job satisfaction of nurse supervisors was explained by supervisory support, stress and job category (registered nurse vs. registered practical nurse). Greater supervisory support was also associated with reduced job stress. These findings are essential in developing strategies to improve the nurse supervisory role in long-term care settings. abstract_id: PUBMED:8826288 Satisfaction of nurse managers in long-term care. Nurse managers control quality and cost of the principal service provided by long-term care agencies. Job satisfaction was higher for long-term care nurse managers in metropolitan than in rural or suburban agencies. Findings suggest that continuing education may increase task satisfaction, as well as job knowledge of diploma-prepared nurse managers. Answer: The study examining the association of high involvement nursing work practices with employer-of-choice (magnet) status in Canadian nursing homes found that while magnet nursing homes are more likely to have progressive participatory decision-making cultures and invest in job-related training for their nursing staff, the presence of high involvement work practices alone was not a significant predictor in magnet strength, nurse, or resident satisfaction (PUBMED:16600014). This suggests that high involvement approaches may not be sufficient by themselves to enhance nurse and resident satisfaction in magnet long-term care organizations. Other factors, such as effective nurse supervision, have been shown to have positive associations with organizational, unregulated nurse, and resident outcomes, including nurse assistant job satisfaction and consumer satisfaction (PUBMED:27573220). Additionally, the retention of certified nurse aides, which is influenced by job satisfaction, is crucial as it affects resident satisfaction, resident care, morale, and finances (PUBMED:9524551). Moreover, the satisfaction of nursing assistants (NAs) has been positively related to aged-care resident satisfaction, with NA training programs, quality of daily interaction with NAs, and their overall contribution to residents' experiences enhancing resident satisfaction (PUBMED:34560527). This underscores the importance of NAs in long-term care settings and the need for their professional development to improve resident satisfaction. In conclusion, while high involvement approaches in nursing work practices are a feature of magnet long-term care organizations, they may not be the sole or most significant factor in determining nurse and resident satisfaction. Other elements such as effective nurse supervision, NA job satisfaction, and professional development, as well as the overall work environment and culture, play critical roles in achieving high levels of satisfaction among both nurses and residents in long-term care settings.
Instruction: Are there specific benefits of amoxicillin plus metronidazole in Aggregatibacter actinomycetemcomitans-associated periodontitis? Abstracts: abstract_id: PUBMED:22873656 Are there specific benefits of amoxicillin plus metronidazole in Aggregatibacter actinomycetemcomitans-associated periodontitis? Double-masked, randomized clinical trial of efficacy and safety. Background: It has been suggested that prescription of amoxicillin plus metronidazole in the context of periodontal therapy should be limited to patients with specific microbiologic profiles, especially those testing positive for Aggregatibacter actinomycetemcomitans. The main purpose of this analysis is to determine if patients positive for A. actinomycetemcomitans with moderate to advanced periodontitis benefit specifically from amoxicillin plus metronidazole given as an adjunct to full-mouth scaling and root planing. Methods: This is a double-masked, placebo-controlled, randomized longitudinal study including 41 participants who were positive for A. actinomycetemcomitans and 41 participants who were negative for A. actinomycetemcomitans. All 82 patients received full-mouth periodontal debridement performed within 48 hours. Patients then received either systemic antibiotics (375 mg amoxicillin and 500 mg metronidazole, three times daily) or placebo for 7 days. The primary outcome variable was persistence of sites with a probing depth (PD) &gt;4 mm and bleeding on probing (BOP) at the 3-month reevaluation. Using multilevel logistic regression, the effect of the antibiotics was analyzed according to the following factors (interaction effect): A. actinomycetemcomitans-positive or -negative at baseline, sex, age, smoking, tooth being a molar, and interdental location. Results: At reevaluation, participants in the test group had significantly fewer sites with a persisting PD &gt;4 mm and BOP than control patients (P &lt;0.01). Being A. actinomycetemcomitans-positive or -negative did not change the effect of the antibiotics. Patients benefited from the antibiotics irrespective of sex, age, or smoking status. Molars benefited significantly more from the antibiotics than non-molars (P for interaction effect = 0.03). Conclusions: Patients who were positive for A. actinomycetemcomitans had no specific benefit from amoxicillin plus metronidazole. Sites on molars benefited significantly more from the antibiotics than non-molar sites. abstract_id: PUBMED:29967975 Antimicrobial susceptibilities of Aggregatibacter actinomycetemcomitans and Porphyromonas gingivalis strains from periodontitis patients in Morocco. Background And Objective: Aggregatibacter actinomycetemcomitans and Porphyromonas gingivalis have been frequently isolated in periodontitis patients in Morocco. Its persistence after the subgingival debridement of the biofilm has been correlated with worse clinical outcomes. The aim of this study was to evaluate the antimicrobial susceptibilities of A. actinomycetemcomitans and P. gingivalis, to amoxicillin, amoxicillin plus clavulanate, metronidazole, and azithromycin. In addition, microbiological profiles of patients harbouring A. actinomycetemcomitans, P. gingivalis, or both were compared. Material And Methods: In 45 consecutive periodontitis Moroccan patients, subgingival samples were taken and processed by culture. Twenty-four A. actinomycetemcomitans and 30 P. gingivalis colonies were isolated (54 strains) and susceptibility tests, using the epsilometric method, were run for amoxicillin, amoxicillin plus clavulanate, metronidazole, and azithromycin. Minimum inhibitory concentrations for 50 (MIC50) and 90% (MIC90) of the organisms were calculated. Results: The prevalence of A. actinomycetemcomitans and P. gingivalis was 79.5 and 84.4%, respectively. A. actinomycetemcomitans showed susceptibility to amoxicillin, amoxicillin plus clavulanate, while 28% of the isolated strains were resistant to azithromycin and 61.7% towards metronidazole. No P. gingivalis resistance towards amoxicillin, amoxicillin plus clavulanate, metronidazole, and azithromycin was found. Conclusion: A. actinomycetemcomitans and P. gingivalis were frequently detected in Moroccan patients with periodontitis, while antimicrobial resistance was only detected for A. actinomycetemcomitans to metronidazole and azithromycin. Clinical Relevance: A. actinomycetemcomitans resistance against some antimicrobials in periodontitis patients in Morocco can influence the selection of the therapeutic approaches. abstract_id: PUBMED:38113673 Systemic ciprofloxacin treatment of multidrug-resistant Aggregatibacter actinomycetemcomitans in severe periodontitis. An adult periodontitis patient treated with mechanical/surgical therapy experienced gingival necrosis and granulomas post-treatment. Aggregatibacter actinomycetemcomitans, a tissue-invasive pathogen, was recovered and multidrug-resistant but susceptible to ciprofloxacin. Systemic ciprofloxacin eliminated A. actinomycetemcomitans with marked clinical improvement. Ciprofloxacin may be prescribed for A. actinomycetemcomitans periodontal infection unresponsive to the common amoxicillin-metronidazole treatment. abstract_id: PUBMED:27594851 Detection of Porphyromonas gingivalis and Aggregatibacter actinomycetemcomitans after Systemic Administration of Amoxicillin Plus Metronidazole as an Adjunct to Non-surgical Periodontal Therapy: A Systematic Review and Meta-Analysis. Objective: To evaluate the variations in the detection of Porphyromonas gingivalis and/or Aggregatibacter actinomycetemcomitans before and after systemic administration of amoxicillin plus metronidazole in association with non-surgical periodontal therapy (NSPT). Background: The adjunctive use of antibiotics has been advocated to improve the clinical outcomes of NSPT. However, no systematic review has investigated the microbiological benefit of this combination. Materials And Methods: An electronic search was conducted up to December 2015. Randomized clinical trials comparing the number of patients testing positive for P. gingivalis and/or A. actinomycetemcomitans before and after NSPT with (test group) or without (control group) amoxicillin plus metronidazole were included. The difference between groups in the variation of positive patients was calculated using the inverse variance method with a random effects model. Results: The frequency of patients positive for A. actinomycetemcomitans was decreased by 30% (p = 0.002) and by 25% (p = 0.01) in the test group compared to the control group at 3- and 6-month follow-up, respectively. Similar findings were observed when considering the frequency of patients positive for Porphyromonas gingivalis, with a reduction by 28% (p &lt; 0.0001), 32% (p &lt; 0.0001), and 34% (p = 0.03) in the test group compared to the control group at 3-, 6-, and 12-month follow-up, respectively. Conclusion: The systemic administration of amoxicillin plus metronidazole as an adjunct to NSPT significantly decreased the number of patients positive for P. gingivalis and A. actinomycetemcomitans compared with periodontal therapy alone or with a placebo. abstract_id: PUBMED:32169683 Antimicrobial resistance of Aggregatibacter actinomycetemcomitans, Porphyromonas gingivalis and Tannerella forsythia in periodontitis patients. Objectives: Administration of systemic antimicrobials as an adjunct to mechanical treatment of periodontitis and sites with adverse clinical results leads to improved outcomes. This study aimed to assess the antimicrobial susceptibility of Aggregatibacter actinomycetemcomitans, Porphyromonas gingivalis and Tannerella forsythia isolated from periodontitis patients to amoxicillin, metronidazole, azithromycin and moxifloxacin. Methods: A total of 76 patients diagnosed with generalised periodontitis were included in the study. Subgingival samples were processed by culture. Etest was used to determine susceptibility to amoxicillin, metronidazole, azithromycin and moxifloxacin. Results: A total of 141 isolates from 76 patients were evaluated, including 61 P. gingivalis, 43 T. forsythia and 37 A. actinomycetemcomitans. Etest results showed complete susceptibility of A. actinomycetemcomitans, P. gingivalis and T. forsythia to moxifloxacin. However, the isolates presented reduced susceptibility to the other antimicrobial agents investigated. Of the A. actinomycetemcomitans isolates, 70.3%, 40.5% and 89.2% were resistant to amoxicillin, azithromycin and metronidazole, respectively. The P. gingivalis samples showed relatively similar rates of resistance to amoxicillin (24.6%), azithromycin (21.3%) and metronidazole (24.6%). Similarly, 25.6%, 21.0% and 25.6% of the T. forsythia isolates were resistant to amoxicillin, azithromycin, and metronidazole, respectively. Conclusion: These findings show that moxifloxacin may be a promising antimicrobial agent against P. gingivalis, T. forsythia and A. actinomycetemcomitans for the treatment of periodontitis. However, amoxicillin, azithromycin and metronidazole were less effective, especially against A. actinomycetemcomitans in vitro. abstract_id: PUBMED:32696031 Effects on clinical outcomes of adjunctive moxifloxacin versus amoxicillin plus metronidazole in periodontitis patients harboring Aggregatibacter actinomycetemcomitans, Porphyromonas gingivalis, and Tannerella forsythia: exploratory analyses from a clinical trial. &lt;p&gt;Objective: Considering the etiopathogenesis of periodontitis, it is relevant to evaluate the efficacy of the adjunctive use of systemic antimicrobials based on microbial occurrence. This report explores whether patients harboring Aggregatibacter actinomycetemcomitans (Aa), Porphyromonas gingivalis (Pg), or Tannerella forsythia (Tf) at baseline could receive greater clinical benefits from adjunctive moxifloxacin (MXF) and amoxicillin plus metronidazole (AM+MT) in comparison to patients without the presence of these microorganisms before therapy for generalized periodontitis. A control group was established that received subgingival debridement (SD) alone.&lt;br /&gt; Method and materials: Thirty-six patients younger than 30 years of age were randomly allocated to one of three treatment groups: SD plus placebo, systemic MXF with SD, or AM+MT combined with SD. Subgingival samples were studied. The effects of the therapies on probing depth and clinical attachment level, including interactions with Aa, Pg, or Tf at baseline, were explored using regression models.&lt;br /&gt; Results: At 6 months, all treatment groups showed improved clinical outcomes in patients harboring Aa, Pg, or Tf at baseline compared to the patients who did not harbor these microorganisms at baseline. Indeed, in the presence of Aa, Pg, or Tf at baseline, the patients receiving antimicrobial protocols showed the most significant gains compared to the control group. Furthermore, the percentage of sites ≥ 6 mm was reduced in the test groups, compared to the control group; these periodontopathogens were not present in sites with probing depth ≥ 6 mm in the MXF group. The interactions of Aa, Pg, and Tf with the test groups significantly improved clinical parameters at 6 months (P &amp;lt; .001). Interestingly, the R2 value in the models that explored clinical attachment gain produced a high degree of correlation (&amp;gt; 0.75), indicating that a high percentage (&amp;gt; 75%) of the total variation in clinical attachment level gain can be explained by the independent variables.&lt;br /&gt; Conclusions: Although all patients benefited from the treatments, patients harboring Aa, Pg, or Tf at baseline showed improved clinical benefits at 6 months, suggesting that Aa, Pg, or Tf at baseline may change the effects of systemic MXF and AM+MT in generalized periodontitis. After 6 months, Aa, Pg, and Tf were not present in sites with probing depth ≥ 6 mm in the MXF group.&lt;/p&gt;. abstract_id: PUBMED:1313103 Microbiological and clinical results of metronidazole plus amoxicillin therapy in Actinobacillus actinomycetemcomitans-associated periodontitis. We report on the microbiological and clinical effects of mechanical debridement in combination with metronidazole plus amoxicillin therapy in 118 patients with Actinobacillus actinomycetemcomitans-associated periodontitis. Patients were categorized into 3 groups: 28 had localized periodontitis; 50 had generalized periodontitis, and 40 had refractory periodontitis. After initial treatment and metronidazole plus amoxicillin therapy 114 of 118 (96.6%) patients had no detectable A. actinomycetemcomitans. Significant reduction in pocket probing depth and gain of clinical attachment were achieved in almost all patients. Four patients were still positive for A. actinomycetemcomitans after therapy. Metronidazole resistance (MIC greater than 25 micrograms/ml) was observed in 2 of 4 strains from these patients. Patients still positive for A. actinomycetemcomitans or Porphyromonas gingivalis showed a significant higher bleeding tendency after therapy. It was concluded that mechanical periodontal treatment in combination with the metronidazole plus amoxicillin therapy is effective for subgingival suppression of A. actinomycetemcomitans in patients with severe periodontitis. abstract_id: PUBMED:28668698 Aggregatibacter actinomycetemcomitans serotype prevalence and antibiotic resistance in a UK population with periodontitis. Objectives: Aggregatibacter actinomycetemcomitans is a recognised pathogen involved in aggressive periodontitis. Seven serotypes of A. actinomycetemcomitans exist with a range of virulence and distribution dependent on ethnicity and geography. The ability of A. actinomycetemcomitans to invade soft tissue can necessitate the use of systemic antibiotics for treatment, however variations in its antibiotic susceptibility exist dependent on geographical location. Methods: Serotypes of A. actinomycetemcomitans isolates from a UK cohort of 50 patients with aggressive periodontitis were determined by PCR. Resistance of the isolates to eight antibiotics [penicillin (1U), amoxicillin (2μg), amoxicillin/clavulanic acid (30μg), metronidazole (5μg), clindamycin (2μg), tetracycline (10μg), ciprofloxacin (5μg) and ceftazidime (30μg)] were determined by disk diffusion according to BSAC guidelines. Results: Prevalences of serotypes a, c, b, e and mixed serotypes were 48%, 22%, 2%, 2% and 12%, respectively. The serotype of isolates from seven patients (14%) could not be deduced by PCR. Of the 56 isolates tested, 100% were resistant to penicillin and metronidazole, 87.5% to clindamycin, 83.9% to amoxicillin and 76.8% to ceftazidime. Low rates of resistance to tetracycline (8.9% resistant) and amoxicillin/clavulanic acid (14.3% resistant) were observed, whereas no isolates were resistant to ciprofloxacin. Conclusions: As in a number of publications the suggested treatment of aggressive periodontitis includes the combined use of amoxicillin with metronidazole, these results highlight the need for culture and antimicrobial susceptibility investigations in patients with aggressive periodontitis prior to systemic use of antibiotics concomitantly to periodontal therapy. abstract_id: PUBMED:8144729 Microbiological and clinical effects of metronidazole and amoxicillin in Actinobacillus actinomycetemcomitans-associated periodontitis. A 2-year evaluation. In this study, we evaluated the microbiological and clinical effects of mechanical debridement in combination with metronidazole and amoxicillin therapy in 48 patients with Actinobacillus actinomycetemcomitans-associated periodontitis, 3 months and at least 24 months after active treatment. The results of this study showed that 47 out of 48 patients were still negative for A. actinomycetemcomitans subgingivally, at the mucous membranes, the tonsillar area and in the saliva, 2 years after therapy. The clinical results showed that a reduction of probing pocket depth, probing attachment level, bleeding index and plaque index was not only seen in the time between baseline and 3 months after therapy, but further clinical improvement was observed between 3 and 24 months after active treatment. We conclude that combined mechanical debridement and metronidazole plus amoxicillin therapy is very effective in suppressing A. actinomycetemcomitans below cultivable levels over a long period of time, suggesting elimination of this organism, and that recolonization of A. actinomycetemcomitans seems to be a rare event. The elimination of A. actinomycetemcomitans is paralleled by a further improvement of the periodontal status of the patients, even up to 24 months after active treatment. abstract_id: PUBMED:29744209 The effect of metronidazole plus amoxicillin or metronidazole plus penicillin V on periodontal pathogens in an in vitro biofilm model. A combination of metronidazole (MET) and amoxicillin (AMX) is commonly used as adjunct to mechanical therapy of periodontal disease. The use of broad spectrum antibiotics such as AMX may contribute to development of antibiotic resistance. The aim was to evaluate the in vitro effect of replacing AMX with penicillin V (PV) in combination with MET on a biofilm model. A biofilm model consisting of Aggregatibacter actinomycetemcomitans, Porphyromonas gingivalis, and Fusobacterium nucleatum was developed. The biofilms were exposed to AMX + MET and PV + MET in two different concentrations. Bacterial viability in biofilms following antibiotic exposure was assessed by viable counts and by confocal microscopy. No live colonies of P. gingivalis nor F. nucleatum were retrieved from biofilms exposed to AMX + MET or PV + MET. The amount of A. actinomycetemcomitans was 4-5 logs reduced following antibiotic treatment; no statistical significance was achieved between AMX + MET or PV + MET treated biofilms. Replacement of AMX with PV at the same concentration, in combination with MET, resulted in similar effect on bacterial viability in this in vitro model. The option of using PV + MET instead of AMX + MET deserves further investigation, as this may contribute to reduce the risk of antibiotic resistance development. Answer: The specific benefits of amoxicillin plus metronidazole in Aggregatibacter actinomycetemcomitans (A. actinomycetemcomitans) associated periodontitis have been investigated in various studies. According to a double-masked, placebo-controlled, randomized longitudinal study, patients who were positive for A. actinomycetemcomitans did not show a specific benefit from the combination of amoxicillin plus metronidazole when used as an adjunct to full-mouth scaling and root planing. The study found that the antibiotics were beneficial irrespective of the A. actinomycetemcomitans-positive or -negative status at baseline, with molars benefiting significantly more from the antibiotics than non-molars (PUBMED:22873656). However, other studies have shown that the systemic administration of amoxicillin plus metronidazole as an adjunct to non-surgical periodontal therapy (NSPT) significantly decreased the number of patients positive for A. actinomycetemcomitans compared with periodontal therapy alone or with a placebo (PUBMED:27594851). Additionally, exploratory analyses from a clinical trial suggested that patients harboring A. actinomycetemcomitans, as well as Porphyromonas gingivalis and Tannerella forsythia, at baseline showed improved clinical benefits at 6 months when treated with adjunctive moxifloxacin or amoxicillin plus metronidazole compared to patients without the presence of these microorganisms before therapy (PUBMED:32696031). It is also important to note that resistance to metronidazole and other antibiotics by A. actinomycetemcomitans has been reported in some populations (PUBMED:29967975, PUBMED:32169683, PUBMED:28668698), which can influence the selection of therapeutic approaches. In cases where A. actinomycetemcomitans is resistant to the common amoxicillin-metronidazole treatment, ciprofloxacin has been suggested as an alternative (PUBMED:38113673). In summary, while the combination of amoxicillin plus metronidazole can be effective in reducing A. actinomycetemcomitans levels and improving clinical outcomes in periodontitis patients, the specific benefits may not be exclusive to A. actinomycetemcomitans-positive patients, and antibiotic resistance should be considered when selecting treatment options.
Instruction: Can malignant potential of endometrial polyps be determined by incorporating the endometrial intraepithelial neoplasia (EIN) classification? Abstracts: abstract_id: PUBMED:25524459 Can malignant potential of endometrial polyps be determined by incorporating the endometrial intraepithelial neoplasia (EIN) classification? Objective: The reported frequency of malignant or premalignant changes confined to endometrial polyps (EP) is 0.5-6%. The management of atypical endometrial hyperplasia (AEH) confined to EP is not yet established. Recently, an alternative pathological nomenclature has emerged using the term endometrial intraepithelial neoplasia (EIN) instead of atypia. The objective of this study was to evaluate the safety of conservative hysteroscopic resection of endometrial polyps with AEH or EIN. Methods: Retrospective cohort study of all cases of hysteroscopic resections of EP was performed at a single center between the years 2000-2011. All patients with a pathologic diagnosis of AEH in EP were included. A post-hoc revision of the pathologic specimens was made according to the EIN classification. Results: Of the 32 patients with AEH in EP, 25 had normal endometrial curetting. Even with AEH confined to EP, 12 cases (48%) showed AEH (11 cases) or carcinoma (1 case) in the hysterectomy specimens. EIN in EP (14 cases) was correlated with 57% of diagnosis of EIN or carcinoma in the uterus; whereas in the absence of EIN in EP only 1 of 9 cases showed EIN in the final pathologic specimen (p=0.002), and none with carcinoma, which yields a PPV of 14% and a NPV of 100%. Conclusion: The diagnosis of EIN in EP may be a better predictor than AEH for endometrial involvement with malignant or pre-malignant neoplasms. The safety of conservative hysteroscopic resection of EP with AEH/EIN is questioned. abstract_id: PUBMED:15262357 The malignant potential of endometrial polyps. Objectives: To determine the pre-malignant and malignant potential of endometrial polyps and to assess whether different clinical parameters are associated with malignancy in the polyps. Study Design: Four hundred and thirty consecutive cases of hysteroscopic diagnosis of endometrial polyp were retrieved. The medical records, preoperative vaginal sonography results and histopathology findings were reviewed. Statistical analysis was performed. Results: Hysteroscopy truly identified endometrial polyps in 95.7% of the cases. In 11.4% cases, hyperplasia without atypia was found in the endometrial polyp. In 3.3 and 3.0% of women pre-malignant or malignant conditions were found in the polyp. Older age, menopause status and polyps larger than 1.5 cm were associated with significant pre-malignant or malignant changes, although the positive predictive value for malignancy was low. All the malignant polyps were diagnosed only in postmenopausal women. The presence of postmenopausal or irregular vaginal bleeding, was not a predictor of malignancy in the polyp. Conclusions: Postmenopausal women with endometrial polyps are at increased risk of malignancy in the polyp. Those patients, whether symptomatic or not should be evaluated by hysteroscopic resection of the polyps. Asymptomatic premenopausal patients with polyps smaller than 1.5 cm can be observed. abstract_id: PUBMED:14559387 Detection of endometrial intraepithelial neoplasia (EIN) in postmenopausal endometrium. Objectives: To detect the presence of endometrial intraepithelial neoplasia (EIN) in the endometrium of postmenopausal patients. Patients And Methods: Sixty-three postmenopausal patients with endometrial polyps (n=48), hyperplasia (n=12) and endometrioid carcinoma (n=3) were enrolled for this study. The diagnosis of EIN was made by using morphological criteria and immunohistochemical methods for detection of PTEN and bcl-2. Results: EIN lesions were found in cases of endometrial polyp (n=1), atrophic endometrium (n=1) and in hyperplasia (n=1). The glands were packed, showed cytological atypia and were negative for both PTEN and bcl-2. Three patients with endometrial hyperplasia had isolated PTEN-negative glands but they were still bcl-2 positive. Conclusions: The use of immunohistochemical methods helps detect the presence of EIN in the postmenopausal endometrium but does not substitute the morphological criteria for this diagnosis. abstract_id: PUBMED:31466367 Proliferation in Postmenopausal Endometrial Polyps-A Potential for Malignant Transformation. Background and objectives: Endometrial polyps in asymptomatic postmenopausal women are often incidentally found, yet only 1.51% of them are malignant. Their potential for malignant transformation has not been adequately addressed. The aim of this study was to investigate the proliferation within endometrial polyps as one of the indicators of their malignization potential in asymptomatic postmenopausal women. Materials and Methods: Immunohistochemical studies of Ki-67 were performed. Cases included 52 benign postmenopausal polyps, 19 endometrioid carcinoma with coexisting benign polyps, 12 polyps with foci of carcinoma and 4 cases of polyps, which later developed carcinoma. The control group included 31 atrophic endometria and 32 benign premenopausal polyps. Ki-67 was scored in either 10 or 20 "hot spot" fields, as percentage of positively stained cells. Results: The median epithelial Ki-67 score in postmenopausal benign polyps (4.7%) was significantly higher than in atrophic endometria (2.41%, p &lt; 0.0001) and significantly lower than in premenopausal benign polyps (11.4%, p = 0.003) and endometrial cancer (8.3%, p &lt; 0.0001). Where endometrial polyps were found in association with endometrial carcinoma, Ki-67 was significantly higher in cancer (p &lt; 0.0001). No significant difference was found between Ki-67 scores of cancer focus and of the polyps tissue itself, respectively 2.8% and 4.55%, p = 0.37. Ki-67 expression, where polyps were resected and women later developed cancer, was not significantly different (p = 0.199). Conclusion: Polyps from asymptomatic postmenopausal women showed significantly more proliferation in both epithelial and stromal components than inactive atrophic endometria but less than premenopausal benign polyps and/or endometrial cancer. Benign postmenopausal endometrial polyps exhibit low proliferative activity, suggesting low malignant potential and may not require resection in asymptomatic women. abstract_id: PUBMED:36600505 Incidence of endometrial carcinoma in patients with endometrial intraepithelial neoplasia versus atypical endometrial polyp. ObjectiveOur study's primary aim was to compare the incidence of endometrial carcinoma in patients with a presurgical diagnosis of endometrial intraepithelial neoplasia confined to the endometrium (EIN-E) versus endometrial intraepithelial neoplasia confined to a polyp (EIN-P). Our secondary aim was to examine the difference in pathological features, prognostic risk groups and sentinel lymph node involvement between the two groups. Methods: We conducted a retrospective cohort study between January 2014 and December 2020 in a tertiary university-affiliated medical center. The study considered the characteristics of women who underwent hysterectomy with sentinel lymph node dissection for endometrial intraepithelial neoplasia (EIN). We compared EIN-E diagnosed by endometrial sampling via dilatation curettage or hysteroscopic curettage vs EIN-P. A multivariate logistic regression analysis was used to assess risk factors for endometrial cancer. Results: Eighty-eight women were included in the study, of those, 50 were women with EIN-P (EIN-P group) and 38 were women with EIN following an endometrial biopsy (EIN-E group).The median age was 57.5 years (range; 52-68) in the EIN-P group as compared with 63 years (range; 53-71) in the EIN-E group (p=0.47). Eighty-nine percent of the women in the EIN-E group presented with abnormal uterine bleeding whereas 46% of the women in the EIN-P group were asymptomatic (p=0.001). Pathology results following hysterectomy revealed concurrent endometrial carcinoma in 26% of women in the EIN-P group compared with 47% of women in the EIN-E group (p=0.038). Multivariate analysis showed that endometrial cancer was significantly less common in the EIN-P group (overall response (OR)=0.3 95% confidence interval (CI)=0.1-0.9, p=0.03). Eighty-four percent of cancers were grade one in the EIN-P group compared with 50% in the EIN-E group (p=0.048). Conclusions: Concurrent endometrial cancer is less frequent with EIN-P than with EIN-E. The high incidence of endometrial carcinoma in both groups supports the current advice to perform hysterectomy for post-menopausal women. Our data does not support performing sentinel lymph node dissection for EIN-P that was completely resected. The benefit of sentinel lymph node dissection for women with pre-operative EIN-E is yet to be determined. abstract_id: PUBMED:30707274 Precursor lesions of endometrial carcinoma The 2014 WHO classification distinguishes between endometrial hyperplasia without atypia (EH) and atypical endometrial hyperplasia/endometrioid intraepithelial neoplasia (AEH/EIN). AEH/EIN is characterized by crowded glands with cytologically atypical epithelium separated by little intervening stroma. Cellular atypia is characterized by nuclear enlargement and rounding, pleomorphism, loss of polarity, and presence of nucleoli. The diagnosis of atypia is facilitated by comparison with areas of adjacent normal and non-atypical glands, respectively. AEH/EIN is often associated with squamous but also secretory and mucinous metaplasia. Loss of PTEN and/or PAX2 immunoreactivity occurs in up to two thirds of AEH/EIN. In contrast, invasive low-grade endometrioid carcinoma shows confluent growth with loss of stroma and formation of labyrinth-like or cribriform structures. Differential diagnosis includes different forms of metaplasias, papillary proliferations, and hyperplastic polyps. Epithelial metaplasia may be present in various benign endometrial lesions as well as in endometrioid adenocarcinoma. AEH/EIN may also occur in endometrial polyps. Progestin therapy of AEH/EIN has low level of evidence but frequently leads to complete regression. Serous intraepithelial carcinoma (SEIC) is characterized by high-grade cellular atypia and polymorphism, detachment of cells, a mutant immunoreactive pattern for the P53 and an increased Ki67 labeling index. Although designated as precursor of serous carcinoma of the endometrium, biologically it is considered a non-invasive serous carcinoma since it may already be associated with massive extrauterine spread. abstract_id: PUBMED:15316149 How often are endometrial polyps malignant? Objective: Endometrial polyps are a frequent pathology encountered during pelvic sonography and diagnostic hysteroscopies. The true incidence of malignant endometrial polyps is unknown. The estimations range from 0.5 to 4.8%. The objective of this study was to investigate the frequency of malignant endometrial polyps, and to characterize the hysteroscopic image of these polyps. Methods: The study included 300 consecutive women with endometrial polyps who underwent hysteroscopic polypectomy. Patients' characteristics, operative findings, and pathology reports were analyzed. Results: Four of the 300 endometrial polyps removed by operative hysteroscopy were malignant (1.3%). Patients with malignant polyps were symptomatic peri- or postmenopausal women. All asymptomatic polyps in this series were found to be benign. We could not identify any distinct hysteroscopic feature of malignancy. Conclusions: Our study shows that about 1.5% of endometrial polyps may be malignant. This finding reinforces the indication for removal of symptomatic endometrial polyps, preferably by a hysteroscopic procedure. The significance of asymptomatic uterine polyps has still to be determined. abstract_id: PUBMED:28098927 The role of HE4 and CA125 in differentiation between malignant and non-malignant endometrial pathologies. Objectives: The aim of the study was to assess the role of HE4 and CA125 in differentiation between malignant and non-malignant endometrial pathologies. Material And Methods: A retrospective study of 87 patients with endometrial pathologies was conducted. Tumor markers were assessed two weeks before surgical intervention in each subject. The final diagnosis was established on the basis of the histopathological examination of the endometrium. Results: Serum HE4 levels were significantly higher in patients with endometrial cancer (EC) as compared to non-malignant endometrial pathologies (p &lt; 0.001), patients with stage I EC as compared to non-malignant endometrial pathologies (p &lt; 0.001), and patients with stage Ia EC as compared to non-malignant endometrial pathologies (p = 0.003). Serum CA125 levels were not significantly different as far as these groups of patients were concerned. Both tumor markers were significantly higher in patients with stage II-III as compared to stage I EC and non-malignant endometrial pathologies (p &lt; 0.001 for both markers). Sensitivity and specificity of HE4 at the cut-off level of 70 pmol/L for detecting endometrial malignancies were 73.08% and 85.71%, respectively. Sensitivity and specificity of CA125 at the cut-off level of 35 U/mL were 29.41% and 94.29%, respectively. The area under the curve (AUC) for HE4 was 0.875, suggesting that this marker reliably differentiates malignant from non-malignant endometrial pathologies (p &lt; 0.001). AUC for CA125 was 0.552, suggesting that this marker does not reliably differentiate between malignant and non-malignant endometrial pathologies (p = 0.414). Conclusion: HE4, in contrast to CA125, might be a useful tool for detecting malignant endometrial pathologies. abstract_id: PUBMED:33118172 Nulliparity and postmenopausal status are independent factors of malignancy potential of endometrial intraepithelial neoplasia in polyps. Objective: To estimate the risk of concurrent endometrial cancer in endometrium when endometrial intraepithelial neoplasia (EIN) is found within an endometrial polyp and to identify the possible predictive factors for concurrent endometrial cancer. Methods: Histopathologic data of women who underwent hysteroscopy for resection of endometrial polyps at Ankara Baskent University Hospital, between 2011 and 2019 were screened. Patients whose polypectomy report was EIN in a polyp, and who had a final report of the hysterectomy specimen were included. Patients were divided into two groups according to the presence of concurrent cancer in the hysterectomy material: group 1, concurrent cancer present and group 2, concurrent cancer absent. Statistical analyses were performed using SPSS. Results: A total of 4125 women underwent hysteroscopy for the resection of endometrial polyps. Of those women, 161 (3.9%) were diagnosed as having EIN and 115 met the criteria. The rate of concurrent endometrial cancer was 28.6% (33/115). According to multivariate analysis, nulliparity (odds ratio [OR] 0.38; 95% confidence interval [CI] 1.04-3.67; p = 0.036) and postmenopausal status (OR 0.64; 95% CI 0.42-0.98; p = 0.042) were found to be independent factors significantly associated with concurrent endometrial cancer. Conclusion: The incidence of concurrent cancer is higher in postmenopausal or nulliparous women when EIN is detected in a polyp. abstract_id: PUBMED:27062662 Wilms' tumor 1 protein expression in endometrial adenocarcinoma and endometrial intra-epithelial neoplasia. Aim: To investigate the expression of Wilms' tumor 1 (WT1) protein in endometrial adenocarcinoma (EC) and endometrial intra-epithelial neoplasia (EIN). Methods: WT1 protein expression was determined on immunohistochemistry in 30 EC patients and in 20 EIN patients. WT1 protein expression in proliferative (n = 7), secretory (n = 9) and atrophic endometrium (n = 9) and benign endometrial polyps (n = 28) was used as the control group. Cellular and vascular WT1 staining was scored semiquantitatively. Results: Cellular WT1 staining was 67% and vascular positivity was 73% in the EC group. In the EIN group the cellular staining was 100% and vascular staining was 85%. Although all EIN samples were positive for cellular WT1 staining, median cellular staining score was similar to that of EC (median, 2; range, 1-3; IQR, 1.75 vs 2, 0-3, 0.75, respectively). Increasing EC grade and stage were not associated with cellular or vascular WT1 staining score. Conclusion: WT1 is expressed in EIN. Answer: The malignant potential of endometrial polyps can indeed be assessed by incorporating the endometrial intraepithelial neoplasia (EIN) classification. Studies have shown that the presence of EIN within endometrial polyps is associated with a higher risk of concurrent endometrial cancer. For instance, one study found that EIN in endometrial polyps correlated with a 57% diagnosis of EIN or carcinoma in the uterus, whereas in the absence of EIN in endometrial polyps, only 1 of 9 cases showed EIN in the final pathologic specimen, and none with carcinoma, yielding a positive predictive value (PPV) of 14% and a negative predictive value (NPV) of 100% (PUBMED:25524459). This suggests that the diagnosis of EIN in endometrial polyps may be a better predictor than atypical endometrial hyperplasia (AEH) for endometrial involvement with malignant or pre-malignant neoplasms. Furthermore, the incidence of concurrent cancer is higher in postmenopausal or nulliparous women when EIN is detected in a polyp (PUBMED:33118172). Another study found that concurrent endometrial cancer is less frequent with EIN confined to a polyp (EIN-P) than with EIN confined to the endometrium (EIN-E), indicating that the location of EIN within the endometrial tissue can influence the risk of malignancy (PUBMED:36600505). Additionally, the expression of Wilms' tumor 1 (WT1) protein in endometrial adenocarcinoma (EC) and EIN suggests that WT1 is expressed in EIN, which could potentially be used as a marker for assessing malignant potential (PUBMED:27062662). Overall, these findings suggest that incorporating the EIN classification into the assessment of endometrial polyps can provide valuable information regarding their malignant potential, which can guide clinical management decisions. However, the safety of conservative hysteroscopic resection of endometrial polyps with AEH/EIN is questioned due to the associated risks (PUBMED:25524459).
Instruction: Minimally invasive autopsy: an alternative to conventional autopsy? Abstracts: abstract_id: PUBMED:19244053 Minimally invasive autopsy: an alternative to conventional autopsy? Purpose: To determine the diagnostic performance of minimally invasive autopsy (MIA) for detection of causes of death and to investigate the feasibility of MIA as an alternative to conventional autopsy (CA) in the clinical setting. Materials And Methods: The institutional review board approved the MIA procedure and study, and informed consent was obtained for all deceased patients from relatives. Thirty deceased patients (19 men, 11 women; age range, 46-79 years), for whom family permission for CA on medical grounds had already been obtained, underwent additional evaluation with MIA prior to CA. MIA consisted of whole-body 16-section computed tomography (CT) and 1.5-T magnetic resonance (MR) imaging, followed by ultrasonography-guided 12-gauge needle biopsy of heart, both lungs, liver, both kidneys, and spleen. Percentage agreement between MIA and CA on cause of death was evaluated. Sensitivity and corresponding 95% confidence intervals (CIs) of MIA for detection of overall (major plus minor) findings, with CA as the reference standard, were calculated. Specificity was calculated for overall findings. Sensitivity analysis was performed to explore the effect of the clustered nature of the data. Results: In 23 patients (77%), MIA and CA were in agreement on the cause of death. Sensitivity of MIA for detection of overall findings and detection of major findings was 93% (95% CI: 90%, 96%) and 94% (95% CI: 87%, 97%), respectively. Specificity was 99% (95% CI: 98%, 99%) for detection of overall findings. MIA failed to demonstrate acute myocardial infarction as the cause of death in four patients. Sensitivity analysis indicated a negligible correlation between observations within each patient. CT was superior to MR for detection of pneumothorax and calcifications. MR was superior to CT for detection of brain abnormalities and pulmonary embolus. With biopsy only, detection of disease in 55 organs was possible, which included 27 major findings. Conclusion: MIA is a feasible procedure with high diagnostic performance for detection of common causes of death such as pneumonia and sepsis; MIA failed to demonstrate cardiac diseases, such as acute myocardial infarction and endocarditis, as underlying cause of death. Supplemental Material: http://radiology.rsnajnls.org/cgi/content/full/250/3/897//DC1. abstract_id: PUBMED:36527838 All-Body-Cavity (ABC)-scopy-An approach for a feasible method of minimally invasive autopsy to allow for postmortem tissue sampling in cases where a conventional autopsy is denied. Objectives: The decreasing autopsy numbers in many western countries have been partially attributed to the invasiveness of the autopsy, which causes relatives to decline postmortem examination. This issue has been addressed by developing methods of minimally or non-invasive autopsy, which could be shown to increase acceptance for autopsies. The aim of this study is to compare the All-Body-Cavity-scopy (ABC-scopy) to conventional autopsies for diagnostic accuracy. Methods: The ABC-scopy is an endoscopic approach for minimally invasive autopsy involving laparoscopic and thoracoscopic evaluation of the accessible organs, followed by excision biopsies of relevant organs and conspicuous findings. The method was performed in 10 cases on deceased patients scheduled for autopsy, each followed by a conventional autopsy. Results: The results gathered from ABC-scopy through observation and histopathological evaluation provided an acceptable diagnostic accuracy in 9 out of 10 autopsies when compared to those of the conventional autopsy for diagnostic findings. Conclusions: The ABC-scopy is a feasible approach for minimally invasive autopsy that provides acceptable diagnostic value. Despite its minimally invasive nature, the procedure enables representative histology through providing large size excision biopsies from intraabdominal and thoracic organs, which is especially useful for examining disseminated diseases such as metastasized tumors. abstract_id: PUBMED:33915771 Minimally Invasive Autopsy Practice in COVID-19 Cases: Biosafety and Findings. Postmortem studies are crucial for providing insight into emergent diseases. However, a complete autopsy is frequently not feasible in highly transmissible diseases due to biohazard challenges. Minimally invasive autopsy (MIA) is a needle-based approach aimed at collecting samples of key organs without opening the body, which may be a valid alternative in these cases. We aimed to: (a) provide biosafety guidelines for conducting MIAs in COVID-19 cases, (b) compare the performance of MIA versus complete autopsy, and (c) evaluate the safety of the procedure. Between October and December 2020, MIAs were conducted in six deceased patients with PCR-confirmed COVID-19, in a basic autopsy room, with reinforced personal protective equipment. Samples from the lungs and key organs were successfully obtained in all cases. A complete autopsy was performed on the same body immediately after the MIA. The diagnoses of the MIA matched those of the complete autopsy. In four patients, COVID-19 was the main cause of death, being responsible for the different stages of diffuse alveolar damage. No COVID-19 infection was detected in the personnel performing the MIAs or complete autopsies. In conclusion, MIA might be a feasible, adequate and safe alternative for cause of death investigation in COVID-19 cases. abstract_id: PUBMED:36853068 Navigating uncertainties of death: Minimally Invasive Autopsy Technology in global health. Global health practitioners and policymakers have become increasingly vocal about the complex challenges of identifying and quantifying the causes of death of the world's poorest people. To address this cause-of-death uncertainty and to minimise longstanding sensitivities about full autopsies, the Bill and Melinda Gates Foundation have been one of the foremost advocates of minimally invasive autopsy technology (MIA). MIA involves using biopsy needles to collect samples from key organs and body fluids; as such, it is touted as potentially more acceptable and less invasive than a complete autopsy, which requires opening the cadaver. In addition, MIA is considered a good means of collecting accurate bodily samples and can provide the crucial information needed to address cause-of-death uncertainty. In this paper, we employ qualitative data to demonstrate that while MIA technology has been introduced as a solution to the enduring cause-of-death uncertainty, the development and deployment of technologies such as these always constitute interventions in complex social and moral worlds; in this respect, they are both the solutions to and the causes of new kinds of uncertainties. We deconstruct the ways in which those new dimensions of uncertainty operate at different levels in the global health context. abstract_id: PUBMED:31006817 Comparison of Minimally Invasive Tissue Sampling With Conventional Autopsy to Detect Pulmonary Pathology Among Respiratory Deaths in a Resource-Limited Setting. Objectives: We compared minimally invasive tissue sampling (MITS) with conventional autopsy (CA) in detection of respiratory pathology/pathogens among Kenyan children younger than 5 years who were hospitalized with respiratory disease and died during hospitalization. Methods: Pulmonary MITS guided by anatomic landmarks was followed by CA. Lung tissues were triaged for histology and molecular testing using TaqMan Array Cards (TACs). MITS and CA results were compared for adequacy and concordance. Results: Adequate pulmonary tissue was obtained by MITS from 54 (84%) of 64 respiratory deaths. Comparing MITS to CA, full histologic diagnostic concordance was present in 23 (36%) cases and partial concordance in 19 (30%), an overall 66% concordance rate. Pathogen detection using TACs had full concordance in 27 (42%) and partial concordance in 24 (38%) cases investigated, an overall 80% concordance rate. Conclusions: MITS is a viable alternative to CA in respiratory deaths in resource-limited settings, especially if combined with ancillary tests to optimize diagnostic accuracy. abstract_id: PUBMED:29302514 Searching cause of death through different autopsy methods: A new initiative. A lawful disposal of human dead body is only possible after establishment of proper and valid cause of death. If the cause is obscure, autopsy is the only mean of search. Inadequacy and unavailability of health care facility often makes this situation more complicated in developing countries where many deaths remain unexplained and proper mortality statistics is missing, especially for infant and children. Tissue sampling by needle autopsy or use of various imaging technique in virtopsy have been tried globally to find out an easier alternative. An exclusive and unique initiative, by limited autopsy through tissue biopsy and body fluid analysis, has been taken to meet this dire need in African and some of Asian developing countries, as worldwide accepted institutional data are even missing or conflicting at times. Traditional autopsy has changed little in last century, consisting of external examination and evisceration, dissection of organs with identification of macroscopic pathologies and injuries, followed by histopathology. As some population groups have religious objections to autopsy, demand for minimally invasive alternative has increased of late. But assessment of cause of death is most important for medico-legal, epidemiological and research purposes. Thus minimally invasive technique is of high importance in primary care settings too. In this article, we have made a journey through different autopsy methods, their relevance and applicability in modern day perspective considering scientific research articles, textbooks and interviews. abstract_id: PUBMED:30251930 Conventional Autopsy versus Minimally Invasive Autopsy with Postmortem MRI, CT, and CT-guided Biopsy: Comparison of Diagnostic Performance. Purpose To compare the diagnostic performance of minimally invasive autopsy with that of conventional autopsy. Materials and Methods For this prospective, single-center, cross-sectional study in an academic hospital, 295 of 2197 adult cadavers (mean age: 65 years [range, 18-99 years]; age range of male cadavers: 18-99 years; age range of female cadavers: 18-98 years) who died from 2012 through 2014 underwent conventional autopsy. Family consent for minimally invasive autopsy was obtained for 139 of the 295 cadavers; 99 of those 139 cadavers were included in this study. Those involved in minimally invasive autopsy and conventional autopsy were blinded to each other's findings. The minimally invasive autopsy procedure combined postmortem MRI, CT, and CT-guided biopsy of main organs and pathologic lesions. The primary outcome measure was performance of minimally invasive autopsy and conventional autopsy in establishing immediate cause of death, as compared with consensus cause of death. The secondary outcome measures were diagnostic yield of minimally invasive autopsy and conventional autopsy for all, major, and grouped major diagnoses; frequency of clinically unsuspected findings; and percentage of answered clinical questions. Results Cause of death determined with minimally invasive autopsy and conventional autopsy agreed in 91 of the 99 cadavers (92%). Agreement with consensus cause of death occurred in 96 of 99 cadavers (97%) with minimally invasive autopsy and in 94 of 99 cadavers (95%) with conventional autopsy (P = .73). All 288 grouped major diagnoses were related to consensus cause of death. Minimally invasive autopsy enabled diagnosis of 259 of them (90%) and conventional autopsy 224 (78%); 200 (69%) were found with both methods. At clinical examination, the cause of death was not suspected in 17 of the 99 cadavers (17%), and 124 of 288 grouped major diagnoses (43%) were not established. There were 219 additional clinical questions; 189 (86%) were answered with minimally invasive autopsy and 182 (83%) were answered with conventional autopsy (P = .35). Conclusion The performance of minimally invasive autopsy in the detection of cause of death was similar to that of conventional autopsy; however, minimally invasive autopsy has a higher yield of diagnoses. © RSNA, 2018 Online supplemental material is available for this article. See also the editorial by Krombach in this issue. abstract_id: PUBMED:38087877 Minimally invasive autopsy in the evaluation of fetal malformations and stillbirths: A feasibility study. Background: Minimally invasive autopsy (MIA) using post-mortem magnetic resonance imaging with ancillary investigations is reported as accurate as conventional autopsy. This study assesses MIA's feasibility and accuracy compared to conventional autopsy. Method: MIA and/or conventional autopsy were performed on malformed fetuses (14-20 weeks gestation) and stillbirths (&gt;20 weeks gestation), with/without malformation. Concordance in diagnostic accuracy (95% confidence interval [CI]) and agreement (Kappa coefficient [k]) were assessed in malformed cases where both MIA and autopsy were conducted. Results: We enrolled 200 cases, including 100 malformed fetuses (&lt;20 weeks) and 100 stillbirths (with/without malformations). Concordance of 97.3% was observed between MIA and autopsy in 156 malformed cases. The overall diagnostic accuracy of MIA was 96.04%. Conclusion: While conventional autopsy remains the gold standard, MIA is feasible in tertiary care settings. It can be considered a potential alternative for post-mortem assessment, particularly in settings with limited facility of conventional autopsy and parental refusal. abstract_id: PUBMED:25386838 Initial experiences of a minimally invasive autopsy service. A report of the techniques and observations in the first 11 cases. Our goal was to set up a clinical service for minimally invasive postmortem examination for the fetal and neonatal age group. The minimally invasive service consisted of imaging, external review, placental examination, and ancillary tests that form part of the formal autopsy process. Eleven patients had imaging and relevant conditions at death successfully determined with the minimally invasive service. We conclude that a minimally invasive autopsy service for fetal and neonatal cases is possible and acceptable to patients and professionals. abstract_id: PUBMED:33242533 Minimally invasive puncture versus conventional craniotomy for spontaneous supratentorial hemorrhage: A meta-analysis. Background: Minimally invasive puncture and conventional craniotomy are both utilized in the treatment of spontaneous supratentorial hemorrhage. The purpose of this study is to review evidence that compares the safety and effectiveness of these two techniques. Methods: We searched EMBASE, Cochrane Library, Web of Science, and PubMed for studies published between 2000 and 2019 that compared the minimally invasive puncture procedure with the conventional craniotomy for the treatment of spontaneous supratentorial hemorrhage. Results: Seven trials (2 randomized control trials and 5 observational studies) with a total of 970 patients were included. The odds ratio indicated a statistically significant difference between the minimally invasive puncture and conventional craniotomy in terms of good functional outcome (OR 2.36, 90% CI 1.24-4.49). The minimally invasive puncture procedure was associated with lower mortality rates (OR 0.61, 90% CI 0.44-0.85) and rebleeding rates (OR 0.48, 95%CI 0.24-0.99; P=0.003). Conclusions: The use of the minimally invasive puncture for the management of spontaneous supratentorial hemorrhage was associated with better functional outcome results, a lower mortality rate, and decreased rebleeding rates. However, because insufficient data has been published thus far, we need more robust evidence to provide a better guide for future management. Answer: Minimally invasive autopsy (MIA) has been investigated as a potential alternative to conventional autopsy (CA) in various studies. The diagnostic performance of MIA has been found to be high for the detection of common causes of death such as pneumonia and sepsis, with a sensitivity of 93% for overall findings and 94% for major findings. However, MIA has shown limitations in demonstrating cardiac diseases, such as acute myocardial infarction and endocarditis, as the underlying cause of death (PUBMED:19244053). The All-Body-Cavity-scopy (ABC-scopy), an endoscopic approach for MIA, has provided acceptable diagnostic accuracy in 9 out of 10 autopsies when compared to CA, particularly useful for examining disseminated diseases such as metastasized tumors (PUBMED:36527838). In the context of COVID-19, MIA has been a feasible, adequate, and safe alternative for cause of death investigation, with biosafety guidelines ensuring the safety of the procedure (PUBMED:33915771). MIA is also seen as a solution to cause-of-death uncertainty in global health, potentially more acceptable and less invasive than a complete autopsy (PUBMED:36853068). Comparisons between MIA and CA in the detection of pulmonary pathology among respiratory deaths in a resource-limited setting have shown MIA to be a viable alternative, especially when combined with ancillary tests (PUBMED:31006817). The demand for less invasive autopsy methods has increased due to religious objections and the importance of establishing the cause of death for medico-legal, epidemiological, and research purposes (PUBMED:29302514). A study comparing the diagnostic performance of MIA with CA found that MIA had a similar performance in detecting the cause of death and a higher yield of diagnoses (PUBMED:30251930). MIA has also been found feasible in the evaluation of fetal malformations and stillbirths, with a high concordance rate with CA (PUBMED:38087877). Initial experiences with a minimally invasive autopsy service for the fetal and neonatal age group have been positive, with conditions at death successfully determined (PUBMED:25386838). In conclusion, MIA has emerged as a feasible alternative to CA with high diagnostic performance in many cases. It is particularly advantageous in situations where CA is not possible due to biohazard challenges, religious objections, or when consent for CA is denied. However, it may not be as effective in detecting certain conditions, such as acute myocardial infarction, and its applicability may vary depending on the specific circumstances and resources available.
Instruction: Are terminal decline and its potential indicators detectable in population studies of the oldest old? Abstracts: abstract_id: PUBMED:21480375 Are terminal decline and its potential indicators detectable in population studies of the oldest old? Objectives: To explore whether it is possible to detect decline in global scores of cognitive function in the proximity of death whilst simultaneously investigating potential risk profiles. Methods: Using the Mini Mental State Examination in a population study of the oldest old in which 99% of participants have died, a linear and quadratic time-to-death repeated measures random effects models were used to detect decline and potential factors which might indicate individual variation. Results: Decline and acceleration of this decline were detectable in the period before death. Some between person variation was detected in this pattern, which included differences in cognitive performance by age at death (-0.2 (SE = 0.02)), sex (-1.2 (SE = 0.2)), initial cognitive impairment (-7.5 (SE = 0.2)) and mobility (-0.6 (SE = 0.2)), in rate of decline by age at death (-0.04 (SE = 0.005)), sex (-0.1 (SE = 0.06)), initial cognitive impairment (-0.3 (SE = 0.07)) and mobility (-0.1 (SE = 0.05)) and differences in change in rate of decline by sex (-0.008 (SE = 0.004)), initial cognitive impairment (-0.02 (SE = 0.04)) and mobility (-0.01 (SE = 0.003)). Conclusion: Using an extension of existing methods for exploring terminal decline, the phenomenon of decline in global cognition measures in the proximity of death was clearly detected as well as potential variables which could influence that pattern. Further work is required to explore whether similar methods can be used to detect the onset of the acceleration of this decline in each individual together with the potential to identify individual level factors that can allow clinicians to distinguish between the normal and preterminal phases of change in extreme old age. abstract_id: PUBMED:23908850 Cognitive decline and dementia in the oldest-old. The oldest-old are the fastest growing segment of the Western population. Over half of the oldest-old will have dementia, but the etiology is yet unknown. Age is the only risk factor consistently associated with dementia in the oldest-old. Many of the risk and protective factors for dementia in the young elderly, such as ApoE genotype, physical activity, and healthy lifestyle, are not relevant for the oldest-old. Neuropathology is abundant in the oldest-old brains, but specific pathologies of Alzheimer's disease (AD) or vascular dementia are not necessarily correlated with cognition, as in younger persons. It has been suggested that accumulation of both AD-like and vascular pathologies, loss of synaptic proteins, and neuronal loss contribute to the cognitive decline observed in the oldest-old. Several characteristics of the oldest-old may confound the diagnosis of dementia in this age group. A gradual age-related cognitive decline, particularly in executive function and mental speed, is evident even in non-demented oldest-old. Hearing and vision losses, which are also prevalent in the oldest-old and found in some cases to precede/predict cognitive decline, may mechanically interfere in neuropsychological evaluations. Difficulties in carrying out everyday activities, observed in the majority of the oldest-old, may be the result of motor or physical dysfunction and of neurodegenerative processes. The oldest-old appear to be a select population, who escapes major illnesses or delays their onset and duration toward the end of life. Dementia in the oldest-old may be manifested when a substantial amount of pathology is accumulated, or with a composition of a variety of pathologies. Investigating the clinical and pathological features of dementia in the oldest-old is of great importance in order to develop therapeutic strategies and to provide the most elderly of our population with good quality of life. abstract_id: PUBMED:35497066 Malnutrition is positively associated with cognitive decline in centenarians and oldest-old adults: A cross-sectional study. Background: Cognitive decline is a growing public health concern. However, presently, only a few large-scale studies are available on the prevalence of cognitive decline worldwide, and the relationship between nutrition and cognitive decline remains unclear and requires further investigation, especially among Chinese centenarians and oldest-old adults. This study aimed to assess the prevalence of cognitive decline among Chinese centenarians and oldest-old adults, its associated factors, and explore a possible connection with nutrition, to provide new directions for the prevention of cognitive decline in Chinese centenarians and oldest-old adults. Methods: Based on the China Hainan Centenarian Cohort Study (CHCCS), a household survey was conducted among all the centenarians and oldest-old adults residing in 16 cities and counties of Hainan province from June 2014 to June 2016. This study included 946 centenarians and oldest-old adults (412 and 534, respectively). Cognitive function was measured using the mini-mental state examination (MMSE). Findings: The total prevalence of cognitive decline was 76·6% (725 participants). Centenarians had a significantly higher prevalence of cognitive decline compared to oldest-old adults [359 centenarians (87·1%) vs. 366 oldest-old adults (68·5%)]. Centenarians and oldest-old adults with cognitive decline had significantly lower prognostic nutritional index (PNI) and mini nutrition assessment-short form (MNA-SF) than those without cognitive decline (P &lt; 0·05). Multivariate logistic regression analyses showed that participants with higher PNI and MNA-SF were less likely to have cognitive decline. Multivariate linear regression analyses showed that PNI and MNA-SF were positively associated with MMSE (P &lt; 0·05). Interpretation: Malnutrition was positively associated with cognitive decline among Chinese centenarians and oldest-old adults. It is therefore important for clinicians and community health workers to pay attention to malnutrition in these populations and provide supplemental nutrients to prevent cognitive decline. Funding: This work was supported by grants from the National Natural Science Foundation of China (81900357, 81903392, 81941021, 81901252, 82001476, 81802804, 81801251), the Military Medical Science and Technology Youth Incubation Program (20QNPY110, 19QNP060), the Excellent Youth Incubation Program of Chinese People's Liberation Army General Hospital (2020-YQPY-007), the Military Medicine Youth Program of Chinese People's Liberation Army General Hospital (QNF19069, QNF19068), the National Key R&amp;D Program of China (2018YFC2000400), the National S&amp;D Resource Sharing Service Platform Project of China (YCZYPT[2018]07), the Innovation Platform for Academinicians of Hainan Province, the Hainan Major Scientific and Technological Cooperation Project (2016KJHZ0039), the China Postdoctoral Science Foundation funded project (2019M650359, 2020M682816, 2021T140298), the Medical Big Data R&amp;D Project of Chinese People's Liberation Army General Hospital (MBD2018030), the National Geriatric Disease Clinical Medicine Research centre Project (NCRCG-PLAGH-2017-014), the Central Health Care Scientific Research Project (W2017BJ12), the Hainan Medical and Health Research Project (16A200057), the Sanya Medical and Health Science and Technology Innovation Project (2016YW21, 2017YW22, 2018YW11), and the Clinical Scientific Research Supporting Fund of Chinese People's Liberation Army General Hospital (2017FC-CXYY-3009). abstract_id: PUBMED:28740370 Kidney function and cognitive decline in an oldest-old Chinese population. Background: Early-stage chronic kidney disease has been suggested to be correlated with cognitive decline, but the association has rarely been explored in the oldest old. Subjects And Methods: This prospective study included 284 Chinese participants aged 80 years or older with serum creatinine levels &lt;150 µmol/L. The median follow-up time was 3.3 years, and 247 (87.0%) participants provided valid data at their last visit. Kidney function was evaluated by measuring the estimated glomerular filtration rate (eGFR) at baseline, and cognitive function was evaluated using the Mini-Mental State Examination (MMSE) at both baseline and annual visits. A reliable decrease in the MMSE score over the follow-up period was observed based on a Reliable Change Index of 1.645 (equivalent to a 90% confidence interval [CI]), which was used to define cognitive decline. Poisson regression models were built to analyze the association between baseline kidney function and cognitive decline. Results: A total of 18 (7.3%) cases of incident cognitive decline were observed during the follow-up period. After adjusting for potential confounders, the relative risk of developing cognitive decline was 4.03 (95% CI 1.09-13.81) among participants with an eGFR of 30-59 mL/min/1.73 m2 compared to participants with an eGFR of ≥60 mL/min/1.73 m2. Conclusion: Early-stage chronic kidney disease was correlated with cognitive decline in an oldest-old Chinese population. abstract_id: PUBMED:35342536 Associations between age-related hearing loss, cognitive decline, and depression in Chinese centenarians and oldest-old adults. Objective: Age-related hearing loss (ARHL) and age-related cognitive decline (ARCD) are leading causes of disability in older adults. ARHL may increase ARCD risk and cause depression development. This study aimed to explore the associations of ARHL with ARCD and depression in Chinese centenarians and oldest-old adults. Methods: A household survey was conducted on 859 centenarians and oldest-old adults residing in 16 cities and counties in China. Basic information was collected using interview questionnaires, physical examinations, and blood tests conducted by systematically trained local doctors and nurses. Results: In total, 501 (58.3%), 649 (75.6%), and 211 (24.6%) participants had ARHL, ARCD, and depression, respectively. The prevalence of ARCD and depression was significantly higher in participants with ARHL than in those without ARHL (all p &lt; 0.05). Participants with ARHL had significantly lower Mini-Mental State Examination (MMSE) scores indicative of more cognitive decline, and significantly higher 15-item version of the Geriatric Depression Scale (GDS-15) scores suggesting greater levels of depression, than those without ARHL (all p &lt; 0.05). Multivariate logistic regression analyses (Stepwise) indicated that ARHL was significantly and positively associated with ARCD [odds ratio (OR): 1.594, 95% confidence interval (CI): 1.132, 2.245] and depression (OR: 1.623, 95% CI: 1.139, 2.314; all p &lt; 0.05). Multivariate linear regression analyses (Stepwise) indicated that ARHL was significantly and inversely associated with MMSE scores (B: -2.520, 95% CI: -3.356, -1.684) and significantly and positively associated with GDS-15 scores (B: 0.713, 95% CI: 0.328, 1.098; all p &lt; 0.05). Conclusion: ARHL is independently associated with ARCD and depression in Chinese centenarians and oldest-old adults. abstract_id: PUBMED:33194977 Psychometric Properties of the EQ-5D for the Assessment of Health-Related Quality of Life in the Population of Middle-Old and Oldest-Old Persons: Study Protocol for a Systematic Review. Introduction: Health care interventions for middle-old and oldest-old individuals (75 years or older) are often economically evaluated using the EuroQol questionnaire (EQ-5D) to measure health-related quality of life. However, the psychometric performance of the EQ-5D in this population has been questioned, as it probably does not adequately capture relevant aspects of quality of life in the older population. Because the results of economic evaluations using the EQ-5D often guide decision-makers, it is important to know whether the EQ-5D has satisfactory psychometric properties in the middle-old and oldest-old population. Therefore, studies assessing the psychometric properties of the EQ-5D in this population should be synthesized by a systematic review. Methods and Analysis: A systematic review of studies providing empirical evidence of reliability, validity, and/or responsiveness of the EQ-5D in a sample with a mean age ≥75 years will be conducted. The databases PubMed, Web of Science, and EconLit will be searched. In addition, reference lists of included studies will be hand-searched. Two independent reviewers will select studies and assess their risk of bias with the COnsensus-based Standards for the selection of health Measurement Instruments (COSMIN) Risk of Bias checklist. Relevant data will be extracted by one reviewer and cross-checked by a second reviewer. Potential disagreements in any phase will be resolved through discussion with a third person. The guidelines for systematic reviews of measurement properties proposed by the COSMIN group, including criteria of good measurement properties, will guide the synthesis and interpretation of the results. Discussion: The review's results could facilitate the making of recommendations for the use of the EQ-5D in a population of middle-old and oldest-old people and thereby being of interest for decision-makers or for researchers designing new intervention studies for older people. Heterogeneity of individual studies regarding the population under study could limit the possibility of making a synthesized statement on the appropriateness of the EQ-5D for the middle-old to oldest-old population. abstract_id: PUBMED:33593441 Association between medical resources and the proportion of oldest-old in the Chinese population. The potential association between medical resources and the proportion of oldest-old (90 years of age and above) in the Chinese population was examined, and we found that the higher proportion of oldest-old was associated with the higher number of beds in hospitals and health centers. abstract_id: PUBMED:32252551 Impact of occupational complexity on cognitive decline in the oldest-old. Objectives: The theory of "Cognitive Reserve" assumes that premorbid factors such as high educational and occupational attainment may enable a better way of coping with brain damage. It has been suggested that more stimulating lifestyles, including more complex work environments, may provide a buffer against cognitive decline in later life. This study aimed to investigate the association between occupational history and cognitive decline in a large cohort of Italian oldest-old. Methods: 392 individuals (266 women/126 men, mean age 93 ± 3 years) enrolled in the "Mugello study" provided information about their work history. Jobs were classified in nine categories, according to the level of expertise required to perform them, as suggested by the Italian National Institute for Statistics (ISTAT). In addition, socio-demographic characteristics, comorbidities, level of independence, depression, and cognitive status were assessed. The presence of dementia was established based on cognitive status and independence in performing four selected instrumental activities of daily living (ability to manage telephone, transportation, medications, and budget). Results: Neither work complexity (p = 0.995) nor work duration (p = 0.701) showed a significant effect on the likelihood of presenting a lower cognitive profile or developing dementia (p = 0.385 and p = 0.096, for work complexity and work duration, respectively). Conclusion: In the observed sample of oldest-old individuals, cognitive decline did not seem to be influenced by cognitive reserve as assessed through the evaluation of cognitive status and level of independence. It is conceivable that in this population, the decline of the brain reserve has a preponderant role in the definition of the cognitive profile. abstract_id: PUBMED:22666698 An epidemiological transition of health conditions, and health status of the old-old-to-oldest-old in Jamaica: A comparative analysis. Background: There is a paucity of information on the old-old-to-oldest-old in Jamaica. In spite of studies on this cohort, there has never been an examination of the epidemiological transition in health condition affect this age cohort. Objectives: The aims of the current study are 1) provide an epidemiological profile of health conditions affecting Jamaicans 75+ years, 2) examine whether there is an epidemiological transition in health conditions affecting old-old-to-oldest-old Jamaicans, 3) evaluate particular demographic characteristics and health conditions of this cohort, 4) assess whether current self-reported illness is strongly correlated with current health status, 5) mean age of those with particular health conditions, 6) model health status and 7) provide valuable information upon which health practitioners and public health specialists can make more informed decisions. Method: The current study utilized a sub-sample of approximately 4% from each national cross-sectional survey that was conducted in 2002 and 2007. The sub-sample was 282 people ages 75+ years from the 6,783 respondents surveyed for 2007 and 1,069 people ages 75+ years from the 25,018 respondents surveyed for 2002. Results: In 2007, 44% of old-to-oldest-old Jamaicans were diagnosed with hypertension, which represents a 5% decline over 2002. The number of cases of diabetes mellitus increased over 570% in the studied period. The poor indicated having more health conditions than the poorest 20% of the sample. Conclusion: The implications of the shift in health conditions will create a health disparity between 75+ year adults and the rest of the population. abstract_id: PUBMED:38178031 Associations between resting heart rate and cognitive decline in Chinese oldest old individuals: a longitudinal cohort study. Background: The trajectories of cognitive function in the oldest old individuals is unclear, and the relationship between resting heart rate (RHR) and cognitive decline is controversial. Methods: 3300 participants who had cognitive function repeatedly measured 4 ~ 8 times were included, and latent class growth mixed models were used to identified the cognitive function trajectories. Cognitive decline was defined by the trajectory shapes, considering level and slope. After excluding individuals with sinus rhythm abnormal, 3109 subjects were remained and were divided into five groups by their RHR. Logistic regression models were used to estimate the relationship between RHR and cognitive decline. Results: Three distinct cognitive function trajectory groups were identified: high-stable (n = 1226), medium-decreasing (n = 1526), and rapid-decreasing (n = 357). Individuals of medium/rapid-decreasing group were defined as cognitive decline. Adjusting for covariates, the odds ratios (95% confidence intervals) of RHR sub-groups were 1.19 (0.69, 2.05), 1.27 (1.03, 1.56), 1.30 (1.01, 1.67) and 1.62 (1.07, 2.47) for those RHR &lt; 60 bpm, 70 ~ 79 bpm, 80 ~ 89 bpm and &gt; 90 bpm respectively, compared with those RHR 60 ~ 69 bpm. The interaction effect between RHR and physical activity (PA) on cognitive decline was found, and stratification analysis was presented that higher RHR would only show risk effects on cognitive decline in those with physical inactivity (P &lt; 0.05 for all). Conclusions: Our study demonstrates RHR more than 70 bpm present significant risk effect on cognitive decline, and this relationship is modified by PA. Elder population with physical inactivity and higher RHR should be paid more attention to prevent cognitive decline. Answer: Yes, terminal decline and its potential indicators are detectable in population studies of the oldest old. A study using the Mini Mental State Examination in a population study of the oldest old, where 99% of participants had died, found that decline and acceleration of this decline were detectable in the period before death. The study also identified potential variables that could influence the pattern of decline, including differences in cognitive performance by age at death, sex, initial cognitive impairment, and mobility. These factors also influenced the rate of decline and the change in the rate of decline (PUBMED:21480375). Furthermore, cognitive decline and dementia are prevalent in the oldest-old, with over half of this population segment having dementia. The etiology of dementia in the oldest-old is not fully understood, and traditional risk and protective factors for dementia in younger elderly populations, such as ApoE genotype, physical activity, and healthy lifestyle, may not be relevant for the oldest-old. The cognitive decline observed in the oldest-old may be due to the accumulation of both Alzheimer's disease-like and vascular pathologies, loss of synaptic proteins, and neuronal loss (PUBMED:23908850). Additional studies have found associations between various factors and cognitive decline in the oldest-old. For instance, malnutrition is positively associated with cognitive decline in centenarians and oldest-old adults (PUBMED:35497066), early-stage chronic kidney disease is correlated with cognitive decline in an oldest-old Chinese population (PUBMED:28740370), and age-related hearing loss is independently associated with age-related cognitive decline and depression in Chinese centenarians and oldest-old adults (PUBMED:35342536). However, one study found that in a sample of Italian oldest-old individuals, cognitive decline did not seem to be influenced by cognitive reserve as assessed through the evaluation of cognitive status and level of independence (PUBMED:32252551). Additionally, resting heart rate has been shown to be associated with cognitive decline, particularly in those with physical inactivity (PUBMED:38178031). Overall, these findings suggest that terminal decline and its potential indicators are detectable in population studies of the oldest old, and various factors may contribute to or be associated with this decline.
Instruction: Is guided imagery effective in reducing pain and anxiety in the postoperative total joint arthroplasty patient? Abstracts: abstract_id: PUBMED:21099647 Is guided imagery effective in reducing pain and anxiety in the postoperative total joint arthroplasty patient? Purpose: To evaluate the effect of guided imagery as an intervention to reduce pain and anxiety in patients undergoing a total joint arthroplasty. Sample: A total of 121 patients scheduled for elective total joint arthroplasty. Methods: The design for this study was a 2-group quasi-experimental design. The intervention group listened to a guided imagery CD containing a message to develop a sense of relaxation and harmony. The intervention and control groups were compared on self-reported pain and anxiety levels postoperatively on Days 1, 2, and 3. Results: There was no significant difference in pain and anxiety levels between the groups. However, the intervention group had lower levels of anxiety and pain at all time points. Both groups followed a similar anxiety and pain pattern with the highest reported levels at Day 2. Conclusions: Conduct further research of guided imagery as an intervention for reducing pain and anxiety utilizing randomized controlled trials with a diverse sample of patients. abstract_id: PUBMED:26222093 Effect of Structured Touch and Guided Imagery for Pain and Anxiety in Elective Joint Replacement Patients--A Randomized Controlled Trial: M-TIJRP. Context: Postoperative management of pain after total joint arthroplasty remains a challenge despite advancements in analgesics. Evidence shows that complementary modalities with mind-body and tactile-based approaches are valid and effective adjuncts to reduce pain and anxiety postoperatively. Objective: To investigate the effectiveness of the "M" Technique (M), a registered method of structured touch using a set sequence and number of strokes, and a consistent level of pressure on hands and feet, compared with guided imagery and usual care, for the reduction of pain and anxiety in patients undergoing elective total knee or hip replacement surgery. Methods: Randomized controlled trial: M-TIJRP (MiTechnique and guided Imagery in Joint Replacement Patients [Mighty Junior P]). At a community hospital, 225 male and female patients, aged 38 to 90 years, undergoing elective total hip or knee replacement were randomly assigned to 1 of 3 groups (75 patients in each): M, guided imagery, or usual care. They were blinded to their assignment until the intervention. Main Outcome Measures: Reduction of pain and anxiety postoperatively. Secondary outcomes measured use of pain medication and patient satisfaction. Results: This study yielded positive findings for the management of pain and anxiety in patients undergoing elective joint replacement using M and guided imagery for 18 to 20 minutes compared with usual care. M showed the largest predicted decreases in both pain and anxiety between groups. There was no significant difference in narcotic pain medication use between groups. Patient satisfaction survey ratings were highest for M, followed by guided imagery. Conclusion: The benefit of M may be because of the specifically structured sequence of touch by competent caring, trained providers. abstract_id: PUBMED:37576284 The association of depression and anxiety with postoperative opioid use following total joint arthroplasty. Background: The devastating opioid epidemic in the United States has been exacerbated by health care practices as well as underlying individual factors. Total joint arthroplasty (TJA) is one of the most common surgical procedures performed annually and patients frequently require opioids for pain control. Patient anxiety and depression has been shown to be associated with increased pain and poorer outcomes after TJA. Our study sought to determine if there was an association between depression/anxiety and postoperative opioid use following TJA. Methods: In this retrospective cohort study, postoperative outcomes after TJA were compared among three cohorts of patients: 1) no depression; 2) mild depression; or 3) moderate or severe depression at our institution from 2017 to 2019. Our primary outcome was persistent opioid use ≥3 months after surgery. Secondary outcomes included postoperative day 1 opioid consumption and hospital length of stay (LOS). Multivariable regression modeling was performed to control for various potential confounders. Results: Of the 542 total patients that met inclusion criteria for this study, 53 (9.8%) had mild depression and 67 (12.4%) had moderate or severe depression. Persistent opioid use ≥3 months after surgery was found in 132 (24.3%) patients. Mild depression was associated with increased odds of persistent opioid use (odds ratio 4.11, 95% confidence interval 1.65-10.18, P = 0.002). Depression was not associated with immediate postoperative opioid use or hospital LOS. Conclusion: Mild depression was associated with persistent opioid use after surgery. Future studies should investigate if better management of this comorbidity could improve outcomes in patients undergoing joint arthroplasty. abstract_id: PUBMED:15554471 The use of guided imagery to manage pain in an elderly orthopaedic population. Background: The management of postoperative pain in elderly orthopaedic patients is critical for advancing patient outcomes and improving the use of healthcare resources. Adequate pain control without adverse side effects, such as confusion and sedation, is crucial to promote comfort and participation in rehabilitation therapies among all patients but particularly among elderly joint replacement patients. Without adequate pain control, physical therapy is delayed and the risk of complications increases. One area of investigation that holds promise for improved treatment outcomes involves the use of complementary therapies, such as guided imagery. Purpose: The purpose of this pilot study was to test the effects of a guided imagery intervention in the older adult patient who has undergone joint replacement surgery. Sample And Methods: This pilot study used a two-group experimental repeated measures design. A sample of 13 patients, age 55 years and older, were recruited. The control group received usual care and a music audio tape. The experimental group received usual care and a guided imagery audio tape intervention. Findings And Discussion: Trends in this pilot study demonstrated positive outcomes for pain relief, decreased anxiety, and decreased length of stay. Complementary therapy holds the promise of increasing positive outcomes. Further research is needed to validate these findings with a larger postoperative sample and in other populations as well. Clinical Implications: There is a critical need to incorporate the use of guided imagery and other complementary therapies into all nursing curricula. Nurses must develop expertise and be ready and able to act as patient educators and advocates in the use of these interventions in programs of care and institutional policy. abstract_id: PUBMED:31492454 Changes in Patient Satisfaction Following Total Joint Arthroplasty. Background: The primary aim is to identify the degree to which patient satisfaction with the outcome of total hip arthroplasty (THA) or total knee arthroplasty (TKA) changes between 1 and 3 years from the procedure. The secondary aim is to identify variables associated with satisfaction. Methods: Data were sourced from 2 prospective international, multicenter studies (919 THA and 450 TKA patients). Satisfaction was assessed by a 10-point numerical rating scale, at 1- and 3-year follow-up. Linear mixed-effects models were used to assess factors associated with satisfaction. Results: For the THA cohort, higher preoperative joint space width (odds ratio [OR] = 0.28; P = .004), pain from other joints (OR = 0.26; P = .033), and lower preoperative health state (OR = -0.02; P &lt; .001) were associated with consistently lower levels of satisfaction. The model also showed that patients with preoperative anxiety/depression improved in satisfaction between 1 and 3 years (OR = -0.26; P = .031). For the TKA cohort, anterior (vs neutral or posterior) tibial component slope (OR = 0.90; P = .008), greater femoral component valgus angle (OR = 0.05; P = .012), less severe osteoarthritis (OR = -0.10; P &lt; .001), and lower preoperative health state (OR = -0.02; P = .003) were associated with lower levels of satisfaction across the study period. In addition, patients with anterior tibial component slope improved in satisfaction level over time (OR = -0.33; P = .022). Conclusion: Changes in satisfaction following THA and TKA are rare between 1- and 3-year follow-up. The findings of this study can be used to guide patient counseling preoperatively and to determine intervals of routine follow-up postoperatively. abstract_id: PUBMED:8897217 The effects of hypnosis/guided imagery on the postoperative course of children. Hypnosis, guided imagery, and relaxation have been shown to improve the postoperative course of adult surgical patients. Children have successfully used hypnosis/guided imagery to significantly reduce the pain associated with invasive procedures and to improve selected medical conditions. The purpose of this study was to examine the effect of hypnosis/guided imagery on the postoperative course of pediatric surgical patients. Fifty-two children (matched for sex, age, and diagnosis) were randomly assigned to an experimental or control group. The experimental group was taught guided imagery by the investigator. Practice of the imagery technique included suggestions for a favorable postoperative course. Significantly lower postoperative pain ratings and shorter hospital stays occurred for children in the experimental group. State anxiety was decreased for the guided imagery group and increased postoperatively for the control group. This study demonstrates the positive effects of hypnosis/guided imagery for the pediatric surgical patient. abstract_id: PUBMED:12134377 Guided imagery in cardiac surgery. Clinical research has demonstrated that guided imagery, a simple form of relaxation, can reduce preoperative anxiety and postoperative pain among patients undergoing surgical procedures. In 1998, the cardiac surgery team implemented a guided imagery program to compare cardiac surgical outcomes between two groups of patients: with and without guided imagery. Data from the hospital financial cost/accounting database and patient satisfaction data were collected and matched to the two groups of patients. A questionnaire was developed to assess the benefits of the guided imagery program to those who elected to participate in it. Patients who completed the guided imagery program had a shorter average length of stay, a decrease in average direct pharmacy costs, and a decrease in average direct pain medication costs while maintaining high overall patient satisfaction with the care and treatment provided. Guided imagery is now considered a complementary means to reduce anxiety, pain, and length of stay among our cardiac surgery patients. abstract_id: PUBMED:33965282 Quantifying Patient Engagement in Total Joint Arthroplasty Using Digital Application-Based Technology. Background: Digital patient engagement has been suggested as a mean to increase patient activation and patient satisfaction after total joint arthroplasty. The purpose of this study was to assess patient engagement with application-based educational tools and to explore what content was most useful to patients in the perioperative period surrounding total hip arthroplasty (THA) and total knee arthroplasty (TKA), respectively. Methods: Patients undergoing THA and TKA between October 2017 and January 2020 were enrolled to use an application-based digital technology. The App provides comprehensive patient education using a series of modules delivered at set intervals preoperatively and postoperatively. Patient engagement was defined as patients viewing at least one time the modules that were sent, or marking them as completed. Patient satisfaction was assessed using an in-application survey. Results: Complete data were available on 207 patients of which 95 (46%) underwent THA and 112 (54%) underwent TKA. The average age was 60 years. 54% with patients invited to the program completed registration. An average compliance rate of 48% (41 modules engaged out of 83) was observed. Of all modules completed, the top three most popular categories included physical therapy/exercise videos, health literacy, and anxiety/stress/pain management. The least viewed category was nutrition planning and education. Conclusion: When presented educational material related to THA and TKA, patients had a high rate of compliance. Digital technology platforms provide a scalable, meaningful approach to engaging patients throughout the continuum of joint replacement care and may serve as a cost-effective adjunct to traditional methods. abstract_id: PUBMED:35986326 Sleep disturbance, dyspnea, and anxiety following total joint arthroplasty: an observational study. Background: Patient-Reported Outcomes Measurement Information System (PROMIS) domains for sleep disturbance, anxiety, and dyspnea have been under-reported for total joint arthroplasty (TJA). The aims of this study were to report postoperative differences for these domains based on TJA location and chronic pain state. We also investigated whether these domains were associated with physical function and pain interference outcomes. Methods: This was a retrospective, observational study of patients who underwent hip, knee, or shoulder TJA (primary and revision surgeries) at a single academic tertiary referral center. A subset of these patients completed an email-based survey for chronic pain grade (Chronic Pain Grade Scale-Revised) and sleep disturbance, anxiety, dyspnea, physical function, and pain interference (PROMIS short forms). Pre-operative and operative data were extracted from the electronic health record. Data analysis investigated PROMIS domains for differences in TJA location and chronic pain grade. Hierarchical linear regression determined associations of these domains with physical function and pain interference. Results: A total of 2638 individuals provided informed consent and completed the email survey. In the ANOVA models for sleep disturbance, anxiety, and dyspnea, there was no location by chronic pain grade interaction (p &gt; 0.05) and no difference based on TJA location (p &gt; 0.05). There were differences for chronic pain grade (p &lt; 0.01). The poorest postoperative outcome score for each domain was associated with high impact chronic pain. Furthermore, sleep disturbance and dyspnea had the strongest associations with physical function and pain interference (p &lt; 0.01). Conclusions: Sleep disturbance, anxiety, and dyspnea did not vary based on TJA location, but were associated with postoperative chronic pain grade. Sleep disturbance and dyspnea were strongly associated with commonly reported outcomes of physical function and pain interference. These findings provide guidance for those interested in expanding TJA outcome assessment to include sleep disturbance, anxiety, and/or dyspnea. abstract_id: PUBMED:35845450 The Impact of Psychological Health on Patient Recovery After Arthroplasty. Purpose: The purpose of this study was to determine the relationship between psychological health and postoperative recovery and satisfaction in patients undergoing total joint arthroplasty (TJA). Methods: We prospectively enrolled patients undergoing TJA from July 2019 to December 2020. A psychological evaluation was conducted according to the Hospital Anxiety and Depression Scale (HADS). Based on the preoperative HADS scores, we grouped the patients into two groups: the symptomatic group and the asymptomatic group. Data on the Harris Hip Score (HHS), Knee Society Knee Scoring System (KSS), Forgotten Joint Score-12 (FJS-12), Short Form-12 (SF-12), and Numeric Rating Scale (NRS) for pain in these two groups were collected preoperatively and postoperatively. Then, these data were analyzed by Statistical Package for Social Sciences (SPSS) version 19. Results: The final cohort consisted of 80 patients. Patients undergoing TJA had significantly decreased HADS and NRS scores and improved HHS, KSS, SF-12, and FJS-12 scores (all p &lt; 0.001). Compared with the symptomatic group, the asymptomatic group showed better postoperative recovery (p &lt; 0.05), especially after total knee arthroplasty (TKA) (p &lt; 0.05). Good postoperative recovery positively impacted the patients' postoperative psychological state. Conclusion: Finally, the psychological state can affect recovery after TJA, and successful TJA can help improve patients' psychological states, especially after TKA. Answer: The effectiveness of guided imagery in reducing pain and anxiety in postoperative total joint arthroplasty patients has been explored in several studies, with varying results. One study with a quasi-experimental design involving 121 patients found no significant difference in pain and anxiety levels between the intervention group, which listened to a guided imagery CD, and the control group. However, the intervention group did report lower levels of anxiety and pain at all time points, suggesting a potential benefit that warrants further research through randomized controlled trials with a diverse sample of patients (PUBMED:21099647). Another randomized controlled trial, the M-TIJRP study, compared the "M" Technique (structured touch), guided imagery, and usual care among 225 patients undergoing elective total knee or hip replacement. This study found positive outcomes for the management of pain and anxiety using both M and guided imagery compared to usual care, with M showing the largest predicted decreases in both pain and anxiety. Patient satisfaction was also highest for M, followed by guided imagery (PUBMED:26222093). A pilot study on elderly orthopaedic patients undergoing joint replacement surgery indicated trends toward positive outcomes for pain relief, decreased anxiety, and decreased length of stay with the use of guided imagery, although further research with a larger sample size is needed to validate these findings (PUBMED:15554471). In pediatric patients, a study demonstrated that hypnosis/guided imagery led to significantly lower postoperative pain ratings and shorter hospital stays, as well as decreased state anxiety for the guided imagery group compared to the control group (PUBMED:8897217). Additionally, a study on cardiac surgery patients found that those who completed a guided imagery program had a shorter average length of stay and decreased average direct pharmacy and pain medication costs while maintaining high patient satisfaction (PUBMED:12134377). In summary, while not all studies show a significant difference, there is evidence to suggest that guided imagery can be effective in reducing pain and anxiety in postoperative total joint arthroplasty patients, with some studies also indicating improved patient satisfaction and reduced hospital stays. Further research, particularly randomized controlled trials, is recommended to confirm these findings and establish guided imagery as a standard complementary intervention for pain and anxiety management in this patient population.
Instruction: Do anxiety and depression confound symptom reporting and diagnostic accuracy in chronic rhinosinusitis? Abstracts: abstract_id: PUBMED:18254366 Do anxiety and depression confound symptom reporting and diagnostic accuracy in chronic rhinosinusitis? Objectives: We sought to determine the impact of psychiatric comorbidity on symptom reporting and diagnostic accuracy in chronic rhinosinusitis (CRS). Methods: A prospective cohort of patients presenting for evaluation of CRS was studied with the Rhinosinusitis Symptom Inventory and the Hospital Anxiety and Depression Scale. Data concerning symptom scores, symptom domains, and psychiatric comorbidity were analyzed with respect to paranasal sinus computed tomography (CT) Lund score. The degree of correlation between sinonasal symptoms and CT scan stage adjusting for anxiety and depression was determined. The relationships between symptoms, psychiatric comorbidity, and the presence of radiographic CRS were determined with multivariate logistic regression. Results: There were 230 patients (mean age, 43.4 years) enrolled. High levels of anxiety and depression were noted in 23.5% and 13.0%, respectively. According to CT criteria, 61.3% to 71.3% of patients had true CRS. Even adjusting for anxiety and depression, only the nasal symptom domain severity had a statistically significant correlation with Lund score (r = 0.240; p = .001); facial, oropharyngeal, systemic, and total symptoms did not correlate with CT score. Similarly, multivariate analysis revealed that even with adjustment for anxiety and depression, sinus symptom domains failed to predict radiographic positivity. Individually, only dysosmia, dental pain, and fatigue were predictive for CRS (odds ratios, 1.29 [p = .032], 1.69 [p = .001], and 0.70 [p = .023], respectively). Conclusions: Although anxiety and depression are prevalent in patients with CRS, they do not influence the correlation between nasal symptoms and CT findings. Markers of psychiatric comorbidity do not correlate with CRS symptoms. The presence of anxiety or depression does not cloud symptom reporting in CRS and should not bias clinical decision-making. abstract_id: PUBMED:19225306 Symptom outcomes following endoscopic sinus surgery. Purpose Of Review: To summarize recent studies exploring the response of chronic rhinosinusitis (CRS) symptoms to endoscopic sinus surgery (ESS). Recent Findings: ESS symptom outcomes were reported using various symptom scoring systems and more than 18 survey instruments. When individual symptom scores were pooled by meta-analysis, most major CRS symptoms improved to a similar degree following surgery, with an overall effect size of 1.19 (95% confidence interval, 0.96-1.41; I(2) = 82%). Headache (effect size, 0.98) and hyposmia (effect size, 0.97) improved less than nasal obstruction (effect size, 1.73). Fatigue and bodily pain were more severe than general population normative values and improved following ESS by an effect size of approximately 0.5 SD, a change usually regarded as a minimally important clinical difference. Similarly, quality-of-life score in fibromyalgia improved after surgery. Neither anxiety nor depression confounded CRS symptom reporting, and neither correlated with computed tomography score; however, depression was associated with lower quality-of-life scores before and after ESS. Summary: Following ESS, most major CRS symptoms usually improve to a similar degree. Bodily pain and fatigue are associated with CRS and improve after ESS. Anxiety and depression do not confound symptom reporting, although depression is associated with poorer preoperative and postoperative quality-of-life scores. abstract_id: PUBMED:32609104 Co-morbid anxiety and depression impacts on the correlation between symptom and radiological severity in patients with chronic rhinosinusitis. Background: Previous studies have reported that there is little correlation between sino-nasal outcome test (SNOT-22) score in chronic rhinosinusitis (CRS) and objective radiological scoring, although conclusions vary. We aimed to investigate whether comorbid anxiety and depression, which are highly prevalent in patients with CRS may cause symptom amplification and account for the lack of correlation in previous studies. Methodology: 100 patients with CRS were evaluated using the General Anxiety Disorder-7 (GAD-7), Patient Health Question- naire-9 (PHQ-9) and SNOT-22 questionnaires as well as the Lund Mackay Score (LMS). Results: Overall correlation analysis did not show a significant relationship between SNOT-22 and LMS scores. Subgroup analysis of patients who do not suffer with anxiety and depression showed a significant correlation between SNOT-22 and LMS scores. The nasal domain of the SNOT-22 showed strongest correlation to LMS in this patient group. We also observed a significant difference in both median SNOT-22 and LMS between patients who suffered both anxiety and depression and patients without either co- morbidity. Conclusion: When CRS patients who do not have anxiety and depression are analysed in isolation, or when these conditions are controlled in a multivariable regression, there is a significant correlation between radiological findings and symptom score. This correlation is absent in patients with co-morbid anxiety and depression. Anxiety and depression should be considered in patients in whom there is a mismatch in symptom and radiological disease severity as it is associated with symptom amplification. abstract_id: PUBMED:34752155 Depression and Anxiety: Considerations for Interpretation of the SNOT-22 (22-Item Sinonasal Outcome Test). Objective: The objective of this study was to determine whether depression and anxiety symptoms affect and confound scoring on the 22-item Sinonasal Outcome Test (SNOT-22), a commonly used outcome measure for chronic rhinosinusitis. Study Design: Prospective cross-sectional. Setting: Tertiary care academic center. Methods: 240 participants completed the SNOT-22, from which nasal, sleep, ear/facial pain, and emotional subdomain scores were calculated. They also completed the 8-item Patient Health Questionnaire (PHQ-8) as a reflection of depression symptoms and 7-item Generalized Anxiety Disorder (GAD-7) questionnaire as a reflection of anxiety symptoms. Correlations were calculated between the 4 SNOT-22 subdomains and the PHQ-8 and GAD-7. Additionally, the predictive ability of subdomains and individual items of the SNOT-22 to predict depression and anxiety was calculated. Results: The SNOT-22 sleep and emotional subdomains most strongly correlated with the PHQ-8 and the GAD-7. The emotional and sleep subdomain scores were predictive of having depression or anxiety. An emotional subdomain score ≥4 had 62.5% sensitivity and 90.1% specificity for detecting depression and 78.8% sensitivity and 88.9% specificity for detecting anxiety. A sleep subdomain score ≥21 had 81.2% sensitivity and 71.4% specificity for detecting depression and 87.9% sensitivity and 68.6% specificity for detecting anxiety. The emotional subdomain item related to sadness and the sleep subdomain items related to functional impairment were most predictive of depression and anxiety. Conclusion: The SNOT-22 emotional and sleep subdomain scores may be used to predict active depression and anxiety symptoms, especially when items related to sadness or functional impairment are scored with moderate burden. abstract_id: PUBMED:17727079 Association of anxiety and depression with reported disease severity in patients undergoing evaluation for chronic rhinosinusitis. Objectives: We sought to determine the impact of psychiatric comorbidity on symptom density and resource utilization in chronic rhinosinusitis (CRS). Methods: A prospective cohort of patients who sought evaluation of CRS was studied with the Rhinosinusitis Symptom Inventory and the Hospital Anxiety and Depression Scale. Data concerning symptom scores, symptom domains, and psychiatric comorbidity were analyzed to determine the interactions among psychiatric comorbidity, symptom reporting, and resource utilization in CRS. Results: We studied 143 patients (mean age, 43.4 years). Low, moderate, and high levels of anxiety were reported by 48.3%, 25.9%, and 25.9% of patients, respectively. Low, moderate, and high levels of depression were reported by 76.2%, 9.1%, and 14.7%. For the combined psychopathology group, 43.3%, 25.9%, and 30% had low, moderate, and high levels. Patients with high anxiety levels reported significant elevations of oropharyngeal symptoms (p = .013) and total symptoms (p = .030) in comparison with the low group. Patients with high depression levels reported higher oropharyngeal (p = .003), systemic (p = .001), and total symptom (p = .003) scores than did the low group. High combined psychopathology scores were associated with elevated facial, oropharyngeal, and systemic scores (p &lt; .05). Regarding medical utilization, high anxiety levels or high combined psychopathology scores were associated with more frequent physician visits (p &lt; .05). A high level of depression was associated with increased antibiotic use, missed workdays, and physician visits (p &lt; .05). Conclusions: High levels of anxiety and depression are common in patients who undergo evaluation for CRS. Psychiatric comorbidity is associated with increased symptoms in CRS and increased health-care utilization. Anxiety and depression should be identified in these patients to structure appropriate treatment. abstract_id: PUBMED:26268493 Prevalence and psychopathological characteristics of anxiety and depression in patients with chronic rhinosinusitis before endoscopic sinus surgery Objective: To investigate the prevalence and psychopathological characteristics of anxiety and depression in patients with chronic rhinosinusitis (CRS) and to find the risk factors leading to psychological problems. Methods: Between August 2013 and April 2014, 117 consecutive patients with the diagnosis of CRS who had been scheduled for endoscopic sinus surgery were prospectively enrolled. Somatic and psychological symptoms were evaluated using a series of questionnaire instruments. The instruments included symptom checklist-90 (SCL-90), self-rating depression scale (SDS), self-rating anxiety scale (SAS) and the visual analogue scale (VAS) and the sinonasal outcome test 20 (SNOT-20) and Lund-Mackay computed tomography score. The results of SAS, SDS, SCL-90 were compared with the standard, obtained from healthy Chinese population. Multivariate Logistic regression was used to analyze the factors that might cause anxiety and depression. SPSS 19.0 software was used to analyze the data. Results: The scores of SAS and SDS (39.40 ± 11.55, 54.05 ± 10.96) were significantly higher than those of our country's normal standard (29.78 ± 10.46, 41.88 ± 10.57, t equals 5.648, 7.529, all P &lt; 0.01). The SCL-90 scores were significantly higher than those of the normal standard population, including dimension of somatization, anxiety, depression, psychosis and total average score of the factors ( all P &lt; 0.01), the result of somatization, anxiety, depression had positive correlation with the scores of SAS and SDS (r equals 0.681, 0.781, 0.531, 0.866, 0.674, 0.557, all P &lt; 0.05). Multivariate Logistic regression showed that gender and CRS complicated with asthma or allergic rhinitis (AR) and the symptom of nasal obstruction were related to the incidence of anxiety depression comorbid. In addition, the gender and concurrent asthma had positive correlation with incidence of any anxiety or depressive disorder. To compare the abnormal psychological state group and healthy group, the SNOT-20 scores had no statistical significance (all P &gt; 0.05). Conclusions: High prevalence of anxiety and depression was found in CRS patients. Such factors as gender, nasal obstruction and concurrent with asthma or AR are high risk factors for anxiety and depression in patients with CRS. abstract_id: PUBMED:29150985 General illness and psychological factors in patients with chronic nasal symptoms. Objectives: Only a minority of patients referred to specialists with sinonasal symptoms have clear evidence of chronic rhinosinusitis (CRS). This study aims to estimate the prevalence of and associations between (i) general illness factors (fatigue, autonomic dysfunction) and (ii) psychological factors (anxiety, depression, somatisation, personality traits) in patients presenting with sinonasal symptoms. Design: The following validated questionnaires were administered to patients: the Sino-Nasal Outcome Test-22 (SNOT-22) identifying symptom burden, Composite Autonomic Symptom Score-31 (COMPASS-31) measuring autonomic function, Chalder Fatigue Questionnaire, Patient Health Questionnaire-15 (PHQ-15) addressing somatisation symptoms, Hospital Anxiety and Depression Scale (HADS), and the International Personality Item Pool-50 (IPIP-50). Comparisons were made with normative and general population data, and relationships were analysed using nonparametric correlation. Setting: Secondary care ENT outpatients. Participants: Adults referred with chronic sinonasal symptoms. Main Outcome Measures: SNOT-22, COMPASS-31, Chalder, PHQ-15, HADS, and IPIP-50 questionnaire scores. Results: Sixty-one patients were included. There was a high prevalence of all general and psychological factors assessed compared with controls. Total SNOT-22 scores showed significant correlation with Chalder fatigue scores, total autonomic dysfunction score, anxiety, depression, somatisation tendencies and the emotionally unstable personality trait. Emotional instability and psychological dysfunction correlated significantly with sleep and psychological subscales of SNOT-22 but not the rhinological or ear/facial subscales. Conclusion: Patients with sinonasal symptoms demonstrate high prevalence and complex associations of general illness factors, psychological distress and certain personality traits. The SNOT-22 is a valuable tool, but its utility is limited by correlations with these confounding factors (eg psychological factors) that may exaggerate the total score. The use of the SNOT-22 component subscales is likely to provide more clinically meaningful and discriminant information. abstract_id: PUBMED:33900051 Chronic rhinosinusitis with and without nasal polyps and asthma: Omalizumab improves residual anxiety but not depression. Background: Chronic rhinosinusitis (CRS) has a high prevalence of anxiety and depression. It is currently uncertain if treatment in patients with CRS with or without nasal polyps (CRSwNP and CRSsNP) has any impact on improving mental health outcomes. The aims here were to document anxiety and depression in patients with severe CRS and asthma already treated with appropriate medical therapy. We then evaluated whether further maximal treatment with omalizumab improved anxiety and/or depression alongside improvements in CRS and coassociated asthma. Methods: Hospital Anxiety and Depression Scale (HADS) scores along with measures of CRS and asthma severity were recorded according to CRSwNP and CRSsNP status in n = 95 patients with severe CRS and asthma. Of this group, a further n = 23 had omalizumab for associated allergic asthma. Follow-up measures were collected 16 weeks after omalizumab treatment. Results: HADS anxiety and depression prevalence in CRS were 49.47 % and 38.95%, respectively. Within the CRSwNP and CRSsNP group 53.06% and 45.66% had raised HADS-anxiety scores. Abnormal HADS-depression scores were present in 40.82% and 36.95% of the CRSwNP and CRSsNP groups, respectively. Correlations for sinonasal outcome test-22 (SNOT-22) versus HADS total was r = 0.59 p &lt; 0.0001, HADS-anxiety r = 0.56 p &lt; 0.0001 and HADS-depression r = 0.49 p &lt; 0.0001. Omalizumab improved anxiety in CRS (p &lt; 0.0001) regardless of nasal polyp status (CRSwNP p = 0.0042 and CRSsNP p = 0.0078). Depression scores did not improve in either group. SNOT-22 (p = 0.0006), asthma control questionnaire-7 (p = 0.0019) and mini-asthma quality of life questionnaire including emotional function (p = 0.0003 and p = 0.0009, respectively) all improved in both subgroups. Conclusion: In CRS and asthma, anxiety scores but not depression improved after omalizumab treatment. Anxiety may be closely related to airway disease severity, but depression may be independent of airway disease itself. If so, a separate mental health care pathway is needed for CRS patients with depression. abstract_id: PUBMED:23095946 The impact of mental health on chronic rhinosinusitis symptom scores. We aimed to assess whether anxiety and depression influence a patient's subjective symptom scores in chronic rhinosinusitis (CRS). Fifty-seven consecutive patients referred with CRS were asked to complete Sinonasal Outcome Test-22 (SNOT22) and Hospital Anxiety and Depression Score (HADS) questionnaires. Spearman's rank correlation was used to test association between SNOT22 and HADS. Linear regression was used to simultaneously analyse SNOT, HADS, patient age and gender. Mean anxiety score was 7.9 (range 0-19) with 32 % scoring over 11 (A score of ≥ 11 for either anxiety or depression is suggestive). Mean depression score was 5.0 (range 0-17), with 11 % scoring over 11. Five patients scored over 11 for both domains in the HADS questionnaire. Total SNOT22 and both anxiety (Spearman's ρ = 0.48, p = 0.0002) and depression scores (ρ = 0.44, p = 0.0007) revealed a strong association. We also identified a very strong association between psychological symptoms (questions 11-20) in SNOT22 and anxiety (ρ = 0.60, p = 0.0001) and depression scores (ρ = 0.341, p = 0.002). However, there was no association between nose-specific symptoms in SNOT22 and anxiety score in HADS (ρ = 0.18, p = 0.1775), but there was a strong association between nose-specific symptoms of SNOT22 and depression score in HADS (ρ = 0.32, p = 0.02). There is a strong association between total SNOT 22 and HADS score, but this appears to be due to the impact on generic quality of life items rather than disease-specific symptoms. This supports inclusion of global items in the SNOT22, but also use of two separate subscales when interpreting data. abstract_id: PUBMED:25952937 Depression and anxiety in chronic rhinosinusitis. Background: Both depression and anxiety have been suspected to impact quality of life adversely in patients with chronic rhinosinusitis (CRS). The objective of this work was to assess for the presence of anxiety and/or depression in the setting of CRS and to assess their impact on disease-related quality of life by analyzing the correlation between the Rhinosinusitis Disability Index (RSDI) and the Hospital Anxiety and Depression score (HADS), as well as the Lund-Kennedy nasal endoscopic scores. Methods: A total of 124 patients with CRS were prospectively evaluated in the outpatient setting using the RSDI and HADS questionnaires as well as the Lund-Kennedy nasal endoscopy scoring system. Results: The total RSDI and its subscale scores had moderate to very strong correlation with the HADS total score and each of its subscale scores in comparison to their poor correlation with Lund-Kennedy endoscopic score. CRS patients with depression or anxiety scores 8 to 10 (possible case of clinically significant depression or anxiety) and 11 to 21 (probable case) reported worse total RSDI and subscale scores when compared with those with normal scores (0 to 7). There was no significant difference in the Lund-Kennedy endoscopic scores between the different groups of anxiety and depression scores. Conclusion: Depression and anxiety are prevalent in CRS. The total RSDI and its different subscale scores exhibit moderate to very strong correlation with depression/anxiety scores as determined by HADS, whereas poor correlation was seen with the Lund-Kennedy endoscopic score. Answer: Anxiety and depression are prevalent in patients with chronic rhinosinusitis (CRS), but their impact on symptom reporting and diagnostic accuracy appears to be complex. Several studies have investigated this relationship with varying findings. One study found that anxiety and depression do not influence the correlation between nasal symptoms and CT findings in CRS. Markers of psychiatric comorbidity did not correlate with CRS symptoms, and the presence of anxiety or depression did not cloud symptom reporting in CRS, suggesting that these factors should not bias clinical decision-making (PUBMED:18254366). Another study reported that anxiety and depression do not confound CRS symptom reporting, although depression was associated with lower quality-of-life scores both before and after endoscopic sinus surgery (ESS). Following ESS, most major CRS symptoms usually improve to a similar degree, and bodily pain and fatigue, which are associated with CRS, also improve after surgery (PUBMED:19225306). However, a different study found that when CRS patients who do not have anxiety and depression are analyzed in isolation, or when these conditions are controlled in a multivariable regression, there is a significant correlation between radiological findings and symptom score. This correlation is absent in patients with comorbid anxiety and depression, indicating that these conditions may be associated with symptom amplification (PUBMED:32609104). Furthermore, the SNOT-22 emotional and sleep subdomain scores may be used to predict active depression and anxiety symptoms, especially when items related to sadness or functional impairment are scored with moderate burden (PUBMED:34752155). Another study found that psychiatric comorbidity is associated with increased symptoms in CRS and increased health-care utilization, suggesting that anxiety and depression should be identified in these patients to structure appropriate treatment (PUBMED:17727079). In summary, while anxiety and depression are common in patients with CRS and may be associated with increased symptom reporting and health-care utilization, they do not necessarily confound the correlation between nasal symptoms and objective CT findings. However, they may influence the overall symptom burden and quality of life, and their presence may be associated with symptom amplification, particularly when not accounted for in analyses. Therefore, it is important to consider these factors in the clinical assessment and management of CRS patients.
Instruction: Decreased endothelin-1 plasma levels in multiple sclerosis patients: a possible factor of vascular dysregulation? Abstracts: abstract_id: PUBMED:25864450 Decreased endothelin-1 plasma levels in multiple sclerosis patients: a possible factor of vascular dysregulation? Background: Multiple sclerosis (MS) is a demyelinating autoimmune disease of the central nervous system with possible involvement of vascular dysregulation secondary to endothelial dysfunction caused by destruction of the vessel wall. Vascular dysregulation leads to excessive vasoconstriction or insufficient vasodilatation, resulting in vasospasm mediated by endothelin-1 (ET-1), the most potent and long-lasting mediator. Vascular dysregulation can play an important role in the pathogenesis of some eye disorders and it has been hypothesized that it is a vascular risk factor for glaucomatous optic neuropathy. The aim of this study was to estimate endothelin-1 (ET-1) plasma levels in patients with MS. Material And Methods: The MS group consisted of 39 patients (9 males, 30 females), mean age: 38.8 ± 10.02 years, range: 22-62. The control group consisted of 27 healthy volunteers (3 males and 24 females), mean age: 37.4 ± 10.88 years, range: 20-62; clinically, in a non-active stage of the disease. ET-1 plasma levels were measured using the Endothelin-1 ELISA Kit (Immuno-Biological Laboratories Co., Japan). Statistical analysis was performed with the nonparametric Mann-Whitney U test for independent groups. Results: Endothelin-1 (ET-1) plasma levels were significantly lower in MS patients compared to healthy controls: mean value 0.55 ± 0.44 pg/ml (146.05 ± 118.27 fmol/ml) vs. 0.95 ± 0.48 pg/ml (252.83 ± 127.16 fmol/ml); P=0.012. Conclusions: Significantly decreased ET-1 plasma levels in the MS patients could reflect the non-active disease at the time of ET-1 measurements or the effects of immunomodulatory treatment, but it cannot be excluded that decreased ET-1 plasma levels in these patients might result from vascular dysregulation. abstract_id: PUBMED:17998040 What is the link between vascular dysregulation and glaucoma? The need of blood flow to different organs varies rapidly over time which is why there is sophisticated local regulation of blood flow. The term dysregulation simply means that blood flow is not properly adapted to this need. Dysregulative mechanisms can lead to an over- or underperfusion. A steady overperfusion may be less critical for long-term damage. A constant underperfusion, however, can lead to some tissue atrophy or in extreme situations to infarction. Unstable perfusion (underperfusion followed by reperfusion) leads to oxidative stress. There are a number of causes that lead to local or systemic vascular dysregulation. Systemic dysregulation can be primary or secondary of nature. A secondary dysregulation is due to other autoimmune diseases such as rheumatoid arthritis, giant cell arteritis, systemic lupus erythematodes, multiple sclerosis, colitis ulcerosa, or Crohns disease. Patients with a secondary vascular dysregulation normally have a high level of circulating endothelin-1 (ET-1). This increased level of ET-1 leads to a reduction of blood flow both in the choroid and the optic nerve head but has little influence on autoregulation. In contrast, primary vascular dysregulation has little influence on baseline ocular blood flow but interferes with autoregulation. This, in turn, leads to unstable oxygen supply, which seems to be a relevant component in the pathogenesis of glaucomatous optic neuropathy. abstract_id: PUBMED:23742177 The primary vascular dysregulation syndrome: implications for eye diseases. Vascular dysregulation refers to the regulation of blood flow that is not adapted to the needs of the respective tissue. We distinguish primary vascular dysregulation (PVD, formerly called vasospastic syndrome) and secondary vascular dysregulation (SVD). Subjects with PVD tend to have cold extremities, low blood pressure, reduced feeling of thirst, altered drug sensitivity, increased pain sensitivity, prolonged sleep onset time, altered gene expression in the lymphocytes, signs of oxidative stress, slightly increased endothelin-1 plasma level, low body mass index and often diffuse and fluctuating visual field defects. Coldness, emotional or mechanical stress and starving can provoke symptoms. Virtually all organs, particularly the eye, can be involved. In subjects with PVD, retinal vessels are stiffer and more irregular, and both neurovascular coupling and autoregulation capacity are reduced while retinal venous pressure is often increased. Subjects with PVD have increased risk for normal-tension glaucoma, optic nerve compartment syndrome, central serous choroidopathy, Susac syndrome, retinal artery and vein occlusions and anterior ischaemic neuropathy without atherosclerosis. Further characteristics are their weaker blood-brain and blood-retinal barriers and the higher prevalence of optic disc haemorrhages and activated astrocytes. Subjects with PVD tend to suffer more often from tinnitus, muscle cramps, migraine with aura and silent myocardial ischaemic and are at greater risk for altitude sickness. While the main cause of vascular dysregulation is vascular endotheliopathy, dysfunction of the autonomic nervous system is also involved. In contrast, SVD occurs in the context of other diseases such as multiple sclerosis, retrobulbar neuritis, rheumatoid arthritis, fibromyalgia and giant cell arteritis. Taking into consideration the high prevalence of PVD in the population and potentially linked pathologies, in the current article, the authors provide recommendations on how to effectively promote the field in order to create innovative diagnostic tools to predict the pathology and develop more efficient treatment approaches tailored to the person. abstract_id: PUBMED:12646761 Extraocular blood flow and endothelin-1 plasma levels in patients with multiple sclerosis. In order to evaluate whether plasma levels of the potent vasoconstrictor endothelin-1 (ET-1) are increased in patients with multiple sclerosis (MS) and whether these patients exhibit an ET-1-mediated vascular dysregulation, ET-1 plasma levels were measured in 30 patients with MS. Blood flow velocities in the ophthalmic artery, central retinal artery, central retinal vein, short lateral posterior ciliary artery, and short medial posterior ciliary artery were assessed in parallel. ET-1 plasma levels were significantly increased in MS patients when compared to sex- and age-matched healthy controls (2.0 +/- 0.4 pg/ml, range 1.1-2.8 vs. 1.5 +/- 0.2 pg/ml, range 0.9-2.0; p &lt; 0.001). Moreover, the patients exhibited significant alterations of extraocular blood flow. The role of ET-1 in the inflammatory process remains to be clarified. abstract_id: PUBMED:11315981 Increased endothelin-1 plasma levels in patients with multiple sclerosis. Objective: We tested the hypothesis that the plasma level of endothelin-1 (ET-1) is increased in patients with multiple sclerosis (MS). The peptide ET-1 is one of the most potent known vasoconstrictors. An increased level of endothelin could explain some of the vascular symptoms of these patients. Materials And Methods: A specific radioimmunoassay was used to determine ET-1 plasma levels. Twenty patients with MS were compared to 20 age- and sex-pair-matched healthy subjects. Results: The plasma ET-1 levels were, on average, 224% higher in the patients with MS than in the controls (p &lt; 0.005). The mean ET-1 levels (mean +/- standard deviation [SD]) were 3.5 +/- 0.83 pg/mL (min 2.13, max 5.37 pg/mL) in patients with MS and 1.56 +/- 0.3 pg/mL (min 0.9, max 2.13 pg/mL) in healthy volunteers. Neither the different forms nor stages of MS had an influence on the results. The ET-1 level was also not correlated with the duration of the disease. Conclusions: The plasma ET-1 level is markedly and significantly increased in patients with MS. Neither the cause of such an increase nor the pathogenetic role is known. abstract_id: PUBMED:21683931 Vascular aspects of multiple sclerosis. Three types of vascular dysfunction have been described in multiple sclerosis (MS). First, findings from epidemiological studies suggest that patients with MS have a higher risk for ischaemic stroke than people who do not have MS. The underlying mechanism is unknown, but might involve endothelial dysfunction secondary to inflammatory disease activity and increased plasma homocysteine concentrations. Second, patients with MS have global cerebral hypoperfusion, which might predispose them to the development of ischaemic stroke. The widespread decrease in perfusion in normal-appearing white matter and grey matter in MS seems not to be secondary to axonal degeneration, but might be a result of reduced axonal activity, reduced astrocyte energy metabolism, and perhaps increased blood concentrations of endothelin-1. Data suggest that a subtype of focal MS lesions might have an ischaemic origin, and there seems to be a link between reduced white matter perfusion and cognitive dysfunction in MS. Third, the pathology of MS might be the consequence of a chronic state of impaired venous drainage from the CNS, for which the term chronic cerebrospinal venous insufficiency (CCSVI) has been coined. A number of recent vascular studies do not support the CCSVI theory, but some elements of CCSVI might be explained by slower cerebral venous blood flow secondary to the reduced cerebral perfusion in patients with MS compared with healthy individuals. abstract_id: PUBMED:11286896 Vasospasm, its role in the pathogenesis of diseases with particular reference to the eye. Vasospasm can have many different causes and can occur in a variety of diseases, including infectious, autoimmune, and ophthalmic diseases, as well as in otherwise healthy subjects. We distinguish between the primary vasospastic syndrome and secondary vasospasm. The term "vasospastic syndrome" summarizes the symptoms of patients having such a diathesis as responding with spasm to stimuli like cold or emotional stress. Secondary vasospasm can occur in a number of autoimmune diseases, such as multiple sclerosis, lupus erythematosus, antiphospholipid syndrome, rheumatoid polyarthritis, giant cell arteritis, Behcet's disease, Buerger's disease and preeclampsia, and also in infectious diseases such as AIDS. Other potential causes for vasospasm are hemorrhages, homocysteinemia, head injury, acute intermittent porphyria, sickle cell disease, anorexia nervosa, Susac syndrome, mitochondriopathies, tumors, colitis ulcerosa, Crohn's disease, arteriosclerosis and drugs. Patients with primary vasospastic syndrome tend to suffer from cold hands, low blood pressure, and even migraine and silent myocardial ischemia. Valuable diagnostic tools for vasospastic diathesis are nailfold capillary microscopy and angiography, but probably the best indicator is an increased plasma level of endothelin-1. The eye is frequently involved in the vasospastic syndrome, and ocular manifestations of vasospasm include alteration of conjunctival vessels, corneal edema, retinal arterial and venous occlusions, choroidal ischemia, amaurosis fugax, AION, and glaucoma. Since the clinical impact of vascular dysregulation has only really been appreciated in the last few years, there has been little research in the according therapeutic field. The role of calcium channel blockers, magnesium, endothelin and glutamate antagonists, and gene therapy are discussed. abstract_id: PUBMED:38433209 Induced pluripotent stem cell derived pericytes respond to mediators of proliferation and contractility. Background: Pericytes are multifunctional contractile cells that reside on capillaries. Pericytes are critical regulators of cerebral blood flow and blood-brain barrier function, and pericyte dysfunction may contribute to the pathophysiology of human neurological diseases including Alzheimers disease, multiple sclerosis, and stroke. Induced pluripotent stem cell (iPSC)-derived pericytes (iPericytes) are a promising tool for vascular research. However, it is unclear how iPericytes functionally compare to primary human brain vascular pericytes (HBVPs). Methods: We differentiated iPSCs into iPericytes of either the mesoderm or neural crest lineage using established protocols. We compared iPericyte and HBVP morphologies, quantified gene expression by qPCR and bulk RNA sequencing, and visualised pericyte protein markers by immunocytochemistry. To determine whether the gene expression of neural crest iPericytes, mesoderm iPericytes or HBVPs correlated with their functional characteristics in vitro, we quantified EdU incorporation following exposure to the key pericyte mitogen, platelet derived growth factor (PDGF)-BB and, contraction and relaxation in response to the vasoconstrictor endothelin-1 or vasodilator adenosine, respectively. Results: iPericytes were morphologically similar to HBVPs and expressed canonical pericyte markers. However, iPericytes had 1864 differentially expressed genes compared to HBVPs, while there were 797 genes differentially expressed between neural crest and mesoderm iPericytes. Consistent with the ability of HBVPs to respond to PDGF-BB signalling, PDGF-BB enhanced and a PDGF receptor-beta inhibitor impaired iPericyte proliferation. Administration of endothelin-1 led to iPericyte contraction and adenosine led to iPericyte relaxation, of a magnitude similar to the response evoked in HBVPs. We determined that neural crest iPericytes were less susceptible to PDGFR beta inhibition, but responded most robustly to vasoconstrictive mediators. Conclusions: iPericytes express pericyte-associated genes and proteins and, exhibit an appropriate physiological response upon exposure to a key endogenous mitogen or vasoactive mediators. Therefore, the generation of functional iPericytes would be suitable for use in future investigations exploring pericyte function or dysfunction in neurological diseases. abstract_id: PUBMED:7818262 Tumor necrosis factor-alpha messenger RNA expression in patients with relapsing-remitting multiple sclerosis is associated with disease activity. We determined the cytokine messenger RNA (mRNA) expression pattern of blood mononuclear cells in 29 patients with relapsing-remitting multiple sclerosis every 4 weeks over a period of 12 months. During this period 27 relapses occurred in 14 patients (48%). Progression of disease activity as assessed by the occurrence of new lesions on nonenhancing T2-weighted magnetic resonance images of the head was detected in 12 (48%) of 25 patients. Using a semiquantitative polymerase chain reaction we demonstrated significant increases in tumor necrosis factor-alpha mRNA expression in peripheral blood mononuclear cells prior to a relapse. In 24 (85%) of 27 relapses increased tumor necrosis factor-alpha mRNA expression preceded clinical symptoms by 4 weeks. A similar pattern was observed for lymphotoxin mRNA expression. At the same time, transforming growth factor-beta and interleukin-10 mRNA levels declined. Fluctuations in the mRNA expression of tumor necrosis factor-alpha were also observed in 6 patients with stable disease who had active magnetic resonance scans on follow-up. No correlation of disease activity was observed with interleukin-1 beta, -4, or -6, inferferon gamma or endothelin-1 mRNA expression. From these data it can be concluded that variations in cytokine mRNA expression in blood mononuclear cells are correlated with disease activity in relapsing-remitting multiple sclerosis. It may be a valuable parameter to monitor the immunological status of patients in future clinical trials. abstract_id: PUBMED:31583031 Increased Levels of Endothelin-1 in Cerebrospinal Fluid Are a Marker of Poor Visual Recovery after Optic Neuritis in Multiple Sclerosis Patients. Background: Multiple sclerosis (MS), a chronic inflammatory and degenerative disease of the central nervous system, typically features immune-mediated focal demyelination and secondary axonal degeneration. Cerebral hypoperfusion of the normal-appearing white matter (NAWM) has been reported in MS patients and may be mediated by elevated levels of endothelin-1 (ET-1), a most potent vasoconstrictive peptide released from reactive astrocytes in MS focal lesions. Optic neuritis (ON) is one of the most frequent manifestations of MS and also shows peripapillary vascular hypoperfusion in combination with disc swelling. Aims: We aimed to compare serum and cerebrospinal fluid (CSF) levels of ET-1 as a potential prognostic marker of MS-ON in two groups of patients differing for severity of MS-ON clinical presentation. Materials And Methods: A cross-sectional study to compare serum and CSF levels of ET-1 between patients with clinically aggressive MS-ON (A-MS-ON) and nonaggressive MS-ON (NA-MS-ON) according to conventional ophthalmological criteria, including optical coherence tomography. CSF and serum concentrations of ET-1 were measured using a commercially available ELISA method. Results: Sixteen patients consecutively referred to the Units of Neurology for visual disturbances attributable to MS were recruited, 11 (69%) patients with A-MS-ON and 5 (31%) with NA-MS-ON. Median CSF ET-1 levels and CSF/serum ET-1 quotient were significantly higher in patients with A-MS-ON (0.30 vs. 0.56 ng/ml) as compared to NA-MS-ON (0.16 vs. 0.16). Conclusions: Severity and failure in the recovery from ON in MS patients may depend from vascular hypoperfusion of the optic nerve induced by high intrathecally produced ET-1, a potential prognostic marker of ON recovery in MS. The detection of CSF ET-1 levels may allow identifying groups of ON patients potentially benefitting from treatment with ET-1 antagonists (e.g., bosentan). Answer: The study presented in PUBMED:25864450 found that endothelin-1 (ET-1) plasma levels were significantly lower in multiple sclerosis (MS) patients compared to healthy controls. This finding suggests that decreased ET-1 plasma levels could be a factor in the vascular dysregulation observed in MS patients. Vascular dysregulation refers to the improper adaptation of blood flow to the needs of tissue, which can lead to over- or underperfusion and potentially contribute to various diseases, including eye disorders and glaucomatous optic neuropathy (PUBMED:17998040; PUBMED:23742177). The role of ET-1 in vascular regulation is significant, as it is a potent mediator that can lead to vasoconstriction or insufficient vasodilation, resulting in vasospasm (PUBMED:17998040). In the context of MS, vascular dysregulation can be secondary to endothelial dysfunction caused by the destruction of the vessel wall (PUBMED:25864450). This dysfunction can be associated with increased plasma homocysteine concentrations and reduced cerebral perfusion, which may predispose MS patients to ischemic stroke and contribute to the pathology of MS (PUBMED:21683931). Interestingly, other studies have reported increased ET-1 plasma levels in MS patients (PUBMED:12646761; PUBMED:11315981), suggesting that the relationship between ET-1 levels and MS may be complex and potentially influenced by the disease stage, activity, or treatment. For instance, PUBMED:25864450 posits that the decreased ET-1 levels observed could reflect the non-active disease stage at the time of measurement or the effects of immunomodulatory treatment. Conversely, PUBMED:11315981 found that ET-1 levels were markedly increased in MS patients, but the cause and pathogenetic role of this increase were not known. In summary, the observed decrease in ET-1 plasma levels in MS patients could be indicative of vascular dysregulation, which may play a role in the pathogenesis of MS and its associated symptoms. However, the exact relationship between ET-1 levels and MS remains to be fully elucidated, as different studies have reported varying levels of ET-1 in MS patients. Further research is needed to clarify the role of ET-1 in MS and its potential as a therapeutic target or prognostic marker.
Instruction: Can wireless text messaging improve adherence to preventive activities? Abstracts: abstract_id: PUBMED:28941056 Mobile phone text messaging improves antihypertensive drug adherence in the community. Antihypertensive drug adherence (ADA) is a mainstay in blood pressure control. Education through mobile phone short message system (SMS) text messaging could improve ADA. The authors conducted a randomized study involving 314 patients with hypertension with &lt;6 months of antihypertensive treatment from the Preventive Health Program of 12 different primary care centers in Santiago, Chile. Patients were randomly assigned to receive or not receive SMS related to ADA and healthy lifestyle. Adherence was assessed by the self-reported four-item scale Morisky-Green-Levine questionnaire at baseline and after 6 months of follow-up, with four of four positive questions classified as good adherence. Group comparison for adherence was performed by means of a logistic regression model, adjusting by baseline adherence, age older than 60 years, and sex. A total of 163 patients were randomized to receive and 151 to not receive SMS. After 6 months of follow-up, ADA in the non-SMS group decreased from 59.3% to 51.4% (P=.1). By contrast, adherence increased from 49% to 62.3% (P=.01) in the SMS group. Text messaging intervention improved ADA (risk ratio, 1.3; 95% confidence interval, 1.0-1.6 [P&lt;.05]). At 6-month follow-up, text messaging resulted in an increase in reporting ADA in this hypertensive Latino population. This approach could become an effective tool to overcome poor medication adherence in the community. abstract_id: PUBMED:25479285 Randomised trial of text messaging on adherence to cardiovascular preventive treatment (INTERACT trial). Background: About one third of patients prescribed blood pressure or lipid-lowering drugs for the prevention of coronary heart disease and stroke do not take their medication as prescribed. We conducted a randomized trial to evaluate text messaging as a means of improving adherence to cardiovascular disease preventive treatment. Methods: 303 patients taking blood pressure and/or lipid-lowering medications were randomly assigned to being sent text messages (Text group, 151) or not being sent them (No text group, 152). Texts were sent daily for 2 weeks, alternate days for 2 weeks and weekly thereafter for 22 weeks (6 months overall), using an automated computer programme. Patients were asked to respond on whether they had taken their medication, whether the text reminded them to do so if they had forgotten, and if they had not taken their medication to determine if there was a reason for not doing so. At 6 months, use of medication was assessed. Results: Two patients were lost to follow-up, providing data on 301 for analysis. In the No text group 38/151 (25%) took less than 80% of the prescribed regimen (ie. stopped medication completely or took it on fewer than 22 of the last 28 days of follow-up) compared to 14/150 patients (9%) in the Text group - an improvement in adherence affecting 16 per 100 patients (95% CI 7 to 24), p&lt;0.001. The texts reminded 98/151 patients (65%) to take medication on at least one occasion and lead to 20/151 (13%) who stopped taking medication because of concern over efficacy or side-effects, resuming treatment. Conclusions: In patients taking blood pressure or lipid-lowering treatment for the prevention of cardiovascular disease, text messaging improved medication adherence compared with no text messaging. Trial Registration: Controlled-Trials.com ISRCTN74757601. abstract_id: PUBMED:26216256 mHealth SMS text messaging interventions and to promote medication adherence: an integrative review. Aims And Objectives: This article is an integrative review of the evidence for mobile health Short Message Service text messages as an innovative and emerging intervention to promote medication adherence. Authors completed this review to draw conclusions and implications towards establishing a scientific foundation for use of text messages to promote medication adherence, thus informing clinical practice. Background: The World Health Organization has identified medication adherence as a priority global problem. Text messages are emerging as an effective means of improving health behaviours and in some diseases to promote medication adherence. However, a gap in the literature indicates lack of evidence in guiding theories and content of text messages, which should be synthesised prior to use in clinical practice. Design: Integrative review. Methods: Cumulative Index to Nursing and Allied Health Literature, Excerpta Medica dataBASE, Scopus, the Cochrane Library and PubMed were searched for relevant studies between 2004-2014. Inclusion criteria were (1) implementation of a text-message intervention and (2) medication adherence to a prescribed oral medication as a primary outcome. Articles were assessed for quality of methodology and measures of adherence. An integrative review process was used to perform analysis. Results: Thirteen articles meeting the inclusion criteria are included in this review. Nine of 13 studies found adherence rates improved between 15·3-17·8% when using text messages to promote medication adherence. Text messages that were standardised, tailored, one- or two-way and timed either daily to medication regimen, weekly or monthly showed improvement in medication adherence. Conclusions: This review established a scientific basis for text messages as an intervention to improve medication adherence across multiple diseases. Future large rigorous randomised trials are needed to further test text messaging interventions. Relevance To Clinical Practice: This review provides clinicians with the state of the science with regard to text messaging interventions that promote medication adherence. A description of intervention components are provided to aid nurses in development of text messages and in translating evidence into practice. abstract_id: PUBMED:27437126 The Effectiveness of Mobile Phone Text Messaging in Improving Medication Adherence for Patients with Chronic Diseases: A Systematic Review. Context: Medication non-adherence is a commonly observed problem in the self-administration of treatment, regardless of the disease type. Text messaging reminders, as electronic reminders, provide an opportunity to improve medication adherence. In this study, we aimed to provide evidence addressing the question of whether text message reminders were effective in improving patients' adherence to medication. Evidence Acquisition: We carried out a systematic literature search, using the five electronic bibliographic databases: PubMed, Embase, PsycINFO, CINAHL, and the Cochrane central register of controlled trials. Studies were included on the basis of whether they examined the benefits and effects of short-message service (SMS) interventions on medication adherence. Results: The results of this systematic review indicated that text messaging interventions have improved patients' medication adherence rate (85%, 29.34). Included in the review, those who had problems with adherence, or those whom text messaging was most helpful had HIV, asthma, diabetes, schizophrenia and heart disease (73.5%). The period of intervention varied from 1 week to 14 months. The most common study design was randomized controlled trials (RCTs) (66%) carried out in the developed countries. Conclusions: This study demonstrated the potential of mobile phone text messaging for medication non-adherence problem solving. abstract_id: PUBMED:26087045 One-way versus two-way text messaging on improving medication adherence: meta-analysis of randomized trials. Background: Mobile telephone text messaging is a simple potential solution to the failure to take medications as directed. There is uncertainty over the effectiveness of 1-way text messaging (sending text message reminders only) compared with 2-way text messaging (sending reminders and receiving replies confirming whether medication has been taken) as a means of improving medication adherence. Methods: A meta-analysis of 8 randomized trials (1994 patients) that tested the effectiveness of text messaging on medication adherence was performed. The trials were divided into 2 groups: trials using 1-way text messaging versus no text messaging and trials using 2-way text messaging versus no text messaging. The summary estimates of the effect of the 2 methods of text messaging (1-way or 2-way) were compared. Results: The summary relative risk estimate was 1.04 (95% confidence interval, 0.97-1.11) for 1-way text messaging and 1.23 (95% confidence interval, 1.13-1.35) for 2-way text messaging. The difference in effect between the 2 methods was statistically significant (P = .007). Conclusions: Two-way text messaging is associated with substantially improved medication adherence compared with 1-way text messaging. This has important implications in the provision of mobile-based messaging in the management of patients taking medication for the prevention of chronic disease. abstract_id: PUBMED:29382623 Improving Refill Adherence in Medicare Patients With Tailored and Interactive Mobile Text Messaging: Pilot Study. Background: Nonadherence is a major concern in the management of chronic conditions such as hypertension, cardiovascular disease, and diabetes where patients may discontinue or interrupt their medication for a variety of reasons. Text message reminders have been used to improve adherence. However, few programs or studies have explored the benefits of text messaging with older populations and at scale. In this paper, we present a program design using tailored and interactive text messaging to improve refill rates of partially adherent or nonadherent Medicare members of a large integrated health plan. Objective: The aim of this 3-month program was to gain an understanding of whether tailored interactive text message dialogues could be used to improve medication refills in Medicare patients with one or more chronic diseases. Methods: We used the mPulse Mobile interactive text messaging solution with partially adherent and nonadherent Medicare patients (ie, over age 65 years or younger with disabilities) of Kaiser Permanente Southern California (KP), a large integrated health plan, and compared refill rates of the text messaging group (n=12,272) to a group of partially adherent or nonadherent Medicare patients at KP who did not receive text messages (nontext messaging group, n=76,068). Both groups were exposed to other forms of refill and adherence outreach including phone calls, secure emails, and robo-calls from December 2016 to February 2017. Results: The text messaging group and nontext messaging group were compared using an independent samples t test to test difference in group average of refill rates. There was a significant difference in medication refill rates between the 2 groups, with a 14.07 percentage points higher refill rate in the text messaging group (P&lt;.001). Conclusions: The results showed a strong benefit of using this text messaging solution to improve medication refill rates among Medicare patients. These findings also support using interactive text messaging as a cost-effective, convenient, and user-friendly solution for patient engagement. Program outcomes and insights can be used to enhance the design of future text-based solutions to improve health outcomes and promote adherence and long-term behavior change. abstract_id: PUBMED:28506955 Text Messaging and Mobile Phone Apps as Interventions to Improve Adherence in Adolescents With Chronic Health Conditions: A Systematic Review. Background: The number of adolescents with chronic health conditions (CHCs) continues to increase. Medication nonadherence is a global challenge among adolescents across chronic conditions and is associated with poor health outcomes. While there has been growing interest in the use of mHealth technology to improve medication adherence among adolescents with CHCs, particularly text messaging and mobile phone apps, there has been no prior systematic review of their efficacy. Objective: The purpose of this review was to systematically evaluate the most recent evidence for the efficacy of text messaging and mobile phone apps as interventions to promote medication adherence among adolescents with CHCs. Methods: PubMed, Embase, CENTRAL, PsycINFO, Web of Science, Google Scholar, and additional databases were searched from 1995 until November 2015. An additional hand search of related themes in the Journal of Medical Internet Research was also conducted. The Preferred Reporting Results of Systematic Reviews and Meta-Analyses guidelines were followed. Two reviewers independently screened titles/abstracts, assessed full-text articles, extracted data from included articles, and assessed their quality using Grades of Recommendation, Assessment, Development, and Evaluation criteria. Included studies were described in original research articles that targeted adherence in adolescents with CHCs (12-24 years-old). Results: Of the 1423 records examined, 15 met predefined criteria: text messaging (n=12) and mobile phone apps (n=3). Most studies were performed in the United States (11/15, 73%), were randomized-controlled trials (8/15, 53%), had a sample size &lt;50 (11/15, 73%), and included adherence self-report and/or biomarkers (9/15, 60%). Only four studies were designed based on a theoretical framework. Approaches for text messaging and mobile phone app interventions varied across studies. Seven articles (7/15, 47%) reported significant improvement in adherence with moderate to large standardized mean differences. Most of the included studies were of low or moderate quality. Studies varied in sample size, methods of adherence assessment, and definition of adherence, which prohibited performing a meta-analysis. Conclusions: The use of text messaging and mobile phone app interventions to improve medication adherence among adolescents with CHCs has shown promising feasibility and acceptability, and there is modest evidence to support the efficacy of these interventions. Further evaluation of short- and long-term efficacy and cost-effectiveness of these interventions is warranted given the early and evolving state of the science. abstract_id: PUBMED:32238334 The Use of Text Messaging to Improve Clinical Engagement for Individuals With Psychosis: Systematic Review. Background: Individuals experiencing psychosis are at a disproportionate risk for premature disengagement from clinical treatment. Barriers to clinical engagement typically result from funding constraints causing limited access to and flexibility in services. Digital strategies, such as SMS text messaging, offer a low-cost alternative to potentially improve engagement. However, little is known about the efficacy of SMS text messaging in psychosis. Objective: This review aimed to address this gap, providing insights into the relationship between SMS text messaging and clinical engagement in the treatment of psychosis. Methods: Studies examining SMS text messaging as an engagement strategy in the treatment of psychosis were reviewed. Included studies were published from the year 2000 onward in the English language, with no methodological restrictions, and were identified using 3 core databases and gray literature sources. Results: Of the 233 studies extracted, 15 were eligible for inclusion. Most studies demonstrated the positive effects of SMS text messaging on dimensions of engagement such as medication adherence, clinic attendance, and therapeutic alliance. Studies examining the feasibility of SMS text messaging interventions found that they are safe, easy to use, and positively received. Conclusions: Overall, SMS text messaging is a low-cost, practical method of improving engagement in the treatment of psychosis, although efficacy may vary by symptomology and personal characteristics. Cost-effectiveness and safety considerations were not adequately examined in the studies included. Future studies should consider personalizing SMS text messaging interventions and include cost and safety analyses to appraise readiness for implementation. abstract_id: PUBMED:31396815 Electronic Pill Bottles or Bidirectional Text Messaging to Improve Hypertension Medication Adherence (Way 2 Text): a Randomized Clinical Trial. Background: Poor medication adherence contributes to inadequate control of hypertension. However, the value of adherence monitoring is unknown. Objective: To evaluate the impact of monitoring adherence with electronic pill bottles or bidirectional text messaging on improving hypertension control. Design: Three-arm pragmatic randomized controlled trial. Patients: One hundred forty-nine primary care patients aged 18-75 with hypertension and text messaging capabilities who were seen at least twice in the prior 12 months with at least two out-of-range blood pressure (BP) measurements, including the most recent visit. Interventions: Patients were randomized in a 1:2:2 ratio to receive (1) usual care, (2) electronic pill bottles for medication adherence monitoring (pill bottle), and (3) bidirectional text messaging for medication adherence monitoring (bidirectional text). Main Measures: Change in systolic BP during the final 4-month visit compared with baseline. Key Results: At the 4-month follow-up visit, mean (SD) change values in systolic blood pressure were - 4.7 (23.4) mmHg in usual care, - 4.3 (21.5) mmHg in the pill bottle arm, and - 4.6 (19.8) mmHg in the text arm. There was no significant change in systolic blood pressure between control and the pill bottle arm (p = 0.94) or the text messaging arm (p = 1.00), and the two intervention arms did not differ from each other (p = 0.93). Conclusions: Despite good measured adherence, neither feedback with electronic pill bottles nor bidirectional text messaging about medication adherence improved blood pressure control. Adherence to prescribed medications was not improved enough to affect BP control or it was not the primary driver of poor control. Trial Registration: clinicaltrials.gov (NCT02778542). abstract_id: PUBMED:27565437 Preliminary feasibility and efficacy of text messaging and application-based adherence interventions in adolescents with epilepsy. Objective: Few interventions have been developed to improve treatment adherence in adolescents with epilepsy. The purpose of this study was to test the feasibility and acceptability of text messaging and application-based interventions to improve adolescent antiepileptic drug adherence. It was hypothesized that these interventions would improve adherence and would be feasible and acceptable to adolescents and their caregivers. Methods: Twenty-five adolescents (mean age=15.7; 52% males) and caregivers were randomized to one of five interventions for one month: 1) adolescent text only, 2) adolescent and caregiver text plus communication, 3) adolescent application only, 4) adolescent and caregiver application plus communication, and 5) epilepsy application for adolescents only. Questionnaires were completed at baseline, posttreatment, and 1-month follow-up. The primary outcome measure was electronically monitored adherence. Results: Results indicated high levels of baseline adherence that contributed to minimal adherence improvements. Adolescents and caregivers reported that text messaging and application-based interventions were feasible and acceptable. Significance: These data highlight that technology-focused adherence interventions are feasible but should likely be targeted to those with demonstrated nonadherence in future clinical trials. Answer: Yes, wireless text messaging has been shown to improve adherence to preventive activities, particularly in the context of medication adherence for chronic conditions. Several studies have demonstrated the effectiveness of mobile phone text messaging as an intervention to enhance medication adherence, which is a critical component of preventive health care. For instance, a randomized study involving patients with hypertension showed that text messaging intervention improved antihypertensive drug adherence (ADA) in a Latino population, suggesting that this approach could be an effective tool to overcome poor medication adherence in the community (PUBMED:28941056). Similarly, the INTERACT trial found that text messaging improved medication adherence in patients taking blood pressure or lipid-lowering treatment for the prevention of cardiovascular disease (PUBMED:25479285). An integrative review of mobile health Short Message Service (SMS) text messages as an intervention to promote medication adherence concluded that text messages are emerging as an effective means of improving health behaviors and promoting medication adherence across multiple diseases (PUBMED:26216256). Moreover, a systematic review indicated that text messaging interventions have improved patients' medication adherence rates, particularly among those with chronic diseases such as HIV, asthma, diabetes, schizophrenia, and heart disease (PUBMED:27437126). A meta-analysis of randomized trials also highlighted that two-way text messaging, which involves sending reminders and receiving replies confirming whether medication has been taken, is associated with substantially improved medication adherence compared with one-way text messaging (PUBMED:26087045). Furthermore, a pilot study using tailored and interactive mobile text messaging demonstrated a significant improvement in medication refill rates among Medicare patients, supporting the use of interactive text messaging as a cost-effective and user-friendly solution for patient engagement (PUBMED:29382623). Overall, the evidence from these studies supports the conclusion that wireless text messaging can effectively improve adherence to preventive activities, particularly medication adherence for chronic health conditions.
Instruction: Laparoscopic management of early stage ovarian cancer: is it feasible, safe, and adequate? Abstracts: abstract_id: PUBMED:30254949 Is It the Time for Laparoscopic Management of Early-stage Ovarian Malignancies? The laparoscopic management of early-stage ovarian cancer remains controversial. Some surgeons hesitate to perform laparoscopic staging due to concern with the adequacy of staging, the possibility of tumor spillage and risk of port-site metastasis. Previous studies and literature reviews have reinforced the argument and supported the use of laparoscopy. However, the results were drawn with limited sample size obtained from case-series and case-control studies which result in difficult to make definite conclusions. Till date, the list of laparoscopic procedures has grown at a pace consistent with improvements in technology and technical skill of the surgeon. The number of studies with larger sample size, more prospective data, and longer duration of follow-up has been increasing. This review serves as an update on safety, feasibility, surgical, and oncological outcomes in cases of early-stage ovarian cancer treated by laparoscopic surgery of the literature published since 2008. We aim to clarify whether laparoscopy is safe and effective enough to be considered as standard management. Rely on nonrandomize studies, the current clinical evidence supports the role of laparoscopy in the management of early-stage ovarian cancer. Laparoscopy appears to offer several perioperative benefits without compromise of surgical morbidity and oncological outcome. abstract_id: PUBMED:24475574 Laparoscopic management of early stage ovarian cancer: is it feasible, safe, and adequate? A retrospective study. Introduction: Minimally invasive surgery to stage early ovarian cancer is still regarded as pioneering among gynecologic oncologists. Previous retrospective experiences demonstrated the safety and feasibility of laparoscopy in this field. Aims: To review the laparoscopic staging procedure in a series of patients with early ovarian cancer and compare results with the literature. Materials And Methods: From January 2004 to September 2011, 19 patients with apparent early stage ovarian/fallopian tube cancer Stage IA to IC underwent either primary treatment or completion staging by laparoscopy. Surgical, pathologic, and oncologic outcomes were analyzed. Results: The mean operative time was 212 +/- 69 minutes. Three patients (16%) underwent fertility sparing surgery. The mean estimated blood loss was two +/- two g/dl. The mean number of pelvic and para-aortic lymph nodes collected was 17 (range 7-27) and 14 (range 8-21), respectively. The mean volume of ovarian/tubal tumor was 119 cm3 (range 1.5-500). The disease was reclassified to a higher stage in ten women (52%). One major intraoperative complication (five percent) occurred which required the conversion to laparotomy. The mean follow up period was 30 +/- 16 months (range 10-74). Overall survival and disease-free survival were 100% and 84%, respectively. Conclusions: Laparoscopic staging of early ovarian cancer appears to be feasible and comprehensive when performed by gynecologic oncologists experienced with advanced laparoscopy. abstract_id: PUBMED:30022633 Adequate pelvic lymphadenectomy and survival of women with early-stage epithelial ovarian cancer. Objective: To examine the trends and survival for women with early-stage epithelial ovarian cancer who underwent adequate lymphadenectomy during surgical treatment. Methods: This is a retrospective observational study examining the Surveillance, Epidemiology, End Results program between 1988 and 2013. We evaluated 21,537 cases of stage I-II epithelial ovarian cancer including serous (n=7,466), clear cell (n=6,903), mucinous (n=4,066), and endometrioid (n=3,102) histology. A time-trend analysis of the proportion of patients who underwent adequate pelvic lymphadenectomy (≥8 per Gynecologic Oncology Group [GOG] criteria, ≥12 per Collaborative Group Report [CGR] criteria for bladder cancer, and &gt;22 per Mayo criteria for endometrial cancer) and a survival analysis associated with adequate pelvic lymphadenectomy were performed. Results: There were significant increases in the proportion of women who underwent adequate lymphadenectomy: GOG criteria 3.6% to 28.6% (1988-2010); CGR criteria 2.4% to 22.4% (1988-2013); and Mayo criteria 0.7% to 9.5% (1988-2013) (all, p&lt;0.05). On multivariable analysis, adequate lymphadenectomy was independently associated with improved cause-specific survival compared to inadequate lymphadenectomy: GOG criteria, adjusted-hazard ratio (HR)=0.75, CGR criteria, adjusted-HR=0.77, and Mayo criteria, adjusted-HR=0.85 (all, p&lt;0.05). Compared to inadequate lymphadenectomy, adequate lymphadenectomy was significantly associated with improved cause-specific survival for serous (HR range=0.67-0.73), endometrioid (HR range=0.59-0.61), and clear cell types (HR range=0.66-0.73) (all, p&lt;0.05) but not in mucinous type (HR range=0.80-0.91; p&gt;0.05). Conclusion: Quality of lymphadenectomy during the surgical treatment for early-stage epithelial ovarian cancer has significantly improved. Adequate lymphadenectomy is associated with a 15%-25% reduction in ovarian cancer mortality compared to inadequate lymphadenectomy. abstract_id: PUBMED:33142772 Impact on Prognosis of the Surgical Route, Laparoscopy or Laparotomy, for the Surgical Staging of Early Stage Ovarian Cancer-A Study from the FRANCOGYN Group. Background And Objective: according to the latest ESMO-ESGO recommendations, laparotomy is the standard surgical approach to treat and stage patients with presumed early stage epithelial ovarian cancer (EOC). A few studies have investigated the efficacy and the safety of laparoscopy for the staging of early stage EOC, and this question is still in the center of debates. Recurrence-free survival (RFS) and overall survival (OS) benefits of the minimally invasive surgery (MIS) have still to be specified. The aim of this multicenter and retrospective study is to assess the survival outcomes of laparoscopic staging in comparison with laparotomic staging for patients presenting with an early stage EOC. Methods: data of patients with early stage EOC (FIGO I-IIA) who underwent primary surgery between 2000 and 2018 were extracted from the FRANCOGYN database. OS and RFS of these two groups, constituted according to the surgical route, were compared using Log rank test. Results: of the 144 patients included, 107 patients underwent laparotomy and 37 underwent laparoscopy for a staging purpose. The median follow-up was 36.0 months (18.0 to 58.0). For the laparoscopy and the laparotomy group, the median follow-up period was 24 (11.0 to 50.0) and 42.0 (24.0 to 66.0) months, respectively, (p &lt; 0.001). Tumor recurrence occurred in 33 (23%) patients: 2 (5.4%) in the laparoscopy group and 31 (29%) in the laparotomy group (p = 0.08). The OS rate at 5 years was 97.3% after laparoscopy and 79.8% after laparotomy (p = 0.19). Conclusions: there is no difference associated with the laparoscopic approach for the staging of early stage EOC on RFS and OS in comparison with laparotomy. MIS may be proposed as a safe and adequate alternative to laparotomy when performed by well-trained surgeons. abstract_id: PUBMED:34421313 Primary Laparoscopic Surgery Does Not Affect the Prognosis of Early-Stage Ovarian Clear Cell Cancer. Purpose: Minimally invasive surgery (MIS) is performed frequently in early-stage ovarian cancer patients, especially in ovarian clear cell carcinoma (OCCC). The aim of this study was to investigate whether primary laparoscopic surgery influences prognosis in patients with early-stage OCCC. Patients And Methods: Patients with International Federation of Gynecology and Obstetrics (FIGO) stage I OCCC were retrospectively reviewed in two hospitals between April 2010 and August 2020. Clinical data were abstracted, and patients were followed up until February 2021. Patients were divided into open surgery (laparotomy) and laparoscopy groups, and the Kaplan-Meier method was applied to compare progression-free survival (PFS) and overall survival (OS) between the groups. Statistical differences were determined by the Log rank test. Results: Eighty-nine patients were included in the study; 20 (22.5%) and 69 (77.5%) patients underwent laparoscopic and open surgery, respectively. The patients' characteristics were well-balanced except that patients in the laparoscopy group tended to have smaller tumors and lower frequency of omentectomy and lymphadenectomy compared with the open surgery group. The median follow-up duration was 42.6 and 36.5 months in the laparoscopy and open surgery groups, respectively. Nine (10.1%) patients developed recurrence, and 4 (4.5%) died of the disease; all in the open surgery group. The estimated 2-year PFS rates were 100.0% and 90.1%, and the estimated 5-year OS rates were 100.0% and 91.9% in the laparoscopy and open surgery groups, respectively. No significant survival differences were found between the groups. Conclusion: Survival was not compromised when primary laparoscopic surgery was performed in early-stage OCCC patients. A well-designed randomized controlled trial is warranted. abstract_id: PUBMED:37894466 Laparoscopic Fertility-Sparing Surgery for Early Ovarian Malignancies. The demand for fertility-sparing surgery (FSS) has increased in the last decade due to increased maternal age, increased incidence of ovarian malignancies in younger patients, and technical advances in surgery. Data on oncological safety and fertility outcomes of patients with ovarian cancer after laparoscopic FSS are sparse, but some retrospective studies have shown that open FSS may be offered to selected patients. We assessed the role of minimally invasive FSS in comparison with radical surgery (RS) in terms of oncological safety and reproductive outcomes after FSS in this multicenter study. Eighty patients with FIGO stage I/II ovarian cancer treated with laparoscopic FSS or RS between 01/2000 and 10/2018 at the participating centers (comprehensive gynecological cancer centers with minimally invasive surgical expertise) were included in this retrospective analysis of prospectively kept data. Case-control (n = 40 each) matching according to the FIGO stage was performed. Progression-free survival [150 (3-150) and 150 (5-150) months; p = 0.61] and overall survival [36 (3-150) and 50 (1-275) months; p = 0.65] did not differ between the FSS and RS groups. Eight (25.8%) women became pregnant after FSS, resulting in seven (22.5%) deliveries; three (37.5%) patients conceived after in vitro fertilization, and five (62.5%) conceived spontaneously. Laparoscopic FSS seems to be applicable and oncologically safe for patients with early-stage ovarian cancer, with adequate fertility outcomes. abstract_id: PUBMED:27040423 Feasibility and surgical outcomes of conventional and robot-assisted laparoscopy for early-stage ovarian cancer: a retrospective, multicenter analysis. Purpose: We aimed to investigate the safety, adequacy and oncological outcomes of laparoscopic surgery (LS) and robot-assisted laparoscopic (RALS) approach for the treatment of early-stage ovarian cancer. Methods: We performed a multicentric, retrospective cohort study, enrolling patients affected by early-stage ovarian cancer who underwent laparoscopic management for early-stage ovarian cancer between 2006 and 2014. Surgical, pathologic and oncologic outcome data were analyzed to compare LS and RALS performances for early-stage ovarian cancer management. Results: 39 patients underwent laparoscopic staging for presumed stage I ovarian cancer: 23 underwent LS and 16 underwent RALS. The mean operative time was 281 ± 81 min (LS 288 ± 88 min; RALS 270 ± 72 min; p = 0.49). No conversion to laparotomy occurred, and one patient had intraoperative hemorrhage requiring blood transfusion. Four patients (10.2 %) experienced postoperative complications of grade 3 according to the Clavien-Dindo classification. The median hospital stay was 3 days (1-15); the differences were not statistically significant between two groups [LS = 4 (1-15); RALS = 3 (1-7); p = 0.43]. During a mean follow-up period of 19.4 months, tumor recurrence occurred in 3 patients: 2 (8.7 %) in the LS group and 1 (6.25 %) in the RALS group. Overall survival and disease-free survival for the entire cohort were 97.4 and 92.3 %, respectively. Conclusions: LS and RALS seem to be adequate and feasible for the treatment of early-stage ovarian cancer in terms of the surgical outcomes and oncological safety. Furthermore, in our experience, perioperative outcomes are comparable between LS and RALS making them an acceptable approach in selected patients. abstract_id: PUBMED:35574324 Comparison of Laparoscopic and Open Surgery for Women With Early-Stage Epithelial Ovarian Cancer. Objective: This study evaluated the oncologic outcomes of laparoscopy and laparotomy in the management of early-stage ovarian cancer patients. Methods: We conducted an observational study of women diagnosed with International Federation of Gynecology and Obstetrics (FIGO) 2014 stage I ovarian cancer who underwent surgery at the West China Second University Hospital from 2012 to 2020. Patients who received adjuvant chemotherapy before surgery, those with non-epithelial histopathological types, or those with insufficient data were excluded. Using propensity score matching, data from consecutive laparoscopic patients treated by laparoscopy were matched 1:2 with a cohort of patients undergoing open surgery. The operative and survival outcomes among the matched cohorts were examined using the Kaplan-Meier method. Results: Among 200 eligible patients, 74 patients undergoing laparoscopy were compared with a cohort of 126 patients undergoing open surgery. Baseline characteristics were similar between groups after matching. Patients who had laparoscopy had a shorter operative time (P = 0.001), a shorter hospital stay (P &lt;0.001), and lower blood loss (P = 0.001) than patients who had open surgery. The median (range) follow-up period was 43.0 (38.8-47.2) and 45.0 (36.0-54.0) months for cases and controls, respectively (P &lt;0.001). There are no significant differences in progression-free survival (P = 0.430, log-rank test) and overall survival (P = 0.067, log-rank test) between the two groups. Conclusions: There is no difference in prognosis between laparoscopic and open surgery in women with stage I epithelial ovarian cancer. Laparoscopic treatment of early-stage ovarian cancer is safe and feasible for stage I epithelial ovarian cancer patients. abstract_id: PUBMED:24761214 Comparison of laparoscopy and laparotomy for the management of early-stage ovarian cancer: surgical and oncological outcomes. Objective: To investigate the surgical and oncological outcomes of laparoscopic surgery compared with laparotomy for the treatment of early-stage ovarian cancer. Methods: Data from patients who underwent surgical management for early-stage ovarian cancer between 2006 and 2012 were retrospectively reviewed. All patients presented with stage I or II disease, and underwent comprehensive staging surgery consisting of a total hysterectomy, bilateral salpingo-oophorectomy, pelvic and para-aortic lymphadenectomy, omentectomy, and peritoneal cytology. Results: Seventy-seven patients who underwent laparoscopic surgery (24 patients) or laparotomy (53 patients) were identified. Surgery for none of the patients was converted from laparoscopy to laparotomy. The mean operation time was shorter and the estimated blood loss was lower in the laparoscopy group than in the laparotomy group, though the differences were not statistically significant (193 min vs. 224 min, p=0.127; 698 mL vs. 973 mL, p=0.127). There were no differences in the intraoperative or postoperative complications. During a mean follow-up period of 31 months, tumor recurrence occurred in 4 patients: 2 (8.3%) in the laparoscopy group and 2 (3.8%) in the laparotomy group. The mean disease-free survival was 59 months after laparoscopy and 66 months after laparotomy (p=0.367). Conclusion: Laparoscopic surgery seems to be adequate and feasible for the treatment of early-stage ovarian cancer with comparable results to laparotomy in terms of the surgical outcomes and oncological safety. abstract_id: PUBMED:23318909 Laparoscopic management of early-stage malignant nonepithelial ovarian tumors: surgical and survival outcomes. Objective: Laparoscopic management in patients with malignant nonepithelial ovarian tumors (MNEOTs) was unpopular owing to the solid nature and relatively large size of the tumors. The purpose of this study was to evaluate the role of laparoscopy for MNEOTs. Methods: Between January 1989 and September 2010, 28 patients with MNEOTs underwent laparoscopic surgery at our institution. These patients' clinicopathologic data were retrospectively reviewed from medical records. Results: Cases included 20 sex cord-stromal tumors (18 granulosa cell and 2 Sertoli-Leydig cell) and 8 malignant germ cell tumors (4 dysgerminomas, 2 immature teratomas, 1 choriocarcinoma, and 1 yolk sac tumor). The patients' median age was 27 years (range, 16-35 years) for those with malignant germ cell tumors and 42 years (range, 7-57 years) for those with stromal tumors. The median primary tumor diameter was 10.4 cm (range, 3.3-20.8 cm). Laparoscopic pelvic and para-aortic lymph node dissections were performed in 9 cases. Laparoscopic removal of primary tumor and omentectomy were performed in 26 and 6 cases, respectively. Hand-assisted laparoscopic surgery was performed for one huge tumor that could not be entered into the endobag. The median operating time was 102 minutes (range, 45-300 minutes), and the median postoperative hospital stay was 3 days (range, 2-10 days). All patients had stage I disease. Five patients received adjuvant chemotherapy, and the median interval to chemotherapy was 14 days (range, 2-21 days). No intraoperative complication or conversion to laparotomy was observed. Only one postoperative febrile morbidity occurred. The median follow-up was 34.5 months (1-185 months). One patient developed recurrence, which was treated with chemotherapy. No patient died of their disease. Conclusion: This is the first case series report of laparoscopic surgery for MNEOTs. Laparoscopic management seems feasible and safe without compromising survival. With additional evidence, laparoscopic surgery could be a safe therapeutic option for management of early-stage MNEOTs. Answer: The laparoscopic management of early-stage ovarian cancer is a topic of ongoing debate among gynecologic oncologists. However, the current body of evidence suggests that it is a feasible, safe, and adequate approach when performed by experienced surgeons. Several studies have demonstrated the safety and feasibility of laparoscopy for early-stage ovarian cancer. A retrospective study found that laparoscopic staging appears to be comprehensive and feasible, with perioperative benefits and without compromising surgical morbidity and oncological outcomes (PUBMED:24475574). Another study indicated that adequate lymphadenectomy, which is a critical component of staging, is associated with improved survival, and the quality of lymphadenectomy has significantly improved over time (PUBMED:30022633). Comparative studies have also been conducted to evaluate the outcomes of laparoscopic versus open surgery. One multicenter retrospective study found no difference in recurrence-free survival (RFS) and overall survival (OS) between laparoscopic staging and laparotomy for early-stage epithelial ovarian cancer (EOC), suggesting that minimally invasive surgery (MIS) may be a safe and adequate alternative to open surgery (PUBMED:33142772). Similarly, another study concluded that primary laparoscopic surgery does not affect the prognosis of early-stage ovarian clear cell cancer (PUBMED:34421313). Laparoscopic fertility-sparing surgery (FSS) for early ovarian malignancies has also been assessed, with findings indicating that it is oncologically safe and offers adequate fertility outcomes (PUBMED:37894466). Additionally, the feasibility and surgical outcomes of both conventional and robot-assisted laparoscopy for early-stage ovarian cancer have been reported to be comparable, with no significant differences in perioperative outcomes (PUBMED:27040423). A study comparing laparoscopic and open surgery for women with early-stage epithelial ovarian cancer found no difference in prognosis between the two approaches, further supporting the safety and feasibility of laparoscopic treatment for stage I epithelial ovarian cancer patients (PUBMED:35574324). Another study echoed these findings, showing that laparoscopic surgery is adequate and feasible for the treatment of early-stage ovarian cancer with comparable results to laparotomy in terms of surgical outcomes and oncological safety (PUBMED:24761214).
Instruction: Cancer diagnosis: do we want to know the truth? Abstracts: abstract_id: PUBMED:23282622 Truth telling of Alzheimer's disease diagnosis Alzheimer's disease is becoming an increasingly common problem due to population aging. Most of the research on truth telling in relation to diagnosis has been done in oncology. However, although growing, there has a lack of interest about attitudes held among physicians towards disclosing the diagnosis of Alzheimer's disease. Physicians, family caregivers and patients have different views about it. The reasons most often given for communicating the diagnosis are the right to know, relief of anxiety to know the cause of memory problems, early access to treatment and ability to plan ahead. On the contrary, the reasons for concealing the diagnosis are based on the right not to know, the anxiety associated to knowing the diagnosis and the absence of curative therapies for the disease. The aim of this paper is to report the current state of literature on diagnostic truth telling in dementia, review the ethical principles involved, and finally give a strategy to address the issue. abstract_id: PUBMED:32129038 Truth-telling or Not: A Dilemma for Health Care Providers Regarding Disclosure of Cancer in China. Truth-telling to cancer patients challenges health care providers in China. Providers confront a series of cultural, ethical, and legal dilemmas in terms of patients' right to know and autonomy. Underlying reasons for truth-telling dilemmas include traditional culture, the role of family, and ambiguity about patients' right to know in accordance with Chinese laws and regulations. These factors complicate ethical judgment by requiring interaction with traditional Chinese culture and laws before health care providers deliver bad news to patients with a range of conditions, including cancer. In this article ethical questions are raised, and strategies concerning breaking bad news are suggested for Chinese health care providers. abstract_id: PUBMED:32649335 Gender Differences in Cancer Patients' Preferences for Truth-Telling in Taiwan. Background: There is a lack of studies investigating gender differences in cancer truth-telling, especially from the Asia Pacific countries. Objective: The aim of this study was to compare preferences for cancer truth-telling between male and female cancer patients in Taiwan. Methods: We used a quantitative cross-sectional design. Cancer patients (aged ≥18 years) were enrolled from 6 hospitals across Taiwan. A Taiwanese version of the Japanese truth-telling questionnaire was used to collect data. Multiple regression and independent t test were used for analysis. Results: A total of 590 patients (49.6% male, 50.4% female) participated in this study. A multiple regression showed no gender differences in total scale and subscales (setting, method of disclosure, additional information, and emotional support). However, in item analysis, we found that female patients preferred detailed medical information (t = -1.973, P = .049). Both male and female patients preferred to know their medical condition (t = -1.259, P = .209), direct and frank truth-telling (t = -0.262, P = .794), and assistance in treatment decision-making (t = -0.467, P = .641). There was no gender difference in willingness to know the life expectancy (t = -0.769, P = .442) and families' presence during truth-telling (t = -0.416, P = .678). Conclusion: Few gender differences exist in the preferences of truth-telling for cancer patients. Implications For Practice: Our findings can increase the sensitivity of truth-telling among nurse and other healthcare personnel when taking care of cancer patients of different genders and thereby likely improve the quality of cancer care. abstract_id: PUBMED:17081920 Telling the truth to patients with cancer: what is the truth? Attitudes and practices of truth-telling to people with cancer have shifted substantially in the past few years. However, cultural and individual differences persist, and some difficulties common to all medical specialties are magnified in oncology. In this Personal View, I review and analyse data for attitudes and practices of truth-telling worldwide. I also assess ethical justifications, with special reference to interpersonal aspects of patients' autonomy and the dynamic nature of truth in the clinical context. Examples are provided to show how this ethical perspective can help oncologists to frame the discourse on truth-telling and to find solutions to the dilemmas of whether, when, and how to tell the truth to their patients in clinical practice. Finally, I identify future targets for research. abstract_id: PUBMED:21595528 The perceptions of physicians in southeast Nigeria on truth-telling for cancer diagnosis and prognosis. Background: The perceptions of Nigerian physicians on truth-telling for cancer diagnosis and prognosis have not been widely studied. There is a need to know the perception of the doctors on truth telling so as to inform appropriate professional education on the subject. Objectives: To ascertain the perceptions of the physicians on truth-telling for cancer diagnosis and prognosis. Methods: A cross-sectional study was done with a self-administered questionnaire to 228 physicians available in the clinics and seminars at the hospital between January and April 2010. Results: A total of 173 questionnaires were returned. Eighty-one (46.8%) always, 54 (31.2%) generally, and 38 (22%) rarely disclose cancer diagnosis and favorable prognosis to patients. Only 7.5% would disclose the truth of the prognosis to patients when the cancer is advanced. Physicians' age, specialty, training in palliative care, and doctors' views on truth disclosure if he/she had cancer significantly influenced the doctors' practice of truth-telling for cancer diagnosis. Conclusion: The physicians who treat cancer patients in southeast Nigeria tend to practice truth-telling for cancer diagnosis but not for a poor prognosis. Most of the physicians need training in physician-patient communication. abstract_id: PUBMED:28537956 The Differences in Preference for Truth-telling of Patients With Cancer of Different Genders. Background: Patients' personality traits, especially age, gender, and cancer stage, tend to affect doctors' truth-telling methods. However, there is a lack of studies investigating the influence of patients' gender on truth-telling, especially for Asian cultures. Objective: The aims of this study were to qualitatively investigate the differences in preferences for truth-telling for patients with cancer of different genders and explore patients' preferences for decision making. Methods: For this descriptive qualitative study, in-depth interviews were conducted with 20 patients with cancer (10 men and 10 women) using a semistructured interview guide. All interviews were audiotaped and transcribed verbatim. Data collection and analysis occurred concurrently; content analysis developed categories and themes. Results: Data analysis revealed 2 themes: (1) similar gender preferences for truth-telling and decision making: knowledge of their medical condition, direct and frank truthfulness, and assistance in decision making for subsequent treatment programs, and (2) preferences in truth-telling that differed by gender: women wanted family members present for confirmation of diagnosis, whereas men did not; men preferred truth-telling for only key points of their cancer, whereas women wanted detailed information; and men did not want to know their survival period, whereas women wanted this information. Conclusions: Our study revealed similar gender preferences for truth-telling regarding knowledge and decision making; however, preferences differed for family support, scope of information, and survival time. Implications For Practice: These findings can serve as a reference for nurses and other healthcare personnel when implementing truth-telling for patients given a diagnosis of cancer. Strategies can be targeted for specific preferences of men and women. abstract_id: PUBMED:27352417 The Truth about Truth-Telling in American Medicine: A Brief History. Transparency has become an ethical cornerstone of American medicine. Today, patients have the right to know their health information, and physicians are obliged to provide it. It is expected that patients will be informed of their medical condition regardless of the severity or prognosis. This ethos of transparency is ingrained in modern trainees from the first day of medical school onward. However, for most of American history, the intentional withholding of information was the accepted norm in medical practice. It was not until 1979 that a majority of physicians reported disclosing cancer diagnoses to their patients. To appreciate the current state of the physician-patient relationship, it is important to understand how physician-patient communication has developed over time and the forces that led to these changes. In this article, we trace the ethics and associated practices of truth-telling during the past two centuries, and outline the many pressures that influenced physician behavior during that time period. We conclude that the history of disclosure is not yet finished, as physicians still struggle to find the best way to share difficult information without causing undue harm to their patients. abstract_id: PUBMED:18235406 Which truth for patients and their family Must the truth always be told to a cancer patient and/or his relatives? Taking a personal experience as the basis for discussion, the author examines this question in the context of Western cultural norms where death is taboo. The legal obligations to inform patients and the representation of cancer are discussed. Two key situations are considered: the delivery of a diagnosis of cancer and the announcement of a bad prognosis. What does it really mean "to tell the truth"? A best strategy for giving information to relatives is developed. The author's conclusion is that it seems more important to establish a "true" relationship with the cancer patient and his relatives than telling or not telling the whole truth. abstract_id: PUBMED:15256295 The attitude toward truth telling of cancer in Taiwan. Objective: This study examined a group of Taiwanese subjects at a medical university hospital regarding their attitudes toward truth telling of cancer. Method: Self-report survey with convenience sampling of 195 participants admitted for a 3-day comprehensive health examination in a medical university hospital in Taiwan. Three instruments used to collect the data included the Brief Psychiatric Symptom Rating Scale (BPSRS), Chinese Health Questionnaire (CHQ), and the Attitude Toward Truth Telling of Cancer List. Results: Once diagnosed with cancer, 92.3% of the participants preferred being told the truth about their diagnosis and 7.7% did not. Age, education, and employment were found to differ between disclosure and nondisclosure groups. The latter group also tended to have higher depression and hostility scores on the BPSRS and higher minor psychiatric morbidity scores. A total of 62.6% of the participants preferred that doctors tell a relative the truth about their cancer diagnosis, while 37.4% preferred that doctors not tell a relative the truth. The distributions of demographic data and mental status did not significantly differ between disclosure and nondisclosure groups if a relative was to be the cancer victim. Conclusions: A majority of subjects in Taiwan would prefer to know the truth if victimized by a cancer disease, despite the supposed influence of Chinese culture. Furthermore, attitudes toward truth telling of cancer differed between relatives of patients and the patients themselves. Relatives of cancer patients were more likely to follow to the principle of beneficence, whereas the patients themselves were more likely to follow to the principle of autonomy. abstract_id: PUBMED:18547350 To tell or not to tell: attitudes of Chinese oncology nurses towards truth telling of cancer diagnosis. Aims And Objectives: To investigate the attitude of oncology nurses towards whether and how to disclose diagnoses to patients with early-stage cancer or terminal illness. Background: The attitudes of patients and doctors towards the disclosure of cancer diagnosis differed from culture to culture. However, little research has focused on the attitudes of Chinese oncology nurses. Design: Survey. Methods: A questionnaire investigating nurses' attitudes towards truth telling was delivered to 243 Chinese oncology nurses. Results: One hundred and ninety-nine (819%) nurses completed the questionnaire. 814% of the nurses reported that patients with early-stage cancer should be informed of the diagnosis, while only 442% believed that patients with terminal illnesses should know the truth (p &lt; 0001). Nurses who preferred truth telling reported that patients with early or terminal stages of cancer should be informed by the doctor in charge (765% vs. 739%, respectively; p &gt; 005), immediately after the diagnosis (759% vs. 795%, respectively) and in a quiet and undisturbed room (809% vs. 705%, respectively; p &gt; 005). Nurses' attitudes towards truth telling of terminal cancer were influenced by their educational level and work experience. Conclusion: Oncology nurses differed in their attitudes towards truth telling of different stages of cancer. Nurses who preferred disclosure reported that cancer patients should be informed by the doctor in charge immediately after the diagnosis and in a quiet and undisturbed room. Relevance To Clinical Practice: Many Chinese doctors, patients and their relatives believed that patients with terminal illness should not know their diagnosis. Thus, oncology nurses need additional training to deal with these situations. Answer: The question of whether individuals want to know the truth about a cancer diagnosis is complex and varies across cultures, individual preferences, and specific circumstances. Research indicates that attitudes toward truth-telling in the context of a cancer diagnosis differ significantly among patients, healthcare providers, and within different cultural settings. In Western cultures, there has been a shift towards greater transparency and the ethical principle that patients have the right to know their health information, with the expectation that they will be informed of their medical condition regardless of severity or prognosis (PUBMED:27352417). However, the historical norm in American medicine was the intentional withholding of information, and it was not until 1979 that a majority of physicians reported disclosing cancer diagnoses to their patients (PUBMED:27352417). In China, healthcare providers face cultural, ethical, and legal dilemmas regarding patients' right to know and autonomy, influenced by traditional culture, the role of family, and ambiguity about patients' right to know according to Chinese laws and regulations (PUBMED:32129038). Similarly, in Taiwan, while most patients preferred to know their medical condition, there were gender differences in preferences for truth-telling, with female patients preferring more detailed medical information (PUBMED:32649335; PUBMED:28537956). In Nigeria, physicians tend to practice truth-telling for cancer diagnosis but not for a poor prognosis, with most needing training in physician-patient communication (PUBMED:21595528). In Taiwan, a majority of subjects preferred to know the truth if diagnosed with cancer, showing a preference for autonomy over the principle of beneficence that might guide relatives' preferences (PUBMED:15256295). Oncology nurses in China reported that patients with early-stage cancer should be informed of the diagnosis, but fewer believed that patients with terminal illnesses should know the truth, with attitudes influenced by educational level and work experience (PUBMED:18547350). Overall, the literature suggests that while many patients do want to know the truth about their cancer diagnosis, there are significant variations based on cultural norms, gender, the stage of cancer, and individual preferences. Healthcare providers must navigate these complexities to respect patient autonomy while also providing care that is sensitive to individual and cultural values (PUBMED:17081920; PUBMED:18235406).
Instruction: Are there predictors of outcome following rectovaginal fistula repair? Abstracts: abstract_id: PUBMED:20706066 Are there predictors of outcome following rectovaginal fistula repair? Background: Rectovaginal fistula is a distressing condition for patients and for physicians who are continuously challenged in providing durable treatment options. The aim of this study is to assess the results of rectovaginal fistula repair and identify predictive factors for poor outcome. Methods: Retrospective analysis of patients who underwent rectovaginal fistula repair from 1988 to 2008 was performed. chi tests and logistical regression analysis were used to study treatment outcomes according to the following fistula characteristics: etiology, size, location, and number of prior attempts at fistula repair. In addition, patient factors such as age, body mass index, smoking history, comorbid condition of diabetes, use of steroid and immunosuppressive medications, number of prior vaginal deliveries, and presence of a diverting stoma were analyzed. Results: A total of 184 procedures were performed in 125 patients. Inflammatory bowel disease was the most common indication for surgery (45.6%), followed by obstetric injury (24%) and surgical trauma (16%). The mean duration of fistula presence was 31.2 months. The procedures performed included endorectal advancement flap (35.3%), gracilis muscle interposition (13.6%), seton placement (13.6%), and transperineal (8.7%) and transvaginal repair (8.1%). The overall success rate per procedure was 60%, with no difference in recurrence rates based on the type of repair. Patients with Crohn's disease had more recurrent fistulas (44.2% success per procedure; P &lt; .01), although 78% eventually healed after an average of 1.8 procedures. Patients with obstetric injuries had an 89% success rate after an average of 1.3 procedures per patient, which is similar to the success rate for traumatic fistulas. Pouch vaginal fistulas had a 91% success rate after an average of 1.6 procedures per patient. The overall success rate per patient was 88% after multiple procedures with a mean follow-up of 16.3 months. Age, body mass index, diabetes, use of steroids and immunosuppressive agents, size and location of the fistula, number of vaginal deliveries, time interval between a recurrent episode and subsequent repair, and the presence of fecal diversion did not affect outcomes. The presence of Crohn's disease and a smoking history are strongly associated with rectovaginal fistula recurrence (P = .02). Conclusions: Despite a relatively low initial success rate (60%), most rectovaginal fistulas can be successfully repaired with subsequent operations. Crohn's disease and smoking are associated with adverse outcomes. abstract_id: PUBMED:28574159 Bladder spasm following urogenital fistula repair. Objective: To examine the occurrence of bladder spasms following surgical repair of urogenital fistula. Methods: The present retrospective study included data from patients who underwent surgical repair of urogenital fistula at the National Obstetric Fistula Centre, Abakaliki, Nigeria, between June 1, 2015, and May 31, 2016. Patients who underwent rectovaginal fistula repair and those who experienced persistent postoperative pain requiring high doses of analgesia were excluded. Bladder spasm was defined as the sudden onset of intermittent pain in the region of the bladder lasting for short periods of time in patients who were previously comfortable with routine postoperative analgesia. The incidence of bladder spasm was calculated and the presence of an association between repair outcome and bladder spasms was investigated. Results: There were 133 patients included in the present study with a mean age of 36 ± 11 years. Bladder spasms were experienced by 60 (45.1%) patients during the study period; of these patients, failed fistula repair was recorded for 13 (22%). Good surgical outcome (closed fistula) was associated with not experiencing postsurgical bladder spasms (P=0.044). Conclusion: The incidence of bladder spasm following surgical repair of urogenital fistula appeared high. The occurrence of bladder spasms could influence repair outcomes. abstract_id: PUBMED:26767081 Predictors of Outcome of Rectovaginal Fistula Surgery in Women With Crohn's Disease. Background: Despite measures taken to control traditional risk factors like increased rectal disease activity, anorectal stenosis, and, to a lesser extent, obesity, rates of poor outcomes after rectovaginal fistula (RVF) surgery in women with Crohn's disease (CD) are still high and require further elucidation. To bridge the gap, we sought to identify risk factors beyond the aforementioned determinants that may be associated with poor outcomes in these patients. Methods: We designed a retrospective, unmatched, case-control study to test our hypothesis. The population comprised women with CD who underwent RVF surgery between the years of 2000 and 2013. Cases were defined as surgeries with an unsuccessful outcome at 4 weeks post-surgery, and controls represented those with a successful outcome at 4 weeks post-surgery. With multivariable logistic regression models, we sought to identify pertinent risk factors for poor outcomes after RVF surgery in these women. Results: Of the 108 women with CD who were referred to our institution for evaluation of RVFs between 2000 and 2013, 39 underwent a total of 63 surgeries, of which 16 were cases and 47 were controls. There were no significant differences between them with regard to medications and clinical or mucosal disease severity, but a significantly higher proportion of poor outcomes arose from the group of women who underwent a mucosal flap procedure (88%) compared to those who had a seton placed (13%; P = 0.0004). The final adjusted logistic regression model demonstrated that women who underwent a mucosal flap procedure instead of a seton placement were 17.5 times more likely to have a poor surgical outcome (odds ratio (OR): 17.51; 95% confidence interval (CI): 3.12 - 98.43; P = 0.0012). Moreover, it was seen that women with active colonic mucosal disease, independent of rectal disease activity, were 4.4 times more likely to have a poor outcome (OR: 4.40; 95% CI: 1.06 - 18.26). Conclusion: Mucosal flap procedures, representing the second, or definitive, stage in surgical treatment of RVF are associated with much worse outcomes and therefore should be preceded by aggressive medical optimization of the patient. abstract_id: PUBMED:12077640 The outcome of transanal advancement flap repair of rectovaginal fistulas is not improved by an additional labial fat flap transposition. Transanal advancement flap repair (TAFR) has been advocated as the treatment of choice for patients with low rectovaginal fistulas. Recently, several studies have reported a significantly lower healing rate. We also encountered low healing rates after TAFR. In an attempt to improve our results, we added labial fat flap transposition (LFFT) to the TAFR of rectovaginal fistulas. The aim of the present study was to evaluate the outcome after TAFR and to investigate the impact of an additional LFFT. Between 1991 and 1997, 21 consecutive patients of median age 33 years underwent TAFR. The etiology of the fistulas was: obstetric injury (n=9), cryptoglandular abscess (n=8) and wound infection after anterior anal repair (n=4). The first 9 patients underwent TAFT with (n=3) or without (n=6) anterior anal repair. In the following 12 patients, LFFT was added to the advancement flap. In 4 of these a concomitant anterior anal repair was performed. The median follow-up was 15 months. The overall healing rate was 48%. In the first 9 patients, in whom no additional LFFT was performed, the rectovaginal fistula healed in 4 cases (44%). In the following 12 patients in whom an additional LFFT was performed, a similar healing rate was observed (50%). In conclusion, the outcome of transanal advancement flap repair of rectovaginal fistulas is poor. Addition of a labial fat flap transposition does not improve this outcome. abstract_id: PUBMED:30987849 Predictors of Obstetric Fistula Repair Outcomes in Lubango, Angola. Objective: Obstetric fistulas have a significant physical and social impact on many women in Angola. The majority of the population of this sub-Saharan African nation does not have access to high-quality obstetric care, and this is associated with a risk of prolonged labour and formation of obstetric fistulas. Fistulas are challenging to correct surgically and may require repeated operations. The objective of the study was to determine predictors of successful obstetric fistula repair. Methods: In this retrospective study, data from all recorded cases of fistula repair performed between July 2011 and December 2016 at the Centro Evangélico de Medicina do Lubango (CEML) hospital located in Lubango, Angola, were reviewed. Analysis of the data was carried out to determine factors affecting the success of fistula repair; parametric and non-parametric tests were used for group comparisons and logistic regression for outcome prediction (Canadian Task Force classification II-2). Results: A total of 407 operations were performed on 243 women. Of these, 224 women were diagnosed with a vesicovaginal fistula and 19 with a combined vesicovaginal and rectovaginal fistula. The success rate for the attempted repairs was 42%. On multivariate analysis, the success of first surgery was negatively affected by the difficulty of repair (odds ratio 0.28; P &lt; 0.01). For patients requiring repeat surgery, the odds of success were increased with each subsequent operation (odds ratio 5.32; P &lt; 0.01). Conclusion: Although fistulas rated as difficult to repair had a higher likelihood of initial failure, successive attempts at repair increased the likelihood of a successful outcome. abstract_id: PUBMED:12594672 Smoking affects the outcome of transanal mucosal advancement flap repair of trans-sphincteric fistulas. Background: The aim of the study was to identify variables affecting the outcome of transanal advancement flap repair (TAFR) for perianal fistulas of cryptoglandular origin. Methods: Between 1995 and 2000, a consecutive series of 105 patients (65 women, 40 men), with a median age of 44 (range 19-72) years was included in the study. The patients were recruited from the colorectal departments of two university medical centres. Patients with a rectovaginal fistula and those with a fistula due to Crohn's disease were excluded. The following variables were assessed: age, sex, number of previous attempts at repair, preoperative seton drainage, fistula type, presence of horseshoe extensions, location of the internal opening, postoperative drainage, body mass index and the number of cigarettes smoked per day. The results were analysed by means of multiple logistic regression. Results: The median follow-up was 14 months. No differences were observed between the two centres. TAFR was successful in 72 patients (69 per cent). None of the variables affected the outcome of the procedure, except for smoking habit of the patient. In patients who smoked the observed healing rate was 60 per cent, whereas a rate of 79 per cent was found in patients who did not smoke. This difference was statistically significant (P = 0.037). Moreover, a significant correlation was observed between the number of cigarettes smoked per day and the healing rate (P = 0.003). Conclusion: Cigarette smoking affects the outcome of TAFR in patients with a cryptoglandular perianal fistula. abstract_id: PUBMED:35779081 Surgical treatment of rectovaginal fistula-predictors of outcome and effects on quality of life. Purpose: To determine the results after rectovaginal fistula (RVF) repair and find predictors of outcome. Primary objective was fistula healing. Secondary outcomes were morbidity and patient health-related quality of life (HRQoL). Method: An observational study of 55 women who underwent RVF repair including both local procedures and tissue transposition 2003-2018 was performed. Baseline patient and fistula characteristics were registered, combined with a prospective HRQoL follow-up and a general questionnaire describing fistula symptoms. Results: Healing rate after index surgery was 25.5% (n = 14) but the final healing rate was 67.3% (n = 37). Comparing the etiologies, traumatic fistulas (iatrogenic and obstetric) had the highest healing rates after index surgery (n = 11, 45.9%) and after repeated operations at final follow-up (n = 22, 91.7%) compared with fistulas of inflammatory fistulas (Crohn's disease, cryptoglandular infection, and anastomotic leakage) that had inferior healing rates after both index surgery (n = 7, 7.1%) and at final follow-up (n = 13, 46.4%). Fistulas of the category others (radiation damage and unknown etiology) included a small amount of patients with intermediate results at both index surgery (n = 1, 33.3%) and healing rate at last follow-up (n = 2, 66.7%). The differences were statistically significant for both index surgery (p = 0.004) and at final follow-up (p = 0.001). Unhealed patients scored lower than both healed patients and the normal population in 6/8 Rand-36 domains, but the differences were not statistically significant. Conclusions: Most traumatic rectovaginal fistulas closed after repeated surgery whereas inflammatory fistulas had a poor prognosis. Low healing rates after local repairs suggest that tissue transfer might be indicated more early in the treatment process. Unhealed fistulas were associated with reduced quality of life. Trial registration Clinicaltrials.gov No. NCT05006586. abstract_id: PUBMED:33911467 Stapled transperineal repair for low- and mid-level rectovaginal fistulas: A 5-year experience and comparison with sutured repair. Background: Currently, rectovaginal fistula (RVF) continues to be a surgical challenge worldwide, with a relatively low healing rate. Unclosed intermittent suture and poor suture materials may be the main reasons for this. Aim: To evaluate the efficacy and safety of stapled transperineal repair in treating RVF. Methods: This was a retrospective cohort study conducted in the Coloproctology Department of The Sixth Affiliated Hospital of Sun Yat-sen University (Guangzhou, China). Adult patients presenting with RVF who were surgically managed by perineal repair between May 2015 and May 2020 were included. Among the 82 total patients, 37 underwent repair with direct suturing and 45 underwent repair with stapling. Patient demographic data, Wexner faecal incontinence score, and operative data were analyzed. Recurrence rate and associated risk factors were assessed. Results: The direct suture and stapled repair groups showed similar clinical characteristics for aetiology, surgical history, fistula features, and perioperative Wexner score. The stapled repair group did not show superior results over the suture repair group in regard to operative time, blood loss, and hospital stay. However, the stapled repair group showed better postoperative Wexner score (1.04 ± 1.89 vs 2.73 ± 3.75, P = 0.021), less intercourse pain (1/45 vs 17/37, P = 0.045), and lower recurrence rate (6/45 vs 17/37, P = 0.001). There was no protective effect from previous repair history, smaller diameter of fistula (&lt; 0.5 cm), better control of defecation (Wexner &lt; 10), or stapled repair. Direct suture repair and preoperative high Wexner score (&gt; 10) were risk factors for fistula recurrence. Furthermore, stapled repair gave better efficacy in treating complex RVFs (i.e., multiple transperineal repair history, mid-level fistula position, and poor control of defecation). Conclusion: Stapled transperineal repair is advantageous for management of RVF, providing a high primary healing rate and low recurrence rate. abstract_id: PUBMED:27638282 Robotic rectovaginal fistula repair. Minimally invasive surgery for diseases in the pelvic region is gaining popularity due to advances in technology and increased benefit to the patient. As indications for such surgeries increase, the known boundaries for minimal access are being extended by a few teams. We report a patient who underwent robotic-assisted transabdominal repair of a high rectovaginal fistula which developed following a vaginal hysterectomy. Vaginography revealed a communication between the vaginal vault and the upper rectum. After evaluation of the colon and the vagina, the patient was planned for a robotic-assisted rectovaginal fistula repair. The three-arm daVinci(®) surgical robot was used. A total of five ports were used to complete the entire procedure, which included adhesiolysis, re-creation of the vaginal vault, repair of the fistula and omental interposition. This is the first robotic-assisted rectovaginal fistula repair reported to date. Besides the advantages of minimally invasive surgery for the patient, the surgeon benefits from the ease of suturing deep in the pelvis afforded by the articulated robotic arms. abstract_id: PUBMED:37420132 Predictors of outcome for treatment of enterovaginal fistula : Therapeutical strategies for treatment. Background: Enterovaginal fistulas represent a serious complication of various diseases and therapeutic procedures, often associated with complicated clinical courses and massive impairment of quality of life. As underlying conditions and procedures are multifarious, therapeutic approaches are challenging and have to be tailored individually. As the therapeutic management is complex and individualized, multiple surgical interventions might be necessary. Methods: The aim of this study was to identify possible predictors for outcome in the treatment enterovaginal fistula patients. The study was realized as a retrospective analysis. Ninety-two patients treated with enterovaginal fistulas between 2004 and 2016 were analyzed. Patient characteristics, therapeutic data, and endoscopic findings were stratified according to etiology, closure rate and time, as well as recurrence of fistula. Main outcome measure was the overall rate of fistula closure. Results: Overall therapeutic success rate was 67.4%. Postoperatively derived fistulas were most frequent (40.2%), mainly after rectal surgery (59.5%). Postoperative and non-IBD-inflammation associated fistulas had better outcome than IBD-, radiotherapy-, and tumor-related fistulas (p = 0.001). Successful fistula closure was observed more frequently after radical surgical interventions, best results observed after transabdominal surgery (p &lt; 0.001). Fistula recurrence was also less frequently observed after radical surgical therapies (p = 0.029). A temporary stoma was associated with higher incidence of fistula closure (p = 0.013) and lower incidence of fistula recurrence (p = 0.042) in the postoperative subgroup, as well as shortened therapy period in all groups (p = 0.031). Conclusion: Enterovaginal fistulas are a result of various etiologies, and treatment should be adjusted accordingly. A very sustainable, rapid, and persistent therapeutic success can be expected after radical surgical approaches with temporary diverting stoma. This is especially true for postoperatively derived fistulas. Answer: Yes, there are several predictors of outcome following rectovaginal fistula repair. Studies have identified various factors that can influence the success rate of rectovaginal fistula repair surgeries. For instance, the presence of Crohn's disease and a history of smoking are strongly associated with rectovaginal fistula recurrence (PUBMED:20706066). Additionally, the occurrence of bladder spasms following surgical repair of urogenital fistula could influence repair outcomes, with good surgical outcome being associated with not experiencing postsurgical bladder spasms (PUBMED:28574159). In women with Crohn's disease, poor outcomes after rectovaginal fistula surgery were significantly higher in those who underwent a mucosal flap procedure compared to those who had a seton placed (PUBMED:26767081). Furthermore, the outcome of transanal advancement flap repair of rectovaginal fistulas is not improved by an additional labial fat flap transposition (PUBMED:12077640). For obstetric fistula repair, the difficulty of repair negatively affected the success of the first surgery, while the odds of success increased with each subsequent operation (PUBMED:30987849). Smoking was also found to significantly affect the outcome of transanal advancement flap repair in patients with cryptoglandular perianal fistulas (PUBMED:12594672). The etiology of the fistula plays a role in the success rate, with traumatic fistulas (iatrogenic and obstetric) having higher healing rates compared to inflammatory fistulas (PUBMED:35779081). Stapled transperineal repair showed better efficacy in treating complex rectovaginal fistulas, providing a high primary healing rate and low recurrence rate compared to direct suture repair (PUBMED:33911467). Lastly, enterovaginal fistulas resulting from postoperative and non-IBD-inflammation had better outcomes than those related to IBD, radiotherapy, and tumors. Radical surgical interventions, especially with temporary diverting stoma, were associated with higher incidence of fistula closure and lower incidence of fistula recurrence (PUBMED:37420132).
Instruction: Does carbohydrate-deficient transferrin have diagnostic value in non-alcoholic fatty liver disease? Abstracts: abstract_id: PUBMED:19617184 Does carbohydrate-deficient transferrin have diagnostic value in non-alcoholic fatty liver disease? Unlabelled: Non-alcoholic fatty liver disease (NAFLD) is an acquired metabolic liver disease in the consequence of triglyceride accumulation within liver cells. It is accompanied by necrobiotic inflammatory reaction, fibrosis and also by liver cirrhosis. Its distinction from alcoholic fatty liver disease (AFLD) can be difficult. Clear anamnesis may help, but if the patient does not inform us about the alcohol consumption, the differentiation is not easy. Aim: Our aim was to determine the value of carbohydrate deficient transferrin (CDT) in patients with non-alcoholic fatty liver disease, as well as to analyze the background of high CDT values, according to the anamnesis. Patients And Methods: 39 patients (21 female and 18 male) with ultrasound signs of fatty liver disease were studied. CDT, body mass index (BMI) and HOMA index were determined. Results: Mean value of CDT was: 2.39+/-0.52% (in male 2.51+/-0.61, in female 2.28+/-0.4). No significant difference was found among the two sexes. The diversity of CDT values was normal. The value of BMI belonged to the overweight area without any significant difference between the two sexes. Conclusion: On the basis of the results, NAFLD and AFLD can be differentiated according to the CDT value, and this value could be a higher specific value than the activity of gammaglutamyl transpeptidase, however, anamnesis and other factors are as well very important in achieving the correct diagnosis. abstract_id: PUBMED:12934779 Value of determining carbohydrate-deficient transferrin isoforms in the diagnosis of alcoholic liver disease. Objective: To determine whether isoform separation of carbohydrate-deficient transferrin (CDT) is of value in the diagnosis of alcoholic liver disease (ALD) and is specific to ALD when compared with other liver diseases. Patients And Methods: During 1995 and 1996, 47 patients with ALD were evaluated with CDT at the Mayo Clinic in Rochester, Minn. The diagnosis of ALD was based on biochemical and histological analyses and on a history of drinking that exceeded 5 years with an average alcohol intake of more than 60 g/d. Disease controls included nonalcoholic steatohepatitis (NASH) (n = 26) and other liver disease (n = 22). Normal controls (n = 21) were healthy individuals without liver disease. Transferrin isoforms were quantified by densitometry of Coomassie-stained transferrins after affinity purification and isoelectric focusing. The pentasialo, tetrasialo, trisialo, disialo, monosialo, and asialo isoforms were quantified as percentages of total band densities. Results: Receiver operating characteristic (ROC) curves were constructed for each isoform. The curves for total desialated isoforms (sum of disialo, monosialo, and asialo) displayed the best relationship between sensitivity and specificity with an ROC-area under the curve (AUC) of 0.922. The ROC-AUC values for individual transferrin isoforms in ALD vs NASH for pentasialo, tetrasialo, trisialo, disialo, monosialo, and asialo were 0.806, 0.917, 0.885, 0.933, 0.804, and 0.785, respectively. Only 58% of patients with ALD were detected at a specificity that excluded ALD in 84% of those who did not have it. Conclusion: Within alcohol ingestion times reported to us, no associations with recent drinking were observed. Alcohol as a cause of liver disease is not perfectly established by CDT analysis, although a high total CDT value favors ALD over NASH. abstract_id: PUBMED:16385229 Use of serum carbohydrate-deficient transferrin values to exclude alcoholic hepatitis from non-alcoholic steatohepatitis: a pilot study. Background: Since no clinical or biochemical parameters allow an accurate diagnosis of non-alcoholic steatohepatitis (NASH), the diagnosis by exclusion of alcoholic hepatitis is necessary. However, it is difficult to get the accurate amount of alcohol consumed from the patients, especially from females. To differentiate between NASH and alcoholic hepatitis, we investigated whether serum biochemical markers of chronic alcohol abuse are useful or not. Methods: Sera were obtained from 13 patients with NASH and 26 patients with alcoholic hepatitis. Diagnoses in these patients were confirmed histologically by needle biopsy of the liver. All patients with alcoholic hepatitis consumed more than 80 g of ethanol/day for more than 10 years. As markers of chronic alcohol abuse, serum levels of aspartate aminotransferase (AST), alanine aminotransferase (ALT), gamma-glutamyl transpeptidase (GGT), hyaluronate, mean corpuscular volume of red blood cells (MCV) and carbohydrate-deficient transferrin (CDT) were measured. Results: Among alcohol markers, serum values of AST, AST/ALT ratio, GGT, CDT and MCV in patients with alcoholic hepatitis were significantly higher than those in patients with NASH, respectively. However, serum values of these markers, except for CDT, were overlapped in many cases of NASH and alcoholic hepatitis. Serum CDT values of all patients with NASH were lower than the cutoff value, 2.66%, and those of all patients with alcoholic hepatitis were higher than the cutoff value. Conclusion: The results of the present study suggest that serum CDT level could be used to differentiate between NASH and alcoholic hepatitis. abstract_id: PUBMED:26092603 Evaluation of serum carbohydrate-deficient transferrin by HPLC and MALDI-TOF MS. Background: The percentage of carbohydrate-deficient transferrin (%CDT) in serum is a marker of habitual alcohol intake that can be determined by antibody detection of abnormal disialo sugar chains at D432 and D630. However, this approach lacks specificity for alcoholic liver disease. To decrease the false-positive rate in patients with non-alcoholic liver diseases, we developed a screening method using the disialo sugar chain at D630 alone. Methods: Serum was obtained from 12 patients with alcoholic liver disease, 12 with type C chronic liver disease, 6 with non-alcoholic steatohepatitis (NASH), and 12 healthy non-alcohol drinkers. Transferrin with two sialic acids (disialotransferrin) was fractionated from serum using HPLC, digested with trypsin, and evaluated using MALDI-TOF MS. Results: An abnormal sugar chain at D630 of transferrin was not detected in healthy subjects or in patients with chronic liver disease or NASH, but was detected in 9 patients (75%) with alcoholic liver disease. Positive results were found in 3 samples that were negative using an N-Latex CDT kit and in one sample negative for γ-glutamylaminotransferase and CDT. Conclusions: Detection of CDT by HPLC/MALDI-TOF MS based on an abnormal sugar chain at D630 may permit identification of habitual alcohol drinkers when used in combination with current markers. abstract_id: PUBMED:35060066 Carbohydrate-deficient transferrin is a sensitive marker of alcohol consumption in fatty liver disease. Background: The prevalence of nonalcoholic fatty liver disease (NAFLD) and alcohol-associated/related liver disease (ALD) with metabolic syndrome is increasing globally. Metabolic syndrome and excessive alcohol consumption synergically exacerbate liver pathologies; therefore, drinking-specific serum markers unaffected by liver injury or metabolic syndrome are essential for assessing alcohol consumption. We evaluated the ratio of carbohydrate-deficient transferrin to total transferrin (%CDT) in patients with fatty liver disease, particularly focusing on its correlation with metabolic factors (UMIN000033550). Methods: A total of 120 patients with fatty liver disease, including ALD and NAFLD, were screened for alcohol misuse using the Alcohol Use Disorders Identification Test. Associations of metabolic syndrome-related factors and hepatic steatosis/liver stiffness with drinking markers, such as %CDT, gamma-glutamyl transferase (GGT), and mean corpuscular volume (MCV), were assessed using multiple linear regression analyses. Results: %CDT significantly increased with 3-4 drinks/day. The optimal cutoff value for identifying non- to light drinkers was 1.78% (sensitivity, 71.8%; specificity, 83.7%; and area under the receiver operating characteristic curve [AUROC], 0.851), which was significantly higher than that for GGT. The cutoff value for identifying heavy drinkers was 2.08% (sensitivity, 65.5%; specificity, 86.8%; and AUROC, 0.815). Multiple regression analysis revealed that this proportion was negatively correlated with body mass index, whereas GGT and MCV were influenced by multiple factors involved in liver injury and dyslipidemia. Conclusions: %CDT showed a strong correlation with alcohol consumption, independent of liver damage, steatosis/stiffness, or metabolic syndrome-related factors, indicating that it is a useful drinking marker for the accurate diagnosis of NAFLD and ALD. abstract_id: PUBMED:21570653 Alterations in lipid, carbohydrate and iron metabolism in patients with non-alcoholic steatohepatitis (NASH) and metabolic syndrome. Background: NASH (non-alcoholic steatohepatitis) is considered the hepatic manifestation of the metabolic syndrome (MS). We aimed to analyze lipid, carbohydrate, and iron metabolism in NASH. Patients, Methods: 37 patients with MS (17 M/20 F, 51+/-15 years), elevated transaminases; 25 patients had histologically proven NASH (NAS score≥5), 12 patients had toxic background (nonNASH). 37 age, sex, BMI-matched healthy controls. Lipid variables, LDL-subfractions, iron, ferritin, transferrin (T), transferrin saturation (TS), and hepcidin (H) were measured in patients/controls. Oral glucose tolerance tests were performed. Results: NASH patients with steatosis gr. 2 and 3 (&gt;33% hepatic fat) had higher sd-LDL (mg/dl) concentrations than patients with steatosis gr. 1 (&lt;33%) (p=0.002), nonNASH patients (p=0.03) and controls (p=0.001). Sd absolute (mg/dl) correlated directly with the steatosis grade only in patients with NASH and steatosis &gt;33% (p=0.04). NASH-patients showed higher insulin, C-peptide and IRI values than nonNASH patients (p=0.034; 0.032; 0.04). H was increased in patients versus controls (p&lt;0.001). H correlated with ferritin in MS-patients (p=0.01), correlated directly with sd-LDL (mg/dl) (p=0.017) and IRI (p&lt;0.001) and indirectly with HDL (p=0.05) in NASH. No associations between hepatic inflammation/iron content on liver biopsy and variables of lipid metabolism were found but hepcidin correlated with hepatic inflammation in all patients and with NAS scores in NASH. Conclusions: NASH-patients show insulin resistance and increased sd-LDL subfractions, suggesting an atherogenic profile. The correlation of H with sd-LDL and IRI, without relation to hepatic iron content suggests a putative link between inflammation, carbohydrate and lipid metabolism in NASH. abstract_id: PUBMED:24011648 Alcoholic and non-alcoholic steatohepatitis: who is affected and what can we do for them? The most common causes of steatohepatitis are alcohol intake and metabolic disorders. Several methods based on biochemical determinations (carbohydrate deficient transferrin) and questionnaires (AUDIT, CAGE, MALE) are useful for detecting surreptitious alcohol intake. Although new non-invasive methods are under development, based both on lipidomics (Owl-Liver(®)) and on biochemical determinations and anthropometric parameters (NAFLD Fibrosis score) or imaging methods (DeMILI NASH-MRi(®)), none has been proposed as definitive and the gold standard continues to be liver biopsy. The pathogenesis of alcoholic and non-alcoholic steatohepatitis shares some elements such as insulin resistance, cytochrome CYP2E1-mediated oxidative stress, adiponutrin and its PNPLA3 gene, and the microbiota. The first-line treatment consists of lifestyle changes, including giving up alcohol, diet and exercise. abstract_id: PUBMED:35605744 Ethyl glucuronide in hair detects a high rate of harmful alcohol consumption in presumed non-alcoholic fatty liver disease. Background & Aims: Non-alcoholic fatty liver disease (NAFLD) and alcohol-related liver disease (ALD) cannot reliably be distinguished by routine diagnostics, and the role of alcohol consumption in metabolic dysfunction-associated fatty liver disease (MAFLD) remains unclear. We investigated alcohol consumption in patients with presumed NAFLD and ALD using novel objective alcohol markers. Methods: In total, 184 consecutive patients were included in this prospective observational study. Alcohol intake was assessed by ethylglucuronide in hair (hEtG) and urine (uEtG); the utility of these measures for alcohol detection was compared to Alcohol Use Disorders Identification Test-Consumption (AUDIT-C), carbohydrate deficient transferrin (CDT), mean corpuscular volume (MCV), gamma-glutamyltransferase (GGT), and ALD/NAFLD index (ANI). Clinical characteristics of patients with NAFLD and ALD were re-assessed after reclassification based on repeated moderate (≥10 g &lt;60 g EtOH/day) and excessive (≥60 g EtOH/day) alcohol consumption, and patients were retrospectively reclassified based on MAFLD criteria. Results: Repeated moderate to excessive alcohol consumption was detected in 28.6%, 28.5%, and 25.0% of patients with presumed NAFLD, ALD or MAFLD, respectively. ANI score, AUDIT-C, uEtG, and hEtG showed AUCs of 0.628, 0.733, 0.754, and 0.927 for the detection of repeated moderate to excessive alcohol consumption, respectively. The indirect markers CDT, MCV and GGT were not reliable. Patients with repeated moderate or excessive alcohol consumption were significantly more often male, had a significantly lower BMI, and suffered significantly less often from type 2 diabetes or impaired glucose tolerance. Conclusions: In total, 28.6% of patients with presumed NAFLD, and 25.0% with MAFLD are at risk of alcohol-related liver damage. AUDIT-C, uEtG and hEtG should be used to screen for alcohol consumption in patients with fatty liver disease. Lay Summary: Fatty liver disease can be caused by metabolic factors and/or alcohol consumption. The diagnosis of non-alcoholic fatty liver disease (NAFLD) is based on the exclusion of harmful alcohol consumption, while metabolic dysfunction-associated fatty liver disease (MAFLD), which has been proposed as a new name for NAFLD, is based on the presence of metabolic comorbidities and allows for alcohol consumption. Herein, we show that up to 29% of patients diagnosed with NAFLD and 25% with MAFLD are at risk of alcohol-related liver damage. We show that ethyl glucuronide (a metabolite of alcohol) in the hair and urine can accurately detect potentially harmful alcohol consumption in these patients - as such, these tests should be integrated into routine diagnostic work-up for patients with fatty liver disease. abstract_id: PUBMED:20412588 Prevalence of liver fibrosis and risk factors in a general population using non-invasive biomarkers (FibroTest). Background: FibroTest and elastography have been validated as biomarkers of liver fibrosis in the most frequent chronic liver diseases and in the fibrosis screening of patients with diabetes. One challenge was to use them for estimating the prevalence of fibrosis, identifying independent risk factors and to propose screening strategies in the general population. Methods: We prospectively studied 7,463 consecutive subjects aged 40 years or older. Subjects with presumed advanced fibrosis (FibroTest greater than 0.48) were re-investigated in a tertiary center. Results: The sample characteristics were similar to those of the French population. FibroTest was interpretable in 99.6%. The prevalence of presumed fibrosis was 2.8%, (209/7,463), including cirrhosis in 0.3% (25/7,463); 105/209 (50%) subjects with presumed fibrosis accepted re-investigation. Fibrosis was confirmed in 50, still suspected in 27, indeterminate in 25 and not confirmed with false positive FibroTest or false negative elastography in 3 subjects. False negative rate of FibroTest estimated using elastography was 0.4% (3/766). The attributable causes for confirmed fibrosis were both alcoholic and nonalcoholic fatty liver disease (NAFLD) in 66%, NAFLD in 13%, alcohol in 9%, HCV in 6%, and other in 6%. Factors independently associated (all P &lt; 0.003) with confirmed fibrosis were age, male gender, waist circumference, HCV antibody and alcohol consumption estimated using carbohydrate-deficient transferrin, enabling efficient screening-oriented strategies to be compared and proposed. Conclusions: Biomarkers have permitted to estimate prevalence of advanced fibrosis around 2.8% in a general population aged 40 years or older, and several risk factors which may be used for the validation of selective or non-selective screening strategies. abstract_id: PUBMED:17030176 Utility of a new model to diagnose an alcohol basis for steatohepatitis. Background & Aims: Distinguishing an alcohol basis from a nonalcoholic basis for the clinical and histologic spectrum of steatohepatitic liver disease is difficult because of unreliability of alcohol consumption history. Unfortunately, various biomarkers have had limited utility in distinguishing alcoholic liver disease (ALD) from nonalcoholic fatty liver disease (NAFLD). Thus, the aim of our study was to create and validate a model to diagnose ALD in patients with steatohepatitis. Methods: A cross-sectional cohort study was performed at the Mayo Clinic, Rochester, Minnesota, to create a model using multivariable logistic regression analysis. This model was validated in 3 independent data sets comprising patients of varying severity of steatohepatitis spanning over 10 years. Results: Logistic regression identified mean corpuscular volume, aspartate aminotransferase (AST)/alanine aminotransferase (ALT) ratio, body mass index, and gender as the most important variables that separated patients with ALD from NAFLD. These variables were used to generate the ALD/NAFLD Index (ANI), with ANI of greater than zero incrementally favoring ALD and ANI of less than zero incrementally favoring a diagnosis of NAFLD, thus making ALD unlikely. ANI had a c-statistic of 0.989 in the derivation sample, and 0.974, 0.989, 0.767 in the 3 validation samples. ANI performance characteristics were significantly better than several conventional and recently proposed biomarkers used to differentiate ALD from NAFLD, including the histopathologic marker protein tyrosine phosphatase 1b, AST/ALT ratio, gamma-glutamyl transferase, and carbohydrate-deficient transferrin. Conclusions: ANI, derived from easily available objective variables, accurately differentiates ALD from NAFLD in hospitalized, ambulatory, and pretransplantation patients and compares favorably with other traditional and proposed biomarkers. Answer: Carbohydrate-deficient transferrin (CDT) has been investigated as a diagnostic marker to differentiate between non-alcoholic fatty liver disease (NAFLD) and alcoholic fatty liver disease (AFLD). The studies suggest that CDT can be useful in distinguishing NAFLD from AFLD or alcoholic hepatitis, particularly when accurate information about alcohol consumption is not available from patients. One study found that the mean value of CDT did not significantly differ between sexes in patients with ultrasound signs of fatty liver disease, and the diversity of CDT values was normal. The study concluded that NAFLD and AFLD could be differentiated according to the CDT value, potentially with higher specificity than the activity of gammaglutamyl transpeptidase, although anamnesis and other factors are also very important for correct diagnosis (PUBMED:19617184). Another study suggested that serum CDT levels could be used to differentiate between NASH and alcoholic hepatitis, as serum CDT values of all patients with NASH were lower than the cutoff value, and those of all patients with alcoholic hepatitis were higher than the cutoff value (PUBMED:16385229). Additionally, a study evaluating serum CDT by HPLC and MALDI-TOF MS found that an abnormal sugar chain at D630 of transferrin, which is indicative of CDT, was detected in patients with alcoholic liver disease but not in healthy subjects or patients with chronic liver disease or NASH. This suggests that CDT detection based on an abnormal sugar chain at D630 may help identify habitual alcohol drinkers (PUBMED:26092603). Furthermore, a study indicated that the %CDT showed a strong correlation with alcohol consumption, independent of liver damage, steatosis/stiffness, or metabolic syndrome-related factors, making it a useful marker for the accurate diagnosis of NAFLD and ALD (PUBMED:35060066). In summary, CDT has diagnostic value in distinguishing NAFLD from alcoholic liver diseases, and it can be a sensitive marker for assessing alcohol consumption in patients with fatty liver disease. However, it should be used in conjunction with other diagnostic factors and patient history for accurate diagnosis.
Instruction: Does orthostatic hypotension predict the occurrence of nocturnal arterial hypertension in the elderly patient? Abstracts: abstract_id: PUBMED:12940176 Does orthostatic hypotension predict the occurrence of nocturnal arterial hypertension in the elderly patient? Objective: To determine whether the presence of orthostatic hypotension--which, in this age-group, could be due to varying degrees of autonomic dysfunction--is an indicator of nocturnal arterial hypertension. Patients: Between 1999 and 2001 we prospectively and consecutively studied 93 elderly patients with untreated (office) arterial hypertension, 65 (70%) of whom were true hypertensives according to 24 h ambulatory blood pressure monitoring (ABPM). Interventions: The patients were studied by clinical examination including blood pressure (BP) measurement in dorsal decubitus and orthostatic position, 24 h ABPM, evaluation of vascular distensibility by carotid-femoral pulse wave velocity (PWV) and Doppler echocardiography. For this study we analyzed especially the ambulatory behavior of BP, so we could relate the variation of systolic blood pressure (SBP) during orthostatism with non-dipper status for SBP and absolute nocturnal values of SBP. Measurements And Results: The results indicated that a greater decrease of blood pressure with orthostatism corresponded to a greater probability of nocturnal hypertension (p = 0.005) and of non-dipper status (p = 0.02). These results are in agreement with those subsequently found by other authors (Kario et al., 2002). Conclusions: In this way, by means of a simple clinical maneuver that should always be performed in an elderly hypertensive patient, we can suspect the presence of nocturnal hypertension--which is a high-risk cardiovascular situation--and use this information to help select patients to undergo 24 hour-ABPM. abstract_id: PUBMED:2356428 Antihypertensive agents in elderly patients Arterial hypertension is common in elderly people, and the risk of cardiovascular complications due to that disease is reduced by antihypertensive treatments in these as in younger hypertensive patients. However, some points must be borne in mind when treating and following up elderly subjects with hypertension. Old age alters the metabolism and effects of antihypertensive agents. Concomitant pathologies are frequent and may preclude the use of some of these agents, while other medicines taken by the patient interreact with them. Several complications of antihypertensive therapy particularly threaten elderly hypertensive drugs patients. The complications depend on the antihypertensive drugs used and include malaise, excessive fall in blood pressure, postural hypotension, water and electrolyte disorders, renal impairment and neuropsychological disturbances. Because of these iatrogenic effects, which may have more serious consequences in elderly therapy must be handled with care. Close supervision enables these undesirable effects to be detected early on and corrected before more serious complications develop. abstract_id: PUBMED:3027620 Hypertension therapy in the elderly. Our experience with converting enzyme inhibitors Arterial hypertension shows, in the elderly, particular features and special problems connected with its pharmacological treatment. In our work ten patients, aged between 65-75, suffering from essential hypertension, were examined for eight weeks. At the end of this period, we observed a significant reduction of systolic and diastolic pressure, heart rate being unchanged. We didn't observe any significant change in the metabolic parameters considered (uricemia, creatininemia, triglycerides and cholesterol). No patient had to interrupt the treatment as a consequence of side effects. According to our data, we can affirm that Captopril reduces arterial pressure gradually and doesn't cause orthostatic hypotension, being thus very useful in the elderly. abstract_id: PUBMED:2772350 Clinical characteristics of arterial hypertension in the elderly The clinical characteristics of systolic and diastolic hypertension in 75 and of systolic hypertension in 50 elderly patients have been studied and the results have been compared to those obtained in 23 normotense elderly controls and 500 young patients with essential hypertension. A greater incidence in cardiovascular and neurologic morbility was observed in the hypertense elderly, existing also a greater incidence of electrocardiographic abnormalities and impairment in renal function. The changes in blood pressure with postural changes and isometric and physical exercise were evaluated in a subgroup of these patients, finding that the elderly with hypertension, specially those with systolic hypertension, showed orthostatic hypotension, and an increase in blood pressure with exercise, reaching levels that could potentially cause the clinical complications. abstract_id: PUBMED:20659040 Cardiorenal interrelationships in elderly patients with arterial hypertension Aim: To study clinico functional characteristics of the kidney and cardiovascular system in elderly patients with arterial hypertension (AH) in comparison with corresponding data from young and middle age patients with AH. Material And Methods: We examined 142 patients with AH (110 patients aged 60 years and 32 patients aged 59 years). Examination included registration of parameters of echocardiography, 24 hour blood pressure monitoring, renal function testing. Results: Compared with patients aged 59 years elderly patients had increased variability and augmented morning elevation of blood pressure. Diastolic dysfunction, endothelial dysfunction, microalbuminuria, and lowered glomerular filtration rate were more frequent among elderly patients. abstract_id: PUBMED:33671878 Blood Pressure Patterns in Patients with Parkinson's Disease: A Systematic Review. (1) Background: Cardiovascular autonomic dysfunction is a non-motor feature in Parkinson's disease with negative impact on functionality and life expectancy, prompting early detection and proper management. We aimed to describe the blood pressure patterns reported in patients with Parkinson's disease, as measured by 24-h ambulatory blood pressure monitoring. (2) Methods: We conducted a systematic search on the PubMed database. Studies enrolling patients with Parkinson's disease undergoing 24-h ambulatory blood pressure monitoring were included. Data regarding study population, Parkinson's disease course, vasoactive drugs, blood pressure profiles, and measurements were recorded. (3) Results: The search identified 172 studies. Forty studies eventually fulfilled the inclusion criteria, with 3090 patients enrolled. Abnormal blood pressure profiles were commonly encountered: high blood pressure in 38.13% of patients (938/2460), orthostatic hypotension in 38.68% (941/2433), supine hypertension in 27.76% (445/1603) and nocturnal hypertension in 38.91% (737/1894). Dipping status was also altered often, 40.46% of patients (477/1179) being reverse dippers and 35.67% (310/869) reduced dippers. All these patterns were correlated with negative clinical and imaging outcomes. (4) Conclusion: Patients with Parkinson's disease have significantly altered blood pressure patterns that carry a negative prognosis. Ambulatory blood pressure monitoring should be validated as a biomarker of PD-associated cardiovascular dysautonomia and a tool for assisting therapeutic interventions. abstract_id: PUBMED:38419751 Primary autonomic failure: a complex case of orthostatic hypotension in a hypertensive elderly patient. Background: Primary autonomic failure (PAF) or Bradbury Eggleston syndrome is a neurodegenerative disorder of the autonomic nervous system characterized by orthostatic hypotension. Case Summary: We report the case of a 76-year-old patient with a history of hypertension, who presented with exercise-induced fatigue. He exhibited systolic hypertension and resting bradycardia in the supine position, with orthostatic hypotension without reactive tachycardia, suggesting dysautonomia. Neurological examination was unremarkable. The patient underwent cardiovascular autonomic testing, revealing evidence of beta-sympathetic deficiency associated with neurogenic orthostatic hypotension. Causes of secondary dysautonomia were excluded. The patient was diagnosed with PAF. Even if managing the combination of supine hypertension and orthostatic hypotension was challenging, significant improvements in functional and haemodynamic status were observed with a personalized management approach. Discussion: Throughout this case report, we emphasize the critical need for an evaluation of autonomic function and blood pressure's dynamics in hypertensive patients experiencing orthostatic symptoms, enabling the implementation of tailored therapeutic strategies. abstract_id: PUBMED:21334114 Medication adjustment in a diabetic patient with orthostatic hypotension and nocturnal arterial hypertension. N/A abstract_id: PUBMED:29609870 Prevalence and relations of orthostatic hypotension in patients 80 years old and over from a primary care unit Objective: To estimate the prevalence of orthostatic hypotension (OH) in patients 80 years old and over attending a primary care unit. To relate OH to the most prevalent pathologies and to the most used drugs. Design: Transversal observational study. Location: Primary care unit, Santiago de Compostela. Participants: Eighty one patients 80 years old or over representative of a primary care unit were recruited. Ten patients were excluded. Main Measurements: Blood pressure was measured in decubitus and later in erect position first immediately after standing and then after 3 minutes. Diagnoses and active treatments were reviewed in the electronic clinical history and through an interview with the patient and caregiver. Results: In 26.76% of patients the systolic blood pressure fell by 20mmHg or more and/or the diastolic blood pressure fell by 10mmHg in the instant following the postural shift. In 16.90% of patients the drop persisted after 3 minutes of standing from decubitus position. None of the patients was diagnosed with OH. The highest prevalence ratio was observed for diabetes mellitus (1.6; P=.412), not existing differences for arterial hypertension (P=.881). OH related in a statistically meaningful way to the use of renin angiotensin aldosterone system inhibitors (OR: 8.174, CI95%: 1.182-56.536); P=.033] and benzodiazepines (OR: 5.938, CI95%: 1.242-28.397; P=.026)]. Conclusion: OH had a prevalence of 16.90% among the elderly patients who had a consultation. Its connection with some drugs (renin angiotensin aldosterone system inhibitors and benzodiazepines) must be considered. abstract_id: PUBMED:18991830 Dynamics of pulse pressure in orthostatic test: interrelationships with risk factors and involvement of the heart and vessels in elderly patients with arterial hypertension We studied dynamics of arterial pulse pressure (APP) during active orthostatic test (AOT) in relation to cardiac and vascular involvement in 198 elderly patients with arterial hypertension. We measured and compared parameters of hemodynamics, rate of detection of risk factors, signs of affection of the heart and vessels in subgroups with elevation of APP (n=50), lowering of APP by more than 10 mm Hg (n=50) and intermediate deviations of APP (n=98) in orthostasis. In patients with elevation of APP during AOT smoking, increased aortic stiffness, history of myocardial infarction were detected more often and risk of development of cardiovascular diseases according to SCORE system was higher than in patients with lowering of APP or absence of its dynamics. Patients with lowering of APP more than 10 mm Hg significantly more often had cerebrovascular disease appearing as history of cerebral strokes and/or transitory ischemic attacks. The conclusion was made about possible significance of orthostatic increment of APP as supplementary marker of cardiovascular risk in elderly patients with arterial hypertension. Answer: Yes, orthostatic hypotension does predict the occurrence of nocturnal arterial hypertension in the elderly patient. A study indicated that a greater decrease of blood pressure with orthostatism corresponded to a greater probability of nocturnal hypertension and of non-dipper status (PUBMED:12940176). This suggests that orthostatic hypotension, which may be due to autonomic dysfunction, can be an indicator of nocturnal arterial hypertension in the elderly.
Instruction: Is endoscopy necessary to initiate medical treatment in primary-level medical care patients with not investigated dyspepsia? Abstracts: abstract_id: PUBMED:15127645 Is endoscopy necessary to initiate medical treatment in primary-level medical care patients with not investigated dyspepsia? Introduction: Approximately 2 to 5% of primary care consultations are patients with dyspepsia. One of the most controversial points among primary-care physicians and gastroenterologists is the diagnostic-therapeutic approach of the uninvestigated dyspeptic patient. Aims And Methods: Our aim was to investigate prevalence of organic disease and Helicobacter pylori (Hp) status in uninvestigated dyspeptic out-patients at a primary-care level. Two hundred two adult out-patients (130 F/72 M), &gt; or = 18 and &lt; 75 years, mean age 36 years (range 18-73 years), were recruited and sent for upper gastrointestinal endoscopy and gastric biopsies for CLO-test due to dyspeptic symptoms according to Rome II criteria. Patients were excluded if they had previously been investigated by any means, had a background of NSAIDs or corticoesteroids intake, irritable bowel syndrome (IBS) or any alarm sign or symptom. All endoscopic findings were reported to primary care physicians who decided on patient management. Results: A total of 86.14% (174/202 patients) showed no endoscopic lesions and only 13.86% (28/202) [95% CI = 9.1-18.6] showed evidence of non-malignant organic lesions. By age group, prevalence of organic diseases was 10.81% (16/148) [95% CI = 6.3-16.9] in patients &lt; 45 years and 22.22% (12/54) [95% CI = 12.04-35.6] in patients &gt; 45 years, OR = 0.424. [95% CI = 0.186-0.968]. Conclusions: Prevalence of upper gastrointestinal tract organic disease in dyspeptic out-patients selected according to Rome II criterion at a primary-care level is low, with no malignancies detected. These findings show that patients &lt; 45 years of age with uninvestigated dyspepsia may empirically and safely receive symptom-guided antisecretory therapy. abstract_id: PUBMED:12492183 Helicobacter pylori test-and-treat intervention compared to usual care in primary care patients with suspected peptic ulcer disease in the United States. Objectives: [corrected] The Helicobacter pylori (H. pylori) "test-and-treat" strategy in uninvestigated dyspepsia is an effective alternative to prompt endoscopy. Our aims were to determine whether the combination of an educational session and availability of office-based H. pylori testing (test-and-treat intervention [TTI]) increases use of the test-and-treat strategy by primary care practitioners and whether it improves patient outcomes. Methods: We conducted a 1-yr prospective trial of patients with suspected peptic ulcer disease in six primary care centers, three with TTI and three designated as usual care controls (UCC). Results: H. pylori testing was performed in 81% of 54 TTI patients and in 49% of 39 UCC patients (p = 0.004). TTI and UCC patients had similar gastroenterology referral rates (24% vs 33%, p = 0.33), endoscopy or upper GI radiography rates (30% vs 31%, p = 0.91), and primary care visits per patient (3.1 +/- 2.8 vs 3.1 +/- 2.6, p = 0.92). TTI patients were less likely than UCC patients to receive repeated antisecretory medication prescriptions (35% vs 66%, p = 0.003). Symptomatic status at 1 yr and satisfaction with medical care did not differ between groups. Median (and interquartile range) annualized disease-related expenditures per patient were $454 ($162-932) for TTI and $576 ($327-1,435) for UCC patients (p = 0.17). Conclusions: The combination of an educational session and availability of office-based H. pylori testing may increase acceptance of the test-and-treat strategy by primary care providers. It remains to be determined whether increased use of the test-and-treat strategy yields significant improvements in clinical and economic outcomes compared to usual care. abstract_id: PUBMED:26231401 Symptom clusters on primary care medical service trips in five regions in Latin America. Short-term primary care medical service trips organized by the North American non-governmental organizations (NGOs) serve many communities in Latin America that are poorly served by the national health system. This descriptive study contributes to the understanding of the epidemiology of patients seen on such low-resource trips. An analysis was conducted on epidemiologic data collected from anonymized electronic medical records on patients seen during 34 short-term medical service trips in five regions in Ecuador, Guatemala, and the Dominican Republic between April 2013 and April 2014. A total of 22,977 patients were assessed by North American clinicians (physicians, nurse practitioners, physician assistants) on primary care, low-resource medical service trips. The majority of patients were female (67.1%), and their average age was 36. The most common presenting symptoms in all regions were general pain, upper respiratory tract symptoms, skin disorders, eye irritation, dyspepsia, and nonspecific abdominal complaints; 71-78% of primary care complaints were easily aggregated into well-defined symptom clusters. The results suggest that guideline development for clinicians involved in these types of medical service trips should focus on management of the high-yield symptom clusters described by these data. abstract_id: PUBMED:11130524 Cost effectiveness of initial endoscopy for dyspepsia in patients over age 50 years: a randomised controlled trial in primary care. Background: Dyspepsia can be managed by initial endoscopy and treatment based on endoscopic findings, or by empirical prescribing. We aimed to determine the cost effectiveness of initial endoscopy compared with usual management in patients with dyspepsia over age 50 years presenting to their primary care physician. Methods: 422 patients were recruited and randomly assigned to initial endoscopy or usual management. Primary outcomes were effect of treatment on dyspepsia symptoms and cost effectiveness. Secondary outcomes were quality of life and patient satisfaction. Total costs were calculated from individual patient's use of resources with unit costs applied from national data. Statistical analysis of uncertainty on incremental cost-effectiveness ratio (ICER) was done along with a sensitivity analysis on unit costs with cost-effectiveness acceptability curves. Findings: In the 12 months following recruitment, 213 (84%) patients had an endoscopy compared with 75 (41%) controls. Initial endoscopy resulted in a significant improvement in symptom score (p=0.03), and quality of life pain dimension (p=0.03), and a 48% reduction in the use of proton pump inhibitors (p=0.005). The ICER was Pound Sterling1728 (UK Pound Sterling) per patient symptom-free at 12 months. The ICER was very sensitive to the cost of endoscopy, and could be reduced to Pound Sterling165 if the unit cost of this procedure fell from Pound Sterling246 to Pound Sterling100. Interpretation: Initial endoscopy in dyspeptic patients over age 50 might be a cost-effective intervention. abstract_id: PUBMED:19364542 Cost of detecting malignant lesions by endoscopy in 2741 primary care dyspeptic patients without alarm symptoms. Background & Aims: Current guidelines recommend empirical, noninvasive approaches to manage dyspeptic patients without alarm symptoms, but concerns about missed lesions persist; the cost savings afforded by noninvasive approaches must be weighed against treatment delays. We investigated the prevalence of malignancies and other serious abnormalities in patients with dyspepsia and the cost of detecting these by endoscopy. Methods: We studied 2741 primary-care outpatients, 18-70 years in age, who met Rome II criteria for dyspepsia. Patients with alarm features (dysphagia, bleeding, weight loss, etc) were excluded. All patients underwent endoscopy. The cost and diagnostic yield of an early endoscopy strategy in all patients were compared with those of endoscopy limited to age-defined cohorts. Costs were calculated for a low, intermediate, and high cost environment. Results: Endoscopies detected abnormalities in 635 patients (23%). The most common findings were reflux esophagitis with erosions (15%), gastric ulcers (2.7%), and duodenal ulcers (2.3%). The prevalence of upper gastrointestinal malignancy was 0.22%. If all dyspeptic patients 50 years or older underwent endoscopy, 1 esophageal cancer and no gastric cancers would have been missed. If the age threshold for endoscopy were set at 50 years, at a cost of $500/endoscopy, it would cost $82,900 (95% CI, $35,714-$250,000) to detect each case of cancer. Conclusions: Primary care dyspeptic patients without alarm symptoms rarely have serious underlying conditions at endoscopy. The costs associated with diagnosing an occult malignancy are large, but an age cut-off of 50 years for early endoscopy provides the best assurance that an occult malignancy will not be missed. abstract_id: PUBMED:16937497 Comparison between empirical prokinetics, Helicobacter test-and-treat and empirical endoscopy in primary-care patients presenting with dyspepsia: a one-year study. Aim: To investigate the optimal strategy to treat dyspeptic patients in primary care. Methods: Dyspeptic patients presenting to primary care outpatient clinics were randomly assigned to: (1) empirical endoscopy, (2) H pylori test-and-treat, and (3) empirical prokinetic treatment with cisapride. Early endoscopy was arranged if patients remained symptomatic after 2 wk. Symptom severity, quality-of-life (SF-36) as well as patient preference and satisfaction were assessed. All patients underwent endoscopy by wk 6. Patients were followed up for one year. Results: Two hundred and thirty four patients were recruited (163 female, mean age 49). 46% were H pylori positive. 26% of H pylori tested and 25% of empirical prokinetic patients showed no improvement at wk 2 follow-up and needed early endoscopy. 15% of patients receiving empirical cisapride responded well to treatment but peptic ulcer was the final diagnosis. Symptom resolution and quality-of-life were similar among the groups. Costs for the three strategies were HK dollar 4343, dollar 1771 and dollar 1750 per patient. 66% of the patients preferred to have early endoscopy. Conclusion: The three strategies are equally effective. Empirical prokinetic treatment was the least expensive but peptic ulcers may be missed with this treatment. The H pylori test-and-treat was the most cost-effective option. abstract_id: PUBMED:12860586 Approach to treatment of dyspepsia in primary care: a randomized trial comparing "test-and-treat" with prompt endoscopy. Background: The value of the "test-and-treat" strategy in the approach to dyspepsia has been evaluated only in a few secondary care studies. Most patients with dyspepsia, however, are treated by their primary care physician. This study evaluated the test-and-treat strategy in primary care. Methods: Patients consulting their general practitioners for dyspepsia were randomized to either direct open-access endoscopy with Helicobacter pylori testing or a test-and-treat strategy by H pylori serology. In the 12-month follow-up period, any additional treatment or referral for investigations was left at the discretion of the general practitioner. At the end of the study, data were collected concerning the number of endoscopies, changes in symptom severity and quality of life, patient satisfaction, and the use of medical resources. Results: Two hundred seventy patients were enrolled (129 who received endoscopy and 141 in the test-and-treat group). The prevalence of H pylori infection was 38.3% and 37.2% in the test-and-treat and endoscopy groups, respectively. In the test-and-treat group, 46 patients (33%) were referred for endoscopy during follow-up. Improvement in symptom severity, quality of life, and patient satisfaction was comparable in both groups. Patients in the test-and-treat group paid more dyspepsia-related visits to their general practitioner (P =.005). Patients in the endoscopy group were more often prescribed proton pump inhibitors (P =.007), whereas patients in the test-and-treat group were more often prescribed prokinetic drugs (P =.005). Conclusions: The test-and-treat strategy proved to be as effective and safe as prompt endoscopy. Only a minority of patients were referred for endoscopy after the test-and-treat approach. abstract_id: PUBMED:26005267 A new approach to the management of uninvestigated dyspepsia in primary care. Introduction: The prevalence of dyspepsia in the general population worldwide is very high (20-40%). Upper abdominal complaints are one of the most common cause of patients' visits to primary care settings. Making an accurate etiological diagnosis of dyspepsia is difficult, but is an important challenge and goal for every doctor in primary care practice. Clinical guidelines have standards for gastroesophageal reflux disease, management of Helicobacter infection and indications for the use of endoscopy (empiric treatment, prompt endoscopy, "test and treat"). In spite of the application of those standards, many patients experience no improvement in their symptoms or often the recurrence of disease. Aim: This study presents a new approach to the diagnostic and therapeutic management of uninvestigated dyspepsia in primary care settings to provide long-term effective control of symptoms for family doctors. Material And Methods: 3000 unselected consecutive dyspeptic patients underwent abdominal ultrasound, and 1000 dyspeptic patients from the same group upper endoscopy.. In this approach diagnostic evaluation of dyspepsia includes: abdominal ultrasonography as a first line obligatory routine method and the exact estimation of nutritional condition. Results: Abdominal ultrasound, physical examination and BMI control have significant value in the diagnostic evaluation of dyspepsia. The therapeutic approach includes, besides general standards (acid suppressive drugs, eradication of H. pylori, prokinetic and antidepressant agents), life style modification and nutritional interventions as first-line treatments. In this approach the use of new drugs such as ursodeoxycholic acid (UDCA), pre and probiotics, and digestive enzymes supplements is recommended. Conclusion: Throug the combination of different diagnostic procedures as first line methods, including abdominal ultrasound and nutritional condition (BMI), a family doctor can manage successfully uninvestigated dyspepsia at the primary care level. abstract_id: PUBMED:18801056 Clinical trial: a randomized trial of early endoscopy, Helicobacter pylori testing and empirical therapy for the management of dyspepsia in primary care. Background: Early endoscopy, Helicobacter pylori eradication and empirical acid suppression are commonly used dyspepsia management strategies in primary care but have not been directly compared in a single trial. Aim: To compare endoscopy, H. pylori test and refer, H. pylori test and treat and empirical acid suppression for dyspepsia in primary care. Methods: Patients presenting to their general practitioner with dyspepsia were randomized to endoscopy, H. pylori'test and treat', H. pylori test and endoscope positives, or empirical therapy with symptoms, patient satisfaction, healthcare costs and cost effectiveness at 12 months being the outcomes. Results: At 2 months, the proportion of patients reporting no or minimal dyspeptic symptoms ranged from 74% for those having early endoscopy to 55% for those on empirical therapy (P = 0.009), but at 1 year, there was little difference among the four strategies. Early endoscopy was associated with fewer subsequent consultations for dyspepsia (P = 0.003). 'Test and treat' resulted in fewer endoscopies overall and was most cost-effective over a range of cost assumptions. Empirical therapy resulted in the lowest initial costs, but the highest rate of subsequent endoscopy. Gastro-oesophageal cancers were found in four patients randomized to the H. pylori testing strategies. Conclusions: While early endoscopy offered some advantages 'Test and treat' was the most cost-effective strategy. In older patients, early endoscopy may be an appropriate strategy in view of the greater risk of malignant disease. abstract_id: PUBMED:9495689 Agreement between medical record data and patients' accounts of their medical history and treatment for dyspepsia. We examined agreement between data abstracted from medical records and interview data for patients with dyspepsia admitted to hospital for endoscopy, to determine the extent to which health records could be used to validate self-reports of dyspepsia and the management of this condition. Results from the sample of 220 patients showed that there was poor agreement between data sources for information about duration of dyspepsia (k=0.34) and previous barium meal examination (k=0.34). Patients reported significantly longer dyspepsia histories (Wilcoxon sign test Z=4.13, p&lt;0.0001) and significantly more barium meals (sign test Z=8.43, p&lt;0.0001) than were documented in their records. There was also disagreement between data sources regarding the number of drugs taken before and after endoscopy (k=0.28 and k=0.31, respectively). Where there was disagreement for number of drugs there was no significant difference in the direction of the disagreement. There was moderate agreement regarding the name of pre-endoscopy medication (k=0.55) and substantial agreement for the name of medication used post-endoscopy (k=0.62). There was very poor agreement regarding diagnosis. The medical record was the gold standard for this information. Choice of data source, medical records or self-reports, will in many instances provide significantly different results and it is likely that this may also be true for other variables of interest to researchers. Thus in the case where no gold standards are available researchers need to consider carefully the implication of choice of data source on their results. Answer: Endoscopy is not always necessary to initiate medical treatment in primary-level medical care patients with uninvestigated dyspepsia. The prevalence of upper gastrointestinal tract organic disease in dyspeptic out-patients selected according to Rome II criteria at a primary-care level is low, with no malignancies detected in a study, suggesting that patients under 45 years of age with uninvestigated dyspepsia may empirically and safely receive symptom-guided antisecretory therapy (PUBMED:15127645). Additionally, the "test-and-treat" strategy for Helicobacter pylori in uninvestigated dyspepsia has been shown to be an effective alternative to prompt endoscopy, potentially increasing the acceptance of this strategy by primary care providers without significant improvements in clinical and economic outcomes compared to usual care (PUBMED:12492183). Moreover, a study comparing empirical prokinetics, Helicobacter test-and-treat, and empirical endoscopy in primary-care patients presenting with dyspepsia found that the three strategies are equally effective, with the H. pylori test-and-treat being the most cost-effective option (PUBMED:16937497). Another trial found that the test-and-treat strategy was as effective and safe as prompt endoscopy, with only a minority of patients referred for endoscopy after the test-and-treat approach (PUBMED:12860586). In terms of cost-effectiveness, initial endoscopy in dyspeptic patients over age 50 might be a cost-effective intervention, but the cost-effectiveness is sensitive to the cost of the endoscopy procedure itself (PUBMED:11130524). For patients without alarm symptoms, primary care dyspeptic patients rarely have serious underlying conditions at endoscopy, and the costs associated with diagnosing an occult malignancy are large, suggesting that an age cut-off of 50 years for early endoscopy provides the best assurance that an occult malignancy will not be missed (PUBMED:19364542). In summary, while endoscopy can be useful, especially in patients over 50 years of age or those with alarm symptoms, it is not always necessary to initiate treatment for dyspepsia in primary care settings. Non-invasive approaches such as the test-and-treat strategy for H. pylori or empirical antisecretory therapy can be effective and cost-efficient alternatives.
Instruction: Is magnesium sulfate effective for pain in chronic postherpetic neuralgia patients comparing with ketamine infusion therapy? Abstracts: abstract_id: PUBMED:25792176 Is magnesium sulfate effective for pain in chronic postherpetic neuralgia patients comparing with ketamine infusion therapy? Background: Postherpetic neuralgia (PHN) is a frequent debilitating complication and one of the most intractable pain disorders, particularly in elderly patients. Although tricyclic antidepressants, topical capsaicin, gabapentin, and oxycodone are effective for alleviating PHN, many patients remain refractory to current therapies. Here, the analgesic effects of ketamine or magnesium for PHN were assessed in an open prospective study. Method: Thirty patients with severe, intractable PHN who were unresponsive to conservative therapy participated. The effects of ketamine hydrochloride (Ketara, Parke Davis) 1 mg/kg and magnesium sulfate (Magnesin) 30 mg/kg were investigated. The patients were randomly divided into 2 groups of 15 patients each, and ketamine 1 mg/kg or magnesium 30 mg/kg was administered intravenously for 1 hour after midazolam sedation. Pain was rated on a visual analog scale (VAS) during a 2-week follow-up. All patients also completed the Doleur Neuropathique 4 questionnaire at baseline and final visits. Results: Response to treatment, defined as a 50% reduction in VAS score 2 weeks after, was recorded in 10 of 15 patients in the ketamine group and 7 of 15 patients in the magnesium group. The difference in VAS reduction was not significant between the 2 groups. Conclusions: Ketamine and magnesium showed significant analgesic effects in patients with PHN. abstract_id: PUBMED:29357328 Efficacy of Pain Relief in Different Postherpetic Neuralgia Therapies: A Network Meta-Analysis. Background: Postherpetic neuralgia (PHN) is a nerve pain disease usually controlled by different therapies, i.e., topical therapies, antiepileptics, analgesics, antipsychotics, antidepressants, anti-dementia drugs, antivirals, amitriptyline, fluphenazine, and magnesium sulfate. It is believed that different therapies may lead to different levels of pain relief. Objectives: We proposed this study to compare the efficacy of PHN treatments. Study Design: We conducted a systematic review of the current literature. All relevant studies were retrieved from online databases. The standardized mean difference (SMD) was used for pain relief measurement in different PHN therapies. Setting: A conventional meta-analysis and a network meta-analysis (NMA) were carried out together with the surface under the cumulative ranking curve (SUCRA) for each therapy calculated regarding their efficacy. Results: A pairwise meta-analysis suggested that 4 treatment classes, including topical therapies, antiepileptics, analgesics, and antidepressants, exhibited better pain relief results than placebo. Likewise, a NMA suggested that patients with 4 treatment classes exhibited significant improvements in pain scores compared to those with placebo. Limitations: There is a lack of direct head-to-head comparisons of some treatments, especially for antivirals, anti-dementia drugs, and magnesium sulfate. Secondly, the specific agents belonging to the same class of therapies might exhibit different effects (gabapentin and carisbamate) with different mechanisms (opioids and ketamine) on reducing pain, and some agents were hard to find in literatures and were not involved in our study, which may influence our results. Conclusions: Analgesics were preferable to other treatments with respect to pain relief for PHN, while antivirals appeared to be less effective than other therapies. Key Words: Postherpetic neuralgia, topical agents, antiepileptics, analgesics, antipsychotics, antidepressants. abstract_id: PUBMED:14633551 Ketamine in chronic pain management: an evidence-based review. Ketamine has diverse effects that may be of relevance to chronic pain including: N-methyl-D-aspartic acid, alpha-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid, kainate, gamma-aminobutyric acid(A) receptors; inhibition of voltage gated Na(+) and K(+) channels and serotonin, dopamine re-uptake. Ketamine has been in clinical practice for over 30 yr; however, there has been little formal research on the effectiveness of ketamine for chronic pain management. In this review we evaluate the available clinical data as a basis for defining the potential use of ketamine for chronic pain. Literature referenced in this review was obtained from a computer search of EMBASE and MEDLINE from 1966 through August, 2002. Search terms included ketamine, ketalar, pain, painful, analgesic, and analgesia. Abstracts were screened for relevance and publications relating to chronic pain use were obtained. Levels of evidence were stratified according to accepted guidelines (level I-IV). For central pain, there is level II and level IV evidence of efficacy for parenteral and oral ketamine. For complex regional pain syndromes, there is only level IV evidence of efficacy of epidural ketamine. For fibromyalgia, there is level II evidence of pain relief, reduced tenderness at trigger points, and increased endurance. For ischemic pain, a level II study reported a potent dose-dependent analgesic effect, but with a narrow therapeutic window. For nonspecific neuropathic pain, level II and level IV studies reported divergent results with questionable long-term effects on pain. For phantom limb pain and postherpetic neuralgia, level II and level II studies provided objective evidence of reduced hyperpathia and pain relief was usually substantial either after parenteral or oral ketamine. Acute on chronic episodes of severe neuropathic pain represented the most frequent use of ketamine as a "third line analgesic," often by IV or subcutaneous infusion (level IV). In conclusion, the evidence for efficacy of ketamine for treatment of chronic pain is moderate to weak. However, in situations where standard analgesic options have failed ketamine is a reasonable "third line" option. Further controlled studies are needed. abstract_id: PUBMED:7659432 Continuous subcutaneous administration of the N-methyl-D-aspartic acid (NMDA) receptor antagonist ketamine in the treatment of post-herpetic neuralgia. The effect of continuous subcutaneous (s.c.) infusion of ketamine on nerve injury pain was examined in patients with post-herpetic neuralgia. Five patients that reported pain relief after acute intravenous injection of ketamine were included in this open prospective study. Ketamine was administered continuously in increasing doses using a portable infusion pump (CADD-PLUS, Pharmacia), and the treatment period for each infusion rate (0.05, 0.075, 0.10, or 0.15 mg/kg/h) was 7 days and nights. Relief of continuous pain, as evaluated daily by visual analogue scales, was observed at the infusion rate of 0.05 mg/kg/h, but was most marked during infusion of 0.15 mg/kg/h. All the patients reported that ketamine reduced the severity of continuous pain as well as reduced the severity and number of attacks of spontaneous pain. Changes in evoked pain (allodynia and wind-up-like pain) were recorded before change of infusion rate. Allodynia was maximally reduced 59-100% after 1 week infusion of 0.05 mg/kg/h, and wind-up-like pain was maximally reduced 60-100% after 1 week infusion of 0.15 mg/kg/h. Itching and painful indurations at the injection site was the most bothersome side-effect and for this reason 1 patient discontinued treatment after 2 weeks. Other common side-effects were nausea, fatigue and dizziness. The present results show that continuous, spontaneous and evoked pain in patients with post-herpetic neuralgia is reduced by continuous s.c. infusion of ketamine, but is associated with intolerable side effects. abstract_id: PUBMED:23703410 Intravenous infusions in chronic pain management. In the United States, millions of Americans are affected by chronic pain, which adds heavily to national rates of morbidity, mortality, and disability, with an ever-increasing prevalence. According to a 2011 report titled Relieving Pain in America: A Blueprint for Transforming Prevention, Care, Education, and Research by the Institute of Medicine of the National Academies, pain not only exacts its toll on people's lives but also on the economy with an estimated annual economic cost of at least $560 - 635 billion in health care costs and the cost of lost productivity attributed to chronic pain. Intravenous infusions of certain pharmacologic agents have been known to provide substantial pain relief in patients with various chronic painful conditions. Some of these infusions are better, and although not necessarily the first therapeutic choice, have been widely used and extensively studied. The others show promise, however are in need of further investigations. This article will focus on non-opiate intravenous infusions that have been utilized for chronic painful disorders such as fibromyalgia, neuropathic pain, phantom limb pain, post-herpetic neuralgia, complex regional pain syndromes (CRPS), diabetic neuropathy, and central pain related to stroke or spinal cord injuries. The management of patients with chronic pain conditions is challenging and continues to evolve as new treatment modalities are explored and tested. The following intravenous infusions used to treat the aforementioned chronic pain conditions will be reviewed: lidocaine, ketamine, phentolamine, dexmedetomidine, and bisphosphonates. This overview is intended to familiarize the practitioner with the variety of infusions for patients with chronic pain. It will not, however, be able to provide guidelines for their use due to the lack of sufficient evidence. abstract_id: PUBMED:12799815 Ketamine for treatment of chronic pain: meta-analysis The idea of using Ketamine to treat chronic pain is mainly based on the central antinoceptive effect of the substance acting as a noncompetitive antagonist at the NMDA-receptor. In the present meta analysis over a period from 1/1981 up to 6/1996 twelve publications (1994-1996), which have dealt the use of Ketamine for patients with chronic pain, are evaluated and discussed. The entire positive evaluation of the drug is based on the results of the studies under consideration. The studies investigating 88 cases are classified, i.e. seven studies are assigned to level 3, two studies to level 4, and three studies to level 5. A significant reduction of pain has been proven for patients with neuropathic pain by four cross-over-studies comparing the drug with placebo. However, observations have been restricted to a few hours after the administration of the drug. A positive effect of Ketamine as compared with opioids has been demonstrated by three studies, one of them with statistical significance. Positive long term results have been observed by two studies considering the subcutaneous and intravenous application of Ketamine (72-480 mg, daily) to 14 cancer pain patients, to whom opioids have been administered with insufficient effects before. A successful five-week-application (oral) of Ketamine has been described convincingly in a case report of a patient with postherpetic neuralgia. Two studies interpret the additional intrathecal administration of Ketamine to cancer patients as a possibility of either impeding the development of tolerance of the local anesthetic or reducing morphine requirements. Only one paper has dealt with the known side effects of Ketamine. Here, the therapy had to break off in two cases. In nine cases the side effects could be suppressed by Droperidol. For the future, research with more study power is necessary to establish Ketamine in the therapy of chronic pain. abstract_id: PUBMED:26245075 Can we prevent pain becoming chronic? Central aspects in the prevention of pain from becoming chronic are good management of acute pain, early recognition of risk factors and a multidisciplinary working approach. Postherpetic neuralgia can probably be prevented with a vaccine and medication. In the prevention of prolonged postoperative pains there is some evidence of the effect of local anesthetics and ketamine, but their clinical significance is unclear. Multidisciplinary therapeutic and rehabilitative actions can be taken to prevent prolongation and recurrence of lower back pain especially in patients having an increased risk of chronic pain. abstract_id: PUBMED:33534003 Comprehensive Review of Topical Analgesics for Chronic Pain. Purpose Of Review: Topical analgesics are a non-opioid option for the treatment of chronic pain conditions including neuropathic pain, musculoskeletal pain, and osteoarthritis. There are many topical medications available; however their efficacy is variable. This article reviews the various topical analgesics, their mechanisms of action, and their efficacy. Recent Findings: Studies have found topical NSAIDs are useful in treating acute musculoskeletal pain syndromes (strains and sprains) and show some efficacy in treating hand and knee osteoarthritis (Derry et al. Cochrane Database Syst Rev 5:CD008609, 2017). Topical capsaicin 8% has been shown to be efficacious in the treatment of postherpetic neuralgia, painful diabetic peripheral neuropathy, and HIV-neuropathy (Derry et al. Cochrane Database Syst Rev 1:CD007393, 2017). Topical lidocaine has been widely studied and found to reduce pain in patients with postherpetic neuralgia (Knezevic et al. Pain Manag 7:537-58, 2017). Although many other topical analgesics are available, there is limited data to support the efficacy of other agents. Topical analgesics are a relatively benign treatment for chronic pain conditions including neuropathic pain, musculoskeletal, and myofascial pain. There is evidence to support the use of topical NSAIDs, high concentration topical capsaicin, and topical lidocaine for various painful conditions. abstract_id: PUBMED:11554028 Usefulness of epidural administration of ketamine for relief of postherpetic neuralgia Four patients with postherpetic neuralgia had their pain alleviated by epidural administration of ketamine. No oral non-steroidal anti-inflammatory drugs and anti-depressant drugs were effective in all cases. Lidocaine or bupivacaine was administered epidurally to all four patients. When these patients stated that they did not feel pain reduced, they received epidural infusion of ketamine at doses from 5 mg to 20 mg with lidocaine or bupivacaine and their postherpetic neuralgia was controlled. Therefore with these cases, we suspect that epidural administration of ketamine, an antagonist for N-methyl-D-aspartic acid receptor, could be an effective and useful alternative treatment in a patient with refractory postherpetic neuralgia. abstract_id: PUBMED:15471658 Treatment of postherpetic neuralgia: a review of therapeutic options. Postherpetic neuralgia (PHN) is a disabling consequence of the reactivation of the varicella zoster infection. The observation that patients with PHN experience various types of pain suggests that multiple pathophysiologic mechanisms are involved, which may include the peripheral and central nervous systems. A reasonable initial strategy would involve selecting from among multiple agents that have complementary mechanisms of action and that have been proven effective in controlled clinical trials, such as the lidocaine patch 5%, gabapentin, tricyclic antidepressants, and opioids. Based on initial assessment and ongoing reassessment, sequential trials should be undertaken until adequate pain relief is achieved. This may ultimately lead to therapy with more than one medication. Safety and tolerability are important considerations in choosing initial therapy, particularly in older patients. Physicians can either add another agent to the current regimen or switch to a new type of monotherapy if there is inadequate response to initial therapy. Alternative therapies, (i.e., ketamine, intrathecal corticosteroid injections) have not been adequately studied. Well-designed, multicenter, controlled clinical trials are needed to develop a treatment algorithm that provides an evidence-based, rational approach to treating PHN. Answer: Based on the provided abstracts, magnesium sulfate has shown significant analgesic effects in patients with chronic postherpetic neuralgia (PHN), but its effectiveness compared to ketamine infusion therapy is not definitively superior or inferior. In an open prospective study, the analgesic effects of ketamine and magnesium for PHN were assessed, and both treatments showed significant analgesic effects. Response to treatment, defined as a 50% reduction in visual analog scale (VAS) score 2 weeks after treatment, was recorded in 10 of 15 patients in the ketamine group and 7 of 15 patients in the magnesium group. The difference in VAS reduction was not significant between the two groups (PUBMED:25792176). However, it is important to note that the evidence for the efficacy of ketamine for the treatment of chronic pain is moderate to weak, and ketamine is considered a reasonable "third line" option when standard analgesic options have failed (PUBMED:14633551). Additionally, ketamine has been used in various forms, including continuous subcutaneous infusion, for the treatment of PHN, with some patients reporting relief of continuous pain and reduced severity and number of attacks of spontaneous pain (PUBMED:7659432). The efficacy of different PHN therapies, including magnesium sulfate, has been compared in a network meta-analysis, but there is a lack of direct head-to-head comparisons for some treatments, and the specific agents within the same class might exhibit different effects (PUBMED:29357328). This indicates that while magnesium sulfate may be effective, more research is needed to directly compare its efficacy with that of ketamine and other treatments for PHN. In conclusion, both magnesium sulfate and ketamine infusion therapy have been shown to provide analgesic effects in PHN patients, but the available evidence does not clearly establish one as being more effective than the other. Further research with direct comparisons and larger sample sizes would be beneficial to determine the relative efficacy of these treatments for PHN.
Instruction: Is polycystic ovary syndrome an exception for reproductive aging? Abstracts: abstract_id: PUBMED:20435693 Is polycystic ovary syndrome an exception for reproductive aging? Background: Anti-Mullerian hormone (AMH) is increased in women with polycystic ovary syndrome (PCOS), suggesting a delay in ovarian aging. We examined AMH levels in PCOS and normo-ovulatory women in a population-based cohort over a period of 10 years and used this information to estimate their menopausal age. Methods: Of a subset of 1002 non-menopausal women randomly selected from the Tehran Lipid and Glucose Study, 85 cases of PCOS were diagnosed. We frequency-matched our control subjects with PCOS cases based on age and BMI. AMH levels were assessed at the time of recruitment (T1) and twice after that (T2 and T3). AMH levels were then plotted against age of the individual at the time of the measurement and the most appropriate model was selected. Menopause was calculated based on AMH levels below 0.2 ng/ml. Results: AMH levels were significantly higher in PCOS cases compared with controls at the beginning of the study (5.58 +/- 3.64 versus 4.35 +/- 2.90 ng/ml, P = 0.03), but the difference diminished considerably in subsequent assessments. The rate of AMH decline in PCOS cases decreased in the second compared with the first interval; however, no apparent change in the rate of decline was observed in controls. Estimated ages at menopause were 51 [95% confidence interval (CI), 34-81] and 49 (95% CI, 38-63) years in PCOS cases and controls, respectively. Conclusions: The reproductive lifespan of PCOS women extends on average 2 years beyond that of normo-ovulatory women. abstract_id: PUBMED:22341880 Executive summary of the Stages of Reproductive Aging Workshop + 10: addressing the unfinished agenda of staging reproductive aging. Objective: The aim of this article is to summarize the recommended updates to the 2001 Stages of Reproductive Aging Workshop (STRAW) criteria. The 2011 STRAW + 10 reviewed advances in understanding of the critical changes in hypothalamic-pituitary-ovarian function that occur before and after the final menstrual period. Method(s): Scientists from five countries and multiple disciplines evaluated data from cohort studies of midlife women and in the context of chronic illness and endocrine disorders on change in menstrual, endocrine, and ovarian markers of reproductive aging including antimüllerian hormone, inhibin-B, follicle-stimulating hormone, and antral follicle count. Modifications were adopted by consensus. Result(s): STRAW + 10 simplified bleeding criteria for the early and late menopausal transition, recommended modifications to criteria for the late reproductive stage (Stage -3) and the early postmenopause stage (Stage +1), provided information on the duration of the late transition (Stage -1) and early postmenopause (Stage +1), and recommended application regardless of women's age, ethnicity, body size, or lifestyle characteristics. Conclusion(s): STRAW + 10 provides a more comprehensive basis for assessing reproductive aging in research and clinical contexts. Application of the STRAW + 10 staging system should improve comparability of studies of midlife women and facilitate clinical decision making. Nonetheless, important knowledge gaps persist, and seven research priorities are identified. abstract_id: PUBMED:21510080 Nature of obesity and the rate of aging in women with reproductive function disturbances Anthropometrics and metabolic features of obesity were determined, the rate of aging was evaluated in young women (mean age of 25,5 years) with infertility (as a result of polycystic ovary syndrome, chronic anovulation and other reproductive non-inflammatory disturbances) compared to these in different age groups of women with obesity, who have already realized their reproductive potential up to the moment of the survey. The metabolic disturbances ordinarily peculiar to mature age women, were found in women with infertility due to increased visceral fat content, which has begun to accumulate already in pre- and pubertal periods of their ontogenesis. These women showed the highest aging rate coefficient and that results in biological age growth compared with chronological one. Thus, visceral obesity early development combined with reproductive function disturbances is associated with the rate of aging increase in young women. abstract_id: PUBMED:33043012 Extracellular Vesicles: Recent Developments in Aging and Reproductive Diseases. Extracellular vesicles (EVs), present in cell culture media and several body fluids, play a prominent role in intercellular communication under physiological and pathological conditions. We performed a systematic literature search to review evidence regarding the existence, composition, and release of different EVs, as well as the biomarkers, cargos, and separation methods. We also reviewed the potential of EVs to transport cargos and alter the function and phenotype of recipient cells associated with aging and reproductive diseases, including polycystic ovary syndrome and endometriosis. In aging, EVs promote inflammatory reactions and offsetting the occurrence of aging. In the polycystic ovary syndrome and endometriosis, EVs and their cargos are involved in the occurrence of diseases, therapeutic strategies, and perform as non-invasive biomarkers. As the study of EVs is still in the early stages, it is not surprising that most of the current literature only describes their possible roles. abstract_id: PUBMED:34121193 Oxidative stress in oocyte aging and female reproduction. In a healthy body, reactive oxygen species (ROS) and antioxidants remain balanced. When the balance is broken toward an overabundance of ROS, oxidative stress appears and may lead to oocyte aging. Oocyte aging is mainly reflected as the gradual decrease of oocyte quantity and quality. Here, we aim to review the relationship between oxidative stress and oocyte aging. First, we introduced that the defective mitochondria, the age-related ovarian aging, the repeated ovulation, and the high-oxygen environment were the ovarian sources of ROS in vivo and in vitro. And we also introduced other sources of ROS accumulation in ovaries, such as overweight and unhealthy lifestyles. Then, we figured that oxidative stress may act as the "initiator" for oocyte aging and reproductive pathology, which specifically causes follicular abnormally atresia, abnormal meiosis, lower fertilization rate, delayed embryonic development, and reproductive disease, including polycystic ovary syndrome and ovary endometriosis cyst. Finally, we discussed current strategies for delaying oocyte aging. We introduced three autophagy antioxidant pathways like Beclin-VPS34-Atg14, adenosine 5'-monophosphate (AMP)-activated protein kinase/mammalian target of rapamycin (AMPK/mTOR), and p62-Keap1-Nrf2. And we also describe the different antioxidants used to combat oocyte aging. In addition, the hypoxic (5% O2 ) culture environment for oocytes avoiding oxidative stress in vitro. So, this review not only contribute to our general understanding of oxidative stress and oocyte aging but also lay the foundations for the therapies to treat premature ovarian failure and oocyte aging in women. abstract_id: PUBMED:34872233 An overview of polycystic ovary syndrome in aging women The manifestations of polycystic ovary syndrome (PCOS), a ubiquitous reproductive disorder, may vary significantly depending on the severity of a number of endocrine and metabolic changes. Although no diagnostic criteria are presently available for PCOS for perimenopausal and menopausal women, the condition can still be suspected in case of a previous diagnosis of the condition, a chronic history of irregular menstrual cycles and hyperandrogenism, and/or polycystic ovarian morphology during the reproductive period. PCOS is associated with long-term health risks, including obesity, diabetes, hypertension, dyslipidemia, metabolic syndrome and cardiovascular risk factors during reproductive age, especially in patients possessing classic phenotypes. The aim of this review was to outline the available data about the impact of PCOS on long-term health risks after reproductive age in patients with PCOS. Previously, it was assumed that women with PCOS would be more prone to develop cardiometabolic diseases after reproductive age but current data suggest that in accordance with the healing in the phenotypic characteristics of PCOS, no deterioration appears to occur in cardiometabolic health in these patients. While there is substantial evidence for a greater prevalence of abnormal subclinical atherosclerotic markers among younger patients with PCOS, data for older women are insufficient. However, there is also support for an increased risk of endometrial cancer in PCOS patients. Extensive prospective cohort studies in which healthy controls as well as patients with defining PCOS phenotypes are observed and monitored from the early reproductive period into the late postmenopausal period should now be performed in order to clarify morbidities and mortality in aging women with PCOS. abstract_id: PUBMED:29051304 Consequences of advanced aging on renal function in chronic hyperandrogenemic female rat model: implications for aging women with polycystic ovary syndrome. Polycystic ovary syndrome (PCOS) is the most common endocrine and reproductive disorder in premenopausal women, characterized by hyperandrogenemia, metabolic syndrome, and inflammation. Women who had PCOS during their reproductive years remain hyperandrogenemic after menopause. The consequence of chronic hyperandrogenemia with advanced aging has not been studied to our knowledge. We have characterized a model of hyperandrogenemia in female rats and have aged them to 22-25 months to mimic advanced aging in hyperandrogenemic women, and tested the hypothesis that chronic exposure to hyperandrogenemia with aging has a deleterious effect on renal function. Female rats were chronically implanted with dihydrotestosterone pellets (DHT 7.5 mg/90 days) that were changed every 85 days or placebo pellets, and renal function was measured by clearance methods. Aging DHT-treated females had a threefold higher level of DHT with significantly higher body weight, mean arterial pressure, left kidney weight, proteinuria, and kidney injury molecule-1 (KIM-1), than did age-matched controls. In addition, DHT-treated-old females had a 60% reduction in glomerular filtration rate, 40% reduction in renal plasma flow, and significant reduction in urinary nitrate and nitrite excretion (UNOxV), an index of nitric oxide production. Morphological examination of kidneys showed that old DHT-treated females had significant focal segmental glomerulosclerosis, global sclerosis, and interstitial fibrosis compared to controls. Thus chronic hyperandrogenemia that persists into old age in females is associated with renal injury. These data suggest that women with chronic hyperandrogenemia such as in PCOS may be at increased risk for development of chronic kidney disease with advanced age. abstract_id: PUBMED:36187169 Polycystic Ovarian Syndrome in Aging Women: An Observational Study. Background: Polycystic ovarian syndrome (PCOS) in aging women has not been extensively studied, although it is a very common disorder. Most of the research has been conducted on women of the reproductive age group, although PCOS is a disease occurring in all age groups. This study aims to provide an idea of the PCOS pattern prevalent in aging women presenting at a tertiary care center in Eastern India. Methods: This was a prospective, single-center, observational cohort study conducted over three years, from October 2016 to September 2019. A total of 36 patients between 35 and 65 years of age were found to have PCOS according to Rotterdam's criteria during this period. These patients were prospectively followed up to study their demographic characteristics, symptoms, management, and outcomes. Results: A total of 80% of the women were below 40 years of age, while only 20% were above 40. In total, 70% of the women were overweight or obese. Infertility was seen in 75% of patients below 40 years of age and 28% of patients above 40. Among these, 23 (95%) conceived successfully after proper management. The main presenting complaints were infertility and menstrual irregularities. With aging, diabetes increased from 24%, in below 40, to 28%, in above 40; hypertension increased from 13% to 28%. The occurrence of impaired oral glucose tolerance test (OGTT) and impaired lipid profile increased with age: from 48% to 57% and 13% to 28%, respectively. Conclusion: PCOS in aging women causes considerable morbidity and greatly affects the quality of life, although it is less understood. Further research in this field is very much needed to understand and appropriately manage the problems in aging women, to improve their quality of life. abstract_id: PUBMED:23022527 Polycystic ovary syndrome (PCOS)-like phenotypes in the d-galactose-induced aging mouse model. The D-galactose (D-gal)-induced animal model, which is established by consecutive subcutaneous d-gal injections for approximately 6weeks, has been frequently used for aging research. This animal model has been shown to accelerate aging of the brain, kidneys, liver, and blood cells. However, aging of the female reproductive organs in this animal model has not been reported. The aim of this study was to investigate changes in the ovary in the d-gal-induced aging mouse model. First, we evaluated anti-Müllerian hormone (AMH) as a marker of ovarian aging in blood plasma. We speculated there would be lower AMH levels in d-gal-treated mice because ovarian aging would be induced by d-gal, as reported for other tissues. However, the results showed that AMH levels in d-gal-treated mice were approximately four-fold higher than control mice. Abnormally high AMH levels are detected in ovarian cancer and polycystic ovary syndrome (PCOS) patients. Therefore, we examined PCOS-related markers in this mouse model. Total testosterone levels were high and abnormal estrous cycles were induced in d-gal-treated mice. These changes, including AMH levels, in d-gal-treated mice were inhibited by aminoguanidine treatment, an advanced glycation end product reducer. In addition, ovarian cysts were observed in some d-gal-treated mice. These results indicate that with respect to female reproduction, d-gal-treated mice are suitable for PCOS studies, rather than aging studies. abstract_id: PUBMED:30156157 Melatonin as Potential Targets for Delaying Ovarian Aging. In previous studies, oxidative stress damage has been solely considered to be the mechanism of ovarian aging, and several antioxidants have been used to delay ovarian aging. But recently, more reports have found that endoplasmic reticulum stress, autophagy, sirtuins, mitochondrial dysfunction, telomeres, gene mutation, premature ovarian failure, and polycystic ovary syndrome are all closely related to ovarian aging, and these factors all interact with oxidative stress. These novel insights on ovarian aging are summarized in this review. Furthermore, as a pleiotropic molecule, melatonin is an important antioxidant and used as drugs for several diseases treatment. Melatonin regulates not only oxidative stress, but also the various molecules, and normal and pathological processes interact with ovarian functions and aging. Hence, the mechanism of ovarian aging and the extensive role of melatonin in the ovarian aging process are described herein. This systematic review supply new insights into ovarian aging and the use of melatonin to delay its onset, further supply a novel drug of melatonin for ovarian aging treatment. Answer: Polycystic ovary syndrome (PCOS) appears to be an exception for reproductive aging in some respects. A study examining Anti-Mullerian hormone (AMH) levels in women with PCOS found that AMH levels were significantly higher in PCOS cases compared to controls at the beginning of the study, suggesting a delay in ovarian aging. The reproductive lifespan of women with PCOS was estimated to extend on average 2 years beyond that of normo-ovulatory women, indicating that PCOS may be associated with a longer reproductive lifespan (PUBMED:20435693). However, it is important to note that PCOS is also associated with long-term health risks, including obesity, diabetes, hypertension, dyslipidemia, metabolic syndrome, and cardiovascular risk factors during reproductive age. While there is evidence for a greater prevalence of abnormal subclinical atherosclerotic markers among younger patients with PCOS, data for older women are insufficient. There is also support for an increased risk of endometrial cancer in PCOS patients (PUBMED:34872233). Moreover, women with PCOS who remain hyperandrogenemic after menopause may be at increased risk for development of chronic kidney disease with advanced age (PUBMED:29051304). Additionally, PCOS in aging women causes considerable morbidity and affects the quality of life, with an increase in diabetes, hypertension, and impaired glucose tolerance and lipid profiles with age (PUBMED:36187169). In summary, while PCOS may delay certain aspects of reproductive aging, it is not without its long-term health implications, and women with PCOS may face increased risks for various health issues as they age. Therefore, PCOS can be considered an exception to some aspects of reproductive aging, but it also presents unique challenges and health risks that need to be managed throughout a woman's lifespan.
Instruction: Fluoride Varnishes--Is There a Correlation Between Fluoride Release and Deposition on Enamel? Abstracts: abstract_id: PUBMED:26106650 Fluoride Varnishes--Is There a Correlation Between Fluoride Release and Deposition on Enamel? Purpose: Fluoride uptake of enamel after application of fluoride varnishes was compared with fluoride release into artificial saliva. The hypothesis was that fluoride uptake is higher for products exhibiting faster fluoride release. Materials And Methods: Fluoride varnishes, i.e. Fluor Protector S, Duraphat, MI Varnish, Clinpro White Varnish, Profluorid Varnish and Enamel Pro Varnish were applied on bovine enamel specimens. Subsequently, specimens were incubated in artificial saliva. After removal of the varnishes, surface bound fluoride was extracted with potassium hydroxide and measured with an ion-selective electrode. Structurally bound fluoride was etched from the same specimens with perchloric acid. Fluoride release of varnish films into artificial saliva was measured for comparison. Results: After 4 h in artificial saliva, the highest total enamel fluoride uptake of 47.9 μg F·cm-² was found with Fluor Protector S, followed by Enamel Pro Varnish with 22.1 μg F·cm-². The other products ranged between 12-16 μg F·cm-². This was several times higher than the negative control. Fluoride uptake did not correlate with release into artificial saliva. During the first 4 h, Duraphat released the lowest and MI Varnish the highest amount of fluoride with 7.7 and 249 μg F·cm-², respectively. The fluoride uptake of these two products was not statistically different. Conclusion: Enamel fluoride uptake cannot be predicted from the fluoride release rate of a product. Hence, based on the results of this study, fluoride release into artificial saliva is no measure for the efficacy of a fluoride varnish. abstract_id: PUBMED:22299122 Fluoride release from newly marketed fluoride varnishes. Objective: New fluoride varnishes have been marketed that reportedly release more fluoride (Enamel Pro) or release fluoride more slowly (Vanish XT). The purpose of this study was to compare the amount and rate of fluoride release of new fluoride varnishes with other traditional fluoride varnishes. Method And Materials: Extracted molars were cut into block sections. The enamel surfaces of the sections were painted with Enamel Pro, Duraphat, Vanish, or Vanish XT fluoride varnishes. One group was not treated and served as a negative control. The tooth sections were immersed in artificial saliva. The concentration of fluoride in parts per million was measured after the first 30 minutes, daily for the first week, and weekly until the level was below the limit of detection. Fluoride release was plotted over time. Cumulative fluoride release and rate of release (slope) were analyzed using one-way ANOVA/Tukey (α = .05). Results: Enamel Pro had the greatest cumulative fluoride release. There was no significant difference between Duraphat and Vanish. Vanish XT had the lowest cumulative fluoride release. The rate of fluoride release from 1 week to limit of detection was Enamel Pro &gt; Vanish &gt; Duraphat &gt; Vanish XT. Conclusion: The two newly marketed fluoride varnishes (Enamel Pro and Vanish XT) had significantly different fluoride release from the two conventional fluoride varnishes (Duraphat and Vanish). abstract_id: PUBMED:26541509 Effect of calcium phosphate addition to fluoride containing dental varnishes on enamel demineralization. Background: The aim of this study was to evaluate the ability of calcium phosphate and fluoride containing varnishes to inhibit enamel demineralization. Methods: Six varnishes were selected for analysis: (1) Enamel Pro containing amorphous calcium phosphate; (2) Clinpro White containing functionalized tricalcium phosphate (fTCP); (3) MI Varnish containing casein phosphopeptide-stabilized amorphous calcium phosphate (CPP-ACP); (4) Duraphat (first no added calcium control); (5) Profluorid (second no added calcium control); and (6) placebo (no added calcium or fluoride control). Human enamel slabs (36) were each cut into half-slabs and covered with one of the six dental varnishes to create a window. The half-slabs were then individually immersed in a polyacrylate demineralization buffer pH 4.8 for four days at 37 °C with a change of solution each day. Mineral content was determined using transverse microradiography. Results: All fluoride-containing varnishes significantly inhibited enamel demineralization when compared with the placebo varnish. However, out of the calcium phosphate and fluoride containing varnishes only MI Varnish, containing fluoride and CPP-ACP was superior to the fluoride-alone varnishes. MI Varnish also released the highest levels of calcium, phosphate and fluoride ions. Conclusions: MI Varnish containing fluoride and CPP-ACP was superior to the other varnishes in protecting against enamel demineralization. abstract_id: PUBMED:25631724 Fluoride release from fluoride varnishes under acidic conditions. Objective: The aim was to investigate the in vitro fluoride release from fluoride varnishes under acidic conditions. Study Design: Poly(methyl methacrylate) blocks (Perspex, n=3 per group) were painted with 80 ± 5 mg fluoride varnish (n=10) and placed into artificial saliva for 30 min. Then, blocks were placed into either 1% citric acid (pH 2.27) or 0.3% citric acid (pH 3.75) solutions (n=3 per solution and varnish) for 30 min with the solutions being replaced every 5 min. Saliva and acid solutions were analyzed for fluoride content. Data were analyzed using three-way ANOVA (varnish, solution, time). Results: The three-way interaction was significant (p&gt;0.0001). Fluoride release and release patterns varied considerably between varnishes. Fluoride release in saliva varied by a factor of more than 10 between varnishes. Some varnishes (CavityShield, Nupro, ProFluorid, Vanish) showed higher fluoride release in saliva than during the first 5 min of acid exposure, whereas other varnishes (Acclean, Enamel-Pro, MI Varnish, Vella) showed the opposite behavior. There was little difference between acidic solutions. Conclusions: Fluoride release from fluoride varnishes varies considerably and also depends on the dissolution medium. Bearing in mind the limitations of laboratory research, the consumption of acidic drinks after fluoride varnish application should be avoided to optimize the benefit/risk ratio. abstract_id: PUBMED:38014987 Review on fluoride varnishes currently recommended in dental prophylaxis. In dentistry, fluoride compounds play a very important role in the development of teeth hard tissue. They have been modifying the development of the carious process for many years in accordance with the principles of minimally invasive therapy. Studies have confirmed their effectiveness in the prevention and treatment of carious lesions and erosion of deciduous and permanent teeth, as well as in the dentin hypersensitivity treatment. Typically, each varnish consists of 3 basic components, i.e., a resin usually in the form of mastic, shellac and/or rosin, an alcohol-based organic solvent (usually ethanol) and active agents. In the first-generation varnishes, the active agent is fluorine compounds, most often in the form of 5% NaF, while in second-generation varnishes, the composition is further enriched with calcium and phosphorus compounds in the form of CPP-ACP/CPP-ACPF, ACP, TCP, fTCP, CSPS, TMP, CXP, or CaGP. This influences the bioavailability of fluoride in the oral environment by increasing both its release from the product and its subsequent accumulation in enamel and plaque, promotes more efficient closure of dentinal tubules, and facilitates pH buffering in the oral cavity. abstract_id: PUBMED:38233188 Fluoride release potential of arginine-incorporated fluoride varnishes. The study aimed to examine the fluoride (F) release potential of arginine (Arg)-incorporated F varnishes. Four commercially available F varnishes were included in the study: Duraphat® (5% NaF), Flúor Protector® (0.9% SiH2F2), Fluor Protector S® (NH4F), and Fluorimax™ (2.5% NaF). L-arginine (2% w/v.) was incorporated in these varnishes to estimate F release at 1 h, 4 h, 6 h, 24 h, 3 days, and 7 days using an F-ion selective electrode. The media pH of eluded varnishes was estimated and primary inorganic F extraction was performed. The main effects pH, F release, and computed integrated mean/cumulative F release for experimental groups were significantly higher than the controls (p&lt;0.01). The primary extracted F concentrations for the Arg-containing groups were significantly lower than the control groups (p&lt;0.001) demonstrating a chemical interplay with Arg incorporation. To conclude, irrespective of the inorganic F content, incorporating Arg in F-containing varnishes increases their F release potential. abstract_id: PUBMED:26176358 Fluoride varnishes with calcium glycerophosphate: fluoride release and effect on in vitro enamel demineralization. The aims of this study were (1) to assess the amount of fluoride (F) released from varnishes containing calcium glycerophosphate (CaGP) and (2) to assess the effect of the experimental varnishes on in vitro demineralization. Six test groups using 5 varnishes: base varnish (no active ingredients); Duraphat® (2.26% NaF); Duofluorid® (5.63% NaF/CaF2); experimental varnish 1 (1% CaGP/5.63% NaF/CaF2); experimental varnish 2 (5% CaGP/5.63% NaF/CaF2); and no varnish were set up. In stage 1, 60 acrylic blocks were randomly distributed into 6 groups (n = 10). Then 300 µg of each varnish was applied to each block. The blocks were immersed in deionized water, which was changed after 1, 8, 12, 24, 48 and 72 hours. Fluoride concentration in the water was analyzed using a fluoride electrode. In stage 2, 60 bovine enamel samples were distributed into 6 groups (n = 10), and treated with 300 µg of the respective varnish. After 6 h the varnish was removed and the samples were subjected to a 7-day in vitro pH cycle (6 h demineralization/18 h remineralization per day). The demineralization was measured using surface hardness. The results showed that both experimental varnishes released more fluoride than Duofluorid® and Duraphat® (p &lt; 0.05), but Duraphat® showed the best preventive effect by decreasing enamel hardness loss (p &lt; 0.05). Therefore, we conclude that even though (1) the experimental varnishes containing CaGP released greater amounts of F, (2) they did not increase in the preventive effect against enamel demineralization. abstract_id: PUBMED:28431491 Fluoride varnishes containing sodium trimetaphosphate reduce enamel demineralization in vitro. Objective: This study evaluated the effects of fluoride varnishes containing sodium trimetaphosphate (TMP) on bovine enamel demineralization in vitro. Material And Methods: Enamel bovine discs were randomly assigned into six groups (n = 20/group): placebo, 2.5% NaF, 2.5% NaF/5% TMP, 5% NaF, 5% NaF/5% TMP, and a commercial formulation (Duraphat, 5% NaF). Varnishes were applied on all enamel discs and kept for 6 h. Loosely and firmly bound fluoride formed on/in enamel after treatment were analyzed in 10 discs from each group. The other 10 discs were subjected to a pH-cycling regimen for 7 days, and analyzed for surface (SH) and cross-sectional hardness (ΔKHN), as well as for loosely and firmly bound fluoride in/on enamel. Data were analyzed by analysis of variance (ANOVA) followed by Student-Newman-Keuls' test (p &lt; .05). Results: The lowest SH change and ΔKHN were observed for the 5%NaF/5%TMP varnish, which was significantly different from all the other groups. Both fluoridated varnishes containing TMP promoted significantly lower SH change and ΔKHN when compared with their counterparts without TMP. Loosely and firmly bound fluoride was significantly lower in groups treated with varnishes containing TMP. Conclusion: TMP and fluoride added to varnishes have a synergistic effect against enamel demineralization in vitro. abstract_id: PUBMED:17043869 Deposition of fluoride on enamel surfaces released from varnishes is limited to vicinity of fluoridation site. The aim of the in-situ study was to determine fluoride uptake in non-fluoridated, demineralized enamel after application of fluoride varnishes on enamel samples located at various distances from the non-fluoridated samples. All enamel samples used were demineralized with acidic hydroxyethylcellulose before the experiment. Intra-oral appliances were worn by ten volunteers in three series: (1, Mirafluorid, 0.15% F; 2, Duraphat, 2.3% F and 3, unfluoridated controls) of 6 days each. Each two enamel samples were prepared from 30 bovine incisors. One sample was used for the determination of baseline fluoride content (BFC); the other was treated according to the respective series and fixed in the intra-oral appliance for 6 days. Additionally, from 120 incisors, each four enamel samples were prepared (one for BFC). Three samples (a-c) were placed into each appliance at different sites: (a) directly neighboured to the fluoridated specimen (=next), (b) at 1-cm distance (=1 cm) and (c) in the opposite buccal aspect of the appliance (=opposite). At these sites, new unfluoridated samples were placed at days 1, 3 and 5, which were left in place for 1 day. The volunteers brushed their teeth and the samples with fluoridated toothpaste twice per day. Both the KOH-soluble and structurally bound fluoride were determined in all samples to determine fluoride uptake and were statistically analyzed. One day, after fluoridation with Duraphat, KOH-soluble fluoride uptake in specimen a (=next) was significantly higher compared to the corresponding samples of both the control and Mirafluorid series, which in turn were not significantly different from each other. At all other sites and time points, fluoride uptake in the enamel samples were not different from controls for both fluoride varnishes. Within the first day after application, intra-oral-fluoride release from the tested fluoride varnish Duraphat leads to KOH-soluble fluoride uptake only in enamel samples located in close vicinity to the fluoridation site. abstract_id: PUBMED:24494654 Ion release from calcium and fluoride containing dental varnishes. Background: A range of dental varnishes have been commercialized recently that contain calcium and inorganic phosphate in addition to fluoride. The aim of this study was to analyse the fluoride, calcium and inorganic phosphate ion release from: (1) MI Varnish containing casein phosphopeptide-amorphous calcium phosphate (CPP-ACP); (2) Clinpro White containing functionalized tricalcium phosphate (fTCP); (3) Enamel Pro containing amorphous calcium phosphate; (4) Bifluorid 5 containing calcium fluoride; and (5) Duraphat (no added calcium control). Methods: The varnishes were applied to a standardized surface area of polyvinyl chloride (n = 7 per group) and immersed in 25 g of distilled deionized water which was changed at 1, 4, 24, 72 and 168 hours. The ion release was determined by ion chromatography and expressed as μmol (cumulative) per gram of varnish. Results: All varnishes released measurable fluoride and calcium, however only MI Varnish and Enamel Pro released significant levels of inorganic phosphate. At 24 hours the order of cumulative fluoride release was: 1&gt;3&gt;4&gt;2=5 with 1 significantly higher (p &lt; 0.05) than the rest. At 72 and 168 hours, the cumulative calcium release was: 1&gt;4&gt;3&gt;2=5 with 1 significantly higher (p &lt; 0.05) than the rest. Conclusions: MI Varnish containing CPP-ACP had the highest release of calcium and fluoride ions. Answer: Based on the provided abstracts, there does not appear to be a direct correlation between fluoride release from varnishes and fluoride deposition on enamel. In one study, it was found that enamel fluoride uptake could not be predicted from the fluoride release rate of a product, indicating that fluoride release into artificial saliva is not a measure for the efficacy of a fluoride varnish (PUBMED:26106650). This suggests that the amount of fluoride released by a varnish does not necessarily equate to the amount of fluoride that will be taken up by the enamel. Furthermore, the study comparing the fluoride release of new fluoride varnishes with traditional ones found significant differences in fluoride release among the varnishes, but this did not directly translate to their effectiveness in preventing enamel demineralization (PUBMED:22299122). Another study showed that while all fluoride-containing varnishes significantly inhibited enamel demineralization compared to a placebo, only MI Varnish, which contained fluoride and casein phosphopeptide-stabilized amorphous calcium phosphate (CPP-ACP), was superior to fluoride-alone varnishes (PUBMED:26541509). Additionally, fluoride release from varnishes can vary considerably and also depends on the dissolution medium, such as saliva or acidic conditions (PUBMED:25631724). This variability further complicates the potential correlation between fluoride release and enamel deposition. In conclusion, the studies suggest that while fluoride varnishes do release fluoride and can inhibit enamel demineralization, the relationship between the amount of fluoride released and the amount deposited on enamel is not straightforward and cannot be used as a sole indicator of a varnish's efficacy.
Instruction: Individualized early goal-directed therapy in systemic inflammation: is full utilization of preload reserve the optimal strategy? Abstracts: abstract_id: PUBMED:25402295 Individualized early goal-directed therapy in systemic inflammation: is full utilization of preload reserve the optimal strategy? Objectives: In severe acute pancreatitis, the administration of fluids in the presence of positive fluid responsiveness is associated with better outcome when compared to guiding therapy on central venous pressure. We compared the effects of such consequent maximization of stroke volume index with a regime using individual values of stroke volume index assessed prior to severe acute pancreatitis induction as therapeutic hemodynamic goals. Design: Prospective, randomized animal study. Setting: University animal research laboratory. Subjects: Thirty domestic pigs. Interventions: After randomization, fluid resuscitation was started 2 hours after severe acute pancreatitis induction and continued for 6 hours according to the respective treatment algorithms. In the control group, fluid therapy was directed by maximizing stroke volume index, and in the study group, stroke volume index assessed prior to severe acute pancreatitis served as primary hemodynamic goal. Measurements And Main Results: Within the first 6 hours of severe acute pancreatitis, the study group received a total of 1,935.8 ± 540.7 mL of fluids compared with 3,462.8 ± 828.2 mL in the control group (p &lt; 0.001). Pancreatic tissue oxygenation did not differ significantly between both groups. Vascular endothelial function, measured by flow-mediated vasodilation before and 6 hours after severe acute pancreatitis induction, revealed less impairment in the study group after treatment interval (-90.76% [study group] vs -130.89% [control group]; p = 0.046). Further, lower levels of heparan sulfate (3.41 ± 5.6 pg/mL [study group] vs 43.67 ± 46.61 pg/mL [control group]; p = 0.032) and interleukin 6 (32.18 ± 8.81 pg/mL [study group] vs 77.76 ± 56.86 pg/mL [control group]; p = 0.021) were found in the study group compared with control group. Histopathological examination of the pancreatic head and corpus at day 7 revealed less edema for the study group compared with the control group (1.82 ± 0.87 [study group] vs 2.89 ± 0.33 [control group, pancreatic head]; p = 0.03; 2.2 ± 0.92 [study group] vs 2.91 ± 0.3 [control group, pancreatic corpus]; p = 0.025). Conclusions: Individualized optimization of intravascular fluid status during the early course of severe acute pancreatitis, compared with a treatment strategy of maximizing stroke volume by fluid loading, leads to less vascular endothelial damage, pancreatic edema, and inflammatory response. abstract_id: PUBMED:25741797 Individualized early goal-directed therapy in systemic inflammation: is full utilization of preload reserve the optimal strategy? Erratum. [This corrects the article on p. e741 in vol. 42, issue 12, PMID: 25402295]. abstract_id: PUBMED:18443478 Early goal-directed therapy in severe sepsis and septic shock: a contemporary review of the literature. Purpose Of Review: Aggressive approaches to acute diseases such as acute myocardial infarction, trauma, and stroke have improved outcomes. Early goal-directed therapy for severe sepsis and septic shock represents a similar approach. An analysis of the literature assessing external validity and generalizability of this intervention is lacking. Recent Findings: Eleven peer-reviewed publications (1569 patients) and 28 abstracts (4429 patients) after the original early goal-directed therapy study were identified from academic, community and international settings. These publications total 5998 patients (3042 before and 2956 after early goal-directed therapy). The mean age, sex, APACHE II scores and mortality were similar across all studies. The mean relative and absolute risk reduction was 0.46 +/- 26% and 20.3 +/- 12.7%, respectively. These findings are superior to the original early goal-directed therapy trial which showed figures of 34% and 16%, respectively. A consistent and similar decrease in healthcare resource consumption was also found. Summary: Early goal-directed therapy modulates systemic inflammation and results in significant reductions in morbidity, mortality, and healthcare resource consumption. Early goal-directed therapy has been externally validated and is generalizable across multiple healthcare settings. Because of these robust findings, further emphasis should be placed on overcoming logistical, institutional, and professional barriers to implementation which can save the life of one of every six patients presenting with severe sepsis and septic shock. abstract_id: PUBMED:34150106 Analysis on the application value of goal-directed fluid therapy in patients undergoing laparoscopy-assisted radical gastrectomy with fast-track anesthesia. Objective: To explore the application value of goal-directed fluid therapy (GDFT) in patients undergoing laparoscopy-assisted radical gastrectomy with fast-track anesthesia. Methods: From December 2016 to December 2019, 74 patients who underwent laparoscopy-assisted radical gastrectomy under the concept of enhanced recovery after surgery (ERAS) in gastrointestinal Surgery department of Tongling People's Hospital were selected as research participants. They were divided into two groups: the routine group (patients were treated with conventional fluids) (n=37) and the GDFT group (patients were treated with GDFT) (n=37). In the two groups, patients were compared in terms of intraoperative fluid inflow and outflow, hemodynamic indexes before operation for 30 min (T0), after anesthesia induction for 30 min (T1), during operation for 0.5 h (T2) and 1.5 h (T3) and after operation (T4), postoperative complications, postoperative recovery, mini-mental state examination (MMSE) scores on the first day (d0) before operation and the first day (d1), the third day (d2) and the seventh day (d3) after operation, and inflammatory factor levels. Results: The amount of crystal input, colloid, blood loss, fluid replacement and urine volume in the GDFT group were significantly less than those in the routine group (P &lt; 0.05). From T1 to T4, the values of mean arterial pressure (MAP) and central venous pressure (CVP) in the GDFT group were higher than those in the routine group (p &lt; 0.05). The total incidence of postoperative complications in the GDFT group was lower than that in the routine group (P &lt; 0.05). Compared with those in the routine group, the postoperative anus exhaust time, the first time of starting to eat, the time of leaving bed, the duration of stay in the postanesthesia care unit and the hospital stay were significantly shorter in the GDFT group (P &lt; 0.05). From D1 to D3, the MMSE score in the GDFT group was higher than that in the routine group, while the levels of C-reactive protein (CPR), interleukin 6 (IL-6) and procalcitonin (PCT) were lower than those in the routine group (P &lt; 0.05). Conclusion: GDFT has a better effect on the rapid rehabilitation of patients undergoing laparoscopy-assisted radical gastrectomy during fast-track anesthesia, and it also has a positive effect on maintaining the stability of hemodynamics, reducing systemic inflammation and decreasing postoperative complications. abstract_id: PUBMED:27920568 Reduced butyrylcholinesterase activity is an early indicator of trauma-induced acute systemic inflammatory response. Purpose: Early diagnosis of systemic inflammatory response syndrome is fundamentally important for an effective and a goal-directed therapy. Various inflammation biomarkers have been used in clinical and experimental practice. However, a definitive diagnostic tool for an early detection of systemic inflammation remains to be identified. Acetylcholine (Ach) has been shown to play an important role in the inflammatory response. Serum cholinesterase (butyrylcholinesterase [BChE]) is the major Ach hydrolyzing enzyme in blood. The role of this enzyme during inflammation has not yet been fully understood. This study tests whether a reduction in the BChE activity could indicate the onset of the systemic inflammatory response upon traumatic injury. Patients And Methods: This observational study measured BChE activity in patients with traumatic injury admitted to the emergency room by using point-of-care-test system (POCT). In addition, the levels of routine inflammation biomarkers during the initial treatment period were measured. Injury Severity Score was used to assess the trauma severity. Results: Altered BChE activity was correlated with trauma severity, resulting in systemic inflammation. Reduction in the BChE activity was detected significantly earlier compared to those of routinely measured inflammatory biomarkers. Conclusion: This study suggests that the BChE activity reduction might serve as an early indicator of acute systemic inflammation. Furthermore, BChE activity, measured using a POCT system, might play an important role in the early diagnosis of the trauma-induced systemic inflammation. abstract_id: PUBMED:17573521 Prospective external validation of the clinical effectiveness of an emergency department-based early goal-directed therapy protocol for severe sepsis and septic shock. Objective: To determine the clinical effectiveness of implementing early goal-directed therapy (EGDT) as a routine protocol in the emergency department (ED). Methods: Prospective interventional study conducted over 2 years at an urban ED. Inclusion criteria included suspected infection, criteria for systemic inflammation, and either systolic BP &lt; 90 mm Hg after a fluid bolus or lactate concentration &gt;/= 4 mol/L. Exclusion criteria were age &lt; 18 years, contraindication to a chest central venous catheter, and need for immediate surgery. We prospectively recorded preintervention clinical and mortality data on consecutive, eligible patients for 1 year when treatment was at the discretion of board-certified emergency physicians. We then implemented an EGDT protocol (the intervention) and recorded clinical data and mortality rates for 1 year. Prior to the first year, we defined a 33% relative reduction in mortality (relative mortality reduction that was found in the original EGDT trial) to indicate clinical effectiveness of the intervention. Results: We enrolled 79 patients in the preintervention year and 77 patients in the postintervention year. Compared with the preintervention year, patients in the postintervention year received significantly greater crystalloid volume (2.54 L vs 4.66 L, p &lt; 0.001) and frequency of vasopressor infusion (34% vs 69%, p &lt; 0.001) during the initial resuscitation. In-hospital mortality was 21 of 79 patients (27%) before intervention, compared with 14 of 77 patients (18%) after intervention (absolute difference, - 9%; 95% confidence interval, + 5 to - 21%). Conclusions: Implementation of EGDT in our ED was associated with a 9% absolute (33% relative) mortality reduction. Our data provide external validation of the clinical effectiveness of EGDT to treat sepsis and septic shock in the ED. abstract_id: PUBMED:22447123 Early interventions in severe sepsis and septic shock: a review of the evidence one decade later. The outcomes of acute myocardial infarction, trauma, and stroke have improved by implementing processes that provide early diagnosis and aggressive interventions at the most proximal point of disease presentation. A common feature in these conditions is the implementation of early intervention strategies. One decade ago, a similar approach to sepsis began when a prospective randomized trial compared early goal-directed therapy (EGDT) to standard care using specific criteria for the early identification of high risk patients with infection. The components of EGDT were derived from expert consensus opinion to produce a protocol to reverse the hemodynamic perturbations of hypovolemia, vasodysregulation, myocardial suppression and increased metabolic demands for patients with severe sepsis in the intensive care unit (ICU). However, EGDT was provided at the most proximal phase of disease presentation in the Emergency Department (ED). With EGDT, a reduction in mortality of over 16% was shown over standard care. Since the EGDT study was published a decade ago, significant emphasis worldwide has been placed on a comprehensive approach to the first 6 hours of sepsis management which is commonly referred to as the resuscitation bundle (RB). The RB consists of early diagnosis, risk stratification using lactate levels, hemodynamic response after a fluid challenge, antibiotics, source control and hemodynamic optimization or EGDT. This review will examine one decade of evidence for the components of the RB examining its impact on systemic inflammation, the progression of organ failure, health care resource consumption and mortality in severe sepsis and septic shock. abstract_id: PUBMED:24519882 Systemic inflammation among breast cancer survivors: the roles of goal disengagement capacities and health-related self-protection. Background: This study examined the associations between breast cancer survivors' capacity to adjust to unattainable goals (through goal disengagement and goal reengagement), health-related self-protection (e.g., positive reappraisals), and low-grade systemic inflammation (i.e., C-reactive protein [CRP]). Methods: Self-reports of goal adjustment capacities and health-related self-protection were measured and concentrations of the inflammatory molecule CRP were quantified in a cross-sectional sample of 121 female breast cancer survivors (Mage 55.53, SD = 10.99 years). Results: Results from hierarchical linear regression analyses indicated that low levels of goal disengagement capacities predicted higher CRP. Moreover, health-related self-protection buffered the association between failure to disengage from unattainable goals and elevated CRP. These results were independent from potential confounders including age, education, smoking, BMI, cancer stage, and time since diagnosis. Conclusions: The study's findings suggest that goal disengagement capacities and health-related self-protection can work together in predicting systemic inflammation among breast cancer survivors. Failure to disengage from unattainable goals may trigger health-compromising inflammatory processes, unless breast cancer survivors are able to engage in self-protection to manage their health threats. abstract_id: PUBMED:29221272 Goal-directed fluid restriction using stroke volume variation and cardiac index during one-lung ventilation: a randomized controlled trial. Background: Goal-directed therapy confers a strong prognosis in patients undergoing major cardiac or noncardiac surgery. The present study investigated whether intraoperative goal-directed fluid restriction (GDFR) using stroke volume variation (SVV) and cardiac index could improve oxygenation and postoperative outcome in patients undergoing one-lung ventilation (OLV). Methods: A Total of 168 patients scheduled for elective thoracoscopic lobectomy under OLV were randomized into the GDFR protocol (group G) or conventional fluid therapy groups (group C). Patients in group C underwent conventional fluid therapy based on mean arterial pressure (MAP), central venous pressure (CVP), and urine volume, whereas those in group G received GDFR protocol associated with the SVV from 10-13% and the cardiac index was controlled at a minimum of 2.5 L/min/m2. The primary outcome variable was PaO2/FiO2. The secondary outcomes were other pulmonary variables and lung mechanics, inflammatory response, the incidence of postoperative pulmonary complications, and the length of hospital stay. Results: During surgery, the PaO2/FiO2 ratio in group G was more than that of group C at 30 and 60 min after OLV, 10 min after re-expansion, and the end of the operation (259±29 vs. 314±34; 253±30 vs. 308±35; 341±34 vs. 394±39; 349±35 vs. 401±39, respectively, all P&lt;0.001). Compared to conventional fluid therapy, GDFR protocol also significantly improved the hemodynamic and lung mechanics with the initiation of OLV. The incidence of postoperative pulmonary complications such as acute lung injury and pneumonia, and the length of hospital stay were decreased by GDFR protocol as compared to conventional fluid therapy (all P&lt;0.05). However, there were no significant differences between groups with respect to the concentration of serum tumor necrosis factor-α (TNF-α), interleukin-6 (IL-6), and interleukin-10 (IL-10). Conclusions: The GDFR protocol based on SVV and cardiac index applied in patients undergoing OLV improves intraoperative pulmonary oxygenation. It can also reduce the postoperative complications and length of hospital stay. However, the GDFR strategy cannot reduce the local or systemic inflammation. Trial Registration: Chinese Clinical Trials Register ChiCTR-INR-16008288, Registered 20 April, 2016. abstract_id: PUBMED:15257083 Cortisol levels and adrenal reserve after successful cardiac arrest resuscitation. The postresuscitation phase after out-of-hospital circulatory arrest shares similarities with severe sepsis. Corticosteroid replacement is beneficial in patients with septic shock and adrenal dysfunction. The goal of this study was to assess baseline cortisol and adrenal reserve of out-of-hospital circulatory arrest patients after recovery of spontaneous circulation. Thirty-three consecutive patients successfully resuscitated after cardiac arrest were prospectively included between March 2002 and June 2003. A serum cortisol assay and a corticotropin test (250 microg i.v.) were done 6 to 36 h after circulatory arrest. A cortisol increase smaller than 9 microg/dL after corticotropin (nonresponders) defined adrenal reserve insufficiency. Response status was compared in the three outcome groups: survival with full neurologic recovery (n = 4), early death from refractory shock (n = 10), or later death from neurologic dysfunction (n = 19). Patients who died of early refractory shock had lower baseline cortisol levels than patients who died of neurologic dysfunction (27 microg/dL [15-47] vs. 52 microg/dL [28-73], respectively; P &lt; 0.01), suggesting an inadequate adrenal response to severe systemic inflammation. Corticotropin response status was not associated with standard severity markers and seemed uninfluenced by therapeutic hypothermia. In conclusion, patients who die of early refractory shock after cardiopulmonary resuscitation may have an inadequate adrenal response to the stress associated with this condition. Thresholds for cortisol levels at baseline and after corticotropin need to be determined in this clinical setting. Answer: The concept of individualized early goal-directed therapy (EGDT) in systemic inflammation, such as severe sepsis and septic shock, is to tailor fluid resuscitation and other therapeutic interventions to the specific needs of the patient rather than applying a one-size-fits-all approach. The question of whether full utilization of preload reserve is the optimal strategy in such cases is addressed by examining the effects of different fluid management strategies on patient outcomes. A study on severe acute pancreatitis in pigs found that individualized optimization of intravascular fluid status, as opposed to maximizing stroke volume by fluid loading, led to less vascular endothelial damage, pancreatic edema, and inflammatory response (PUBMED:25402295). This suggests that a more conservative approach to fluid resuscitation that avoids overloading the patient may be beneficial. In the context of severe sepsis and septic shock, EGDT has been shown to modulate systemic inflammation and result in significant reductions in morbidity, mortality, and healthcare resource consumption, with external validation across multiple healthcare settings (PUBMED:18443478). This supports the use of individualized EGDT as an effective strategy for managing systemic inflammation. Furthermore, goal-directed fluid therapy (GDFT) has been demonstrated to have a positive effect on the rapid rehabilitation of patients undergoing laparoscopy-assisted radical gastrectomy during fast-track anesthesia, maintaining the stability of hemodynamics, reducing systemic inflammation, and decreasing postoperative complications (PUBMED:34150106). In summary, while the full utilization of preload reserve may not be the optimal strategy, individualized EGDT that takes into account the patient's preload reserve and other hemodynamic parameters appears to be beneficial in managing systemic inflammation and improving patient outcomes. This approach should be tailored to each patient's unique physiological status and the specific clinical scenario.
Instruction: Women in surgery: do we really understand the deterrents? Abstracts: abstract_id: PUBMED:16618901 Women in surgery: do we really understand the deterrents? Hypothesis: Women are deterred from a surgical career owing to a lack of role models rather than lifestyle considerations. Design: Survey. Setting: University teaching hospital. Participants: Surgery and obstetrics/gynecology attending physicians, residents, and medical students. Intervention: Questionnaire. Main Outcome Measures: Potential deterrents to a surgical career. Results: Men and women had a similar interest in a surgical career before their surgical rotation (64% vs 53%, P = .68). A similar percentage developed a mentor (40.0% vs 45.9%, P = .40). Women were far more likely to perceive sex discrimination (46.7% vs 20.4%, P = .002), most often from male attending physicians (33.3%) or residents (31.1%). Women were less likely to be deterred by diminishing rewards (4.4% vs 21.6%, P = .003) or workload considerations (28.9% vs 49.0%, P = .02). They were also less likely to cite family concerns as a deterrent (47.8% vs 66.7%, P = .02) and equally likely to be deterred by lifestyle during residency (83.3% vs 76.5%, P = .22). However, women were more likely to be deterred by perceptions of the "surgical personality" (40.0% vs 21.6%, P = .03) and the perception of surgery as an "old boys' club" (22.2% vs 3.9%, P = .002). Conclusions: Men and women are very similar in what they consider important in deciding on a surgical career. Women are not more likely to be deterred by lifestyle, workload issues, or lack of role models. However, the perceived surgical personality and surgical culture is a sex-specific deterrence to a career in surgery for women. abstract_id: PUBMED:10577294 Older age, aggressiveness of care, and survival for seriously ill, hospitalized adults. SUPPORT Investigators. Study to Understand Prognoses and Preferences for Outcomes and Risks of Treatments. Background: Older age is associated with less aggressive treatment and higher short-term mortality due to serious illness. It is not known whether less aggressive care contributes to this survival disadvantage in elderly persons. Objective: To determine the effect of age on short-term survival, independent of baseline patient characteristics and aggressiveness of care. Design: Secondary analysis of data from a prospective cohort study. Setting: Five academic medical centers participating in SUPPORT (Study to Understand Prognoses and Preferences for Outcomes and Risks of Treatments). Patients: 9105 adults hospitalized with one of nine serious illnesses associated with an average 6-month mortality rate of 50%. Measurements: Survival through 180 days of follow-up. In Cox proportional hazards modeling, adjustment was made for patient sex; ethnicity; income; baseline physical function; severity of illness; intensity of hospital resource use; presence of do-not-resuscitate orders on study day 1; and presence and timing of decisions to withhold transfer to the intensive care unit, major surgery, dialysis, blood transfusion, vasopressors, and tube feeding. Results: The mean (+/- SD) patient age was 63 +/- 16 years, 44% of patients were female, and 16% were black. Overall survival to 6 months was 53%. In analyses that adjusted for sex, ethnicity, income, baseline functional status, severity of illness, and aggressiveness of care, each additional year of age increased the hazard of death by 1.0% (hazard ratio, 1.010 [95% CI, 1.007 to 1.013]) for patients 18 to 70 years of age and by 2.0% (hazard ratio, 1.020 [CI, 1.013 to 1.026]) for patients older than 70 years of age. Adjusted estimates of age-specific 6-month mortality rates were 44% for 55-year-old patients, 48% for 65-year-old patients, 53% for 75-year-old patients, and 60% for 85-year-old patients. Similar results were obtained in analyses that did not adjust for aggressiveness of care. Acute physiology and diagnosis had much larger relative contributions to prognosis than age. Conclusions: We found a modest independent association between patient age and short-term survival of serious illness. This age effect was not explained by the current practice of providing less aggressive care to elderly patients. abstract_id: PUBMED:10068357 Patient age and decisions to withhold life-sustaining treatments from seriously ill, hospitalized adults. SUPPORT Investigators. Study to Understand Prognoses and Preferences for Outcomes and Risks of Treatment. Background: Patient age may influence decisions to withhold life-sustaining treatments, independent of patients' preferences for or ability to benefit from such treatments. Controversy exists about the appropriateness of using age as a criterion for making treatment decisions. Objective: To determine the effect of age on decisions to withhold life-sustaining therapies. Design: Prospective cohort study. Setting: Five medical centers participating in the Study to Understand Prognoses and Preferences for Outcomes and Risks of Treatments (SUPPORT). Patients: 9105 hospitalized adults who had one of nine illnesses associated with an average 6-month mortality rate of 50%. Measurements: Outcomes were the presence and timing of decisions to withhold ventilator support, surgery, and dialysis. Adjustment was made for sociodemographic characteristics, prognoses, baseline function, patients' preferences for life-extending care, and physicians' understanding of patients' preferences for life-extending care. Results: The median patient age was 63 years; 44% of patients were women, and 53% survived to 180 days. In adjusted analyses, older age was associated with higher rates of withholding each of the three life-sustaining treatments studied. For ventilator support, the rate of decisions to withhold therapy increased 15% with each decade of age (hazard ratio, 1.15 [95% CI, 1.12 to 1.19]); for surgery, the increase per decade was 19% (hazard ratio, 1.19 [CI, 1.12 to 1.27]); and for dialysis, the increase per decade was 12% (hazard ratio, 1.12 [CI, 1.06 to 1.19]). Physicians underestimated older patients' preferences for life-extending care; adjustment for this underestimation resulted in an attenuation of the association between age and decisions to withhold treatments. Conclusion: Even after adjustment for differences in patients' prognoses and preferences, older age was associated with higher rates of decisions to withhold ventilator support, surgery, and dialysis. abstract_id: PUBMED:9436083 Patients with DNR orders in the operating room: surgery, resuscitation, and outcomes. SUPPORT Investigators. Study to Understand Prognoses and Preferences for Outcomes and Risks of Treatments. N/A abstract_id: PUBMED:8790228 Seriously ill hospitalized adults: do we spend less on older patients? Support Investigators. Study to Understand Prognoses and Preference for Outcomes and Risks of Treatments. Objective: To determine the effect of age on hospital resource use for seriously ill adults, and to explore whether age-related differences in resource use are explained by patients' severity of illness and preferences for life-extending care. Study Design: Prospective cohort study. Setting: Five geographically diverse academic acute care medical centers participating in the SUPPORT Project. Patients: A total of 4301 hospitalized adults with at least one of nine serious illnesses associated with an average 6-month mortality of 50%. Measurements: Resource utilization was measured using a modified version of the Therapeutic Intervention Scoring System (TISS); the performance of three invasive procedures (major surgery, dialysis, and right heart catheter placement); and estimated hospital costs. Results: The median patient age was 65; 43% were female, and 48% died within 6 months. After adjustment for severity of illness, prior functional status, and study site, when compared with patients younger than 50, patients 80 years or older were less likely to undergo major surgery (adjusted odds ratio .46), dialysis (.19), and right heart catheter placement (.59) and had median TISS scores and estimated hospital costs that were 3.4 points and $ 71.61 lower, respectively. These differences persisted after further adjustment for patients' preferences for life-extending care. Conclusions: Compared with similar younger patients, seriously ill older patients receive fewer invasive procedures and hospital care that is less resource-intensive and less costly. This preferential allocation of hospital services to younger patients is not based on differences in patients' severity of illness or general preferences for life-extending care. abstract_id: PUBMED:36456020 Motivators and deterrents for early career female doctors applying to surgical training programmes in the UK National Health Service: a mixed-methods study. Objectives: To perform a mixed-methods study identifying motivators and deterrents to female doctors interested in core surgical training (CST). To provide tangible implementations based on the findings. Design: This study used quantitative (questionnaires) and qualitative (semistructured interviews (SSIs)) analyses. Participants completed online questionnaires on Qualtrics and SSIs were conducted remotely on Microsoft Teams. Questions were derived from previous studies and a novel term, the gender impact rating (GIR), was coined to assess the impact of gender on opportunities available during CST application. Setting: Participants were working in the UK National Health Service and data collected from December 2020 to January 2021. Participants: A total of 100 female surgical trainees in the UK ranging from Foundation Year 2 to Core Training Year 2. Main Outcome Measures: Participants ranked factors by their influence on their CST application. Of the 100 trainees, 21 were randomly selected for an SSI to explore their questionnaire responses. Statistical analyses were performed using MATLAB and SPSS, alongside a thematic analysis of the interviews. Results: A total of 44 out of 100 questionnaire respondents ranked early exposure to surgery as the most influential motivator, while 43% selected work-life balance as the greatest deterrent and 33% suggested mentoring schemes to encourage women to apply to CST. The median GIR was 3 out of 5, indicating a moderate perceived impact of gender on opportunities available during CST application. Qualitative analysis found four overarching themes: institutional factors (including mentorship schemes), organisational culture (including active engagement), social factors and personal factors. Conclusion: Thematic analysis suggested that seniors involving women in theatre and a supportive work environment would encourage entry of more female surgeons. Therefore, the proposed implementations are the active engagement of women in theatre and destigmatising less than full-time training. Further research into ethnicity and personality on motivations to enter surgery is advised. abstract_id: PUBMED:32237628 Genetics and epigenetics of obesity: keys to understand Genetics and epigenetics of obesity: the keys to understand. Obesity is a multifactorial disease due to central dysregulation of energy homeostasis. The contribution of genetics is constant but varies according to the situations from the rare forms of non-syndromic and syndromic monogenic obesities (about 5% of cases) and the socalled polygenic obesity (or common obesity) which is the most frequent situation (95% of cases). Environmental factors (early pre- and post-natal, societal or psychological determinants) always interact closely with the genetic factors of predisposition. The better understanding of these different actors should lead in the future to a real personalized medicine (targeted drug treatments according to the identified genetic anomaly and / or multidisciplinary management or even bariatric surgery according to clinical situations). abstract_id: PUBMED:21467150 Differential inputs from chemosensory appendages mediate feeding responses to glucose in wild-type and glucose-averse German cockroaches, Blattella germanica. Glucose is a universal phagostimulant in many animal species, including the cockroach Blattella germanica. However, some natural populations of B. germanica have been found that are behaviorally deterred from eating glucose. In dose-response studies, glucose was a powerful phagostimulant for wild-type cockroaches, but it strongly deterred feeding in a glucose-averse strain. Both strains, however, exhibited identical dose-response curves to other phagostimulants and deterrents. As a lead to electrophysiological and molecular genetics studies to investigate the mechanisms that underlie glucose-aversion, we used 2 assay paradigms to delineate which chemosensory appendages on the head contribute to the reception of various phagostimulatory and deterrent chemicals. Both simultaneous dual stimulation of the antenna and mouthparts of the insects and 2-choice preference tests in surgically manipulated insects showed that the glucose-averse behavior could be elicited through the gustatory systems of the antennae and mouthparts. The paraglossae alone were sufficient for maximum sensitivity to both phagostimulants and deterrents, including glucose as a deterrent in the glucose-averse strain. In addition to the paraglossae, the labial palps were more important than the maxillary palps in the reception of deterrents (caffeine in both strains and glucose in the glucose-averse strain). The maxillary palps, on the other hand, played a more important role in the reception of phagostimulants (fructose in both strains and glucose in the wild-type strain). Our results suggest that distinct inputs from the chemosensory system mediate opposite feeding responses to glucose in the wild-type and glucose-averse strains. abstract_id: PUBMED:30368673 A qualitative study exploring what it takes to be physically active with a stoma after surgery for rectal cancer. Purpose: Many adults must cope with the adverse effects of cancer and adjuvant therapies, which may limit their engagement in health-enhancing behaviors such as physical activity (PA). Furthermore, the placement of a stoma during surgery for rectal cancer may cause additional challenges to being physically active. The aim of this study was to explore the experiences of rectal cancer survivors who were living with a stoma and the impact on their engagement in PA. Methods: Interpretive phenomenological analysis was used as the approach for this qualitative study. Fifteen rectal cancer survivors took part in a semi-structured interview post-chemotherapy, and thematic analysis was used to identify themes within their accounts. Results: Three themes captured participants' personal lived experience: (1) reasons for engaging in PA, (2) deterrents to engaging in PA, and (3) practical implications. Conclusions: Participants' accounts offered insight into both their cancer and stoma-related experiences, highlighting both reasons for and deterrents to engaging in PA. Further, findings helped to identify tangible strategies that those learning to be physically active with a stoma may find useful. abstract_id: PUBMED:22043827 Statins as potential treatment for cholesterol gallstones: an attempt to understand the underlying mechanism of actions. Introduction: Statin therapy is widely used across the globe for the treatment and prevention of cardiovascular disease (CVD). It is well established that statin therapy is associated with significant decreases in low-density lipoprotein cholesterol (LDL-C) and plasma cholesterol levels. Cholesterol gallstones are a common problem, resulting in hospital admission and surgery, throughout western healthcare systems. Areas Covered: This review describes the mechanisms, and addresses the potential, for statins to be used as a treatment for gallstones. Medline was searched for the risk factors and treatment of cholesterol gallstones. Expert Opinion: Obesity, metabolic syndrome, non-alcoholic fatty liver disease (NAFLD), insulin resistance and high-fat diets (unsaturated fats) rich in cholesterol are all associated risk factors for cholesterol gallstones. In view of the high prevalence of cholesterol gallstones, there is an urgent need to understand whether pharmacological therapies can be harnessed for the treatment of cholesterol gallstones. Gallstones are shown to be associated with an increased risk, not only of mortality, but also of CVD. Statins, widely used in prevention of CVD and hypercholesteremia, have been shown to dissolve cholesterol gallstones in animal models and human studies, highlighting the potential for a pharmacological therapy for gallstones. More studies are required to understand the role of statins in the treatment of gallstones and for comparison with current treatment strategies. Answer: The deterrents to women pursuing a career in surgery have been explored in several studies, and the findings suggest that the reasons are multifaceted and complex. According to a survey conducted at a university teaching hospital, women were not more likely to be deterred by lifestyle, workload issues, or lack of role models. Instead, they were more likely to be deterred by perceptions of the "surgical personality" and the perception of surgery as an "old boys' club" (PUBMED:16618901). This indicates that the culture and environment within the surgical field may be significant factors in discouraging women from pursuing a surgical career. Additionally, a mixed-methods study in the UK National Health Service found that early exposure to surgery was the most influential motivator for female doctors interested in core surgical training, while work-life balance was the greatest deterrent. The study also suggested that mentorship schemes and a supportive work environment could encourage more women to enter the surgical field (PUBMED:36456020). These findings highlight that while practical considerations such as workload and lifestyle are important, the social and cultural aspects of the surgical profession play a crucial role in influencing women's decisions to pursue a career in surgery. Addressing these deterrents may require changes in the organizational culture, active engagement of women in the surgical environment, and destigmatization of less than full-time training to create a more inclusive and supportive atmosphere for female surgeons.
Instruction: Defining maltreatment chronicity: are there differences in child outcomes? Abstracts: abstract_id: PUBMED:38182921 Association between maltreatment, hair cortisol concentration, positive parent-child interaction, and psychosocial outcomes in Chinese preschool children. Children now are facing an increasing risk of early life stress (ELS), which leads to detrimental psychosocial outcomes. Behavior studies suggested that positive parental interactions might moderate the negative impact of ELS, but the related biological alteration remains unclear. This study aims to investigate whether positive parent-child interactions moderate the association between maltreatment (as a severe form of ELS) and hair cortisol concentration (HCC), as well as between HCC and psychosocial outcomes in young children. Participants were 6-year-old Chinese children (N = 257, Mage = 6.2, 121 were male) selected by stratified cluster random sampling from a Shanghai population representative cohort. Proximal 3 cm hair strands were analyzed using liquid chromatography coupled with tandem mass spectrometry for HCC. Children's psychosocial outcome was evaluated using the parental report Strengths and Difficulties Questionnaire (SDQ). Parents also reported the frequency of positive parent-child interactions using the Chinese Parent-Child Interaction Scale (CPCIS) as well as the history of maltreatment. Multi-level logistic regression models adjusting for individual, kindergarten, and district confounders were used to evaluate the associations between maltreatment, HCC, and psychosocial outcomes. Interactions terms tested whether more frequent positive parent-child interactions moderates the association between maltreatment and HCC, as well as between HCC and psychosocial outcomes. Maltreated children exhibited higher levels of HCC (B = 1.20, 95% CI: 0.38,2.02; p = 0.004), and children with higher HCC exhibited poorer psychosocial outcomes (B = 0.34, 95% CI: 0.18,0.51; p &lt; 0.001). Positive parent-child interactions did not have a moderating effect on the association between maltreatment and HCC, but they demonstrated a moderating effect on the association between increased HCC and psychosocial outcomes (interaction term: B = -0.42, 95% CI: -0.75,-0.10; p = 0.01). These findings provide evidence that positive parental interaction may serve as a moderator between chronic cortisol exposure and psychosocial problems. It highlights the importance of frequent parent-child interactions, especially among children under a high risk of ELS. abstract_id: PUBMED:34586552 Child Maltreatment and Long-Term Physical and Mental Health Outcomes: An Exploration of Biopsychosocial Determinants and Implications for Prevention. Child maltreatment rates remain unacceptably high and rates are likely to escalate as COVID-related economic problems continue. A comprehensive and evidence-building approach is needed to prevent, detect and intervene where child maltreatment occurs. This review identifies key challenges in definitions, overviews the latest data on prevalence rates, reviews risk and protective factors, and examines common long-term mental health outcomes for children who experience maltreatment. The review takes a systems approach to child maltreatment outcomes through its focus on the overall burden of disease, gene-environment interactions, neurobiological mechanisms and social ecologies linking maltreatment to mental ill-health. Five recommendations relating to the accurate measurement of trends, research on brain structures and processes, improving the reach and impact of teleservices for detecting, preventing and treating child maladjustment, community-based approaches, and building population-focused multidisciplinary alliances and think tanks are presented. abstract_id: PUBMED:31377533 Sexual minority status and child maltreatment: How do health outcomes among sexual minority young adults differ due to child maltreatment exposure? Background: Although prior studies indicate heightened health and well-being concerns for sexual minority (SM) youth, as well as for youth exposed to child maltreatment in the general population, it is unclear whether there are differences in these outcomes among SM youth that have and have not experienced maltreatment. Objective: Our aim was to investigate the unique associations between child maltreatment and emerging health outcomes beyond the impact of SM status. Data was drawn from a nationally representative sample of 648 SM youth in the U.S. in grades 7-12 during the 1994-1995 school year. Methods: Using data from the National Longitudinal Study of Adolescent to Adult Health, descriptive statistics and ordinary least squares regression models were used to investigate differences in SM young adult outcomes by experiences of child maltreatment. Results: Maltreatment among SM individuals showed strong associations with poor adult mental health outcomes (e.g. depression, anxiety, isolation, and suicidal ideation) and fairly strong negative associations with general health outcomes (e.g., heart and lung problems) when compared to their non-maltreated peers. Associations with maltreatment and behavioral health and socioeconomic outcomes were not as strong for this population, which suggests the effects of maltreatment for SM youth are most salient in regards to mental and physical health. Conclusions: Findings provide insight into what areas of health and well-being should be focused on when working with SM youth that have been maltreated, and offer evidence to encourage further exploration of the outcomes of SM maltreated individuals in young adulthood. abstract_id: PUBMED:35583017 Child maltreatment and asthma. A growing body of evidence suggests a potential link between child maltreatment and asthma. Determining whether and how child maltreatment causes or worsens asthma would have major implications for disease prevention and treatment, as well as public health policy. In this article, we examine epidemiologic studies of child maltreatment and asthma and asthma-related outcomes, review the evidence for potential mechanisms underlying the child maltreatment-asthma association, and discuss future directions. To date, a child maltreatment-asthma link has been reported in most studies of children and adults, though the type of maltreatment associated with asthma has differed across studies. Such discrepant findings are likely explained by differences in study design and quality. All studies have been limited by potential under-reporting of child maltreatment and selection bias, and nonthorough assessment of asthma. Despite these limitations, the aggregate evidence from epidemiologic studies suggests a possible causal link between child maltreatment and asthma, though the relative contributions of various types of maltreatment (physical, sexual, emotional, or neglect) are unclear. To date, there is insufficient evidence of an association between child maltreatment and lung function in children or adults. Limited evidence further suggests that child maltreatment could influence the development or severity of asthma through direct effects on stress responses and anxiety- or depressive-related disorders, immunity, and airway inflammation, as well as indirect effects such as increased obesity risk. Future prospective studies should aim to adequately characterize both child maltreatment and asthma, while also assessing relevant covariates and biomarkers of stress, immune, and therapeutic responses. abstract_id: PUBMED:32018212 Intergenerational maltreatment and child emotion dysregulation. Background: Childhood maltreatment has been related to adverse outcomes on social, cognitive, and biological development with sequelae present throughout the lifespan. As such, caregivers maltreated in childhood may face a different set of challenges and interpersonal stressors in rearing their children. Parental history of maltreatment has the potential to increase the risk of parental depression and exposure to maltreatment in the next generation, both of which can have a negative effect on children's development. Objective: The purpose of this study is to investigate maternal depression and child experiences of maltreatment as mediators of the relationship between mothers' own maltreatment experiences and child emotion dysregulation in children aged 10-12. Participants And Setting: 378 low-income mothers and their children were recruited to participate in a research summer camp from 2004-2007. Method: Mothers self-reported on their experiences of maltreatment in childhood and current depressive symptoms. Current generation child maltreatment information was coded from Child Protective Services records. Child emotion dysregulation (rated by camp counselors) was the outcome measure in this study. Structural equation modeling was employed to test associations between maternal maltreatment and child emotion dysregulation. Results: Maternal history of maltreatment related to both child maltreatment (β = .24, SE = .052, p &lt; .001) and greater maternal depressive symptoms (β = .28, SE = .049, p &lt; .001). Only child maltreatment mediated the effect of mothers' maltreatment on child emotion dysregulation (95 %CI: .005-.023). Conclusions: In this low-income sample, the rate of intergenerational maltreatment is high and represents a pathway of influence that increases risk for maladaptive socioemotional child development. abstract_id: PUBMED:15970326 Defining maltreatment chronicity: are there differences in child outcomes? Objective: For nearly 25 years researchers have suggested that better taxonomic systems conceptualizing and reliably differentiating among different dimensions of maltreatment are required. This study examines the utility of three different characterizations of one dimension of maltreatment, chronicity, to predict child behavioral and emotional functioning in a sample of maltreated children. A secondary objective of the study is to examine additional parameters of maltreatment inherent in our definitions of chronicity: age at first report to CPS, extent and continuity of maltreatment. Method: The study consists of children reported for maltreatment (N=519) from the larger LONGSCAN study cohort. Lifetime maltreatment data were collected from CPS records and coded into two chronicity constructs: "developmental" and "calendar" definitions. Variables for age at first report, frequency, extent and continuity of maltreatment reports also were constructed. Hierarchical regression analyses were utilized to determine the extent to which the various chronicity constructs contributed to the prediction of child outcomes. Results: The most salient definition of chronicity, in terms of its effects on child behavioral and emotional functioning, varied by outcome. The developmental definition was found to have the most balanced sensitivity across outcomes. Among other significant findings, extent and continuity of maltreatment contributed respectively to the prediction of behavior and emotional trauma symptoms. Early age at first report was a predictor of poor daily living skills. Conclusion: Chronicity is a complex construct. Findings indicate there are multiple parameters that make up the chronicity construct itself that may be important for understanding child outcomes. abstract_id: PUBMED:28163367 Parent-Child Agreement on Parent-to-Child Maltreatment. Parent-child agreement on child maltreatment was examined in a multigenerational study. Questionnaires on perpetrated and experienced child maltreatment were completed by 138 parent-child pairs. Multi-level analyses were conducted to explore whether parents and children agreed about levels of parent-to-child maltreatment (convergence), and to examine whether parents and children reported equal levels of child maltreatment (absolute differences). Direct and moderating effects of age and gender were examined as potential factors explaining differences between parent and child report. The associations between parent- and child-reported maltreatment were significant for all subtypes, but the strength of the associations was low to moderate. Moreover, children reported more parent-to-child neglect than parents did. Older participants reported more experienced maltreatment than younger participants, without evidence for differences in actual exposure. These findings support the value of multi-informant assessment of child maltreatment to improve accuracy, but also reveal the divergent perspectives of parents and children on child maltreatment. abstract_id: PUBMED:37483318 Use of technology in evidence-based programs for child maltreatment and its impact on parent and child outcomes. Introduction: Technology has been used in evidence-based child maltreatment (CM) programs for over a decade. Although advancements have been made, the extent of the application of technology in these programs, and its influence on parental and child outcomes, remains unclear within the context of changes that emerged because of the COVID-19 pandemic. This scoping review provides a contextualized overview and summary of the use of technology in evidence-based parenting and child programs serving families impacted by child maltreatment and the effects of technology-enhanced programs on target outcomes. Materials And Methods: Using Arksey and O'Malley's methodological framework, we searched seven databases to identify peer-reviewed and grey literature published in English from 2000 to 2023 on evidence-based programs, according to the California Evidence-Based Clearinghouse (CEBC), that included technological supports for two populations: at-risk parents for child maltreatment prevention, and children and youth 0-18 years exposed to child maltreatment. All study designs were included. Results: Eight evidence-based parenting programs and one evidence-based child trauma program were identified as using technology across a total of 25 peer-reviewed articles and 2 peer-reviewed abstracts meeting inclusion criteria (n = 19 on parent-level programs; n = 8 on child-level programs). Four studies were published in the context of COVID-19. Two main uses of technology emerged: (1) remote programmatic delivery (i.e., delivering all or part of the program virtually using technology) and (2) programmatic enhancement (i.e., augmenting program content with technology). Improvements across parenting and child mental health and behavioral outcomes were generally observed. Discussion: Technology use in evidence-based child maltreatment programs is not new; however, the small sample since the start of the COVID-19 pandemic in this review that met inclusion criteria highlight the dearth of research published on the topic. Findings also suggest the need for the inclusion of implementation outcomes related to adoption and engagement, which could inform equitable dissemination and implementation of these programs. Additional considerations for research and practice are discussed. abstract_id: PUBMED:26980121 Child Maltreatment Prevention and the Scope of Child and Adolescent Psychiatry. Child maltreatment is one of the most deleterious known influences on the mental health and development of children. This article briefly reviews a complement of methods that are ready to incorporate into child and adolescent psychiatric practice, by having been validated either with respect to the prevention of child maltreatment or with respect to adverse outcomes associated with maltreatment (and primarily focused on enhancing the caregiving environment); they are feasible for integration into clinical decision making, and most importantly, can be included in the training of the next generation of clinicians. abstract_id: PUBMED:30037281 Early Exposure to Child Maltreatment and Academic Outcomes. Early childhood trauma increases the risk of academic difficulties. The purpose of this study was to investigate the prevalence of early contact with child protection services (CPS) and to determine whether early exposure to maltreatment investigations was associated with important academic outcomes. The authors focused specifically on standardized test scores (math and reading), grade retention, and special education status in third grade. The sample was diverse and included all children born between 2000 and 2006 and enrolled in Michigan's public schools ( N = 732,838). By the time these students reached third grade, approximately 18% were associated with a formal CPS investigation. In some school districts, more than 50% of third graders were associated with an investigation. African American and poor students were more likely to be investigated for maltreatment. Children associated with maltreatment investigations scored significantly lower on standardized math and reading tests, were more likely to be identified as needing special education, and were more likely to be held back at least one grade. These findings indicate that involvement with CPS is not an infrequent event in the lives of young children and that within some school districts, maltreatment investigations are the norm. Child welfare and educational systems must collaborate so that the early academic struggles experienced by victims of maltreatment do not mature into more complicated difficulties later in life. Answer: Yes, there are differences in child outcomes when defining maltreatment chronicity. The study described in abstract PUBMED:15970326 examined the utility of three different characterizations of maltreatment chronicity to predict child behavioral and emotional functioning in a sample of maltreated children. The findings indicated that the most salient definition of chronicity varied by outcome, with the developmental definition having the most balanced sensitivity across outcomes. The study also found that extent and continuity of maltreatment contributed respectively to the prediction of behavior and emotional trauma symptoms, and that an early age at first report was a predictor of poor daily living skills. This suggests that chronicity is a complex construct with multiple parameters that may be important for understanding child outcomes.
Instruction: Do racial disparities persist in infant mortality from respiratory distress syndrome? Abstracts: abstract_id: PUBMED:25849882 Socioeconomic position, health behaviors, and racial disparities in cause-specific infant mortality in Michigan, USA. Objectives: Studies about racial disparities in infant mortality suggest that racial differences in socioeconomic position (SEP) and maternal risk behaviors explain some, but not all, excess infant mortality among Blacks relative to non-Hispanic Whites. We examined the contribution of these to disparities in specific causes of infant mortality. Methods: We analyzed data about 2,087,191 mother-child dyads in Michigan between 1989 and 2005. First, we calculated crude Black-White infant mortality ratios independently and by specific cause of death. Second, we fit multivariable Poisson regression models of infant mortality, overall and by cause, adjusting for SEP and maternal risk behaviors. Third, Crude Black-White mortality ratios were compared to adjusted predicted probability ratios, overall and by specific cause. Results: SEP and maternal risk behaviors explained nearly a third of the disparity in infant mortality overall, and over 25% of disparities in several specific causes including homicide, accident, sudden infant death syndrome, and respiratory distress syndrome. However, SEP and maternal risk behaviors had little influence on disparities in other specific causes, such as septicemia and congenital anomalies. Conclusions: These findings help focus policy attention toward disparities in those specific causes of infant mortality most amenable to social and behavioral intervention, as well as research attention to disparities in specific causes unexplained by SEP and behavioral differences. abstract_id: PUBMED:21121951 Do racial disparities persist in infant mortality from respiratory distress syndrome? Objective: To examine the subset of the infants who died in 2005 from respiratory distress syndrome to determine if racial disparities persist. Design: A secondary data analysis was performed on the data set of infants who died of respiratory distress syndrome (RDS) in the 2005 period linked birth-infant death data file (N=774). Methods: Logistic regression was performed to evaluate the contribution of race to RDS-related mortality. Results: When analyzed independently, Black infants were 1.3 times more likely to die of RDS mortality than White infants. After the known predictors of gestation, birth weight, and gender were added to the model, Black race was no longer a significant predictor of RDS. Conclusions: Despite Black race having been identified as a contributor to RDS mortality in the past, race was not a predictor of RDS mortality in the 2005 cohort. Previous analyses comparing pre- and post-surfactant era mortality assumed that all eligible infants received surfactant, and Black infants did not respond as well as White infants. The 2003 birth certificate revision included surfactant administration, but only 12 states reported that data in the 2005 data set. Only 22% of eligible infants had documentation of surfactant administration. Future research is needed to confirm that eligible infants are receiving surfactant and to evaluate the response of infants by race. If Black infants do not respond to the available surfactant replacement products, genomics research is needed to explore targeted therapies for this group. abstract_id: PUBMED:22224890 Disparities in infant mortality and effective, equitable care: are infants suffering from benign neglect? Quality care for infant mortality disparity elimination requires services that improve health status at both the individual and the population level. We examine disparity reduction due to effective care and ask the following question: Has clinical care ameliorated factors that make some populations more likely to have higher rates of infant mortality compared with other populations? Disparities in postneonatal mortality due to birth defects have emerged for non-Hispanic black and Hispanic infants. Surfactant and antenatal steroid therapy have been accompanied by growing disparities in respiratory distress syndrome mortality for black infants. Progesterone therapy has not reduced early preterm birth, the major contributor to mortality disparities among non-Hispanic black and Puerto Rican infants. The Back to Sleep campaign has minimally reduced SIDS disparities among American Indian/Alaska Native infants, but it has not reduced disparities among non-Hispanic black infants. In general, clinical care is not equitable and contributes to increasing disparities. abstract_id: PUBMED:29071590 Racial/Ethnic Disparities in Morbidity and Mortality for Preterm Neonates Admitted to a Tertiary Neonatal Intensive Care Unit. Objective: The objective of this study was to assess whether in-hospital morbidity or mortality differed by race/ethnicity for preterm neonates admitted to the neonatal intensive care unit (NICU). Study Design: In a retrospective cohort study, preterm infants, &lt; 37 weeks, were admitted to the NICU from 1994 to 2009. Exclusions included structural anomalies and aneuploidy. Primary outcome was in-hospital mortality (IHM). Secondary outcomes were respiratory distress syndrome (RDS), interventricular hemorrhage (IVH), necrotizing enterocolitis (NEC), and retinopathy of prematurity (ROP). Sub-analysis of very preterm (VPT) infants, &lt; 28 weeks, was performed. Five racial/ethnic groups (REGs) were compared: White, Black, Hispanic, Asian, and Mixed. Associations were modeled by logistic regression. White neonates (WNs) were the referent group. Unadjusted and adjusted odds ratios and 95% confidence intervals for remaining REGs were reported. p value was significant at 5% for overall tests and at Bonferroni-corrected level &lt; 0.0125 for between-race comparisons with WNs. Results: Four thousand nine hundred fifty-five preterm neonates were identified; 153 were excluded leaving 4802 for analysis. After controlling covariates that were chosen a priori, there was no difference across REGs for IHM (all between-race comparison p values &gt; 0.0125). There was a significant difference in RDS among Black neonates (BNs) (aOR 0.57, 95% CI 0.45-0.73; p &lt; 0.001) and Hispanic neonates (HNs) (aOR 0.67, 95% CI 0.50-0.89; p = 0.005) compared to WNs. The risk of ROP was significantly different across REGs with HNs having a 70% increase in ROP (aOR 1.70, 95% CI 1.15-2.49; p = 0.008) and Mixed neonates (MNs) experiencing a 55% reduction (aOR 0.45, 95% CI 0.29-0.68; p &lt; 0.001) compared to WNs. There was no difference in IVH or NEC across REGs (all p values &gt; 0.0125). In the VPT cohort sub-analysis, BNs experienced a significant 59% reduction in IHM compared to WNs (BNs aOR 0.41, 95% CI 0.22-0.73; p = 0.003). MNs experienced a 46% reduction in ROP compared to WNs (aOR 0.54, 95% CI 0.35-0.81; p = 0.004). There was no difference in RDS, IVH, or NEC in very preterm infants across REGs (all between comparison p values &gt; 0.0125). Conclusion: In preterm neonates, in-hospital mortality does not significantly differ across racial and ethnic groups. However, in very preterm infants, in-hospital mortality for Black neonates is improved. There are morbidity differences (RDS, ROP) seen among racial/ethnic groups. abstract_id: PUBMED:7604920 Infant mortality in the United States: trends, differentials, and projections, 1950 through 2010. Objectives: This study examined long-term trends and differences in infant mortality in the United States from 1950 through 1991 according to race and ethnicity, education, family income, and cause of death. Forecasts are made through the year 2010. Methods: Log-linear regression models were applied to data from the National Vital Statistics System, National Linked Birth and Infant Death files, the National Maternal and Infant Health Survey, the National Natality Survey, and the National Infant Mortality Survey to model and forecast infant mortality. Results: Dramatic declines in the US infant mortality rate have occurred in the past 4 decades, largely as a result of declines in mortality from pneumonia and influenza, respiratory distress syndrome, prematurity and low birthweight, congenital anomalies, and accidents. Despite the overall reductions, however, substantial racial/ethnic, educational, and income differences in infant mortality still exist. Conclusions: The long-term downward trend in US infant mortality has not benefited Blacks and Whites equally. The Black/White disparity in infant mortality has not only persisted but increased over time and is not expected to diminish in the near future. Educational inequalities have also widened, and racial disparities have generally increased across all educational levels. abstract_id: PUBMED:10753242 Racial differences in respiratory-related neonatal mortality among very low birth weight infants. Objective: To examine racial differences in the secular trends in respiratory-related neonatal mortality among very low birth weight (VLBW) infants in the United States, temporally associated with surfactant availability. Design: Comparison of time trends in African American and non-Hispanic white (NHW) VLBW infants of cause-specific neonatal mortality and neonatal and infant mortality for 2 consecutive 3-year periods. Results: From 1985 to 1988 there was no racial difference in the rate of decline of each mortality outcome. From 1988 to 1991 rates of decline in neonatal mortality caused by respiratory distress syndrome and by all respiratory causes were significantly greater for NHWs compared with African Americans. However, the rate of decline in nonrespiratory neonatal mortality was similar for African Americans and NHWs. Compared with African American VLBW infants, NHWs had a greater rate of decline in both neonatal (31% vs 20%; P &lt;.01) and infant mortality (32% vs 21%; P &lt;.01) during this period. Conclusions: Between 1988 and 1991, declines in neonatal mortality risks caused by respiratory distress syndrome and all respiratory causes were greater for NHW infants than for African American VLBW infants. The decline in nonrespiratory mortality risk showed no racial differences. These findings suggest possible racial disparities in timely access or racial differences in the efficacy of respiratory treatments for VLBW infants. abstract_id: PUBMED:15622954 The increasing racial disparity in infant mortality: respiratory distress syndrome and other causes. Although substantial declines in infant mortality rates have occurred across racial/ethnic groups, there has been a marked increase in relative black-white disparity in the risk of infant death over the past two decades. The objective of our analysis was to gain insight into the reasons for this growing inequality on the basis of data from linked cohort files for 1989-1990 and 1995-1998. We found a nationwide reversal from a survival advantage to a survival disadvantage for blacks with respect to respiratory distress syndrome over this period. The results are consistent with the view that the potential for a widening of the relative racial gap in infant mortality is high when innovations in health care occur in a continuing context of social inequality. As expected, the results for other causes of infant mortality, although similar, are less striking. Models of absolute change demonstrate that among low-weight births, absolute declines in mortality were greater for white infants than for black infants. abstract_id: PUBMED:22585434 Patterns of infant mortality from 1993 to 2007 in Belgrade (Serbia). The population of the Republic of Serbia has been exposed to radical changes in living standards and employment status and inequalities in utilization of health-care services. Given that infant mortality rates (IMR) reflect general community health, we evaluated the trends and mortality structure of Belgrade's infant population for a 15-year period (1993-2007). Data were collected from published and unpublished materials of the Municipal Institute of Statistics in Belgrade. Records were based on official notifications of live-born infants and death certificates. A linear regression equation was used to estimate mortality trends over time, while an F test was performed to assess the significance of the linear regression coefficient. The average IMR was 11.3 [95% confidence interval (CI) 9.4, 13.2] per 1,000 live births for both sexes with a higher average rate observed for male infants. Throughout the whole period, a statistically significant declining trend (y = 17.072 - 0.721x, p = 0.001) was noted. The most common causes of death were conditions occurring during the perinatal period, with an average annual mortality rate of 7.7 [95% CI 6.4, 8.9] per 1,000, arising mainly from respiratory distress of the newborns. Regarding congenital anomalies, deformations of the heart and aortic and mitral valves were most frequently found. A statistically significant inverse correlation was observed between average net salary and IMR for each sex separately (for males r = -0.727, p = 0.002, for females r = -0.721, p = 0.002) and for both sexes jointly (r = -0.759, p = 0.001). A decline in infant mortality in Belgrade has been observed. However, further promotion of health-related activities, as well as continuous surveillance of IMR, is required. abstract_id: PUBMED:20175045 Respiratory causes of infant mortality: progress and challenges. A marked reduction in infant mortality due to respiratory distress syndrome (RDS) has been reported in previous studies; however, deaths due to RDS are still more common in black infants than white infants. Because advances in respiratory care may have impacted non-RDS respiratory causes of infant mortality as well, the objective of this study was to determine if specific and total non-RDS respiratory causes of infant mortality have changed over time, and if health disparities exist. We analyzed and compared infant deaths due to RDS and other respiratory diseases from 1980 to 2005 in the United States and evaluated outcomes by race and gender. Infant mortality due to non-RDS causes declined more than twofold over this time frame, but not as dramatically as the fivefold decline in RDS deaths. Black compared with white infants had twice the mortality rate due to non-RDS respiratory causes. The most common non-RDS respiratory cause of infant mortality was due to congenital malformations of the respiratory tract, which did not change dramatically over the 25 years studied. abstract_id: PUBMED:36991562 The association of race and ethnicity on obstetric patients' COVID-19 outcomes: A summary of current literature. Amidst the ongoing coronavirus disease 2019 (COVID-19) pandemic, evidence suggests racial and ethnic disparities in COVID-19-related outcomes. Given these disparities, it is important to understand how such patterns may translate to high-risk cohorts, including obstetric patients. A PubMed search was performed to identify studies assessing pregnancy, neonatal, and other health-related complications by race or ethnicity in obstetric patients with COVID-19 infection. Forty articles were included in our analysis based on novelty, relevance, and redundancy. These articles revealed that Black and Hispanic obstetric patients present an increased risk for SARS-CoV-2 infection and maternal mortality; racial and ethnic minority patients, particularly those of Black and Asian backgrounds, are at increased risk for hospitalization and ICU admission; racial and ethnic minority groups, in particular Black patients, have an increased risk for mechanical ventilation; Black and Hispanic patients are more likely to experience dyspnea; Hispanic patients showed higher rates of pneumonia; and Black patients present an increased risk of acute respiratory distress syndrome (ARDS). There is conflicting literature on the relationship between race and ethnicity and various pregnancy and neonatal outcomes. Several factors may underly the racial and ethnic disparities observed in the obstetric population, including biological mechanisms and social determinants of health. Answer: Yes, racial disparities persist in infant mortality from respiratory distress syndrome (RDS). Studies have shown that Black infants were more likely to die from RDS than White infants when analyzed independently (PUBMED:21121951). However, after adjusting for known predictors such as gestation, birth weight, and gender, Black race was no longer a significant predictor of RDS mortality in the 2005 cohort (PUBMED:21121951). Despite this, other research indicates that disparities in postneonatal mortality due to birth defects have emerged for non-Hispanic black and Hispanic infants, and that surfactant and antenatal steroid therapy have been accompanied by growing disparities in RDS mortality for black infants (PUBMED:22224890). Furthermore, a study examining the subset of infants who died in 2005 from RDS found that race was not a predictor of RDS mortality in that cohort, suggesting that previous disparities may have been mitigated by factors such as surfactant administration (PUBMED:21121951). However, the study also noted that only a small percentage of eligible infants had documentation of surfactant administration, indicating a need for further research to confirm that eligible infants are receiving surfactant and to evaluate the response of infants by race (PUBMED:21121951). Another study found that while in-hospital mortality for preterm neonates did not significantly differ across racial and ethnic groups, there were morbidity differences in RDS among racial/ethnic groups, with Black neonates having a lower risk of RDS compared to White neonates (PUBMED:29071590). This suggests that while overall mortality rates may be similar, the incidence and outcomes of specific conditions like RDS can still vary by race. Overall, the literature indicates that while some progress has been made in addressing racial disparities in infant mortality due to RDS, these disparities continue to exist and are influenced by a complex interplay of factors including access to care, quality of care, and social determinants of health.
Instruction: Smoking after heart transplantation: an underestimated hazard? Abstracts: abstract_id: PUBMED:30009940 Application of competing risks analysis improved prognostic assessment of patients with decompensated chronic heart failure and reduced left ventricular ejection fraction. Objective: The Kaplan-Meier method may overestimate absolute mortality risk (AMR) in the presence of competing risks. Urgent heart transplantation (UHT) and ventricular assist device implantation (VADi) are important competing events in heart failure. We sought to quantify the extent of bias of the Kaplan-Meier method in estimating AMR in the presence of competing events and to analyze the effect of covariates on the hazard for death and competing events in the clinical model of decompensated chronic heart failure with reduced ejection fraction (DCHFrEF). Study Design And Setting: We studied 683 patients. We used the cumulative incidence function (CIF) to estimate the AMR at 1 year. CIF estimate was compared with the Kaplan-Meier estimate. The Fine-Gray subdistribution hazard analysis was used to assess the effect of covariates on the hazard for death and UHT/VADi. Results: The Kaplan-Meier estimate of the AMR was 0.272, whereas the CIF estimate was 0.246. The difference was more pronounced in the patient subgroup with advanced DCHF (0.424 vs. 0.338). The Fine-Gray subdistribution hazard analysis revealed that established risk markers have qualitatively different effects on the incidence of death or UHT/VADi. Conclusion: Competing risks analysis allows more accurately estimating AMR and better understanding the association between covariates and major outcomes in DCHFrEF. abstract_id: PUBMED:34396337 Heart Transplantation in Adriamycin-Associated Cardiomyopathy in the Contemporary Era of Advanced Heart Failure Therapies. Background: Adriamycin-associated cardiomyopathy (ACM) can lead to end-stage heart failure requiring advanced heart failure therapies. Objectives: This study sought to provide post-cardiac transplant survival data in patients with ACM in the contemporary era of mechanical circulatory support and cardiac transplantation. Methods: Adults (≥18 years of age) who underwent first-time, single-organ heart transplantation were identified from the United Network for Organ Sharing between October 18, 2008, and October 18, 2018. Cardiomyopathy subtypes that could have been supported with a left ventricular assist device (LVAD) including ACM, dilated cardiomyopathy (DCM), and ischemic cardiomyopathy (ICM) were included. A multivariable Cox regression analysis was performed to determine the association between cardiomyopathy subtype and post-cardiac transplant survival. Results: This analysis included 18,270 patients (357 with ACM; 10,662 with DCM; and 7,251 with ICM). Heart transplant recipients with ACM were younger, included more women, and had higher pulmonary vascular resistance at the time of listing. Patients with ACM had a lower percentage of durable LVADs at the time of transplant across all years of the study period. Patients with ACM did not experience an increase in post-cardiac transplant mortality compared to those with DCM (adjusted hazard ratio: 0.96; 95% confidence interval: 0.79 to 1.40; p = 0.764) or ICM (adjusted hazard ratio: 0.85; 95% confidence interval: 0.6 to 1.2; p = 0.304). Conclusions: Patients with ACM who received heart transplants between 2008 and 2018 had similar post-cardiac transplant survival to those with dilated and ischemic cardiomyopathy. Bridge-to-transplant LVAD use remains lower compared to other cardiomyopathy subtypes. abstract_id: PUBMED:36590718 Prolonged waitlisting is associated with mortality in extracorporeal membrane oxygenation-supported heart transplantation candidates. Objective: Heart transplantation (HTx) candidates supported with venoarterial extracorporeal membrane oxygenation (ECMO) may be listed at highest status 1 but are at inherent risk for ECMO-related complications. The effect of waitlist time on postlisting survival remains unclear in candidates with ECMO support who are listed using the new allocation system. Methods: Adult candidates listed with ECMO for a first-time, single-organ HTx from October 18, 2018, to March 21, 2021, in the Scientific Registry of Transplant Recipients database were included and stratified according to waitlist time (≤7 vs ≥8 days). Postlisting outcomes were compared between cohorts. Results: Among 175 candidates waitlisted for ≤7 days, 162 (92.6%) underwent HTx whereas 13 (7.4%) died/deteriorated compared with 41 (57.8%) and 21 (29.6%) of the 71 candidates waitlisted for ≥8 days, respectively (P &lt; .01). Blood type O candidates (odds ratio [OR], 2.94; 95% CI, 1.54-5.61) were more likely to wait ≥8 days whereas candidates with concurrent intra-aortic balloon pump were less likely (OR, 0.30; 95% CI, 0.10-0.89). Obesity was additionally associated among those listed at status 1 (OR, 2.04; 95% CI, 1.00-4.17). Waitlisting for ≥8 days was independently associated with 90-day postlisting mortality conditional on survival to day 8 postlisting (hazard ratio, 5.59; 95% CI, 2.59-12.1). Candidates listed at status 1 showed similar trends (hazard ratio, 5.49; 95% CI, 2.39-12.6). There was no significant difference in 90-day post-HTx survival depending on whether a candidate waited for ≥8 days versus ≤7 days (92.7 vs 92.0%; log rank P = .87). Conclusions: Among ECMO-supported candidates, obtaining HTx within 1 week of listing might improve overall survival. abstract_id: PUBMED:2650577 A possible hazard. N/A abstract_id: PUBMED:9262083 Smoking after heart transplantation: an underestimated hazard? Objective: Risk factors for the development of vasculopathy and malignancies as the most important causes of morbidity and mortality after heart transplantation are not well defined. Methods: Univariate and multivariate Cox regression analysis of the data derived from our 84 survivors of more than 3 months after orthotopic heart transplantation between 1984 and 1996. Measurement of carbonmonoxide-hemoglobin blood levels with an ABL 520 analyzer. Results: Recipient or donor age, the mode of immunosuppression, total-, LDL- and HDL-cholesterol, the HDL/LDL-ratio, triglycerides, hypertension, diabetes mellitus, CMV status and rejection episodes had no independent influence on total mortality or the occurrence of graft vasculopathy or cancer. By means of an intensive questionnaire (in case of deceased patients, by their relatives) and measurement of CO-Hb blood levels we detected a high rate of patients who smoked after transplantation (22/84 = 26%). Four patients confessed smoking after undergoing the blood test. Non-smokers were defined as denying it in the questionnaire and having CO-Mb levels &lt; 2.5% in repeated measurements. All but one were smokers before heart transplantation. Mean consumption was 11 cigarettes per day. Five and 10 years survival was significantly reduced in smokers vs. non-smokers (37 vs. 80% and 10 vs. 74%, respectively, P &lt; 0.0001). Survival curves diverged dramatically after 4 years of observation. Smokers had a higher prevalence of transplant vasculopathy as revealed by coronary angiography and/or autopsy (10/22 smokers vs. 2/62 non-smokers, P &lt; 0.00001) and a higher rate of malignancies (7/22 smokers developed cancer, as compared to 4 cancers in 62 non-smokers, P = 0.0001). The primary site of cancer was the lung in 5/6 smoking and lymphoma in all non-smoking cancer patients. Conclusions: Our data show that the prevalence of smoking after heart transplantation may be relatively high, especially in former smokers. Repeated measurements of CO-Hb could be helpful in its detection. Despite a relatively low cigarette count, smoking is a major risk factor of morbidity and mortality after heart transplantation (HTx). Approximately 4 years of exposure time is needed to uncover its negative influence. These findings should lead to aggressive smoking screening and weaning programs in every HTx center. abstract_id: PUBMED:2276566 Is postoperative HBV hepatitis in heart transplant recipients the fruit of hazard? N/A abstract_id: PUBMED:27625403 Health Insurance Trajectories and Long-Term Survival After Heart Transplantation. Background: Health insurance status at heart transplantation influences recipient survival, but implications of change in insurance for long-term outcomes are unclear. Methods And Results: Adults aged 18 to 64 receiving first-time orthotopic heart transplants between July 2006 and December 2013 were identified in the United Network for Organ Sharing registry. Patients surviving &gt;1 year were categorized according to trajectory of insurance status (private compared with public) at wait listing, transplantation, and 1-year follow-up. The most common insurance trajectories were continuous private coverage (44%), continuous public coverage (27%), and transition from private to public coverage (11%). Among patients who survived to 1 year (n=9088), continuous public insurance (hazard ratio =1.36; 95% confidence interval 1.19, 1.56; P&lt;0.001) and transition from private to public insurance (hazard ratio =1.25; 95% confidence interval 1.04, 1.50; P=0.017) were associated with increased mortality hazard relative to continuous private insurance. Supplementary analyses of 11 247 patients included all durations of post-transplant survival and examined post-transplant private-to-public and public-to-private transitions as time-varying covariates. In these analyses, transition from private to public insurance was associated with increased mortality hazard (hazard ratio =1.25; 95% confidence interval 1.07, 1.47; P=0.005), whereas transition from public to private insurance was associated with lower mortality hazard (hazard ratio =0.78; 95% confidence interval 0.62, 0.97; P=0.024). Conclusions: Transition from private to public insurance after heart transplantation is associated with worse long-term outcomes, compounding disparities in post-transplant survival attributed to insurance status at transplantation. By contrast, post-transplant gain of private insurance among patients receiving publicly funded heart transplants was associated with improved outcomes. abstract_id: PUBMED:23490247 Cardiac transplantation can be safely performed using selected diabetic donors. Objective: Cardiac transplantation (OHT) using diabetic donors (DDs) is thought to adversely influence survival. We attempt to determine if adult OHT can be safely performed using selected DDs. Methods: The United Network for Organ Sharing (UNOS) database was examined for adult OHT from 2000 to 2010. Results: Of the 20,348 patients undergoing OHT, 496 (2.4%) were with DDs. DDs were older (39.6 vs 31.3 years; P &lt; .001), more likely female (41.5% vs 28.3%; P &lt; .001), and had a higher body mass index (BMI) (29.9 vs 26.4; P &lt; .001). Recipients of DD hearts were older (53.4 vs 51.8; P = .004) and more likely to have diabetes (18.9% vs 14.9%; P = .024). The 2 groups were evenly matched with regard to recipient male gender (78.0% vs 76.1%; P = .312), ischemic time (3.3 vs 3.2 hours; P = .191), human leukocyte antigen mismatches (4.7 vs 4.6; P = .483), and requirement of extracorporeal membrane oxygenation (ECMO) as a bridge to transplant (0.8% vs 0.5%; P = .382). Median survival was similar (3799 vs 3798 days; P = .172). On multivariate analysis, DD was not associated with mortality (hazard ratio [HR], 1.155; 95% confidence interval [CI], 0.943-1.415; P = .164). As previously demonstrated, donor age, decreasing donor BMI, ischemic time, recipient creatinine, recipient black race, recipient diabetes, race mismatch, and mechanical ventilation or ECMO as a bridge to transplant were associated with mortality. On multivariate analysis of subgroups, neither insulin-dependent diabetes (1.173; 95% CI, 0.884-1.444; P = .268) nor duration of diabetes for more than 5 years (HR, 1.239; 95% CI, 0.914-1.016; P = .167) was associated with mortality. Conclusions: OHT can be safely performed using selected DDs. Consensus criteria for acceptable cardiac donors can likely be revised to include selected DDs. abstract_id: PUBMED:30080315 Risk of genital warts in renal transplant recipients-A registry-based, prospective cohort study. Genital warts (GWs) are a risk factor for subsequent human papillomavirus (HPV)-related anogenital cancers. In this register-based, prospective cohort study, we estimated the risk of GWs in renal transplant recipients (RTRs) compared with a nontransplanted cohort. In a nationwide database, we identified first-time RTRs in Denmark during 1996 to 2015. For each RTR, 50 age- and sex-matched nontransplanted individuals were selected from the population registry. Information on GWs, sociodemographic characteristics, HPV vaccination, and other causes of immunosuppression was retrieved from registries. We estimated the cumulative incidence of GWs and used Cox regression to estimate hazard ratios (HR) of GWs in RTRs vs non-RTRs. We included 3268 RTRs and 162 910 non-RTRs without GWs 1 year before baseline. RTRs had higher hazard of GWs than non-RTRs (HR = 3.30; 95% confidence interval, 2.76-3.93, adjusted for sex, age, education, and income). The increased hazard of GWs compared with non-RTRs was more pronounced in female than in male RTRs. Although not statistically significant, the hazard tended to be higher in RTRs with functioning grafts compared with RTRs on dialysis after graft failure. The hazard of GWs was increased &lt;1 year after transplantation and remained increased during ≥10 years. In conclusion, RTRs had substantially higher risk of GWs than non-RTRs. abstract_id: PUBMED:11169599 Regression splines for threshold selection in survival data analysis. The Cox proportional hazards model restricts the hazard ratio to be linear in the covariates. A survival model based on data from a clinical trial is developed using spline functions with variable knots to estimate the log hazard function. Moreover, the main point of the method is that a knot, seen as free parameters for a piecewise linear spline, represents a break point in the log hazard function which may be interpreted as a threshold value. The likelihood ratio test is used to select the final model and to determine the threshold number for a covariate. Confidence intervals for these threshold values are computed by bootstrapping the data. Two examples illustrate the method. Answer: Yes, smoking after heart transplantation appears to be an underestimated hazard. According to the study reported in PUBMED:9262083, smoking after heart transplantation is a major risk factor for morbidity and mortality. The study found that the prevalence of smoking among heart transplant recipients may be relatively high, especially in those who were smokers before the transplantation. The data showed that survival at 5 and 10 years was significantly reduced in smokers compared to non-smokers (37% vs. 80% and 10% vs. 74%, respectively). Additionally, smokers had a higher prevalence of transplant vasculopathy and a higher rate of malignancies, with lung cancer being particularly prevalent among smoking cancer patients. The study concluded that smoking after heart transplantation is associated with a negative influence on survival and health outcomes, which becomes evident after approximately 4 years of exposure. These findings suggest the need for aggressive smoking screening and cessation programs in heart transplant centers.
Instruction: Are parenchymal changes in early post-transplant biopsies related to preservation-reperfusion injury or rejection? Abstracts: abstract_id: PUBMED:11435966 Are parenchymal changes in early post-transplant biopsies related to preservation-reperfusion injury or rejection? Background: The progression of parenchymal changes in liver allograft biopsies due to preservation-reperfusion injury (PRI) and their differentiation from rejection related changes is poorly understood. The aim of this study was to determine which changes in a 1-week posttransplant biopsy could be attributed to PRI and which to acute rejection. Methods: One week protocol liver transplant biopsies from patients with mild PRI (day 1 AST&lt;400 IU/L) were compared with those from patients with severe PRI (day 1 AST&gt;2000 IU/L). Parenchymal changes (cholestasis, ballooning, steatosis, necrosis) and rejection-related inflammatory features (portal tract inflammation, bile duct inflammation, portal vein endothelial inflammation, hepatic vein endothelial inflammation, and centrilobular inflammation) were blindly assessed semiquantitatively. Results: Fat, cholestasis, and hepatocyte ballooning were significantly worse in the severe PRI group, and these features showed no correlation with histological features related to acute rejection. Centrilobular hepatocyte necrosis correlated with hepatic venular endothelial inflammation and centrilobular inflammation but not with rejection related features in portal tracts or with PRI. These findings suggest that centrilobular necrosis is a manifestation of a rejection-related parenchymal injury and may involve different pathogenetic mechanisms to rejection-related features in portal tracts. Conclusions: This study indicates that in early posttransplant biopsies, fat, cholestasis, and ballooning can largely be attributed to PRI. By contrast, centrilobular hepatocyte loss should be suspected as a rejection related phenomenon, even if typical portal tract changes are not prominent, and augmentation of immunosuppression should be considered. abstract_id: PUBMED:2071091 Diffuse hepatocyte ballooning in liver biopsies from orthotopic liver transplant patients. Mild to moderate liver injury to the orthotopically transplanted liver may result from acute rejection, mild ischaemia, or viral hepatitis. Because these conditions are often clinically indistinguishable, liver biopsy is frequently helpful. We previously characterized and reported the morphological spectrum of mild-to-moderate ischaemic injury from 170 liver biopsies (51 liver transplant patients). During this review, we found eight patients with a diffuse hepatocyte ballooning pattern. This pattern had some similarity to 'preservation injury' described by others, and in fact seven of the eight patients had these changes within the first 2 weeks post-transplant. However, two of the seven patients also displayed these changes up to 6 months post-transplant and the eighth patient developed these histological patterns only after the early post-transplant period. Follow-up data on patients with diffuse hepatocyte ballooning showed that some reverted to normal histology, some transformed to well-delineated perivenular ballooning, and some progressed to perivenular necrosis and/or infarction. Only when superimposed perivenular necrosis became apparent did the injury become irreversible, necessitating allograft removal or resulting in death. The pathogenesis of diffuse hepatocyte ballooning is unclear. However, the association of some of the cases with 'preservation injury' pattern in the early post-transplant period and the progression of others to necrosis and infarction, suggest an ischaemic basis for this lesion. abstract_id: PUBMED:24032049 Histopathology of post-transplant liver biopsies, the first report from iran. Background: Evaluation of a transplanted liver by Imaging techniques and enzyme changes is sensitive to hepatocellular or biliary problems, but in most instances liver allograft biopsies are performed in order to find out the final reason for these changes. Objectives: It's been about 17 years (with more than 1326 cases) since the first liver transplantation in the Namazi Hospital of Shiraz University of Medical Sciences while during the last five years the number of post liver transplant biopsies have increased. Until now there has been no report of the pathological results of post liver transplant needle biopsies from Iran. Materials And Methods: During the last 5 years, there have been 382 post liver transplant biopsies. We studied the clinical charts and pathological results of all needle biopsies. Results: A total of 382 needle biopsies were performed on 287 patients aged between 1 and 64 years old. The earliest specimen was obtained within the first few hours following transplantation, and the last was gathered 3209 days (261 ± 523) post-transplantation. Acute rejection was the most common diagnosis, which occurred in 180 (47%) of specimens. Among other complications were vascular problems (8.6%), preservation/reperfusion (I/R) injury (7%), chronic rejection (5.2%), biliary injury/obstruction (3.4%), recurrence of primary disease (2.6%), drug-induced hepatic injury (1.8%), cirrhosis (1.6%), sepsis (1.4%), cytomegalovirus hepatitis (1.4%), post-transplantation lymphoproliferative disease (1%) and Venous outflow obstruction (0.5%). Conclusions: The most common pathological diagnosis of post-transplant liver needle biopsies has been acute rejection, followed by ischemia due to hepatic artery thrombosis, preservation/reperfusion injury, and chronic rejection. abstract_id: PUBMED:37437666 Circulating "Neutrophils extra-cellular traps" during the early post-renal transplant period and correlation with graft dysfunction and rejection. Background: Neutrophil extracellular traps (NETs) have a role in infection, autoimmunity, autoinflammation, thrombosis, ischemia-reperfusion injury (IRI), epithelial-mesenchymal transition, vasculitis, and metabolic diseases. However, its role in early graft injury and graft outcome has not been elucidated till now. We evaluated the circulating NETs during early post-transplant periods and their correlation with graft outcome and IRI. Methods: Prospectively, thirty kidney transplants recipient (KTR) were recruited and grouped into non-dysfunction (Group-A) and dysfunction groups (Group-B). Serum levels of circulating NETs were estimated by measuring myeloperoxidase-DNA complex at three-time points: pre-transplant, 8 h post-transplant, and 18 h post-transplant; and correlated with early graft outcome. Malondialdehyde (MDA), a marker of oxidative stress or IRI, was also measured to assess its relation with NETs and early graft outcome. Results: Circulating NETs were significantly increased in both non-dysfunctional [Median OD: 0.11 (0.01-0.19) to 0.51 (0.22-0.91); p = 0.001] and dysfunctional [Median OD: 0.16 (0.12-0.27) to 0.38 (0.19-0.68); p = 0.047] KTR during first 8 h of transplant followed by fall at 18 h post-transplant [0.25 (0.18-0.72) and 0.35 (0.26-0.36) respectively]; however, no significant difference were observed between two groups at any time points. Isolated biopsy-proven graft rejection KTR also had higher circulating NETs during the early post-transplant period [Median OD: 0.16 (0.13-0.31) to 0.38 (0.28-1.5); p &gt; 0.05] but no significant difference compared to non-dysfunctional KTR. MDA also displayed similar trends with an early significant rise [9.30 (7.74-12.56) μM to 17.37 (9.11-22.25) μM; p = 0.03 in group-A, and 8.7 (6.04-10.30) μM to 14.66 (13.39-21.63) μM; p = 0.01in group-B] followed by fall at 18 h in both groups [10.21 (7.64-13.90) μM and 11.11 (9.15-17.54) μM respectively]. Despite similar trends of both NETs and MDA, there was no significant correlation between these; however, creatinine exhibits a significant inverse correlation with NETs and MDA both. Conclusion: Circulating NETs are significantly increased during the early post-transplant period in KTR irrespective of early graft outcome. Similar dynamics of MDA indicate that the early rise of NETs might be a part of IRI. However, molecular studies with large sample sizes and longer follow up are required to reach more defined conclusions. abstract_id: PUBMED:29735782 An Overview of Liver Transplant Pathology: Data from a Tertiary Referral Centre in Western India. Introduction And Aim: 1. Study of liver explants - Etiologic types of end-stage chronic liver disease (ESCLD) and acute liver failure (ALF) in adults and children. 2. Assessment of donor steatosis and incidental granulomas. 3. Post-transplant liver biopsies. Material And Methods: Specimens of 180 explant hepatectomies, 173 donor wedge and 30 core liver biopsies, and 58 post transplant liver biopsies received in our department from April 2013 to March 2017. Results: 1. Most common causes of ESCLD in adults were: alcohol related (30.32%), hepatitis virus related (18.71%) and non-alcoholic steatohepatitis related (18.06%); and in children ≤ 12 years were: biliary atresia (27.27%), autoimmune disease (18.18%) and Wilson's disease (18.18%). Most common causes of ALF in adults and children were anti-tubercular therapy induced and idiopathic respectively. 2. Prevalence rate of moderate steatosis (between 30-60%) was 4.28%. Incidental granulomas were seen in 5 cases. 3. Most common diagnoses of post-transplant biopsies in adults included acute cellular rejection (ACR) (36.17%), recurrence of viral disease (8.51%) and moderate non-specific portal triaditis (8.51%). Among children ≤ 12 years, most common diagnoses included unremarkable liver parenchyma, ACR and ischemia/reperfusion injury. Conclusion: 1. Alcohol- and hepatitis- virus related ESCLD, and biliary atresia are leading indications for liver transplantation in adults and children respectively. 2. Prevalence of 4.28% of moderate steatosis, is much lower than that documented in western literature. Only 5 cases of incidental granulomas is unexpectedly low in a country endemic for tuberculosis. 3. Most common diagnoses of post-transplant liver biopsies in adults has been acute rejection, which is similar to the findings from much larger published series. abstract_id: PUBMED:11979366 Centrilobular histopathologic changes in liver transplant biopsies. We evaluated centrilobular histologic changes seen on post-orthotopic liver transplantation (OLT) biopsies to refine the pathologic diagnosis by systematic study of morphologic and clinical data with possible identification of prognostic criteria. A total of 110 biopsies with zone 3 pathology from 59 patients were reviewed and correlated with clinical findings. Within the first 6 months post-OLT (group I), 39 of 47 patients had combinations of centrilobular hepatocytic dropout, ballooning, and cholestasis on single or multiple biopsies attributed to perioperative ischemic/perfusion injury; 12 of 39 patients with all 3 features present had increased incidence of biliary complications and sepsis and decreased 1-year patient and graft survival; 17 of 39 patients with 2 of the 3 features had increased biliary complications but not decreased 1-year survival; and the remaining 8 of 47 patients had central venulitis associated with acute cellular rejection. After 6 months post-OLT (group II), 14 patients, including 2 from group I, had biopsies with centrilobular pathology; 8 of 14 had central venulitis related to rejection (acute, 4; chronic, 4), and fibrosis was seen in 8 (rejection, 6; cardiac problems, 2). In conclusion, combinations of centrilobular hepatocytic ballooning, dropout, and cholestasis are seen in association with reversible or irreversible ischemic/perfusion damage in the early post-OLT period. The presence of all 3 features is associated with a poor outcome. Central venulitis as a feature of acute/chronic rejection is seen at any time post-OLT and is not a predictor of poor graft/patient survival. abstract_id: PUBMED:7819618 Preservation injury in clinical liver transplantation: incidence and effect on rejection and survival. The distribution of the severity of preservation injury (PI), its association with acute early rejection, and its effect on graft and patient survival were examined in 151 patients transplanted between 9/88 and 12/91. PI was graded into mild, moderate and severe using prothrombin time (PT) on the 2nd POD and peak AST and ALT levels of the first 3 days after transplant. Of the 151 patients, 104 (68.8%) had minimal, 24 (15.9%) had moderate, and 23 (15.3%) had severe PI. The rate of early rejection, within 21 days after transplant, (54.8% vs 54.2% vs 52.2%) and its timing (10.0 +/- 0.5 vs 8.8 +/- 0.8 vs 8.9 +/- 1.4 days) as well as the incidence of steroid-resistant rejection (21.4% vs 33% vs 30%) did not differ significantly among groups. Graft survival was significantly lower in the severe PI group than among patients with minimal PI at 3 months and 24 months (65.2% vs 86.2%; p &lt; 0.05 and 58.3% vs 82.7%; p &lt; 0.05, respectively). Patient survival at 3 and 24 months was similar among the groups (91.1% vs. 95.8% vs 86.9% and 81.9% vs 91.1% vs. 74.3%). We conclude that the cellular damage associated with preservation injury does not predispose to development of acute rejection. abstract_id: PUBMED:16536315 Pathologic diagnosis of 1123 post-transplant liver biopsies from 665 liver transplant patients Objective: To summarize the Chinese experience in pathologic diagnosis of liver biopsies after orthotopic liver transplantation (OLTx). Methods: 1123 post-transplant liver biopsies from 665 OLTx patients from the Shanghai Eastern Hepatobiliary Surgery Hospital, Tianjin First Central Hospital, Guangzhou Sun Yat-sen University and Chongqing Southwest Hospital were retrospectively analyzed. All liver biopsies were stained with hematoxylin and eosin. Immunohistochemical studies for cytomegalovirus, HBsAg, CK19, CD4 and CD8 were also performed in selected examples. Results: In the involved hospitals, 4 to 12 types of complications were encountered after OLTx. The number of liver biopsies performed for each patient ranged from 1 to 9 (mean = 2.2). The timing of these biopsies varied from the second to the 2877 th post-transplant day. The 5 most common complications were acute cellular rejection (35.6%), ischemic-reperfusion injury (13.4%), biliary stricture (5.6%), drug complication (5.0%) and chronic rejection (4.7%). The 5 earliest complications after OLTx were primary non-function (occurring at day 4.7 +/- 2.1), ischemic-reperfusion injury (occurring at day 14.0 +/- 4.0), acute cellular rejection (occurring at day 32.1 +/- 62.9), hepatic artery thrombosis / stricture (occurring at day 62.9 +/- 74.2) and cytomegalovirus infection (occurring at day 107.7 +/- 93.0). Conclusions: This study has evaluated the types, incidence and timing of major complications occurring after OLTx. The most important issue is the distinction between rejection and non-rejection pathology. Thorough understanding of atypical pathologic features of these complications is necessary. The Banff Schema (rejection activity index) for grading liver allograft rejection is useful for monitoring anti-rejection therapy and should be used routinely. abstract_id: PUBMED:10515392 Complement deposition in early cardiac transplant biopsies is associated with ischemic injury and subsequent rejection episodes. Background: Prolonged warm or cold ischemia is associated with poor survival of cardiac transplants, and ischemic changes in early posttransplantation endomyocardial biopsies correlate with the later development of chronic rejection. In animal models, tissue ischemia has been shown to activate complement. Methods: To determine whether ischemic changes in endomyocardial biopsies were associated with complement deposition, biopsies obtained 1-3 weeks after transplantation from 33 patients were evaluated immunohistologically for C4d and C3d deposition as well as for IgM, IgG, and IgA. The histological changes associated with ischemic injury were scored independently, using previously reported criteria without knowledge of the immunohistochemical results. Results: Diffuse capillary and pericapillary deposition of C4d or C3d were detected in endomyocardial biopsies of 14 of the 33 patients. The majority of biopsies (79%) with C4d or C3d deposits had histological evidence of ischemic injury, including eight of the nine biopsies containing both C4d and C3d deposition. In contrast, only 8 of 18 (45%) of the biopsies without C4d or C3d deposition had ischemic injury. Only trace amounts of IgM and no IgG or IgA were demonstrable in the biopsies. Only 2 of the 14 biopsies with C4d or C3d deposition had evidence of moderate acute rejection, whereas 5 of the 18 biopsies without C4d or C3d deposition had moderate acute rejection. However, C4d and C3d deposition did correlate with repeated acute rejection episodes on subsequent biopsies. Conclusions: Thus, ischemic changes are associated with the activation of complement. Complement activation may in turn promote tissue injury and provide a potential target for future treatment. abstract_id: PUBMED:35755857 Validation of a Simple, Rapid, and Cost-Effective Method for Acute Rejection Monitoring in Lung Transplant Recipients. Despite advances in immunosuppression therapy, acute rejection remains the leading cause of graft dysfunction in lung transplant recipients. Donor-derived cell-free DNA is increasingly being considered as a valuable biomarker of acute rejection in several solid organ transplants. We present a technically improved molecular method based on digital PCR that targets the mismatch between the recipient and donor at the HLA-DRB1 locus. Blood samples collected sequentially post-transplantation from a cohort of lung recipients were used to obtain proof-of-principle for the validity of the assay, correlating results with transbronchial biopsies and lung capacity tests. The results revealed an increase in dd-cfDNA during the first 2 weeks after transplantation related to ischemia-reperfusion injury (6.36 ± 5.36%, p &lt; 0.0001). In the absence of complications, donor DNA levels stabilized, while increasing again during acute rejection episodes (7.81 ± 12.7%, p &lt; 0.0001). Respiratory tract infections were also involved in the release of dd-cfDNA (9.14 ± 15.59%, p = 0.0004), with a positive correlation with C-reactive protein levels. Overall, the dd-cfDNA percentages were inversely correlated with the lung function values measured by spirometry. These results confirm the value of dd-cfDNA determination during post-transplant follow-up to monitor acute rejection in lung recipients, achieved using a rapid and inexpensive approach based on the HLA mismatch between donor and recipient. Answer: Parenchymal changes in early post-transplant biopsies can be related to both preservation-reperfusion injury (PRI) and rejection, but differentiating between the two is crucial for appropriate management. According to a study by Neil D. Theise et al., fat, cholestasis, and hepatocyte ballooning observed in early post-transplant biopsies can largely be attributed to PRI, particularly when comparing patients with mild PRI (day 1 AST<400 IU/L) to those with severe PRI (day 1 AST>2000 IU/L). These features showed no correlation with histological features related to acute rejection. However, centrilobular hepatocyte necrosis correlated with hepatic venular endothelial inflammation and centrilobular inflammation but not with rejection-related features in portal tracts or with PRI, suggesting that centrilobular necrosis is a manifestation of rejection-related parenchymal injury (PUBMED:11435966). Another study by Neil D. Theise et al. found that diffuse hepatocyte ballooning in liver biopsies from orthotopic liver transplant patients had some similarity to 'preservation injury' described by others, particularly when observed within the first 2 weeks post-transplant. However, the progression of some cases to necrosis and infarction suggests an ischemic basis for this lesion (PUBMED:2071091). In the context of kidney transplants, a study by Ashish Sharma et al. found that circulating neutrophil extracellular traps (NETs) were significantly increased during the early post-transplant period in kidney transplant recipients irrespective of early graft outcome. This rise in NETs might be a part of ischemia-reperfusion injury (IRI), although the study did not find a significant correlation between NETs and graft dysfunction or rejection (PUBMED:37437666). Overall, while certain parenchymal changes such as fat, cholestasis, and hepatocyte ballooning can be attributed to PRI, other changes like centrilobular necrosis may indicate rejection-related injury. It is important to consider the timing and specific histological features when interpreting early post-transplant biopsies to distinguish between PRI and rejection (PUBMED:11435966; PUBMED:2071091).
Instruction: Does cigarette use influence adiposity or height in adolescence? Abstracts: abstract_id: PUBMED:18346909 Does cigarette use influence adiposity or height in adolescence? Purpose: To study the effect of cigarette use on height and adiposity in adolescents. Methods: Data on cigarette use were collected every 3 months for 5 years from adolescents initially 12-13 years of age. Height, weight, and triceps skinfold thickness were measured in survey cycles 1, 12, and 19. Multivariate linear regression models were fitted to estimate the association between cigarette use and the anthropometric measures in a dataset that pooled data over two time periods, from survey cycles 1-12 and from survey cycles 12-19. Results: Data were available for 451 boys and 478 girls. Seven percent of boys and 14% of girls smoked &gt; or =30 cigarettes per month on average during the first time period; 9% of boys and 18% of girls smoked &gt; or =30 cigarettes per month on average during the second time period. In boys, a 100-cigarette per month increment in cigarette use over the preceding 2.5 years was independently associated with lower body mass index (-0.4 kg/m(2)) and shorter height (-0.7 cm). In girls, cigarette use was not associated with height or adiposity. Conclusions: While there was no relation in girls, cigarette use appears to decrease body mass index and height in boys. Young girls may be less likely to take up cigarette smoking if tobacco control messages emphasize that cigarette use may not be associated with reduced weight in adolescent females. abstract_id: PUBMED:23899428 Association between adolescent substance use and obesity in young adulthood: a group-based dual trajectory analysis. Purpose: This study investigated whether and how trajectories of substance use in adolescence were associated with obesity trajectories in young adulthood. We hypothesized that: (1) exposure to persistent substance use throughout adolescence may heighten obesity risk in young adulthood; and (2) such associations may differ once gender, ethnicity, socioeconomic status, and obesity status in adolescence, are considered. Methods: The study included 5141 adolescents from the child sample of the 1979 National Longitudinal Survey of Youth and utilized biennial data across the 12 assessments (1986-2008) to examine trajectories of substance use behaviors (i.e., cigarette smoking, alcohol use, and marijuana use) from ages 12 to 18 and obesity trajectories from ages 20 to 24. Group-based dual trajectory modeling was applied to examine sequential associations of trajectories of each type of substance use behavior with obesity trajectories. Results: Three distinctive trajectory patterns were respectively identified for cigarette smoking, alcohol use, and marijuana use from ages 12 to 18, as well as for obesity status (BMI ≥ 30) from ages 20 to 24. Taking into account gender, ethnicity, socioeconomic status, and obesity status in adolescence, adolescents with the most problematic smoking trajectory (High-decreasing) were more likely to exhibit a High-obesity trajectory from ages 20 to 24. Also, adolescents with an Increasing marijuana use trajectory were more likely to exhibit an Increased obesity trajectory in young adulthood. Conclusions: The current study demonstrates that adolescent substance use is associated with subsequent obesity in young adulthood. The associations appear to differ based on the type of substance use and patterns of use. abstract_id: PUBMED:37925843 Trajectories of body mass index and combustible and electronic cigarette use across adolescence: Findings from the PATH study. Purpose: Associations between empirically-generated body mass index (BMI) trajectories and risk of current use of combustible cigarettes and e-cigarettes across adolescence were examined using longitudinal data from the Population Assessment of Tobacco and Health (PATH) study. Methods: The PATH study is an ongoing annual longitudinal population-based study of adolescents. We utilized Waves 1-4 conducted from 2013 to 2017. Adolescents completed self-reported surveys of their height, weight, and current tobacco use at Waves 1-4 and their tobacco weight control beliefs at Waves 1-2. Results: Using latent growth mixture modeling, six trajectories of BMI were identified. The largest group ("normal weight increasing;" n = 4,858; 86.6 %), which was used as the comparator in subsequent analyses, consisted of adolescents ages 12-17 who were normal weight at Wave 1 with a significant increase in BMI across Waves 2--4. The "overweight early increasing," "overweight late increasing," and "obesity stable" classes had greater likelihood of current combustible cigarette use during the study compared to the "normal weight increasing class." The "overweight early increasing," "overweight late increasing," and "overweight increasing then decreasing" classes showed elevated risk for e-cigarette use during the study. Compared to those in the "normal weight increasing" class, those in the "overweight increasing then decreasing" and "obesity stable" classes had greater weight control beliefs at Wave 1 and those in the "obesity stable" class had greater weight control beliefs at Wave 2. Conclusions: Findings highlight the importance of weight trajectories and weight control beliefs by tobacco product use across adolescence and the need for mechanistic and intervention research. abstract_id: PUBMED:32336559 E-Cigarette Use Associated With Asthma Independent of Cigarette Smoking and Marijuana in a 2017 National Sample of Adolescents. Purpose: Knowledge about the health consequences of e-cigarette use in adolescence remains limited. Available studies of asthma among adolescents are based on data collected five or more years ago, and evidence from more recent generations of e-cigarette products is needed. Methods: We analyzed data from the 2017 Youth Risk Behavior Survey, a cross-sectional study with a representative U.S. national sample of high school students. Multivariable analyses tested for associations of ever and 30-day e-cigarette use with asthma controlling for cigarette smoking, marijuana use, demographics, and obesity. Results: A significant association with asthma was found for ever use of e-cigarettes, adjusted odds ratio (AOR) = 1.15 (confidence interval [CI], 1.02-1.30; p = .02) and for currently using e-cigarettes, AOR = 1.30 (CI, 1.10-1.53; p = .002). Also related to asthma were current cigarette smoking, AOR = 1.24 (CI, 1.03-1.51; p = .03) and obesity, AOR = 1.48 (CI, 1.30-1.68; p &lt; .0001). E-cigarettes had an additive effect for asthma beyond smoking (p = .03). Differentials in asthma prevalence by race/ethnicity were found, with lower prevalence of asthma for non-Hispanic Asian (p = .02) and Hispanic (p = .03) and higher prevalence for non-Hispanic black (p &lt; .0001) and non-Hispanic multirace (p &lt; .0001) relative to non-Hispanic white populations. Conclusions: E-cigarette use was significantly associated with asthma in recent data on adolescents, controlling for several disease-relevant covariates. The results are consistent with studies based on previous generations of e-cigarettes and support the significance of e-cigarettes as a continuing public health concern. abstract_id: PUBMED:11890638 A prospective study of weight and height going from infancy to adolescence. Weight and height from infancy to age 15 years was studied in the Geelong population (n = 1200 in infancy; n = 213 at adolescence), Victoria, Australia. Body mass index (BMI) increased from 3 months to 12 months and then decreased again until 80 months after which it increased to 20.5 kg/m2 at the age of 15 years. The extent of tracking of BMI in infants classified as overweight or underweight was similar and differed from that of subjects of normal weight. Only one in four of the infants classified as overweight or underweight in infancy were still in the same category in adolescence, compared with three in four of those classified as of normal weight. Socioeconomic status has an effect on weight and height status in adolescence but not on the tracking of BMI. The age at 6-7 years is a critical age for weight and height status in adolescence. It appears that weight and height in infancy have a significant relationship with body size in adolescence but only in boys. abstract_id: PUBMED:27814299 Body Mass Index, Height and Socioeconomic Position in Adolescence, Their Trajectories into Adulthood, and Cognitive Function in Midlife. Background: Whether life course anthropometric indices relate to cognitive function in midlife remains insufficiently explored. Rarely was socioeconomic position (SEP) adequately accounted for. Objective: To examine the association of the cumulative life course burden of high-ranked body mass index (BMI), its trajectory, and stature with cognitive function in midlife. Methods: Weight and height were measured from age 17 across a 33-year follow-up. 507 individuals completed a NeuroTrax computerized cognitive assessment at ages 48-52. Life course SEP was assessed by multiple methods. Using mixed models we calculated the area under the curve (AUC), representing both the life-course burden of BMI (total AUC) and trends in BMI (incremental AUC) from age 17 to midlife. The associations of BMI and height with global cognition and its five component domains were assessed by multiple regression. Results: Higher BMI in late adolescence and total AUC over the life course were associated with poorer global cognition (Standardized beta (Beta) = -0.111, p = 0.005 and Beta = -0.105, p = 0.018, respectively), adjusted for childhood and adulthood SEP, and demographic characteristics. The associations with higher adolescent and midlife BMI were both restricted to those with low childhood SEP (p &lt; 0.05 for interaction). Short adolescent stature was related to poorer cognition (Beta = 0.115, p = 0.040), whereas late final growth in women was associated with better cognition (Beta = 0.213, p = 0.007). Conclusion: An adverse association of higher BMI with cognitive function began in adolescence and was restricted to low childhood SEP. Taller stature in both sexes and late growth in women were associated with better midlife cognitive performance. abstract_id: PUBMED:18236297 Early cigarette use and psychological distress as predictors of obesity risk in adulthood. The study examined the simultaneous longitudinal relationships of cigarette use in adolescence to continuing cigarette use, psychological distress, physical activity, subjective rating of health in emerging adulthood and, finally, to body mass index (BMI) in early adulthood. The study utilized data (N=414) from a drug abuse prevention trial, Midwestern Prevention Project, with subjects participating from ages 11-34 years. Structural equation modeling showed that cigarette use in early adolescence had direct paths to distress in the beginning of emerging adulthood, which in turn had significant relationships to cigarette use, physical activity, and subjective rating of health in mid-emerging adulthood. Finally, both cigarette use and physical activity had significant negative relationships to BMI in early adulthood. Results suggest that prevention programs that have been previously applied to either cigarette use or distress prevention might be re-examined for their potential to also affect obesity risk in adulthood. abstract_id: PUBMED:29129845 Microalbuminuria Is Associated with Lower Weight and Taller Height in Adolescence. Microalbuminuria (MA) is a well-known predictor of cardiovascular disease and mortality in adults. However, these relationships remain unclear in adolescents. A few studies on association between MA and obesity have been conducted in adolescents. However, the association between MA and growth parameters such as height has not been studied, despite the fact that adolescence is a period of rapid physiological change. We, therefore, aimed to evaluate the association between MA and growth parameters, and the association between MA and obesity related cardiovascular risk factors after adjusting for growth. The study included 1,459 adolescents (847 boys and 612 girls) aged 12-18 years who participated in the Korean National Health and Nutrition Examination Survey (2011-2013). A urine albumin/creatinine ratio (UACR) of 30-299 mg/g in a morning urine sample was considered to reflect MA. MA was detected in 53 (3.6%) including 24 girls. Height z-score of adolescents with MA was greater than that of normoalbuminuric adolescents (0.87 vs. 0.38; P &lt; 0.01). Upon multiple regression analysis, UACR was associated with lower weight z-score (β = -0.100, P &lt; 0.01) and higher height z-score (β = 0.069, P &lt; 0.01). In term of cardiovascular risk factors, the UACR was not associated with fasting glucose, high-density-lipoprotein cholesterol, or triglyceride levels. Adolescents with MA tend to be thin and tall. MA is not a useful screening method for obesity-related cardiovascular risk in adolescents, but instead MA is associated with taller height and lower weight, growth-related parameters in adolescence. abstract_id: PUBMED:33841892 Electronic cigarette use is negatively associated with body mass index: An observational study of electronic medical records. Objective: Vaping is advertised as a method to mitigate weight gain after smoking cessation; however, while there is an established inverse association between conventional tobacco use and body mass index (BMI), there is little research on the relationship between e-cigarettes and BMI. This research tested whether e-cigarette use was associated with BMI. Methods: A secondary data analysis of 207,117 electronic medical records from the UAB was conducted. Patient data from 1 September 2017 through 1 June 2018 were extracted. To be included in the analysis, a patient's record had to include measures of e-cigarette use and key sociodemographic information. Ordinary least squares regression was used to test the association between e-cigarette use and BMI, controlling for covariates; unconditional quantile regression was used to determine whether the association varied by BMI quantile. For comparison with tobacco smoking, the association between current tobacco smoking and BMI was estimated in a sample from the same population. Results: Respondents in the sample had an average BMI of 30.8 and average age of 50.0 years when BMI was measured. The sample was 51% female, 49.7% white, 46.7% black, and 1.0% Hispanic; 16.4% of the sample had less than a college education and approximately 5% reported currently using e-cigarettes. Individuals who reported using e-cigarettes had, on average, a lower BMI compared to those who did not report currently using e-cigarettes; results indicated that this association did not significantly vary by BMI quantile. Individuals who reported being current smokers had a lower BMI, on average. Conclusion: These findings suggest that using e-cigarettes is associated with a lower BMI in a population of individuals seeking health care, consistent with the association between conventional tobacco use and BMI. This study is a springboard for future research investigating the associations between e-cigarette use, BMI, and risk of obesity in the general population. abstract_id: PUBMED:35937794 Association between height growth patterns in puberty and stature in late adolescence: A longitudinal analysis in chinese children and adolescents from 2006 to 2016. Introduction: The relationship between the characteristics of puberty growth and the stature (height and overweight and obesity) in late adolescence was not clear. We aimed to explore the effects of puberty growth patterns on the stature in late adolescence. Methods: A total of 13,143 children from a longitudinal cohort from 2006 to 2016 in Zhongshan city of China were included. The Preece-Baines growth curve was fitted for each individual child, and the age at peak height velocity (APHV), peak height velocity (PHV), and age at take-off (TOA) were obtained from the Preece-Baines model. To compare the difference in height in late adolescence (at 18 years old) at different pubertal height growth patterns (height spurt timing, intensity, and duration), the height at baseline was matched by using the propensity score matching. The log-binomial model was applied to assess the association between the three pubertal height growth patterns (timing, intensity, and duration) and overweight and obesity status in late adolescence, controlling the urbanity and body mass index (BMI) at baseline. Results: After matching the baseline height, boys and girls in three pubertal patterns with early timing (P &lt; 0.01), small intensity (P &lt; 0.01), and short duration (P &lt; 0.01) of height spurt had the lowest final height in the late adolescence. A 16% increase and 45% increase of risk for overweight and obesity were significantly associated with the early APHV in boys and girls, respectively, relative risk (RR) in boys, 1.16(95% confidence interval, CI: 1.03-1.30), P = 0.011; RR in girls, 1.45(1.21-1.75), P &lt; 0.001. A 21% increase and 59% increase of risk for overweight and obesity were significantly associated with small PHV in boys and girls, respectively, RR in boys, 1.21(1.07-1.36), P &lt; 0.001; RR in girls, 1.59(1.30-1.95), P &lt; 0.001; and an 80% increase of risk for overweight and obesity with small spurt duration in girls (RR = 1.80; 95% CI: 1.49, 2.18; P &lt; 0.001). Conclusion: Pubertal growth patterns, including earlier puberty onset timing, smaller puberty intensity, and shorter puberty spurt duration, had a positive association with lower height risks and higher overweight and obesity risks in late adolescence. Answer: Cigarette use does appear to influence adiposity and height in adolescence, but the effects may differ by gender. A study found that in boys, an increase in cigarette use over a 2.5-year period was independently associated with a lower body mass index (BMI) and shorter height. However, in girls, cigarette use was not associated with height or adiposity (PUBMED:18346909). This suggests that cigarette use may have a negative impact on growth and weight in male adolescents, but the same relationship was not observed in females. Another study showed that adolescents with the most problematic smoking trajectory were more likely to exhibit a high-obesity trajectory from ages 20 to 24. Additionally, adolescents with an increasing marijuana use trajectory were more likely to exhibit an increased obesity trajectory in young adulthood (PUBMED:23899428). This indicates that substance use in adolescence, including cigarette smoking, can be associated with obesity later in life. Furthermore, a study using data from the Population Assessment of Tobacco and Health (PATH) study found that certain BMI trajectories were associated with a greater likelihood of current combustible cigarette use compared to a "normal weight increasing" class. Specifically, the "overweight early increasing," "overweight late increasing," and "obesity stable" classes had a greater likelihood of current combustible cigarette use (PUBMED:37925843). This suggests a relationship between weight trajectories and tobacco use during adolescence. In summary, cigarette use in adolescence does seem to influence adiposity and height, particularly in boys, where it is associated with lower BMI and shorter stature. The relationship between cigarette use and obesity may also extend into young adulthood, with certain patterns of substance use in adolescence linked to higher obesity trajectories later in life.
Instruction: Is a general women's health promotion program as effective as an HIV-intensive prevention program in reducing HIV risk among Hispanic women? Abstracts: abstract_id: PUBMED:12196620 Is a general women's health promotion program as effective as an HIV-intensive prevention program in reducing HIV risk among Hispanic women? Objective: This study aimed to assess whether participants in an HIV-intensive prevention program and participants in a general women's health promotion program reported greater HIV risk-reduction than participants in a wait-list control group immediately following program participation and at three-month follow-up. Methods: The authors studied 162 Hispanic women ages 18 to 35 years, most of them immigrants. Three-fourths of the sample (74%) reported earning less than $800 a month, 29% did not have a high school degree, and 90% had children. Data were gathered through surveys at baseline, at intervention completion, and at three-month follow-up. Information was collected on sociodemographics, HIV risk factors, and risk behaviors. Crude and adjusted (for demographics and dose) logistic regression analyses were used to assess program effects on participants' risk reduction. Results: Crude logistic regression analyses reveal that both programs resulted in increased condom use at post-test and follow-up. Only participants in the HIV-intensive prevention program reported increased safer sex negotiation at post-test and follow-up, however, and only participants in the women's health promotion program reported increased HIV testing at post-test. Conclusion: Both interventions increased condom use. The HIV-intensive prevention program appeared to be more effective in promoting safer sex negotiation, and the women's health promotion program appeared more effective in promoting HIV testing. The findings suggest that both approaches may be viable ways to package HIV prevention for short-term behavior change in this population. abstract_id: PUBMED:29958775 HIV Testing, Positivity, and Receipt of Services among Black, White, and Hispanic Women Participating in HIV Prevention Programs Funded by the Centers for Disease Control and Prevention, 2015. Background: Black women who are positive for human immunodeficiency virus (HIV) are more likely than other HIV-positive women to experience poor health outcomes. Given these disparities, we compared receipt of HIV testing and other preventive services among Black, White, and Hispanic women participating in HIV prevention programs funded by the Centers for Disease Control and Prevention in 2015. Methods: Data came from 61 health department jurisdictions and 123 community-based organizations. Using Wald χ2 tests, we assessed racial/ethnic differences in HIV testing by demographic characteristics, risk factors, HIV status, HIV service delivery, and (using unlinked data) participation in evidence-based HIV prevention interventions among women. We also assessed temporal changes in Black women's receipt of preventive services (2012-2015). Results: In 2015, there were 1,326,589 HIV testing events (single sessions in which one or more HIV tests are performed to determine HIV status) that occurred among Black, White, and Hispanic women, resulting in 4,868 positive diagnoses. Proportionally fewer Black (vs. White and Hispanic) women were linked to HIV medical care within 90 days (64.4% vs. 69.1% and 73.7%), interviewed for partner services (50.4% vs. 54.4% and 63.5%), and referred to risk reduction services (55.5% vs. 57.7% and 59.8%). From 2012 to 2015, HIV testing events among Black women decreased by 15.2%, but linkage to care within 90 days increased by 30.6% and participation in one or more evidence-based intervention increased by 46.8% among HIV-positive Black women. Conclusions: Black HIV-positive women were less likely than White and Hispanic women to receive services that prevent HIV-related morbidity and mortality. Additional programmatic efforts are needed to increase the proportion of HIV-positive Black women who are linked to services to reduce HIV disparities among women. abstract_id: PUBMED:28467279 Woman to Woman (W2W): Adapting an HIV risk reduction intervention for older women. Woman to Woman (W2W) is a novel adaptation of the Sisters Informing Sisters about Topics on AIDS (SISTA) HIV prevention program. This article describes the process of adapting and piloting W2W based on recommendations from existing HIV prevention research. Six older women, all of whom had histories of homelessness and the majority of whom identified as African American, enrolled in the study, which piloted the adapted intervention and materials, evaluated the acceptability of the program, and assessed the measures related to the intervention. Participants described satisfaction with the program and had high rates of attendance; observations regarding the measures suggest the need to further develop assessments of HIV knowledge, condom use self-efficacy, and risk behaviors in this context. abstract_id: PUBMED:25788609 After the Fact: A Brief Educational Program on HIV Postexposure Prophylaxis for Female Detainees in a Local Jail. Women who have been in jail are at increased risk of acquiring HIV when they are in the community. Nonoccupational postexposure prophylaxis (nPEP) reduces HIV transmission following high-risk behaviors and is an effective HIV prevention strategy. The authors designed a 15-minute interactive educational program to increase inmates' knowledge of nPEP. Before the program, participants self-reported high HIV risk yet low risk perception and lack of nPEP awareness. After the program, nPEP knowledge scores increased by 40% regardless of demographic or HIV-risk characteristics. This study demonstrates that a brief, easy-to-deliver educational intervention can be carried out in a jail, is effective at raising awareness of both HIV risk and nPEP, and may be useful for others seeking to increase use of this prevention strategy for high-risk women during incarceration. abstract_id: PUBMED:15727962 Promoting health for transgender women: Transgender Resources and Neighborhood Space (TRANS) program in San Francisco. Transgender women are at high risk for HIV, substance abuse, and mental health problems. We describe a health promotion intervention program tailored to transgender women in San Francisco. The program creates a safe space for providing transgender-sensitive education about HIV risk reduction, substance abuse prevention, and general health promotion. Transgender health educators conduct workshops and make referrals to appropriate substance abuse treatment programs and other services in the community. Evaluation findings indicate that this community-tailored intervention may be an effective way to reach transgender women and reduce sexual risk behaviors, depression, and perceived barriers to substance abuse treatment. abstract_id: PUBMED:8743877 HIV sexual risk reduction interventions for women: a review. AIDS has emerged as a serious public health threat for women. Reducing the risk for HIV infection among sexually active women requires the adoption of preventive strategies that effectively inhibit viral transmission. However, there is a paucity of published literature describing interventions targeted toward women and even fewer published reports evaluating their effectiveness. We conducted a search of HIV prevention interventions for women using the MEDLINE, ERIC and PSYCHLIT on-line computer database for the years January 1984-May 1995 to assess the effectiveness of interventions in increasing condom use during sexual intercourse. Interventions efficacious at increasing condom use applied a social psychological model of behavior to guide the development and implementation of the intervention, used randomized controlled designs to evaluate program efficacy, emphasized gender-related influences, were peer-led, and used multiple intervention sessions. While these findings are promising, methodologic limitations preclude any definitive assessment of programmatic efficacy. One challenge for future studies is to corroborate and extend these findings using rigorous methodologic research designs to evaluate programmatic efficacy, in particular, longer follow-up to assess stability of treatment effects over time. Another challenge is to understand the implications of previous research for the development of more gender-relevant interventions. The urgency of the HIV epidemic demands that the development and evaluation of HIV-prevention interventions tailored toward women remain a public health priority. Medical Subject Headings (MeSH): women, HIV/AIDS, interventions, prevention. abstract_id: PUBMED:24432191 The Impact of a Health Promotion Educational Program on Cardiovascular Risk Factors for HIV Infected Women on Antiretroviral Therapy. This study investigates the efficacy of a health promotion educational program on improving cardiovascular risk factors of weight, Body Mass Index, and waist to hip ratio in 76 predominately African American HIV-infected women. The health promotion educational program was the control group (that focused on improving self-efficacy for nutrition, exercise, stress reduction and women's health behaviors) of a NIH-funded study. The majority of participants was overweight, obese, or at high risk based on waist hip ratio at the beginning of the study. There were no statistically significant improvements in body mass index or waist hip ratio from pre intervention to up to 9 months post intervention. There were significant changes in waist hip ratio and body mass index in both directions (improvement and worsened) for a small group of participants. The health promotion program did not affect significant changes in cardiovascular risk and should be revised, lengthened, and refocused on nutrition, diet, exercise, and long term goal commitments to reduce the high risk for cardiovascular disease in this group. abstract_id: PUBMED:18774889 The diffusion of a community-level HIV intervention for women: lessons learned and best practices. Abstract Early in the HIV/AIDS epidemic in the United States, relatively few women were diagnosed with HIV infection and AIDS. Today, the epidemic represents a growing and persistent health threat to women in the United States, especially young women and women of color. In 2005, the leading cause of HIV infection among African American women and Latinas was heterosexual contact. In addressing HIV prevention needs among women, community-level strategies are needed to increase consistent condom use by women and their partners and to change community norms to support safer sex behaviors. The Real AIDS Prevention Project (RAPP) is a community-based HIV prevention intervention for women and their partners. RAPP is based on a community mobilization model that involves a combination of activities, including street outreach, one-on-one discussions called stage-based encounters, role model stories, community networks, and small group activities. The objectives of RAPP are to increase consistent condom use by women and their partners and change community norms associated with perceptions of condom use and high-risk behaviors in an effort to make safer sex practice more acceptable. This paper describes the Centers for Disease Control and Prevention (CDC) Division of HIV/AIDS Prevention (DHAP) effort to nationally diffuse RAPP from March 2003 through May 2007 and lessons learned from that diffusion experience. The paper specifically discusses (1) collaborating and planning with researchers, (2) a diffusion needs assessment that was designed to assess prior implementation experiences among select agencies, (3) developing the intervention package, (4) developing and piloting training for community-based organizations (CBOs), (5) a rollout of national trainings for health departments and community-based organizations interested in implementing RAPP, and (6) ongoing quality assurance activities and the provision of technical assistance and support. RAPP has been proven effective in reducing HIV transmission risk behaviors and improving communication and negotiation skills necessary for African American women and Latinas to reduce their risk for HIV infection and improve their overall health status. abstract_id: PUBMED:10097964 Sex in the New World: an empowerment model for HIV prevention in Latina immigrant women. In 1996, nearly 60% of U.S. AIDS cases among Latinas were attributed to unprotected sex with men. Economic disadvantage, language barriers, and strong cultural gender norms regarding sex exacerbate the risk for HIV infection among Latina immigrant women. Through a collaboration among scientists and providers, this study was designed to evaluate the impact of a multifaceted empowerment program for Latina immigrant women on HIV risk behaviors. Women (N = 74) were followed for the first 6 months of their participation and attended up to nine distinct types of activities (e.g., information meetings, friendship circles, and workshops). Although the program was not developed to specifically target HIV risk behaviors, women showed significant increases in sexual communication comfort, were less likely to maintain traditional sexual gender norms, and reported changes in decision-making power. Targeting broader sociocultural issues may increase the necessary skills for Latina women to prevent HIV infection from their sexual partners. Successful collaborations between scientists and providers are critical in developing effective, community-relevant interventions. abstract_id: PUBMED:12802115 Project CHARM: an HIV prevention program for adolescent mothers. Effects were examined of a human immunodeficiency virus (HIV) prevention program for adolescent mothers offered in pregnant minor and parenting programs in Los Angeles County. The sample included young women (N = 497) of predominantly poor, Latina backgrounds who had complete data from baseline to the one-year follow-up. The treatment group demonstrated statistically significant improvements in AIDS knowledge and intentions to use condoms, and fewer sex partners at the six-month follow-up as compared to the control group. Findings related to the effects of the intervention on theoretical variables provide some support for selected concepts from Social Cognitive Theory and the Theory of Reasoned Action. Answer: Based on the provided abstracts, it appears that both general women's health promotion programs and HIV-intensive prevention programs can be effective in reducing HIV risk among Hispanic women, but they may impact different aspects of risk reduction. A study comparing these two types of programs found that both interventions increased condom use among participants. However, the HIV-intensive prevention program was more effective in promoting safer sex negotiation, while the women's health promotion program was more effective in promoting HIV testing (PUBMED:12196620). This suggests that while both types of programs can contribute to reducing HIV risk, they may do so through different mechanisms and outcomes. It is important to note that the effectiveness of these programs can also be influenced by how they are tailored to the specific needs of the population they serve. For example, the Woman to Woman (W2W) program, an adaptation of the Sisters Informing Sisters about Topics on AIDS (SISTA) program for older women, was described as satisfactory by participants and had high rates of attendance, indicating that adapting interventions to the target demographic is crucial (PUBMED:28467279). Additionally, other studies have highlighted the importance of addressing broader sociocultural issues, such as gender norms and empowerment, to increase the skills necessary for Latina women to prevent HIV infection (PUBMED:10097964). Interventions that are community-tailored, such as the Transgender Resources and Neighborhood Space (TRANS) program, have also been shown to be effective in reaching high-risk populations and reducing sexual risk behaviors (PUBMED:15727962). In conclusion, both general women's health promotion programs and HIV-intensive prevention programs have roles to play in reducing HIV risk among Hispanic women, but they may emphasize different aspects of risk reduction. Tailoring these programs to the specific needs and contexts of the target population is key to their effectiveness.
Instruction: MR angiography of peripheral arterial disease of the distal legs: is time resolved MRA (TRICKS) necessary? Abstracts: abstract_id: PUBMED:18772748 MR angiography of peripheral arterial disease of the distal legs: is time resolved MRA (TRICKS) necessary? Purpose: To demonstrate the added diagnostic value of time-resolved imaging of contrast kinetics (TRICKS) in the evaluation of lower limb arteries compared to standard 3 level MRA with stepping table method. Patients And Methods: Forty patients (30% diabetics) with lower extremity peripheral arterial disease (87.5% with chronic ischemia) underwent standard contrast MRA including TRICKS of the distal arteries. Five arterial segments were defined per leg, and 395 arterial segments were compared (one patient with amputation). Two reviewers evaluated the quality of arterial imaging, presence of venous return and degree of stenosis per segment. The degree of interobserver agreement for arterial stenosis measurement was calculated. Results: More arterial segments could be analyzed on the TRICKS sequence (good or excellent analysis in 63.03%-66.32% of arterial segments compared to 41.51%-47.08% on routine MRA). There was less venous contamination on TRICKS images (25.57% to 27.60% gain). The degree of interobserver agreement was superior with TRICKS compared to standard MRA (kappa 0.85 vs 0.69). Conclusion: The TRICKS sequence can be added to standard MRA for pre-therapeutic evaluation of distal arteries in patients with peripheral arterial disease, especially with chronic ischemic with rest pain and/or trophic changes. abstract_id: PUBMED:33825259 Diagnostic Performance of a Lower-dose Contrast-Enhanced 4D Dynamic MR Angiography of the Lower Extremities at 3 T Using Multisegmental Time-Resolved Maximum Intensity Projections. Background: For peripheral artery disease (PAD), MR angiography (MRA) is a well-established diagnostic modality providing morphologic and dynamic information comparable to digital subtraction angiography (DSA). However, relatively large amounts of contrast agents are necessary to achieve this. Purpose: To evaluate the diagnostic accuracy of time-resolved 4D MR-angiography with interleaved stochastic trajectories (TWIST-MRA) by using maximum intensity projections (MIPs) of dynamic images acquired with reduced doses of contrast agent. Study Type: Retrospective. Population: Forty adult PAD patients yielding 1088 artery segments. Field Strength/sequence: A 3.0 T, time-resolved 4D MR-angiography with TWIST-MRA and MIP of dynamic images. Assessment: DSA was available in 14 patients (256 artery segments) and used as reference standard. Three-segmental MIP reconstructions of TWIST-images after administration of 3 mL of gadolinium-based contrast agent (Gadoteridol/Prohance®, 0.5 M) per anatomical level (pelvis, thighs, and lower legs) yielded 256 artery segments for correlation between MRA and DSA. Three independent observers rated image quality (scale: 1 [nondiagnostic] to 4 [excellent]) and the degree of venous overlay (scale: 0 [none] to 2 [significant]) for all segments. Diagnostic accuracy for the detection of &gt;50% stenosis and artery occlusion was calculated for all observers. Statistical Tests: Binary classification test (sensitivity, specificity, positive/negative predictive values, diagnostic accuracy). Intraclass correlation coefficients (ICCs), logistic regression analysis with comparison of areas under the receiver-operating-characteristics (ROC) curves (AUCs) with the DeLong method. Bland-Altman-comparison. Results: High diagnostic performance was achieved for the detection of &gt;50% stenosis (sensitivity 92.9% [84.3-99.9% (95%-CI)] and specificity 98.5% [95.7-99.8% (95%-CI)]) and artery occlusion (sensitivity 93.1% [77.2-99.2% (95%-CI)] and specificity 99.1% [96.9-99.9% (95%-CI)]). Inter-reader agreement was excellent with ICC values ranging from 0.95 to 1.0 for &gt;50% artery stenosis and occlusion. Image quality was good to excellent for both readers (3.41 ± 0.72, 3.33 ± 0.65, and 3.38 ± 0.61 [mean ± SD]) with good correlation between observer ratings (ICC 0.71-0.81). No significant venous overlay was observed (0.06 ± 0.24, 0.23 ± 0.43 and 0.11 ± 0.45 [mean ± SD]). Data Conclusion: MIPs of dynamic TWIST-MRA offer a promising diagnostic alternative necessitating only reduced amounts (50%) of gadolinium-based contrast agents for the entire runoff vasculature. Evidence Level: 3 TECHNICAL EFFICACY: Stage 2. abstract_id: PUBMED:24338813 Nonenhanced ECG-gated quiescent-interval single shot MRA: image quality and stenosis assessment at 3 tesla compared with contrast-enhanced MRA and digital subtraction angiography. Purpose: To evaluate the diagnostic accuracy of a nonenhanced electrocardiograph-gated quiescent-interval single shot MR-angiography (QISS-MRA) at 3 Tesla with contrast-enhanced MRA (CE-MRA) and digital subtraction angiography (DSA) serving as reference standard. Materials And Methods: Following institutional review board approval, 16 consecutive patients with peripheral arterial disease underwent a combined peripheral MRA protocol consisting of a large field-of-view QISS-MRA, continuous table movement MRA, and an additional time-resolved MRA of the calves. DSA correlation was available in eight patients. Image quality and degree of stenosis was assessed. Sensitivity and specificity of QISS-MRA was evaluated with CE-MRA and DSA serving as the standards of reference and compared using the Fisher exact test. Results: With the exception of the calf station, image quality with QISS-MRA was rated statistically significantly less than that of CE-MRA (P &lt; 0.05, P = 0.17, and P = 0.6, respectively). A greater percentage of segments were not accessible with QISS-MRA (19.5-20.1%) in comparison to CE-MRA (10.9%). Relative to DSA, sensitivity for QISS-MRA was high (100% versus 91.2% for CE-MRA, P = 0.24) in the evaluated segments; however, specificity (76.5%) was substantially less than that of CE-MRA (94.6%, P = 0.003). Conclusion: Overall image quality and specificity of QISS-MRA at 3T are diminished relative to CE-MRA. However, when image quality is adequate, QISS-MRA has high sensitivity and, thus, has potential use in patients with contraindications to gadolinium. abstract_id: PUBMED:30211442 Unenhanced Velocity-Selective MR Angiography (VS-MRA): Initial Clinical Evaluation in Patients With Peripheral Artery Disease. Background: Safe and accurate imaging of the peripheral arterial system is important for diagnosis and treatment planning of patients with peripheral artery disease (PAD). Purpose: To evaluate image quality and diagnostic performance of unenhanced magnetic resonance angiography (MRA) based on velocity-selective (VS) magnetization preparation (termed VS-MRA). Study Type: Prospective. Population: Thirty-one symptomatic PAD patients underwent VS-MRA. Twenty-four of them underwent clinical digital subtraction angiography (DSA) examination, 18.8 ± 5.2 days after the MR scans. Field Strength/sequence: 1.5T MRI that included VS-MRA (homemade research sequence) and phase-contrast flow imaging (clinical sequence). Assessment: Image quality (0: nondiagnostic, 3: excellent) and stenosis severity (0: normal, 3: occlusion) of VS-MRA images were assessed independently by three reviewers. Arterial signal-to-noise-ratio (SNR) and artery-to-muscle contrast-to-noise ratio (CNR) were calculated. Statistical Tests: The sensitivity and specificity of VS-MRA were calculated for the detection of significant stenosis (&gt;50%) with DSA as the reference standard. Interobserver agreement among the three reviewers was evaluated by using Cohen κ-statistics. Results: The image quality score of VS-MRA was 2.7 ± 0.5 for Reader 1, 2.8 ± 0.5 for Reader 2, and 2.8 ± 0.4 for Reader 3; SNR and CNR were 37.8 ± 12.5 and 30.5 ± 11.8, respectively. Segment-based analysis revealed that VS-MRA had sensitivities of 85.3%, 74.5%, and 78.4%, respectively, for the three reviewers, and specificities of 93.5%, 96.8%, and 95.2%. The interobserver agreement for the stenosis grading was good, as demonstrated by Cohen κ values of 0.76 (Reader 1 vs. Reader 2), 0.82 (Reader 1 vs. Reader 3), and 0.79 (Reader 2 vs. Reader 3). Data Conclusion: Unenhanced VS-MRA allows clear depiction of the peripheral arteries and accurate stenosis grading, as evidenced by high image quality scores and strong agreement with DSA. Level Of Evidence: 2 Technical Efficacy: Stage 2 J. Magn. Reson. Imaging 2019;49:744-751. abstract_id: PUBMED:21442557 Nonenhanced MR angiography techniques Especially in regard to the potential risks for the development of nephrogenic systemic fibrosis (NSF) following the administration of Gadolinium-based contrast material, nonenhanced MR angiography (MRA) methods are becoming ever more important. Besides well-established time-of-flight (TOF) and phase-contrast (PC) MRA, alternative imaging techniques based on balanced steady-state free precession (bSSFP) and turbo-spin-echo (TSE) sequences are increasingly used in combination with or without arterial spin labeling (ASL) strategies. This article provides an overview of the principles and clinical values of different nonenhanced MRA techniques. In addition, recent nonenhanced MRA developments are presented. abstract_id: PUBMED:26185106 Noncontrast MR angiography (MRA) of infragenual arteries using flow-sensitive dephasing (FSD)-prepared steady-state free precession (SSFP) at 3.0 Tesla: Comparison with contrast-enhanced MRA. Background: To evaluate the feasibility and diagnostic performance of flow-sensitive dephasing (FSD)-prepared steady-state free precession (SSFP) MR angiography (MRA) for imaging infragenual arteries at 3.0T, with contrast enhanced MR angiography (CE MRA) as reference. Methods: Twenty consecutive patients with suspicion of lower extremity arterial disease undergoing routine CE MRA were recruited. FSD MRA was performed at calf before CE MRA. Image quality and stenosis degree of infragenual arteries from both techniques were independently evaluated and compared. Six patients in this study underwent DSA examination. Results: Three undiagnostic segments were excluded with severe venous contamination in CE MRA. A total of 197 calf arterial segments images were analyzed. No significant difference existed in the relative signal intensity (rSI) of arterial segments between FSD MRA and CE MRA techniques (0.92 ± 0.09 versus 0.93 ± 0.05; P = 0.207). However, the subjective image quality score was slightly higher in FSD MRA (3.66 ± 0.81 versus 3.49 ± 0.87; P = 0.050). With CE MRA images as reference standard, slight overestimation existed in FSD MRA (2.19 ± 1.24 versus 2.09 ± 1.18; P = 0.019), with total agreement of 84.3% on the basis of all arterial segments. The sensitivity, specificity, negative predictive value, and positive predictive value of FSD MRA was 96.4%, 93.0%, 98.5%, and 84.1%. No significant difference in the stenosis degree score was detected between MRA (FSD MRA and CE MRA) and DSA (P &gt; 0.05). Conclusion: FSD MRA performed on at 3.0T without the use of contrast medium provides diagnostic images allowing for arterial stenosis assessment of calf arteries that was highly comparable with CE MRA. Moreover, venous contamination was less problematic with FSD MRA. abstract_id: PUBMED:26160043 Have CT and MR Angiography Replaced Catheter Angiography in Diagnosing Peripheral Arterial Disease? Purpose: To determine whether MR angiography (MRA) and CT angiography (CTA) have replaced diagnostic catheter angiography (DCA) in diagnosing peripheral arterial disease. Methods: Medicare Part B databases for 2002-2013 were reviewed. Current Procedural Terminology codes for extremity MRA, CTA, and DCA were selected. Physician specialty codes were used to classify providers as radiologists, cardiologists, or surgeons. Utilization rates per 100,000 Medicare beneficiaries were calculated. Results: Among all specialties, the combined utilization rate of all 3 types of angiography increased from 917 per 100,000 in 2002 to 1,261 in 2006 (+38%), after which it remained stable until 2010, and then declined to 1,010 in 2013. The overall rate of MRA and CTA together increased from 89 in 2002 to 440 in 2006 (+394%), after which it leveled off, and then gradually decreased to 331 in 2013. In 2013, 33% of the total procedures were MRA or CTA, up from 10% in 2002. Radiologists performed &gt;85% of MRA and CTA examinations. Among radiologists, the DCA utilization rate decreased by 75% from 2002 to 2013, whereas among cardiologists and surgeons together, the overall DCA utilization rate increased by 64% from 2002 to 2010 before dropping somewhat in 2011. Conclusions: Among radiologists, MRA and CTA have replaced DCA in diagnosing peripheral arterial disease. Although overall utilization of DCA has remained steady, it has risen sharply among cardiologists and surgeons, while dropping sharply among radiologists. Given the increased utilization of DCA among cardiologists and surgeons despite noninvasive alternatives, self-referral continues to be of concern in the setting of increasing health care costs. abstract_id: PUBMED:24261379 Impact of time-resolved MRA on diagnostic accuracy in patients with symptomatic peripheral artery disease of the calf station. Objective: The purpose of this article is to evaluate the added diagnostic accuracy of time-resolved MR angiography (MRA) of the calves compared with continuous-table-movement MRA in patients with symptomatic lower extremity peripheral artery disease (PAD) using digital subtraction angiography (DSA) correlation. Materials And Methods: Eighty-four consecutive patients with symptomatic PAD underwent a low-dose 3-T MRA protocol, consisting of continuous-table-movement MRA, acquired from the diaphragm to the calves, and an additional time-resolved MRA of the calves; 0.1 mmol/kg body weight (bw) of contrast material was used (0.07 mmol/kg bw for continuous-table-movement MRA and 0.03 mmol/kg bw for time-resolved MRA). Two radiologists rated image quality on a 4-point scale and stenosis degree on a 3-point scale. An additional assessment determined the degree of venous contamination and whether time-resolved MRA improved diagnostic confidence. The accuracy of stenosis gradation with continuous-table-movement and time-resolved MRA was compared with that of DSA as a correlation. Overall diagnostic accuracy was calculated for continuous-table-movement and time-resolved MRA. Results: Median image quality was rated as good for 578 vessel segments with continuous-table-movement MRA and as excellent for 565 vessel segments with time-resolved MRA. Interreader agreement was excellent (κ = 0.80-0.84). Venous contamination interfered with diagnosis in more than 60% of continuous-table-movement MRA examinations. The degree of stenosis was assessed for 340 vessel segments. The diagnostic accuracies (continuous-table-movement MRA/time-resolved MRA) combined for the readers were obtained for the tibioperoneal trunk (84%/93%), anterior tibial (69%/87%), posterior tibial (85%/91%), and peroneal (67%/81%) arteries. The addition of time-resolved MRA improved diagnostic confidence in 69% of examinations. Conclusion: The addition of time-resolved MRA at the calf station improves diagnostic accuracy over continuous-table-movement MRA alone in symptomatic patients with PAD. abstract_id: PUBMED:36781568 Non-contrast MR angiography of pelvic arterial vasculature using the Quiescent interval slice selective (QISS) sequence. To evaluate Quiescent Interval Slice Selective (QISS) balanced steady-state free precession (bSSFP) and QISS fast low-angle shot (FLASH) sequences for non-contrast Magnetic Resonance Angiography (MRA) of iliac arteries regarding image quality and diagnostic confidence in order to establish these sequences in daily clinical practice. A prospective study of healthy subjects (n = 10) was performed. All subjects underwent the QISS MRI protocol with bSSFP und FLASH sequences. Vessel contrast-to-background ratio (VCBR) were measured in pre-defined vessel segments. Image quality and diagnostic confidence was assessed using a Likert scale (five-point scale). Inter-reader agreement was determined using Cohen's kappa coefficient (κ). Ten healthy subjects (median age 29 years, IQR: 26.25 to 30 years) were included in this prospective study. Median MR examination time was 2:05 min (IQR 1:58 to 2:16) for QISS bSSFP and 4:11 min (IQR 3:57 to 4:32) for QISS FLASH. Both sequences revealed good VCBR in all examined vessel segments. VCBR (muscle tissue) were marginally higher for FLASH sequences (e.g., 0.82 vs. 0.78 in the right femoral artery, p = 0.035*), while bSSFP sequence showed significantly higher VCBR (fat tissue) in the majority of examined arterials vessels (e.g., 0.78 vs. 0.62 in right femoral artery, p = 0.001*). The image quality and diagnostic confidence of both sequences were rated as good to excellent. Moderate to good inter-reader agreement was found. QISS MRA using bSSFP and FLASH sequences are diagnostic for visualization of iliac arterial vasculature. The QISS bSSFP sequence might offer advantages due to the markedly shorter exam time and superior visualization of smaller vessels. The QISS FLASH sequence seems to be a robust alternative for non-contrast MRA since it is less sensitive to magnetic field inhomogeneities. abstract_id: PUBMED:26560724 3D non-contrast-enhanced ECG-gated MR angiography of the lower extremities with dual-source radiofrequency transmission at 3.0 T: Intraindividual comparison with contrast-enhanced MR angiography in PAOD patients. Objective: To compare prospectively image quality and diagnostic confidence of flow-sensitive 3D turbo spin echo (TSE)-based non-contrast-enhanced MR angiography (NE-MRA) at 3.0 T using dual-source radiofrequency (RF) transmission with contrast-enhanced MRA (CE-MRA) in patients with peripheral arterial occlusive disease (PAOD). Methods: After consent was obtained, 35 patients (mean age 69.1 ± 10.6 years) with PAOD stage II-IV underwent NE-MRA followed by CE-MRA. Signal-to-noise ratio and contrast-to-noise ratio were calculated. Subjective image quality was independently assessed by two radiologists and stenosis scoring was performed in 875 arterial segments. Sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) for stenosis classification were calculated using CE-MRA as a reference method. Diagnostic agreement with CE-MRA was evaluated with Cohen's kappa statistics. Results: NE-MRA provided high objective and subjective image quality at all levels of the arterial tree. Sensitivity and specificity for the detection of relevant stenosis was 91 % and 89 %, respectively; the NPV was 96 % and the PPV 78 %. There was good concordance between CE-MRA and NE-MRA in stenosis scoring. Conclusions: 3D electrocardiography (ECG)-gated TSE NE-MRA with patient-adaptive dual-source RF transmission at 3.0 T is a promising alternative for PAOD patients with contraindications for gadolinium-based contrast agents. It offers high sensitivity and NPV values in the detection of clinically relevant arterial stenosis. Key Points: • Flow-sensitive TSE NE-MRA is a promising technique for PAOD evaluation. • Diagnostic accuracy is comparable to contrast-enhanced MRA. • NE-MRA eliminates the risk of NSF in patients with renal insufficiency. • Costs arising from the use of contrast agents can be avoided. Answer: The question of whether time-resolved MRA (TRICKS) is necessary for the evaluation of peripheral arterial disease (PAD) of the distal legs can be addressed by examining the diagnostic value and performance of TRICKS compared to other MRA techniques. According to the study with PUBMED:18772748, TRICKS offers added diagnostic value in the evaluation of lower limb arteries compared to standard MRA. The study found that more arterial segments could be analyzed with TRICKS, with a higher quality of arterial imaging and less venous contamination. Additionally, the interobserver agreement for arterial stenosis measurement was superior with TRICKS compared to standard MRA. This suggests that TRICKS can be beneficial, especially for pre-therapeutic evaluation of distal arteries in patients with chronic ischemic conditions. However, other studies have explored alternative MRA techniques that also show promise. For instance, the study with PUBMED:33825259 demonstrated that time-resolved 4D MR-angiography with TWIST-MRA using reduced doses of contrast agent achieved high diagnostic performance for detecting stenosis and artery occlusion, with excellent inter-reader agreement and good image quality. Moreover, nonenhanced MRA techniques are gaining importance due to the potential risks associated with gadolinium-based contrast agents, such as nephrogenic systemic fibrosis (NSF) ( PUBMED:21442557). Techniques like QISS-MRA ( PUBMED:24338813) and VS-MRA ( PUBMED:30211442) have shown high sensitivity and good image quality, making them potential alternatives for patients with contraindications to gadolinium. In summary, while TRICKS has demonstrated added diagnostic value in the evaluation of PAD of the distal legs, other MRA techniques, including time-resolved 4D MR-angiography with TWIST-MRA, QISS-MRA, and VS-MRA, also offer high diagnostic accuracy and quality imaging. The choice of technique may depend on the specific clinical scenario, patient safety considerations, and the availability of technology.
Instruction: Are total prostate-specific antigen serum levels in cirrhotic men different from those in normal men? Abstracts: abstract_id: PUBMED:19286245 Are total prostate-specific antigen serum levels in cirrhotic men different from those in normal men? Objectives: To determine the serum total prostate-specific antigen (tPSA) levels in cirrhotic men and compare them with those in noncirrhotic men. Methods: We prospectively evaluated 113 cirrhotic patients listed for liver transplantation using the serum tPSA, total testosterone level, and Child-Pugh liver function score according to age and severity of liver disease. The tPSA levels were compared with those of 661 healthy men. The Mann-Whitney U test was used for statistical analysis, with a significance level of .05. Results: The median age of the cirrhotic and noncirrhotic patients was 55 years (range 28-70) and 58 years (range 46-70), respectively (P &lt; .01). However, when stratified by age group (&lt;49, 50-59, and &gt;60 years), this difference was not significant. The median serum tPSA level was 0.3 ng/mL (range 0.04-9.9) and 1.3 ng/mL (range 0.04-65.8) in the cirrhotic and noncirrhotic group, respectively (P &lt; .0001). Stratifying both groups according to age, the cirrhotic patients had significantly lower tPSA levels than did the noncirrhotic patients. According to the Child-Pugh score (A, B, and C), Child-Pugh class C patients had significantly lower tPSA levels than did Child-Pugh class A patients and also had lower testosterone levels than did Child-Pugh class A and B patients. The tPSA levels correlated significantly with the testosterone levels in the cirrhotic patients (P = .028). Conclusions: The results of our study have shown that cirrhotic patients have approximately 4 times lower serum tPSA levels than noncirrhotic men. Patients with more severe liver disease have lower tPSA and testosterone levels than patients less affected. The tPSA levels in cirrhotic men are affected by the total testosterone levels. abstract_id: PUBMED:28490387 Serum carotenoid and retinol levels in African-Caribbean Tobagonian men with high prostate cancer risk in comparison with African-American men. Black men are known to have a higher risk for prostate cancer (PC). Carotenoids and retinol, linked to PC, have not been compared in different black populations at risk. We examined serum carotenoid and retinol levels between PC-free African-Caribbean (AC) Tobagonian men with a high PC risk (high-grade prostatic intraepithelial neoplasia, atypical foci or repeated abnormal PC screenings) and African-American (AA) men with elevated serum prostate-specific antigen (PSA) levels (≥4 ng/ml). AC men who participated in the 2003 lycopene clinical trial and AA men who participated in the 2001-2006 National Health and Nutrition Examination Survey were compared. Serum specimens were analysed for carotenoid (β-carotene, α-carotene, β-cryptoxanthin, lutein/zeaxanthin and lycopene) and retinol levels by isocratic HPLC. Quantile regression was used to examine the association between serum carotenoid and retinol levels and black ethnicity, overall and among men with elevated serum PSA. There were sixty-nine AC men and sixty-five AA men, aged 41-79 years, included. AC men were associated with lower serum lycopene and retinol levels, and higher serum α- and β-carotenes and lutein/zeaxanthin levels compared with AA men, after adjusting for age, BMI, ever smoked cigarettes, education and hypertension (P≤0·03). Among men with elevated PSA, serum retinol was no longer statistically significant with ethnicity (P=0·06). Possible differences may be attributed to dietary intake, genetics and/or factors that influence bioavailability of these micronutrients. Prospective studies are warranted that investigate whether these differences in micronutrients between AC Tobagonian and AA men influence PC risk. abstract_id: PUBMED:30084276 No association of serum PSA with vitamin D or total oxidant-antioxidant capacity in healthy men. Background and aim: Vitamin D deficiency and oxidative stress were suggested to be related to prostate cancer risk. We aimed to investigate the association of serum PSA concentration with vitamin D and total oxidant/antioxidant levels. Materials and methods: A total of 95 healthy men were enrolled for the cross sectional study. Serum PSA, 25(OH)D, serum total oxidant status, and total antioxidant status were measured. Results: Serum PSA was significantly negatively correlated with serum total oxidant status (r= -0.309, p = .003) but there was no significant correlation between PSA and 25(OH)D (p = .383) or total antioxidant levels (p = .233). After adjustment for age BMI and smoking status with multiple regression analysis, there was no significant association between serum PSA and total oxidant status. Conclusion: We find no evidence for an association between PSA and vitamin D levels or serum total oxidant/antioxidant levels. abstract_id: PUBMED:11096235 Free and total prostate-specific antigen levels in saliva and the comparison with serum levels in men. Objective: We investigated free and total prostate-specific antigen (PSA) levels and free/total (f/t) ratio in the fasting saliva and compared them with the serum levels in normal individuals, in patients with benign prostatic hyperplasia (BPH) and prostate cancer. Our aim was to determine free and total PSA and f/t ratio in saliva and to improve and simplify the differentiation between BPH and prostate cancer by using saliva as an alternative to serum. Methods: Serum and fasting saliva concentrations of free and total PSA were measured in 35 men with BPH, 16 men with stage D prostate cancer, and 25 healthy men. Serum and fasting saliva samples were collected at the same time and were analyzed on the same day at our laboratory with microparticle enzyme immunoassay technology. Results: For the total of 76 men, there was a significant correlation between free and total PSA levels in each sample (r = 0.97 for serum and r = 0.44 for saliva, p&lt;0.001). Although there was a significant difference between three groups for serum-free and total PSA levels and serum f/t ratios, no significant difference was determined between groups for salivary free and total PSA levels and salivary f/t ratios. No correlations were found between patient age and salivary PSA levels. Conclusions: Fasting salivary free and total PSA levels are not effected by high serum levels of prostatic origin. Although there was a significant difference between mean serum and salivary levels of free and total PSA in each group, the f/t ratio of saliva was very close to the serum ratio of normal subjects. Determination of free and total PSA in saliva to improve and simplify the differentiation between prostate cancer and BPH is not suitable for use as alternative measurement of serum. abstract_id: PUBMED:24139354 Racial differences in longitudinal changes in serum prostate-specific antigen levels: the Olmsted County Study and the Flint Men's Health Study. Objective: To determine the distribution of, and racial differences in, changes in prostate-specific antigen (PSA) from a population-based sample of men. Materials And Methods: Data from 2 prospective cohort studies of a random sample of white men, aged 40-79 years in 1990, followed biennially through 2007, and African American men, aged 40-79 years in 1996, followed through 2000, were examined to assess the longitudinal changes in PSA concentrations. Serum PSA levels were determined at each examination for both cohorts and observations after a diagnosis of prostate cancer or treatment of benign prostatic hyperplasia were censored. The observed and estimated annual percentage of change in the serum PSA levels were examined by race. Results: At baseline, the median PSA level in the white men did not differ from the median level observed in the African American men (white men 0.9 ng/mL; African American men 0.9 ng/mL; P = .48). However, African American men had a much more rapid increase in the PSA level over time compared with the white men (median annual percent change in PSA for white men 3.6%/y, African American men 7.9%/y; P &lt;.001). Conclusion: These data suggest that African American men have more rapid rates of change in the PSA levels over time. If the difference in the rate of changes between African American and white men is an early indicator of future prostate cancer diagnosis, earlier detection in African American men could help to alleviate the racial disparities in prostate cancer diagnosis and mortality. abstract_id: PUBMED:16925734 Serum prostate-specific antigen levels in older men with or at risk of HIV infection. Objectives: The aim of the study was to determine the rate of, and factors associated with, elevated prostate-specific antigen (PSA) levels in older men with or at risk of HIV infection. Methods: Using a cross-sectional analysis, we interviewed 534 men &gt; or =49 years old at risk for HIV infection on demographics, behaviours and medical history. Laboratory testing included serum PSA level and HIV serology, and T-cell subsets for those who were HIV seropositive. Elevated PSA level was defined as &gt;4.0 ng/mL, and men with elevated PSA levels were referred for urological evaluation. Results: Fifteen per cent of men were white, 55% black, and 23% Hispanic; median age was 53 years (range 49-80 years); 74% were sexually active; 65% currently smoked cigarettes; and 16% had taken androgens. Among 310 HIV-positive men, CD4 counts were &gt;500 cells/microL in 31%, 200-500 cells/microL in 51%, and &lt;200 cells/microL in 19%. Twenty men (4%) had elevated PSA. On univariate analysis, only older age was significantly associated with elevated PSA, and there was no significant difference in the number of men with elevated PSA between HIV-positive and HIV-negative men (nine of 310 vs 11 of 224; P = 0.28). On multivariate analysis, older age remained the only variable associated with elevated PSA level [reference group &lt; or =50 years; adjusted odds ratio (OR(adj)) 1.0 for age 51-60 years; OR(adj) 5.9 (95% confidence interval 1.2-30.1) for age &gt; or =61 years] adjusted for HIV status, family history of prostate cancer, and androgen use. Conclusions: Among older men, PSA levels increased with age but did not differ by HIV status. The clinical use of PSA levels in older men currently do not need to be modified for those with HIV infection. abstract_id: PUBMED:29264136 The likelihood of having serum level of PSA of ≥4.0 ng/mL and ≥10.0 ng/mL in non-obese and obese Nigerian men with LUTS. Objective: This study was undertaken to determine the likelihood of having serum total prostate specific antigen (PSA) levels ≥4.0 ng/mL and ≥10.0 ng/mL among a cohort of non-obese and obese Nigerian men with lower urinary tract symptoms (LUTS). Methods: This was a prospective cross-sectional survey among men who presented with benign prostatic hypertrophy to the urology clinic of the Ekiti State University Teaching Hospital, Ado -Ekiti with LUTS between January 1 and December 31, 2014. One hundred and forty men who presented in the urologic clinic with LUTS were recruited. PSA was analyzed using standard method while other clinical variables were collected using a clinical case form. Multivariate logistic regression was used to estimate the odds of an abnormal PSA of ≥4.0 ng/mL or ≥10.0 ng/mL in these men. Results: The mean ages of obese and non-obese men were 64.8 and 64.0 years respectively. The mean total serum PSA were 14.8 and 13.2 ng/mL for obese and non-obese men respectively. Univariate analysis showed no difference (p &gt; 0.05) in the proportion of obese and non-obese men with LUTS who had a PSA threshold of at least 4.0 ng/mL. Multivariate logistic regression showed that, at a PSA threshold of 10.0 ng/mL, obese men had a statistically significant proportion (p &lt; 0.05). Although not significant, non-obese patients were less likely to have PSA level of ≥4.0 ng/mL (OR 0.701; 95% CI 0.301-1.630) compared to obese men. In the same vein, non-obese men were less likely to have a PSA level of 10.0 ng/mL (OR, 0.686; 95% CI, 0.318-1.478) in a simultaneous context of age. Conclusion: Our study demonstrated that, in a sample population of predominantly native African men, there was a non-significantly higher likelihood of overweight/obese patients having a higher serum PSA level than the non-obese. A community based study is needed to further confirm this finding. abstract_id: PUBMED:8968017 Occult prostate cancer in men with low serum testosterone levels. Objective: To determine the prevalence of occult prostate cancer in men with low serum total testosterone or free testosterone levels. Design: Retrospective analysis of a consecutive series of men. Setting: Academic teaching hospital. Patients: Seventy-seven men with low total testosterone or free testosterone levels, with normal results of digital rectal examination and prostate-specific antigen (PSA) levels of 4.0 ng/mL or less. The mean age was 58 years. Interventions: Sextant prostate needle biopsies with ultrasound guidance. Main Outcome Measures: Results of prostate needle biopsies, transrectal ultrasound, prostate volume, PSA level, PSA density, total and free testosterone levels. Results: Prostate cancer was identified in 14% (11/77) of the entire group and in 10 men (29%) aged 60 years or older. The median age for men with cancer was 64 years. Histologic examination showed Gleason scores of 6 or 7 for all cancers. No significant differences were noted between the cancer and benign groups with regard to PSA level, PSA density, prostate volume, total testosterone level, or free testosterone level. Conclusions: A high prevalence of biopsy-detectable prostate cancer was identified in men with low total or free testosterone levels despite normal PSA levels and results of digital rectal examination. These data suggest that (1) digital rectal examination and PSA levels are insensitive indicators of prostate cancer in men with low total or free testosterone levels, and (2) PSA levels may be altered by naturally occurring reductions in serum androgen levels. abstract_id: PUBMED:18290413 Normal serum prostate specific antigen levels in men in Yasuj province, Islamic Republic of Iran. We assessed serum prostate specific antigen (PSA) levels in 650 men over 40 years referred to 3 Yasuj hospitals for blood cell count in 2003/2004. Men affected by prostate cancer, prostatitis or transurethral instrumentation were excluded. PSA was determined by an immunoassay technique. PSA levels in different age groups were: 40-49-year-olds--mean = 0.7 ng/dL, normal = 0-1.35 ng/dL; 50-59-year-olds--mean = 0.9 ng/dL, normal = 0-1.85 ng/dL; 60-69-year-olds--mean = 1.6 ng/dL, normal = 0-3.2 ng/dL; &gt; or = 70-years-olds--mean = 2.3 ng/dL, normal = 0-4.4 ng/dL. Normal PSA levels in our society were lower than those in the United States, Europe and Japan. abstract_id: PUBMED:18660852 Prostate cancer in elderly men. Due to increasing life expectancy and the introduction of prostate-specific antigen (PSA) screening, a rising number of elderly men are diagnosed with prostate cancer. Besides PSA serum levels and Gleason score, age is considered to be a key prognostic factor in terms of treatment decisions. In men older than 70 years, treatment without curative intent may deprive the frail patient of years of life. Modern radical prostatectomy techniques are associated with low perioperative morbidity, excellent clinical outcome, and documented long-term disease control. Thus, radical prostatectomy should be considered because local treatment of organ-confined prostate cancer potentially cures disease. The huge extent of PSA screening programs may lead to overdiagnosis of prostate cancer. Not every man who is diagnosed with prostate cancer will develop clinically significant disease. This has led to the concept of expectant management for screen-detected, small-volume, low-grade disease, with the intention of providing therapy for those men with disease progression. Answer: Yes, total prostate-specific antigen (tPSA) serum levels in cirrhotic men are different from those in normal men. A study found that cirrhotic patients have approximately 4 times lower serum tPSA levels than noncirrhotic men. Additionally, patients with more severe liver disease have lower tPSA and testosterone levels than patients less affected. The tPSA levels in cirrhotic men are affected by the total testosterone levels (PUBMED:19286245).
Instruction: Visual evoked potentials in migraine: Is the "neurophysiological hallmark" concept still valid? Abstracts: abstract_id: PUBMED:26093935 Visual evoked potentials in migraine: Is the "neurophysiological hallmark" concept still valid? Objective: Lack of habituation is considered a neurophysiological hallmark of migraine. However, the results of visual evoked potential (VEP) studies have been discrepant, possibly because of different stimulation parameters and lack of blinding. Hence, there is a need for independent confirmation of lack of VEP habituation in migraine. In this blinded study we applied 16' checks to supplement our previous findings with 8', 31', 62' and 65' checks. Methods: VEPs in 41 interictal migraineurs and 30 controls were compared. VEPs were recorded in six blocks of 100 single responses. Linear N70-P100 amplitude change over blocks (habituation slope) was compared with an independent samples Student's t-test. Results: Amplitude decline over blocks was observed in both groups. Habituation slope was not significantly different between controls (-0.43 ± 0.54 μV/block) and migraineurs (-0.29 ± 0.35 μV/block) (p=0.33). Conclusion: VEP habituation with 16' checks did not differ in migraineurs and controls. This is in agreement with previous findings with other stimulation parameters. It is therefore unlikely that use of different stimulation parameters could explain the discrepant results of previous studies. No studies that applied blinding during recording of VEP have found lack of habituation in migraineurs. Significance: Lack of VEP habituation cannot be considered a reliable neurophysiological hallmark in migraine. abstract_id: PUBMED:34037365 Neurophysiological methods in the assessment of different forms of migraine The review considers the efficacy of neurophysiological methods for the study of migraine. According to many authors, such neurophysiological methods as analysis of visual and somatosensory evoked potentials, trigeminal evoked potentials are informative for assessing the functional state of trigeminocervical and sensory systems. Analysis of bioelectric activity of the brain is used for differential diagnosis of migraine and epilepsy, evaluation of various forms and types of migraine. Studies with recording and analysis of laser evoked potentials, as well as the effects of transcranial magnetic stimulation, both diagnostic and non-pharmacological rehabilitation effects on pain syndrome, which increases the efficiency and quality of life in migraine, are considered. abstract_id: PUBMED:23298223 Visual evoked potentials in interictal migraine: no confirmation of abnormal habituation. Objective: We intended to study the effect of check size on visual evoked potential habituation in interictal migraine, using the faster 3 per second reversal rate and an improved analytic procedure with block-number blinding. Background: Habituation in migraineurs has been extensively studied with visual evoked potentials. Despite discrepant results, possibly related to the use of different stimulus conditions, lack of habituation in the period between attacks is presently considered to be a neurophysiological hallmark of migraine. Methods: Midoccipital monocular visual evoked potentials were recorded and analyzed in 27 interictal migraineurs and 34 healthy controls using a blinded study design. Small 8' checks and large 65' checks were applied in random order, both with 3 reversals per second. Six consecutive blocks of 100 responses were recorded for each check size. N70-P100 and P100-N145 peak-to-peak amplitudes were measured. Regression slopes across the 6 blocks, supplemented by last block/first block ratio and repeated measures analysis of variance with amplitude as the dependent variable, were used to test for habituation. Results: N70-P100 habituation to small and large checks was observed in controls (mean slope -0.30 and -0.11 μV/block) and interictal migraineurs (-0.32 and -0.26 μV/block). P100-N145 habituation to small checks in controls (mean slope -0.39 μV/block) and to small and large checks in interictal migraineurs (-0.38 and -0.17 μV/block) was also observed. None of the habituation measures were significantly different between healthy controls and migraineurs (F &lt; 1.6, P &gt; .18). The check-size effect was similar in the 2 groups (F &lt; 2.3, P &gt; .14). Conclusion: Reversal rate and check-size differences do not seem to explain the discrepant visual evoked potential habituation results in the migraine literature. Furthermore, no differences in first block amplitudes or N70, P100, and N145 latencies between healthy controls and migraineurs were found. We recommend blinded evaluation designs in future habituation studies in migraine. abstract_id: PUBMED:16419567 Visual evoked potentials in children and school adolescents with migraine and tension-type headache. Clinical and neurophysiological correlations Unlabelled: To differentiate clinically migraine, especially without aura, from tension-type headache (TTH), especially an episodic one, in patients in the developmental age may become a difficult task. The reason is the similarities of clinical symptoms in both types of headaches. The results of examinations published so far do not explain satisfactorily the pathomechanism and clinical symptoms of primary headaches. Therefore, it makes scientists apply more and more recent research methods to explain their pathophysiology. The aim of the study in children and adolescents with migraine and TTH was to evaluate the diagnostic values of visual evoked potentials (VEP). Clinical and neurophysiological correlations in primary headaches were also noted. Material And Methods: 90 patients with migraine and 35 with TTH aged 8-18 participated in the study. The control group for neurophysiological examinations consisted of 66 healthy children and adolescents of the same age. VEPs were performed in the pain-free interval using 4-channel Multiliner (Toennies, Germany). The parameters for analysis were: latencies P100, N75, N145, amplitudes N75/P100, P100/N145 and interocular and interhemispheric differences. Results: Substantial prolongation of P100 and N145 latencies and reduction of N75/P100, P100/N145 amplitudes were recorded in one hemisphere only in patients with migraine. Correlations between VEP and age, duration of the disease and the frequency of migraine attacks were found. Presence and type of existing aura did not change values of recorded parameters in VEP CONCLUSIONS: The changes in VEP parameters were observed only in patients with migraine. The statistically significant interhemispheric differences without interocular changes point to their unilateral localization of the neuronal changes in visual pathway. The positive clinical and neuropsychological correlations obtained in migraine may be related to the vascular theory of migraine. abstract_id: PUBMED:11576204 Spectral analysis of visual potentials evoked by pattern-reversal checkerboard in juvenile patients with headache. Changes in visual evoked potentials, mainly affecting the amplitude of the major positive wave, are referred to by many authors and are related to the pathophysiological basis of primary headache. We performed both transient pattern-reversal visual evoked potentials and spectral analysis by means of fast Fourier transform of 8-Hz steady-state pattern-reversal visual evoked potentials in 34 children affected with migraine (14 with aura, 20 without aura), and compared them with 14 patients with tension-type headache and 10 healthy subjects. The amplitude of the response to the transient stimulation (P100) was higher and the latency shorter in the patients with headache compared with the controls, but the difference was not statistically significant. The absolute power of the first harmonic (1F) obtained by the spectral analysis of the steady-state stimulation was increased in all the patients with headache compared with the controls, and the increase was significant in patients with migraine. These data seem to confirm the hypothesis of abnormal processing of visual input in migraineurs and could be interpreted as neurophysiological support for the theory that different headache types are related conditions. Furthermore, the spectral analysis of steady-state pattern-reversal visual evoked potentials could be proposed as a test to diagnose migraine. abstract_id: PUBMED:14628587 Clinico-neurophysiological features of migraine with and without aura Clinical phenomenon of migraine aura and comparison of main types of migraine were studied, using a method of visual evoked potentials (VEP) in 46 patients with migraine with aura, in 30--without aura and in 10 healthy controls. Common features as well as differences were revealed. Patients with migraine with aura were distinguished by significantly higher mean amplitudes of P300 component and by more frequents unstable VEP registration as compared to those without aura. Qualitative changes of amplitude and configuration and reduction of latent P300 period concomitant with headache side had the same frequency in both groups. In intra-group clinico-neurophysiological analysis, in regard to visual aura phenomena, the attacks with simple phenomena were the most severe in frequency, combinations with other aura symptoms, clinical and neurophysiological signs of visual system hyperexcitability. abstract_id: PUBMED:28856911 Sunlight irradiance and habituation of visual evoked potentials in migraine: The environment makes its mark. Background Migraine is a complex multifactorial disease that arises from the interaction between a genetic predisposition and an enabling environment. Habituation is considered as a fundamental adaptive behaviour of the nervous system that is often impaired in migraine populations. Given that migraineurs are hypersensitive to light, and that light deprivation is able to induce functional changes in the visual cortex recognizable through visual evoked potentials habituation testing, we hypothesized that regional sunlight irradiance levels could influence the results of visual evoked potentials habituation studies performed in different locations worldwide. Methods We searched the literature for visual evoked potentials habituation studies comparing healthy volunteers and episodic migraine patients and correlated their results with levels of local solar radiation. Results After reviewing the literature, 26 studies involving 1291 participants matched our inclusion criteria. Deficient visual evoked potentials habituation in episodic migraine patients was reported in 19 studies. Mean yearly sunlight irradiance was significantly higher in locations of studies reporting deficient habituation. Correlation analyses suggested that visual evoked potentials habituation decreases with increasing sunlight irradiance in migraine without aura patients. Conclusion Results from this hypothesis generating analysis suggest that variations in sunlight irradiance may induce adaptive modifications in visual processing systems that could be reflected in visual evoked potentials habituation, and thus partially account for the difference in results between studies performed in geographically distant centers. Other causal factors such as genetic differences could also play a role, and therefore well-designed prospective trials are warranted. abstract_id: PUBMED:35484504 Central effects of galcanezumab in migraine: a pilot study on Steady State Visual Evoked Potentials and occipital hemodynamic response in migraine patients. Background: The discovery of the prominent action of Calcitonin Gene Related Peptide -CGRP- on trigeminal afferents and meningeal vessels, opened a new era in migraine treatment. However, how the block of nociceptive afferents could act on central mechanisms of migraine is still not clear. In this pilot study we aimed to test the effect of 3 months Galcanezumab (CGA) therapy on occipital visual reactivity in migraine patients, using the Steady State Visual Evoked Potentials-SSVEPs and Functional Near Infrared Spectroscopy -fNIRS. Method: Thirteen migraine patients underwent clinical and neurophysiological examination in basal condition (T0), 1 h after GCA injection (T1) and after 3 months of GCA treatment (T2). Ten healthy volunteers were also evaluated. Results: At T2, there was a reduction of headache frequency and disability. At T2, the EEG power significantly diminished as compared to T0 and T1 at occipital sites, and the topographical analysis confirmed a restoration of SSVEPs within normal values. The Oxyhemoglobin levels in occipital cortex, which were basically increased during visual stimulation in migraine patients, reverted to normal values at T2. Conclusions: The present pilot study indicates that Galcanezumab could act on cortical targets located beyond the pain network, restoring the abnormal occipital reactivity. This effect could indicate the possible disease modifying properties of CGRP related monoclonal antibodies. abstract_id: PUBMED:10335127 The use of visual evoked potentials in clinical diagnostics In the presented work the use of visual evoked potentials in the diagnostics of multiple sclerosis, optic neuritis, vitamin deficiency, neurotoxic substance intoxication, brain tumours, Parkinson's and Alzheimer's disease and migraine was discussed. abstract_id: PUBMED:29928222 Cerebral Functional Magnetic Resonance Imaging and Multifocal Visual Evoked Potentials in a Patient with Unexplained Impairment of Visual Function: A Case Report. We present a case of a young female with a slowly progressing visual impairment who was examined with multifocal visual evoked potentials and functional magnetic resonance imaging (fMRI) for underlying neuronal abnormality. The fMRI examination consisted of presenting black-and-white checkerboard stimuli, and her activation patterns were compared to the patterns from 4 normal-sighted subjects. The results showed clear differences in neuronal activation between the patient and the controls in the occipital and parietal lobes. Although we have shown neuronal correlates in a case of unexplained visual loss, it is still an open question as to whether this has an organic or functional cause, which should be the subject for future research. Answer: The concept of lack of habituation in visual evoked potentials (VEPs) as a neurophysiological hallmark of migraine has been challenged by recent studies. A blinded study using 16' checks found no significant difference in VEP habituation slopes between interictal migraineurs and controls, suggesting that lack of VEP habituation cannot be considered a reliable neurophysiological hallmark in migraine (PUBMED:26093935). This finding is consistent with previous studies that used different stimulation parameters and also failed to find a lack of habituation in migraineurs when blinding was applied during the recording of VEPs (PUBMED:23298223). Moreover, a review of neurophysiological methods in the assessment of different forms of migraine indicates that while various neurophysiological methods such as analysis of visual and somatosensory evoked potentials are informative for assessing the functional state of trigeminocervical and sensory systems, the efficacy of these methods for the study of migraine is still under consideration (PUBMED:34037365). In contrast, some studies have found changes in VEP parameters specifically in patients with migraine, suggesting unilateral localization of neuronal changes in the visual pathway (PUBMED:16419567). Spectral analysis of visual potentials evoked by pattern-reversal checkerboard in juvenile patients with headache has also shown increased absolute power in migraineurs, supporting the hypothesis of abnormal processing of visual input in this group (PUBMED:11576204). Environmental factors such as sunlight irradiance have been hypothesized to influence VEP habituation results, with higher sunlight irradiance associated with deficient habituation in episodic migraine patients (PUBMED:28856911). Additionally, a pilot study on the central effects of galcanezumab, a CGRP monoclonal antibody, indicated that treatment could restore abnormal occipital reactivity in migraine patients (PUBMED:35484504). In summary, while the concept of lack of habituation in VEPs as a neurophysiological hallmark of migraine is not consistently supported by current evidence, neurophysiological methods continue to provide valuable insights into the functional state of the nervous system in migraine patients. However, the variability in study results suggests that the relationship between VEP habituation and migraine may be more complex than previously thought, and further research is warranted to clarify this relationship.
Instruction: Can circulating M30 and M65 levels be beneficial markers in the diagnosis and management of patients with complete hydatidiform mole? Abstracts: abstract_id: PUBMED:25869758 Can circulating M30 and M65 levels be beneficial markers in the diagnosis and management of patients with complete hydatidiform mole? Objective: The objective of this study is to evaluate the prognostic value of M30 and M65 levels as markers of apoptotic activity and maternal serum oxidative stress in patients with complete hydatidiform mole (CHM). Methods: In total, 68 pregnant women were included in the study. The study group included 34 pregnant with CHM, while 34 healthy pregnant were employed as a control group. Venous blood samples were drawn to assess the maternal serum oxidative stress and M30-M65 levels. In addition, a second blood sample was drawn from patients with CHM on day 8 after dilatation evacuation. Results: Maternal serum oxidative stress and M30-M65 levels were found to be significantly higher in patients with CHM as compared with the control group. It was found that serum β-subunit of human chorionic gonadotropin (β-hCG) level had a significant positive correlation with M30-M65 levels in patients with CHM. In addition, serum M65 level was found to be as effective as β-hCG in the identification of the patients with CHM. Conclusion: Our results indicated that oxidative stress and apoptosis may play significant roles in CHM development. In addition, it seems that serum M30-M65 levels can presumably be an ancillary laboratory test to β-hCG in the diagnosis and follow-up of the patients with CHM. abstract_id: PUBMED:26981554 Circulating Cell Free DNA in the Diagnosis of Trophoblastic Tumors. Gestational trophoblastic neoplasia (GTN) represents a group of diseases characterized by production of human chorionic gonadotropin (hCG). Since non-gestational tumors may occasionally secrete hCG, histopathological diagnosis is important for appropriate clinical management. However, a histopathological diagnosis is not always available. We therefore investigated the feasibility of extracting cell free DNA (cfDNA) from the plasma of women with GTN for use as a "liquid biopsy" in patients without histopathological diagnosis. cfDNA was prepared from the plasma of 20 women with a diagnosis of GTN and five with hCG-secreting tumors of unknown origin. Genotyping of cfDNA from the patient, genomic DNA from her and her partner and DNA from the tumor tissue identified circulating tumor DNA (ctDNA) (from 9% to 53% of total cfDNA) in 12 of 20 patients with GTN. In one case without a tissue diagnosis, ctDNA enabled a diagnosis of GTN originating in a non-molar conception and in another a diagnosis of non-gestational tumor, based on the high degree of allelic instability and loss of heterozygosity in the ctDNA. In summary ctDNA can be detected in the plasma of women with GTN and can facilitate the diagnosis of both gestational and non-gestational trophoblastic tumors in cases without histopathological diagnosis. abstract_id: PUBMED:6823357 Circulating immune complex levels in patients with molar pregnancy. Circulating immune complex levels were measured in patients with molar pregnancy to investigate the relationship between circulating immune complex and trophoblastic tumor burden. When 27 (87%) of 31 patients with molar pregnancy were first seen, circulating immune complex values were in the normal range. Three of the 4 patients with elevated levels had concurrent medical illness. Eighteen patients were followed with serial measurements until gonadotropin remission was achieved and all 18 patients developed increased levels as they entered remission (P less than .001). Circulating immune complex values remained elevated during gonadotropin remission from 6 to 16 weeks and then declined to initial levels. Further investigation should be undertaken to evaluate possible interactions between circulating immune complex and host immune defenses. abstract_id: PUBMED:33306861 Hydatidiform mole diagnostics using circulating gestational trophoblasts isolated from maternal blood. Background: In gestational trophoblastic disease, the prognosis is related to the genetic constitution. In some cases, taking a biopsy is contraindicated. Methods: In a pregnant woman, ultrasound scanning suggested hydatidiform mole. To explore if the genetic constitution can be established without taking a biopsy (or terminating the pregnancy), cell-free DNA and circulating gestational trophoblasts were isolated from maternal blood before evacuation of the uterus. The evacuated tissue showed the morphology of a complete hydatidiform mole. Without prior whole-genome amplification, short tandem repeat analysis of 24 DNA markers was performed on the samples, and on DNA isolated from evacuated tissue, and from the blood of the patient and her partner. Results: Identical genetic results were obtained in each of three circulating gestational trophoblasts and the evacuated tissue, showing that this conceptus had a diploid androgenetic nuclear genome. In contrast, analysis of cell-free DNA was less informative and less specific due to the inherent presence of cell-free DNA from the patient. Conclusion: Our results show that it is possible to isolate and analyze circulating gestational trophoblasts originating in a pregnancy without maternal nuclear genome. For diagnosing gestational trophoblastic diseases, genotyping circulating gestational trophoblasts appears to be superior to analysis of cell-free DNA. abstract_id: PUBMED:11260306 Apoptotic activity in gestational trophoblastic disease correlates with clinical outcome: assessment by the caspase-related M30 CytoDeath antibody. Aims: The objective of this study was to assess apoptotic activity in gestational trophoblastic disease (GTD) and its prognostic value in hydatidiform mole (HM). Methods And Results: Expression of the specific caspase cleavage site within cytokeratin 18 was assessed immunohistochemically using the monoclonal antibody M30 CytoDeath in 12 spontaneous abortions, 22 partial and 57 complete HM, eight choriocarcinoma (CCA) and 28 normal placentas. The M30 immunoreactivity occurred predominantly in the syncytiotrophoblasts. A significantly higher M30 index in HM and CCA was found when compared with normal placentas and spontaneous abortions (P &lt; 0.001). The M30 index of those HM which spontaneously regressed was significantly higher than those HM which developed persistent disease requiring chemotherapy (P &lt; 0.001). The M30 index correlated with another apoptotic index previously detected by TdT-mediated dUTP nick-end labelling (TUNEL) (P = 0.007) and the proliferation index assessed by the Ki67 antigen (P = 0.034). Conclusions: We conclude that apoptosis is important in the pathogenesis of GTD. Assessment of apoptotic activity in HM by the M30 index may be considered as an alternative prognostic indicator for predicting the clinical behaviour. abstract_id: PUBMED:6285061 Circulating immune complex levels in patients with gestational trophoblastic neoplasia. The clinical course, human chorionic gonadotropin (HCG) levels, and serial circulating immune complex (CIC) levels in 21 patients with gestational trophoblastic neoplasia (GTN) were correlated for the evaluation of the relationship between CIC levels and trophoblastic tumor burden. CIC levels were normal in 18 of 21 patients at the time of presentation, and 2 of 3 patients who presented with elevated CIC levels had significant comorbid disease (toxemia and hepatitis). Nine patients were followed into gonadotropin remission, and all 9 developed an increase in CIC levels at the time of remission. It was concluded that CIC, at least as measured by two antigen-nonspecific techniques, is generally not elevated at initial presentation in the patient with GTN; this lack of an elevation is probably due to marked tumor antigen excess. Thus the in vivo importance of CIC as a "blocker" of host antitumor response at this stage is doubtful. After effective treatment as HCG levels return to normal, the demonstrated elevation in serial levels of CIC may reflect a return of adequate host immune response at a time of minimal tumor burden. abstract_id: PUBMED:10221724 The contribution of maternal serum markers in the early prenatal diagnosis of molar pregnancies. The aim of this study was to evaluate the usefulness of maternal serum markers in the early prenatal diagnosis of molar pregnancies. The ultrasound features, cytogenetic and histopathological findings of 10 cases of molar pregnancy diagnosed at 11-13 weeks of gestation were compared retrospectively with the maternal serum concentrations of human chorionic gonadotrophin (HCG), alpha fetoprotein (AFP), pregnancy-associated plasma protein A (PAPP-A) and pregnancy-specific beta1-glycoprotein (SP1). Free beta-HCG and intact HCG concentrations were very high [&gt; or = 2.5 multiples of the median (MoM)] in all cases. AFP concentrations were extremely low in all cases of singleton complete moles (&lt; or = 0.5 MoM) and were high in one case of twin complete mole, in one case of triploid partial mole and two cases of euploid partial mole (&gt; or = 2.5 MoM). Serum PAPP-A and SP1 were high in complete moles. The combined use of ultrasound features, maternal serum proteins and fetal cytogenetic findings should enable the early differential diagnosis in utero and perinatal management of those molar pregnancies presenting with an anatomically normal fetus. abstract_id: PUBMED:1713070 Placental proteins as tumor markers. Among the three placental proteins discussed, HCG is the only clinically useful tumor marker, and the value of HCG measurements is restricted to patients with gestational and nongestational trophoblastic disease. In patients with gestational trophoblastic disease, HCG levels may serve as an adjunct for the diagnosis, provide prognostic information, and be an objective parameter to evaluate the effects of therapy. Little or no additional information is obtained from HPL or SP-1 measurements. In patients with germ cell neoplasms of the testis, HCG measurements add useful information for clinical staging and monitoring of therapy, although discordance between tumor growth and HCG levels can be found in patients whose tumors contain several different elements. Therefore, AFP measurements must be made as well in these patients to monitor disease activity. Neither HPL nor SP-1 measurements are useful in these patients. None of the placental proteins are useful for screening, as prognostic indicators, or for evaluating the effects of therapy in groups of patients with nontrophoblastic neoplasms. In some patients with nontrophoblastic malignancies, each of the markers may accurately reflect changes in tumor burden during therapy. However, the problems with specificity and sensitivity of the tests and the fact that the majority of patients whose tumors produce the hormone have circulating concentrations that are at the limits of detection of the assays decrease the utility of these measurements and render them cost-ineffective for routine patient care. abstract_id: PUBMED:32509606 A review on management of gestational trophoblastic neoplasia. Background: The rare presence of malignant cancerous cells afar any type of pregnancy is known as gestational trophoblastic neoplasia (GTN). GTN are benign lesions which mostly happen due to the activity of extravillous trophoblast cells and the placental villous tree development. These kinds of diseases would be occurring mainly due to the following clinicopathologic conditions: (I) existence of epithelioid trophoblastic tumor (ETT), (II) rare type of choriocarcinoma cancer, (III) gestational trophoblastic tumor of mole, and (IV) the rare malignant tumor of placental site trophoblastic tumor. Objective: This comprehensive study is trying to review the most recent approaches in comprehension of pathogenesis, more precise diagnosis, and also the most effective therapeutic procedures for patients who suffer from GTN disorders. Materials And Method: A comprehensive research was carried out on scientific databases of Science Citation Index (SCI), MEDLINE, EMBASE, HMIC, PubMed, CINAHL, Google Scholar, Cochrane Database of Systematic Reviews (CDSR), and PsycINFO over the time period of 2005 to 2019. The keywords which applied for discovering more related records were including: Gestational trophoblastic diseases (GTD), Gestational trophoblastic neoplasia (GTN), molar pregnancy, choriocarcinoma, human chorionic gonadotropin (hCG), diagnosis, management and treatment. Conclusion: In spite of the fact that GTN patients are treated with conventional surgical therapies or/and chemotherapy, in some patients with resistant disease, these therapies may not be effective and patients may die. Some novel remedial agents are required for decreasing the level of toxicity caused through administering conventional chemotherapy and also treating the patients who suffer from refractory or resistant disease. The newest issues are related to GTN diagnosis, process of progression of hydatidiform mole (HM) to GTN, and the issue of GTN drug resistance. In this regard, we should have a comprehensive knowledge on GTN genetics for answering all the available questions about this disorder. abstract_id: PUBMED:7947525 Circulating levels of placental protein 14 in ectopic pregnancy. Objective: To determine circulating levels of placental protein 14 (PP14) in complications of early pregnancy. Design: Biochemical monitoring of women presenting with vaginal bleeding and/or abdominal pain. Setting: An emergency gynaecological ultrasound clinic in a London teaching hospital. Subjects: Venous blood samples were obtained from 67 women with normal pregnancy (n = 9), ectopic pregnancy (n = 26) and failed intrauterine pregnancy (n = 32). This group included anembryonic pregnancy (n = 18), missed miscarriage (n = 2), spontaneous miscarriage of a previously demonstrated live fetus (n = 6), incomplete miscarriage (n = 4), complete miscarriage (n = 1) and molar pregnancy (n = 1). Main Outcome Measures: Serum PP14 levels in the group of women with a failed intrauterine pregnancy in relation to the normal range for PP14. Results: Eighty-one percent of women who miscarried spontaneously had normal serum PP14 levels; 81% of women with ectopic pregnancy had depressed (&lt; 5th centile) serum PP14 levels. Conclusion: Measurements of PP14 may be useful in distinguishing spontaneous miscarriage from ectopic pregnancy, but not in the management of threatened miscarriage. Answer: Yes, circulating M30 and M65 levels can be beneficial markers in the diagnosis and management of patients with complete hydatidiform mole (CHM). A study found that maternal serum oxidative stress and M30-M65 levels were significantly higher in patients with CHM compared to a control group of healthy pregnant women. There was also a significant positive correlation between serum β-subunit of human chorionic gonadotropin (β-hCG) level and M30-M65 levels in patients with CHM. Furthermore, serum M65 level was found to be as effective as β-hCG in identifying patients with CHM. These results suggest that oxidative stress and apoptosis may play significant roles in CHM development, and serum M30-M65 levels could potentially serve as an ancillary laboratory test to β-hCG in the diagnosis and follow-up of patients with CHM (PUBMED:25869758).
Instruction: Diffusion tensor magnetic resonance imaging finding of discrepant fractional anisotropy between the frontal and parietal lobes after whole-brain irradiation in childhood medulloblastoma survivors: reflection of regional white matter radiosensitivity? Abstracts: abstract_id: PUBMED:17544593 Diffusion tensor magnetic resonance imaging finding of discrepant fractional anisotropy between the frontal and parietal lobes after whole-brain irradiation in childhood medulloblastoma survivors: reflection of regional white matter radiosensitivity? Purpose: To test the hypothesis that fractional anisotropy (FA) is more severely reduced in white matter of the frontal lobe compared with the parietal lobe after receiving the same whole-brain irradiation dose in a cohort of childhood medulloblastoma survivors. Methods And Materials: Twenty-two medulloblastoma survivors (15 male, mean [+/- SD] age = 12.1 +/- 4.6 years) and the same number of control subjects (15 male, aged 12.0 +/- 4.2 years) were recruited for diffusion tensor magnetic resonance imaging scans. Using an automated tissue classification method and the Talairach Daemon atlas, FA values of frontal and parietal lobes receiving the same radiation dose, and the ratio between them were quantified and denoted as FFA, PFA, and FA(f/p), respectively. The Mann-Whitney U test was used to test for significant differences of FFA, PFA, and FA(f/p) between medulloblastoma survivors and control subjects. Results: Frontal lobe and parietal lobe white matter FA were found to be significantly less in medulloblastoma survivors compared with control subjects (frontal p = 0.001, parietal p = 0.026). Moreover, these differences were found to be discrepant, with the frontal lobe having a significantly larger difference in FA compared with the parietal lobe. The FA(f/p) of control and medulloblastoma survivors was 1.110 and 1.082, respectively (p = 0.029). Conclusion: Discrepant FA changes after the same irradiation dose suggest radiosensitivity of the frontal lobe white matter compared with the parietal lobe. Special efforts to address the potentially vulnerable frontal lobe after treatment with whole-brain radiation may be needed so as to balance disease control and treatment-related morbidity. abstract_id: PUBMED:16448821 Mapping radiation dose distribution on the fractional anisotropy map: applications in the assessment of treatment-induced white matter injury. We describe a method to map whole brain radiation dose distribution on to diffusion tensor MR (DT-MR) fractional anisotropy (FA) images and illustrate its applications for studying dose-effect relationships and regional susceptibility in two childhood medulloblastoma survivors. To determine the FA changes voxel-by-voxel in white matter, the post-treatment follow-up FA maps were coregistered to baseline pre-treatment FA maps and automatic segmentation for white matter was carried out. DeltaFA maps representing relative FA change in white matter were hence generated for visual inspection and quantitative analysis. The radiation dose distribution, calculated from radiotherapy plan and exported as images, was coregistered to baseline FA images. DT-MR imaging and processing noise was small with root mean square value of 1.49% for mean DeltaFA. We evaluated the mean DeltaFA changes of regions-of-interest according to radiation dose regions to provide an estimate of the dose-response and found increasing reduction in mean DeltaFA with increasing radiation dose up to 45 Gy after which there was a reversal in the mean FA trend and mean FA approached baseline value. We also found more severe mean FA reduction in the frontal lobes compared to the parietal lobes despite the same radiation dose, suggesting regional susceptibility in the frontal lobe, and mean FA increase in the brainstem after radiation in both patients. We conclude that the method described may be useful in estimating dose-effect relationships and studying regional susceptibility of the brain to radiation in medulloblastoma survivors. abstract_id: PUBMED:19540067 Differences in supratentorial damage of white matter in pediatric survivors of posterior fossa tumors with and without adjuvant treatment as detected by magnetic resonance diffusion tensor imaging. Purpose: To elucidate morphologic correlates of brain dysfunction in pediatric survivors of posterior fossa tumors by using magnetic resonance diffusion tensor imaging (DTI) to examine neuroaxonal integrity in white matter. Patients And Methods: Seventeen medulloblastoma (MB) patients who had received surgery and adjuvant treatment, 13 pilocytic astrocytoma (PA) patients who had been treated only with surgery, and age-matched healthy control subjects underwent magnetic resonance imaging on a 3-Tesla system. High-resolution conventional T1- and T2-weighted magnetic resonance imaging and DTI data sets were obtained. Fractional anisotropy (FA) maps were analyzed using tract-based spatial statistics, a part of the Functional MRI of the Brain Software Library. Results: Compared with control subjects, FA values of MB patients were significantly decreased in the cerebellar midline structures, in the frontal lobes, and in the callosal body. Fractional anisotropy values of the PA patients were not only decreased in cerebellar hemispheric structures as expected, but also in supratentorial parts of the brain, with a distribution similar to that in MB patients. However, the amount of significantly decreased FA was greater in MB than in PA patients, underscoring the aggravating neurotoxic effect of the adjuvant treatment. Conclusions: Neurotoxic mechanisms that are present in PA patients (e.g., internal hydrocephalus and damaged cerebellar structures affecting neuronal circuits) contribute significantly to the alteration of supratentorial white matter in pediatric posterior fossa tumor patients. abstract_id: PUBMED:27015518 Vulnerability of white matter to insult during childhood: evidence from patients treated for medulloblastoma. OBJECTIVE Craniospinal irradiation damages the white matter in children treated for medulloblastoma, but the treatment-intensity effects are unclear. In a cross-sectional retrospective study, the effects of treatment with the least intensive radiation protocol versus protocols that delivered more radiation to the brain, in addition to the effects of continuous radiation dose, on white matter architecture were evaluated. METHODS Diffusion tensor imaging was used to assess fractional anisotropy, mean diffusivity, radial diffusivity, and axial diffusivity. First, regional white matter analyses and tract-based spatial statistics were conducted in 34 medulloblastoma patients and 38 healthy controls. Patients were stratified according to those treated with 1) the least intensive radiation protocol, specifically reduced-dose craniospinal irradiation plus a boost to the tumor bed only (n = 17), or 2) any other dose and boost combination that delivered more radiation to the brain, which was also termed the "all-other-treatments" group (n = 17), and comprised patients treated with standard-dose craniospinal irradiation plus a posterior fossa boost, standard-dose craniospinal irradiation plus a tumor bed boost, or reduced-dose craniospinal irradiation plus a posterior fossa boost. Second, voxel-wise dose-distribution analyses were conducted on a separate cohort of medulloblastoma patients (n = 15). RESULTS The all-other-treatments group, but not the reduced-dose craniospinal irradiation plus tumor bed group, had lower fractional anisotropy and higher radial diffusivity than controls in all brain regions (all p &lt; 0.05). The reduced-dose craniospinal irradiation plus tumor bed boost group had higher fractional anisotropy (p = 0.05) and lower radial diffusivity (p = 0.04) in the temporal region, and higher fractional anisotropy in the frontal region (p = 0.04), than the all-other-treatments group. Linear mixed-effects modeling revealed that the dose and age at diagnosis together 1) better predicted fractional anisotropy in the temporal region than models with either alone (p &lt; 0.005), but 2) did not better predict fractional anisotropy in comparison with dose alone in the occipital region (p &gt; 0.05). CONCLUSIONS Together, the results show that white matter damage has a clear association with increasing radiation dose, and that treatment with reduced-dose craniospinal irradiation plus tumor bed boost appears to preserve white matter in some brain regions. abstract_id: PUBMED:25850573 Cerebral white matter fractional anisotropy and tract volume as measured by MR imaging are associated with impaired cognitive and motor function in pediatric posterior fossa tumor survivors. Background: Disease and therapy cause brain damage and subsequent functional loss in pediatric patients with posterior fossa tumors. Treatment-related toxicity factors are resection in patients with pilocytic astrocytoma (PA) and, additionally, cranio-spinal irradiation together with chemotherapy in patients with medulloblastoma (MB). We tested whether damage to white matter (WM) as revealed by diffusion tensor MR imaging (DTI) correlated with specific cognitive and motor impairments in survivors of pediatric posterior fossa tumors. Procedures: Eighteen MB (mean age ± SD, 15.2 ± 4.9 y) and 14 PA (12.6 ± 5.0 y) survivors were investigated with DTI on a 3-Tesla-MR system. We identified fractional anisotropy (FA) of WM, the volume ratio of WM to gray matter and cerebrospinal fluid (WM/GM + CSF), and volume of specific frontocerebellar tracts. Ataxia was assessed using the International Cooperative Ataxia Rating Scale (ICARS), while the Wechsler Intelligence Scale for Children determined full-scale intelligence quotients (FSIQ). Amsterdam Neuropsychological Tasks (ANT) was used to assess processing speed. Handwriting automation was analyzed using a digitizing graphic tablet. Results: The WM/GM + CSF ratio correlated significantly with cognitive measures (IQ, P = 0.002; ANT baseline speed, P = 0.04; ANT shifting attention, P = 0.004). FA of skeletonized tracts correlated significantly with FSIQ (P = 0.008), ANT baseline speed (P = 0.028) and ANT shifting attention (P = 0.045). Moreover, frontocerebellar tract volumes correlated with both the FSIQ (P = 0.011) and ICARS (P = 0.007). Conclusion: DTI provides a method for quantification of WM damage by tumor and by therapy-associated effects in survivors of pediatric posterior fossa tumors. DTI-derived WM integrity may be a representative marker for cognitive and motor deterioration. abstract_id: PUBMED:28541578 Disrupted development and integrity of frontal white matter in patients treated for pediatric medulloblastoma. Background: Treatment of pediatric medulloblastoma is associated with known neurocognitive deficits that we hypothesize are caused by microstructural damage to frontal white matter (WM). Methods: Longitudinal MRI examinations were collected from baseline (after surgery but before therapy) to 36 months in 146 patients and at 3 time points in 72 controls. Regional analyses of frontal WM volume and diffusion tensor imaging metrics were performed and verified with tract-based spatial statistics. Age-adjusted, linear mixed-effects models were used to compare patient and control images and to associate imaging changes with Woodcock-Johnson Tests of Cognitive Abilities. Results: At baseline, WM volumes in patients were similar to those in controls; fractional anisotropy (FA) was lower bilaterally (P &lt; 0.001) and was associated with decreased Processing Speed (P = 0.014) and Broad Attention (P = 0.025) performance at 36 months. During follow-up, WM volumes increased in controls but decreased in patients (P &lt; 0.001) bilaterally. Smaller WM volumes in patients at 36 months were associated with concurrent decreased Working Memory (P = 0.026) performance. Conclusions: Lower FA in patients with pediatric medulloblastoma compared with age-similar controls indicated that patients suffer substantial acute microstructural damage to supratentorial frontal WM following surgery but before radiation therapy or chemotherapy. Additionally, this damage to the frontal WM was associated with decreased cognitive performance in executive function 36 months later. This early damage also likely contributed to posttherapeutic failure of age-appropriate WM development and to the known association between decreased WM volumes and decreased cognitive performance. abstract_id: PUBMED:12695214 Diffusion-tensor imaging for the detection and quantification of treatment-induced white matter injury in children with medulloblastoma: a pilot study. Background And Purpose: Treatment-induced white matter (WM) injury in medulloblastoma survivors, as manifested by deterioration of cognitive function, is prevalent. However, no reliable imaging method exists for early detection and quantification. Our goal was to determine whether anisotropy of WM is reduced in medulloblastoma survivors and whether fractional anisotropy (FA) can be used as an index for evaluation of treatment-induced WM injury. Methods: We evaluated nine medulloblastoma survivors treated with surgery, cranial irradiation, and chemotherapy by use of diffusion-tensor (DT) imaging and compared FA findings in selected WM sites (cerebellar hemispheres, pons, medulla oblongata, frontal periventricular WM, parietal periventricular WM, and corona radiata) with those of healthy age-matched control subjects. FA maps were compared with conventional T2-weighted images. FA was also compared with age at treatment, time interval since treatment, and deterioration of school performance. The two-tailed paired t test was used to determine statistical significance (P &lt;.05). Results: Significant reduction of FA (P &lt;.05) was seen in all anatomic sites in the patient group compared with FA in control subjects, except in the frontal periventricular WM, even in areas with normal appearance on T2-weighted images. FA reduction ranged from 12.4-19% (mean, 16.5%). Compared with control subjects, posterior fossa and supratentorial WM FA in patients were reduced by 14.6% (SD 1.9%) and 18.4% (SD 0.55%), respectively (P =.029). Reduction of supratentorial WM FA correlated with younger age at treatment (&lt; 5 years), longer interval since treatment (&gt; 5 years), and deterioration of school performance. Conclusion: DT imaging and use of the index FA is potentially useful for early detection and monitoring of treatment-induced WM injury in children with medulloblastoma. abstract_id: PUBMED:14741664 White-matter diffusion anisotropy after chemo-irradiation: a statistical parametric mapping study and histogram analysis. The aim of the study was to evaluate white-matter (WM) diffusion anisotropy in medulloblastoma survivors after cranial irradiation and chemotherapy using voxel-based analysis with SPM99 and fractional anisotropy (FA) histogram-derived indices, and to identify quantitative indices for detecting and monitoring children with treatment-induced white-matter injury. Familywise error rate (FWE) that corrects for multiple comparisons was used to locate statistically significant regions of P &lt; 0.05 in voxel-based analysis. Subsequently, the false discovery rate (FDR) controlling procedure (corrected P &lt; 0.05) was used. FA map histogram analysis of histogram-derived indices, mean FA, mean FA peak height, and peak location was performed. Two-sample t test was used in all analyses. Using FWE-corrected P &lt; 0.05, there was a cluster of reduced anisotropy in the periventricular white matter lateral to the left ventricular atrium. When FDR-corrected P &lt; 0.05 was used, there were multiple clusters of reduced anisotropy in the periventricular white matter, the corpus callosum, and corona radiata. Simplified voxel-based morphometry (VBM)-like analysis of cerebrospinal fluid (CSF) did not show significant differences between patient and control subjects. 'White-matter FA map' histogram showed significant reduction in mean FA and mean FA peak location and significant increase in mean FA peak height in the patient group compared to control subjects (P = 0.003, P = 0.003, and P = 0.014, respectively). This approach of quantifying FA can be applied to characterize anisotropy in the white matter after cranial irradiation and chemotherapy and can potentially be used to detect and monitor treatment-induced neurotoxicity. abstract_id: PUBMED:19117694 White matter fractional anisotropy correlates with speed of processing and motor speed in young childhood cancer survivors. Purpose: To determine whether childhood medulloblastoma and acute lymphoblastic leukemia (ALL) survivors have decreased white matter fractional anisotropy (WMFA) and whether WMFA is related to the speed of processing and motor speed. Methods And Materials: For this study, 17 patients (6 medulloblastoma, 5 ALL treated with high-dose methotrexate (MTX) (4 x 5 g/m(2)) and 6 with low-dose MTX (3 x 2 g/m(2))) and 17 age-matched controls participated. On a 3.0-T magnetic resonance imaging (MRI) scanner, diffusion tensor imaging (DTI) was performed, and WMFA values were calculated, including specific regions of interest (ROIs), and correlated with the speed of processing and motor speed. Results: Mean WMFA in the patient group, mean age 14 years (range 8.9 - 16.9), was decreased compared with the control group (p = 0.01), as well as WMFA in the right inferior fronto-occipital fasciliculus (IFO) (p = 0.03) and in the genu of the corpus callosum (gCC) (p = 0.01). Based on neurocognitive results, significant positive correlations were present between processing speed and WMFA in the splenium (sCC) (r = 0.53, p = 0.03) and the body of the corpus callosum (bCC) (r = 0.52, p = 0.03), whereas the right IFO WMFA was related to motor speed (r = 0.49, p &lt; 0.05). Conclusions: White matter tracts, using a 3.0-T MRI scanner, show impairment in childhood cancer survivors, medulloblastoma survivors, and also those treated with high doses of MTX. In particular, white matter tracts in the sCC, bCC and right IFO are positively correlated with speed of processing and motor speed. abstract_id: PUBMED:31786546 Diffusion tensor magnetic resonance imaging of the optic nerves in pediatric hydrocephalus. Objective: While conventional imaging can readily identify ventricular enlargement in hydrocephalus, structural changes that underlie microscopic tissue injury might be more difficult to capture. MRI-based diffusion tensor imaging (DTI) uses properties of water motion to uncover changes in the tissue microenvironment. The authors hypothesized that DTI can identify alterations in optic nerve microstructure in children with hydrocephalus. Methods: The authors retrospectively reviewed 21 children (&lt; 18 years old) who underwent DTI before and after neurosurgical intervention for acute obstructive hydrocephalus from posterior fossa tumors. Their optic nerve quantitative DTI metrics of mean diffusivity (MD) and fractional anisotropy (FA) were compared to those of 21 age-matched healthy controls. Results: Patients with hydrocephalus had increased MD and decreased FA in bilateral optic nerves, compared to controls (p &lt; 0.001). Normalization of bilateral optic nerve MD and FA on short-term follow-up (median 1 day) after neurosurgical intervention was observed, as was near-complete recovery of MD on long-term follow-up (median 1.8 years). Conclusions: DTI was used to demonstrate reversible alterations of optic nerve microstructure in children presenting acutely with obstructive hydrocephalus. Alterations in optic nerve MD and FA returned to near-normal levels on short- and long-term follow-up, suggesting that surgical intervention can restore optic nerve tissue microstructure. This technique is a safe, noninvasive imaging tool that quantifies alterations of neural tissue, with a potential role for evaluation of pediatric hydrocephalus. Answer: The findings from diffusion tensor magnetic resonance imaging (DT-MRI) in childhood medulloblastoma survivors indicate that there is a discrepancy in fractional anisotropy (FA) between the frontal and parietal lobes following whole-brain irradiation. This discrepancy suggests that the white matter in the frontal lobe may be more radiosensitive compared to the parietal lobe. Specifically, medulloblastoma survivors showed significantly reduced FA in both the frontal and parietal lobes when compared to control subjects, with the frontal lobe exhibiting a larger difference in FA than the parietal lobe (PUBMED:17544593). This finding was supported by another study that also observed more severe mean FA reduction in the frontal lobes compared to the parietal lobes despite the same radiation dose, suggesting regional susceptibility in the frontal lobe (PUBMED:16448821). Additional research has confirmed that pediatric survivors of posterior fossa tumors, including medulloblastoma, exhibit decreased FA values in the frontal lobes among other brain regions, which is indicative of white matter damage. This damage is more pronounced in patients who received adjuvant treatment, highlighting the neurotoxic effect of such treatments (PUBMED:19540067). Moreover, a study evaluating the effects of different radiation protocols on white matter architecture found that treatment with reduced-dose craniospinal irradiation plus a tumor bed boost preserved white matter in some brain regions, suggesting that the intensity of radiation treatment impacts the extent of white matter damage (PUBMED:27015518). Furthermore, DT-MRI has been used to demonstrate that white matter integrity, as measured by FA, is associated with cognitive and motor function impairments in pediatric posterior fossa tumor survivors (PUBMED:25850573). Another study found that pediatric medulloblastoma patients had lower FA in frontal white matter compared to controls, which was associated with decreased cognitive performance, indicating that frontal white matter is particularly vulnerable to treatment-induced damage (PUBMED:28541578). In summary, the discrepant FA between the frontal and parietal lobes after whole-brain irradiation in childhood medulloblastoma survivors reflects the regional radiosensitivity of white matter, with the frontal lobe being more susceptible to radiation-induced damage. This vulnerability has implications for cognitive and motor functions and underscores the need for treatment strategies that minimize neurotoxicity while effectively controlling the disease.
Instruction: Do maternal- or pregnancy-associated disease states affect blood pressure in the early neonatal period? Abstracts: abstract_id: PUBMED:19694689 Do maternal- or pregnancy-associated disease states affect blood pressure in the early neonatal period? Background: Placental vascular changes associated with maternal disease states may affect fetal vascular development. There is evidence suggesting that being born prematurely is associated with a higher blood pressure (BP) in later life. Aim: To determine whether maternal disease state affects BP in the early neonatal period. Methods: Cohort study of neonates admitted to neonatal intensive care unit with exposure to maternal hypertension and diabetes. Inclusion criteria were neonates greater than 27 weeks gestation not ventilated or requiring inotropes for more than 24 h, materna l hypertension (pregnancy induced or essential) or diabetes of any kind requiring treatment, and spontaneous delivery. Exclusion criteria included chromosomal or congenital anomaly and illicit maternal drug use. Oscillometric BP measurements taken until discharge on days 1, 2, 3, 4, 7, 14, 21 and 28. Placental histopathology was performed. Results: One hundred and ninety infants enrolled, 104 in the control and 86 in the study group. Sixty-five infants were born between 28-31 weeks and 125 infants between 32-41 weeks gestation. Those born between 28-31 weeks with a history of diabetes had a statistically higher systolic, mean and diastolic BP throughout the first 28 days of life (P = 0.001; P = 0.007; P = 0.02). Those born between 32-41 weeks gestation with placental pathology associated with altered uteroplacental perfusion had a higher systolic BP (P = 0.005). Conclusions: Maternal- or pregnancy-associated disease states appear to influence BP in the early neonatal period. Diabetes and altered placental perfusion were associated with higher BP readings. Clinical significance of these statistically elevated BPs in the early neonatal period is unknown. abstract_id: PUBMED:35692979 Maternal Obesity-Associated Neonatal Morbidities in Early Newborn Period. Maternal obesity has been associated with pregnancy-related complications and neonatal morbidities. The primary aim of this study was to evaluate early neonatal morbidities associated with maternal obesity from the infant-mother dyad data set at a single, large Regional Perinatal Center (RPC) in NY. A retrospective chart review of all mother-infant dyads born from January 2009 to December 2019 was done. Maternal obesity was defined using the NIH definition of pre-pregnancy body mass index (BMI) ≥ 30 Kg/m2. Maternal data included pre-pregnancy BMI, gestational diabetes, hypertension, and mode of delivery. Neonatal data recorded the birth weight, gestational age, respiratory support after delivery, diagnosis of transient tachypnea of the newborn (TTN), respiratory distress syndrome (RDS), neonatal hypoglycemia (NH), and hypoxic-ischemic encephalopathy (HIE). Diagnosis of TTN, RDS, NH, and HIE was defined by the service neonatologist and cross-checked by the data system review neonatologist. Medical records of 22,198 infant-mother dyads included in the study had 7,200 infants (32.4%) born to obese mothers and 14,998 infants (67.6%) born to non-obese women. There was a statistically significant increase in the diagnosis of gestational diabetes, gestational hypertension, and cesarean deliveries in obese mothers. Diagnosis of TTN, RDS, and NH was significantly higher in infants born to obese mothers, while HIE incidence was similar in both the groups. Infants born to obese mothers are more likely to be delivered by cesarean section and are at a higher risk of diagnosis of transient tachypnea of newborn, respiratory distress syndrome, and hypoglycemia in the early neonatal period. abstract_id: PUBMED:26924745 Use of Maternal Early Warning Trigger tool reduces maternal morbidity. Background: Maternal mortality in the United States has increased unabated for the past 20 years. Maternal morbidity is also affecting an increasingly large number of women in the United States. A number of national and state organizations have recommend the use of maternal early warning tools as a method to combat this problem. There are limited data suggesting that the use of these types of clinical assessment tools can reduce maternal morbidity. Objective: We sought to determine if maternal morbidity could be reduced with the implementation of a clinical pathway-specific Maternal Early Warning Trigger (MEWT) tool. Study Design: The tool was developed internally and prospectively implemented as a pilot project in 6 of 29 hospitals within a large hospital system. The primary goal was early assessment and treatment of patients suspected of clinical deterioration. The tool addressed the 4 most common areas of maternal morbidity: sepsis, cardiopulmonary dysfunction, preeclampsia-hypertension, and hemorrhage. To be considered positive, triggers needed to be sustained for &gt;20 minutes and were defined as severe (single abnormal value): maternal heart rate (HR) &gt;130 beats/min (bpm), respiratory rate &gt;30/min, mean arterial pressure &lt;55 mm Hg, oxygen saturation &lt;90%, or nurse concern; or nonsevere (required 2 abnormal values): temperature &gt;38 or &lt;36°C, blood pressure &gt;160/110 or &lt;85/45 mm Hg, HR &gt;110 or &lt;50 bpm, respiratory rate &gt;24 or &lt;10/min, oxygen saturation &lt;93%, fetal HR &gt;160 bpm, altered mental status, or disproportionate pain. Within each group, recommended management or assessment was also provided. Outcome measures were Centers for Disease Control and Prevention (CDC)-defined severe maternal morbidity, composite maternal morbidity, and intensive care unit (ICU) admissions. Two time intervals were used to analyze the effect of the MEWT tool: a 24-month baseline control period and a 13-month MEWT study period. To determine that the findings noted were not simply changes that would have occurred without the utilization of the early warning tool, we also compared a control population from nonpilot sites during the same baseline and 13-month time periods. Results: There were 36,832 deliveries at the pilot sites (24,221 pre- and 12,611 post-MEWT testing) and 146,359 at the nonpilot sites (95,718 pre- and 50,641 post-MEWT testing) during the 2 study time periods. Use of the MEWT tool resulted in significant reductions in CDC severe maternal morbidity (P &lt; 0.01) and composite morbidity (P &lt; 0.01). ICU admissions were unchanged. At nonpilot sites CDC severe maternal morbidity, composite morbidity, and ICU admissions were unchanged between baseline and the post-MEWT testing time period. Conclusion: The use of the MEWT tool in this study, designed to address 4 of the most common causes of maternal morbidity, as well as provide assessment and management recommendations, resulted in significant improvement in maternal morbidity. The variation in hospital delivery services at the pilot sites suggests that this maternal early warning tool would be suitable for use in the majority of maternity centers in the United States. abstract_id: PUBMED:31267796 Thresholds for Ambulatory Blood Pressure Monitoring Based on Maternal and Neonatal Outcomes in Late Pregnancy in a Southern Chinese Population. Background In contrast to the general population, outcome-derived thresholds for diagnosing ambulatory hypertension in pregnancy are not yet available. We aimed to identify and compare outcome-derived ambulatory blood pressure (BP) monitoring thresholds for adverse perinatal outcomes by using approaches related and not related to clinic BP in a southern Chinese population. Methods and Results Ambulatory BP monitoring was performed in a cohort of 1768 high-risk participants in late pregnancy who were not taking antihypertensive medications. Participants were followed for composite maternal (severe complications) and neonatal (pregnancy loss, advanced neonatal care, and small for gestational age) outcomes. Modeling of clinic BP-unrelated approaches revealed a nonlinear threshold effect of ambulatory diastolic BP on the composite outcome, with increased risk for daytime ≥79 mm Hg and 24-hour measurement ≥76 mm Hg. For other ambulatory BP components showing linear associations with outcome, the following thresholds were identified: 131 mm Hg for daytime systolic, 121 mm Hg for nighttime systolic, 130 mm Hg for 24-hour systolic, and 73 mm Hg for night-time diastolic BP. These thresholds unrelated to clinic BP were lower than the equivalents yielding a similar probability of outcome to clinic BP of 140/90 mm Hg and were comparable with equivalents to clinic BP of 130/80 mm Hg. Conclusions Using an outcome-derived approach unrelated to clinic BP, we identified rounded thresholds to define ambulatory hypertension in at-risk women in late pregnancy in a southern Chinese population as follows: 130/80 mm Hg for daytime, 120/75 mm Hg for nighttime, and 130/75 mm Hg for 24-hour measurement. For wider clinical applicability and to align both nonpregnancy and pregnancy ambulatory BP monitoring with an outcomes-based approach, prospective, multiethnic, international studies from early pregnancy onward will be required. abstract_id: PUBMED:28219626 Maternal depression and anxiety and fetal-neonatal growth. Objective: Maternal depression and anxiety have been found to negatively affect fetal and neonatal growth. However, the independent effects of maternal depression and anxiety on fetal-neonatal growth outcomes and trajectories remain unclear. This study aimed to analyze simultaneously the effects of maternal prenatal depression and anxiety on (1) neonatal growth outcomes, and (2), on fetal-neonatal growth trajectories, from the 2nd trimester of pregnancy to childbirth. Methods: A sample of 172 women was recruited and completed self-reported measures of depression and anxiety during the 2nd and 3rd trimesters of pregnancy, and at childbirth. Fetal and neonatal biometrical data were collected from clinical reports at the same assessment moments. Results: Neonates of prenatally anxious mothers showed lower weight (p=0.006), length (p=0.025), and ponderal index (p=0.049) at birth than neonates of prenatally non-anxious mothers. Moreover, fetuses-neonates of high-anxiety mothers showed a lower increase of weight from the 2nd trimester of pregnancy to childbirth than fetuses-neonates of low-anxiety mothers (p&lt;0.001). Considering maternal depression and anxiety simultaneously, only the effect of maternal anxiety was found on these markers of fetal-neonatal growth outcomes and trajectories. Conclusion: This study demonstrates the independent longitudinal effect of maternal anxiety on major markers of fetal-neonatal growth outcomes and trajectories, simultaneously considering the effect of maternal depression and anxiety. abstract_id: PUBMED:15451123 Deficits in plasma oxytocin responses and increased negative affect, stress, and blood pressure in mothers with cocaine exposure during pregnancy. In animals, oxytocin enhances maternal behavior and lowers blood pressure (BP) and negative affect, while parturitional cocaine disrupts oxytocin activity and increases maternal neglect and aggression. Thus, we compared oxytocin, BP, maternal behavior, and affect in mothers of infants who used cocaine (cocaine, n = 10) or did not (no drug, n = 25) during pregnancy. Laboratory BP and circulating oxytocin, catecholamines, and cortisol were examined before and during a speech stressor on 2 days, with vs. without prestress baby holding. Ambulatory monitoring assessed BP, urinary norepinephrine, and cortisol for 24 h at home. The cocaine group had lower oxytocin levels, greater hostility and depressed mood, less support from others and mastery over life events, higher BP during all events of testing without the baby, and higher ambulatory BP and urinary norepinephrine at home, while cortisol and epinephrine responses were blunted. Although they tended to hold their babies less often at home, baby holding in the laboratory led to decreased BP in cocaine mothers who then did not differ from no-drug mothers in BP or observed affect. abstract_id: PUBMED:28416176 Maternal early warning systems-Towards reducing preventable maternal mortality and severe maternal morbidity through improved clinical surveillance and responsiveness. Despite increasing awareness of obstetric safety initiatives, maternal mortality and severe maternal morbidity in the United States have continued to increase over the past 20 years. Since results from large-scale surveillance programs suggest that up to 50% of maternal deaths may be preventable, new efforts are focused on developing and testing early warning systems for the obstetric population. Early warning systems are a set of specific clinical signs or symptoms that trigger the awareness of risk and an urgent patient evaluation, with the goal of reducing severe morbidity and mortality through timely diagnosis and treatment. Early warning systems have proven effective at predicting and reducing mortality and severe morbidity in medical, surgical, and critical care patient populations; however, there has been limited research on how to adapt these tools for use in the obstetric population, where physiologic changes of pregnancy render them inadequate. In this article, we review the available obstetric early warning systems and present evidence for their use in reducing maternal mortality and severe maternal morbidity. We also discuss considerations and strategies for implementation and acceptance of these early warning systems for clinical use in obstetrics. abstract_id: PUBMED:24286869 Neonatal morbidity in early-term newborns Introduction: In the last decades has increased significantly The birth of children from 37 to 38 weeks of gestation, a period called early term, has significantly increased in the past twenty years or so, parallel to the increase in induced deliveries and the cesarean rate. Patients And Method: Retrospective cohorts population study, which included those babies born between 37 and 41 weeks of gestation in the period 1992-2011 (n=35.539). This population was divided into two cohorts, early term newborn (RNTP) of 37-38 weeks (n=11,318), and full term newborn (RNTC), of 39-41 weeks of gestation (n=24,221). The rates of cesarean section, neonatal unit admission, respiratory morbidity, apnea and need for assisted ventilation, hyperbilirubinemia requiring phototherapy, hypoglycemia, seizures, hypoxic-ischemia encephalopathy, need for parenteral nutrition and early sepsis were all reviewed. Results: There was a progressive increase in the number of caesarean sections throughout the period studied (from 30.9% to 40.3%). The cesarean section rate was higher in RNTP than in the RNTC (38.3% vs 31.3%, P&lt;.0001). On comparing the two groups, significant differences were found in the rate of admission to neonatal unit, 9.1% vs 3.5% (P&lt;.0001); respiratory morbidity (hyaline membrane 0.14% vs 0.007% [P&lt;.0001], transient tachypnea 1.71% vs 0.45% [P&lt;.0001], mechanical ventilation 0.2% vs 0.07% [P&lt;.009], continuous positive airway pressure 0.11% vs 0.01% [P&lt;.0001]), phototherapy 0.29% vs 0.07% (P&lt;.0001), hypoglycemia 0.54% vs 0.11% (P&lt;.0001), parenteral nutrition 0.16% vs 0.04% (P&lt;.0001). There were no significant differences in the rate of early sepsis, pneumothorax, aspiration syndromes, seizures and hypoxic-ischemic encephalopathy. Conclusions: In our environment, there is a significant number of RNTP, which have a significantly higher morbidity than newborns RNTC registered. After individualizing each case, it is essential not end a pregnancy before 39 weeks of gestation, except for maternal, placental or fetal conditions indicating that continuing the pregnancy may increase the risk for the fetus and/or the mother. abstract_id: PUBMED:32770947 Interaction of chorioamnionitis at term with maternal, fetal and obstetrical factors as predictors of neonatal mortality: a population-based cohort study. Background: Chorioamnionitis is a frequent complication of pregnancy and is known to be associated with serious adverse post-natal outcomes including death. However, the assessment of fetal well-being in labor in the context of chorioamnionitis is often challenging because of fetal tachycardia. Identifying specific risk factors for adverse neonatal outcomes in the context of chorioamnionitis could therefore be of paramount importance. This study aimed to determine if maternal and fetal risk factors for increased neonatal mortality and early neonatal mortality are modified in the context of chorioamnionitis in term pregnancies. Methods: A retrospective population-based cohort study using the United States birth/infant death public file from 2011 to 2013 was performed, including all live births at 37 weeks gestation and beyond. Interaction between chorioamnionitis and maternal demographic variables as well as labor and delivery potential risk factors were analyzed for association with neonatal death (&lt; 28 days) and early neonatal death (&lt; 7 days) using multivariate logistic regressions. Results: Among 9,034,428 live births, the prevalence of chorioamionitis was 1.29% (95% CI 1.28-1.30%). The incidence of neonatal death and early neonatal death were 0.09 and 0.06% in the chorioamnionitis group versus 0.06 and 0.04% in the no chorioamnionitis group (p = 0.0003 and &lt; 0.0001), respectively. Smoking was significantly associated with neonatal death and early neonatal death in the context of chorioamnionitis (OR 2.44, CI:1.34-4.43/ 2.36 CI:1.11-5.01) but was either less strongly or not associated in the absence of chorioamnionitis (OR 1.24, CI:1.14-1.35/0.93, CI:0.82-1.05). The association between gestational age (37 weeks compared to 39 weeks) and neonatal death was more important in the context of chorioamnionitis (OR = 3.19, CI: 1.75-5.82 versus 1.63, CI: 1.49-1.79). Multivariate analysis identified the following risk factors for neonatal death and/or early neonatal death: low maternal education, extreme maternal age, obesity (BMI &gt; 35 kg/m2), late or no prenatal care, diabetes, meconium-stained amniotic fluid, gestational ages other than 39 weeks, neonatal weight &lt; 2500 g and delivery by vacuum or caesarian. Conclusions: Smoking as well as early term have a positive interaction with chorioamnionitis for the risk of neonatal mortality. This should be taken into account when counseling pregnant women and managing laboring pregnant women with suspected chorioamnionitis. abstract_id: PUBMED:35141037 Does the Degree of Maternal Fever in the Setting of Chorioamnionitis Lead to Adverse Neonatal Outcomes? Objective The effect of the degree of maternal fever in the setting of chorioamnionitis on neonatal morbidity is unclear. The objective of this study is to assess the association between high maternal fevers (≥ 39°C) on neonatal morbidity. Study Design Secondary analysis of Maternal-Fetal Medicine Units (MFMU) Cesarean Registry data obtained from 1999 to 2002 among singleton gestations with chorioamnionitis. Women with a temperature less than 39°C (low fever) compared with those with greater than or equal to 39°C (high fever). Primary outcome was a composite of adverse neonatal outcomes such as death, sepsis, necrotizing enterocolitis, grade-3 or -4 intraventricular hemorrhage, seizure within 24 hours of delivery, intubation within 24 hours of delivery, and requiring cardiopulmonary resuscitation. Demographic characteristics compared using Fisher's exact and Wilcoxon's rank-sum test as appropriate. Multivariate logistic regression analysis with performed to control for cofounders. Stratified analysis also performed to assess outcomes in term infants. Results Of 1,313 included women, 1,200 (91.3%) were in the low fever group and 113 (8.7%) were in the high fever group. Women in the high fever group were more likely to be African American and group B Streptococcus positive. No difference in primary outcome was noted between the groups (38.9% high fever vs. 35.8% low fever, p = 0.54). High maternal fever was associated with increased risk of NICU admission (48.1 vs. 50.4%, p = 0.02). When controlling for African American race, preterm birth, and delivery route, patients with high fever were not more likely to have adverse neonatal outcomes (adjusted odds ratio [aOR] = 1.28, 95% confidence interval [CI]: 0.84, 1.98). In the analysis limited to term infants, when controlling for confounders, high fever, similarly, was not associated with increased odds of adverse neonatal outcomes (aOR = 1.59, 95% CI: 0.96, 2.65). Conclusion The degree of maternal fever does not appear to be associated with an increased likelihood of adverse neonatal outcomes. Better understanding maternal factors that affect neonatal morbidity in the setting of chorioamnionitis is critical. Key Points High maternal fever in the setting of chorioamnionitis does not appear to have an increased likelihood of adverse neonatal outcomes.It is important to identify factors that may increase the risk of adverse outcomes such as early onset sepsis.Maternal fever may not be a strong indicator for neonatal outcomes and antibiotic protocols. Answer: Yes, maternal or pregnancy-associated disease states appear to influence blood pressure in the early neonatal period. A cohort study of neonates admitted to a neonatal intensive care unit with exposure to maternal hypertension and diabetes found that infants born between 28-31 weeks with a history of diabetes had statistically higher systolic, mean, and diastolic blood pressure throughout the first 28 days of life. Additionally, those born between 32-41 weeks gestation with placental pathology associated with altered uteroplacental perfusion had a higher systolic blood pressure (PUBMED:19694689). This suggests that conditions such as maternal diabetes and altered placental perfusion are associated with higher blood pressure readings in neonates.
Instruction: Is antenatal depression associated with adverse obstetric and perinatal outcomes? Abstracts: abstract_id: PUBMED:25777791 Is antenatal depression associated with adverse obstetric and perinatal outcomes? Objective: To examine whether a pre-gestational diagnosis of depression is a risk factor for adverse obstetric and neonatal outcome. Study Design: A retrospective cohort study investigating maternal characteristics, obstetrical and perinatal outcomes in singleton pregnancies of women with and without a diagnosis of depression was conducted. A pre-gestational diagnosis of depression was made by a psychiatrist or family physician and was recorded in the patients' chart. Multiple logistic regression models were used to control for possible confounders. Results: During the study period, 256 312 deliveries occurred. Of which, 221 women (0.1%) had a pre-gestational diagnosis of depression. When examining obstetric outcomes, women with a diagnosis of depression were older (32.05 ± 5.772 versus 28.56 ± 5.851) and smokers (7.2% versus 1.1%), had a higher rate of preterm deliveries (37.99 ± 2.989 versus 39.02 ± 2.249) and cesarean sections (CS; 28.5% versus 13.6%) in comparison to the control group. When examining neonatal outcomes, neonates of women diagnosed with depression had a lower birth mean weight (3.038.47 ± 649.6 versus 3183.44 ± 551.8) and increased rates of perinatal mortality (3.2% versus 1.3%). Using a multiple logistic regression model, with perinatal mortality as the outcome variable to control for cofounders such as maternal age, preterm birth, chronic hypertension and gestational diabetes mellitus, a diagnosis of depression was not found to be an independent risk factor for perinatal mortality. Another multiple logistic regression model found advanced maternal age, smoking, preterm birth and labor induction to be associated with a diagnosis of depression. Conclusion: Pregnant women diagnosed with depression are at an increased risk for preterm birth, low birth weight, and CSs. However, it was not associated with increased rates of perinatal mortality. abstract_id: PUBMED:35057865 Effect of perinatal depression on birth and infant health outcomes: a systematic review and meta-analysis of observational studies from Africa. Background: Antenatal depression is associated with intrauterine growth retardation, preterm birth, and low birth weight. Infants born to mothers with postnatal depression also may suffer from malnutrition and other health problems. Even though there are few single studies conducted so far, a systematic review of these studies is highly important to highlight the effect of antenatal and perinatal depression on adverse birth and infant health outcomes in Africa. Methods: We used the Preferred Report Items for Systematic Review and Meta-analysis (PRISMA) when conducting this study. Databases like CINAHL (EBSCO), MEDLINE (via Ovid and PubMed), PsycINFO, Emcare, Psychiatry Online, and Scopus were searched. In addition, Google Scholar and references from a list of eligible studies were explored. We included good quality observational studies based on Newcastle Ottawa Scale which are published in the English language between 2007 and 2018. Heterogeneity and publication bias were assessed. Meta-analysis with a random effect model was employed to determine the pooled effect sizes with a 95% confidence interval. The review protocol is registered in PROSPERO (CRD42018106714). Result: We found three studies (1511 participants) and 11 studies (22,254 participants) conducted on the effect of antenatal depression on birth outcomes and perinatal depression on adverse infant health outcomes, respectively. The overall risk of having adverse birth outcomes was 2.26 (95% CI: 1.43, 3.58) times higher among pregnant mothers with depression. The risk of preterm birth and low birth weight was 1.77 (95% CI: 1.03, 3.04) and 2.98 (95% CI: 1.60, 5.55) respectively. Similarly, the risk of having adverse infant health outcomes namely malnutrition and febrile illness was 1.61 (95% CI: 1.34, 1.95) times higher among mothers who had perinatal depression. Conclusions: We have found a significant association between antenatal depression and adverse birth outcomes, low birth weight and preterm birth. Similarly, a significant effect of perinatal depression on adverse infant health outcomes namely, malnutrition, and febrile illnesses was observed. The findings highlight that it is time to integrate mental health services with routine maternal health care services to improve birth outcomes and reduce infant morbidity. abstract_id: PUBMED:31851552 Increased rates of adverse perinatal outcomes in women with gestational diabetes and depression. Objective: We sought to examine the impact of depression on adverse perinatal outcomes in women with Gestational Diabetes Mellitus (GDM). Methods: We performed a retrospective cohort study comparing the rates of perinatal complications among singleton, nonanomalous births to women with GDM and the diagnosis of depression compared to GDM women without depression between 2007 and 2011 in California. Perinatal outcomes were analyzed using chi-square and multivariable logistic regression to compare frequencies of characteristics and outcomes and to determine the strength of association of depression and adverse perinatal outcomes among women with GDM. Statistical comparisons with a p-value of less than .05 and 95% CI that did not cross the null were considered statistically significant. Results: Among the cohort of 170,572 women with GDM, 2090 (1.22%) were diagnosed with antenatal depression. Women with GDM and depression had significantly higher rates of preeclampsia (adjusted Odds Ratio [aOR] 1.28, 95% CI 1.11-1.49) and gestational hypertension (aOR 1.23, 95% CI 1.05-1.44). Women with GDM and depression also had higher rates of preterm delivery at &lt;37, and &lt;34 weeks gestational age (aOR 1.33, 95% CI 1.18-1.50 and 1.36, 95% CI 1.15-1.61, respectively). Conclusion: Women with GDM and a diagnosis of depression have higher rates of adverse perinatal outcomes than women with GDM alone. Identifying and managing depression among women with GDM has the potential to improve the care and health of this high-risk population. abstract_id: PUBMED:36496099 Association between antenatal depression and adverse perinatal outcomes: a prospective cohort study. Background: Antenatal depression might cause adverse pregnancy outcomes. However, previous study results were inconsistent, especially in the low- and middle- income countries. We aimed to study the association between antenatal depression and adverse perinatal outcomes in a Chinese population. Methods: We performed a prospective cohort study and enrolled pregnant women from January 2020 to January 2021. Antenatal depressive symptoms in the third trimester of pregnancy were evaluated by the Edinburgh Postpartum Depression Scale (EPDS). Baseline characteristics and pregnancy outcomes were recorded. After adjusting for confounding factors (age, occupation, education level, and annual income), multivariate logistic regression analysis was applied to evaluate the associations between antenatal depression and pregnancy outcomes. Results: Among the 5209 participants, 1448 (27.7 %) pregnant women were positive for depression. After adjusting for potential confounders, women with antenatal depressive symptoms were significantly more likely to deliver prematurely [Odds ratio (OR) = 1.404, 95 % confidence interval (CI) = 1.020-1.933, P = 0.037] and receive cesarean section (OR = 1.154, 95 % CI = 1.002-1.331, P = 0.048). Limitations: EPDS, not a structured diagnostic interview, was used for psychological assessment. In addition, we only screened the women in their third trimester in a single research center. The association between the duration of antenatal depression and perinatal outcomes was not evaluated. Conclusions: Depressive symptoms were common among Chinese women in their third trimester of pregnancy. Women with antenatal depressive symptoms had increased cesarean section and preterm delivery risks. Screening and treatment for antenatal depression are needed during the prenatal care. abstract_id: PUBMED:34280383 Trajectories of antenatal depression and adverse pregnancy outcomes. Background: Antenatal depression affects approximately 1 of 7 pregnancies, with an increasing prevalence across gestation. Data regarding the associations between antenatal depression and adverse pregnancy outcomes yielded conflicting results. However, previous studies evaluated the cross-sectional prevalence of depression at various time points and not the depressive symptom trajectory across gestation. Objective: This study aimed to identify whether the trajectory of antenatal depressive symptoms is associated with different risks of adverse pregnancy outcomes. Study Design: This was a secondary analysis of a large multisite prospective cohort of nulliparous women across the United States. The Edinburgh Postpartum Depression Scale was administered at 2 study visits: between 6 and 14 weeks' gestation and between 22 and 30 weeks' gestation. The Edinburgh Postpartum Depression Scale score trajectories were categorized as improved, stable, or worsened based on whether the scores changed by at least 1 standard deviation between the 2 visits. The frequencies of adverse pregnancy outcomes (hypertensive disorders of pregnancy, abruption, cesarean delivery, preterm birth [ie, &lt;37 weeks' gestation], small for gestational age neonates, neonatal intensive care unit admission, and maternal readmission) were compared with depression trajectories across gestation in bivariable and multivariable analyses. Secondary analyses evaluated the frequencies of spontaneous and medically indicated preterm births and frequencies of spontaneous and medically indicated preterm births before 35, 32, and 28 weeks' gestation. Results: Of the 8784 women who completed the 2 antenatal Edinburgh Postpartum Depression Scale screens, 1141 (13.0%) had improved, 6663 (75.9%) had stable, and 980 (11.2%) had worsened depressive symptom trajectories across gestation. Compared with women with improved or stable depressive symptoms, those with worsened symptoms were more likely to experience preterm birth (8.3% vs 7.4% vs 9.9%, respectively; P=.018). After controlling for potential confounders, worsened depressive symptoms remained associated with more frequent preterm birth (adjusted odds ratio, 1.68; 95% confidence interval, 1.10-2.57). Conclusion: Women with depression symptoms that worsen as pregnancy progresses have increased odds of preterm birth. Future research is warranted to optimize and implement effective prevention, screening, and treatment protocols for antenatal depressive symptoms as a strategy to prevent preterm birth. abstract_id: PUBMED:38283425 The Effectiveness of Adequate Antenatal Care in Reducing Adverse Perinatal Outcomes: Evidence From a Low- or Middle-Income Country. Background and aim Antenatal care (ANC) is universally acknowledged as an essential intervention for enhancing the well-being of both mothers and children. The World Health Organization advises a minimum of four ANC visits. The objective of this study is to assess the effectiveness of adequate ANC in mitigating adverse perinatal outcomes. Methods This cross-sectional study was done at the Department of Obstetrics and Gynecology, Delta Medical College &amp; Hospital, Bangladesh, from March 2023 to August 2023. A total of 226 mothers who gave birth at the hospital during this period were enrolled in the study. Results More than 87% of the participants received adequate (≥4 visits) antenatal care from a registered physician. More than 84% of the mothers gave birth via cesarean section. Among the mothers who received inadequate ANC, the proportion of adverse perinatal outcomes was higher (69.0%) than that of those who received adequate ANC (32.0%). A significant association (p&lt;0.05) was noted between inadequate antenatal care and adverse perinatal outcomes. Pregnant women receiving adequate antenatal checkups were 79% less likely to experience adverse perinatal outcomes compared to those receiving inadequate ANC. Conclusion Adequate ANC is a very efficient and economical strategy for mitigating adverse perinatal outcomes. abstract_id: PUBMED:33879069 Antenatal depressive symptoms and adverse perinatal outcomes. Background: The association of antenatal depression with adverse pregnancy, birth, and postnatal outcomes has been an item of scientific interest over the last decades. However, the evidence that exists is controversial or limited. We previously found that one in five women in Kuwait experience antenatal depressive symptoms. Therefore, the aim of this study was to examine whether antenatal depressive symptoms are associated with preterm birth (PTB), small for gestational age (SGA), or large for gestational age (LGA) babies in this population. Methods: This was a secondary analysis based on data collected in the Transgenerational Assessment of Children's Environmental Risk (TRACER) Study that was conducted in Kuwait. Logistic regression analysis was used to examine whether antenatal depressive symptoms assessed using the Edinburgh Depression Scale (EDS) were associated with preterm birth, small for gestational age, and large for gestational age babies. Results: A total of 1694 women had complete information about the outcomes of interest. Women with depressive symptoms in pregnancy had increased, albeit non-significant, odds of having PTB (OR = 1.41; 95%CI: 0.81, 2.45), SGA babies (OR = 1.26; 0.80, 1.98), or LGA babies (OR = 1.27; 0.90, 1.79). Antenatal depressive symptoms had similar increased odds for the three outcomes even after adjusting for several covariates though none of these reached statistical significance. Conclusions: In the present study, the depressive symptoms in pregnancy did not predict adverse birth outcomes, such as PTB, SGA, and LGA, which adds to the currently non-conclusive literature. However, further research is needed to examine these associations, as the available evidence is quite limited. abstract_id: PUBMED:30349732 A retrospective analysis of adverse obstetric and perinatal outcomes in adolescent pregnancy: the case of Luapula Province, Zambia. Background: About three in ten young women aged 15-19 have begun childbearing among the Zambian population, with adolescent pregnancy levels as high as 35% in rural areas. In 2009, Luapula reported 32.1% adolescent pregnancies. The study sought to investigate obstetric and perinatal outcomes among adolescents compared to mothers aged 20-24 years delivering at selected health facilities in Kawambwa and Mansa districts of Luapula. Methods: A retrospective analysis was carried out of all deliveries to mothers aged between 10 and 24 years for the period January 2012 to January 2013. A total of 2795 antenatal and delivery records were reviewed; 1291 adolescent mothers and 1504 mothers aged 20-24 years. Crude and adjusted odds ratios for the association between maternal age and adverse obstetric and perinatal outcomes were obtained using logistic regression models. Results: The mean age of the adolescent mothers was 17.5 years. Mothers younger than 20 years faced a higher risk for eclampsia, anaemia, haemorrhage, Cephalopelvic disproportion, prolonged labour and caesarean section. After adjustment for potential confounders, the association between maternal age and adverse obstetric and perinatal outcome diminished. Children born to mothers younger than 20 were at increased risk for low birth weight, pre-term delivery, low Apgar score and neonatal death; the risk for asphyxia, however, tended to increase with age. Conclusion: The findings demonstrate that adolescent pregnancy increases the risk of adverse obstetric and perinatal outcomes. High rates of adolescent pregnancies in Luapula province are likely as a result of the predominantly rural and poor population. Understanding the factors that contribute to the high levels of adolescent pregnancy in the region will be vital in addressing the situation and subsequently reducing the high obstetric and perinatal morbidity and mortality. abstract_id: PUBMED:37204970 Association of antenatal anxiety with adverse pregnancy outcomes: A prospective hospital-based study. Background: Anxiety is common in pregnant women. Many studies have shown association of antenatal anxiety with adverse pregnancy outcomes, though the results are conflicting. Further, there are very limited studies on the subject reported from India, due to which, the data is limited. Hence this study was undertaken. Materials And Methods: Two hundred randomly selected consenting registered pregnant women reporting for antenatal follow up during third trimester of pregnancy were included in the study. Hindi version of Perinatal Anxiety Screening scale (PASS) was used to assess anxiety. Edinburgh Postnatal Depression rating Scale (EPDS) was used to assess comorbid depression. These women were followed up in the post-natal period to assess pregnancy outcomes. Chi-square test, Analysis of Variance (ANOVA) and correlation coefficients were calculated. Results: Analysis was done for 195 subjects. Most women (48.7%) were between 26 and 30 yrs of age. Primigravidas comprised 11.3 % of total study sample. Mean anxiety score was 23.6 (range 5-80). Adverse pregnancy outcomes were noted in 99 women; however anxiety scores in this group were not different from the group without adverse outcomes. No significant group differences were found with respect to PASS or EPDS scores. None of the women were found to have a syndromal anxiety disorder. Conclusions: Antenatal anxiety was not found to be associated with adverse pregnancy outcomes. This finding is contrary to the results obtained in earlier studies. More enquiry is needed into this area to replicate the results with clarity in larger samples, in Indian context. abstract_id: PUBMED:37121567 Adverse obstetric and perinatal outcomes of patients with history of recurrent miscarriage: a retrospective cohort study. Objective: To examine the associations between a history of recurrent miscarriage (RM) and adverse obstetric and perinatal outcomes in the subsequent pregnancy that progressed beyond 24 weeks. Design: Retrospective cohort study. Setting: A large tertiary maternity hospital. Patient(s): All women who booked for antenatal care and delivery between January 2014 and August 2021 were recorded. The study was limited to women with a singleton pregnancy, and to avoid intraperson correlation, we selected the first record of delivery from each mother in the study, leaving 108,792 deliveries for analysis. Obstetric and perinatal outcomes were compared among 1994 women (1.83%) with a history of ≥2 miscarriages (RM), 11,477 women (10.55%) with a history of 1 miscarriage, and 95,321 women (87.62%) with no history of miscarriage, respectively. Intervention(s): Women with a history of ≥2 miscarriages or RM. Main Outcome Measure(s): Obstetric complications included gestational diabetes mellitus, preeclampsia (subclassified as preterm and term preeclampsia), placenta previa, placenta accreta, and fetal distress. Perinatal outcomes included emergency cesarean section, elective cesarean section, induction, postpartum hemorrhage, preterm birth, stillbirth, Apgar score &lt;7 at 5 minutes, neonatal asphyxia, neonatal sex, congenital; malformation, low birth weight, and neonatal death. Result(s): After adjusting for relevant confounders, there was an increased risk of adverse obstetric and perinatal outcomes in a subsequent pregnancy for women with a history of RM, specifically for placental dysfunction disorders: preterm preeclampsia (risk ratio [RR] = 1.58; 95% confidence interval [CI], 1.03-2.32), preterm birth (RR = 1.34; 95% CI, 1.15-1.54)], and abnormal placentation, that is placenta previa (RR = 1.78; 95% CI, 1.36-2.28), and placenta accreta (RR = 4.19; 95% CI, 2.75-6.13). Conclusion(s): Significant associations existed between a history of RM and the occurrence of adverse obstetric and perinatal outcomes including placental dysfunction disorders and abnormal placentation. These findings may contribute to the early detection and appropriate intervention for placenta-associated diseases in women with a history of RM, with the goal of avoiding or reducing the associated detrimental effects. Answer: Yes, antenatal depression is associated with adverse obstetric and perinatal outcomes. Several studies have found significant associations between antenatal depression and various negative outcomes. For instance, a retrospective cohort study found that pregnant women diagnosed with depression are at an increased risk for preterm birth, low birth weight, and cesarean sections (CSs), although it was not associated with increased rates of perinatal mortality (PUBMED:25777791). A systematic review and meta-analysis of observational studies from Africa reported that the overall risk of having adverse birth outcomes was higher among pregnant mothers with depression, with increased risks of preterm birth and low birth weight, as well as adverse infant health outcomes such as malnutrition and febrile illness (PUBMED:35057865). In women with Gestational Diabetes Mellitus (GDM), those diagnosed with antenatal depression had significantly higher rates of preeclampsia, gestational hypertension, and preterm delivery (PUBMED:31851552). Another prospective cohort study in a Chinese population found that women with antenatal depressive symptoms were significantly more likely to deliver prematurely and receive cesarean section (PUBMED:36496099). A study on the trajectories of antenatal depression revealed that women with depressive symptoms that worsen as pregnancy progresses have increased odds of preterm birth (PUBMED:34280383). However, some studies have reported conflicting or non-significant findings. For example, a study in Kuwait did not find that depressive symptoms in pregnancy predicted adverse birth outcomes such as preterm birth, small for gestational age, and large for gestational age (PUBMED:33879069). Similarly, a study in India did not find an association between antenatal anxiety, which can be related to depression, and adverse pregnancy outcomes (PUBMED:37204970). Despite some conflicting evidence, the majority of studies suggest that antenatal depression is a risk factor for adverse obstetric and perinatal outcomes, highlighting the importance of screening and treating depression during pregnancy to potentially improve outcomes for both mothers and infants.
Instruction: Does minimally-invasive pancreaticoduodenectomy have advantages over its open method? Abstracts: abstract_id: PUBMED:31949389 Pancreatic head cancer: Open or minimally invasive pancreaticoduodenectomy? Pancreatic head cancer still represents an insurmountable barrier for patients and pancreatic surgeons. Pancreaticoduodenectomy (PD) continues to be the operative standard of care and potentially curative procedure for pancreatic head cancer. Despite the rapid development of minimally invasive techniques, whether the efficacy of minimally invasive pancreaticoduodenectomy (MIPD) is noninferior or superior to open pancreaticoduodenectomy (OPD) remains unclear. In this review, we summarized the history of OPD and MIPD and the latest staging and classification information for pancreatic head cancer as well as the proposed recommendations for MIPD indications for patients with pancreatic head cancer. By reviewing the MIPD- vs. OPD-related literature, we found that MIPD shows noninferiority or superiority to OPD in terms of safety, feasibility, enhanced recovery after surgery (ERAS) and several short-term and long-term outcomes. In addition, we analyzed and summarized the different MIPD outcomes in the USA, Europe and China. Certain debates over MIPD have continued, however, selection bias, the large number of low-volume centers, the steep MIPD learning curve, high conversion rate and administration of neoadjuvant therapy may limit the application of MIPD for pancreatic head cancer. abstract_id: PUBMED:27005287 Systematic review and meta-analysis of minimally invasive versus open approach for pancreaticoduodenectomy. Backgrounds And Objective: The technique of minimally invasive pancreatic surgeries has evolved rapidly, including minimally invasive pancreaticoduodenectomy (MIPD). However, controversy on safety and feasibility remains when comparing the MIPD with the open pancreaticoduodenectomy (OPD); therefore, we aimed to compare MIPD and OPD with a systemic review and meta-analysis. Methods: Multiple electronic databases were systematically searched to identify studies (up to February 2016) comparing MIPD with OPD. Intra-operative outcomes, oncologic data, postoperative complications and postoperative recovery were evaluated. Results: Twenty-two retrospective studies including 6120 patients (1018 MIPDs and 5102 OPDs) were included. MIPD was associated with a reduction in estimated blood loss (WMD -312.00 ml, 95 % CI -436.30 to -187.70 ml, p &lt; 0.001), transfusion rate (OR 0.41, 95 % CI 0.30-0.55, p &lt; 0.001), wound infection (OR 0.37, 95 % CI 0.20-0.66, p &lt; 0.001) and length of hospital stay (WMD -3.57 days, 95 % CI -5.17 to -1.98 days, p &lt; 0.001). Meanwhile, MIPD group has a higher R0 resection rate (OR 1.47, 95 % CI 1.18-1.82, p &lt; 0.001) and more lymph nodes harvest (WMD 1.74, 95 % CI 1.03-2.45, p &lt; 0.001). However, it had longer operation time (WMD 83.91 min, 95 % CI 36.60-131.21 min, p &lt; 0.001). There were no significant differences between the two procedures in morbidities (p = 0.86), postoperative pancreatic fistula (p = 0.17), delayed gastric empting (p = 0.65), vascular resection (p = 0.68), reoperation (p = 0.33) and mortality (p = 0.90). Conclusions: MIPD can be a reasonable alternative to OPD with potential advantages. However, further large-volume, well-designed RCTs with extensive follow-ups are suggested to confirm and update the findings of our analysis. abstract_id: PUBMED:24743678 Minimally invasive surgical approach compared with open pancreaticoduodenectomy: a systematic review and meta-analysis on the feasibility and safety. Background: Laparoscopic and robotic pancreaticoduodenectomy have started utilization tentatively; however, the clinical benefits are still controversial. This study aims to evaluate the safety and efficiency of minimally invasive pancreaticoduodenectomy. Methods: A systematic literature search was performed through PubMed, EMBASE, and Cochrane Library database without restriction to regions, publication types, or languages. Nine studies that compared laparoscopic/robotic with open pancreaticoduodenectomy were included. Fixed or random-effects models was used to measure the pooled estimates. Sensitivity and subgroup analysis were performed to evaluate the study quality. Results: Patients who underwent minimally invasive pancreaticoduodenectomy experienced longer operative time (P=0.007), but the estimated blood loss (P=0.007), length of stay, (P=0.02), and wound infection (P=0.04) decreased. Perioperative complications, such as pancreatic fistula, delayed gastric emptying, hemorrhage, bile leakage, reoperation, and mortality, were of no significant differences. Pathologically, lymph node number was similar (P=0.11); meanwhile, margin R0 ratio was higher in minimally invasive approach group (P=0.03). Subgroup analysis manifested robotic surgery spent comparable surgical time (P=0.16) as laparotomy, with earlier discharge (P=0.04). Conclusions: This meta-analysis indicates minimally invasive pancreaticoduodenectomy may be associated with shorter hospital stay, less estimated blood loss, and positive margin rate without compromising surgical safety as open surgery. Surgical duration of robotic method could even be equivalent as laparotomy. Minimally invasive approach can be a reasonable alternative to laparotomy pancreaticoduodenectomy with potential advantages. Nevertheless, future large-volume, well-designed RCTs with extensive follow-up are awaited to confirm and update the findings of this analysis. abstract_id: PUBMED:26830906 Systemic Review of the Feasibility and Advantage of Minimally Invasive Pancreaticoduodenectomy. Background: Minimally invasive pancreaticoduodenectomy (MIPD), which includes laparoscopic pancreaticoduodenectomy (LPD) and robotic pancreaticoduodenectomy (RPD), is a complex procedure that needs to be performed by experienced surgeons. However, the safety and oncologic performance have not yet been conclusively determined. Methods: A systematic literature search was performed using the Embase, Medline, and PubMed databases to identify all studies published up to March 2015. Articles written in English containing the keywords: "pancreaticoduodenectomy" or "Whipple operation" combined with "laparoscopy," "laparoscopic," "robotic," "da vinci," or "minimally invasive surgery" were selected. Furthermore, to increase the power of evidence, articles describing more than ten MIPDs were selected for this review. Results: Twenty-six articles matched the review criteria. A total of 780 LPDs and 248 RPDs were included in the current review. The overall conversion rate to open surgery was 9.1 %. The weighted average operative time was 422.6 min, and the weighted average blood loss was 321.1 mL. The weighted average number of harvested lymph nodes was 17.1, and the rate of microscopically positive tumor margins was 8.4 %. The cumulative morbidity was 35.9 %, and a pancreatic fistula was reported in 17.0 % of cases. The average length of hospital stay was 12.4 days, and the mortality rate was 2.2 %. Conclusions: In conclusion, after reviewing one-thousand cases in the current literature, we conclude that MIPD offers a good perioperative, postoperative, and oncologic outcome. MIPD is feasible and safe in well-selected patients. abstract_id: PUBMED:28070234 Contemporary review of minimally invasive pancreaticoduodenectomy. Aim: To assess the current literature describing various minimally invasive techniques for and to review short-term outcomes after minimally invasive pancreaticoduodenectomy (PD). Methods: PD remains the only potentially curative treatment for periampullary malignancies, including, most commonly, pancreatic adenocarcinoma. Minimally invasive approaches to this complex operation have begun to be increasingly reported in the literature and are purported by some to reduce the historically high morbidity of PD associated with the open technique. In this systematic review, we have searched the literature for high-quality publications describing minimally invasive techniques for PD-including laparoscopic, robotic, and laparoscopic-assisted robotic approaches (hybrid approach). We have identified publications with the largest operative experiences from well-known centers of excellence for this complex procedure. We report primarily short term operative and perioperative results and some short term oncologic endpoints. Results: Minimally invasive techniques include laparoscopic, robotic and hybrid approaches and each of these techniques has strong advocates. Consistently, across all minimally invasive modalities, these techniques are associated less intraoperative blood loss than traditional open PD (OPD), but in exchange for longer operating times. These techniques are relatively equivalent in terms of perioperative morbidity and short term oncologic outcomes. Importantly, pancreatic fistula rate appears to be comparable in most minimally invasive series compared to open technique. Impact of minimally invasive technique on length of stay is mixed compared to some traditional open series. A few series have suggested that initiation of and time to adjuvant therapy may be improved with minimally invasive techniques, however this assertion remains controversial. In terms of short-terms costs, minimally invasive PD is significantly higher than that of OPD. Conclusion: Minimally invasive approaches to PD show great promise as a strategy to improve short-term outcomes in patients undergoing PD, but the best results remain isolated to high-volume centers of excellence. abstract_id: PUBMED:32632485 Perioperative and oncological outcomes following minimally invasive versus open pancreaticoduodenectomy for pancreatic duct adenocarcinoma. Background: The outcomes of minimally invasive pancreaticoduodenectomy have not been adequately compared with those of open pancreaticoduodenectomy in patients with pancreatic ductal adenocarcinoma. We performed a meta-analysis to compare the perioperative and oncological outcomes of these two pancreaticoduodenectomy procedures specifically in patients with pancreatic ductal adenocarcinoma. Methods: Before this study was initiated, a specific protocol was designed and has been registered in PROSEPRO (ID: CRD42020149438). Using the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) guidelines, PubMed, EMBASE, Web of Science, Cochrane Central Register, and ClinicalTrials.gov databases were systematically searched for studies published between January 1994 and October 2019. Overall survival, disease-free survival, and time to commencing adjuvant chemotherapy were the primary endpoint measurements, whereas perioperative and short-term outcomes were the secondary endpoints. Results: The final analysis included 9 retrospective cohorts comprising 11,242 patients (1377 who underwent minimally invasive pancreaticoduodenectomy and 9865 who underwent open pancreaticoduodenectomy). There were no significant differences in the patients' overall survival, operative time, postoperative complications, 30-day mortality, rate of vein resection, number of harvested lymph nodes, or rate of positive lymph nodes between the two approaches. However, disease-free survival, time to starting adjuvant chemotherapy, length of hospital stay, and rate of negative margins in patients who underwent minimally invasive pancreaticoduodenectomy showed improvements relative to those in patients who underwent open surgery. Conclusions: Minimally invasive pancreaticoduodenectomy provides similar or even improved perioperative, short-term, and long-term oncological outcomes when compared with open pancreaticoduodenectomy for patients with pancreatic ductal adenocarcinoma. abstract_id: PUBMED:32332518 Robotic pancreaticoduodenectomy in the era of minimally invasive surgery. Minimally invasive surgery has become a worldwide trend to reduce the wound and mitigate pain. However, the role of robotic pancreaticoduodenectomy (RPD) has not been well established in the era of minimally invasive surgery. It would be nearly impossible to apply minimally invasive surgery in the tedious and complicated Whipple operation without an experienced and dedicated pancreas team. The pancreatic team led by Shyr YM and Wang SE at Taipei Veterans General Hospital have already been highly accredited with a Copper Award of Symbol of National Quality (SNQ award), entitled with "Minimally Invasive Robotic Pancreatic Surgery-Small Wound for Major Pancreatic Surgery" in Taiwan in 2019. RPD was claimed to be associated with less blood loss, less delayed gastric emptying, shorter length of postoperative stay, and lower wound infection rate, but longer operative time, as compared with the traditional open pancreaticoduodenectomy (OPD). More than 99% of the patients undergoing minimally invasive RPD are satisfied with the surgical outcomes and would like to recommend RPD to those with periampullary lesions. RPD could be recommended not only to surgeons but also to patients in terms of surgical outcomes and patient satisfaction. Moreover, our study showed a survival benefit for pancreatic head adenocarcinoma in RPD group, with 1-year, 3-year, and 5-year survival of 82.9%, 45.3%, and 26.8%, respectively, as compared with 63.8%, 26.2%, and 17.4% in OPD. abstract_id: PUBMED:24154790 Recent developments in surgery: minimally invasive approaches for patients requiring pancreaticoduodenectomy. Over the past decade, minimally invasive surgery has been introduced as a means to allow manipulation of delicate tissues with outstanding visualization of the surgical field. The purpose of this article is to review the available literature regarding early postoperative outcomes and the technical challenges of minimally invasive pancreaticoduodenectomy, including robotic techniques. Herein, we provide a retrospective review of all published studies in the English literature in which a minimally invasive pancreaticoduodenectomy was performed. The reported advantages of minimally invasive pancreaticoduodenectomy include better visualization, faster recovery time, and decreased length of hospital stay. In cases of robotic approaches, some of the proposed advantages include increased dexterity and a superior ergonomic position for the operating surgeon. To our knowledge, few studies have reported results comparable to open techniques in oncologic outcomes with regard to the number of lymph nodes resected and clear margins obtained. An increasing number of pancreatic resections are being performed using minimally invasive approaches. It remains to be determined if the benefits of this technique outweigh its longer operative times and higher costs. abstract_id: PUBMED:33005845 Current statement and safe implementation of minimally invasive surgery in the pancreas. Minimally invasive pancreatic resection has become very popular in modern pancreatic surgery. Evidence of the benefits of a minimally invasive approach is accumulating thanks to prospective and randomized controlled studies. Minimally invasive surgery provides advantages to the surgeon due to the high definition of the surgical field and the freedom of fine movement of the robot, but should be considered only in selected patients and in high-volume centers. Minimally invasive distal pancreatectomy for benign and low-grade malignant tumors has established a secure position over open distal pancreatectomy, since it is associated with a shorter hospital stay, reduced blood loss, and equivalent complication rates. Minimally invasive distal pancreatectomy for pancreatic ductal adenocarcinoma appears to be a feasible, safe, and oncologically equivalent technique in experienced hands. On the other hand, the feasibility and safety of minimally invasive pancreaticoduodenectomy are still controversial compared with open pancreaticoduodenectomy. The choice of either technique among open, laparoscopic, and robotic approaches depends on surgeons' experience and hospital resources with a focus on patient safety. Further studies are needed to prove the perioperative and oncological advantages of minimally invasive surgery compared to open surgery in the pancreas. Here, we review the current status of minimally invasive pancreatic surgery and its safe implementation. abstract_id: PUBMED:38307784 Minimally invasive pancreaticoduodenectomy: A favorable approach for frail patients with pancreatic cancer. Background: Within the past decade, minimally invasive pancreaticoduodenectomy has been increasingly adopted in high-volume cancer centers. Amid broader trends of a growing older population, the numbers of frail patients with cancer are expected to increase. In this study, we compared the postoperative outcomes of open pancreaticoduodenectomy and minimally invasive pancreaticoduodenectomy in frail patients with pancreatic ductal adenocarcinoma. Methods: Using the pancreatectomy-targeted American College of Surgeons-National Surgical Quality Improvement Program database (2014-2021), we identified pancreaticoduodenectomy cases for pancreatic ductal adenocarcinoma. Patients with a modified frailty index ≥2 were considered frail. We performed 2:1 (open pancreaticoduodenectomy to minimally invasive pancreaticoduodenectomy) optimal pair propensity score matching for both patient- and disease-specific characteristics. We evaluated baseline covariate balance for homogeneity and assessed 30-day postoperative outcomes: complications, discharge destination, major morbidity, and mortality. Results: We identified 3,143 frail patients who underwent pancreaticoduodenectomy for pancreatic ductal adenocarcinoma. Of those, 275 (9%) underwent minimally invasive pancreaticoduodenectomy. Minimally invasive pancreaticoduodenectomy was associated with a lower rate of any complications compared with open pancreaticoduodenectomy (43% vs 54%; P &lt; .001), major morbidity (29% vs 35%; P = .042), and nonhome discharge (12% vs 17%; P = .022). When comparing the 2 minimally invasive pancreaticoduodenectomy approaches, robotic surgery was associated with fewer complications compared with laparoscopy (39% vs 51%; P = .040) and a lower mortality rate (1% vs 4%; P = .041) CONCLUSION: In frail patients with pancreatic cancer, minimally invasive pancreaticoduodenectomy was associated with better postoperative outcomes than open pancreaticoduodenectomy. This study builds on growing literature reporting that, when properly implemented, minimally invasive pancreaticoduodenectomy is associated with more favorable postoperative outcomes. Given the particularly high risk of complication in frail patients, implementing a preoperative frailty assessment can provide valuable insights to inform patient counseling. Answer: Yes, minimally invasive pancreaticoduodenectomy (MIPD) has been shown to have several advantages over the open pancreaticoduodenectomy (OPD) method. According to the literature, MIPD is associated with a reduction in estimated blood loss, transfusion rate, wound infection, and length of hospital stay (PUBMED:27005287). Additionally, MIPD has a higher R0 resection rate and more lymph nodes harvest, although it does have a longer operation time (PUBMED:27005287). Patients who underwent MIPD experienced less estimated blood loss, shorter hospital stays, and decreased wound infection rates, without significant differences in perioperative complications such as pancreatic fistula, delayed gastric emptying, hemorrhage, bile leakage, reoperation, and mortality compared to OPD (PUBMED:24743678). Furthermore, MIPD may be associated with shorter hospital stay, less estimated blood loss, and positive margin rate without compromising surgical safety as open surgery (PUBMED:24743678). The literature also suggests that MIPD offers good perioperative, postoperative, and oncologic outcomes and is feasible and safe in well-selected patients (PUBMED:26830906). Minimally invasive techniques, including laparoscopic, robotic, and hybrid approaches, are associated with less intraoperative blood loss than traditional OPD, but with longer operating times (PUBMED:28070234). Moreover, MIPD provides similar or even improved perioperative, short-term, and long-term oncological outcomes when compared with OPD for patients with pancreatic ductal adenocarcinoma (PUBMED:32632485). Robotic pancreaticoduodenectomy (RPD), a form of MIPD, has been associated with less blood loss, less delayed gastric emptying, shorter postoperative stay, and lower wound infection rate, but longer operative time compared with OPD (PUBMED:32332518). In summary, MIPD, including laparoscopic and robotic approaches, has been shown to have several advantages over OPD, such as reduced blood loss, shorter hospital stays, and potentially improved oncological outcomes, although it may require longer operative times and is associated with higher costs (PUBMED:24154790; PUBMED:33005845; PUBMED:38307784).
Instruction: Women with chest pain: is exercise testing worthwhile? Abstracts: abstract_id: PUBMED:33634176 Safety of Exercise Testing in the Clinical Chinese Population. This 18-year cross-sectional study was conducted to provide data on the safety of exercise testing in the clinical Chinese population. We retrospectively identified exercise tests completed at Xiangya Hospital of Central South University from January 1, 2002 to December 31, 2019. From 43,130 unique individuals (50.9% female), a total of consecutive 50,142 tests (standard exercise testing 29,466; cardiopulmonary exercise testing 20,696) were retrieved. Demographics, patients' medical history, exercise testing characteristics, and exercise testing-related adverse events were described. Safety data is expressed as the number of adverse events per 10,000 tests, with 95% confidence interval. The average patients' age was 51 ± 13 years. The majority of patients were diagnosed with at least one disease (N = 44,941, 89.6%). Tests were maximal or symptom-limited. Common clinical symptoms included dizziness (6,822, 13.6%), chest pain or distress (2,760, 5.5%), and musculoskeletal limitations (2,507, 5.0%). Out of 50,142 tests, three adverse events occurred, including one sustained ventricular tachycardia, one sinus arrest with junctional escape rhythm at a rate of 28 bpm, and one syncopal event with fecal and urinary incontinence. The rate of adverse events was 0.8 events per 10,000 tests (95% confidence interval, 0.2-3.0) in men, 0.4 per 10,000 tests (0.7-2.2) in women, and 0.6 per 10,000 tests (0.21.8) total. This study represents the largest dataset analysis of exercise testing in the clinical Chinese population. Our results demonstrate that clinical exercise testing is safe, and the low rate of adverse events related to exercise testing might be due to the overall changes in clinical practice over time. abstract_id: PUBMED:29101917 Troponin testing in the primary care setting. Background: Chest pain is a common presenting complaint in general practice. Serum troponin testing is an important clinical tool to help identify patients who present with suspected acute coronary syndrome (ACS). Objective: This article will discuss the role of troponin testing in the diagnosis of ACS, and the role of high-sensitive troponin, which is now in widespread use. The importance of clinical acumen in the interpretation of troponin testing and the pitfalls of troponin testing in the primary care setting will also be explored. Discussion: Patients should be promptly referred to the hospital when there is a high clinical suspicion of ACS. This is to ensure early diagnosis, provide specialist care and minimise the risk of complications. For patients who present with suspected ACS, troponin testing in the community should not delay referral to the emergency department. Troponin testing has a limited role in the primary care setting, which will be discussed in this article. abstract_id: PUBMED:33308833 Implementing Point-of-Care Troponin Testing in the Emergency Department: Impact on Time to Result. Introduction: In the emergency department, troponin assays are commonly used and essential in the evaluation of chest pain and diagnosis of acute coronary syndrome. This study was designed to assess the potential impact of implementing point-of-care troponin testing by comparing the time to point-of-care laboratory result and time to conventional laboratory result. Methods: The study enrolled 60 ED patients deemed to need a troponin test in the evaluation of low-risk chest pain (HEART score &lt;4 based on history, electrocardiogram, age, risk factors). Point-of-care troponin testing was performed with the same blood sample obtained for a conventional troponin assay. If the provider determined that the patient required 2 troponin tests, the second laboratory draw was used in the data collection. This was to correlate the time of laboratory result to time of disposition. Results: Of the 60 subjects enrolled, 2 subjects were excluded because of user errors with the point-of-care testing equipment and 2 others for not meeting inclusion criteria on later review. The median times for the point-of-care troponin and conventional troponin assays were 11:00 minutes (interquartile range 10:00-15:30) and 40:00 minutes (interquartile range 31:30-52:30), respectively; P &lt; 0.001. There were 3 extreme outliers from the conventional troponin assay that significantly skewed the distribution of the mean, making the median the more accurate assessment of the central tendency. Discussion: Point-of-care troponin testing provided results in a median time 29 minutes quicker than the conventional troponin assay. This result is statistically significant and has the potential to greatly improve time to disposition in all patients with chest pain requiring a troponin assay. abstract_id: PUBMED:8795480 Women with chest pain: is exercise testing worthwhile? Objective: To determine the diagnostic value of the exercise tolerance test (ETT) in women presenting with chest pain. Design: Prospective study of all women presenting to a centre with chest pain between 1987 and 1993 who were assessed by an ETT and coronary angiography. Setting: The outpatient clinic of one consultant cardiologist in a tertiary referral centre. Patients: All women referred to this outpatient clinic with chest pain were screened. For inclusion, patients had to perform ETT and undergo coronary angiography. Of the 347 referred during this period, 142 were excluded because they were unable to perform ETT or because of Q waves or other abnormalities on their resting electrocardiogram. Results: Overall the sensitivity of the ETT was 68% and the specificity was 61%, with a positive predictive value of 0.61 and a negative predictive value of 0.68. There were 42 false positive and 31 false negative ETT results (36% of the study group). The predictive value of a negative test was higher in younger women (&lt; 52 years) than in the older group (&gt; or = 52 years) (P = 0.004), but the positive predictive value in the two groups was not significantly different. The predictive value of a negative test was also higher in those with two or fewer risk factors than in those with three or more risk factors (P = 0.001). The negative predictive value for those women above 52 years with three or more risk factors (24% of the study group) was only 0.25. Lack of chest pain during ETT was associated with a higher negative predictive value in the younger group than in the older women (P = 0.006). Conclusions: In women with chest pain use of the ETT was a misleading predictor of the presence or absence of coronary disease in 36% of these patients. In particular, a negative test in older women with three or more risk factors had a very low predictive value. The inclusion of risk factors and division by age can, however, be used to identify a population at intermediate risk for coronary artery disease in whom the ETT result has the highest diagnostic utility. abstract_id: PUBMED:34752274 Performance and Interpretation of Office Exercise Stress Testing. In an era where cardiovascular disease continues to increase in prevalence, chest pain is a commonly encountered complaint in the outpatient setting. Clinicians are often tasked with the challenge of selecting the most appropriate screening tool in the evaluation of a patient with suspected coronary artery disease. With proper consideration of indications and contraindications, exercise electrocardiogram (ECG) stress testing is an accessible, cost-conscious, and validated outpatient diagnostic modality for predicting coronary artery disease. abstract_id: PUBMED:33041122 The disutility of stress testing in low-risk HEART Pathway patients. Background: The HEART Pathway identifies low-risk chest pain patients for discharge from the Emergency Department without stress testing. However, HEART Pathway recommendations are not always followed. The objective of this study is to determine the frequency and diagnostic yield of stress testing among low-risk patients. Methods: An academic hospital's chest pain registry was analyzed for low-risk HEART Pathway patients (HEAR score ≤ 3 with non-elevated troponins) from 1/2017 to 7/2018. Stress tests were reviewed for inducible ischemia. Diagnostic yield was defined as the rate of obstructive CAD among patients with positive stress testing. T-test or Fisher's exact test was used to test the univariate association of age, sex, race/ethnicity, and HEAR score with stress testing. Multivariate logistic regression was used to determine the association of age, sex, race/ethnicity, and HEAR score with stress testing. Results: There were 4743 HEART Pathway assessments, with 43.7% (2074/4743) being low-risk. Stress testing was performed on 4.1% (84/2074). Of the 84 low-risk patients who underwent testing, 8.3% (7/84) had non-diagnostic studies and 2.6% (2/84) had positive studies. Among the 2 patients with positive studies, angiography revealed that 1 had widely patent coronary arteries and the other had multivessel obstructive coronary artery disease, making the diagnostic yield of stress testing 1.2% (1/84). Each one-point increase in HEAR score (aOR 2.17, 95% CI 1.45-3.24) and being male (aOR 1.59, 95% CI 1.02-2.49) were associated with testing. Conclusions: Stress testing among low-risk HEART Pathway patients was uncommon, low yield, and more likely in males and those with a higher HEAR score. abstract_id: PUBMED:32464138 Cardiac biomarker measurement by point of care testing - Development, rationale, current state and future developments. Cardiac biomarker measurements are integral to the diagnosis and management of patients presenting with breathlessness and chest pain. Measurement of B type natriuretic peptide either directly or of the N-terminal portion of the prohormone although possible by point of care testing (POCT) has largely become a laboratory test. Measurement of the cardiac troponins cardiac troponin T (cTnT) and cardiac troponin I (cTnI) can easily and accurately be performed by POCT. The situation has been complicated by the development of high sensitivity assays for cTnT and cTnI and the subsequent development of rapid rule out algorithms allowing patient categorisation and discharge on admission and 1 to 2 h following admission. This article reviews the development of POCT for cardiac biomarkers, the evidence base comparing POCT with central laboratory testing, its strengths and limitations, and how POCT fits into the world of high sensitivity troponin assays. It also discusses what evidence there is that POCT can form part of rapid decision-making strategies and how this applies in an era of algorithms based on and is derived from measurement of high sensitivity troponin in the central laboratory. abstract_id: PUBMED:14617551 Improving the positive predictive value of exercise testing in women. Objective: To identify exercise test variables that can improve the positive predictive value of exercise testing in women. Design: Cohort study. Setting: Regional cardiothoracic centre. Subjects: 1286 women and 1801 men referred by primary care physicians to a rapid access chest pain clinic, of whom 160 women and 406 men had ST depression of at least 1 mm during exercise testing. The results for 136 women and 124 men with positive exercise tests were analysed. Main Outcome Measures: The proportion of women with a positive exercise test who could be identified as being at low risk for prognostic coronary heart disease and the resulting improvement in the positive predictive value. Results: Independently of age, an exercise time of more than six minutes, a maximum heart rate of more than 150 beats/min, and an ST recovery time of less than one minute were the variables that best identified women at low risk. One to three of these variables identified between 11.8% and 41.2% of women as being at low risk, with a risk for prognostic disease of between 0-11.5%. The positive predictive value for the remaining women was improved from 47.8% up to 61.5%, and the number of normal angiograms was potentially reducible by between 21.1-54.9%. By the same criteria, men had higher risks for prognostic disease. Conclusions: A strategy of discriminating true from false positive exercise tests is worthwhile in women but less successful in men. abstract_id: PUBMED:24201078 Provocative testing for coronary reactivity and spasm. Coronary spasm is an important and often overlooked etiology of chest pain. Although coronary spasm, or Prinzmetal's angina, has been thought of as benign, contemporary studies have shown serious associated adverse outcomes, including acute coronary syndrome, arrhythmia, and death. Definitive diagnosis of coronary spasm can at times be difficult, given the transience of symptoms. Numerous agents have been historically described for provocative testing. We provide a review of published data for the role of provocation testing in the diagnosis of coronary spasm. abstract_id: PUBMED:24768399 Cardiopulmonary exercise testing is more accurate than ECG-stress testing in diagnosing myocardial ischemia in subjects with chest pain. Background: Cardiopulmonary exercise stress testing (CPET) is used to grade the severity of heart failure and to assess its prognosis. However it is unknown whether CPET may improve diagnostic accuracy of standard ECG stress testing to identify or exclude obstructive coronary artery disease (O-CAD) in patients with chest pain. Methods: We prospectively studied 1265 consecutive subjects (55 ± 8 years, 156 women) who were evaluated with ECG stress testing (ET) for chest pain. No one had a documented O-CAD. All patients performed an incremental CPET with ECG recordings on an electronically braked cycle ergometer. Results: Of 1265 patients, 73 had a positive CPET and 1192 had a negative CPET. Seventy-three patients with a positive CPET and 71 patients with a negative CPET agreed to undergo nuclear SPECT imaging and coronary angiography. Follow-up lasted 48 ± 7 months. As compared with ET, sensitivity, specificity, PPV and NPV were all improved significantly (ET: 48%, 55%, 33%, 95%; CPET: 88%, 98%, 73%, 99%, respectively, P&lt;0.001). Patients with both peak VO2&gt;91% of predicted VO2 max and absence of VO2-related signs of myocardial ischemia had no evidence of O-CAD in 100% of cases. Cardiac events occurred in 32 patients with a positive CPET and 8 patients with a negative CPET (log rank 18.2, P&lt;0.0001). Conclusions: In patients with chest pain, CPET showed a better diagnostic and predictive accuracy than traditional ET to detect/exclude myocardial ischemia. Its use should be encouraged among physicians as a first line diagnostic tool in clinical practice. Answer: Exercise testing in women presenting with chest pain has been a subject of investigation to determine its diagnostic value. A prospective study that assessed the diagnostic value of the exercise tolerance test (ETT) in women with chest pain found that the ETT had a sensitivity of 68% and a specificity of 61%, with a positive predictive value of 0.61 and a negative predictive value of 0.68. However, the study also reported that 36% of the patients had either false positive or false negative ETT results, indicating that the ETT could be a misleading predictor of the presence or absence of coronary disease in a significant proportion of these patients. The study suggested that the inclusion of risk factors and division by age could help identify a population at intermediate risk for coronary artery disease in whom the ETT result has the highest diagnostic utility (PUBMED:8795480). Another study aimed to identify exercise test variables that could improve the positive predictive value of exercise testing in women. It found that an exercise time of more than six minutes, a maximum heart rate of more than 150 beats/min, and an ST recovery time of less than one minute were the variables that best identified women at low risk. By using these variables, the positive predictive value for the remaining women was improved from 47.8% up to 61.5%, and the number of normal angiograms was potentially reducible by between 21.1-54.9% (PUBMED:14617551). In the context of office exercise stress testing, it is noted that with proper consideration of indications and contraindications, exercise electrocardiogram (ECG) stress testing is an accessible, cost-conscious, and validated outpatient diagnostic modality for predicting coronary artery disease (PUBMED:34752274). In summary, while exercise testing can be worthwhile in women with chest pain, particularly when considering specific variables and risk factors, it is important to recognize its limitations and the potential for misleading results. Clinicians should use exercise testing judiciously and consider other diagnostic tools and patient-specific factors when evaluating women with chest pain.
Instruction: Chest pain evaluation in the emergency department: can MDCT provide a comprehensive evaluation? Abstracts: abstract_id: PUBMED:8470835 Emergency cardiac stress testing in the evaluation of emergency department patients with atypical chest pain. Study Objectives: To determine the feasibility, safety, and reliability of emergency cardiac treadmill exercise stress testing (CTEST) in the evaluation of emergency department patients with atypical chest pain. Design: Thirty-two patients with atypical chest pain, normal ECGs, and risk factor stratification having low-probability of coronary artery disease were evaluated prospectively using outpatient, emergency CTEST. Study patients were compared with a retrospectively selected sample of admitted patients diagnosed with atypical chest pain who met the study criteria and were evaluated with CTEST as inpatients. All patients had follow-up at three and six months after evaluation. Setting: University-affiliated community teaching hospital with 65,000 annual ED visits. Results: All patients had normal CTEST. No patient had evidence of coronary artery disease, myocardial infarction, or sudden death during the follow-up period. The average length of stay was 5.5 hours for emergency CTEST patients versus two days for inpatients. The average patient charge was $467 for ED evaluation with emergency CTEST versus $2,340 for inpatient evaluation. Conclusion: Emergency CTEST is a safe, efficient, cost-effective, and practical method of evaluating selected ED patients with chest pain. It is a useful aid for clinical decision making and may help to prevent unnecessary hospital admissions. abstract_id: PUBMED:16037533 Chest pain evaluation in the emergency department: can MDCT provide a comprehensive evaluation? Objective: The purpose of our study was to determine whether MDCT can provide a comprehensive assessment of cardiac and noncardiac causes of chest pain in stable emergency department patients. Subjects And Methods: Patients with chest pain who presented to the emergency department without definitive findings of acute myocardial infarction based on history, physical examination, and ECG were recruited immediately after the initial clinical assessment. For each patient, the emergency department physician was asked whether a CT scan would normally have been ordered on clinical grounds (e.g., to exclude pulmonary embolism). Each consenting patient underwent enhanced ECG-gated 16-MDCT. Ten cardiac phases were reconstructed. The images were evaluated for cardiac (coronary calcium and stenosis, ejection fraction, and wall motion and perfusion) and significant noncardiac (pulmonary embolism, dissection, pneumonia, and so forth) causes of chest pain. Correlation was made between the presence of significant cardiac and noncardiac findings on CT and the final clinical diagnosis based on history, examination, and any subsequent cardiac workup at the 1-month follow-up by a consensus of three physicians. Results: Sixty-nine patients met all criteria for enrollment in the study, of whom 45 (65%) would not otherwise have undergone CT. Fifty-two patients (75%) had no significant CT findings and a final diagnosis of clinically insignificant chest pain. Thirteen patients (19%) had significant CT findings (cardiac, 10; noncardiac, 3) concordant with the final diagnosis. CT failed to suggest a diagnosis in two patients (3%), both of whom proved to have clinically significant coronary artery stenoses. In two patients (3%), CT overdiagnosed a coronary stenosis. Sensitivity and specificity for the establishment of a cardiac cause of chest pain were 83% and 96%, respectively. Overall sensitivity and specificity for all other cardiac and noncardiac causes were 87% and 96%, respectively. Conclusion: ECG-gated MDCT appears to be logistically feasible and shows promise as a comprehensive method for evaluating cardiac and noncardiac chest pain in stable emergency department patients. Further hardware and software improvements will be necessary for adoption of this paradigm in clinical practice. abstract_id: PUBMED:9470340 Cost effective nuclear scanning in a comprehensive and systematic approach to the evaluation of chest pain in the emergency department. The cost of evaluation and treatment of patients presenting to the emergency department with chest pain is estimated in billions of dollars. Current standards of practice, however, cannot reliably distinguish between high- and low-risk patients. Efficient management of this population requires that we reduce: 1) delays in therapy, 2) soft admissions, 3) inappropriate dispositions, and 4) cost. In a multidisciplinary effort, the acute cardiac team developed a comprehensive evaluation and triage strategy based on risk. Our data suggest that we have been able to accomplish all four goals, including a reduction in overall costs through the use of perfusion imaging. abstract_id: PUBMED:21247533 Usefulness of comprehensive cardiothoracic computed tomography in the evaluation of acute undifferentiated chest discomfort in the emergency department (CAPTURE). Newer cardiac computed tomographic (CT) technology has permitted comprehensive cardiothoracic evaluations for coronary artery disease, pulmonary embolism, and aortic dissection within a single breath hold, independent of the heart rate. We conducted a randomized diagnostic trial to compare the efficiency of a comprehensive cardiothoracic CT examination in the evaluation of patients presenting to the emergency department with undifferentiated acute chest discomfort or dyspnea. We randomized the emergency department patients clinically scheduled to undergo a dedicated CT protocol to assess coronary artery disease, pulmonary embolism, or aortic dissection to either the planned dedicated CT protocol or a comprehensive cardiothoracic CT protocol. All CT examinations were performed using a 64-slice dual source CT scanner. The CT results were immediately communicated to the emergency department providers, who directed further management at their discretion. The subjects were then followed for the remainder of their hospitalization and for 30 days after hospitalization. Overall, 59 patients (mean age 51.2 ± 11.4 years, 72.9% men) were randomized to either dedicated (n = 30) or comprehensive (n = 29) CT scanning. No significant difference was found in the median length of stay (7.6 vs 8.2 hours, p = 0.79), rate of hospital discharge without additional imaging (70% vs 69%, p = 0.99), median interval to exclusion of an acute event (5.2 vs 6.5 hours, p = 0.64), costs of care (p = 0.16), or the number of revisits (p = 0.13) between the dedicated and comprehensive arms, respectively. In addition, radiation exposure (11.3 mSv vs 12.8 mSv, p = 0.16) and the frequency of incidental findings requiring follow-up (24.1% vs 33.3%, p = 0.57) were similar between the 2 arms. Comprehensive cardiothoracic CT scanning was feasible, with a similar diagnostic yield to dedicated protocols. However, it did not reduce the length of stay, rate of subsequent testing, or costs. In conclusion, although this "triple rule out" protocol might be helpful in the evaluation of select patients, these findings suggest that it should not be used routinely with the expectation that it will improve efficiency or reduce resource use. abstract_id: PUBMED:35962251 Evaluation of diagnostic efficiency of bedside cardiac ultrasonography performed by emergency specialist. Background: In emergency medicine, ultrasound is frequently used in the diagnostic and therapeutic procedures and evaluation of treatment of critically ill patients simultaneously, due to its bedside applicability, rapidness and inexpensive cost. Aim: The competence of emergency physician in focused cardiac ultrasound evaluations and the success of diagnosing cardiac diseases were evaluated in a patient group presenting to the emergency department with complaints of non-traumatic chest pain and shortness of breath. Methods: We included patients with complaints of chest pain and shortness of breath and underwent cardiac ultrasound performed by emergency physician. Then, patients were evaluated by a cardiologist. The diagnoses made by the emergency physician were compared with the diagnoses made by the cardiologist. Results: A total of 303 patients were included. The diagnoses made by the emergency medicine specialist and cardiologist as a result of the evaluation were recorded as 56.7% vs 52.10% for acute coronary syndrome, 29.70% vs 31.60% for congestive heart failure, 3.6% vs 3.30% for pulmonary embolism, 2.1% vs 2.10% for hypertensive pulmonary edema, 1.9% vs 2.10% for pericarditis, and 0.60% vs 1.30% for aortic dissection. Conclusion: It was determined that focused cardiac ultrasound performed by emergency physician was sufficient in terms of accuracy of findings and diagnosis, and played an important role in excluding or including fatal diagnoses and conditions. The success rates of emergency physician can be further increased with long-term and comprehensive training programs. abstract_id: PUBMED:11346505 Exercise testing in special situations: ER, preoperative and disability evaluation. In a number of medical centers, exercise testing has proven to be a safe and useful tool in the evaluation of patients presenting chest pain in an emergency room. At these centers, after a period of observation without evidence of acute myocardial infarction, exercise testing is done. If the exercise test result is normal, the patient is discharged from the emergency room, without being admitted to the hospital. Exercise testing is a well-accepted noninvasive method to evaluate at-risk patients being considered for elective noncardiac surgery. Exercise testing is frequently used to determine functional capacity during disability assessment. abstract_id: PUBMED:9470352 Emergency medical service system evaluation and planning strategies for prehospital chest pain in Howard County, Maryland. Patients experiencing ischemic chest pain represent one of the most common emergencies in prehospital emergency medical service (EMS) systems. Recent national guidelines for emergency department and EMS care of chest pain and acute myocardial infarction (AMI) patients have quantified standards for time to evaluation and treatment. Prehospital EMS systems and hospitals will need to change their processes of care for chest pain patients to meet or exceed these national guidelines. In addition, the EMS system and the hospital will need to work more closely and seamlessly to integrate chest pain care for the maximum benefit of the patient. The Howard County Department of Fire and Rescue Services (DFRS) recognized the need for change in its EMS system. Its unique approach involved chartering a multidisciplinary task force to assess the current EMS system. The task force analyzed the literature to decide evidence-based standards for EMS system performance, and evaluated public and private EMS systems that possessed state-of-the-art processes for providing patient care. Based upon the information gathered and considering qualities essential to future performance, the task force made recommendations for changes to Howard County's EMS system. This study reports the task force recommendations and describes those implemented thus far. Changes to the EMS system are quantified, descriptive data from the system are reported, and future goals are presented. abstract_id: PUBMED:17432765 Evaluation of acute chest pain in the emergency department: utility of multidetector computed tomography. Distinguishing insignificant from life-threatening causes of acute chest pain in patients who present to the emergency department remains a major challenge. Initial evaluation with history, electrocardiography, and biochemical markers is often unrevealing leading to additional workup. Radionuclide perfusion and echocardiography may be diagnostic but provide only indirect assessment of coronary status. The development of multidetector computed tomography (MDCT) and its increasingly frequent placement near the emergency suite has facilitated its use for the evaluation of serious noncardiac diagnoses such as pulmonary embolism and aortic dissection. Recent innovations in MDCT technology have facilitated the depiction of coronary arteries. These advances have led to the possibility of using CT to evaluate cardiac etiologies of chest pain, using either a comprehensive or triple rule out protocol to assess both cardiac and noncardiac causes or a dedicated coronary protocol. This article will review both options and describes our preliminary experience with the first of these protocols. The article also reviews the potential value of an acute chest pain CT protocol and the considerable challenges that remain prior to its implementation for routine clinical use. abstract_id: PUBMED:10340735 Evaluation of the value of an observation ward in an emergency department. A prospective study was undertaken to describe the pattern of utilization of an observation ward in an emergency department (ED). During a 1-month study period, the following data were collected for all patients admitted to the observation ward: (1) patient demographics, (2) purpose of observation, (3) interventions at the observation ward, (4) disposal destinations, (5) disposal diagnosis, (6) outcome categories, and (7) duration of stay. A total of 12188 patients attended our ED and 1042 (8.51%) patients were admitted into the observation ward. An average of 34 patients was admitted into the observation ward each day. The age of the patients ranged from neonates to 94 years (mean age of 45.7 years, +/-25.7 SD). Sex distribution was almost equal. The diagnostic evaluation group was the largest (58%) followed by short-term therapy (38%) and psychosocial problems (3.5%). Of the 554 patients with a disposal diagnosis, 350 (59%) had their diagnosis clarified after the observation period. The percentage of patients admitted to the hospital was 23%. There were 42 chest pain and 46 trauma patients. The impact of an observation ward on the service in ED was discussed. abstract_id: PUBMED:9107535 Evaluation of chest pain in the emergency department. The evaluation of chest pain in the emergency setting should be systematic, risk based, and goal driven. An effective program must be able to evaluate all patients with equal thoroughness under the assumption that any patient with chest pain could potentially be having an MI. The initial evaluation is based on the history, a focused physical examination, and the ECG. This information is sufficient to categorize patients into groups at high, moderate, and low risk. Table 14 is a template for a comprehensive chest-pain evaluation program. Patients at high risk need rapid initiation of appropriate therapy: thrombolytics or primary angioplasty for the patients with MIs or aspirin/heparin for the patients with unstable angina. Patients at moderate risk need to have an acute coronary syndrome ruled in or out expediently and additional comorbidities addressed before discharge. Patients at low risk also need to be evaluated, and once the likelihood of an unstable acute coronary syndrome is eliminated, they can be discharged with further evaluation performed as outpatients. Subsequent evaluation should attempt to assign a definitive diagnosis while also addressing issues specific to risk reduction, such as cholesterol lowering and smoking cessation. It is well documented that 4% to 5% of patients with MIs are inadvertently missed during the initial evaluation. This number is surprisingly consistent among many studies using various protocols and suggests that an initial evaluation limited to the history, physical examination, and ECG will fail to identify the small number of these patients who otherwise appear at low risk. The solution is to improve the sensitivity of the evaluation process to identify these patients. It appears that more than simple observation is required, and at the present time, no simple laboratory test can meet this need. However, success has been reported with a number of strategies including emergency imaging with either radionuclides such as sestamibi or echocardiography. Early provocative testing, either stress or pharmaceutic, may also be effective. The added value of these tests is only in their use as part of a systematic protocol for the evaluation of all patients with acute chest pain. The initial evaluation of the patient with chest pain should always consider cardiac ischemia as the cause, even in those with more atypical symptoms in whom a cardiac origin is considered less likely. The explicit goals for the evaluation of acute chest pain should be to reduce the time to treat MIs and to reduce the inadvertent discharge of patients with occult acute coronary syndromes. All physicians should become familiar with appropriate risk stratification of patients with acute chest pain. Systematic strategies must be in place to assure rapid and consistent identification of all patients and the expedient initiation of treatment for those patients with acute coronary syndromes. These strategies should include additional methods of identifying acute coronary syndromes in patients initially appearing as at moderate or low risk to assure that no unstable patients are discharged. All patients should be followed up closely until the cardiovascular evaluation is completed and, when possible, a definitive diagnosis is determined. Finally, this must be done efficiently, cost-effectively, and in a manner that will result in an overall improvement in patient care. Answer: Yes, MDCT (multidetector computed tomography) can provide a comprehensive evaluation of both cardiac and noncardiac causes of chest pain in stable emergency department patients. A study aimed to determine whether MDCT could provide a comprehensive assessment of cardiac and noncardiac causes of chest pain in stable emergency department patients found that ECG-gated MDCT was logistically feasible and showed promise as a comprehensive method for evaluating these causes. The study reported that MDCT had a sensitivity and specificity for the establishment of a cardiac cause of chest pain of 83% and 96%, respectively, and overall sensitivity and specificity for all other cardiac and noncardiac causes were 87% and 96%, respectively. However, the study also noted that further hardware and software improvements would be necessary for the adoption of this paradigm in clinical practice (PUBMED:16037533). Additionally, another study evaluating the utility of multidetector computed tomography in the emergency department for acute chest pain suggested that MDCT, with its ability to depict coronary arteries, could be used to evaluate cardiac etiologies of chest pain using either a comprehensive or triple rule out protocol to assess both cardiac and noncardiac causes or a dedicated coronary protocol. This study also highlighted the potential value of an acute chest pain CT protocol and the challenges that remain prior to its implementation for routine clinical use (PUBMED:17432765). In conclusion, MDCT appears to be a promising tool for comprehensive chest pain evaluation in the emergency department, with high sensitivity and specificity for detecting cardiac and noncardiac causes. However, its routine use may require further technological advancements and consideration of the challenges associated with its implementation.
Instruction: Does primary sclerosing cholangitis impact quality of life in patients with inflammatory bowel disease? Abstracts: abstract_id: PUBMED:26743764 Factors that Influence Health-Related Quality of Life in Patients with Primary Sclerosing Cholangitis. Background: Primary sclerosing cholangitis (PSC) is an incurable, cholestatic liver disease often coincident with inflammatory bowel disease (IBD). Aims: To evaluate the impact of liver disease and IBD on health-related quality of life (HRQoL) in PSC. Methods: A mixed-methods, cross-sectional study was performed at a tertiary center. Short Form-36 (SF-36) scores were compared between PSC, Canadian normative data, and disease controls. Disease-specific instruments scores [PBC-40, Short IBD questionnaire, Liver Disease Quality of Life Questionnaire (LDQOL)] were compared between PSC and disease controls. Multivariable regression identified factors independently associated with final SF-36 component scores. Qualitative evaluation of patient questionnaires was performed using a content analysis framework. Results: One hundred and sixty-two surveys were completed (99 PSC, 26 primary biliary cirrhosis, 16 non-autoimmune cholestatic liver disease, and 21 IBD). PSC patients had significantly lower SF-36 scores than Canadian controls, but similar scores to disease controls. LDQOL most accurately predicted HRQoL. Factors negatively associated with physical HRQoL included shorter IBD duration, liver disease symptoms, and decompensated cirrhosis. Mental HRQoL was influenced by liver disease and IBD symptoms, pruritus, social isolation, and depression. Nearly 75 % expressed existential anxiety regarding disease progression and diminished life expectancy, with 25 % disclosing social isolation. Conclusions: Patients with PSC have significantly lower HRQoL than healthy controls. Both symptoms of IBD and chronic liver disease impact HRQoL in patients with PSC, which lead to significant psychologic burden that is expressed by existential anxieties and social isolation. A PSC-specific HRQoL tool is critical to adequately quantify the distinct impact of IBD and cholestatic liver disease. abstract_id: PUBMED:25341059 Pruritus is associated with severely impaired quality of life in patients with primary sclerosing cholangitis. Objectives: Quality of life, fundamental to the individual patient, has shown a lack of correlation with severity in research on several diseases. Thus, we aimed to identify factors associated with quality of life in patients with primary sclerosing cholangitis. Methods: The Short Form Health Survey and the Patient Health Questionnaire were used to assess quality of life and depression. Complete data sets of 113 patients were analyzed for correlation with sex, age, presence of concomitant inflammatory bowel disease and dominant stenosis, frequency of pruritus, and Mayo Risk Score. Results: Physical functioning decreased with age (P&lt;0.001). Further, women experienced more prominent role limitations because of physical (P&lt;0.03) and emotional (P&lt;0.01) problems. Although patients' quality of life and depression scores were only slightly lower than normal, more frequent pruritus was associated with a considerable reduction in quality of life in terms of physical and social functioning, general and mental health, bodily pain, vitality, and roles (because of physical problems) (P&lt;0.01). It did not differ significantly according to the Mayo Risk Score or the presence of dominant stenoses. Depression scores were only significantly affected in patients with more frequent pruritus. Conclusion: Pruritus severely affects quality of life in patients with primary sclerosing cholangitis and is associated with depression to varying extents, although the most commonly used parameters of disease severity do not correspond to quality of life in these patients. These findings need to be considered with respect to treatment outcomes and indications for liver transplantation. abstract_id: PUBMED:19637332 Does primary sclerosing cholangitis impact quality of life in patients with inflammatory bowel disease? Background: Impairment of health-related quality of life (HRQoL) is an important concern in inflammatory bowel disease (IBD; ulcerative colitis [UC], Crohn's disease [CD]). Between 2%-10% of patients with IBD have primary sclerosing cholangitis (PSC). There has been limited examination of the disease-specific HRQoL in this population compared to non-PSC IBD controls. Methods: This was a retrospective, case-control study performed at a tertiary referral center. Cases comprised 26 patients with a known diagnosis of PSC and IBD (17 UC, 9 CD). Three random controls were selected for each case after matching for IBD type, gender, age, and duration of disease. Disease-specific HRQoL was measured using the Short Inflammatory Bowel Disease Questionnaire (SIBDQ). Disease activity for CD was measured using the Harvey-Bradshaw index (HB) and using the UC activity index for UC. Independent predictors of HRQoL were identified. Results: There was no significant difference in the age, gender distribution, or disease duration between PSC-IBD and controls. There was no difference in use of immunomodulators or biologics between the 2 groups. Mean SIBDQ score was comparable between PSC-IBD patients (54.5) and controls (54.1), both for UC and CD. Likewise, the disease activity scores were also similar (2.8 versus 3.1, P = 0.35). On multivariate analysis, higher disease activity score (-1.33, 95% confidence interval [CI] 95% CI -1.85 to -0.82) and shorter disease duration were predictive of lower HRQoL. Coexisting PSC did not influence IBD-related HRQoL. There was a higher proportion of permanent work disability in PSC-IBD (7.7%) compared to controls (0%). Conclusions: PSC does not seem to influence disease-specific HRQoL in our patients with IBD but is associated with a higher rate of work disability. abstract_id: PUBMED:25546575 Health-related quality of life among patients with primary sclerosing cholangitis. Background & Aims: To assess health-related quality of life (HRQoL) of patients with primary sclerosing cholangitis (PSC), and to compare it with that of the general population. Also, to examine changes in HRQoL in newly diagnosed PSC patients at a follow-up 1-2 years later, and to compare their HRQoL with HRQoL of newly diagnosed inflammatory bowel disease (IBD) patients. Furthermore, sources of and need for disease-related information among PSC patients were surveyed. Methods: Primary sclerosing cholangitis patients filled in the survey questionnaire when attending an endoscopic retrograde cholangiography examination. The 15D served as a general HRQoL instrument. The follow-up questionnaire was mailed to the newly diagnosed patients 1-2 years later. Results: No significant difference was seen in 15D scores between PSC patients and general population, but the dimensions of excretion (P &lt; 0.001), depression (P = 0.003), distress (P = 0.003) and vitality (P = 0.005) were significantly lower in PSC. Age and symptoms affected HRQoL but severity of biliary changes did not. Those with newly diagnosed IBD had lower 15D scores than those with PSC. No significant changes were observed in 15D scores of new PSC patients in the follow-up. Many patients were dissatisfied with information received. Conclusion: Newly diagnosed PSC patients have better HRQoL than do IBD patients, and no significant HRQoL changes were observed in the mean follow-up of 1.58 years after PSC diagnosis. ERC findings did not correlate with HRQoL or symptoms. HRQoL of PSC patients was mostly comparable with that of general population, but special attention should be paid to patients' psychological well-being. abstract_id: PUBMED:33743710 Quality of life in primary sclerosing cholangitis: a systematic review. Background: Primary sclerosing cholangitis (PSC) is a rare bile duct and liver disease which can considerably impact quality of life (QoL). As part of a project developing a measure of QoL for people with PSC, we conducted a systematic review with four review questions. The first of these questions overlaps with a recently published systematic review, so this paper reports on the last three of our initial four questions: (A) How does QoL in PSC compare with other groups?, (B) Which attributes/factors are associated with impaired QoL in PSC?, (C) Which interventions are effective in improving QoL in people with PSC?. Methods: We systematically searched five databases from inception to 1 November 2020 and assessed the methodological quality of included studies using standard checklists. Results: We identified 28 studies: 17 for (A), ten for (B), and nine for (C). Limited evidence was found for all review questions, with few studies included in each comparison, and small sample sizes. The limited evidence available indicated poorer QoL for people with PSC compared with healthy controls, but findings were mixed for comparisons with the general population. QoL outcomes in PSC were comparable to other chronic conditions. Itch, pain, jaundice, severity of inflammatory bowel disease, liver cirrhosis, and large-duct PSC were all associated with impaired QoL. No associations were found between QoL and PSC severity measured with surrogate markers of disease progression or one of three prognostic scoring systems. No interventions were found to improve QoL outcomes. Conclusion: The limited findings from included studies suggest that markers of disease progression used in clinical trials may not reflect the experiences of people with PSC. This highlights the importance for clinical research studies to assess QoL alongside clinical and laboratory-based outcomes. A valid and responsive PSC-specific measure of QoL, to adequately capture all issues of importance to people with PSC, would therefore be helpful for clinical research studies. abstract_id: PUBMED:25388280 Prospective evaluation of PBC-specific health-related quality of life questionnaires in patients with primary sclerosing cholangitis. Background & Aims: Primary biliary cirrhosis and Primary sclerosing cholangitis are autoimmune cholestatic liver diseases sharing a lot in common, including a significant impairment of patients' health-related quality of life HRQoL HRQoL in PBC is assessed with disease-specific PBC-40 and PBC-27 questionnaires. A PSC-specific questionnaire has not been developed. Neither PBC-40 nor PBC-27s applicability for PSC has been evaluated. We applied these three questionnaires for HRQoL assessment in a large homogenous cohort of PSC patients. Patients And Methods: This cross-sectional study enrolled 102 Caucasian PSCs and 53 matched healthy controls and measured HRQoL using generic SF-36, and disease-specific (PBC-40/PBC-27) questionnaires. Results: (i) SF-36. Most SF-36 domains were significantly lower in PSCs than controls. Physical Functioning and Mental Component Summary scores were significantly lower in female patients and correlated negatively with age but not with concurrent inflammatory bowel disease. Cirrhosis was associated with lower Physical Functioning, Role Physical, General Health, Vitality and Physical Component Summary. (ii) PBC-40 and PBC-27. Both tools showed similar HRQoL impairment scoring. Fatigue and Cognitive were impaired in female patients. Several correlations existed between HRQoL and laboratory parameters, including cholestatic tests and Itch. Cirrhosis correlated with Other symptoms and Fatigue PBC-40. (iii) PBC-40 vs PBC-27. Strong correlations among most domains of both questionnaires were seen, as well as between (iv) SF-36 vs PBC-40 or SF-36 vs PBC-27. Conclusion: This is the first study directly comparing PBC-40, PBC-27 and SF-36 in PSC. PSC patients, especially females, show HRQoL impairment. PBC-40 and PBC-27 questionnaires could be of potential use for HRQoL assessment in PSC. abstract_id: PUBMED:15513335 Fatigue in patients with primary sclerosing cholangitis. Background: The occurrence of fatigue in primary sclerosing cholangitis (PSC), its impact on quality of life and the role of concomitant inflammatory bowel disease (IBD) and coexisting irritable bowel syndrome (IBS) is unexplored. Methods: Ninety-three patients with PSC, associated with IBD in 80% of cases and 77 patients with IBD alone, were enrolled in the study. The patients completed the following questionnaires: the Fatigue Impact Scale (FIS), the Psychological General Well-Being Index (PGWB), the Gastrointestinal Symptom Rating Scale (GSRS), the Beck Depression Inventory (BDI) and diagnostic criteria for IBS. Questionnaire data were related to liver tests and the latest liver biopsy in the PSC patients. Two sex- and age matched controls from the general population (GP) were assigned to each PSC patient and these controls completed the FIS and the BDI. Results: Total fatigue score did not differ significantly between patients with PSC and IBD alone. Median total fatigue score among GP subjects was 39 (13-72), which was higher than in PSC (19 (6-52) (P = 0.02)) and in IBD patients (19 (5-35) (P &lt; 0.0001)). PGWB and GSRS scores did not differ between patients with PSC and IBD alone. Depression and general health (PGWB) were independent predictors for total fatigue score in PSC. No correlation was observed between fatigue in PSC and the severity of the liver disease. Conclusions: Fatigue in patients with PSC is related to depression but not to the severity of the liver disease. Both the PSC and IBD patients had lower total fatigue scores than subjects from the general population. This argues against fatigue as a specific symptom of PSC and IBD patients. abstract_id: PUBMED:10423073 Follow-up after liver transplantation for primary sclerosing cholangitis: effects on survival, quality of life, and colitis. Background: Except for primary biliary cirrhosis, primary sclerosing cholangitis (PSC) is now the commonest indication for liver transplantation in Sweden. There are several specific problems related to post-transplantation life in these patients. Methods: We analyzed a 10-year cohort of 47 patients transplanted at our center. Results: Incidental biliary carcinoma was diagnosed in 13%,, with a 2-year survival of only 17%, compared with 70% in the other patients. The 2-year survival has improved considerably (to 82%) in recent years. Quality of life was much or slightly improved in 80%. The colitis was much or slightly improved in 65%, with some deterioration in only 6%. No patient developed colonic carcinoma. Previous biliary tract surgery was identified as a risk factor for a worse 1-year mortality. Conclusions: Liver transplantation for PSC is now associated with a high survival rate and an improvement in the quality of life and course of ulcerative colitis. Previous biliary tract surgery is a risk factor for non-survival. abstract_id: PUBMED:28921801 Complications, symptoms, quality of life and pregnancy in cholestatic liver disease. Cholestatic liver diseases (CLDs) encompass a variety of disorders of bile formation and/or flow which generally result in progressive hepatobiliary injury and ultimately end-stage liver disease. Many patients with CLD are diagnosed between the ages of 20-50 years, a particularly productive period of life professionally, biologically and in other respects; it is not surprising, thus, that CLD is often associated with impaired health-related quality of life (HRQOL) and uncertainty regarding implications for and outcomes of pregnancy. Primary sclerosing cholangitis (PSC) and primary biliary cholangitis (PBC) are the most prominent CLDs, both having considerable morbidity and mortality and representing major indications for liver transplantation. These disorders, as a consequence of their complications (eg ascites, hepatic osteodystrophy), associated conditions (eg inflammatory bowel disease) and symptoms (eg pruritus and fatigue), can significantly impair an array of domains of HRQOL. Here we review these impactful clinical aspects of PSC and PBC as well as the topics of fertility and pregnancy. abstract_id: PUBMED:36724122 Development and validation of a primary sclerosing cholangitis-specific health-related quality of life instrument: CLDQ-PSC. Objectives: To understand the full impact of primary sclerosing cholangitis (PSC) on patients' health, it is important to assess their health-related quality of life (HRQL). Using the Chronic Liver Disease Questionnaire (CLDQ), we aimed to develop and validate a PSC-specific HRQL instrument. Methods: Previously collected clinical and patient-reported outcome data from PSC patients were used. The original CLDQ with 29 items was subjected to item reduction, followed by factor analysis. A standard HRQL instrument validation pipeline was then applied to the new CLDQ-PSC. Results: There were 100 PSC patients (44±13 y, 32% male, 79% college educated, 39% cirrhosis, 67% inflammatory bowel disease, 66% ulcerative colitis, and 50% on ursodeoxycholic acid After item reduction and exploratory factor analysis, there were 24 items and 5 factors left; based on factor loadings, the factors were named emotional function, fatigue, symptoms, worry, and sleep. Internal consistency assessment returned Cronbach alpha 0.85-0.94, item-to-own domain correlations &gt;0.66 for 22/24 items. Known-groups validity suggests discrimination between PSC patients with and without cirrhosis or its complications, obesity, history of depression, weight loss, and PSC patients on versus not on ursodeoxycholic acid (p&lt;0.05 for all or select CLDQ-PSC domains). Relevant items of Short Form-36 and CLDQ-PSC were highly correlated (all p&lt;0.0001). Matching with items of another PSC-specific instrument (PSC-patient-reported outcome; 42 items) for relevance and redundancy suggests that CLDQ-PSC is a relevant, comprehensive, and short HRQL instrument, which can be used for patients with PSC. Conclusions: The CLDQ-PSC is a PSC-specific HRQL instrument that was developed using an established methodology and demonstrated good psychometric characteristics. Answer: Primary sclerosing cholangitis (PSC) does not appear to significantly influence disease-specific health-related quality of life (HRQoL) in patients with inflammatory bowel disease (IBD). A retrospective, case-control study found that the mean Short Inflammatory Bowel Disease Questionnaire (SIBDQ) score, which measures disease-specific HRQoL, was comparable between patients with both PSC and IBD and controls who only had IBD. The study also reported similar disease activity scores between the two groups. However, there was a higher proportion of permanent work disability among patients with PSC-IBD compared to controls, indicating that while PSC may not impact the HRQoL related to IBD symptoms, it could affect other aspects of patients' lives (PUBMED:19637332). In contrast, other studies have shown that PSC patients experience significantly lower HRQoL compared to healthy controls, with both liver disease and IBD symptoms impacting their quality of life. Factors such as pruritus, social isolation, and depression were found to negatively influence mental HRQoL in PSC patients. Moreover, nearly 75% of PSC patients expressed existential anxiety regarding disease progression and diminished life expectancy, with 25% disclosing social isolation (PUBMED:26743764). Pruritus, in particular, has been associated with severely impaired quality of life in PSC patients and is linked with depression, although it does not correlate with commonly used parameters of disease severity (PUBMED:25341059). Additionally, fatigue, which is a common symptom in PSC, is related to depression but not to the severity of liver disease (PUBMED:15513335). Overall, while PSC may not significantly impact the HRQoL specifically related to IBD symptoms, it does have a broader impact on the overall quality of life of patients through various symptoms and psychological factors. A PSC-specific HRQoL tool is suggested to better quantify the distinct impact of IBD and cholestatic liver disease on these patients (PUBMED:26743764).
Instruction: Is brachial artery flow-mediated dilation associated with negative affect? Abstracts: abstract_id: PUBMED:19306064 Is brachial artery flow-mediated dilation associated with negative affect? Background: Trait negative affect has been implicated as a risk marker for cardiovascular disease, but the mechanisms underlying this association are uncertain. Purpose: Our aim was to examine associations between trait measures of anger, hostility, depression, and anxiety with endothelial dysfunction via brachial artery flow-mediated dilation (FMD), an early indicator of cardiovascular disease. Method: FMD was examined in 332 healthy older adults. Measures included Beck Anxiety and Depression Inventories, Cook-Medley Hostility Scale, and Spielberger State-Trait Anger Expression Inventory (Anger In, Anger Out, and Trait Anger). Results: Mean age was 60.5 +/- 4.8 years; 83% of participants were Caucasian and 49% were female. FMD was greater in women compared to men (6.17% vs. 4.07%, p &lt; 0.001). Women reported significantly greater Anxiety (p &lt; 0.001), and men reported greater Hostility (p = 0.004). In separate multivariable linear regression models controlling for cardiovascular risk factors, plus current hormone therapy for women, smaller FMD was associated with higher Anger In for women (beta = -0.222, p = 0.04) and showed a trend with higher Hostility for men (beta = -0.082, p = 0.09). Conclusion: Endothelial dysfunction, as indicated by less vasodilatation of the brachial artery, is positively associated with measures of hostility and anger suppression in healthy older adults. Thus, associations between negative affect and cardiovascular health may be apparent early in the disease process. abstract_id: PUBMED:31046519 Fluctuation in shear rate, with unaltered mean shear rate, improves brachial artery flow-mediated dilation in healthy, young men. Increase in mean shear stress represents an important and potent hemodynamic stimulus to improve conduit artery endothelial function in humans. No previous study has examined whether fluctuations in shear rate patterns, without altering mean shear stress, impacts conduit artery endothelial function. This study examined the hypothesis that 30-min exposure to fluctuations in shear rate patterns, in the presence of unaltered mean shear rate, improves brachial artery flow-mediated dilation. Fifteen healthy men (27.3 ± 5.0 yr) completed the study. Bilateral brachial artery flow-mediated dilation was assessed before and after unilateral exposure to 30 min of intermittent negative pressure (10 s, -40mmHg; 7 s, 0 mmHg) to induce fluctuation in shear rate, while the contralateral arm was exposed to a resting period. Negative pressure significantly increased shear rate, followed by a decrease in shear rate upon pressure release (both P &lt; 0.001). Across the 30-min intervention, mean shear rate was not different compared with baseline (P = 0.458). A linear mixed model revealed a significant effect of time observed for flow-mediated dilation (P = 0.029), with exploratory post hoc analysis showing an increase in the intervention arm (∆FMD +2.0%, P = 0.008), but not in the contralateral control arm (∆FMD +0.5%, P = 0.664). However, there was no effect for arm (P = 0.619) or interaction effect (P = 0.096). In conclusion, we found that fluctuations in shear patterns, with unaltered mean shear, improves brachial artery flow-mediated dilation. These novel data suggest that fluctuations in shear pattern, even in the absence of altered mean shear, represent a stimulus to acute change in endothelial function in healthy individuals. NEW &amp; NOTEWORTHY Intermittent negative pressure applied to the forearm induced significant fluctuations in antegrade and retrograde shear rate, while mean shear was preserved relative to baseline. Our exploratory study revealed that brachial artery flow-mediated dilation was significantly improved following 30-min exposure to intermittent negative pressure. Fluctuations in blood flow or shear rate, with unaltered mean shear, may have important implications for vascular health; however, further research is required to identify the underlying mechanisms and potential long-term health benefits. abstract_id: PUBMED:35739540 Brachial flow-mediated dilation and carotid intima-media thickness in glaucoma patients. Background: The purpose of the study was to assess the ultrasound measurements of the brachial artery flow-mediated dilation (FMD) and carotid artery intima-media thickness (IMT) and their relationship in glaucoma patients. Methods: Thirty-seven patients with glaucoma and thirty-one healthy controls were included in the study. All glaucoma patients and controls underwent ultrasound measurement of FMD of the brachial artery and ultrasound measurement of IMT of the carotid artery. Results: The mean values of brachial FMD were significantly lower among the glaucoma compared with controls (16.4 ± 10.6% vs 20.3 ± 8.5%, p = 0.034). No significant difference was found in carotid IMT (1.2 ± 0.4 vs. 1.1 ± 0.4, p = 0.3), and brachial artery diameter at rest (4.7 ± 0.6 vs. 4.9 ± 0.3, p = 0.2) between the glaucoma patients and controls. The significant difference in brachial artery diameter in hyperemia between the glaucoma patients and controls (5.5 ± 0.6 vs. 5.9 ± 0.4 p = 0.002) was found. A negative correlation among brachial FMD and carotid IMT as well as among brachial FMD and brachial artery diameter at rest was found. Conclusions: Impaired brachial FMD indicates presence of systemic vascular endothelial dysfunction in glaucoma; glaucoma patients with lower values of the brachial FMD are at increased risk of having thickened carotid IMT. abstract_id: PUBMED:26104609 A comparison of Doppler waveform parameters versus flow-mediated vascular dilation of the brachial artery in pregnant women. Objective: Flow mediated dilation of the brachial artery is impaired with endothelial dysfunction. We previously identified that brachial artery waveforms were changed in preeclamptic women. We therefore sought to compare Doppler waveform analysis with flow mediated vascular dilation (FMVD) measurements in pregnant women. Study Design: A cross sectional study of 71 pregnant women at &gt;20weeks gestation comparing FMVD measurements with Doppler waveform parameters was performed. Research was conducted at three hospitals and two geographic settings; (1) Yale-New Haven Hospital in New Haven, CT, (2) Mount Hope Maternity and (3) Port of Spain General Hospital in Trinidad. Brachial artery Doppler waveform measurements were done at baseline and 90s post cuff-release. From the Doppler waveforms we assessed percent change in Peak Systolic Velocity (% ΔPSV), systolic acceleration, acceleration time and pulsatility index and compared them with the percent change in FMVD. Statistical analysis using Student's t tests and correlation coefficient was done as necessary. Results: Correlation analysis between the % change in the Doppler waveform parameters and the % change in mean FMVD identified only the waveform parameter of % ΔPSV as significantly correlated (p=0.040). Conclusion: FMVD remains the gold standard for assessment of endothelial dysfunction. A correlation was observed between the Doppler measured % ΔPSV and FMVD, which may allow it to be used to assess endothelial dysfunction changes under different conditions was identified. abstract_id: PUBMED:31762781 Brachial flow-mediated dilation by continuous monitoring of arterial cross-section with ultrasound imaging. Objective: Impairment of flow-mediated dilation of the brachial artery is a marker of endothelial dysfunction and often predisposes atherosclerosis and cardiovascular events. In this study, we propose a user-guided automated approach for monitoring arterial cross-section during hyperemic response to improve reproducibility and sensitivity of flow-mediated dilation. Material And Methods: Ultrasound imaging of the brachial artery was performed in 11 volunteers in cross-sectional and in 5 volunteers in longitudinal view. During each examination, images were recorded continuously before and after inducing ischemia. Time-dilation curves of the brachial lumen cross-section were measured by user-guided automated segmentation of brachial images with the feed-forward active contour (FFAC) algorithm. %FMD was determined by the ratio of peak dilation to the baseline value. Each measurement was repeated twice in two sessions 1 h apart on the same arm to evaluate the reproducibility of the measurements. The intra-subject variation in flow-mediated dilation between two sessions (subject-specific) and inter-group variation in flow-mediated dilation with all the subjects within a session grouped together (group-specific) were measured for FFAC. The FFAC measurements were compared with the conventional diameter measurements made using echo tracking in longitudinal views. Results: Flow-mediated dilation values for cross-sectional area were greater than those measured by diameter dilation: 33.1% for cross-sectional area compared to 22.5% for diameter. Group-specific flow-mediated dilation measurements for cross-sectional area were highly reproducible: 33.2% vs. 33.0% (p &gt; 0.05) with coefficient of variation CV of 0.4%. The group-specific flow-mediated dilations measured by echo tracking for the two sessions were 21.1 vs. 23.9% with CV of 9%. Subject-specific CV for cross-sectional area by FFAC was 10% ± 2% versus 24% ± 10% for the conventional approach. Using correlation as a metric of evaluation also showed better performance for cross-sectional imaging: correlation coefficient, R, between two sessions for cross-sectional area was 0.92 versus 0.72 for the conventional approach based on diameter measurements. Conclusion: Peak dilation area measured by continuous automated monitoring of cross-sectional area of the brachial artery provides more reproducible and higher-sensitivity measurement of flow-mediated dilation compared to the conventional approach of using vascular diameter measured using longitudinal imaging. abstract_id: PUBMED:23603324 Predictive value of brachial reactive hyperemia and flow-mediated dilation in stable coronary artery disease. Background: The purpose of this study was to determine the predictive value of a single measurement of reactive hyperemia (RH) and brachial flow-mediated dilation (FMD) in patients with established stable coronary artery disease (CAD). Methods: RH and brachial artery FMD were ultrasonographically measured in 325 patients with stable CAD. Patients were followed for cerebro-cardiovascular events. The median follow-up was 3.7 years (range 0.01-5.7 years). Results: Sixty-seven patients (20.6%) had an cerebro-cardiovascular event. Patients with subsequent events had lower FMD (4.9 ± 3.3% versus 6.3 ± 3.5%, p = 0.003), higher brachial artery resting diameter (5.1 ± 0.7 mm versus 4.8 ± 0.7 mm, p = 0.002) and lower NMD (11.2 ± 5.1% versus 12.8 ± 5.4%, p = 0.02), while the mean hyperemic flow velocity and shear stress did not differ from patients without cerebro-cardiovascular events. Cox proportional hazard model adjusted for sex, age, BMI, and traditional cardiovascular risk factors revealed a hazard ratio of 0.84 for lower FMD (p = 0.01). Conclusions: We conclude that single spot measurements of peak RH do not provide long-term prognostic information, but evaluation of conduit artery FMD predicts long-term cerebro-cardiovascular events in patients with stable CAD. The prognostic value of FMD is incremental to traditional cardiovascular risk factors and may therefore be of clinical importance. abstract_id: PUBMED:28348690 Relationship Between Brachial Artery Flow-Mediated Dilation, Carotid Artery Intima-Media Thickness and Coronary Flow Reserve in Patients With Coronary Artery Disease. Background: The aim of this study was to assess the relationship between brachial artery flow mediated dilation (FMD), carotid artery intima-media thickness (IMT) and coronary flow reserve (CFR) in patients with coronary artery disease (CAD). Methods: Fifty patients with coronary artery disease, except left anterior descending artery (LAD), who showed no cardiac symptoms and 45 control subjects underwent assessment of brachial artery FMD, carotid artery intima-media thickness by high-resolution ultrasound. In addition, transthoracic second harmonic Doppler echocardiography was used to measure CFR. Results: All of the parameters were found to be correlated with each other. CFR correlated with brachial artery FMD (r = 0.232, P &lt; 0.05) and with carotid IMT (r = -0.403, P &lt; 0.001). Carotid IMT correlated with brachial artery FMD (r = -0.211, P &lt; 0.05). Conclusion: Transthoracic CFR correlated with well-established noninvasive predictors of atherosclerosis and we suggest that it can be used as a surrogate for coronary atherosclerosis. abstract_id: PUBMED:35500495 On the examination of the viscous response of the brachial artery during flow-mediated dilation. In this study, mechanotransduction is investigated through a physics-based viscoelastic model describing the arterial diameter response during a brachial artery flow mediated dilation (BAFMD) test. The study is a significant extension of two earlier studies by the same group, where only the elastic response was considered. Experimental BAFMD responses were collected from 12 healthy volunteers. The arterial wall's elastic and viscous properties were treated as local variable quantities depending on the wall shear stress (WSS) sensed by mechanotransduction. The dimensionless parameters, arising from the model which serve as a quantitative assessment of the artery's physical state, were adjusted to replicate the experimental response. Among those dimensionless parameters, the viscoelastic ratio, which reflects the relative strength of the viscous response compared to its elastic counterpart, is of special relevance to this paper's main conclusion. Based on the results, it is concluded that the arterial wall's mechanical behavior is predominantly elastic, at least in the strict context of the BAFMD test. Recommendations for potential future research and applications are provided. abstract_id: PUBMED:34044251 On the modeling of mechanotransduction in flow-mediated dilation. In this paper, we report a physics based mathematical model to describe the mechanotransduction at the luminal surface of the brachial artery during a flow-mediated dilation (FMD) process. To account for the effect of the released vasodilators in response to the sudden blood flow resurgence, a scalar property is introduced as a signal radially diffusing through the arterial wall, locally affecting its compliance. The model was evaluated on 19 in vivo responses of brachial artery FMD (BAFMD) in 12 healthy subjects. It successfully reproduces the time-dependent dilation of the brachial artery. The predicted artery's outer-to-inner radius ratio was also found to be consistent with the measurements within an acceptable margin of error. Physically meaningful dimensionless parameters quantifying the artery's physical state arose from the model, providing a description to how sensitive or responsive the artery is to the changes of wall shear stress (WSS). Future applications of this model, via incorporating inexpensive, relatively quick, and non-invasive imaging, could potentially help detect early stages of developing forms of cardiovascular diseases. abstract_id: PUBMED:26777795 Serum amyloid P (SAP) is associated with impaired brachial artery flow-mediated dilation in chronically HIV-1 infected adults on stable antiretroviral therapy. Objective: This study aimed to evaluate the relationship between inflammatory biomarkers and endothelial dysfunction (ED), as measured by brachial artery flow-mediated dilation (FMD). Methods: We conducted a cross-sectional analysis utilizing baseline data of 135 participants with HIV infection on stable antiretroviral therapy (ART) in the Hawaii Aging with HIV-Cardiovascular (HAHC-CVD) study who had available baseline inflammatory biomarkers and brachial artery FMD measurements. Results: We observed significant associations between brachial artery FMD and baseline brachial artery diameter, age, male gender, traditional cardiovascular disease (CVD) risk factors such as BMI, waist to hip ratio, hypertension, systolic blood pressure (BP), diastolic BP, and LDL cholesterol, and 10-year coronary heart disease (CHD) risk estimated by Framingham risk score (FRS). Of all biomarkers tested, higher level of C-reactive protein (CRP) (beta = - 0.695, P = 0.030) and serum amyloid P (SAP) (beta = - 1.318, P = 0.021) were significantly associated with lower brachial artery FMD in univariable regression analysis. After adjusting for baseline brachial artery diameter, age, and selected traditional CVD risk factors in multivariable model, SAP remained significantly associated with brachial artery FMD (beta = - 1.094, P = 0.030), while CRP was not (beta = - 0.391, P = 0.181). Discussion: Serum amyloid P was independently associated with impaired brachial artery FMD and may potentially relate to ED and increased CVD risk in HIV-infected patients on stable ART. Answer: Yes, brachial artery flow-mediated dilation (FMD) is associated with negative affect. A study found that smaller FMD, which indicates endothelial dysfunction, was associated with higher Anger In for women and showed a trend with higher Hostility for men among healthy older adults. This suggests that negative affect, such as anger suppression and hostility, may be linked to cardiovascular health early in the disease process (PUBMED:19306064).
Instruction: Is breakfast skipping associated with physical activity among U.S. adolescents? Abstracts: abstract_id: PUBMED:28898562 Skipping breakfast among Australian children and adolescents; findings from the 2011-12 National Nutrition and Physical Activity Survey. Objective: Skipping breakfast has been linked with poor diet quality, higher BMI and adverse cardiometabolic outcomes. This study aimed to determine the prevalence and correlates of skipping breakfast among Australian children and adolescents. Methods: A total of 1,592 2-17-year-olds completed two 24-hour recalls, collected via face-to-face and telephone interview, in the 2011-12 National Nutrition and Physical Activity Survey. Breakfast was an eating occasion of ≥210kJ named as 'breakfast' by the participant. Child, household and adult correlates of skipping breakfast were reported. Odds ratios were calculated using ordinal regression. Linear regression was used to examine differences in dietary intake. Survey weights were applied to give nationally representative estimates. Results: Most (86.8% of boys, 81.4% of girls) ate breakfast on both days, 11.8% of boys and 14.8% girls skipped on one day and 1.4% boys and 3.8% girls skipped on both days. Characteristics associated with skipping breakfast were being female, being older, being underweight or overweight/obese, poorer diet, lower physical activity, inadequate sleep, lower household income, greater socioeconomic disadvantage, and being from a single-parent home. Conclusion: Skipping breakfast was common among Australian adolescents but few consistently skipped. Implications for public health: Interventions to increase breakfast should target adolescents, particularly girls, and low SEP households. abstract_id: PUBMED:38183436 Association between breakfast skipping and psychosomatic symptoms among Canadian adolescents. This cross-sectional study aimed to examine the mediating roles of gender and substance use in the influence of breakfast skipping on psychosomatic symptoms. The study used data among a nationally-representative sample of 2855 Canadian adolescents who participated in the 2018 Health Behavior in School-aged Children (HBSC) survey. Sequential logistic regressions were used to estimate the associations between breakfast skipping and psychosomatic symptoms. Path analysis using a non-parametric bootstrapping technique tested the hypothesized mediating roles. Results showed that adolescents who skipped breakfast were 2.55 times more likely to report higher psychosomatic symptoms compared to non-breakfast skippers (AOR = 2.55; 95% CI = 1.75-3.82). The bootstrapping mediation models showed that breakfast skipping indirectly influenced psychosomatic symptoms through substance use and gender, accounting for 18.47% ([Formula: see text] = 0.0052, Boots 95% CI = 0.0025, 0.00730) and 10.70% ([Formula: see text] = 0.0091, Boots 95% CI = 0.0052, 0.0125), respectively, of the total effect. Our findings have important implications for targeted public and mental health interventions to address both breakfast skipping and psychosomatic symptoms among adolescents. Conclusion: The study compellingly underscores the significance of incorporating gender-specific factors and substance use in understanding the correlation between breakfast skipping and psychosomatic symptoms. These insights hold importance for tailoring public health interventions to alleviate the prevalence of psychosomatic symptoms among adolescents by actively addressing breakfast skipping. What is Known: • Breakfast is considered the most important meal of the day due to its role in providing the brain with the energy necessary to enhance cognitive functions. • Adolescents commonly exhibit a prevalent lifestyle behaviour of skipping breakfast. What is New: • This study provides robust evidence supporting the association between breakfast skipping and elevated psychosomatic symptoms in adolescents. • Gender and substance use mediate this association, offering novel insights into the complex interplay that contributes to psychosomatic symptoms among this demographic group. • Longitudinal research is needed to unravel causal relationships and illuminate the underlying mechanisms of this intricate connection. abstract_id: PUBMED:27635371 Does Skipping Breakfast and Being Overweight Influence Academic Achievement Among Korean Adolescents? Objectives: Health status and health behaviors are associated with academic achievement in children and adolescents. The purpose of this study was to investigate whether skipping breakfast and being overweight are related to academic achievement of Korean adolescents. Methods: Cross-sectional data on a sample of 1,652 high-school seniors (942 males and 710 females) drawn from the 2004 Korea Education Employment Panel were analyzed. Results: A higher proportion of males (15.3%) than females (6.1%) was overweight (p &lt; 0.001); 37% of males and 41% of females reported skipping breakfast. Overall test scores were significantly higher for females than males (p &lt; 0.05), and in language and foreign language subjects. However, both males and females who reported skipping breakfast had significantly lower scores in language, mathematics, and foreign language than those who did not report skipping breakfast. Overweight males had a lower probability than normal-weight males of having the highest language scores (OR = 0.52, p &lt; 0.05), but there was no difference among females. Females who skipped breakfast had a lower probability of having the highest scores in language (OR = 0.41, p &lt; 0.05), mathematics (OR = 0.24, p &lt; 0.01), or foreign language (OR = 0.18, p &lt; 0.01), while males had a lower probability of having the highest scores in language only (OR = 0.46, p &lt; 0.05). Conclusion: Skipping breakfast and being overweight are associated with poor academic achievement in Korean adolescents. Eating breakfast and weight control is being discussed as the overlooked factors that may influence better academic achievement. abstract_id: PUBMED:23534672 Is breakfast skipping associated with physical activity among U.S. adolescents? A cross-sectional study of adolescents aged 12-19 years, National Health and Nutrition Examination Survey (NHANES). Objective: To examine the association between breakfast skipping and physical activity among US adolescents aged 12-19 years. Design: A cross-sectional study of nationally representative 2007-2008 National Health and Nutrition Examination Survey (NHANES) data. Setting: Breakfast skipping was assessed by two 24 h dietary recalls. Physical activity was self-reported by participants and classified based on meeting national recommendations for physical activity for the appropriate age group. Multiple logistic regression analysis was used to model the association between breakfast skipping and physical activity while controlling for confounders. Subjects: A total of 936 adolescents aged 12-19 years in the USA. Results: After adjusting for family income, there was no association between breakfast skipping and meeting physical activity guidelines for age among adolescents aged 12-19 years (OR = 0.95, 95% CI 0.56, 1.32). Conclusions: Findings from the study differ from previous research findings on breakfast skipping and physical activity. Therefore, further research that uses large, nationally representative US samples and national recommended guidelines for physical activity is needed. abstract_id: PUBMED:35459164 Prevalence of breakfast skipping among children and adolescents: a cross-sectional population level study. Background: Interventions to promote breakfast consumption are a popular strategy to address early life inequalities. It is important to understand the epidemiology of children and adolescents who skip breakfast so that interventions and policy can be appropriately considered. This study investigated the prevalence of breakfast skipping among a contemporary, population-wide sample of children and adolescents in Australia. Methods: Participants were grade 4-12 students (n = 71,390, 8-18 years) in South Australian government (public) schools who took part in the 2019 Wellbeing and Engagement Collection. The prevalence of breakfast skipping (never, sometimes, often, or always) was calculated for the overall sample and stratified by gender, school grade, socioeconomic status and geographical remoteness. Multinomial logistic regression analyses were conducted to determine the relative risk ratio of sometimes, often, and always skippers compared with never skippers, according to demographic characteristics. Results: Overall, 55.0% of students reported never skipping breakfast, 17.4% reported sometimes skipping, 18.0% reported often skipping, and 9.5% reported always skipping breakfast. Skipping breakfast was more prevalent among females, students in senior grades, and those living in socioeconomically disadvantaged and regional and remote areas. Analyses disaggregated by gender revealed that grade level gradients in breakfast skipping were more marked among females compared to males. Conclusions: Breakfast skipping among children and adolescents appears considerably more prevalent than previous research suggests. Drivers of breakfast skipping across population sub-groups need to be explored to better inform strategies to promote breakfast consumption. abstract_id: PUBMED:32725071 Skipping breakfast and physical fitness among school-aged adolescents. Objectives: This study explored the relationship between skipping breakfast and physical fitness in a group of school-aged adolescents in China. Methods: This cross-sectional study from the Chinese National Surveillance on Students' Constitution and Health (CNSSCH) survey in Ningbo, China, used a standardized questionnaire to assess the frequency of breakfast consumption. Physical fitness was measured through standing long jump, 50-m sprint, 1,000 (or 800)-m run, and vital capacity tests. Multiple linear regression analysis was used to investigate the relationship between the frequency of breakfast consumption and physical fitness. Results: Our study included a total of 1,849 school-aged adolescents (aged 15.53±1.80 years). Among boys, non-breakfast-skippers had good scores for 50-m sprints, 1,000-m run, and vital capacity tests when compared with breakfast skippers (all p&lt;0.05). Among girls, non-breakfast-skippers had a good scores for the standing long jump test compared with breakfast skippers (p=0.003). The multiple linear regression model showed that not skipping breakfast was positively associated with vital capacity (β=-173.78, p=0.004) and inversely associated with 50-m sprint (β=-0.12, p=0.018) and 1,000-m run times (β=-8.08, p=0.001) in boys. Conclusion: The results of this cross-sectional study revealed that skipping breakfast might be associated with lower physical fitness in Chinese adolescents aged 13-18 years, especially boys. Breakfast consumption should be promoted among Chinese school-aged boys. abstract_id: PUBMED:31512338 Predictors of breakfast skipping among 14 to 16 years old adolescents in Jordan: The influential role of mothers. Aim: To assess the prevalence and identify the predictors of breakfast skipping among 14- to 16-year-old adolescents in Jordan, focusing on mother-related variables. Background: Breakfast is an essential meal across one's entire lifespan and especially important during the adolescent years. The practice of skipping breakfast has become so prevalent among adolescents that it is now a well-documented nutritional problem. Design: A descriptive cross-sectional correlational design. Methods: A proportional cluster stratified sampling protocol was used to select 1896 adolescents and their mothers (1013) during the period of March to June 2016. A self-reported questionnaire was used to collect data on breakfast-skipping rates, the perceived reasons for this behavior, the perceived importance of consuming breakfast, and maternal encouragement of breakfast consumption. Results: The prevalence of breakfast skipping was 34.3% among adolescents and 21.5% among their mothers. A significant association was found between breakfast skipping among adolescents and their mothers (χ2 [1, n = 998] = 37.90, P = .001). Maternal encouragement of breakfast consumption, gender, and adolescent perception of the importance of this meal were found to be significant predictors of adolescent breakfast skipping. Conclusion: The findings highlight the importance of involving mothers in developing nutritional health plans aimed at promoting regular breakfast consumption among adolescents. abstract_id: PUBMED:29952827 Physical Activity and Skipping Breakfast Have Independent Effects on Body Fatness Among Adolescents. Objective: We analyzed the longitudinal relationship between breakfast intake and adiposity among adolescents, and the possible mediation role of physical activity on this phenomenon. Methods: We conducted the longitudinal study composed of 86 adolescents (11-14 years old), absence of any known chronic diseases and no regular medicine use that returned the formal consent and assent both signed. Breakfast intake was reported through face-to-face interviews. Adiposity was assessed using a densitometry scanner. Physical activity was estimated through step counts performed using pedometers. Biological maturation was estimated through the maturity level. Student t test for independent samples was used to compare adolescents who were "nonskipping breakfast" and "skipping breakfast," whereas the relationship between adiposity, physical activity, and skipping breakfast was assessed using Pearson correlation. Structural equation model was created to identify the mediation role of physical activity on the relationship between nonskipping breakfast and adiposity. Results: After 12 months, adolescents who had consumed breakfast regularly presented decreased trunk fatness (-3.5% [95% confidence interval (CI): -6.9 to -0.2]) and body fatness (-2.3% [95% CI: -3.9 to -0.7]) compared with their counterparts. Physical activity had an inverse relationship with trunk fatness (r = -0.270 [95% CI: -0.457 to -0.060]). Nonskipping breakfast was, however, not significantly related to trunk fatness; however, physical activity seems to reduce trunk fatness in our sample (r = -0.281; 95% CI: -0.479 to -0.083). Conclusions: Adolescents who ate breakfast regularly presented lower body fatness independent of physical activity, whereas trunk fatness decreased in adolescents who improved physical activity. abstract_id: PUBMED:28448079 Relationship between Breakfast Skipping and Obesity among Elderly: Cross-Sectional Analysis of the HEIJO-KYO Study. Objective: Breakfast skipping is reported to be associated with obesity in children and younger populations; however, few studies report the association among elderly. The purpose of this study was to investigate the relationships between breakfast skipping and obesity prevalence among elderly. Design: Cross-sectional study. Setting: Community-dwelling elderly in Nara, Japan. Participants: 1052 elderly participants (mean age: 71.6 years). Measurements: Obesity and breakfast skipping were defined as body mass index of ≥25 kg/m2 and skipping breakfast one or more times per week, respectively. Results: Two hundred and seventy-two participants (25.9%) were classified as obese and forty-one (3.9%) were as breakfast skippers. Obesity prevalence was significantly higher in breakfast skippers than in breakfast eaters (43.9% vs. 25.1%, P = 0.007). In multivariable logistic regression analysis adjusted for potential confounders (age, sex and alcohol consumption), breakfast skippers showed significantly higher odds ratio (OR) for obesity than breakfast eaters (OR, 2.23; 95% confidence interval, 1.17-4.27; P = 0.015), which continued to be significant after further adjustment for socioeconomic status. In addition, breakfast skippers showed significantly lower daily potassium (P &lt;0.001) and dietary fibre intakes (P = 0.001) and lower subjective physical activity (P = 0.035) than breakfast eaters. Conclusions: Breakfast skipping was significantly associated with obesity among elderly. Poor diet quality and physical inactivity may be potential intermediators underlying the association between breakfast skipping and obesity. abstract_id: PUBMED:25633132 Relationships between bullying victimization psychological distress and breakfast skipping among boys and girls. The purpose of this study was to further explore the association between bullying victimization and breakfast skipping in children and adolescents. Compared to the previous study, we have used a larger and representative sample of middle and high school students, examined the effect of gender, different forms (physical, verbal, theft/vandalism and cyber) and severity of bullying on breakfast eating behaviour. Data from students (2286 boys and 2859 girls) aged 11 to 19 years (mean ± SD age: 14.6 ± 1.9 years) from the 2013 Ontario Student Drug Use and Health Survey (OSDUHS) were analysed using self-reports of being bullied, diet, psychological distress, demographics, socio-economic status, weight status, and substance use. Results revealed greater odds of breakfast skipping in girl victims of physical, verbal, and cyber bullying, and in boy victims of verbal and cyber bullying. There was a dose-response relationship between experience of both school and cyber bullying victimization and breakfast skipping behaviour for both genders. Mediation analysis indicated that psychological distress fully mediated the relationship between both verbal and physical bullying victimization and breakfast skipping in girls, and partially mediated the relationship between verbal bullying victimization and breakfast skipping in boys. Psychological distress also partially mediated the link between cyber bullying victimization and breakfast skipping in both boys and girls. These results corroborate previous findings on the association between bullying victimization and breakfast skipping in children and adolescents. The strong and consistent associations with different forms of bullying victimization, the dose-response relationship, and the mediating role of psychological distress suggest a causal relationship. Answer: According to the cross-sectional study of adolescents aged 12-19 years using data from the National Health and Nutrition Examination Survey (NHANES), there was no association found between breakfast skipping and meeting physical activity guidelines for age among adolescents aged 12-19 years after adjusting for family income (PUBMED:23534672). This finding suggests that, within the context of this study, breakfast skipping is not related to whether U.S. adolescents engage in the recommended levels of physical activity. However, it is important to note that this result differs from previous research findings on breakfast skipping and physical activity, indicating that further research using large, nationally representative U.S. samples and national recommended guidelines for physical activity is needed to better understand this relationship (PUBMED:23534672).
Instruction: Does waist circumference uncorrelated with BMI add valuable information? Abstracts: abstract_id: PUBMED:24915975 Does waist circumference uncorrelated with BMI add valuable information? Background: Estimation of relative contribution of Body Mass Index (BMI) and waist circumference (WC) on health outcomes requires a regression model that includes both obesity metrics. But, multicollinearity could yield biased estimates. Methods: To address the multicollinearity issue between BMI and WC, we used the residual model approach. The standard WC (Y-axis) was regressed on the BMI (X-axis) to obtain residual WC. Data from two adult population surveys (Nunavik Inuit and James Bay Cree) were analysed to evaluate relative effect of BMI and WC on four cardiometabolic risk factors: insulin, triglycerides, systolic blood pressure and high-density lipoprotein levels. Results: In multivariate models, standard WC and BMI were significantly associated with cardiometabolic outcomes. Residual WC was not linked with any outcomes. The BMI effect was weakened by including standard WC in the model, but its effect remained unchanged if residual WC was considered. Conclusions: The strong correlation between standard WC and BMI does not allow assessment of their relative contributions to health in the same model without a risk of making erroneous estimations. By contrast with BMI, fat distribution (residual WC) does not add valuable information to a model that already contains overall adiposity (BMI) in Inuit and Cree. abstract_id: PUBMED:26302915 Conflicting effects of BMI and waist circumference on iron status. The association between obesity and iron status has a long history and is still receiving attention. However comparative analysis of the association between general obesity (BMI) and visceral obesity (waist circumference) with iron status has not been extensively researched. The aim of the present study is thus to determine if body mass index and waist circumference have the same correlation with iron status. One thousand one hundred and thirty people (225 men and 905 women) aged 30 years and above participated in this study. Anthropometric parameters, haemoglobin, iron and total iron binding capacity concentrations were measured using standard methods. Percentage transferrin saturation was calculated and ferritin concentrations were measured using an enzyme linked immunosorbent assay. Obese or overweight women had significantly lower iron and transferrin saturation concentration when compared to non-obese women. In contrast, women with high waist circumference had comparable plasma iron and transferrin saturation to women with normal waist circumference. Partial correlation analysis and linear regression analysis showed that BMI is negatively and significantly associated with plasma iron, transferrin saturation, Hb and ferritin concentration, whilst waist circumference is positively but insignificantly associated with plasma iron, transferrin saturation, Hb and ferritin concentration. Binary regression analysis showed that obese or overweight people are more likely to have iron deficiency, whilst those with raised waist circumference are more likely to have iron overload. Multivariate analysis showed that body mass index is negatively and significantly associated with low iron status, while waist circumference is positively and insignificantly associated with iron status. This is supported by a comparison of plasma iron, transferrin saturation and ferritin concentrations in participants with high body mass index and normal waist circumference and participants with normal body mass index and high waist circumference to those participants having normal body mass index and normal waist circumference. The present study suggests that in women body mass index is associated with low plasma iron, transferrin saturation and ferritin concentrations, while waist circumference is associated with high plasma iron, transferrin saturation and ferritin concentrations. abstract_id: PUBMED:26094141 Reprint of "Relationship between BMI, waist circumference, physical activity and probable developmental coordination disorder over time". Background: Cross-sectional studies have shown that children with developmental coordination disorder (DCD) are less likely to be physically active and have excess weight gain. However, longitudinal studies examining the relationship between DCD and measures of body composition (BMI and waist circumference) over time are lacking. It is not known if sex and physical activity affect the relationship between DCD and measures of body composition over time. Objective: (1) To examine if BMI and waist circumference in children with and without probable DCD (pDCD) remain constant over time or change as children age, and whether this relationship varies by sex. (2) To examine if differences in physical activity between children with and without pDCD account for differences in BMI and waist circumference over time. Methods: Physical Health Activity Study Team (PHAST) data were used for this longitudinal analysis. At baseline, a total of 2,278 (pDCD = 103) children aged 9-10 years were included in the analysis. The total follow-up period was five years. Mixed-effects modeling was used to estimate change in body composition measures in children over time. Results: Children with pDCD have higher BMI and waist circumference compared to typically developing children, and this difference increased over the study period. The relationship between pDCD and BMI over time also varied by sex. A similar trend was observed for waist circumference. Boys with pDCD were found to have a more rapid increase in BMI and waist circumference compared to girls with pDCD. Physical activity had neither a mediating nor a moderating effect on the relationship between pDCD and measures of body composition. However, physical activity was independently and negatively associated with measures of body composition. Conclusions: pDCD is associated with higher body mass and waist circumference, both important risk factors for cardiovascular disease, type 2 diabetes, and psychological problems and other health conditions. abstract_id: PUBMED:25733625 Are the recent secular increases in the waist circumference of adults independent of changes in BMI? Background: Several studies showed that the waist circumference of US adults has increased over the past 25 y. However, because of the high correlation between waist circumference and body mass index (BMI; in kg/m²) (r ∼ 0.9), it is uncertain if these trends in waist circumference exceed those expected on the basis of BMI changes over this time period. Objective: We assessed whether the recent trend in waist circumference was independent of changes in BMI, age, and race-ethnicity. Design: We analyzed data from the 1999-2000 through 2011-2012 cycles of the NHANES. Results: The mean waist circumference increased by ∼2 cm (in men) and ∼4 cm (in women) in adults in the United States over this 12-y period. In men, this increase was very close to what would be expected because of the 0.7 increase in mean BMI over this period. However, in women, most of the secular increase in waist circumference appeared to be independent of changes in BMI (mean: 0.6), age, and race-ethnicity over the 12-y period. We estimated that, independent of changes in these covariates, the mean waist circumference increased by 0.2 cm in men and 2.4 cm in women from 1999-2000 through 2011-2012; only the latter estimate was statistically significant. Conclusions: Our results indicate that, in women but not men, the recent secular trend in waist circumference is greater than what would be expected on the basis of changes in BMI. Possible reasons for this secular increase, along with sex differences, are uncertain. abstract_id: PUBMED:35304685 Body mass index (BMI), waist circumference (WC), waist-to-height ratio (WHtR) e waist body mass index (wBMI): Which is better? Obesity and overfat are most commonly assessed using the body mass index (BMI), which evaluates "total obesity", without accounting for body fat distribution. Therefore, several indexes of obesity have been proposed, combining BMI with other measures or singular parameters. The aim of the study was to evaluate the accuracy of a new, simple index that takes into account both BMI and Waist Circumference (WC), Waist Body Mass Index (wBMI) in comparison to BMI, WC e Waist-to-Height Ratio (WHtR) for the identification of overfat and obese patients identified by fat mass percentage (FM%). 2400 non diabetic patients were enrolled. From the analysis carried out it emerges that wBMI, BMI, WC and WHtR all have a statistically significant positive correlation (p-Value &lt; 0.001) with FM%. The multivariate analysis showed the positive relationship between these four indexes and the FM. To assess the accuracy of these indices in diagnosing the condition of overfat and obesity we used the statistical analysis Receiver Operating Characteristic (ROC). The Area Under the Curve (AUC) derived from the ROC showed that for the male gender the indicator with the greatest discriminating capacity of the conditions of overfat and obesity was the WHtR and the wBMI for the female gender. The wBMI is therefore configured as an additional tool at the disposal of the healthcare professional aimed at framing the overfat and obese patient and monitoring him during the course of treatment. Moreover wBMI is an indicator able to provide information about the FM% constituting an accurate tool for the evaluation of the overfat and obese patient. abstract_id: PUBMED:32275177 An examination of potential mediators of the relationship between polygenic scores of BMI and waist circumference and phenotypic adiposity. Objective: The present study examined whether physical activity, personality, cognition, education, and depressive symptoms mediate the association between polygenic scores (PGS) for body mass index (BMI) and waist circumference and the corresponding phenotypic adiposity measures.Design: Participants were 9,139 individuals aged 50 to 107 years (57% women; Mean Age: 68.17, SD: 10.06) from the Health and Retirement Study who were genotyped. Trained staff measured their height, weight, and waist circumference, and participants answered questions on physical activity, personality, education, cognitive function, and depressive symptoms.Main Outcome Measures: BMI and waist circumference.Results: A higher PGS for both BMI and waist circumference were related to higher phenotypic BMI and waist circumference, respectively, in part through their association with lower physical activity, conscientiousness, education, and higher depressive symptoms but not cognition. The mediators accounted for 6.6% of the association between PGS and BMI and 9.6% of the association between PGS and waist circumference.Conclusion: The present study provides new evidence on the multiple, distinct pathways through which genetic propensity to higher BMI and waist circumference may lead to higher adiposity in adulthood. Individuals with a higher genetic predisposition to obesity may gain more weight through less adaptive behavioral, personality and educational profiles. abstract_id: PUBMED:25617993 Relationship between BMI, waist circumference, physical activity and probable developmental coordination disorder over time. Background: Cross-sectional studies have shown that children with developmental coordination disorder (DCD) are less likely to be physically active and have excess weight gain. However, longitudinal studies examining the relationship between DCD and measures of body composition (BMI and waist circumference) over time are lacking. It is not known if sex and physical activity affect the relationship between DCD and measures of body composition over time. Objective: (1) To examine if BMI and waist circumference in children with and without probable DCD (pDCD) remain constant over time or change as children age, and whether this relationship varies by sex. (2) To examine if differences in physical activity between children with and without pDCD account for differences in BMI and waist circumference over time. Methods: Physical Health Activity Study Team (PHAST) data were used for this longitudinal analysis. At baseline, a total of 2,278 (pDCD=103) children aged 9-10 years were included in the analysis. The total follow-up period was five years. Mixed-effects modeling was used to estimate change in body composition measures in children over time. Results: Children with pDCD have higher BMI and waist circumference compared to typically developing children, and this difference increased over the study period. The relationship between pDCD and BMI over time also varied by sex. A similar trend was observed for waist circumference. Boys with pDCD were found to have a more rapid increase in BMI and waist circumference compared to girls with pDCD. Physical activity had neither a mediating nor a moderating effect on the relationship between pDCD and measures of body composition. However, physical activity was independently and negatively associated with measures of body composition. Conclusions: pDCD is associated with higher body mass and waist circumference, both important risk factors for cardiovascular disease, type 2 diabetes, and psychological problems and other health conditions. abstract_id: PUBMED:37864476 Ratio of waist circumference to body mass index: A novel predictor of clinical outcome in hypertension patients. We aim to investigate the influence of waist circumference and body mass index (BMI) on all-cause death and cardiovascular-specific death in patients with hypertension. This prospective cohort study, based on waist circumference and body mass index measurements in patients with hypertension, provided risk estimates of all-cause mortality and cardiovascular events. The waist circumference-to-BMI ratio (WtBR) is an anthropometric measure integrating waist circumference and BMI. We utilized multivariable Cox regression analysis, restricted cubic spline model, Kaplan-Meier plot, random forest analysis, and sensitivity analysis to assess the relationship of WtBR with all-cause mortality. Subsequently, Fine-Gray competing risk regression models were applied to precisely evaluate the probability of cardiovascular-specific death attributed to high WtBR. The results indicate that thea deceased group showed significantly higher WtBR and lower BMI compared with the alive groups (P &lt; .05), while no significant difference was observed in waist circumference (P = .373). When analyzed as continuous, the risk of all-cause death elevated with increasing WtBR in the adjusted model with an HR of 2.42 (95% CI, 2.06-2.85). The restricted cubic spline illustrated an elevated risk of all-cause mortality as WtBR increased (J-shaped curve). Nevertheless, WtBR showed no significant association with cardiovascular-specific death and the prediction model exhibited a reliable performance in the testing set. This study supported that WtBR, an anthropometric measure, is independently associated with all-cause death in hypertensive patients. It's advisable to routinely assess waist circumference in hypertensive patients regardless of BMI, in order to more effectively manage the risk of obesity-related health. abstract_id: PUBMED:34401198 Waist circumference, waist-to-height ratio and BMI percentiles in children aged 5 to 19 years in India: A population-based study. Objective: Nationally representative percentiles for waist circumference (WC), waist-to-height-ratio (WHtR), and body mass index (BMI) are not available for children and adolescents in India. Methods: Using LMS method, age- and gender-specific reference growth charts were constructed for WC (n = 68,261), WHtR (n = 68,261), and BMI (n = 67,741) from children/adolescents aged 5-19 years who participated in a nationally representative survey. General obesity, indicating overall obesity, was defined as age-sex-specific BMI z-scores ≥ 95th percentile. Central obesity was defined in three ways: WC ≥ 90th percentile, WHtR ≥ 0.5, and both WC ≥ 90th percentile and WHtR ≥ 0.5. Findings: WC and BMI percentiles for boys and girls are lower than those previously reported from India and several other countries. The BMI percentiles are lower than the WHO 2007 reference population. The prevalence of general obesity using India specific BMI centiles was 2.9% (95% CI: 2.6-3.2). The prevalence of central obesity was 6.1% (95% CI: 5.7-6.6) using WC ≥ 90th percentile, 5.3% (95% CI: 5.0-5.7) using WHtR ≥ 0.5, and 3.6% using both criteria. Three-fourth of children with general obesity also had central obesity based on WC ≥ 90th. Conclusions: Indian children are thinner than Caucasian and other Asian children, and the global WHO reference population. Using India specific reference, the prevalence of central obesity is higher than general obesity with a significant overlap between the two. abstract_id: PUBMED:35119095 Metabolic syndrome in women - correlation between BMI and waist circumference. Introduction: Metabolic syndrome (MetS), which includes abdominal obesity, hypertension, hypertriglyceridaemia, low high-density lipoprotein cholesterol and elevated fasting plasma glucose, is now well understood. The prevalence of MetS is increasing all over the world. Central obesity is the main and the most common component of MetS. The most simple, practical, and frequently used tool for assessing body weight is body mass index. The aim of our study was to show the correlation between waist circumference (WC) and BMI in women with MetS depending on its components. Material And Methods: It was a study of 36-month duration (2016-2019) conducted in University Hospital No. 1 in Bydgoszcz, Poland. The study included 696 women with diagnosed MetS according to the 2009 International Diabetes Federation criteria. Results: In the study group, WC was found to be significantly correlated with BMI according to the arterial hypertension, T2DM, IFG, lower HDL-C, and higher TG level. Conclusion: We should consider changing the WC guidelines for Europeans, but for this we need more prospective studies. Answer: The question of whether waist circumference (WC) uncorrelated with Body Mass Index (BMI) adds valuable information has been explored in various studies with mixed results. One study found that when WC was adjusted for BMI (using a residual model approach), the residual WC was not linked with any cardiometabolic outcomes in two adult population surveys. This suggests that WC strongly correlated with BMI does not provide additional information on health outcomes when both obesity metrics are included in the same model (PUBMED:24915975). In contrast, another study reported conflicting effects of BMI and WC on iron status. It found that BMI was negatively associated with plasma iron, transferrin saturation, Hb, and ferritin concentration, while WC was positively associated with these measures. This suggests that WC may provide different information than BMI regarding iron status, particularly in women (PUBMED:26302915). Further, a study on children with probable developmental coordination disorder (pDCD) found that those with pDCD had higher BMI and WC compared to typically developing children, and this difference increased over time. The study did not find that physical activity mediated or moderated the relationship between pDCD and body composition measures, indicating that WC and BMI independently related to health outcomes in this population (PUBMED:25617993). Another study examined secular increases in WC independent of changes in BMI and found that in women, but not men, the increase in WC over a 12-year period was greater than what would be expected based on changes in BMI alone (PUBMED:25733625). Additionally, a study comparing various obesity indices found that a new index, the Waist Body Mass Index (wBMI), which combines BMI and WC, was a good tool for identifying overfat and obese patients, suggesting that combining BMI and WC into a single measure might provide valuable information (PUBMED:35304685). Lastly, a study on hypertensive patients found that the ratio of WC to BMI (WtBR) was a novel predictor of all-cause mortality, indicating that WC provides valuable information when considered in relation to BMI (PUBMED:37864476).
Instruction: Do elderly patients experience increased perioperative or postoperative morbidity or mortality when given neoadjuvant chemoradiation before esophagectomy? Abstracts: abstract_id: PUBMED:21106309 Do elderly patients experience increased perioperative or postoperative morbidity or mortality when given neoadjuvant chemoradiation before esophagectomy? Background: The use of induction chemoradiotherapy followed by surgery has been widely used for the treatment of esophageal cancer. The presumed risk of increased postoperative morbidity and mortality with this regimen has led to reluctance to offer this therapy to elderly patients. We compared the perioperative morbidity and mortality of patients 70 years old and older with those of patients younger than 70 who received CRT followed by esophagectomy and sought to identify preoperative risk factors that may predict higher risk of postoperative death or complications. Methods And Materials: We identified 260 patients who underwent preoperative chemoradiotherapy followed by esophagectomy. The association of age with postoperative death and complications was evaluated. The Charlson index, prior cardiac history, and diabetes were identified as preoperative risk factors and were evaluated as potential confounders or effect modifiers. Results: Cardiac disease and the Charlson index were potential modifiers of the effect of age on length of hospital stay (p = 0.08 and p = 0.07, respectively) and postoperative complications (p = 0.1 and p = 0.2) but were not statistically significant. There was a slight nonsignificant decrease in the risk of death in elderly patients after adjustment for the Charlson index (p = 0.2). Conclusion: No significant differences were detected with respect to morbidity and mortality in elderly patients. The presence of cardiac disease, higher scores on the Charlson index, or diabetes did not significantly influence length of stay, postoperative complications, or postoperative death. Given the potential to improve outcomes, this regimen should not be discounted in elderly patients. abstract_id: PUBMED:17713823 Effects of neoadjuvant therapy on perioperative morbidity in elderly patients undergoing esophagectomy for esophageal cancer. Background: The use of cytoreductive therapy followed by surgery is preferred by many centers dealing with locally advanced esophageal cancer. However, the potential for increase in mortality and morbidity rates has raised concerns on the use of chemoradiation therapy, especially in elderly patients. The aim of this study was to assess the effects of induction therapy on postoperative mortality and morbidity in elderly patients undergoing esophagectomy for locally advanced esophageal cancer at a single institution. Methods: Postoperative mortality and morbidity of patients &gt; or = 70 years old undergoing esophagectomy after neoadjuvant therapy, between January 1992 and October 2005 for cancer of the esophagus or esophagogastric junction, were compared with findings in younger patients also receiving preoperative cytoreductive treatments. Results: 818 patients underwent esophagectomy during the study period. The study population included 238 patients &lt; 70 years and 31 &gt; or = 70 years old undergoing esophageal resection after neoadjuvant treatment. Despite a significant difference in comorbidities (pulmonary, cardiological and vascular), postoperative mortality and morbidity were similar irrespective of age. Conclusions: Elderly patients receiving neoadjuvant therapies for cancer of the esophagus or esophagogastric junction do not have a significantly increased prevalence of mortality and major postoperative complications, although cardiovascular complications are more likely to occur. Advanced age should no longer be considered a contraindication to preoperative chemoradiation therapy preceding esophageal resection in carefully selected fit patients. abstract_id: PUBMED:27569043 Hybrid minimally invasive Ivor Lewis esophagectomy after neoadjuvant chemoradiation yields excellent long-term survival outcomes with minimal morbidity. Background: There is a clear survival benefit to neoadjuvant chemoradiation prior to esophagectomy for patients with stages II-III esophageal cancer. A minimally invasive esophagectomy approach may decrease morbidity but is more challenging in a previously radiated field and therefore patients who undergo neoadjuvant chemoradiation may experience more postoperative complications. Methods: A prospective database of all esophageal cancer patients who underwent attempted hybrid minimally invasive Ivor Lewis esophagectomy was maintained between 2006 and 2015. The clinical characteristics, neoadjuvant treatments, perioperative complications, and survival outcomes were reviewed. Results: Overall 30- and 90-day mortality rates were 0.8% (1/131) and 2.3% (3/131), respectively. The majority of patients 58% (76/131) underwent induction treatment without significant adverse impact on mortality, major complications, or hospital stay. Overall survival at 1, 3, and 5 years was 85.9%, 65.3%, and 53.9%. Five-year survival by pathologic stage was stage I 68.9%, stage II 54.0%, and stage III 29.6%. Conclusions: The hybrid minimally invasive Ivor Lewis esophagectomy approach results in low perioperative morbidity and mortality and is well tolerated after neoadjuvant chemoradiation. Good long-term overall survival rates likely resulted from combined concurrent neoadjuvant chemoradiation in the majority of patients, which did not impact the ability to safely perform the operation or postoperative complications rates. J. Surg. Oncol. 2016;114:838-847. © 2016 2016 Wiley Periodicals, Inc. abstract_id: PUBMED:21945223 Morbidity and mortality after esophagectomy following neoadjuvant chemoradiation. Background: Neoadjuvant chemoradiation (CRT) is an accepted treatment for locally advanced esophageal carcinoma. A survival benefit has not been definitively established, and there is concern that chemoradiation may increase postoperative morbidity and mortality. Methods: A retrospective review was made of 138 patients treated for esophageal carcinoma between January 1999 and December 2009. Fifty-four patients who underwent CRT followed by esophagectomy were compared with 84 patients who underwent esophagectomy alone. Results: The chemoradiation and esophagectomy alone cohorts were well matched on all preoperative variables. There was a higher percentage of Ivor Lewis procedures in the esophagectomy alone cohort (82.0%) compared with the CRT cohort (59.3%; p = 0.006). Thirty-five percent of the CRT group underwent transhiatal esophagectomy. Thirty-day mortality was 6.0% (5 of 84) in the esophagectomy alone cohort compared with 1.9% (1 of 54) in the CRT cohort (p = 0.5). Similarly, mean intensive care unit stay (4.7 versus 6.5 days; p = 0.5), ventilator time (2.4 versus 4.2 days; p = 0.5), and length of stay (13.5 versus 17 days; p = 0.2) did not differ significantly between the groups. The overall major complication rates were similar in the CRT and esophagectomy alone cohorts: 57.4% versus 56% (p = 0.98). Multivariate analysis determined that coronary artery disease (p = 0.01; odds ratio 3.5) and transthoracic esophagectomy (p = 0.05; odds ratio 1.4) were predictive of development of postoperative complications. Only cervical anastomotic location (p = 0.04; odds ratio 3.0) was predictive of anastomotic leak on multivariate analysis. Conclusions: Neoadjuvant chemoradiation does not appear to increase postoperative morbidity or mortality after esophagectomy. Major postoperative complications are associated with the transthoracic approach and preoperative coronary artery disease. abstract_id: PUBMED:27217665 Radical Esophagectomy After Neoadjuvant Chemoradiation: Single Institutional Experience from Tertiary Cancer Centre in India. Although preoperative chemoradiation has shown to improve surgical outcomes in both loco-regional control and long term survival; it has still not become the standard of care in many centers. There is reluctance in accepting preoperative chemoradiation primarily due to fear of increased perioperative morbidity/mortality or non-availability of infrastructure and expertise. We present a retrospective analysis of our results of radical esophagectomy after neoadjuvant chemoradiation. All patients who underwent Radical Esophagectomy from January 2009 to December 2013 by a single surgical team at our institute were included in the series (n = 118). Patients undergoing surgery after chemo-radiation (group A = 66) were compared with those under going upfront surgery (group B = 52) in terms of patient variables (age, sex, comorbidities, tumor location, staging, histology) and postoperative surgical outcomes and complications using Chi square test. Overall and disease free survival was analyzed using Kaplan Meir curve. There was no difference in duration of surgery, postoperative stay and overall morbidity and mortality in both groups. Although group A patients had more of advanced cases clinically, but histopathology showed complete pathological response (pCR) in nearly 40 % patients and negative nodes (pN0) in 62.5 % patients. OS and DFS showed a trend towards better survival with preoperative chemoradiation. We conclude that radical esophagectomy after preoperative chemoradiation is feasible and safe in developing countries. Moreover pathological complete response correlates well with improved survival. Randomized control trials may be required to further substantiate the results. abstract_id: PUBMED:33281406 Robot-Assisted Esophagectomy After Neoadjuvant Chemoradiation-Current Status and Future Prospects. Multimodality treatment with neoadjuvant chemoradiation followed by surgery has become the standard of care for esophageal cancer. In the recent years, there has been a shift in focus of surgical approach from open esophagectomy to minimally invasive esophagectomy. Robot-assisted esophagectomy is being performed more often in centers across the world. However, there is limited data on role of robot-assisted esophagectomy in patients who have received neoadjuvant chemoradiation. Initial reports have shown that integrating neoadjuvant therapy to robot-assisted esophagectomy is feasible and safe. With the growing popularity of robot-assisted surgery worldwide among both surgeons and patients, understanding the impact of neoadjuvant chemoradiation on the procedure and its oncological outcome seems worthwhile. In the present study, we present a review of available literature on the feasibility and safety of robot-assisted minimally invasive esophagectomy in esophageal cancer patients after neoadjuvant chemoradiation. abstract_id: PUBMED:29581681 Robot-Assisted Mckeown Esophagectomy is Feasible After Neoadjuvant Chemoradiation. Our Initial Experience. Neoadjuvant chemoradiation has become the standard of care for esophageal cancer, especially for middle third esophageal lesions and those with squamous histology. Although more and more thoracic surgeons and surgical oncologists have now shifted to video-assisted and robot-assisted thoracoscopic esophagectomy; there is still limited experience for the use of minimal-assisted approaches in patients undergoing surgery after neoadjuvant chemoradiation. Most surgeons have concerns of feasibility, safety, and oncological outcomes as well as issues related to difficult learning curve in adopting robotic esophagectomy in patients after chemoradiation. We present our initial experience of Robot-Assisted Mckeown Esophagectomy in 27 patients after neoadjuvant chemoradiation, from May 2013 to October 2014. All patients underwent neoadjuvant chemoradiation to a dose of 50.4 Gy/25Fr with concurrent weekly cisplatin, followed by reassessment with clinical examination and repeat FDG PET/CT 6 weeks after completion of chemoradiation. Patients with progressive disease underwent palliative chemotherapy while patients with either partial or significant response to chemoradiation underwent Robot-Assisted Mckeown Esophagectomy with esophageal replacement by gastric conduit and esophagogastric anastomosis in the left neck. Out of 27 patients, 92.5 % patients had stage cT3/T4 tumours and node-positive disease in 48.1 % on imaging. Most patients were middle thoracic esophageal cancers (23/27), with squamous histology in all except for one. All patients received neoadjuvant chemoradiation and subsequently underwent Robot Assisted Mckeown Esophagectomy. The average time for robot docking, thoracic mobilization and total surgical procedure was 13.2, 108.4 and 342.7 min, respectively. The procedure was well tolerated by all patients with only one case of peri-operative mortality. Average ICU stay was 6.35 days (range 3-9 days). R0 resection rate of 96.3 % and average lymph node yield of 18 could be achieved. Pathological node negativity rate (pN0) and complete response (pCR) were 66.6 and 44.4 %, respectively. In the initial cases, four patients had to be converted to open due technical reasons or intraoperative complications. The present study, with shorter operative times, similar ICU stay, overall low morbidity, and mortality and optimal oncological outcomes suggest that robot-assisted thoracic mobilization of esophagus in patients with prior chemoradiation is feasible and safe with acceptable oncological outcomes. It has a shorter learning curve and hence allows for a transthoracic minimally invasive transthoracic esophagectomy to more and more patients, otherwise unfit for conventional approach. abstract_id: PUBMED:18355499 Neoadjuvant chemoradiation versus chemotherapy for patients undergoing esophagectomy for esophageal cancer. Background: Neoadjuvant chemoradiation followed by esophagectomy is currently the standard of care for locally advanced esophageal cancer. This intense preoperative regimen delays definitive resection and increases perioperative risks. With the improvement of chemotherapy agents, chemotherapy alone may be better suited for patients awaiting esophagectomy because of shorter preoperative treatment time and less associated perioperative complications. No recent study has compared chemoradiation to chemotherapy alone before esophageal resection with respect to operative morbidity and mortality and overall survival. Methods: A retrospective review was performed of all patients (281) who underwent an esophagectomy for cancer at our institution from July 1995 through June 2005; 122 patients (43%) had neoadjuvant treatment and form the basis of this study. Results: Preoperative chemoradiation (CR) was administered in 64 patients and chemotherapy only (CO) in 58 patients. Operative mortality was 6% (4 patients) in the CR group and 0% in the CO group (p = 0.12). Overall postoperative complications rate was 48% in CR patients and 33% in CO patients (p = 0.09). Complete pathologic response occurred in 11 CR patients (17%) and in 2 CO patients (4%; p = 0.02). There was no difference in recurrences between the two groups (p = 0.43). Median survival was 17 months in the CR patients and 21 months in the CO patients (p = 0.14). One-, 3-, and 5-year survivals were 76%, 46%, and 41%, respectively, in the CR patients and 70%, 40%, and 31%, respectively, in the CO patients (p = 0.31). Conclusions: Although neoadjuvant chemoradiation resulted in a significantly better complete pathologic response rate when compared with chemotherapy alone, that did not translate into a long-term survival advantage. Chemotherapy alone may be the preferred neoadjuvant modality to expedite resection, decrease operative mortality and postoperative complications, and improve survival in patients with locally advanced esophageal cancer. abstract_id: PUBMED:29469074 Perioperative complications of esophagectomy: Postneoadjuvant treatment versus primary surgery - Our experience and review of literature. Aims: : To compare perioperative complications in esophagectomy after neoadjuvant therapy v/s primary surgery. Settings And Design: : Retrospective analysis of perioperative complications in a prospectively maintained data base of patients who underwent esophagectomy as Primary surgery or after neoadjuvant therapy was done. Methods And Material: : 238 cases of esophagectomies performed for esophageal carcinoma were analysed and compared, out of which 125(52.5%) were given neoadjuvant therapy followed by surgery and 113(47.5%) underwent primary surgery. Surgical procedure was standard for both the groups. All the cases were analysed for perioperative complications. Statistical Analysis Used: : Data was analysed using Open Epi soft ware. Association between the two study group was assessed with Chi square test. Results: : On comparison, both the groups were comparable in demographic profile and type of surgery performed. However, tumour stage was higher for cases who received neoadjuvant therapy as expected. On analysis there was no significant difference in overall morbidity and 30 days mortality. Conclusions: : Neoadjuvant Chemo/chemoradiotherapy is a feasible option in esophageal carcinoma without increase in incidence of peri operative morbidity or mortality. abstract_id: PUBMED:31253184 Total laparoscopic and thoracoscopic Ivor Lewis esophagectomy after neoadjuvant Chemoradiation with minimal overall and anastomotic complications. Background: The published rates of morbidity and mortality remain relatively high for patients who undergo laparoscopic and thoracoscopic Ivor Lewis esophagectomy. We report the postoperative and oncologic outcomes of a large cohort of patients with esophageal carcinoma who were uniformly treated with laparoscopic and thoracoscopic Ivor Lewis esophagectomy following neoadjuvant chemoradiation. Methods: This is a retrospective observational study of 112 patients diagnosed with esophageal carcinoma who underwent total laparoscopic and thoracoscopic Ivor Lewis esophagectomy from May 2014 to May 2018. All of the patients received neoadjuvant chemoradiation consisting of 45 to 50.4 Gray of radiation and 3-5 cycles of carboplatin and paclitaxel chemotherapy. Perioperative morbidity and 90-day mortality were recorded. The overall and disease-free survival rates were estimated by Kaplan Meier techniques. Results: A total of 112 patients completed induction chemoradiation followed by a total laparoscopic and thoracoscopic Ivor Lewis esophagectomy. There were 87 (77.68%) males and 25 (22.32%) females with a mean age of 61.6 years ± 10.4. A total of 28 (25%) patients had one or more complications. A total of 4 patients (3.57%) had an anastomotic leak. The 90-day mortality rate was 0.89%. The 3-year overall survival rate was 64.7% and the 3-year disease-free survival rate was 70.2%. Conclusion: The current outcomes suggest that laparoscopic and thoracoscopic Ivor Lewis esophagectomy can be performed with minimal overall and anastomotic complications following neoadjuvant chemoradiation. Answer: The available evidence suggests that elderly patients do not experience significantly increased perioperative or postoperative morbidity or mortality when given neoadjuvant chemoradiation before esophagectomy. Several studies have compared the outcomes of elderly patients to those of younger patients undergoing this treatment regimen. One study found no significant differences in morbidity and mortality between patients 70 years and older and those younger than 70 who received chemoradiotherapy (CRT) followed by esophagectomy. The presence of cardiac disease, higher scores on the Charlson index, or diabetes did not significantly influence the length of stay, postoperative complications, or postoperative death (PUBMED:21106309). Another study reported that elderly patients receiving neoadjuvant therapies for cancer of the esophagus or esophagogastric junction did not have a significantly increased prevalence of mortality and major postoperative complications, although cardiovascular complications were more likely to occur (PUBMED:17713823). A study on hybrid minimally invasive Ivor Lewis esophagectomy after neoadjuvant chemoradiation showed excellent long-term survival outcomes with minimal morbidity, indicating that this approach is well tolerated by patients, including the elderly, after neoadjuvant chemoradiation (PUBMED:27569043). Furthermore, a retrospective review comparing patients who underwent CRT followed by esophagectomy with those who underwent esophagectomy alone found that neoadjuvant chemoradiation does not appear to increase postoperative morbidity or mortality (PUBMED:21945223). A single institutional experience from a tertiary cancer center in India also concluded that radical esophagectomy after preoperative chemoradiation is feasible and safe, with no difference in overall morbidity and mortality between patients who had neoadjuvant chemoradiation and those who had upfront surgery (PUBMED:27217665). Additionally, studies on robot-assisted esophagectomy after neoadjuvant chemoradiation have shown that this approach is feasible and safe, with initial reports indicating that integrating neoadjuvant therapy with robot-assisted esophagectomy does not adversely impact the procedure or its oncological outcomes (PUBMED:33281406, PUBMED:29581681). In summary, the evidence suggests that advanced age should not be considered a contraindication to preoperative chemoradiation therapy preceding esophageal resection in carefully selected fit patients, as it does not significantly increase perioperative or postoperative morbidity or mortality (PUBMED:17713823, PUBMED:21106309).
Instruction: Myocardial infarction and prevalence of diabetes mellitus. Is increased casual blood glucose at admission a reliable criterion for the diagnosis of diabetes? Abstracts: abstract_id: PUBMED:11428850 Myocardial infarction and prevalence of diabetes mellitus. Is increased casual blood glucose at admission a reliable criterion for the diagnosis of diabetes? Aims: To investigate the prevalence of diabetes mellitus in patients with acute myocardial infarction and to determine whether casual blood glucose and haemoglobin A1c measured at admission could be used to diagnose diabetes mellitus. Methods And Results: A prospective study that included all patients with acute myocardial infarction hospitalized during a one-year period at a coronary care unit. Casual blood glucose was measured at admission, fasting blood glucose during the hospital stay and, if necessary for classification, 2-3 months after discharge. Haemoglobin A1c was measured once at admission. Of 305 patients included in the study, 285 could be classified into three groups: 21% of these had previously diagnosed diabetes, 4% had newly diagnosed diabetes and the remaining patients were categorized as non-diabetic. Casual blood glucose &gt;or=11.1 mmol x l(-1)at admission was found in 12 patients with no previously known diabetes, but diabetes mellitus was confirmed in only six of these patients. Haemoglobin A1c showed considerable overlapping of values between the three groups of patients (i.e. patients with known diabetes mellitus, patients with newly diagnosed diabetes mellitus and non-diabetics). Conclusion: One of four patients with acute myocardial infarction had diabetes mellitus. Increased casual blood glucose at admission was not a reliable measure to establish a diagnosis of diabetes and thus follow-up measures were necessary. Haemoglobin A1c was found to be an unreliable measure in the verification of diabetes. abstract_id: PUBMED:31982283 Admission glucose level was associated with increased short-term mortality and length-of-stay irrespective of diagnosis, treating medical specialty or concomitant laboratory values. Background: Glucose is a routine emergency sample. General guidelines for inpatient hyperglycemia are scarce, except in myocardial infarction, stroke, and perioperative/ICU. Previous studies found admission glucose associated with increased mortality in specific conditions. Scandinavian data, and for general patients, are scarcer. We investigated admission glucose levels, 30-day mortality, and length-of-stay (LoS), in a Swedish hospital. Methods: From 8146 emergency visits data regarding age, gender, dates of admission, discharge and death, diagnoses, admission p-glucose, s-sodium, s-potassium, b-hemoglobin, b-WBC and s-CRP, was collected, and for 6283 information regarding diagnosis of diabetes the previous 5 years. Visits were grouped in hypoglycemia (≤4.0), normoglycemia (&gt;4.0-≤7.0), modest (&gt;7.0-≤11.1) and severe hyperglycemia (&gt;11.1) mmol/l. Results: Short-term mortality was 1.5% in the normoglycemic, 2.6% in the hypoglycemic, 4.0-4.5% in modest and severe hyperglycemia, p &lt; 0.001; Cox proportional hazard ratios (HR) for groups of patients without/with diabetes were 6.8; 1; 3.4; 4.4/7.3; 3.9; 4.0; 2.1 compared to the normoglycemic without diabetes (p 0.0001-0.05); adjusted for age, and concurrent levels of sodium, potassium, Hb, WBC and CRP 1.51 (1.07-2.1, p 0.02) with modest hyperglycemia, and 1.08 (0.60-1.95, p 0.80) in severe hyperglycemia. Mean LoS was 1.2 and 1.7 days longer with modest and severe hyperglycemia. Conclusions: Short-term mortality increased substantially with admission hypo- and hyperglycemia for patients both with and without diabetes, irrespective of treating medical specialty, main discharge diagnosis, or concurrent laboratory values. Patients with diabetes (16%) were older, with higher glucose levels at admission, and with a different pattern of the association of admission glucose and mortality. abstract_id: PUBMED:19824882 Prognostic value of admission blood glucose concentration and diabetes diagnosis on survival after acute myocardial infarction: results from 4702 index cases in routine practice. The diagnosis of diabetes and admission blood glucose concentration are associated with adverse outcome after acute coronary syndromes. We compared the relative association with survival after ST elevation AMI (acute myocardial infarction) of admission blood glucose concentration and of diabetes diagnosis. We carried out a retrospective cohort study in 4702 consecutive patients with STEMI (ST elevation AMI) occurring from 1 April 1993 to 31 December 2005, assessed for mortality at 30 days and 1 year. Patients were classified according to antecedent diabetes and by blood glucose concentration at admission (quartile 1, &lt;7 mmol/l; quartile 2, 7–8.2 mmol/l;quartile 3, 8.3–10.9 mmol/l; quartile 4, 11 mmol/l). Multivariable models were constructed for determinants of mortality, including year of STEMI and demographic variables, entering blood glucose concentration and antecedent diabetes individually and together. All-cause 30-day and 1-year mortality were 22.8% and 31.3% for patients with antecedent diabetes, compared with 16.3% and 23.0% respectively for those without. For glucose quartiles 1, 2, 3 and 4, crude 30-day mortality was 9.0%, 10.6%, 17.9% and 31.0%. Adjusted 30-day mortality risk was similar in quartile 2, higher by &gt;80% in quartile 3 and by &gt;150% in quartile 4, compared with glucose quartile 1. Antecedent diabetes was associated with an increase in mortality [unadjusted odds ratio (OR)1.52 (95% CI 1.24, 1.86)]. On multivariable analysis (excluding glucose quartile), this reduced to 1.24 (0.98, 1.58) and changed to a small, statistically non-significant reduction in risk when glucose quartile was added to the analysis [adjusted OR 0.87 (0.67, 1.13)]. Inclusion of antecedent diabetes in multivariable models did not add to the predictive value for mortality of glucose quartile(P=0.368). Similar relationships were observed for 1 year mortality. In patients with STEMI, blood glucose concentration shows graded association with risk of 30-day and 1-year mortality and is of greater prognostic relevance than antecedent diabetes diagnosis. Moderate elevation of blood glucose, below levels previously considered to be clinically relevant, is associated with adverse impact on survival. abstract_id: PUBMED:15136307 Admission blood glucose level as risk indicator of death after myocardial infarction in patients with and without diabetes mellitus. Background: High admission blood glucose levels after acute myocardial infarction (AMI) are common and associated with an increased risk of death in subjects with and without known diabetes. Recent data indicate a high prevalence of abnormal glucose metabolism in patients with unknown diabetes at the time of AMI. We investigated the predictive value of admission blood glucose levels after AMI for long-term prognosis in patients with and without previously diagnosed diabetes mellitus, particularly in those with unknown diabetes but with blood glucose levels in the diabetic range. Methods: In a retrospective study with prospective follow-up, 846 patients (737 without and 109 with known diabetes) were eligible for follow-up during a median of 50 months (range, 0-93 months). Results: During follow-up, 208 nondiabetic patients (28.2%) and 47 diabetic patients (43.1%) died (P =.002). An increase of 18 mg/dL (1 mmol/L) in glucose level was associated with a 4% increase of mortality risk in nondiabetic patients and 5% in diabetic patients (both P&lt;.05). Of the 737 previously nondiabetic subjects, 101 had admission blood glucose levels of 200 mg/dL (11.1 mmol/L) or more, and mortality in these patients was comparable to that in patients who had established diabetes (42.6% vs 43.1%). Conclusions: Admission blood glucose level after AMI is an independent predictor of long-term mortality in patients with and without known diabetes. Subjects with unknown diabetes and admission glucose levels of 200 mg/dL (11.1 mmol/L) or more after AMI have mortality rates comparable to those of subjects with established diabetes. Admission blood glucose level may serve to identify subjects at high long-term mortality risk, in particular among those with unknown diabetes. abstract_id: PUBMED:27788389 The association of admission blood glucose level with the clinical picture and prognosis in cardiogenic shock - Results from the CardShock Study. Background: Critically ill patients often present with hyperglycemia, regardless of previous history of diabetes mellitus (DM). Hyperglycemia has been associated with adverse outcome in acute myocardial infarction and acute heart failure. We investigated the association of admission blood glucose level with the clinical picture and short-term mortality in cardiogenic shock (CS). Methods: Consecutively enrolled CS patients were divided into five categories according to plasma glucose level at the time of enrolment: hypoglycemia (glucose &lt;4.0mmol/L), normoglycemia (4.0-7.9mmol/L), mild (8.0-11.9mmol/L), moderate (12.0-15.9mmol/L), and severe (≥16.0mmol/L) hyperglycemia. Clinical presentation, biochemistry, and short-term mortality were compared between the groups. Results: Plasma glucose level of 211 CS patients was recorded. Glucose levels were distributed equally between normoglycemia (26% of patients), mild (27%), moderate (19%) and severe (25%) hyperglycemia, while hypoglycemia (2%) was rare. Severe hyperglycemia was associated with higher blood leukocyte count (17.3 (5.8) E9/L), higher lactate level (4.4 (3.3-8.4) mmol/L) and lower arterial pH (7.23 (0.14)) compared with normoglycemia or mild to moderate hyperglycemia (p&lt;0.001 for all). In-hospital mortality was highest among hypoglycemic (60%) and severely hyperglycemic (56%) patients, compared with 22% in normoglycemic group (p&lt;0.01). Severe hyperglycemia was an independent predictor of in-hospital mortality (OR 3.7, 95% CI 1.19-11.7, p=0.02), when adjusted for age, gender, LVEF, lactate, and DM. Conclusions: Admission blood glucose level has prognostic significance in CS. Mortality is highest among patients with severe hyperglycemia or hypoglycemia. Severe hyperglycemia is independently associated with high in-hospital mortality in CS. It is also associated with biomarkers of systemic hypoperfusion and stress response. abstract_id: PUBMED:19962462 Effect of blood glucose concentrations on admission in non-diabetic versus diabetic patients with first acute myocardial infarction on short- and long-term mortality (from the MONICA/KORA Augsburg Myocardial Infarction Registry). The aim of this study was to investigate the association between increased admission glucose in nondiabetic (ND) patients and in patients with type 2 diabetes mellitus (T2DM) with first acute myocardial infarctions (AMIs) and 28-day as well as 1- and 3-year case fatality. The Monitoring Trends and Determinants in Cardiovascular Disease (MONICA)/Cooperative Health Research in the Region of Augsburg (KORA) myocardial infarction registry database in Augsburg, Germany, was used, and 1,631 patients without and 659 patients with T2DM (aged 25 to 74 years) who were admitted from 1998 to 2003 with first AMIs were included. Mortality follow-up was carried out in 2005. ND patients with AMIs with admission glucose &gt;152 mg/dl (top quartile) compared with those in the bottom quartile had an odds ratio of 2.82 (95% confidence interval [CI] 1.30 to 6.12) for death within 28 days after multivariate adjustment; correspondingly, patients with T2DM with admission glucose &gt;278 mg/dl (top quartile) compared with those in the bottom quartile (&lt;152 mg/dl) showed a nonsignificantly increased odds ratio of 1.45 (95% CI 0.64 to 3.31). After the exclusion of patients who died within 28 days, a nonsignificantly increased relative risk (RR) was seen between admission blood glucose and 1-year mortality in ND subjects (RR 2.71, 95% CI 0.90 to 8.15), whereas no increased RR was found in subjects with diabetes (RR 0.99, 95% CI 0.34 to 2.82). After 3 years, there was no increased risk for death in patients with high admission blood glucose levels, neither for ND patients nor for those with T2DM. In conclusion, elevated admission blood glucose is associated with increased short-term mortality risk in patients with AMIs, particularly in ND subjects. These patients constitute a high-risk group needing aggressive, comprehensive polypharmacotherapy. abstract_id: PUBMED:2602543 Prognostic importance of admission plasma glucose in diabetic and non-diabetic patients with acute myocardial infarction. We assessed the relationship of plasma glucose concentrations measured on admission to mortality during the acute phase of myocardial infarction in 143 diabetic patients and 277 patients without a previous history of diabetes. Mortality in hospital in patients not known to have diabetes increased significantly from 4 per cent in patients with admission plasma glucose below 8 mmol/l to 35 per cent in patients with admission plasma glucose above 11 mmol/l. In diabetic patients the mortality increased with increasing admission plasma glucose but the difference was not significant. In patients with diabetes mean admission plasma glucose levels of the survivors and those who died were similar, whilst in the non-diabetic group the mean plasma glucose levels of the patients who did not survive were significantly higher than those of the survivors. Stepwise logistic regression analyses identified admission plasma glucose level as an important predictor of mortality in the non-diabetic group but not in the diabetic patients. Plasma glucose level on admission is an important prognostic indicator in non-diabetic patients, in that hyperglycaemia is associated with a higher mortality. In diabetic patients a clear relationship between admission plasma glucose and mortality was not demonstrated. abstract_id: PUBMED:22041344 Admission glucose level and in-hospital outcomes in diabetic and non-diabetic patients with ST-elevation acute myocardial infarction. Background: Hyperglycemia on admission is a predictor of an unfavorable prognosis in patients with ST-elevation Acute Myocardial Infarction (AMI). Data concerning associations between an elevated glucose level on admission and other in-hospital complications are still limited. Methods: A total of 1,137 AMI patients with complete admission blood glucose level (ABGL) analysis were identified and stratified according to ABGL. Results: A total of 16.1% patients had admission glucose level &lt;5 mmol/L, 36.1% &lt;7 mmol/L, 20.2% &lt;9 mmol/L, 9.9% &lt;11 mmol/L and 17.7% ≥11 mmol/L. Compared with the euglycemia group, both the hypo- and hyperglycemia groups were associated with higher in-hospital mortality. In-hospital mortality of diabetic patients with hypoglycemia (12.2%) was higher than that of diabetic patients with either euglycemia or mild hyperglycemia (11.1%, or 10.7% relatively). The same results were seen in non-diabetic patients. In the logistic regression analysis, admission glucose and cardiac function of Killip grade were the independent predictors of in-hospital death for patients with AMI. Conclusion: Elevated admission glucose levels are associated with an increased risk of life-threatening complications in diabetic and non-diabetic AMI patients. Compared with the euglycemia group, hypoglycemia was associated with a higher trend of in-hospital mortality. abstract_id: PUBMED:28344659 Comparison of in-hospital glycemic variability and admission blood glucose in predicting short-term outcomes in non-diabetes patients with ST elevation myocardial infarction underwent percutaneous coronary intervention. Aims: Admission hyperglycemia is associated with increased mortality and major adverse cardiac events (MACE) in patients with or without diabetes mellitus after acute myocardial infarction (AMI). However, effects of glycemic variability (GV) on outcomes of non-diabetes patients with AMI still remains unclear. The aim of this study is to compare the prognostic value of in-hospital GV with admission blood glucose (ABG) for 3-month MACE in non-diabetes patients with ST elevation myocardial infarction (STEMI) who underwent percutaneous coronary intervention (PCI). Methods: We analyzed 256 non-diabetes patients with STEMI in study. The GV accessed by mean amplitude of glycemic excursions (MAGE) was calculated from blood glucose profiles of continuous glucose monitoring system (CGMS) during hospitalization. ABG was measured on admission. Main endpoints were 3-month MACE; secondary endpoints were GRACE scores and enzymatic infarct size. Predictive effects of MAGE and ABG on the MACE in patients were analyzed. Results: In all participants, MAGE level was associated with ABG level (r = 0.242, p &lt; 0.001). Both elevated MAGE levels (p = 0.001) and elevated ABG (p = 0.046) were associated with incidences of short-term MACE. Patients with a higher MAGE level had a significantly higher cardiac mortality (5.8 vs. 0.6%, p = 0.017) and incidence of acute heart failure (12.8 vs. 2.4%, p = 0.001) during 3 months follow-up. In multivariable analysis, high MAGE level (HR 2.165, p = 0.023) was significantly associated with incidence of short-term MACE, but ABG (HR 1.632, p = 0.184) was not. The area under the receiver-operating characteristic curve for MAGE (0.690, p &lt; 0.001) was superior to that for ABG (0.581, p = 0.076). Conclusions: To compare with ABG, in-hospital GV may be a more important predictor of short-term MACE and mortality in non-diabetes patients with STEMI treated with PCI. abstract_id: PUBMED:21348814 Acute myocardial infarction: high risk ventricular tachyarrhythmias and admission glucose level in patients with and without diabetes mellitus. Objective: The aim of this study was to evaluate the impact of glucose levels on admission and High Risk Ventricular Tachyarrhythmia (HRVT) in hospital mortality in patients with Acute Myocardial Infarction (AMI). Methods: We studied 1.258 consecutive patients admitted to the Coronary Care Unit with a diagnosis of AMI. Ventricular Fibrillation, sustained and nonsustained Ventricular Tachycardia were considered as HRVT. Association between admission glucose, in-hospital mortality and HRVT was assessed with Cox regression analysis. Results: The overall incidence of in-hospital HRVT was 20% (254/1258 patients) and in-hospital mortality (115/1.258) was higher in patients with HRVT (20% vs 6%) (p&lt; 0.001). Diabetes Mellitus (DM) was present in 441 patients (35%). Optimal threshold level of glycemia admission to predict ventricular arrhythmia was 180 mg/dl (AUC = 0.716; 0.66- 0.76)(p&lt;0.001). Patients with euglycemia on admission (&lt; 120 mg/dL) had lowest prevalence of HRVT (13%)(OR=0.6;0.46-0.78) in contrast to non DM patients who presented glucose 180 mg/dL that exhibited 2-fold increase of in-hospital HRVT (36%; OR=2.2; 1.6-3)(p&lt;0.001). Multivariate risk adjusted hazard ratio (HR) analysis showed that, blood pressure &lt; 100 mmHg (HR=2.4; 1.6-3.6)(p&lt;0.001), White Blood Count (WBC)&gt;10.000 cell count (HR=1.44;1.02-2)(p=0.04) and admission glycemia 180 mg/dL (HR=1.5; 1.04-2.3)( p=0.03) had a significantly increased risk in in-hospital HRTV only in NDM patients. Conclusions: The higher glycemia on admission the higher prevalence of life-threatening arrhythmia and mortality regardless diabetes status in patients presenting with AMI. Elevated initial glucose level and WBC count considered along with other clinical data can assist in life-threatening ventricular arrhythmia in non diabetic patients. Answer: The prevalence of diabetes mellitus in patients with acute myocardial infarction is significant, with one study finding that one in four patients with acute myocardial infarction had diabetes mellitus. However, increased casual blood glucose at admission was not found to be a reliable measure for establishing a diagnosis of diabetes. In the study, casual blood glucose levels greater than or equal to 11.1 mmol/L were observed in 12 patients without previously known diabetes, but diabetes mellitus was confirmed in only half of these cases. Furthermore, haemoglobin A1c, which was measured at admission, showed considerable overlapping of values between patients with known diabetes, newly diagnosed diabetes, and non-diabetics, indicating that it was an unreliable measure for verifying diabetes in the context of acute myocardial infarction (PUBMED:11428850). Therefore, while there is a high prevalence of diabetes among patients with acute myocardial infarction, increased casual blood glucose at admission should not be used as a sole criterion for the diagnosis of diabetes. Follow-up measures are necessary to accurately diagnose diabetes in these patients.
Instruction: Cytotoxic extravasation: an issue disappearing or a problem without solution? Abstracts: abstract_id: PUBMED:27032703 Cytotoxic extravasation: an issue disappearing or a problem without solution? Purpose: The incidence and management of antitumoral compound extravasation that occurred in our medical day hospital unit were registered in a 10-year period. Methods: A total of 114 episodes were consecutively recorded out of an estimated number of 211,948 administrations performed (0.05%). Type of compound, localization, timing, symptoms, treatment, resolution, or sequelae were documented. Results: Extravasations after anthracyclines (17/114), platinum compounds (34/114), vinca alkaloids (7/114), and taxanes (34/114) were more frequently associated with edema and erythema ± pain. Five cases of monoclonal antibodies extravasation were observed without sequelae. With the involvement of an interdisciplinary task force and the use of dedicated guidelines, conservative management was successful in all patients. In the great majority of cases, recovery was complete within 48 hours after antidote administration. The support of our pharmacy was crucial. Physiatric evaluation was considered in several cases. No patients required surgery. Conclusions: We confirm that the adopted standardized approach to this event resulted in a satisfactory outcome and could be suggested as appropriate for managing extravasation in a large clinical context. abstract_id: PUBMED:28318850 Safety and efficacy of hyperosmolar irrigation solution in shoulder arthroscopy. Background: A hyperosmolar irrigation solution has been reported to be safe and have potential benefits for use during shoulder arthroscopy in an animal model study. In this study, the clinical effects of a hyperosmolar solution were compared with a standard isotonic solution when used for shoulder arthroscopy. Methods: A prospective, double-blind, randomized controlled trial was performed to compare isotonic (273 mOsm/L) and hyperosmolar (593 mOsm/L) irrigation solutions used for arthroscopic rotator cuff repair. Primary outcomes focused on the amount of periarticular fluid retention based on net weight gain, change in shoulder girth, and pain. All patients were tracked through standard postsurgical follow-up to ensure no additional complications arose. Patients were contacted at 1 year to assess American Shoulder and Elbow Surgeon score, visual analog scale pain score, and the Single Assessment Numeric Evaluation shoulder scores RESULTS: Fifty patients (n = 25/group) were enrolled and completed the study. No statistically significant differences were noted between cohorts in demographics or surgical variables. The hyperosmolar group experienced significantly less mean weight gain (1.6 ± 0.82 kg vs. 2.25 ± 0.77 kg; P = .005), significantly less change in shoulder girth (P &lt; .05), and a significantly lower immediate postoperative visual analog scale pain score (P = .036). At 1 year postoperatively, the differences between groups for American Shoulder and Elbow Surgeons, visual analog scale pain, and Single Assessment Numeric Evaluation were not significant (P &gt; .2). Conclusion: A hyperosmolar irrigation solution provides a safe and effective way to decrease periarticular fluid retention associated with arthroscopic rotator cuff surgery without any adverse long-term effects. Use of a hyperosmolar irrigation solution for shoulder arthroscopy has potential clinical benefits to surgeons and patients. abstract_id: PUBMED:25725966 Hyperosmolar irrigation compared with a standard solution in a canine shoulder arthroscopy model. Background: A hyperosmolar irrigation solution may decrease fluid extravasation during arthroscopic procedures. Demonstrating the safety of a hyperosmolar irrigation solution with respect to chondrocyte viability and cartilage water content was deemed necessary before designing a clinical efficacy study. Methods: We designed a translational animal model study in which hyperosmolar arthroscopy irrigation fluid (1.8%, 600 mOsm/L) was compared with normal saline (0.9%, 300 mOsm/L). Purpose-bred research dogs (n = 5) underwent bilateral shoulder arthroscopy. Irrigation fluid was delivered to each shoulder joint (n = 10) at 40 mm Hg for 120 minutes using standard ingress and egress portals. The percentage change in shoulder girth was documented at the completion of 120 minutes. Articular cartilage sections from the glenoid and humeral head were harvested from both shoulders. Chondrocyte viability and tissue water content were evaluated. Differences between groups and compared with time 0 controls were determined, with significance set at P &lt;.05. Results: The mean percentage change in shoulder girth was higher in the isotonic control group (13.3%) than in the hyperosmolar group (10.4%). Chondrocyte viability and tissue water content for glenoid and humeral head cartilage were well maintained in both treatment groups, and differences were not statistically significant. Conclusions: The data from this study suggest that doubling the osmolarity of the standard irrigation solution used for arthroscopy was not associated with any detrimental effects on chondrocyte viability or tissue water content after 2 hours of arthroscopic irrigation. On the basis of potential benefits in conjunction with the safety demonstrated in these data, clinical evaluation of a hyperosmolar solution for irrigation during shoulder arthroscopy appears warranted. abstract_id: PUBMED:18788219 A hyperosmolar-colloidal additive to the CPB-priming solution reduces fluid load and fluid extravasation during tepid CPB. Cardiopulmonary bypass(CPB) is associated with fluid overload. We hypothesized that fluid gain during CPB could be reduced by substituting parts of a crystalloid prime with 7.2% hypertonic saline and 6% poly (O-2-hydroxyethyl) starch solution (HyperHaes). 14 animals were randomized to a control group (Group C) or to Group H. CPB-prime in Group C was Ringer's solution. In group H, 4 ml/kg of Ringer's solution was replaced by the hypertonic saline/hydroxyethyl starch solution. After 60 min stabilization, CPB was initiated and continued for 120 min. All animals were allowed drifting of normal temperature (39.0 degrees C) to about 35.0 degrees C. Fluid was added to the CPB circuit as needed to maintain a 300-ml level in the venous reservoir. Blood chemistry, hemodynamic parameters, fluid balance, plasma volume, fluid extravasation rate (FER), tissue water content and acid-base parameters were measured/calculated. Total fluid need during 120 min CPB was reduced by 60% when hypertonic saline/hydroxyethyl starch solution was added to the CPB prime (p &lt; 0.01). The reduction was related to a lowered FER. The effect was most pronounced during the first 30 min on CPB, with 0.6 (0.43) (Group H) compared with 1.5 (0.40) ml/kg/min (Group C) (p &lt; 0.01). Hemodynamics and laboratory parameters were similar in both groups. Serum concentrations of sodium and chloride increased to maximum levels of 148 (1.5) and 112 (1.6) mmol/l in Group H. To conclude: addition of 7.2% hypertonic saline and 6% poly (O-2-hydroxyethyl) starch solution to crystalloid CPB prime reduces fluid needs and FER during tepid CPB. abstract_id: PUBMED:29199540 Is the use of hydroxyethyl starch as priming solution during cardiac surgery advisable? A randomized, single-center trial. Introduction: The use of cardiopulmonary bypass (CPB) leads to increased fluid filtration and edema. The use of artificial colloids to counteract fluid extravasation during cardiac surgery is controversial. Beneficial effects on global fluid loading, leading to better cardiac performance and hemodynamics, have been claimed. However, renal function and coagulation may be adversely affected, with unfavorable impact on outcome following cardiac surgery. Methods: Forty patients were randomly allocated to study groups receiving either acetated Ringer's solution (CT group) or hydroxyethyl starch (HES group, Tetraspan®) as CPB priming solution. Fluid balance, bleeding and hemodynamics, including cardiac output, were followed postoperatively. The occurrence of acute kidney injury was closely registered. Results: Two patients were excluded from further analyzes due to surgical complications. Fluid accumulation was attenuated in the HES group (3374 (883) ml) compared with the CT group (4328 (1469) ml) (p=0.024). The reduced perioperative fluid accumulation was accompanied by an increased cardiac index immediately after surgery (2.7 (0.4) L/min/m2 in the HES group and 2.1 (0.3) L/min/m2 in the CT group (p&lt;0.001)). No increase in bleeding could be demonstrated in the HES group. Three patients, all of them in the HES group, experienced acute kidney injury postoperatively. Conclusions: CPB priming with HES solution lowers fluid loading during bypass and improves cardiac function in the early postoperative period. The manifestation of acute kidney injury exclusively in the HES group of patients raises doubts about the use of HES products in conjunction with cardiac surgery. ( https://clinicaltrials.gov/ct2/show/NCT01511120 ). abstract_id: PUBMED:24900456 Discovery and Development of Potent LFA-1/ICAM-1 Antagonist SAR 1118 as an Ophthalmic Solution for Treating Dry Eye. LFA-1/ICAM-1 interaction is essential in support of inflammatory and specific T-cell regulated immune responses by mediating cell adhesion, leukocyte extravasation, migration, antigen presentation, formation of immunological synapse, and augmentation of T-cell receptor signaling. The increase of ICAM-1 expression levels in conjunctival epithelial cells and acinar cells was observed in animal models and patients diagnosed with dry eye. Therefore, it has been hypothesized that small molecule LFA-1/ICAM-1 antagonists could be an effective topical treatment for dry eye. In this letter, we describe the discovery of a potent tetrahydroisoquinoline (THIQ)-derived LFA-1/ICAM-1 antagonist (SAR 1118) and its development as an ophthalmic solution for treating dry eye. abstract_id: PUBMED:35020918 Photo-Induced Antifungal Activity of Chitosan Composite Film Solution with Nano-Titanium Dioxide and Nano-Silver. Abstract: This study was conducted to investigate the UV light-induced effect of chitosan-titanium dioxide-silver (CTS-TiO2-Ag) nanocomposite film solution against Penicillium steckii and the underlying physiological mechanism of this effect. With longer UV exposure time, pathogen inhibition increased. UV-photoinduced treatment for 120 min produced the smallest P. steckii colony diameter, at 4.85 mm. However, when this treatment was followed by 8 h of storage, the conductivity of the P. steckii culture medium reached its highest level, at 713 microsiemens per cm. After a 120-h growth period on mangoes under the same conditions, the lesion diameters and proportion of infected mangoes reached 12.61 mm and 41.67%, respectively. Because the P. steckii cell membrane was severely disrupted, its permeability increased, causing serious extravasation of intracellular protein and nucleic acid material. Malondialdehyde, catalase, and superoxide dismutase in the P. steckii cells reached maximum concentrations (2.1106 μmol/mL, 44.06 U/mL, and 24.67 U/mL, respectively) after 8 h of incubation. These results indicate significant P. steckii inhibition by the UV light induction of the CTS-TiO2-Ag nanocomposite film solution. Highlights: abstract_id: PUBMED:16879469 Reduced fluid gain during cardiopulmonary bypass in piglets using a continuous infusion of a hyperosmolar/hyperoncotic solution. Background: The aim of this study was to evaluate how a continuous infusion of a hyperosmolar/hyperoncotic solution influences fluid shifts and intracranial pressure during cardiopulmonary bypass in piglets. Methods: Fourteen animals, randomized to the control (CT) group or the hypertonic saline/hydroxyethyl starch (HyperHaes) (HSH) group, received acetated Ringer's solution as prime and supplemental fluid. The HSH group received, in addition, HyperHaes 1 ml/kg/h. After 1 h of normothermic cardiopulmonary bypass, hypothermic cardiopulmonary bypass (28 degrees C) was initiated and continued for 90 min. Fluid balance, plasma volume, tissue water content, acid-base parameters and intracranial pressure were recorded, and protein masses and fluid extravasation rates were calculated. Results: At the start of normothermic cardiopulmonary bypass, the fluid extravasation rates (ml/kg/min) increased from 0.19 (0.06) to 1.57 (0.71) and 0.19 (0.09) to 0.82 (0.14) in the CT and HSH groups, respectively, with no between-group differences (P = 0.081) During hypothermic cardiopulmonary bypass, the fluid extravasation rates (ml/kg/min) increased from 0.19 (0.14) to 0.51 (0.10) (P &lt; 0.01) and 0.15 (0.08) to 0.33 (0.08) (P &lt; 0.05), respectively, with significantly lower extravasation rates in the HSH group (P &lt; 0.01). In the HSH group, the total fluid gain during cardiopulmonary bypass decreased by about 50% (P &lt; 0.05) and the tissue water content was significantly lower in the left and right heart as well as in the lungs. The intracranial pressure remained stable in the HSH group, but increased in the CT group. Conclusions: A continuous infusion of HSH (HyperHaes) during cardiopulmonary bypass reduced the fluid extravasation rate and the total fluid gain during bypass. No electrolyte or acid-base disturbances were present. The intracranial pressure remained stable in the HSH group. abstract_id: PUBMED:29927835 The Use of Antidotes for Calcium Gluconate Extravasation: An Experimental Study in Mice. Background: Calcium gluconate extravasation is a process that can cause serious lesions, such as necrosis and calcification of the soft tissues. The aim of the present study was to analyze the beneficial effects of four possible local antidotes for calcium gluconate extravasation: hyaluronidase, sodium thiosulfate, triamcinolone acetonide, and physiologic saline solution. Methods: Seventy-four BALB/c mice were used in the study. The substances selected for use in this study were calcium gluconate (4.6 mEq/ml), hyaluronidase (1500 IU/ml), sodium thiosulfate (25%), triamcinolone acetonide (40 mg/ml 0.5 mg/kg), and saline solution 0.9%. Five minutes were allowed to lapse after the calcium gluconate infiltration, and then an antidote was infiltrated. After 3 weeks, a skin biopsy was performed and a radiographic and histologic study was carried out. Results: Only in the group infiltrated with sodium thiosulfate did all skin lesions disappear after the 3-week period after infiltration. In the radiographic study, calcium deposits larger than 0.5 mm were observed in 40 percent of cases without an antidote, in 33 percent with triamcinolone acetonide, in 13 percent with a saline solution, and in none with thiosulfate and hyaluronidase. In the histologic study, calcium deposits were found in 53 percent of cases without antidote, 100 percent of cases with triamcinolone acetonide, 33 percent of cases with saline solution, and 13 percent of cases with sodium thiosulfate or hyaluronidase. Conclusion: Sodium thiosulfate and hyaluronidase prevent the development of calcium deposits after calcium gluconate extravasation. abstract_id: PUBMED:2108548 A solution to the problem of high-flow jets from miniature angiographic catheters. In this study we evaluated methods for reducing high-flow jets from 5-French catheters that occur when injection pressures approach catheter tolerance (1000 psi [6.9 MPa]). This "jet effect" has been responsible for subintimal extravasation of contrast material in patients. We designed a physical model that accurately measures flow rates through the end hole and each side hole of 5-French high-flow catheters under simulated physiologic arterial pressure. When a standard catheter commonly used for injection of contrast material was studied, flow of contrast material was 34% through the end hole and 31% through the distal side-hole pair at high injection pressures (1000 psi). We examined the effect of altering the size and configuration of catheter side holes and end hole in an effort to create an improved flow profile, and thus a safer angiographic catheter. End-hole flow rate was reduced by 73% to 9% of total flow by tapering the 5-French catheter to 0.018 in. (0.046 cm) and by using smaller 0.015-in. (0.038-cm) side holes for even flow distribution. The high-flow jets present with standard high-flow 5-French catheters do not occur when flow-restrictive end holes and side holes are used. A uniform flow profile can be obtained without sacrificing delivery of contrast material through small catheters. Answer: Cytotoxic extravasation remains a concern in medical practice, particularly in the administration of antitumoral compounds. However, it does not appear to be a problem without a solution. Over a 10-year period, a study recorded 114 episodes of antitumoral compound extravasation out of an estimated 211,948 administrations (0.05%). With the implementation of an interdisciplinary task force and dedicated guidelines, conservative management was successful in all patients, and the majority recovered completely within 48 hours after antidote administration. No patients required surgery, suggesting that a standardized approach to managing extravasation can result in satisfactory outcomes (PUBMED:27032703). In the context of other medical procedures, the use of hyperosmolar solutions has been explored to reduce fluid extravasation. For instance, in shoulder arthroscopy, a hyperosmolar irrigation solution was found to be safe and effective in decreasing periarticular fluid retention without adverse long-term effects, suggesting potential clinical benefits (PUBMED:28318850). Similarly, in a canine shoulder arthroscopy model, a hyperosmolar irrigation solution did not show detrimental effects on chondrocyte viability or tissue water content, supporting its clinical evaluation (PUBMED:25725966). Moreover, the addition of a hyperosmolar-colloidal additive to cardiopulmonary bypass (CPB) priming solution has been shown to reduce fluid needs and fluid extravasation rates during tepid CPB (PUBMED:18788219). Another study found that CPB priming with hydroxyethyl starch solution lowered fluid loading during bypass and improved cardiac function in the early postoperative period, although there were concerns about the manifestation of acute kidney injury (PUBMED:29199540). In summary, while cytotoxic extravasation remains an issue, it is not without solutions. The adoption of standardized management protocols and the use of hyperosmolar solutions in various medical procedures have demonstrated efficacy in reducing the incidence and severity of extravasation, leading to better patient outcomes.
Instruction: The hybrid of basic science and clinical training for the urologic oncologist: Necessity or waste? Abstracts: abstract_id: PUBMED:19285235 The hybrid of basic science and clinical training for the urologic oncologist: Necessity or waste? Objective: To examine the necessity and adequacy of basic science training for urologic oncology training programs. Methods: Evaluated whether urology physician scientists are adequately trained in the basic sciences. Results: The current urologic oncology training system does not adequately train physician scientists. We propose a major reform to define, train, and maintain the urology physician scientists. Conclusions: Urology physician scientists have played a major role in advancement of urologic oncology. Major reform is necessary, if we wish to continue to successfully train urologic oncology physician scientists. abstract_id: PUBMED:34457921 Are Clerks Proficient in the Basic Sciences? Assessment of Third-Year Medical Students' Basic Science Knowledge Prior to and at the Completion of Core Clerkship Rotations. Basic sciences are a cornerstone of undergraduate medical education (UME), yet research indicates that students' basic science knowledge is not well retained. Many UME curricula are increasing the integration between the basic and clinical sciences with the goal of enhancing students' knowledge levels; however, the impact of clerkship training on students' basic science knowledge remains inconclusive. Thus, using clerkship directors' expectations as framework, we aimed to assess third-year medical students' basic science knowledge during clerkship training and evaluate the influence of clerkship training on their basic science knowledge. Using concepts deemed necessary by clerkship directors, we created a basic science assessment for each clerkship rotation. Assessments were distributed to third-year medical students as a pre- and post-test to assess their basic science knowledge prior to and at the completion of each rotation. On average, students retained ≥ 60% of relevant basic science knowledge from pre-clerkship, and neither clerkship rotation order, nor the basic science discipline being assessed, impacted students' basic science knowledge levels. Post-test data revealed that students, on average, reinforced fundamental concepts during clerkship. Interestingly, even though lower-performing students demonstrated the greatest post-test improvement, they still left each rotation with knowledge deficits compared with their highest-performing peers, suggesting that the clinical experience of clerkship appears to be particularly beneficial for lower-performing students, in regard to enhancing their basic science knowledge. Overall, results indicate that earlier exposure to clinical learning in UME, along with integration of basic science education into clerkship, could promote students' basic science knowledge acquisition and retention. abstract_id: PUBMED:34457526 Integrating the Educators: Outcomes of a Pilot Program to Prime Basic Science Medical Educators for Success in Integrated Curricula. "Integrating the Educators" is a program to prepare early career basic scientists to teach effectively in integrated medical curricula, where connections between basic sciences and clinical application are highlighted and emphasized. Advanced PhD trainees received didactic training in best practices in medical education followed by 3 months shadowing clinician educator mentors. This combination of experiences was designed to contrast the methods, focus, and pace of teaching in the clinical setting compared to the typical basic science classroom. Trainees' knowledge and attitudes shifted to recognize the faster pace and clinical focus necessary in integrated curricula. abstract_id: PUBMED:33712963 Innovations in Urologic Surgical Training. Purpose Of Review: This review aims to summarize innovations in urologic surgical training in the past 5 years. Recent Findings: Many assessment tools have been developed to objectively evaluate surgical skills and provide structured feedback to urologic trainees. A variety of simulation modalities (i.e., virtual/augmented reality, dry-lab, animal, and cadaver) have been utilized to facilitate the acquisition of surgical skills outside the high-stakes operating room environment. Three-dimensional printing has been used to create high-fidelity, immersive dry-lab models at a reasonable cost. Non-technical skills such as teamwork and decision-making have gained more attention. Structured surgical video review has been shown to improve surgical skills not only for trainees but also for qualified surgeons. Research and development in urologic surgical training has been active in the past 5 years. Despite these advances, there is still an unfulfilled need for a standardized surgical training program covering both technical and non-technical skills. abstract_id: PUBMED:30832440 Basic Life Support Training Methods for Health Science Students: A Systematic Review. : The acquisition of competencies in basic life support (BLS) among university students of health sciences requires specific and updated training; therefore, the aim of this review was to identify, evaluate, and synthesise the available scientific knowledge on the effect of training in cardiorespiratory resuscitation in this population. A comprehensive literature search was conducted in MEDLINE, CUIDEN, Web of Science, Wiley Online Library, CINAHL, and Cochrane, including all randomised clinical trials published in the last ten years that evaluated basic life support training methods among these students. We selected a total of 11 randomissed clinical trials that met the inclusion criteria. Participants were nursing and medicine students who received theoretical and practical training in basic life support. The studies showed a great heterogeneity in training methods and evaluators, as did the feedback devices used in the practical evaluations and in the measurement of quality of cardiorespiratory resuscitation. In spite of the variety of information resulting from the training methods in basic life support, we conclude that mannequins with voice-guided feedback proved to be more effective than the other resources analysed for learning. abstract_id: PUBMED:37768162 Robotic Simulation in Urologic Surgery. Robotic surgery continues to revolutionize the field of urologic surgery, and thus it is crucial that graduating urologic surgery residents demonstrate proficiency with this technology. The large learning curve of utilizing robotic technology limits resident immediate participation in real-life robotic surgery, and skill acquisition is further challenged by variable case volume. Robotic simulation offers an invaluable opportunity for urologic trainees to cultivate strong foundational skills in a non-clinical setting, ultimately leading to both competence and operative confidence. Several different simulation technologies and robotic assessment protocols have been developed and demonstrate validity in several domains. However, despite their demonstrable utility, there is no formal robotic curricula within US urologic surgery residencies. In this article, we will review the current state of robotic simulation training in urologic surgery and highlight the importance of its widespread utilization in urologic surgery residency training programs. abstract_id: PUBMED:37546212 Integrating Basic Science into Endocrine Clinical Electives. Practical and effective methods to integrate basic science material into clinical electives are lacking. We developed a primer for medical students participating in an ambulatory endocrinology elective highlighting pathophysiology, symptoms, diagnosis, and management of disorders. Students felt better prepared for the elective and mean scores on the endocrine knowledge test improved (7.5 (SD = 2.4) to 9.6 (SD = 2.2), p &lt; 0.001). The endocrine primer required minimal faculty time and resources while integrating basic science information into a clinical elective, standardizing students' knowledge, and enhancing student satisfaction with the elective. This innovative primer lays the groundwork to expand to other specialty electives and institutions. abstract_id: PUBMED:36260926 Models of clinical integration into basic science education for first-year medical students. While medical education continues to evolve away from a two-stage model of preclinical basic science education followed by clinical rotations, medical schools have struggled with how best to integrate basic science education with clinical application in the early educational curriculum. As Course Directors for the entire first year of medical student education at a new medical school, we review three models for the clinical application of basic science education that we utilized in the first year, including clinical retrofitting, parallel session development, and session co-creation. abstract_id: PUBMED:27068987 Progress in the utilization of high-fidelity simulation in basic science education. High-fidelity patient simulators are mainly used to teach clinical skills and remain underutilized in teaching basic sciences. This article summarizes our current views on the use of simulation in basic science education and identifies pitfalls and opportunities for progress. abstract_id: PUBMED:35186430 Improving Integration of Basic Science into Clinical Medicine: Vertical Integration into Clinical Education (VICE) Activity. The integration of basic science into clinical clerkships continues to be a challenge in medical curricula. We developed an integrated session for 3rd year medical students enrolled in OB-Gyn/Pediatric Block. The session focused on transplacental and perinatal infections, and consisted of a student-driven pedagogy activity in which students were required to explain the basic science principles behind the pathophysiology of the clinical presentations, the work-up, and the treatment of the infections. This approach helps students understand how basic science knowledge informs clinical practice and potential increase clerkship-level students' confidence as it makes them serve as leaders of active learning modules. Supplementary Information: The online version contains supplementary material available at 10.1007/s40670-021-01485-7. Answer: The hybrid of basic science and clinical training for the urologic oncologist is considered a necessity rather than a waste. According to the abstract from PUBMED:19285235, the current urologic oncology training system does not adequately train physician scientists, and major reform is necessary to continue successfully training urologic oncology physician scientists. The authors argue that urology physician scientists have played a significant role in the advancement of urologic oncology, and to maintain this progress, a major reform in training that defines, trains, and maintains urology physician scientists is essential. This perspective is supported by the broader trend in medical education that emphasizes the integration of basic science and clinical training. For instance, PUBMED:34457921 discusses the importance of basic science knowledge in undergraduate medical education and suggests that earlier exposure to clinical learning, along with the integration of basic science education into clerkship, could promote students' basic science knowledge acquisition and retention. Similarly, PUBMED:34457526 describes a program designed to prepare early career basic scientists to teach effectively in integrated medical curricula, highlighting the importance of connecting basic sciences with clinical application. In the context of urologic surgical training, PUBMED:33712963 notes that despite innovations in training, including the use of various simulation modalities and the emphasis on non-technical skills, there is still a need for a standardized surgical training program that covers both technical and non-technical skills, which implicitly includes basic science knowledge. Therefore, the integration of basic science and clinical training is not only necessary for the development of competent urologic oncologists but is also a recognized need across medical education to ensure that physicians are well-equipped to apply scientific principles to clinical practice.
Instruction: Does the endothelial function change in renal transplant patients with longer duration of exposure and with higher cumulative doses of cyclosporine? Abstracts: abstract_id: PUBMED:15251333 Does the endothelial function change in renal transplant patients with longer duration of exposure and with higher cumulative doses of cyclosporine? Objective: Administration of cyclosporine (CsA) is one potential cause of endothelial dysfunction in renal transplant patients. We sought to investigate endothelial functional changes with respect to the cumulative dose and duration of exposure to CsA. Methods: Sixty-six renal recipients and 25 healthy controls were included in the study. The recipients were classified according to their time of CsA exposure: group 1 (0 to 36 months); group 2 (36 to 72 months); and group 3 (over 72 months). Endothelial function of the brachial artery was evaluated using high-resolution vascular ultrasound. Endothelium-dependent and -independent vasodilatation (EDD and EID, respectively) were assessed by assessing the responses to reactive hyperemia and using sublingual isosorbide dinitrate (ISDN), respectively. Results: There were no statistically significant differences between the groups with regard to their demographic, clinical, and most biochemical characteristics. Baseline measurements of the diameter of the brachial artery were similar in all groups. The values of mean brachial artery EDD and EID responses in groups 1, 2, and 3 were less than those in the control group (P &lt; .05, P &lt; .05, and P &lt; .05, respectively). Mean brachial artery EDD and EID in group 1 were significantly impaired compared to groups 2 and 3 (for EDD: P &lt; .05 and P &lt; .05, respectively; for EID: P &lt; .05 and P &lt; .05, respectively). In contrast there was no difference between groups 2 and 3 with respect to these parameters. There were mild to moderate positive correlations between the cumulative doses of CsA and EDD and EID (r = .26 and r = .52, P &lt; .05, respectively). Conclusion: Endothelial dysfunction was more prominent in the first 36-month period than later despite the longer exposure to and higher cumulative doses of CsA. This finding may reflect an extended effect of the uremic state on endothelial function or more intense doses of CsA in early posttransplant period. abstract_id: PUBMED:24018796 Circulating endothelial cells in pediatric renal transplant recipients. Background: An increase in the number of circulating endothelial cells (CEC) indicates endothelial damage and the risk of cardiovascular disease. The aim of our study was to investigate the association of CEC with various clinical parameters in pediatric renal transplant recipients. Methods: CEC, defined as CD45(-)CD146(+), were enumerated by flow cytometry from the peripheral blood of 50 pediatric renal transplant recipients and 20 healthy controls. Clinical parameters, including renal function tests, fasting blood glucose, serum cholesterol and triglyceride, cyclosporine A (CsA) (trough and 2nd-hour) and tacrolimus (tac) trough blood levels and their association with CEC numbers were analyzed. Results: CEC numbers of patients were higher than those of controls (respectively, 128 ± 89 cells/ml (42-468 cells/ml), 82 ± 33 cells/ml (32-137 cells/ml), p = 0.024). There was a statistically significant negative correlation between CEC numbers and glomerular filtration rate (GFR) (r = -0.300, p = 0.012). There was also a statistically positive association between CEC numbers and transplant duration as well as cyclosporine trough level (respectively, r = 0.397, p = 0.004, r = 0.714, p = 0.004). CEC numbers in patients on tac and CsA were similar (p = 0.716). Conclusions: Our results demonstrate that renal transplant recipients with high CsA trough blood level, longer transplant duration, and lower GFR, are at greater risk of developing endothelial damage. abstract_id: PUBMED:15350442 Renal transplants with delayed graft function show decreased renal function despite monitoring with postabsorptive levels. Cyclosporine (CyA) monitoring with postabsorptive levels can predict the risk of an acute rejection episode (ARE). Large doses of CyA are needed to obtain adequate drug exposure. The impact of this strategy on renal function, especially in patients with delayed graft function (DGF), is unknown. We report our experience comparing C3 (3-hour postdose) monitoring with a historical series of cadaveric renal transplants. Sixty-three consecutive patients who received cadaveric renal transplants were followed for 1 year. Group A (historical n = 31) patients received 6 mg/kg/d CyA with the dose adjusted according to the trough level (target, 250-350 ng/mL), group B (study n = 32) received 10 mg/kg/d CyA with dose adjustments based upon C3 (target, 1100-1500 ng/mL). All patients received cyclosporine prednisone and a third agents. The general characteristics of the donors and recipients were comparable. The incidence of biopsy-proven ARE at 1 year in group A was 42% and 19% in group B (P &lt;.05). Patients achieving C3 levels &gt;1000 ng/mL at 1 week displayed significantly lower ARE rates (8% vs 50%; P &lt;.05). The rate of DGF was similar in both groups, but the duration was longer in group B (15 vs 21 days, P &lt;.05). The serum creatinine (SCr) level was significantly higher in group B at 3 months (1.47 mg/dL group A vs 1.76 mg/dL group B; P &lt;.05). Patients in group B with DGF showed significantly higher SCr values at 1 year (1.18mg% vs 2.03 mg%; P &lt;.05). C3 level monitoring of CyA yields excellent results in terms of decreased ARE, but an increased SCR was observed among patients with DGF. abstract_id: PUBMED:25650527 Association between endothelial and platelet function markers and adiponectin in renal transplanted recipients on cyclosporine and tacrolimus immunosuppression based therapy. Aim: Coagulation abnormalities, endothelial dysfunction and arteriosclerosis play a key role in cardiovascular disease state observed in transplanted patients. Plasma adiponectin levels are lower following kidney transplantation. However, there is still a debate about this topic in the literature. This study evaluated, adiponectin levels associated with markers of endothelial dysfunction and platelet function in renal transplant patients. Methods: Sixty-six renal transplant patients were studied. Patients were grouped according to immunosuppression regimen. Group 1 (n = 36) were treated with cyclosporine A based regimes and group 2 (n = 30) were treated with tacrolimus based regimes. Plasma adiponectin, asymmetric dimethyl arginine (ADMA), sP-selectin levels and platelet aggregation tests were studied and were compared with those in control group (n = 15, group 3). Results: Adiponectin, sP-selectin and ADMA levels were higher in group 1 and statistically significant differences were observed compared with those of group 2 and group 3, respectively (P &lt; 0.001, P &lt; 0.05, P &lt; 0.05). Platelet aggregation values induced by agonists were lower in group 1 than group 2 and group 3, but the difference did not reach statistical significance (P &gt; 0.05). Conclusion: Adiponectin levels are elevated in line with ADMA and sP-selectin levels. Since CsA induces higher adiponectin levels, platelet activation and endothelial dysfunction. These changes may be responsible for the increased risk of post-transplant cardiovascular events in renal transplant patients. abstract_id: PUBMED:34102006 Relationship of transitional regulatory B and regulatory T cells and immunosuppressive drug doses in stable renal transplant recipients. Objectives: Regulatory B cells (Bregs) and T cells (Tregs) are thought to be involved in the regulation of graft acceptance in renal transplant recipients. However, mechanisms that affect Breg differentiation and interaction with Tregs are rather unclear. Methods: Using eight-color-fluorescence flow cytometry, Tregs and CD19+ CD24hiCD38hi Bregs were analyzed in whole blood samples of 80 stable kidney transplant recipients, 20 end-stage renal disease (ESRD) patients and 32 healthy controls (HC). In addition, differentiation of Bregs and Tregs was studied in different micromilieus using cocultures with strongly enriched B-lymphocytes and autologous peripheral blood mononuclear cells stimulated with CpG and phytohemagglutinin. Results: Bregs were higher in HC than in ESRD patients and lowest in transplant recipients. Bregs were higher early as compared to late posttransplant. Posttransplant, high Bregs were associated with higher glomerular filtration rate (GFR) and lower C-reactive protein (CRP). Higher doses and blood levels of ciclosporine, tacrolimus, and mycophenolate mofetil as well as higher doses of steroids were not associated with low Bregs. In contrast, most Treg subsets were lower when blood levels of ciclosporine, tacrolimus, and mycophenolate mofetil were higher. Tregs were not associated with Bregs, GFR, CRP plasma levels, and occurrence of rejection or infection. In vitro, differentiation of Bregs was strongly dependent on T cell support and was blocked by excessive or lacking T-cell help. Tregs were not associated with Breg numbers in vitro. Conclusion: Bregs appear to be insensitive to high doses of posttransplant immunosuppressive drugs. The protracted Breg decrease posttransplant might be caused by impaired T cell support attributable to immunosuppressive drugs. abstract_id: PUBMED:16895674 Effects of tacrolimus vs cyclosporin-A on bone metabolism after kidney transplantation: a cross-sectional study in 28 patients Background: Controversy exists about the effects of calcineurin inhibitors on bone metabolism. We decided to compare the effects of CyA vs FK506 on bone metabolism of kidney recipients. Patients And Methods: From 94 patients grafted at the University Hospital of Nice between 1996 and 1999 treated either by CyA (N=49) or by FK506 (N=45), we selected 14 pairs (18 M, 10F), matched for gender, BMI, time lapsed since transplantation and gonadal status in females. Patients with&gt;1 transplantation or&gt;1 rejection episode were excluded. Cumulative dose of steroids was recorded. Bone mineral density (BMD) was mesured at heel and forearm, as well as serum concentration of calcium, phosphate, parathyroid hormone (PTH), vitamin D metabolites, C-telopeptide (CTX), creatinine, estradiol as well as Bone Alkaline Phosphatase (BAP) activity. Results: Despite the matching, time on hemodialysis was longer in FK506 group. Cumulative dose of steroids was similar between groups. There was no difference between groups in BMD and biochemical parameters except for estradiol serum levels which were dramatically lower in FK506 than in CsA (P=0,02) and for a trend (p=0,08) for BAP and CTX to be higher in FK506 than in CsA. Conclusions: BMD is not lower in FK506- than in CsA-treated patients although exposure to hyperparathyroidism was longer and estradiol levels were lower in the FK506 than in the CsA group. These data suggest that FK506 may have a favorable bone effect to compensate for these deleterious factors. This hypothesis remains to be tested in longitudinal studies. abstract_id: PUBMED:21383657 How delayed graft function impacts exposure to mycophenolic acid in patients after renal transplantation. Introduction: Mycophenolic acid (MPA) plasma concentrations are highly variable on standard-dose mycophenolate mofetil therapy. At creatinine clearances below 25 mL/min, MPA clearance increases as a result of a higher nonprotein-bound fraction. Patients with delayed graft function (DGF) after renal transplantation are exposed to low total MPA concentrations, when risk of rejection is highest. This study investigated the influence of DGF on MPA exposure and on clinical outcome. Methods: Adult renal transplantation patients treated with mycophenolate mofetil, corticosteroids, and either microemulsified cyclosporine (n = 459) or tacrolimus (n = 371) participated in a randomized controlled trial (the Fixed-Dose Concentration-Controlled [FDCC] Study). Abbreviated MPA areas under the curve (AUCs) were obtained on Day 3, Day 10, Week 4, and Month 3, to calculate MPA AUC₀₋₁₂. Free MPA AUC values were available for a subgroup of patients (n = 269). Results: The overall incidence of DGF was 187 of 830 (23%) and did not differ between cyclosporine-treated (24%) and tacrolimus- (21%) treated patients. The incidence of biopsy-proven acute rejection at 12 months was significantly higher in patients with DGF (13.8% versus 21.4%). Patients with DGF had significantly lower dose-corrected MPA AUC on Day 3 and Day 10. Free MPA fraction and dose-corrected free MPA AUC were significantly higher in patients with DGF, from Day 3 until Month 3. The total number of patients with at least one opportunistic infection was significantly higher in patients with DGF (33.2%) compared with patients without DGF (25.8%) (P = 0.048). Patients with DGF developing opportunistic infections did not have higher total MPA AUC nor higher free MPA AUC compared with those without opportunistic infections. Conclusion: Patients with DGF have significantly lower dose-corrected MPA AUC in the first month after renal transplantation, presumably as a result of enhanced MPA clearance on account of the elevated MPA free fraction. Because patients with DGF have a higher rate of acute rejection and lower MPA exposure, higher dosing of mycophenolate mofetil in such patients may improve outcome. However, the already increased incidence of opportunistic infections in patients with DGF is a concern. abstract_id: PUBMED:21888859 The effect of calcineurin inhibitors on endothelial and platelet function in renal transplant patients. Background/aim: Posttransplant cardiovascular mortality is still an important problem in renal transplant patients. In addition to conventional coronary risk factors, coagulation abnormalities play a key role in the hypercoagulable state observed in transplanted patients. Though renal transplantation eliminates cardiovascular disease risk factors by restoring renal function, it introduces new cardiovascular risks derived, in part from immunosupressive medications. We aimed to assess the effect of calcineurin inhibitors on endothelial function, platelet activation and aggregation in renal transplant patients. Methods: 62 renal transplant were studied. Staging was performed according to immunosuppression regimen. Group 1 (n = 37) were treated with cyclosporine/mycophenolate mofetil/methylprednisolone and Group 2 (n = 25) were treated with tacrolimus/mycophenolate mofetil/methylprednisolone. The control group consisted of 16 healthy subjects (Group 3). Hematological and biochemical tests, asymmetric dimethyl arginine (ADMA), sP-selectin levels and platelet aggregation tests were studied. Results: ADMA levels were higher in Group1 and statistically significant differences were observed compared with those of Group 2 and Group 3 (p &lt; 0.05). Platelet aggregation values induced by all agonists (Adenosine diphosphate (ADP), epinephrine, ristocetin, collagen) were lower in Group 1 than Group 2 and Group 3, but the difference did not reach statistical significance (p &gt; 0.05). There was a negative correlation between cyclosporine level and platelet aggregation values induced by ADP (r = -0.43, p &lt; 0.01), ristocetin (r = -0.40, p &lt; 0.05), epinephrine (r = -0.41, p &lt; 0.05), and collagen (r = -0.43, p &lt; 0.01). sP-selectin levels were appreciably higher in Group 1 and statistically significant differences were observed compared with those of Group 2 (p &lt; 0.05) and Group 3 (p &lt; 0.01). Conclusion: The results of our study suggest that CsA induces platelet activation without inducing platelet aggregation. Endothelial dysfunction due to vascular endothelial damage reflected by increases in ADMA values may increase the tendency for thrombotic events in patients who had undergone renal transplantation. abstract_id: PUBMED:9767908 Long-term stability of bone mineral density in patients with renal transplant treated with cyclosporine and low doses of corticoids. Protective role of cyclosporine? Objectives: Cyclosporine has been thought to have a deleterious effect on bone in transplant recipients because of high turnover osteopenia observed in humans after transplantation. However, varying confounding factors such as renal and parathyroid function, cumulative steroid doses and previous exposure to aluminium, also play a role and hinder interpretation of the cyclosporine effect on bone mineral density (BMD). Patients And Methods: A 2-year prospective study was conducted to measure BMD starting 3 months after transplantation and bone remodeling markers from the first post-transplantation day in 52 kidney recipients with no prior exposure to aluminum. None of the patients experienced rejection and at 3 months all had good stable renal function (serum creatinine 137 mumol/l) and mildly elevated parathyroid hormone levels (1.5 times the upper limit of normal). All patients were given the same low dose steroid treatment (10 mg/day) and at 6 months cyclosporine was decreased from 7 to 4.8 mg/kg/day. Results: BMD measured by double energy X-ray absorptiometry, (DEXA) and expressed in Z score, was moderately decreased at 3 months for the vertebrae (-1.40), the femoral neck (-1.34) and the ultradistal radius (-0.95) which have predominantly cancellous bone and was significantly less decreased (p &lt; 0.05) for the lower third of the radius (-0.6) which is mainly cortical bone. BMD measurements were comparable at 6, 12 and 24 months. When measured by axial computerized tomography (ACT) BMD of the vertebrae showed a non-significant increase of Z score from -1.37 to -1.19 at 2 years. Bone remodeling markers was observed up to month 6 (from month 3 for osteocalcine and from month 1 for total and bone alkaline phosphatase and urinary pyridinoline), then returned to baseline levels at 2 years in parallel with decreased cyclosporine dosage. The increase of vertebral BMD measured by ACT at 1 year was correlated both to cyclosporine dose at 1 year and to bone alkaline phosphatase at 6 months. Conclusion: Our data confirm the presence of moderate osteopenia 3 months after transplantation, predominantly in trabecular bone, logically linked to the initial high doses of corticosteroids. The long-term stability of BMD measured by DEXA and the correlation of vertebral BMD increase measured by ACT with cyclosporine dose and bone alkaline phosphate suggest that cyclosporine had a beneficial immunosuppressor effect by stimulating bone remodeling and thus counterbalancing the suppressive effect of corticosteroids. abstract_id: PUBMED:32557653 Population Pharmacokinetic Modeling of Cyclosporine Among Malaysian Renal Transplant Patients: An Evaluation of Methods to Handle Missing Doses in Conventional Drug-Monitoring Data. Cyclosporine is a primary drug in transplant immunosuppression regimens. It has a narrow therapeutic index and variable pharmacokinetic behavior. This study aimed to develop a population pharmacokinetic model of cyclosporine in Malaysian renal transplant recipients as well as to evaluate the performances of different methodsfor handling missing doses. A total of 2804 concentrationts predose and 2 hours after doses were collected retrospectively from 113 renal transplant patients on cyclosporine in Penang General Hospital. Model structure and pharmacokinetic parameters were estimated using nonlinear mixed-effects modeling software. Missing doses were handled using different methods to evaluate their performance. Covariate analysis was performed using stepwise forward addition (P &lt; .05) followed by backward elimination (P &lt; .001). Prediction-corrected visual predictive check and sampling-importance resampling methods were used to validate the final model. A 1-compartment model with first-order absorption and elimination best fitted the data. All methods to handle missing doses performed well with the missing dose method being superior to other methods and thus was applied in the final model. Cyclosporine clearance (CL/F) was estimated as 15.1 L/h, and volume of distribution (V/F) was 108 L. Postoperative time, sex, and calcium channel blockers were identified as significant covariates on CL/F, whereas sex and cholesterol level were identified as significant covariates on V/F. This is the first population pharmacokinetic model developed in Malaysian renal transplant patients using a large sample with an evaluation of different methods to handle missing doses in less informative conventional therapeutic drug-monitoring data. Answer: Yes, endothelial function does change in renal transplant patients with longer duration of exposure and with higher cumulative doses of cyclosporine. A study investigating endothelial functional changes with respect to the cumulative dose and duration of exposure to cyclosporine (CsA) in renal recipients found that endothelial dysfunction was more prominent in the first 36-month period than later, despite longer exposure to and higher cumulative doses of CsA. This finding suggests that the initial period post-transplant may involve more intense doses of CsA or reflect an extended effect of the uremic state on endothelial function (PUBMED:15251333). Additionally, another study demonstrated that renal transplant recipients with high CsA trough blood levels, longer transplant duration, and lower glomerular filtration rate (GFR) are at greater risk of developing endothelial damage, as indicated by an increase in the number of circulating endothelial cells (CECs) (PUBMED:24018796). This supports the notion that prolonged exposure to CsA and the associated cumulative dose can negatively impact endothelial function over time. Moreover, research has shown that CsA can induce platelet activation without inducing platelet aggregation and may cause endothelial dysfunction due to vascular endothelial damage, as reflected by increases in asymmetric dimethyl arginine (ADMA) values. This could potentially increase the tendency for thrombotic events in patients who have undergone renal transplantation (PUBMED:21888859). In summary, the evidence suggests that endothelial function is indeed affected in renal transplant patients with longer duration of exposure to and higher cumulative doses of cyclosporine, with a more pronounced effect observed in the early post-transplant period.
Instruction: Do cerebrovascular risk factors confer risk for suicide in later life? Abstracts: abstract_id: PUBMED:17545453 Do cerebrovascular risk factors confer risk for suicide in later life? A case-control study. Objective: Cerebrovascular pathologies have been implicated as a cause of depressive disorders. This study aims to examine the role of cerebrovascular risk factors in late-life suicide. Methods: The authors calculated cerebrovascular risk factor (CVRF) score, based on the American Heart Association Criteria, for each case and comparison subject from a psychological autopsy study of suicide among community-dwelling adults over age 50 years. Results: CVRF scores were significantly higher in suicide cases than community-dwelling comparison subjects after accounting for age, sex, depression diagnosis, and functional status. Conclusion: Cerebrovascular disease may predispose to suicide in older adults by a variety of psychopathological means. abstract_id: PUBMED:12182926 Risk factors for suicide in later life. Suicide rates are higher in later life than in any other age group. The design of effective suicide prevention strategies hinges on the identification of specific, quantifiable risk factors. Methodological challenges include the lack of systematically applied terminology in suicide and risk factor research, the low base rate of suicide, and its complex, multidetermined nature. Although variables in mental, physical, and social domains have been correlated with completed suicide in older adults, controlled studies are necessary to test hypothesized risk factors. Prospective cohort and retrospective case control studies indicate that affective disorder is a powerful independent risk factor for suicide in elders. Other mental illnesses play less of a role. Physical illness and functional impairment increase risk, but their influence appears to be mediated by depression. Social ties and their disruption are significantly and independently associated with risk for suicide in later life, relationships between which may be moderated by a rigid, anxious, and obsessional personality style. Affective illness is a highly potent risk factor for suicide in later life with clear implications for the design of prevention strategies. Additional research is needed to define more precisely the interactions between emotional, physical, and social factors that determine risk for suicide in the older adult. abstract_id: PUBMED:25258538 A multidimensional risk factor model for suicide attempts in later life. Background: Elderly suicide is a public health problem worldwide, and the risk factors are multidimensional. Chronic mental health problems, personality traits, stressful life events, comorbid medical conditions, social isolation, unemployment, and poverty are associated with higher risk for suicide in later life. There was a relative paucity of data on the neurobiological markers of elderly suicide. Objective: This study examines the conjoint roles of cerebrovascular risk factors (CVRFs) and other established biopsychosocial risk factors in older adults who had made a recent suicide attempt. Design: A cross-sectional, case-controlled study. Setting: A tertiary care setting in a public sector and a community setting. Subjects And Methods: Cases (N=77) were nondemented Chinese adults aged ≥65 years, enrolled in a regional psychogeriatric service following a suicide attempt; comparison subjects (N=99) were community-dwelling nondemented older adults with no lifetime history of suicide. Measures of sociodemographic profile, life events, suicidal behavior, Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) Axis I psychopathology, personality traits, functional status, physical health, CVRFs, and executive cognitive functions were administered. Results: WEIGHTED SUM OF CVRF SCORE WAS SIGNIFICANTLY HIGHER IN OLDER WOMEN WHO HAD MADE A RECENT SUICIDE ATTEMPT (MEAN: 10.56; standard deviation [SD]: 5.46) than comparison subjects (mean: 7.24; SD: 4.04) (t=3.52, P=0.001; df=99). Logistic regression showed that CVRF score (Exp[B]: 1.289, P=0.033), DSM-IV depressive disorders (current) (Exp[B]: 348, P&lt;0.001), number of life events in the past 12 weeks (Exp[B]: 10.4; P&lt;0.001), and being married (Exp[B]: 12.2, P&lt;0.048) significantly increased odds for suicide attempt status in older women (Nagelkerke R (2): 0.844). Association of CVRF score and suicide attempt status was not observed in older men for whom number of life events in the past 12 weeks (Exp[B]: 9.164; P&lt;0.001), higher neuroticism (Exp[B]: 1.028; P=0.048), and impaired performance on a Modified Card Sorting Test (Exp[B]: 0.646; P=0.032) significantly increased odds for suicide attempt status in a logistic regression model (Nagelkerke R (2): 0.611). Conclusion: Risk factors for cerebrovascular diseases may be associated with higher risk for attempted suicide in older women, but not in older men. Older men and women have distinct risk factor profiles that may inform targeted intervention and prevention strategies. abstract_id: PUBMED:30132468 Suicidality in the elderly: socio-cultural and clinical risk factors Advanced age is a significant risk factor for a completed suicide. The review considers the patterns of suicidal behavior of older adults in the context of their gender, ethnicity as well as social and cultural factors. In terms of pro- and antisuicidal variables, special emphasis is made on the role of somatic and psychic disorders. The article looks at the features of committing a suicide peculiar to the older people and debates some specific issues of self-inflicted death by elderly people. The obtained data indicate the involvement of multilevel factors (biological, clinical, cultural, micro- and microsocial) in suicidal behavior in later life. abstract_id: PUBMED:36911509 Risk factors and consequences of depression in later life: Findings from the health in men study (HIMS). Background: The Health In Men Study (HIMS) has been collecting data on risk factors and health events for the past 25 years in a large community-representative sample of older men. This paper summarises key-findings of the study about depression in later life. Methods: Narrative review of selected HIMS studies published over the past 15 years describing risk factors associated with prevalent and incident depression in older men, as well as clinical outcomes associated with depression. Results: Data from HIMS showed that cardiovascular diseases and risk factors are associated with increased risk of depression, but this association is neither specific nor causative. Findings from HIMS are not supportive of the vascular hypothesis of depression in later life. Studies investigating lifestyle have generated risk tables capable of guiding risk reduction strategies. Other potentially modifiable risk factors associated with depression in the HIMS cohort included abnormal allostatic inflammatory response, high plasma homocysteine and low testosterone. The results from HIMS also showed that depression is most likely a prodromal manifestation of dementia rather that a true risk factor, but it increases frailty and mortality. The association between depression and suicide in older men is largely mediated by deteriorating health and increasing frailty. Conclusion: HIMS has contributed to advance knowledge about risk factors associated with depression, as well as the health consequences of depression in older men. The study is ongoing and the investigators welcome the opportunity to share data with colleagues who are interested in the health of older people. abstract_id: PUBMED:34719223 Analysis of risk factors affecting suicidal ideation in South Korea by life cycle stage. Objective: This study analyzed risk factors for suicidal ideation in South Koreans from a life cycle perspective. Methods: A secondary analysis was conducted of data collected in 2015 as part of the 6th Korea National Health and Nutrition Examination Survey (KNHANES). The participants comprised 5,935 individuals aged 12 years or older. The statistical analysis reflected the complex sampling design of the KNHANES, and the Rao-Scott chi-square test and multiple logistic regression analysis were performed. Results: The prevalence of suicidal ideation was 5.7% in adolescents, 3.7% in young adults, 5.4% in middle-aged adults, and 7.0% in older adults. Depression and stress were risk factors in every stage of the life cycle. In those aged 12 to 19 years, activity restrictions were significantly associated with suicidal ideation. Education and subjective health status were risk factors in adults aged 20 to 39 years, and education, activity restrictions, and quality of life were the major risk factors in those aged 40 to 64 years. For adults 65 years of age or older, the risk of suicidal ideation was higher among those with inappropriate sleep time. Conclusion: The risk factors for suicidal ideation were found to be different across stages of the life cycle. This suggests a need for individualized suicide prevention plans and specific government policies that reflect the characteristics of each life cycle stage. abstract_id: PUBMED:9121911 Suicide in later life: how to spot the risk factors. Health care staff often fail to recognise depression and other suicide risk factors in older people. Ageism may play a part in this oversight too. By listening to older people and learning to recognise the warning signs, nurses can identify and treat the underlying problem, perhaps preventing the often fatal consequences. abstract_id: PUBMED:26990856 Meaning in Life Buffers the Association Between Risk Factors for Suicide and Hopelessness in Participants With Mental Disorders. Objective: Hopelessness is a proximal risk factor of suicide. Meaning in life has been found to be a protective factor against suicidal ideation; however, the majority of studies that have explored the role of meaning in life in the context of suicidality have been conducted in nonclinical populations. The aim of this study was to investigate whether meaning in life can moderate and buffer the association between suicide risk factors and hopelessness in a clinical sample with a heightened risk of suicide. Method: A total of 224 participants diagnosed with mental disorders completed self-report measures of suicide risk factors, hopelessness, and meaning in life. Results: The main result from this study was that meaning in life moderated the association between suicide risk factors and hopelessness. Conclusion: Meaning in life is an important variable in the prevention and treatment of people at risk of suicide. abstract_id: PUBMED:23506973 The Senior Connection: design and rationale of a randomized trial of peer companionship to reduce suicide risk in later life. There is a pressing public health need to find interventions that reduce suicide risk in later life. Psychiatric and physical illness, functional decline, and social factors place seniors at risk for suicide. Reflecting this body of evidence, the Centers for Disease Control and Prevention (CDC) has identified the promotion and strengthening of social connectedness, between and within the individual, family, community, and broader societal levels, as a key strategy for suicide prevention. The Senior Connection, a randomized trial of peer companionship for older adults, is described here, with an emphasis on the most novel features of the study design-grounding in a psychological theory of suicide and intervening at an early stage in the suicide risk trajectory by linking primary care patients with the Aging Services Provider Network. abstract_id: PUBMED:30146371 Increased Risk of Suicide Attempts and Unintended Death Among Those Transitioning From Prison to Community in Later Life. Objective: The number of individuals transitioning from correctional facilities to community in later life (age ≥50 ) is increasing. We sought to determine if later-life prison release is a risk factor for suicidal behavior and death by accidental injury, including drug overdose. Design: Retrospective cohort study. Setting: U.S. Department of Veterans Affairs and Medicare healthcare systems, 2012-2014. Participants: Veterans age ≥50 released from correctional facilities (N = 7,671 re-entry veterans) and those never incarcerated (N = 7,671). Methods: Dates of suicide attempt and cause-specific mortality defined using the National Suicide Prevention Applications Network and the National Suicide Data Repository, respectively. Results: Later-life prison release was associated with increased risk of suicide attempt (599.7 versus 134.7 per 100,000 per year; adjusted hazard ratio [HR] 3.45; 95% confidence interval [CI] 2.24-5.32; p &lt; 0.001, Wald χ2 = 31.58, degrees of freedom [df] = 1), death by drug overdose (121.7 versus 43.5; adjusted HR 3.45; 95% CI 1.37-8.73; p = 0.009, Wald χ2 = 6.86, df = 1), and other accidental injury (126.0 versus 39.1; adjusted HR 3.13; 95% CI 1.28-7.69; p = 0.013, Wald χ2 = 6.25, df = 1), adjusting for homelessness, traumatic brain injury, medical and psychiatric conditions, and accounting for competing risk of other deaths. Suicide mortality rates were observed as nonsignificant between re-entry veterans and those never incarcerated (30.4 versus 17.4, respectively; adjusted HR 2.40; 95% CI 0.51-11.24; p = 0.266, Wald χ2 = 1.23, df = 1). Conclusion: Older re-entry veterans are at considerable risk of attempting suicide and dying by drug overdose or other accidental injury. This study highlights importance of prevention and intervention efforts targeting later-life prison-to-community care transitions. Answer: Yes, cerebrovascular risk factors do confer risk for suicide in later life. Studies have found that cerebrovascular risk factor (CVRF) scores were significantly higher in suicide cases among older adults compared to community-dwelling comparison subjects, even after accounting for age, sex, depression diagnosis, and functional status (PUBMED:17545453). Additionally, a cross-sectional, case-controlled study found that the weighted sum of CVRF scores was significantly higher in older women who had made a recent suicide attempt compared to comparison subjects, suggesting that risk factors for cerebrovascular diseases may be associated with a higher risk for attempted suicide in older women (PUBMED:25258538). However, the association of CVRF score and suicide attempt status was not observed in older men, indicating that older men and women may have distinct risk factor profiles for suicide attempts (PUBMED:25258538). Furthermore, the Health In Men Study (HIMS) has shown that cardiovascular diseases and risk factors are associated with an increased risk of depression, which is a known risk factor for suicide, although the association is neither specific nor causative (PUBMED:36911509). The study also found that depression in older men is largely mediated by deteriorating health and increasing frailty, which are associated with suicide (PUBMED:36911509). Overall, these findings suggest that cerebrovascular risk factors are indeed associated with an increased risk of suicide in later life, particularly in older women, and highlight the importance of considering these factors in suicide prevention strategies for the elderly population.
Instruction: Is the MAdrid Sonographic Enthesitis Index useful for differentiating psoriatic arthritis from psoriasis alone and healthy controls? Abstracts: abstract_id: PUBMED:24488414 Is the MAdrid Sonographic Enthesitis Index useful for differentiating psoriatic arthritis from psoriasis alone and healthy controls? Objective: To assess the usefulness of the MAdrid Sonographic Enthesitis Index (MASEI) in classifying patients as having psoriatic arthritis (PsA) and comparing entheseal abnormalities between patients with PsA, psoriasis alone (PsC), and healthy controls (HC). Methods: Patients with PsC were assessed to exclude inflammatory arthritis. The MASEI scoring system was used to quantify the extent of ultrasonographic (US) entheseal abnormalities. The total MASEI score was categorized into items that reflected inflammatory abnormalities (MASEI-inflammatory) and chronic damage (MASEI-damage). Nonparametric tests were used to compare MASEI scores across the groups. A cutoff point of MASEI ≥ 20 was used to calculate the sensitivity and specificity of the MASEI to classify patients as having PsA. Results: Patients with PsA (n = 50), PsC (n = 66), and HC (n = 60) were assessed. Total MASEI scores were higher in patients with PsA than in those with PsC, and both those groups were higher than HC (p &lt; 0.0001). MASEI-inflammatory showed a similar trend (p &lt; 0.0001). MASEI-damage was higher in patients with PsA compared to both patients with PsC and HC (p &lt; 0.0001); however, no difference was observed between patients with PsC and HC. No significant difference in MASEI scores was found across the 3 groups in patients with a body mass index &gt; 30. The sensitivity of the MASEI score to correctly classify patients as having PsA was 30% and the specificity was 95% when compared to HC and 89% when compared to PsC. Conclusion: The severity of US entheseal abnormalities is highest in patients with PsA followed by PsC and is lowest in healthy controls. MASEI can specifically classify patients as having PsA. abstract_id: PUBMED:38368555 Detection of subclinical enthesitis by ultrasonography in patients with psoriasis and controls. Introduction: Psoriasis is a widespread chronic inflammatory skin disease; enthesitis is inflammation of the tendon, ligament, and joint capsule insertion, prevalent in patients with psoriatic arthritis. Objectives: The aim of study to evaluate the utility of the Madrid Sonography Enthesitis Index scoring system for accurate detection of subclinical enthesitis in patients with Psoriasis compared with healthy controls. Another objective was to assess increase in enthesis area and Psoriatic arthritis incidence, in a prospective 1-year follow-up. Method: Patients aged ≥18 years who were diagnosed with Psoriasis, without musculoskeletal complaints, and who did not have any clinical sign and/or symptom of enthesitis and synovitis were included in the study. The patients and healthy controls were evaluated with ultrasonography. Ultrasonography evaluation consisted of the detection of gray-scale enthesitis and power Doppler signal in the enthesis areas. The Madrid Sonography Enthesitis Index scoring system was used to quantify the extent of the sonographic enthesis abnormalities. Results: The mean MASEI score, structure, thickness, erosion, and calcification were significantly higher in the Psoriasis group than in the control group. The mean MASEI score, structure, erosion, and calcification measurements were significantly higher at the last examination when compared to the first examination. The triceps was the most commonly affected tendon in both groups. Conclusion: Ultrasonography is an important tool for diagnosis and follow-up of subclinical enthesitis in patients with psoriasis. Regardless of disease duration and severity, patients should be screened using ultrasonography at yearly intervals. abstract_id: PUBMED:37166435 The association between sonographic enthesitis with sonographic synovitis and tenosynovitis in psoriatic arthritis patients. Objectives: To examine the association between sonographic enthesitis with sonographic synovitis and tenosynovitis in PsA patients, and the association between sonographic enthesitis and clinical characteristics. Methods: Consecutive PsA patients that fulfilled the ClASsification criteria for Psoriatic ARthritis (CASPAR) were prospectively recruited. Each patient was evaluated by comprehensive clinical and sonographic assessment (greyscale and Doppler), the latter including 52 joints, 40 tendons and 14 entheses [according to MAdrid Sonography Enthesitis Index (MASEI) plus lateral epicondyles] performed by an experienced sonographer blinded to the clinical data. The US enthesitis score was further categorized to inflammatory (hypoechogenicity, thickening, bursitis and Doppler) and structural (enthesophytes/calcifications and erosions) subcategories. Multivariate linear regression models assessed the association between enthesitis and the selected variables. Results: A total of 158 PsA patients [mean (s.d.) age 52.3 (13) years, 88 (55.7%) females] were analysed. Multivariate linear regression analyses showed a significant association between sonographic enthesitis and sonographic synovitis (β = 0.18, P = 0.008) and between sonographic enthesitis and sonographic tenosynovitis (β = 0.06, P = 0.02). These associations were derived from the enthesitis inflammatory subcategory of the MASEI (P &lt; 0.05). Associations between enthesitis and synovitis were also demonstrated on the level of the elbow, knee and ankle joints (P &lt; 0.05). In addition, sonographic enthesitis was significantly associated with older age, male sex, swollen joint count, CRP level and physical occupation. Conclusions: Sonographic enthesitis is associated with sonographic synovitis and tenosynovitis. The severity of sonographic enthesitis may represent a marker for inflammatory activity in other musculoskeletal domains. abstract_id: PUBMED:32832446 Sonographic Evaluation of Subclinical Enthesopathy in Patients of Chronic Plaque Psoriasis. Background: Psoriasis is associated with spondyloarthropathy in 10%-30% of cases. Enthesitis is major feature of psoriatic arthritis. Ultrasonography can detect subclinical entheseal abnormalities in psoriasis patients. Objectives: To determine the prevalence of subclinical enthesopathy in psoriasis vulgaris using ultrasonography and evaluating its correlation with severity and duration of psoriasis. Materials And Methods: This study included 50 patients of psoriasis vulgaris and 50 healthy controls. Sonographic evaluation of six sites bilaterally (proximal plantar fascia, distal Achilles tendon, distal and proximal patellar ligaments, distal quadriceps, and brachial triceps tendons) were done in each subject. All Ultrasonographic findings were identified according to MASEI (Madrid sonography enthesitis index). Enthesopathy scores of patients and controls were compared and receiver operating characteristic curve was used to determine cut off value of MASEI, above which ultrasound enthesitis of clinical significance could be diagnosed. Results: 31 (62%) psoriasis patients had subclinical enthesopathy of clinical significance as compared to only 5 (10%) of controls. Mean MASEI score between psoriasis cases and control was statistically different, 12.72 ± 7.55 (Mean ± SD) and (5.14 ± 4.69), respectively (P value 0.000001).The receiver operating characteristic curve established an ultrasound score of &gt;11 as the best cut-off to differentiate between subject with enthesopathy of clinical significance from those with enthesopathy of unknown significance. No statistically significant correlation was found between the degree of enthesopathy (MASEI score) and duration and severity of the psoriasis. Conclusion: Ultrasonography can effectively screen subclinical entheseal abnormalities in psoriasis patients. abstract_id: PUBMED:36243411 Sex-Based Differences in Sonographic and Clinical Findings Among Patients With Psoriatic Arthritis. Objective: To investigate sex-based sonographic differences in patients with psoriatic arthritis (PsA). Methods: The study population included consecutive prospectively recruited patients with PsA, as determined by the CASPAR (Classification for Psoriatic Arthritis) criteria, who underwent clinical and physical examinations, followed by a detailed ultrasound (US) evaluation (greyscale and Doppler). US evaluation included 52 joints, 40 tendons, and 14 points of entheses (Modified Madrid Sonographic Enthesis Index [MASEI] plus lateral epicondyles) performed by an experienced sonographer blinded to the clinical data. The US score was based on the summation of a semiquantitative score for synovitis, tenosynovitis, and enthesitis. The US enthesitis score was categorized into inflammatory lesions (ie, hypoechogenicity, thickening, bursitis, and Doppler) and structural lesions (ie, enthesophytes/calcifications and erosions). Results: The study population of 158 patients included 70 males and 88 females. The males had higher rates of employment (P = 0.01), Psoriasis Area and Severity Index scores (P = 0.04), and mean swollen joint counts (P = 0.04). The total US score and its subcategory scores-the synovitis and tenosynovitis scores-were similar for both sexes, whereas the total enthesitis score and its subcategory score-the inflammatory enthesitis score-were significantly higher for the males compared to the females (P = 0.01 and P = 0.005, respectively). Hypoechogenicity, thickening, and enthesophytes were more prevalent in males compared to females (P &lt; 0.05). Multivariate ordinal logistic regression models showed that male sex was associated with a higher US inflammatory enthesitis score compared to female sex (odds ratio 1.96, P = 0.02). Conclusion: Sonographic enthesitis was more prevalent in males compared to females with PsA. These differences were not reflected by enthesitis disease activity scores derived from clinical assessment. abstract_id: PUBMED:35293334 Toward a Sonographic Composite Index for Diagnosis in Psoriatic Arthritis: Highlights From the GRAPPA Ultrasound Workshop. The ability to visualize musculoskeletal structures with high-resolution ultrasound is an asset to understanding the complexity of psoriatic arthritis (PsA). During the 2021 Annual Group for Research and Assessment of Psoriasis and Psoriatic Arthritis (GRAPPA) meeting, 3 topics were presented and discussed in the ultrasound workshop: (1) the progress on the Diagnostic Ultrasound Enthesitis Tool (DUET) project; (2) the sonographic evaluation of joints in PsA-GRAPPA joint project; and (3) extrasynovial lesions in PsA. The ultrasound group aims to develop sonographic tools that are feasible and can be used in standard care to diagnose PsA early. The discussions around these topics will shape the group's work toward developing a composite index to diagnose PsA early. abstract_id: PUBMED:37563062 Utility of Musculoskeletal Ultrasound in Psoriatic Arthritis. Purpose: Psoriatic arthritis (PsA) is a chronic autoimmune disease that causes a variety of musculoskeletal abnormalities. Musculoskeletal ultrasound in PsA is becoming increasingly popular, both in clinical practice and research. This narrative reviews recent literature on the utility of ultrasound in PsA. Methods: A search of PubMed was used to identify publications written in English, with titles containing the term psoriatic arthritis and either ultrasound, ultrasonography, or sonographic. A total of 178 publications were identified; those that were not relevant (n = 59), were not original research (n = 45), or that had small (&lt;30) sample sizes (n = 34) were excluded, leaving 40 studies for review of the use of ultrasound in various aspects of PsA. Publications with similar findings were grouped into seven domains: (1) the use of ultrasound findings compared to clinical assessment; (2) the use of ultrasound in the assessment of enthesitis; (3) the use of ultrasound in the assessment of nails; (4) the use of ultrasound as a screening tool in patients with psoriasis at risk for PsA; (5) the use of ultrasound in differentiating PsA from other similar conditions; (6) the use of ultrasound as a measure of disease activity; and (7) the use of ultrasound compared to MRI. Findings: In recent studies, ultrasound measures of inflammation tended to agree with objective clinical findings of disease, such as swollen joint counts, while being less influenced by subjective measures, such as pain. Ultrasound has utility in the assessment of enthesitis and psoriatic nail disease in PsA, and as an overall measure of disease activity. Ultrasound-based outcomes measures have been used in observational studies and in clinical trials involving PsA, and may have utility as a measure of treatment response. The findings from recent studies suggest that ultrasound may have utility in improving the accuracy and precision of screening programs designed to identify subclinical PsA in cohorts of patients with psoriasis; however, cost-efficacy remains to be determined. Beyond screening, ultrasound may have utility in the diagnosis of PsA in patients with suspected inflammatory arthritis, and ultrasound measures of inflammation agree with MRI measures of inflammation, meaning that incorporating ultrasound into clinical practice might help to overcome the barriers associated with MRI. Implications: As ultrasound technology continues to advance, and associated costs decrease, it is likely that ultrasound will become more integrated into the clinical journeys of patients with PsA. abstract_id: PUBMED:37574521 The burden of entheseal involvement in systemic lupus erythematosus: a comparative ultrasonograghic study. Background: Imaging is crucial for identifying and diagnosis of the musculoskeletal (MSK) symptoms, which are one of the most typical manifestations of systemic lupus erythematosus (SLE). For the joints, tendons, and entheseal sites, ultrasonography has shown to be sensitive and accurate for the diagnosis of both inflammation and structural damage. Aim: The goal of the current investigation is to determine the prevalence and the distribution of entheseal abnormalities in SLE patients, using musculoskeletal ultrasonography (MSUS) and to assess the relationship between entheseal sonographic changes and the SLE disease activity. Patients And Methods: One hundred sixty-eight subjects were studied (56 SLE patients, 56 psoriatic arthritis (PSA) patients, and 56 normal cases). To compare the frequency and the distribution of entheseal involvement, high-resolution MSUS was conducted to assess the entheseal sites of all patients in accordance with the Madrid Sonographic Enthesitis Index (MASEI). Results: Clinical enthesitis was detected in 39.3% of the SLE patients using the Leeds Enthesitis Index compared to 71% detected via US examination, indicating a high proportion of subclinical enthesitis in our SLE patients. The most frequently affected enthesis was the distal insertion of the patellar tendon at the tibial tuberosity which was detected in 41% of SLE patients. Enthesitis was significantly more frequent in PSA patients (100%) compared to SLE patients (71.4%) (p &lt; 0.05) and more significantly frequent in SLE patients compared to the healthy controls (19.6%). There was a significant correlation between MASI and SLEDAI scores (r = 0.250*, p = 0.048) and the total protein in 24 h (r = 0.289*, p = 0.031). In addition, there was an inverse significant correlation between MASEI and serum albumin (r = - 0.324*, p = 0.015). Conclusion: In SLE patients, enthesitis is frequently clinical and ultrasound-verified. The most impacted enthesis is at the insertion of the quadriceps tendon. Enthesitis presence and the rise in the MASI score can serve as indicators of the severity of the SLE disease. Key Points • The most impacted entheseal site lies at the insertion of the quadriceps tendon. • The presence and the rise in MASEI score can serve as indicators of the severity of the SLE disease. abstract_id: PUBMED:35861930 Serum interleukin-23 levels: relation to depression, anxiety, and disease activity in psoriatic arthritis patients. Objectives: Assessment of serum levels of IL-23 in PsA patients and its correlation with depression, anxiety, and disease activity. Methods: Eighty psoriatic arthritis (PsA) patients and eighty healthy volunteers matched for age and gender were included in this observational case-control study. All participants suspected to detailed history, clinical assessment, PsA activity using Disease Activity Index for Psoriatic Arthritis (DAPSA) score, the severity and extent of psoriasis was assessed by the Psoriasis Area and Severity Index (PASI), and ultrasonographic assessments of the entheses were examined according to the Madrid Sonographic Enthesitis Index (MASEI). Depression and anxiety were assessed by Hospital Anxiety and Depression Scale (HADS). Serum IL-23 was measured and correlated with disease activity, depression, and anxiety. Results: There was no significant difference between patients and controls regarding demographic data. Thirty-six PsA patients (45%) had anxiety and 28 patients (35%) had depression, while in the control group, 16 persons (20%) had anxiety and 12 (15%) had depression, with significant differences between the 2 groups (p &lt; 0.0001). There were significant differences in HADS anxiety and depression scores between patients and controls with significant positive correlations between HADS depression, anxiety scores and IL-23, DAPSA, PASI, and MASEI scores (p &lt; 0.05). IL-23 was positively correlated with DAPSA, PASI, and HADS scores; we observed that interleukin 23, higher DAPSA, and PASI were independently associated with depression and anxiety. Conclusion: Serum interleukin-23 levels were elevated in PsA patients and were found to be correlated with depression, anxiety, and disease activity. Key Points • Psoriatic arthritis is a multidimensional disorder with psychiatric drawbacks. • Interleukin-23 is a proinflammatory cytokines that was correlated with depression and anxiety in PsA patients. • Interleukin-23 was correlated with disease activity in PsA. • Depression and anxiety were positively correlated with disease activity in PsA. abstract_id: PUBMED:38233008 DKK-1 in psoriatic arthritis: Correlation with disease activity and enthesopathy. Background: Psoriatic arthritis (PsA) is a complex inflammatory disease with varied clinical characteristics. A pathognomonic characteristic of PsA is enthesitis. Entheseal inflammation ultimately leads to the production of new bone (enthesophytes). Dickkopf-related protein-1 (DKK-1) is a wingless (Wnt) inhibitor that inhibits osteoblast function. Objectives: Assessment of the serum level of DKK-1 and its association with disease activity and enthesopathy in PsA patients. Methods: This observational case-control study included 50 PsA patients and 50 healthy volunteers matched for age and gender. All participants were subjected to full medical history, clinical assessment, PSA activity using Disease Activity Index for Psoriatic Arthritis (DAPSA) score, the severity and extent of psoriasis were determined by the Psoriasis Area and Severity Index (PASI). Ultrasonographic assessment of the entheses was done in accordance with the Madrid Sonographic Enthesitis Index (MASEI). Serum level of DKK-1 and correlation with disease activity and enthesopathy in PsA patients were assessed. Results: There was no significant difference between patients and controls regarding age and sex. The mean value of SPARCC index, DAPSA score and PASI score were 6.74±4.58, 33.24±15.26, and 8.35±10.93, respectively. There was significant difference between patients and controls regarding the serum levels of DKK-1 and MASEI score (p&lt;0.0001). There was a significant positive correlation between serum DKK-1 and MASEI (r: 0.43527, p: 0.00158), MASEI inflammatory (r: 0.37958, p: 0.00655), and MASEI damage (r: 0.38384, p: 0.00593). Conclusions: Serum DKK-1 levels were elevated in PsA patients and were found to be correlated with MASEI score for enthesopathy. Answer: Yes, the Madrid Sonographic Enthesitis Index (MASEI) is useful for differentiating psoriatic arthritis (PsA) from psoriasis alone (PsC) and healthy controls (HC). Studies have shown that the total MASEI scores were higher in patients with PsA than in those with PsC, and both these groups had higher scores than HC, indicating that the severity of ultrasonographic (US) entheseal abnormalities is highest in patients with PsA followed by PsC and is lowest in healthy controls (PUBMED:24488414). The MASEI can specifically classify patients as having PsA with a sensitivity of 30% and a specificity of 95% when compared to HC and 89% when compared to PsC (PUBMED:24488414). Additionally, the MASEI scoring system has been used to detect subclinical enthesitis in patients with psoriasis, suggesting that ultrasonography is an important tool for diagnosis and follow-up of subclinical enthesitis in these patients (PUBMED:38368555). The MASEI score has also been associated with sonographic synovitis and tenosynovitis in PsA patients, indicating that the severity of sonographic enthesitis may represent a marker for inflammatory activity in other musculoskeletal domains (PUBMED:37166435). Furthermore, the MASEI has been used to evaluate subclinical enthesopathy in patients with psoriasis vulgaris, with a significant difference in mean MASEI scores between psoriasis cases and controls, suggesting that ultrasonography can effectively screen for subclinical entheseal abnormalities in psoriasis patients (PUBMED:32832446). The MASEI has also been used to investigate sex-based differences in sonographic findings among patients with PsA, revealing that sonographic enthesitis was more prevalent in males compared to females (PUBMED:36243411). Overall, the MASEI is a valuable tool for assessing entheseal involvement in PsA and can aid in differentiating PsA from psoriasis alone and healthy individuals.
Instruction: Is gestational age an independent variable affecting uterine scar rupture rates? Abstracts: abstract_id: PUBMED:16221523 Is gestational age an independent variable affecting uterine scar rupture rates? Objective: To evaluate the influence of gestational age on uterine scar rupture. Methods: This was a population-based study of data from Cardiff Births Survey over a 10-year (1990-1999) period. Women with only one previous lower segment caesarean section with singleton uncomplicated pregnancy of 37 or more week's gestation, undergoing trial of vaginal delivery were included. SPSS version 10 was used for statistical analysis. Mann-Whitney, Fisher's exact test and Chi-square tests were used wherever appropriate. Odds ratio (OR) with confidence intervals (CI) was used to quantify the risk. Potential confounding by other factors was controlled using logistic regression and corrected odds ratios with 95% confidence intervals were calculated. The data was analysed separately for induced and spontaneous labours. Primary outcome measure assessed was uterine scar rupture rate. Secondary outcome measures were repeat caesarean section rates, maternal and perinatal mortality and morbidity. Results: Total sample number was 1620. Eighty percent (n = 1301) of the population went into spontaneous labour and 20% (n = 319) were induced. Successful trial of vaginal birth was accomplished in 60% and trial of scar after estimated date of delivery did not alter this outcome significantly (39.1% versus 43.6%, p &gt; 0.05). We noted an overall scar rupture rate of 0.9% (n = 14) and caesarean section rate of 40.4% (n = 654). Scar rupture rates significantly increased in women who underwent trial of labour after estimated date of delivery (p &lt; 0.001, OR 6.3, CI 1.9-20.2) without a corresponding increase in caesarean section, maternal and perinatal morbidity figures. The influence of gestational age on scar rupture persisted even after controlling for other confounding factors such as birth weight, induction of labour and BMI (corrected OR 1.9, CI 1.1-3.5). Conclusions: The overall incidence of scar rupture and success of trial of scar after previous caesarean section in our population was similar to that quoted in the literature. Previous evidence has suggested that it is safe for these women to exceed 40 weeks gestation but our data do not support this. abstract_id: PUBMED:1763272 Uterine rupture due to unrecognized uterine scarring Uterine rupture affecting the pregnant uterus has become rare. It occurs essentially in the presence of scarring. The cases reported here described rupture following evacuation curettage which had caused an unrecognized uterine perforation. While such perforations often have no serious consequences, this explaining the usual abstention from treatment, it is nevertheless important that acute complications (intestinal or vascular trauma) or long term (uterine rupture of pregnant uterus) should not pass unrecognized. abstract_id: PUBMED:24394323 Bilateral uterine rupture of an unscarred gravid uterus before labor We report a case of bilateral spontaneous uterine rupture of an unscarred uterus occured in a primigravida at 32 weeks to take care in our department after in utero transfert. Uterine rupture occurs mainly on scarred uterus during labor. This is an unfrequent but serious complication involving fetal-maternal prognosis in the absence of immediate care. We are conducting a review about spontaneous uterine rupture of unscarred uterus, before and during labor. abstract_id: PUBMED:17963061 Uterine rupture and perinatal outcome. Background: In view of the increasing number of caesarean sections (CS), we wanted to investigate the clinical aspects of uterine rupture including perinatal outcome. Methods: A retrospective investigation of 24,181 deliveries at Stockholm South General Hospital between 1999 and 2004. Patient notes from cases with ICD-codes 0710 and 0711 were studied together with charts from previous deliveries and neonatal data from the Paediatric Department. Results: Some 22 cases of uterine rupture were found, giving an incidence of 0.9 per 1,000 deliveries. In all cases, the diagnosis was confirmed at laparotomy. In 19/22 cases, the rupture occurred in patients with a previous uterine scar, 18 of whom were delivered at term and one at 16 gestational weeks. One case of intrauterine fetal death was noted. Of the remaining 20 newborns, 9 had a 5-min Apgar &lt;/=7, one died after 3 days, and one had neurological sequele at follow-up. The remaining 18 newborns were healthy at discharge and up to 1 year of age. Conclusions: A previous uterine scar was the single most obvious risk factor. A positive finding was the relatively low risk of long-term post-asphyctic sequele in the newborns. abstract_id: PUBMED:20392573 Uterine rupture: risk factors, maternal and perinatal complications Objectives: To determine risk factors, maternal and perinatal morbidity and mortality associated with uterine rupture in women with previous caesarean delivery. Material And Methods: We conducted a population-based, retrospective cohort analysis, comparing deliveries with and without uterine rupture in women with uterine scar during a 12-year period. Women attempting a trial of labour were selected with precise criterion. We analysed obstetric history, characteristics of labour, mode of delivery, maternal and perinatal complications. Results: Thirty-six uterine ruptures were registered of which 11 were complete. These complete ruptures occurred at a rate of 0.4% among deliveries in women with previous caesarean delivery (n=2718) and 0.5% among women attempting a trial of labour (8/1440). Twenty-one ruptures (58%) were diagnosed during a trial of labour. Some risks factors were identified like a labor after 41weeks of amenorrhea and no medical history of natural childbirth. There were neither maternal nor neonatal deaths. The major maternal complications were postpartum haemorrhage (13.8%, n=5) and blood transfusion (8.3%, n=3), significantly most frequent than in the control group (p&lt;0.01). No hysterectomy was required. Concerning neonatal morbidity, mean lactate rate was significantly higher for the rupture group. Conclusion: The low rate of uterine rupture, maternal and neonatal complications supports a rigorously selection of women attemping a trial of labor. A labor after 41weeks of amenorrhea and no medical history of natural childbirth should be added to common criterion. abstract_id: PUBMED:33466158 Uterine wall rupture in a primigravid patient with oligohydramnios as the first manifestation: A case report. Rationale: Spontaneous uterine rupture during pregnancy, occurring most often during labor in the context of a scarred uterus, is a serious obstetric complication. Perhaps even more serious because of its extreme rarity, spontaneous uterine rupture in a primigravid patient with an unscarred gravid uterus would be essentially unexpected. Clinical manifestations of unscarred uterine ruptures are nonspecific and can be confusing, making a correct early diagnosis very difficult. Patient Concerns: A primigravid woman at 27 weeks of gestation presented to our hospital with acute oligohydramnios. Ultrasound examination at her local hospital revealed oligohydramnios that had not been present 1 week previously. A specific cause of the acute oligohydramnios, however, was not established. Diagnosis: Upon transfer to our hospital, the patient was hemodynamically stable without abdominal tenderness or peritoneal signs. Transabdominal ultrasound was repeated and confirmed oligohydramnios and seroperitoneum. The fetal heart rate was in the normal range, and blood tests revealed a low hemoglobin level of 91 g/L, which had been normal recently. A repeat sonogram after admission found that there was almost no amniotic fluid within the uterine cavity, and there was increased peritoneal fluid. Repeat hemoglobin showed a further decrease to 84 g/L. The combination of increased free abdominal fluid, lack of intrauterine fluid, and acutely decreasing hemoglobin strongly suggested uterine rupture with active intraperitoneal bleeding. Interventions: Emergent laparotomy was performed, and a male infant was delivered. Comprehensive abdominal exploration revealed a rupture in the right uterine cornua with ongoing slow bleeding, through which a portion of the amniotic sac protruded into the abdominal cavity. Outcomes: The laceration was repaired, the patient and neonate recovered without complications, and were discharged 5 days postoperatively. Lessons: An increased awareness of the rare but real possibility of spontaneous uterine rupture in a primigravid patient with no prior uterine scarring helped to establish an earlier diagnosis. Obstetricians should consider this possibility in pregnant females, even in the absence of risk factors and in early gestational age, when sudden unexplained clinical manifestations, such as acute oligohydramnios, are encountered. abstract_id: PUBMED:25030728 Clinical study on 67 cases with uterine rupture Objective: To investigate the incidence, etiology, diagnosis, treatment and outcome of uterus rupture. Methods: From January 1999 to May 2013, clinical data of 67 cases with uterine rupture in Woman's Hospital, School of Medicine, Zhejiang University were studied retrospectively. Results: A total of 67 cases of uterine rupture with 21(+2)-39(+2) gestational weeks out of 128 599 deliveries were recorded giving an incidence of uterine rupture was 0.052 1% (67/128 599) . Cesarean scar rupture were found in 59 cases (88%, 59/67) and noncesarean scar rupture were found in 8 cases (12%, 8/67). The causes of uterine rupture include 60 cases of scar uterus (59 cesarean scar cases and 1 myomyectomy scar case), 2 cases of assisted delivery operation trauma, 2 cases of malformed uterus, 3 cases of unknown causes (all with artificial abortion history).Non obstructive dystocia and improper oxytocin use were found to be related with uterine rupture.Fifty-two cases of cesarean scar spontaneous incomplete rupture were found and repaired during repeated cesarean delivery without maternal and fetal complications. The remaining 15 cases need emergency rescue operation for fetal distress or dead fetus, severe acute abdomen, prepartum or postpartum vaginal bleeding even maternal hypovolemia; 6/15 uterine rupture cases were diagnosed with the history, clinical symptoms and signs, 3/15 cases with ultrasonic found dead fetus in the peritoneal cavity before exploratory laparotomy and 6 cases were diagnosed just during laparotomy.Hysterectomy was done in 10/15 cases and uterine repair in 5/15 cases; there was no maternal death and 12 perinatal fetal death (5 cases of mid-late pregnancy termination for deformed fetus) of the 15 uterine rupture cases.One case with hysterectomy was complicated with stress pancreatitis and dysfunction of liver and kidney and discharged 20 days after operation, the remaining 14 cases were discharged 5-7 days postpartum.One case with repaired malformed uterus got pregnancy 4 years later and delivered a 2 000 g healthy baby by cesarean section at gestational age of 33(+4) weeks. Conclusion: Uterine scar caused by caesarean section or other operations became the leading cause of uterine rupture, assisted delivery operations, history of intrauterine manipulation and uterine malformations were the predisposing risk factors of uterine rupture. abstract_id: PUBMED:36309341 Trial of labor after cesarean delivery for estimated large for gestational age fetuses: A retrospective cohort study. Objective: Although ultrasonographic estimation of fetal weight ≥90th percentile is not associated with a greater risk for uterine rupture, trial of labor after cesarean delivery (TOLAC) is considered relatively contraindicated for macrosomic fetuses. Hence, when an estimated fetal weight of 4000 g is detected, TOLAC is usually avoided.Our aim was to evaluate the obstetrical outcome and safety of TOLAC in women with estimated large for gestational age fetuses (eLGA) (≥90th percentile). Study Design: Our retrospective cohort study encompassed all pregnant women with an estimated fetal weight ≥90th percentile for gestational age, admitted to a single tertiary care center between January 2012-July 2017 for TOLAC. Results: 1949 women met the inclusion criteria; 78 (4%) eLGA and 1871 (96%) controls. Fifty-five (70.5%) women in the study group had experienced a successful vaginal delivery compared to 1506 (80.5%) of the controls (p = 0.03). The rate of obstetrical complications, including scar dehiscence, uterine rupture, a 3rd/4th degree perineal tear or shoulder dystocia were comparable. The rate of post-partum hemorrhage (PPH) increased in the study group compared to the controls (7.7 % vs.1.7%; p = 0.001). Conclusion: TOLAC for eLGA fetuses can be considered safe, however, lower successful rates of vaginal births after a cesarean delivery and an increased PPH rate, may be expected. abstract_id: PUBMED:34197642 Uterine rupture during pregnancy: The URIDA (uterine rupture international data acquisition) study. Objective: To describe the characteristics and peripartum outcomes of patients diagnosed with uterine rupture (UR) by an observational cohort retrospective study on 270 patients. Methods: Demographic information, surgical history, symptoms, and postoperative outcome of women and neonates after UR were collected in a large database. The statistical analysis searched for correlation between UR, previous uterine interventions, fibroids, and the successive perinatal outcomes in women with previous UR. Results: Uterine rupture was significantly associated with previous uterine surgery, occurring, on average, at 36 weeks of pregnancy in women also without previous uterine surgery. UR did not rise exponentially with an increasing number of uterine operations. Fibroids were related to UR. The earliest UR occurred at 159 days after hysteroscopic myomectomy, followed by laparoscopic myomectomy (251 days) and laparotomic myomectomy (253 days). Fertility preservation was feasible in several women. Gestational age and birth weight seemed not to be affected in the subsequent pregnancy. Conclusion: Data analysis showed that previous laparoscopic and abdominal myomectomies were associated with UR in pregnancy, and hysteroscopic myomectomy was associated at earlier gestational ages. UR did not increase exponentially with an increasing number of previous scars. UR should not be considered a contraindication to future pregnancies. abstract_id: PUBMED:32774620 Spontaneous open book uterine rupture at 15 weeks' gestation in a pauciparous woman with scarred uterus: a case study Despite all the health policies implemented in our developing countries, uterine rupture (RU) remains common. They are severe and involve maternal and fetal prognosis. Among the multiple risk factors, the most common cited in the literature is C-section scar. Most occurs during labor or at the end of pregnancy. Uterine rupture during the first or second trimester is exceptional and its clinical manifestation is variable. We here report a case of spontaneous uterine rupture at 15 weeks' gestation in a pauciparous woman with scarred uterus. Exacerbation of clinical symptoms was manifested by peritoneal irritation. Surgical exploration revealed complete vertical rupture from the bottom to the lower segment of the uterus with open book opening of the uterus. This study highlights that uterine rupture should be considered in patients with scarred uterus presenting with abdominal pain associated with signs of hemoperitoneum, regardless of whether their pregnancies are at term or in the first two trimesters, and regardless of age (young patients) and parity. Answer: Yes, gestational age is an independent variable affecting uterine scar rupture rates. A study based on data from the Cardiff Births Survey over a 10-year period found that scar rupture rates significantly increased in women who underwent trial of labor after the estimated date of delivery. This increase persisted even after controlling for other confounding factors such as birth weight, induction of labor, and BMI, indicating that gestational age independently influences the risk of scar rupture (PUBMED:16221523).
Instruction: Is there any relationship between lipids and vitamin B levels in persons with elevated risk of atherosclerosis? Abstracts: abstract_id: PUBMED:12640345 Is there any relationship between lipids and vitamin B levels in persons with elevated risk of atherosclerosis? Background: There is increasing evidence that plasma homocysteine level is an independent risk factor for atherosclerosis. Low levels of serum folates, cobalamin and pyridoxine are associated with increased risk of cardiovascular disease. Most dietary products contain cholesterol as well as methionine, so hyperlipidemia could be associated with a higher level of homocysteine and inversely with lower levels of B vitamins. The aim of this study was to investigate the differences in levels of lipids and vitamins affecting homocysteine metabolism in different groups of patients. Material/methods: We examined 38 healthy persons, 55 patients hospitalised for cardiac surgery, and 62 patients without clinical evidence of atherosclerosis but with one of the atherosclerosis risk factors (hypercholesterolemia, NIDDM or chronic renal insufficiency). The levels of total cholesterol, triglycerides, vitamin B12, folic acid and vitamin B6 index in serum were determined using routine laboratory methods. Results: We found no association between lipids and B vitamins in any examined group. There were significant differences between concentrations of analysed parameters in all groups of patients as compared to controls. Conclusions: The lack of correlation between the levels of lipid parameters and B vitamins in serum indicates that these may be independent, additional risk factors for atherosclerosis. Higher vitamin B6 deficiency in dialysis patients is probably caused by low intake combined with the increased requirements of uremic patients. Permanent monitoring of B vitamins in serum is necessary in patients with elevated risk of atherosclerosis, as well as long-term education, careful diet planning and supplementation. abstract_id: PUBMED:20236081 Hyperhomocysteinemia and cardiovascular risk: effect of vitamin supplementation in risk reduction. Homocysteine is a sulfur-containing aminoacid produced during metabolism of methionine. Since 1969 the relationship between altered homocysteine metabolism and both coronary and peripheral atherotrombosis has been known; in recent years experimental evidences have shown that elevated plasma levels of homocysteine are associated with an increased risk of atherosclerosis and cardiovascular ischemic events. Several mechanisms by which elevated homocysteine impairs vascular function have been proposed, including impairment of endothelial function, production of Reactive Oxygen Species (ROS) and consequent oxidation of low-density lipids. Folic acid and B vitamins, required for remethylation of homocysteine to methionine, are the most important dietary determinants of homocysteinemia and daily supplementation typically lowers plasma homocysteine levels. Recently, large-scale intervention trials have been conducted to determine whether lowering homocysteine concentrations through B vitamins supplementation can decrease cardiovascular risk in healthy subjects or improve survival in patients with coronary heart disease. Some of these trials found no significant beneficial effects of combined treatment with folate and vitamin B(12), with or without vitamin B(6), in spite of adequate homocysteine lowering. In conclusion, it is still unclear whether decreasing plasma levels of homocysteine through diet or drugs may be paralleled by a reduction in cardiovascular risk. abstract_id: PUBMED:12204804 Effects of serum B vitamins on elevated plasma homocysteine levels associated with the mutation of methylenetetrahydrofolate reductase gene in Japanese. High plasma homocysteine, a risk factor for atherosclerosis, is frequently caused by a common mutation in the gene for the enzyme, 5,10-methylenetetrahydrofolate reductase (MTHFR), C677T (alanine to valine substitution) or low intake of B vitamins that affect the remethylation or transsulfuration pathways in homocysteine metabolism. However, the interaction of the C677T mutation and B vitamins other than folate has not been well elucidated. We conducted a cross-sectional survey of 324 men and 641 women who participated in a 1996 health examination under a hypothesis that high nutritional status of folate, vitamin B12 and vitamin B6 expressed as high serum levels, may compensate for the hyperhomocysteinemia associated with homozygosity for the C677T mutation, but not for having the mutation per se. Age-adjusted plasma homocysteine levels were higher for both men and women with the homozygous genotype for the mutation than those who were heterozygous or had no mutation. Elevated homocysteine levels in homozygous genotype was attenuated among persons with higher serum levels of vitamin B12 and folate, but not vitamin B6, and among persons with the combination of lower folate and higher vitamin B12 and of higher folate and higher vitamin B12, split by the median. These findings suggest that elevated homocysteine levels among Japanese with the homozygous genotype for the MTHFR gene mutation can be modified efficiently by dietary supplement of vitamin B12 as well as folate. abstract_id: PUBMED:15776726 Group B vitamins as new variables related to the cardiovascular risk The nutritional status and plasma concentrations of some group B vitamins, namely vitamin B6, vitamin B12 and folic acid, have recently emerged as inverse correlates of cardiovascular risk, and several experimental and clinical studies, these latter mostly retrospective and case-control studies, indicate a defect of such vitamins as capable of promoting the progression of atherosclerosis. Since all these vitamins are implicated in homocysteine metabolism, and since homocysteine has a well-recognized relationship with cardiovascular risk, the simplest hypothesis to explain the relationship of vitamin B6, vitamin B12 and folic acid on the one hand, and cardiovascular risk on the other is that this relationship is mediated by plasma levels of homocysteine. The most convincing literature data for the existence of a relationship with cardiovascular risk are for vitamin B6 and folic acid. These vitamins, however, have also a series of in vitro effects indicating a direct antiatherogenic action, and the results of several clinical studies, especially for vitamin B6, indicate an inverse relationship with cardiovascular risk at least in part independent of homocysteinemia. A further confirmation of these data is important to devise future intervention strategies in primary and secondary prophylaxis of atherosclerotic vascular disease. abstract_id: PUBMED:37554588 Early Detection of Subclinical Atherosclerosis: Hyperhomocysteinemia as a Promising Marker in Adolescents With Vitamin B Deficiency. In recent decades, the increased incidence of cardiovascular disease (CVD) mortality among young adults has raised concerns. Although clinical manifestations of CVD typically occur later in life, the underlying pathological processes emerge early on. This review article summarizes the association between vitamin B deficiency-induced hyperhomocysteinemia and subclinical atherosclerosis in adolescents. Numerous studies have demonstrated that elevated homocysteine levels are an independent risk factor for endothelial dysfunction (ED) and arterial stiffness, which are key contributors to CVD. Notably, vitamin B deficiency, particularly in vitamin B9 and vitamin B12, emerges as a significant factor in childhood hyperhomocysteinemia, initiating the development of subclinical atherosclerosis in early life. A comprehensive review of relevant literature from prominent bibliographic databases, including PubMed/MEDLINE, PubMed Central, Google Scholar, and Cochrane, was performed. Four cross-sectional studies focusing on homocysteine levels as an exposure variable and markers of atherosclerosis as outcome measures were included and reviewed as part of our analysis. The reviewed studies demonstrate a positive correlation between homocysteine levels and markers of atherosclerosis, including increased carotid intima-media thickness (CIMT) and ED. Mainly, adolescents with vitamin B12 deficiency exhibit a significant positive correlation between homocysteine levels and CIMT. These findings underscore the potential of hyperhomocysteinemia as an early marker for detecting subclinical atherosclerosis in adolescents with vitamin B deficiency. Despite limited research in this area, recognizing the importance of early detection and management of subclinical atherosclerosis in adolescents can help mitigate the risk of severe cardiovascular events such as myocardial infarction and stroke in young adulthood. abstract_id: PUBMED:27658132 Relationship between 25(OH)D levels and circulating lipids in African American adolescents. Background: Vitamin D deficiency is commonly seen among African American adolescents. Lipid levels during childhood are excellent predictors of adult dyslipidemia and atherosclerosis. There is a paucity of data on the relationship between 25 hydroxy vitamin D [25(OH)D] levels and lipids among African American adolescents. The objective of this study was to determine if there is an association between 25(OH)D levels and circulating lipids in African American adolescents residing in midwestern United States. Methods: African American adolescents residing in Rochester, MN (latitude 44°N), USA, underwent measurements of 25(OH)D and lipids following overnight fast. Pearson's correlation test, linear regression model and scatter plots were used to explore the association between 25(OH)D levels and lipids. Results: 25(OH)D levels &lt;30 ng/mL were seen in 21/24 (87%) of the subjects. 25(OH)D levels were inversely correlated with total cholesterol (r=-0.42; p=0.040) and with non-high-density lipoprotein (non-HDL) cholesterol (r=-0.42; p=0.040 ). These associations remained statistically significant after adjustment for age, gender and adiposity. In the multivariate linear regression model, and after adjusting for BMI Z-score, each 1 ng/mL increase in 25(OH)D was associated with a decrease in total cholesterol of 1.38 mg/dL (95% CI: -2.63, -0.14, p=0.030) and with a decrease in non-HDL cholesterol of 1.14 mg/dL (95% CI: -2.09, -0.18, p=0.020). Conclusions: 25(OH)D levels were inversely correlated with total cholesterol and non-HDL cholesterol levels in African American adolescents residing in midwestern United States. Further studies with larger sample sizes are needed to determine if low vitamin D status in African American adolescents is a potential modifiable risk factor for cardiovascular disease. abstract_id: PUBMED:15103481 Age-associated changes in the metabolism of vitamin B(12) and folic acid: prevalence, aetiopathogenesis and pathophysiological consequences The increasing number of older people is characteristic for most industrialised nations and implicates the known psychosocial and economic consequences. Therefore, an optimal nutrient supply that promotes continuing mental and physical well-being is particularly important. In this respect, vitamin B(12) and folic acid play a major role, since deficiency of both vitamins is associated with the pathogenesis of different diseases such as declining neurocognitive function and atherosclerotic lesions. Vitamin B(12) and folic acid act as coenzymes and show a close molecular interaction on the basis of the homocysteine metabolism. In addition to the serum concentrations of the vitamins, the metabolites homocysteine and methylmalonic acid are sensitive markers of cobalamin and folate status. Depending on the used marker, 3-60% of the elderly are classified as vitamin B(12) deficient and about 29% as folate deficient. Predominantly, this high prevalence of poor cobalamin status is caused by the increasing prevalence of atrophic gastritis type B, which occurs with a frequency of approximately 20-50% in elderly subjects. Atrophic gastritis results in declining gastric acid and pepsinogen secretion, and hence decreasing intestinal digestion and absorption of both B vitamins. This is the reason why an insufficient vitamin B(12) status in the elderly is rarely due to low dietary intake. In contrast, folic acid intake among elderly subjects is generally well below the recommended dietary reference values. Even moderately increased homocysteine levels or poor folate and vitamin B(12) status are associated with vascular disease and neurocognitive disorders. Results of a meta-analysis of prospective studies revealed that a 25% lower homocysteine level (about 3 micromol/L) was associated with an 11% lower ischemic heart disease risk and 19% lower stroke risk. It is still discussed, whether hyperhomocysteinemia is causally related to vascular disease or whether it is a consequence of atherosclerosis. Estimated risk reduction is based on cohort studies, not on clinical trials. Homocysteine initiates different proatherogenetic mechanisms such as the formation of reactive oxygen species and an enhanced fibrin synthesis. Supplementation of folic acid (0.5-5 mg/d) reduces the homocysteine concentration by 25%. Additional vitamin B(12) (0.5 mg/d) induces further reduction by 7%. In secondary prevention, supplementation already led to clinical improvements (reduction of restenosis rate and plaques). Depression, dementia, and mental impairment are often associated with folate and vitamin B(12) deficiency. The biochemical reason of this finding may be the importance of folic acid and vitamin B(12) for the transmethylation of neuroactive substances (myelin, neurotransmitters) which is impaired in vitamin deficiency ("hypomethylation hypothesis"). In recent years, there is increasing evidence for a role of folic acid in cancer prevention. As a molecular mechanism of a preventive effect of folic acid the hypomethylation of certain DNA sections in folate deficiency has been suggested. Since folate and vitamin B(12) intake and status are mostly insufficient in elderly subjects, a supplementation can generally be recommended. abstract_id: PUBMED:16927597 Hyperhomocysteinemia and B-vitamin deficiency. Current clinical aspects Mild to moderate hyperhomocysteinemia has been recognized as an independent risk factor for coronary heart diseases, neurodegenerative diseases, osteoporosis, and pregnancy complications. Low intakes and decreased absorption of the B-vitamins are the most common causes of hyperhomocysteinemia, which is very prevalent in elderly people. The current review summarizes results from epidemiological studies and presents possible pathological mechanisms of hyperhomocysteinemia. In addition, the current study is a critical evaluation of results from several intervention studies that have been initiated for primary or secondary prevention of coronary and degenerative diseases. Lowering the risk of stroke is currently the most impressive effect of homocysteine-lowering-treatment and underlines the causal relationship. Available data is not sufficient for a final conclusion about the effectiveness of homocysteine-lowering in patients with coronary vascular diseases. Current recommendations stress the importance of measurement of plasma concentrations of homocysteine in patients with multiple risk factors. This allow early intervention with B-vitamins. abstract_id: PUBMED:33724098 The role of B vitamins in stroke prevention. Elevated plasma levels of homocysteine (Hcy) are a recognized risk factor for stroke. This relationship represents one aspect of the debated `Hcy hypothesis'. Elevated Hcy may be an independent and treatable cause of atherosclerosis and thrombotic vascular diseases. Further observations indicate that proper dietary supplementation with B-vitamins decreases total plasma Hcy concentrations and may be an effective intervention for stroke prevention. Metabolic vitamin B12 deficiency is a nutritional determinant of total Hcy and stroke risk. Genetic factors may link B vitamins with stroke severity due to the impact on Hcy metabolism of polymorphism in the genes coding for methylenetetrahydrofolate reductase, methionine-synthase, methionine synthase reductase, and cystathionine β-synthase. Several meta-analyses of large randomized controlled trials exist. However, they are not completely in agreement about B vitamins' role, particularly folic acid levels, vitamin B12, and B6, in lowering the homocysteine concentrations in people at high stroke risk. A very complex relationship exists between Hcy and B vitamins, and several factors appear to modify the preventive effects of B vitamins in stroke. This review highlights the regulating factors of the active role of B vitamins active in stroke prevention. Also, inputs for further large, well-designed studies, for specific, particularly sensitive subgroups are given. abstract_id: PUBMED:21719424 Vitamin D levels in Chinese patients with systemic lupus erythematosus: relationship with disease activity, vascular risk factors and atherosclerosis. Objectives: To study the relationship of 25(OH)D(3) level with disease activity, vascular risk factors and atherosclerosis in SLE. Methods: Consecutive patients who fulfilled four or more ACR criteria for SLE were recruited for assay of 25(OH)D(3) level. Disease activity was assessed by the SLEDAI and physicians' global assessment (PGA). Patients with vascular risk factors were screened for atherosclerosis at the coronary or carotid arteries. Correlation between 25(OH)D(3) levels and SLEDAI scores was studied by linear regression. The link between vascular risk factors, atherosclerosis and vitamin D deficiency was also examined. Results: A total of 290 SLE patients were studied [94% women; mean (s.d.) age 38.9 (13.1) years; disease duration 7.7 (6.7) years; 78% patients had clinical or serological lupus activity]. Two hundred and seventy-seven (96%) patients had vitamin D insufficiency [25(OH)D(3) &lt; 30 ng/ml] and 77 (27%) patients had vitamin D deficiency (&lt;15 ng/ml). Levels of 25(OH)D(3) correlated inversely with PGA (β -0.20; P = 0.003), total SLEDAI scores (β -0.19; P = 0.003) and subscores due to active renal, musculoskeletal and haematological disease. Subjects with vitamin D deficiency had significantly higher total/high-density lipoprotein (HDL) cholesterol ratio [3.96 (2.94) vs 3.07 (0.80); P = 0.02] and prevalence of aPLs (57 vs 39%; P = 0.007). Of 132 patients, 58 (44%) with vascular risk factors screened were positive for subclinical atherosclerosis. No association could be demonstrated between 25(OH)D(3) level and atherosclerosis, which was mainly associated with increasing age, menopause, obesity and hyper-triglyceridaemia. Conclusions: In this large cross-sectional study of SLE patients, 25(OH)D(3) level correlates inversely with disease activity. Vitamin D deficiency is associated with dyslipidaemia. In patients with vascular risk factors, subclinical atherosclerosis is not associated with hypovitaminosis D. Answer: The relationship between lipids and vitamin B levels in persons with elevated risk of atherosclerosis is not straightforward. One study found no association between lipids and B vitamins in different groups of patients, including those hospitalized for cardiac surgery and those with atherosclerosis risk factors such as hypercholesterolemia, NIDDM, or chronic renal insufficiency (PUBMED:12640345). This suggests that lipid parameters and B vitamins may be independent, additional risk factors for atherosclerosis. However, other studies have highlighted the role of hyperhomocysteinemia, which is influenced by B vitamin levels, as an independent risk factor for atherosclerosis and cardiovascular ischemic events (PUBMED:20236081). Elevated homocysteine levels can be attenuated by higher serum levels of vitamin B12 and folate, but not vitamin B6, indicating the importance of these vitamins in managing homocysteine levels and potentially modifying the risk of atherosclerosis (PUBMED:12204804). Furthermore, B vitamins have been implicated in cardiovascular risk, with deficiencies in vitamins B6, B12, and folic acid being inversely correlated with cardiovascular risk. These vitamins are involved in homocysteine metabolism, and their deficiency can promote the progression of atherosclerosis (PUBMED:15776726). In adolescents, vitamin B deficiency-induced hyperhomocysteinemia has been associated with subclinical atherosclerosis, suggesting that early detection and management of vitamin B deficiency could mitigate the risk of cardiovascular events later in life (PUBMED:37554588). In summary, while one study found no direct association between lipid levels and B vitamin levels (PUBMED:12640345), the overall body of research suggests that B vitamins play a role in cardiovascular health, particularly through their impact on homocysteine levels, which can affect the risk of atherosclerosis (PUBMED:20236081, PUBMED:12204804, PUBMED:15776726, PUBMED:37554588).
Instruction: Is there any relationship between cardiovascular risk markers and young women with diminished ovarian reserve? Abstracts: abstract_id: PUBMED:24915163 Is there any relationship between cardiovascular risk markers and young women with diminished ovarian reserve? Objective: It has been widely known that cardiovascular disease (CVD) risk is increased in menopause. The aim of the study was to evaluate whether this risk was elevated in young women with diminished ovarian reserve (DOR). Methods: A hundred women with DOR and 100 women with normal ovarian reserve (NOR) attending the infertility unit at Suleymaniye Maternity, Research &amp;Training Hospital, were enrolled in the study. CVD risk markers such as insulin resistance (defined by the homeostasis model assessment ratio [HOMA-IR]), C-reactive protein (CRP), low-density lipoprotein (LDL), high-density lipoprotein (HDL), total cholesterol (TC), and triglyceride (TG) were assessed. Results: HOMA-IR, CRP, TG, LDL levels were higher and HDL was lower among patients with DOR compared to the controls (p &lt; 0.05 for all). There were positive associations between DOR and HOMA-IR, CRP, TG, LDL levels and a negative correlation with HDL (p &lt; 0.05 for all). However, multivariate logistic regression analysis showed that HOMA-IR, CRP, TG, and HDL were independent variables that were associated with DOR. Conclusions: CVD risk markers were increased in women with DOR. Further studies with larger groups are needed to investigate the nature of the link in these patients. abstract_id: PUBMED:26613901 Low antimullerian hormone levels may be associated with cardiovascular risk markers in women with diminished ovarian reserve. There is growing evidence that diminished ovarian reserve (DOR) may be associated with cardiovascular disease (CVD). The aim of the study was to investigate whether there was any relationship between antimullerian hormone (AMH) and CVD risk markers in the study. Ninety women with DOR and 90 women with normal ovarian reserve (NOR) attending the infertility unit at XXXX, were enrolled in the study. CVD risk markers such as insulin resistance [defined by the homeostasis model assessment ratio (HOMA-IR)], C-reactive protein (CRP), low density lipoprotein (LDL), high-density lipoprotein (HDL), total cholesterol (TC), and triglyceride (TG) were assessed. HOMA-IR, CRP, TG, LDL levels were higher and HDL and AMH were lower among patients with DOR compared with controls (p &lt; 0.05, for all). There were positive associations between low AMH and HOMA-IR, CRP, TG, LDL levels and a negative correlation with HDL (p &lt; 0.05, for all). However, multivariate logistic regression analysis showed that HOMA-IR, CRP, TG and HDL were independent variables that were associated with low AMH. There was a close relationship between low AMH and CVD risk markers in the study. Further studies with larger groups are needed to investigate the nature of this link in these patients. abstract_id: PUBMED:30316413 Cardiovascular health and ovarian aging. One in three American women will die from cardiovascular disease (CVD), making it the leading cause of death among women in the United States. Traditionally, CVD has been seen as a disease of postmenopausal women, yet increasingly, risk factors for CVD are being characterized earlier. Although menopause, and its associated hypoestrogenism, has been consistently linked to CVD risk, accelerated ovarian aging among premenopausal patients has become a focus of attempts to identify women with increased CVD risk earlier. We present a review of the evidence for the association between early menopause and diminished ovarian reserve with CVD and its risk factors. abstract_id: PUBMED:37533289 Increased serine synthesis in cumulus cells of young infertile women with diminished ovarian reserve. Study Question: What are the differences in gene expression of cumulus cells (CCs) between young women with diminished ovarian reserve (DOR) and those of similar age with normal ovarian reserve (NOR)? Summary Answer: Gene expression and metabolome profiling analysis demonstrate that the de novo serine synthesis pathway (SSP) is increased in the CCs of young women with DOR. What Is Known Already: The incidence of DOR has risen, tending to present at younger ages. Its mechanisms and aetiologies are still poorly understood. Abnormal metabolism is present in luteinized CCs of patients with DOR. Previous studies have revealed that mitochondrial dysfunction and impaired oxidative phosphorylation in CCs are related to DOR in women of advanced age. The pathogenic mechanisms likely differ between young women with DOR and cases associated with advanced maternal age. Several studies have examined amino acid metabolism in the follicle, with a focus on embryo development, but less information is available about CCs. The physiological significance of de novo serine synthesis in follicles and oocytes remains largely unknown. Study Design, Size, Duration: CC samples were obtained from 107 young infertile women (age &lt;38 years) undergoing ICSI, from July 2017 to June 2019, including 54 patients with DOR and 53 patients with NOR. Participants/materials, Setting, Methods: Oocyte development data were analysed retrospectively. Comprehensive genome-wide transcriptomics of CCs was performed. Differentially expressed genes (DEGs) were identified. Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) analyses were performed to categorize the functions of the DEGs and identify significantly enriched pathways. The transcript and protein levels of key enzymes involved in serine synthesis were verified in additional samples using quantitative real-time PCR (qRT-PCR) (n = 10) and capillary western blotting (n = 36). Targeted metabolomics of amino acids in CC extracts was performed by ultrahigh-performance liquid MS (UHPLC-MS/MS). Main Results And The Role Of Chance: The number of oocytes (2.4 ± 2.2 versus 12.1 ± 5.3) and metaphase II oocytes (2.1 ± 2.0 versus 9.9 ± 4.9) retrieved was significantly decreased in the DOR versus the NOR group, respectively (P &lt; 0.0001). The rates of fertilization (80.7% versus 78.8%), viable embryos (73.7% versus 72.5%), and high-quality embryos (42.8% versus 49.0%) did not differ between the DOR and NOR groups, respectively (P &gt; 0.05). A total of 95 DEGs were found by transcriptome sequencing. GO and KEGG analyses demonstrated that the DEGs were linked to amino acid metabolism and suggested significantly higher activity of the de novo SSP in the CCs of young women with DOR. Further qRT-PCR and capillary western blotting revealed that key enzymes (PHGDH, PSAT1, PSPH, and SHMT2) involved in de novo serine synthesis were upregulated, and UHPLC-MS/MS analysis showed increases in serine and glycine (a downstream product of serine) levels in the CCs of young patients with DOR. Our data clearly demonstrate that the de novo SSP, which diverts 3-phosphoglycerate from glycolysis to serine synthesis, was upregulated in young DOR CCs. Large Scale Data: N/A. Limitations, Reasons For Caution: Regarding the reproductive capacity of young patients DOR, the pregnancy outcomes were not analysed. The sample size was limited, and only women undergoing ICSI were examined since this was a prerequisite for the acquisition of CCs, which may cause selection bias. The exact mechanisms by which the SSP in CCs regulates ovarian reserve still require further study. Wider Implications Of The Findings: Our research presents new evidence that alterations of the SSP in CCs of young infertile women are associated with DOR. We believe this is a significant contribution to the field, which should be key for understanding the cause and mechanisms of ovarian hypofunction in young women. Study Funding/competing Interest(s): This work was supported by grants from the Ministry of Science and Technology of China (2018YFC1005001) and National Natural Science Foundation of China (31601197). There were no competing interests. Trial Registration Number: N/A. abstract_id: PUBMED:37837429 A phenomenological exploration of the mental health experiences of young women with diminished ovarian reserve. Infertility is a reproductive disease affecting one in six individuals that renders an individual unable to conceive. One cause of infertility is diminished ovarian reserve (DOR), which reduces the quantity and/or quality of a female's oocyte pool. Although typically indicating normal ovarian aging during the late 30s and early 40s, DOR can also impact younger women, increasing their risk for psychological distress from an unexpected diagnosis of infertility. A phenomenological approach examined the mental health experiences and perceptions of infertility-related mental health care of young women with DOR. Women diagnosed with DOR by age 35 in the United States who experienced emotional distress during infertility were recruited from infertility-specific social media and via snowball sampling. Participants completed a demographic survey and semi-structured individual interview that was audio-recorded, transcribed verbatim, and analyzed using a phenomenological approach. Ten women ages 27-41 completed the study. On average, participants were 30 years of age at the time of DOR diagnosis (age range 25-35), primarily Caucasian (90%), and married (90%). Two main themes were found: (1) Young women with DOR feel like a "forgotten community" coping with an invisible disease; and (2) Not all fertility clinics are created equal. Participants perceived their diagnosis as devastating and hopeless and urged others to find a provider with ample experience treating patients with DOR. This study helped to understand how young women with DOR perceive their mental health and identified a significant need for advancing towards more holistic infertility healthcare that encompasses both physical and mental health. abstract_id: PUBMED:33786734 Forecasting early onset diminished ovarian reserve for young reproductive age women. Purpose: To investigate the biological networks associated with DOR in young women and the subsequent molecular impact on preimplantation embryos. Methods: Whole peripheral blood was collected from patients: young women presenting with diminished ovarian reserve (DOR) and age-matched young women with normal ovarian reserve. Maternal exome sequencing was performed on the NovaSEQ 6000 and sequencing validation was completed using Taqman® SNP Genotyping Assays. Blastocyst global methylome and transcriptome sequencing were also analyzed. Results: Exome sequencing revealed 730 significant DNA variants observed exclusively in the young DOR patients. Bioinformatic analysis revealed a significant impact to the Glucocorticoid receptor (GR) signaling pathway and each young DOR female had an average of 6.2 deleterious DNA variants within this pathway. Additional stratification based on patient age resulted in a cut-off at 31 years for young DOR discrimination. Embryonic global methylome sequencing resulted in only a very small number of total CpG sites with methylation alterations (1,775; 0.015% of total) in the DOR group. Additionally, there was no co-localization between these limited number of altered CpG sites and significant variants, genes, or pathways. RNA sequencing also resulted in no biologically significant transcription changes between DOR blastocysts and controls. Conclusion: GR signaling DNA variants were observed in women with early-onset DOR potentially compromising oocyte production and quality. However, no significant downstream effects on biological processes appear to impact the resulting blastocyst. The ability to forecast premature DOR for young women may allow for earlier identification and clinical intervention for this patient population. abstract_id: PUBMED:26806685 Diminished ovarian reserve: is it a neglected cause in the assessment of recurrent miscarriage? A cohort study. Objective: To study whether diminished ovarian reserve is associated with recurrent miscarriage. Design: Cross-sectional clinical study. Setting: Tertiary-care center. Patient(s): Women with history of recurrent miscarriage (RM; n = 71) and sequentially selected age-matched fertile women who were seeking contraception (control; n = 70). Intervention(s): Not applicable. Main Outcome Measures(s): Serum levels of FSH, LH, E2, and antimüllerian hormone (AMH); FSH/LH ratio; ovarian volumes; and antral follicle count (AFC). Result(s): The levels of FSH were 8.6 ± 3.7 U/L in the RM group and 7.1 ± 3.9 U/L in the control group; this difference was statistically significant. The levels of AMH were significantly lower in the RM group than in the control group (2.9 ± 1.7 ng/mL vs. 3.6 ± 1.7 ng/mL). The percentage of women with levels of FSH ≥11 U/L was significantly higher in the RM group than in the control group (18.3% vs. 4.3%). In the RM group, the percentage of women with levels of AMH ≤1 ng/mL was significantly higher than in the control group (19.7% vs. 5.7%). Conclusion(s): Recurrent miscarriage may be associated with diminished ovarian reserve. Larger prospective randomized controlled trials are warranted to better determine the predictive potential of ovarian reserve markers in recurrent miscarriage. abstract_id: PUBMED:37331511 Predictive factors of spontaneous pregnancies among women with diminished ovarian reserve patients treated with DHEA Introduction: Diminished ovarian reserve remains a challenge in the reproductive medicine field. Treatment options for these patients are limited and there is no consensus to make any recommendations. Regarding adjuvant supplements, DHEA could play a role in follicular recruitment and, therefore, may increase spontaneous pregnancy rate. Materials And Methods: This study was a monocentric historical and observational cohort study carried out in the reproductive medicine department at the University Hospital, Femme-Mère-Enfant in Lyon. All women presenting with a diminished ovarian reserve treated with 75mg/day of DHEA were consecutively included. The main objective was to evaluate the spontaneous pregnancy rate. The secondary objectives were to identify predictive factors for pregnancy and the evaluation of treatment side effects. Results: Four hundred and thirty-nine women were included. In all, 277 were analyzed, 59 had a spontaneous pregnancy (21.3%). The probability of being pregnant was respectively 13.2% (IC95 9-17.2%), 21.3% (IC95 15.1-27%) and 38.8% (IC95 29.3-48.4%) at 6, 12 and 24 months. Only 20.6% of patients complained of side effects. Conclusion: DHEA may improve spontaneous pregnancies in women with diminished ovarian reserve without any stimulation. abstract_id: PUBMED:37607568 Prevalence of diminished ovarian reserve in Chinese women with follicular cysts and menstrual disorders. Objectives: To assess the prevalence of diminished ovarian reserve (DOR) in Chinese women with follicular cysts and menstrual disorders and relationship to hormonal markers. Methods: 117 women with follicular cysts and menstrual disorders, aged 24 ∼ 53 (39.19 ± 6.61) years; measurements of height, weight, follicle-stimulating hormone (FSH), luteinizing hormone (LH), E2, progesterone (Po), prolactin (PRL), total testosterone, AMH, follicular cyst diameter, endometrial thickness. Three age groups were compared: 1) 21 ∼ 30 years, 2) 30 ∼ 40 years, 3) &gt; 40 years. Results: Total prevalence of DOR 86.3%, in the groups 50%, 81.6%, and 98.4%, in group-3 significantly higher than in group-1 and 2. 34.2% of the 117 patients complained of cessation of regular menstruations or amenorrhea, 65.8% of abnormal uterine bleeding. Follicular cysts disappeard in cycle-1 for 98 (83.8%) and in cycle-2 for 117 (100%) patients. AMH decreased with age, significantly different between the three groups. Total testosterone in group-1 and 2 was significantly higher than in group-3. In total AMH had a negative correlation with age and E2 (p &lt; 0.01) and positive correlation with total testosterone (p &lt; 0.05). Conclusions: Assessing ovarian reserve with follicular cysts and menstrual disorders is important because often pointing to DOR. The overall prevalence of DOR was high; even young women (&lt;40 years) with follicular cysts and menstrual disorders had a low level of AMH. So AMH can be used as a marker to define DOR with higher sensitivity than other markers like FSH and E2. Primarily, these results only apply to Chinese women and should be confirmed in further studies. abstract_id: PUBMED:35072221 The value of LH supplementation in young women with diminished ovarian reserve treated with GnRH Antagonist Protocol for ovarian hyperstimulation in ICSI-cycles. Objectives: To compare the success of two controlled ovarian hyperstimulation protocols; rFSH + hp-hMG with only rFSH in the GnRH antagonist protocol in diminished ovarian reserve under 35 years of age. Material And Methods: Data from January 2015 to June 2019 were abstracted from the hospital records of IVF Clinic. The women younger than 35 years of age who were diagnosed as diminished ovarian reserve and underwent standard GnRH antagonist protocol were included. Patients in Group-1 underwent controlled ovarian stimulation with rFSH alone and Group-2 with rFSH in combination with hp-hMG. Patients in both groups were divided into three subgroups according to their antral follicle count at Day 3: &lt; 4 (a), 4-6 (b), and 7-10 (c). Demographic features and IVF outcomes of the patients were extracted. Results: Total number of retrieved oocytes, was higher in Group-1 than Group-2 (6.5 ± 2.1 vs 5.5 ± 2.3, respectively, p &lt; 0.001). However, there were no significant differences between the two groups in terms of clinical pregnancy rate, implantation rate, miscarriage rate and live birth rate. Although the main study outcome parameters did not show significant difference between Group-1a and Group-2a, the number of mature oocytes (5 ± 2.8 vs 1.8 ± 1.2, respectively, p = 0.006) was higher in Group-1a. Conclusions: We observed no beneficial effect of LH supplementation during IVF for the treatment of women under 35 years old with diminished ovarian reserve in the first treatment cycle when compared with rFSH only in the antagonist protocols. Answer: Yes, there is evidence suggesting a relationship between cardiovascular risk markers and young women with diminished ovarian reserve (DOR). Studies have found that cardiovascular disease (CVD) risk markers such as insulin resistance (HOMA-IR), C-reactive protein (CRP), triglycerides (TG), and low-density lipoprotein (LDL) levels were higher, while high-density lipoprotein (HDL) levels were lower in young women with DOR compared to those with normal ovarian reserve (NOR) (PUBMED:24915163; PUBMED:26613901). Additionally, low antimullerian hormone (AMH) levels, which are indicative of DOR, were associated with these elevated CVD risk markers (PUBMED:26613901). These findings suggest that young women with DOR may have an increased risk of cardiovascular disease, and further research is needed to fully understand the nature of this link (PUBMED:24915163; PUBMED:26613901).
Instruction: Parents' beliefs about condoms and oral contraceptives: are they medically accurate? Abstracts: abstract_id: PUBMED:15136207 Parents' beliefs about condoms and oral contraceptives: are they medically accurate? Context: Parents are encouraged to be the primary sex educators for their children; however, little is known about the accuracy of parents' views about condoms and oral contraceptives. Methods: Telephone surveys using validated measures provided data on beliefs about the effectiveness, safety and usability of condoms and the pill among 1,069 parents of 13-17-year-olds in Minnesota and Wisconsin in 2002. Pearson chi-square tests and multivariate logistic regression models were used to compare beliefs according to sex, age, race, religion, education, income and political orientation. Results: Substantial proportions of parents underestimated the effectiveness of condoms for preventing pregnancy and sexually transmitted diseases (STDs). Only 47% believed that condoms are very effective for STD prevention, and 40% for pregnancy prevention. Fifty-two percent thought that pill use prevents pregnancy almost all the time; 39% thought that the pill is very safe. Approximately one-quarter of parents thought that most teenagers are capable of using condoms correctly; almost four in 10 thought that most teenagers can use the pill correctly. Fathers tended to have more accurate views about condoms than mothers did; mothers' views of the pill were generally more accurate than fathers'. Whites were more likely than nonwhites to hold accurate beliefs about the pill's safety and effectiveness; conservatives were less likely than liberals to hold accurate views about the effectiveness of condoms. Conclusion: Campaigns encouraging parents to talk with their teenagers about sexuality should provide parents with medically accurate information on the effectiveness, safety and usability of condoms and the pill. abstract_id: PUBMED:20649380 Parental beliefs about cause and course of their child's autism and outcomes of their beliefs: a review of the literature. Background: This article provides a review of the literature on beliefs that parents of children with autism hold, with a focus on their beliefs on the cause and course of the disorder. Research on the outcomes of their beliefs also was reviewed. Methods: Medline, PsychInfo, Nursing@Ovid and PubMed were searched from 1995 through 2009 using the keywords autism, autistic disorder, beliefs, culture, parents, attitudes, and perceptions. Additional articles were identified through Google Scholar and from references in related articles. Thirteen articles were retrieved and reviewed. Results: It was found in the review that parents hold a wide variety of beliefs about the cause of their child's autism, including genetic factors, events surrounding the child's birth, and environmental influences in the early childhood period. Some parents continue to attribute their child's autism to immunizations, although more recent studies suggest the frequency may be decreasing. Some parents are pessimistic about their child's future while others are hopeful that new strategies will be developed. Some trust that society will become more accepting of their child's idiosyncrasies. Parents' beliefs about the cause of their child's autism have been found to have an impact on decisions regarding future health care, family planning, and maternal mental health. The link between parental beliefs and their choices for interventions has not yet been empirically explored. Conclusions: Research on the impact of cultural beliefs specific to autism is very limited, although studies focusing on other developmental disorders suggest that it is influential. The importance of exploring parental beliefs during the process of treatment planning is discussed. abstract_id: PUBMED:34727116 Parents' perceived knowledge and beliefs on congenital malformations and their causes in the Amhara region, Ethiopia. A qualitative study. Introduction: Knowledge and beliefs of people on congenital malformations and their causes differ from society to society. As a result, there is a paucity of understanding community perceived knowledge and beliefs towards congenital malformations and their risk factors among children's parents. Therefore, I sought to identify perceived knowledge and beliefs of parents on congenital malformations and their causes. Methods: An in-depth discussion and interview were carried out on purposively selected forty participants (women and men) in the Amhara region, Ethiopia. The data were collected from June to July, 2015. Semi-structured guiding topics/questions were used during the discussions and in-depth interviews. Note and audio records were taken while the participants discussed the topics. After the discussions and in-depth interviews were completed, the transcripts were read repeatedly to understand the participant's words, phrases, ideas, and concepts. Then notes were taken to combine pieces of similar transcripts. I have employed thematic framework analysis. The relevant transcripts were scrutinized, labeled and coded manually based on their relevance to the objective of the study. Then the coded transcripts were determined and categorized according to the type of thematic variables. Results: The participants responded on three aspects of lived experience, perceived knowledge and beliefs on congenital malformations and their causes. Nearly half of the participants' beliefs on the causes of congenital malformations were related to sin, contraceptive pills, un-prescribed drugs/medication use, and fertilizers (that is eating crops grown by using fertilizers). Almost all said that raising a child with a major congenital malformation was very difficult. About half of the participants' belief on congenital malformations were traditional and resulted from poor awareness. Conclusion: The findings of the present study highlight the challenges and impacts of congenital malformations on parents who had children with and without congenital malformation. Lived experience, perceived knowledge and beliefs of children parents on congenital malformation and their causes can be helpful information for designing preventive actions. Hence, planning a preventive strategy and providing health education on congenital malformations and their causes for children parents are very necessary. abstract_id: PUBMED:32747967 Exploring Knowledge, Beliefs, and Attitudes about Teen Pregnancy among Latino Parents in Arkansas. Objectives: Arkansas has the highest incidence of teen pregnancy in 15- to 19-year-olds in the United States, and Latinas remain one of the cultural groups that are most at risk of becoming adolescent mothers. Teen mothers and their children are more likely to face poor socioeconomic conditions and negative health sequelae that perpetuate the cycle of poverty. Tailored interventions meant for families, communities, and/or churches should address both abstinence and other types of contraception to educate young people how to stay healthy, prevent unwanted pregnancy, and empower them to make informed decisions. To develop effective educational interventions, it is essential to understand the current knowledge, beliefs, and attitudes toward teen pregnancy among Latino parents in Arkansas. Methods: Adult Latino parents were surveyed at two clinical sites, one church and one school in Little Rock between January 16, 2019 and February 23, 2019. The survey instrument was conducted in Spanish and included questions on demographics, knowledge, attitudes, and beliefs about teenage pregnancy. Response frequencies, percentages and descriptive statistics were calculated for the dataset. Results: A total of 181 individuals completed the survey. Participants almost unanimously agreed with the statement that it is important to talk about sex with their children for their sexual health development. Nearly all respondents, 96.1%, believe that it is important to prevent teenage pregnancy. Most respondents agreed that they speak with their sons and daughters differently on the topic of sexual health. In our sample, only 17.8% of parents believed that abstinence-only education should be the primary focus of reproductive health education. "My family values" was most frequently cited as the predominant factor in shaping participants' beliefs about teen pregnancy, followed by a desire for their child to attend college or be economically stable before having a child. Conclusions: Our findings indicate that based on attitudes toward sexual health, interventions should be tailored to certain groups based on children's sex and age and to parents who had children as teens themselves. Educational materials should emphasize the strengths of Latino culture, such as family values and desire for children to attain a college degree and economic stability before bearing children. abstract_id: PUBMED:37722252 Physician beliefs about abortion safety and their participation in abortion care. Objective: To document physicians' beliefs about abortion safety and the associations between these beliefs and physician support for, referral for, and participation in abortion care. Methods: In a 2019 survey at the University of Wisconsin School of Medicine and Public Health, we assessed physicians' abortion attitudes, beliefs, and practices (N = 893). We conducted bivariate analyses followed by logistic regression to document relationships between physician beliefs about abortion safety and their support for, referral to, and participation in abortion care. Results: Four-in-five physicians (78%, n = 690) believed that abortion is very or extremely safe. Medical specialty (Obstetrics-Gynecology vs. other; adjusted odds ratio [aOR] = 10.58, 95% CI: 1.41-79.56), educational exposure to abortion (aOR = 1.43, 95% CI: 1.02-2.01), and religiosity (aOR = 0.59, 95% CI: 0.41-0.85) were associated with physicians' beliefs about the safety of abortion. Providers who believed that abortion was very/extremely safe were more likely to support medication (aOR = 2.99, 95% CI: 1.93-4.65) and procedural abortion (aOR = 3.56, 95% CI: 2.31-5.50) and refer patients for abortion care (aOR = 3.14, 95% CI: 1.90-5.01). Conclusion: Although abortions are associated with extremely few adverse events, a sizable portion of surveyed physicians had incorrect perceptions of the safety of abortion. These beliefs were associated with decreased support and referrals for abortion care. Educational exposure to abortion is associated with more accurate assessments of abortion safety, underscoring the importance of training in this area. Considering the current abortion policy landscape, it is imperative for physicians to hold accurate knowledge about abortion so they can provide comprehensive counseling and, when indicated, referrals for safe and legal care. abstract_id: PUBMED:16730607 The influence of individual characteristics and contraceptive beliefs on parent-teen sexual communications: a structural model. Purpose: To explore relationships between parent and teen demographic characteristics, parent beliefs about the effectiveness, safety and usability of condoms and oral contraceptives, and parent-teen sexual communication. Methods: One thousand parents of 13-17-year-olds were surveyed. Structural equation modeling was used to test direct and indirect relationships between parent and teen characteristics, parent effectiveness beliefs, and amount of parent-teen communication about the negative consequences of sex and where to obtain birth control. Results: Parent effectiveness beliefs were directly related to parent-teen communication levels. Minority, religious, politically conservative, and low income parents reported lower effectiveness beliefs than other parent groups. Low income, minority parents reported more discussion with their teens about the negative consequences of sex and where to obtain birth control than high income, White parents. Politically conservative, religious parents reported more discussion with their teen about the negative consequences of sex than their liberal and nonreligious counterparts. In general, nonreligious parents reported more discussion about where to obtain birth control than religious parents. Parents were less likely to talk with males, younger teens, and teens not believed to be romantically involved. Parent and teen gender interacted to predict the amount of parent-teen communication. Conclusions: Parent beliefs about condom and oral contraceptive efficacy, safety and usability related directly to amount of parent-teen sexual communication. Parent beliefs and communication levels varied across a number of demographic groups. Educating parent groups may result in more frequent and accurate sexual communications with their teenage children. abstract_id: PUBMED:36994242 Women's beliefs about methods and contraceptive discontinuation: Results from a prospective study from Nairobi and Homa Bay counties in Kenya. Background: Rates of contraceptive discontinuation are high in many low and middle countries contributing to unmet need for contraception and other adverse reproductive health outcomes. Few studies have investigated how women's beliefs about methods and strength of fertility preferences affect discontinuation rates. This study examines this question using primary data collected in Nairobi and Homa Bay counties in Kenya. Methods: We used data from two rounds of a longitudinal study of married women ages 15-39 years (2,812 and 2,424 women from Nairobi and Homa Bay respectively at round 1). Information on fertility preferences, past and current contraceptive behavior, and method-related beliefs about six modern contraceptive methods were collected, along with a monthly calendar of contraceptive use between the two interviews. The analysis focused on discontinuation of the two most commonly used methods in both sites, injectables and implants. We carry out competing risk survival analysis to identify which method related beliefs predict discontinuation among women using at the first round. Results: The percentages of episodes discontinued in the 12 months between the two rounds was 36%, with a higher rate of discontinuation in Homa Bay (43%) than in the Nairobi slums (32%) and higher for injectables than implants. Method related concerns and side effects were the major self-reported reasons for discontinuation in both sites. The competing risk survival analysis showed that the probability of method related discontinuation of implants and injectables was significantly lower among respondents who believed that the methods do not cause serious health problems (SHR = 0.78, 95% CI: 0.62-0.98), do not interfere with regular menses (SHR = 0.76, 95% CI: 0.61-0.95) and do not cause unpleasant side effects (SHR = 0.72, 95% CI 0.56-0.89). By contrast, there were no net effects of three method related beliefs that are commonly cited as obstacles to contraceptive use in African societies: safety for long-term use, ability to have children after stopping the method, and the approval of the husband. Conclusion: This study is unique in its examination of the effect of method-specific beliefs on subsequent discontinuation for a method-related reason, using a longitudinal design. The single most important result is that concerns about serious health problems, which are largely unjustified and only moderately associated with beliefs about side effects, are a significant influence on discontinuation. The negative results for other beliefs show that the determinants of discontinuation differ from the determinants of method adoption and method choice. abstract_id: PUBMED:31709473 Early diagnosis of ataxia telangiectasia in the neonatal phase: a parents' perspective. Ataxia telangiectasia (A-T) is a severe neurodegenerative disorder with variable immunodeficiency. Together with the Dutch A-T community, we investigated the opinion of A-T parents on an early A-T diagnosis in the asymptomatic phase of the disease. During an annual national meeting for A-T patients and families, the topic of an early A-T diagnosis was discussed in relation to the recent introduction of neonatal screening for severe combined immunodeficiency (SCID) in the Netherlands. Based on the discussion, individual arguments were identified and processed into a questionnaire, which was sent out to 64 A-T parents (32 families). Arguments included were insecurity to diagnosis, possible medical advantages, appropriate genetic counseling and family planning, loss of "golden" year(s), and early cancer screening for parents. The response rate was 55% (n = 35 parents). Twenty-six (74%) parents felt that the advantages of an early diagnosis outweighed the disadvantages, five parents thought that the disadvantages would outweigh the advantages (14%), and four parents did not indicate a preference.Conclusion: The majority of parents of a child with A-T would have preferred an early diagnosis during the asymptomatic phase of the disease, because the uncertainty during the diagnostic process had had a major impact on their lives. In addition, the knowledge of being carriers of an ATM gene mutation influenced decisions about family planning. Parents who opposed against an early diagnosis emphasized the joy of having a seemingly healthy child until diagnosis.What is Known:• Ataxia telangiectasia (A-T) is a devastating DNA repair disorder with a huge impact on quality of life of patients and their parents.• Patients with A-T may incidentally be identified at birth as the consequence of neonatal screening for severe combined immunodeficiency (SCID).What is New:• The majority of Dutch parents of A-T patients (74%) would have preferred an early diagnosis of their child in the asymptomatic phase of the disease.• Major arguments for an early A-T diagnosis were (1) the experienced insecurity in diagnostic trajectories and its impact on families and (2) the knowledge of being ATM mutation carriers when deciding about family planning. An argument against an early diagnosis is losing the joy of having a seemingly healthy child until diagnosis. abstract_id: PUBMED:25058120 Starting a Family: The Experience of Parents With Cleft Lip and/or Palate. Background: One of the key challenges facing young adults with cleft lip and/or palate is making decisions about starting a family, because there is an increased likelihood of their own child being diagnosed with cleft lip and/or palate. Should this occur, a second key challenge is how to deal with their child's diagnosis and subsequent treatment. Objective: To explore the views, experiences and possible support needs of this unique group of parents in order to inform the services provided by nonspecialist Health Professionals, cleft teams, and genetic counselors. Design: Individual telephone interviews eliciting qualitative data. Results: Qualitative thematic analysis identified five themes. Accessing accurate information and appropriate support around heritability presented a significant challenge. Parents described feelings of responsibility and distress at their child's diagnosis, as well as a number of factors that had helped or hindered their adjustment. Parents also described ways in which their own experiences had impacted their parenting style and how becoming a parent had changed the way they felt about their own cleft. Conclusions: Young adults' understanding of what it means to grow up with cleft lip and/or palate may impact their decision to start a family and their experiences of having children. Possible methods of supporting prospective parents through this potentially difficult stage will be discussed. abstract_id: PUBMED:28836984 Effects of parents training on parents' knowledge and attitudes about adolescent sexuality in Accra Metropolis, Ghana. Background: Attitudes of parents about discussing sexuality issues with adolescents may or may not be influenced by their level of knowledge on such issues. This study seeks to examine parents' training and its effects on parent knowledge and attitudes about sexuality in Accra Metropolis, Ghana. Methods: This was an intervention study consisting of 145 parents who were recruited through their wards from 12 public junior high schools in Accra Metropolis. Parents were randomized equally into intervention and control groups and the intervention group received a 4 weeks training on adolescent sexuality topics. At pre-intervention and 3 months after parent training, parents answered questions on knowledge and attitudes about adolescent sexuality. Changes in baseline and follow-up within and between groups were compared using the difference- in-difference model and logistic regression. Results: The ages of the parents ranged from 26 to 63 years and 44.1% of them completed middle school. There were 69.9% and 59.7% mothers in the intervention and control groups respectively. At pre-intervention, 21.9% of parents in the intervention group had very good knowledge but this increased significantly to 60% three months after the training. Knowledge about sexuality increased to a lesser degree from 18.1% to 34.7% in the control group. Parents' positive attitudes towards adolescents' sexuality increased by 50% in the intervention group compared to 20% in the control group. There were significant differences in knowledge on adolescent sexuality as parents in the intervention group had a greater positive effect than parents in the control group (28.7%, p-value = &lt;0.001). Regarding attitudes of parents towards allowing adolescents to use family planning services (FPS), there was a greater positive effect on parents in the intervention group compared to those in the control group (37.4%; p- value = &lt;0.001). Being part of the intervention group increased the odds of parent knowledge on adolescent sexuality by 16-fold (p-value = &lt;0.001), whilst being in the intervention group increased the likelihood of parents' attitudes towards allowing adolescents to use FPS by four fold (p-value = 0.039). Conclusion: Training parents for a relatively short period of time can positively impact parents' knowledge and attitudes about adolescent sexuality. This may have beneficial effects on adolescent reproductive health. Answer: The research indicates that parents' beliefs about condoms and oral contraceptives are not always medically accurate. A study involving telephone surveys of 1,069 parents of 13-17-year-olds in Minnesota and Wisconsin found that substantial proportions of parents underestimated the effectiveness of condoms for preventing pregnancy and sexually transmitted diseases (STDs). Only 47% believed that condoms are very effective for STD prevention, and 40% for pregnancy prevention. Additionally, 52% thought that pill use prevents pregnancy almost all the time, and 39% thought that the pill is very safe. The study also revealed that fathers tended to have more accurate views about condoms than mothers, while mothers' views of the pill were generally more accurate than fathers'. Moreover, whites were more likely than nonwhites to hold accurate beliefs about the pill's safety and effectiveness, and conservatives were less likely than liberals to hold accurate views about the effectiveness of condoms (PUBMED:15136207). These findings suggest that there is a need for campaigns encouraging parents to talk with their teenagers about sexuality to provide parents with medically accurate information on the effectiveness, safety, and usability of condoms and the pill. This is important because parent beliefs about condom and oral contraceptive efficacy, safety, and usability are directly related to the amount of parent-teen sexual communication. Educating parent groups may result in more frequent and accurate sexual communications with their teenage children (PUBMED:16730607). In conclusion, while parents are encouraged to be the primary sex educators for their children, the evidence suggests that their beliefs about condoms and oral contraceptives may not always align with medical accuracy, highlighting the importance of providing medically accurate information to parents.
Instruction: Variation in vulnerability to tuberculosis in America today: random, or legacies of different ancestral epidemics? Abstracts: abstract_id: PUBMED:11573891 Variation in vulnerability to tuberculosis in America today: random, or legacies of different ancestral epidemics? Objective: To test the hypothesis that vulnerability to infection by Mycobacterium tuberculosis is the norm for mankind but innate resistance to the infection is common in the USA today as a legacy of TB epidemics survived by one's ancestors. Measurements: A reaction of 10 mm or more to a tuberculin skin test (TST) was used to determine the prevalence of infection following community outbreaks of the disease. For further information, a survey was conducted of prevalence of tuberculin reactivity among health-care workers with frequent exposures to tuberculosis patients during bronchoscopy and sputum collection. Results: Sixty per cent of African Americans exposed in 26 community outbreaks were TST positive compared to only 40% of whites following comparable exposures. Similarly only 56 (41.2%) of 136 heavily exposed white health-care workers were TST positive. Conclusions: When considered in conjunction with contrasting ancestral histories of exposure to TB, these observations suggest a difference in frequency of an innate ability to respond protectively to M. tuberculosis and to mount an effective mechanism to destroy it. This is best explained as a process of natural selection among largely separate heavily exposed ancestors. Several recent reports have identified genetically mediated mechanisms of immunity that could be involved with reduced vulnerability to tuberculosis. An understanding of these processes could aid in the development of immunomodulatory agents or vaccines. abstract_id: PUBMED:18992848 Ancestral Mycobacterium tuberculosis genotypes in India: implications for TB control programmes. It has been a decade since the genome sequence of Mycobacterium tuberculosis was unraveled. The fruits of genomic technologies are yet to reach high burden countries such as India, where tuberculosis (TB) kills a huge number of patients. Paradoxically, despite increased cases of human immunodeficiency virus (HIV) infection and diabetes mellitus, TB cure rates in India have been consistently improving during the DOTS program. Does this mean that the underlying TB bacilli are somehow 'co-operating' with the TB control program implementers? Genotypic analyses of the tubercle bacilli have identified a predominance of ancestral strains of M. tuberculosis in major parts of India in addition to various other lineages of modern evolutionary descent. Virulence and dissemination potentials of these ancestral strains are speculated to be 'low' as compared to the other 'aggressive' strains such as Beijing and LAM, which are expected to be more widespread in future, also in synergy with HIV and diabetes epidemics. We discuss the implications of the high prevalence of ancestral strains on TB control in India. It appears that despite a hypothetical 'ancestral advantage', future dynamics of tubercle bacilli in the back drop of surging HIV and diabetes incidences may pose a major healthcare problem in India in the years to come. abstract_id: PUBMED:30458842 On the reconstruction of the ancestral bacterial genomes in genus Mycobacterium and Brucella. Background: To reconstruct the evolution history of DNA sequences, novel models of increasing complexity regarding the number of free parameters taken into account in the sequence evolution, as well as faster and more accurate algorithms, and statistical and computational methods, are needed. More particularly, as the principal forces that have led to major structural changes are genome rearrangements (such as translocations, fusions, and so on), understanding their underlying mechanisms, among other things via the ancestral genome reconstruction, are essential. In this problem, since finding the ancestral genomes that minimize the number of rearrangements in a phylogenetic tree is known to be NP-hard for three or more genomes, heuristics are commonly chosen to obtain approximations of the exact solution. The aim of this work is to show that another path is possible. Results: Various algorithms and software already deal with the difficult nature of the problem of reconstruction of the ancestral genome, but they do not function with precision, in particular when indels or single nucleotide polymorphisms fall into repeated sequences. In this article, and despite the theoretical NP-hardness of the ancestral reconstruction problem, we show that an exact solution can be found in practice in various cases, encompassing organelles and some bacteria. A practical example proves that an accurate reconstruction, which also allows to highlight homoplasic events, can be obtained. This is illustrated by the reconstruction of ancestral genomes of two bacterial pathogens, belonging in Mycobacterium and Brucella genera. Conclusions: By putting together automatically reconstructed ancestral regions with handmade ones for problematic cases, we show that an accurate reconstruction of ancestors of the Brucella genus and of the Mycobacterium tuberculosis complex is possible. By doing so, we are able to investigate the evolutionary history of each pathogen by computing their common ancestors. They can be investigated extensively, by studying the gene content evolution over time, the resistance acquisition, and the impacts of mobile elements on genome plasticity. abstract_id: PUBMED:26421237 Modeling historical tuberculosis epidemics among Canadian First Nations: effects of malnutrition and genetic variation. Late 19th century epidemics of tuberculosis (TB) in Western Canadian First Nations resulted in peak TB mortality rates more than six times the highest rates recorded in Europe. Using a mathematical modeling approach and historical TB mortality time series, we investigate potential causes of high TB mortality and rapid epidemic decline in First Nations from 1885 to 1940. We explore two potential causes of dramatic epidemic dynamics observed in this setting: first, we explore effects of famine prior to 1900 on both TB and population dynamics. Malnutrition is recognized as an individual-level risk factor for TB progression and mortality; its population-level effects on TB epidemics have not been explored previously. Second, we explore effects of heterogeneity in susceptibility to TB in two ways: modeling heterogeneity in susceptibility to infection, and heterogeneity in risk of developing disease once infected. Our results indicate that models lacking famine-related changes in TB parameters or heterogeneity result in an implausibly poor fit to both the TB mortality time series and census data; the inclusion of these features allows for the characteristic decline and rise in population observed in First Nations during this time period and confers improved fits to TB mortality data. abstract_id: PUBMED:22083439 A worldwide investigation of tuberculosis epidemics. We analyse the tuberculosis (TB) epidemics of 211 countries with a view to proposing more efficient and targeted TB control strategies. Countries are classified by how their TB case notification rates have evolved over time and the age distribution of those suffering from active TB disease in 2008. Further analysis of key statistics associated with each of the countries shows the impact of different indicators. As expected, HIV is a key driver of TB epidemics and affects their age-distribution and their scale. The level of development of a country and its wealth also vary with the shape and scale of a country's TB epidemic. Immigration has an influence on the shape of TB epidemics, which is particularly pronounced in highly developed countries with low levels of TB disease in the native population. We conclude by proposing how the TB control programme in each country analysed should prioritise its efforts. abstract_id: PUBMED:25069354 Nurses in the face of the Great War's epidemics In 1914, nurses were still considered as volunteers. By 1918, given more efficient training, they had acquired legitimacy among the French public. Their skills and their professionalism were appreciated and recognised, notably thanks to the crucial role they played in the fight against the tuberculosis and Spanish flu epidemics. abstract_id: PUBMED:22945548 High levels of vulnerability and anticipated stigma reduce the impetus for tuberculosis diagnosis in Cape Town, South Africa. Prolonged diagnostic and treatment delays, particularly in settings experiencing concomitant human immunodeficiency virus (HIV) and tuberculosis (TB) epidemics, undermine global TB control efforts. Current TB control policy in South Africa, as organized through the National TB Control Programme (NTP), relies on the voluntary presentation of TB suspects to local clinics for diagnosis, i.e. passive case finding (PCF). In 2005 a participatory study suggested that popular interpretation and perception of TB within eight South African township sites in and around Cape Town, all carrying a high burden of HIV and undiagnosed TB, undermine PCF. Both people's association of TB with dirt and squalor, and the anticipation of HIV-related stigma, combine to impede TB diagnosis. Respondents conveyed TB as unavoidable; this perception is expressed in the context of vulnerability where so much-including dirt-is largely beyond the control of local residents. The lack of control has a disempowering effect, reducing the drive for seeking treatment. In addition, low confidence in patient confidentiality and anticipated HIV-related stigma act as direct deterrents to TB diagnosis and treatment. In conclusion, we wish to draw attention to high levels of disease stigma and vulnerability, and how these undermine PCF. Public health interventions that wish to improve case detection should aim to: (1) emphasize how early treatment improves outcome and can curb ongoing transmission; (2) combat a sense of communal vulnerability to TB; (3) address anticipated HIV-TB stigma; and (4) improve the quality of care provided at local diagnostic services, addressing low levels of patient confidentiality. abstract_id: PUBMED:35762734 Understanding drivers of phylogenetic clustering and terminal branch lengths distribution in epidemics of Mycobacterium tuberculosis. Detecting factors associated with transmission is important to understand disease epidemics, and to design effective public health measures. Clustering and terminal branch lengths (TBL) analyses are commonly applied to genomic data sets of Mycobacterium tuberculosis (MTB) to identify sub-populations with increased transmission. Here, I used a simulation-based approach to investigate what epidemiological processes influence the results of clustering and TBL analyses, and whether differences in transmission can be detected with these methods. I simulated MTB epidemics with different dynamics (latency, infectious period, transmission rate, basic reproductive number R0, sampling proportion, sampling period, and molecular clock), and found that all considered factors, except for the length of the infectious period, affect the results of clustering and TBL distributions. I show that standard interpretations of this type of analyses ignore two main caveats: (1) clustering results and TBL depend on many factors that have nothing to do with transmission, (2) clustering results and TBL do not tell anything about whether the epidemic is stable, growing, or shrinking, unless all the additional parameters that influence these metrics are known, or assumed identical between sub-populations. An important consequence is that the optimal SNP threshold for clustering depends on the epidemiological conditions, and that sub-populations with different epidemiological characteristics should not be analyzed with the same threshold. Finally, these results suggest that different clustering rates and TBL distributions, that are found consistently between different MTB lineages, are probably due to intrinsic bacterial factors, and do not indicate necessarily differences in transmission or evolutionary success. abstract_id: PUBMED:29081357 Genetic diversity of Mycobacterium tuberculosis from Pará, Brazil, reveals a higher frequency of ancestral strains than previously reported in South America. There is only scarce information available on genotypic diversity of the Mycobacterium tuberculosis complex (MTBC) clinical isolates circulating in the Northern part of Brazil, a relatively neglected region regarding research on tuberculosis. We therefore characterized 980 MTBC clinical isolates from the state of Pará, by spoligotyping and data was compared with patterns from around the world, besides analyzing drug susceptibility, and collecting sociodemographic data. We also performed 24 loci MIRU-VNTR typing to evaluate phylogenetic inferences among the East-African-Indian (EAI) lineage strains. The Geographic Information System analyses were performed to generate a descriptive visualization of MTBC strain distribution in the region. A total of 249 different spoligopatterns primarily belonging to evolutionary recent Euro-American lineages, as well as Central-Asian, Manu and ancestral EAI lineages, were identified, in addition to strains with reportedly unknown lineage signatures. The most frequent lineages were Latin American Mediterranean, T and Haarlem. Interestingly, EAI lineage strains were found in a significantly higher proportion in comparison with previous studies from South America. Regarding EAI lineage, the absence of spacers 4-9 and 23-24 co-related to 24 loci MIRU-VNTRs may suggest a close evolutionary relationship between such strains in Pará and those prevalent in Mozambique, which might have contributed to the genetic diversity of MTBC strains in this region. abstract_id: PUBMED:35880398 Population-based sequencing of Mycobacterium tuberculosis reveals how current population dynamics are shaped by past epidemics. Transmission is a driver of tuberculosis (TB) epidemics in high-burden regions, with assumed negligible impact in low-burden areas. However, we still lack a full characterization of transmission dynamics in settings with similar and different burdens. Genomic epidemiology can greatly help to quantify transmission, but the lack of whole genome sequencing population-based studies has hampered its application. Here, we generate a population-based dataset from Valencia region and compare it with available datasets from different TB-burden settings to reveal transmission dynamics heterogeneity and its public health implications. We sequenced the whole genome of 785 Mycobacterium tuberculosis strains and linked genomes to patient epidemiological data. We use a pairwise distance clustering approach and phylodynamic methods to characterize transmission events over the last 150 years, in different TB-burden regions. Our results underscore significant differences in transmission between low-burden TB settings, i.e., clustering in Valencia region is higher (47.4%) than in Oxfordshire (27%), and similar to a high-burden area as Malawi (49.8%). By modeling times of the transmission links, we observed that settings with high transmission rate are associated with decades of uninterrupted transmission, irrespective of burden. Together, our results reveal that burden and transmission are not necessarily linked due to the role of past epidemics in the ongoing TB incidence, and highlight the need for in-depth characterization of transmission dynamics and specifically tailored TB control strategies. Answer: The variation in vulnerability to tuberculosis (TB) in America today is not random but may be influenced by legacies of different ancestral epidemics. Studies have shown that innate resistance to Mycobacterium tuberculosis infection is more common in the USA today, potentially as a legacy of TB epidemics survived by one's ancestors (PUBMED:11573891). This hypothesis is supported by the observation that 60% of African Americans exposed in community outbreaks were tuberculin skin test (TST) positive compared to only 40% of whites following comparable exposures, suggesting a difference in the frequency of an innate ability to respond protectively to M. tuberculosis (PUBMED:11573891). Additionally, the prevalence of ancestral strains of M. tuberculosis in certain populations, such as those found in major parts of India, may have implications for TB control programs. These ancestral strains are speculated to have lower virulence and dissemination potentials compared to more aggressive strains, which could influence the dynamics of TB epidemics in the context of other health challenges like HIV and diabetes (PUBMED:18992848). Moreover, the reconstruction of ancestral bacterial genomes in the genus Mycobacterium can provide insights into the evolutionary history of pathogens and the gene content evolution over time, which may help understand resistance acquisition and impacts of mobile elements on genome plasticity (PUBMED:30458842). Historical TB epidemics, such as those among Canadian First Nations, have also been modeled to explore the effects of malnutrition and genetic variation on TB mortality rates, which were significantly higher than those recorded in Europe (PUBMED:26421237). In summary, the variation in vulnerability to TB in America today appears to be influenced by a complex interplay of historical exposure to TB epidemics, genetic factors, and possibly other social and environmental determinants, rather than being a random occurrence.
Instruction: Is reactivation of toxoplasmic retinochoroiditis associated to increased annual rainfall? Abstracts: abstract_id: PUBMED:24225023 Is reactivation of toxoplasmic retinochoroiditis associated to increased annual rainfall? Background: Reactivation of toxoplasmic retinochoroiditis is the most frequent form of uveitis in Misiones, Argentina. Fluctuations in the number of patients consulting with this type of uveitis were detected during the last decade. Since the province was consecutively exposed to rainy and dry periods over the last years, we decided to explore whether a relationship between reactivation of toxoplasmic retinochoroiditis and rain might be established according to the data registered during the 2004-2010 period. Results: The frequency of toxoplasmic reactivation episodes increases when precipitation increases (mostly in second and fourth trimesters of each year). Analysis of the independent variables demonstrates that precipitation is a significant predictor of the frequency of reactivation episodes. Although registered toxoplasmic reactivations were more frequent during the third trimester of the year, the association between the third trimester and the reactivation episodes did not reach statistical significance. Conclusion: Prolonged and intense rainfall periods were significantly associated with the reactivation of toxoplasmic retinochoroiditis. Changes promoted by this climatic condition on both the parasite survival in the soil as well as a putative effect on the host immune response due to other comorbidities are discussed. abstract_id: PUBMED:33324033 Intravitreal Clindamycin as First-Line Therapy for Toxoplasmic Retinochoroiditis: A Case Series. Purpose: To report a case series of four ocular toxoplasmosis patients who received intravitreal clindamycin as first-line treatment. Materials And Methods: Retrospective interventional case series. Results: Four (two females and two males) patients were diagnosed with active primary toxoplasmic retinochoroiditis based on their clinical presentation. All patients received intravitreal clindamycin 1mg/0.1mL as first-line therapy (two injections with 1-week interval). Oral corticosteroid 1mg/kg/day was also given in a tapering fashion over 4-6 weeks. A remarkable response was seen in all cases with improved visual acuity, sharpening of the lesion borders, and resolution of inflammation within 4-6 weeks. No recurrence or reactivation was noted until 2 years follow-up. Conclusion: Intravitreal clindamycin, combined with oral corticosteroids, can be considered an effective and safe first-line therapy for active toxoplasmic retinochoroiditis. It provides the patient a more convenience, safer systemic side effect profile, increased availability, and fewer follow-up visits and hematologic investigations. abstract_id: PUBMED:24820044 Risk of reactivation of toxoplasmic retinitis following intraocular procedures without the use of prophylactic therapy. Background/aims: Toxoplasmic retinochoroiditis is the commonest known cause of posterior uveitis worldwide and reactivation is unpredictable. Based on results from one study, the authors proposed that antitoxoplasmic therapy should be initiated as prophylaxis for intraocular surgery in patients with toxoplasmic scars. The aim of this study is to analyse the risk of toxoplasmic retinochoroiditis reactivation following intraocular procedures. Methods: Retrospective analysis of the medical records of a total of 69 patients who underwent intraocular surgery and presented with toxoplasmic retinochoroiditis scars. Results: No patient received prophylactic antitoxoplasmic therapy. Reactivation following the surgical procedure occurred in four cases, with one at 3 months and the others respectively at 13, 14 and 17 months. Conclusions: Our study shows that intraocular surgery did not result in a significant reactivation rate of toxoplasmic retinochoroiditis in the absence of preoperative prophylactic antitoxoplasmic therapy. abstract_id: PUBMED:32651033 Atypical toxoplasmic retinochoroiditis in patients with malignant hematological diseases. In immunocompromised patients, toxoplasmosis may have atypical presentation with bilateral, extensive or multifocal involvement. We report a case series of atypical toxoplasmic retinocoroiditis in patients with malignant hematological diseases who are usually immunosuppressed. Four patients were diagnosed of atypical toxoplasmic retinochoroiditis, all of them had immunosuppression (100%) and half of them (50%) had received a bone marrow transplant. The polymerase chain reaction for toxoplasma was positive in 75% of cases, and in one case (25%) the diagnosis was made with clinical and serological criteria. One patient presented ocular toxoplasmosis despite being on prophylactic treatment with atovaquone. Patients with atypical ocular toxoplasmosis and hematological diseases are generally immunocompromised, but they do not always have history of a bone marrow transplant. The presentation may be due to a primary infection or a reactivation of the disease. The aqueous humor and/or vitreous polymerase chain reaction allow confirming the diagnosis to perform a proper treatment. abstract_id: PUBMED:34151385 Foveal reorganization after treatment of acute foveal toxoplasmic retinochoroiditis. Purpose: To report a patient with impaired vision due to foveal involvement of toxoplasmic retinochoroiditis, who was successfully treated with intravitreal clindamycin and dexamethasone and oral therapy with azithromycin, trimethoprim-sulfamethoxazole, and prednisolone and led to successful visual and anatomic recovery. Case Presentation: A 32-year-old man presented with three-day history of gradually decreasing visual acuity, redness, pain and photophobia of the right eye. Anterior chamber cellular reaction, vitritis and a white retinochoroiditis patch with adjacent retinal vasculitis in the fovea was suggestive of the toxoplasmic retinochoroiditis. He was treated with intravitreal clindamycin and dexamethasone injection followed by six-week regimen of azithromycin, trimethoprim-sulfamethoxazole, and prednisolone. In serial optical coherence tomography imaging, retinitis patch changed to cavitary foveal destruction. Fovea reorganized gradually, and visual acuity concurrently improved from counting finger 3 m to 20/25. Conclusion: In foveal toxoplasmic retinochoroiditis lesions, timely treatment is associated with retinal reorganization and visual improvement. abstract_id: PUBMED:16885818 Reactivation of ocular toxoplasmosis after laser in situ keratomileusis Laser in situ keratomileusis (LASIK) is a safe and efficient refractive surgical procedure that provides excellent results in most cases. Several complications have been reported, most of them related to the posterior segment of the eye. Although they are quite rare, a growing number of vitreoretinal pathologic conditions after LASIK have been reported. To date no article has reported an inflammatory or infectious disease of the posterior segment after a LASIK procedure. We report a case of reactivation of toxoplasmic chorioretinitis that occurred 5 days after a LASIK procedure. Clinical outcome was spontaneously favorable after 1 month, with no loss of vision. Although a causal effect between LASIK and toxoplasmic chorioretinitis reactivation cannot be proven with a single case report, we stress the importance of dilated fundus examination in LASIK preoperative assessment: our case suggests that in the presence of preoperative toxoplasmic chorioretinitis scars, increased retinal monitoring is required. abstract_id: PUBMED:35568370 Prevalence of Toxoplasmic Retinochoroiditis in an Australian Adult Population: A Community-Based Study. Purpose: Toxoplasmic retinochoroiditis is the most common clinical manifestation of an infection with the protozoan parasite, Toxoplasma gondii. Up to 50% of the human population is estimated to be infected with T. gondii; however, the epidemiology of toxoplasmic retinochoroiditis has not been widely reported. We sought to estimate the prevalence of toxoplasmic retinochoroiditis in Australia using data that were collected as part of the Busselton Healthy Ageing Study. Design: Cross-sectional, community-based, prospective cohort study. Participants: 5020 Australian adults (2264 men and 2756 women; age range, 45-69 years, and median age, 58 years). Methods: Retinal color photographs, centered on the optic disc and macula, were captured using a digital retinal camera after the dilation of the pupils. Three uveitis-subspecialized ophthalmologists assessed each pigmented retinal lesion, and complete concordance of opinion was required to assign a toxoplasmic etiology. Serum T. gondii immunoglobulin (Ig)G levels were measured for those participants with retinal lesions judged to be toxoplasmic retinochoroiditis. Main Outcome Measures: Prevalence of toxoplasmic retinochoroiditis. Results: Eight participants (0.16%) had retinal lesions that were considered to have the characteristic appearance of toxoplasmic retinochoroiditis, plus detectable serum T. gondii IgG, consistent with the diagnosis of toxoplasmic retinochoroiditis. On the assumption that 23.81% of retinal lesions occur at the posterior pole, as reported in a community-based survey conducted in Brazil (Sci Rep. 2021;11:3420), the prevalence of toxoplasmic retinochoroiditis was estimated to be 0.67% or 1 per 149 persons. Conclusions: Toxoplasmic retinochoroiditis is common in Australian adults. Efforts to quantify and address risk factors for human infection with T. gondii are justified. abstract_id: PUBMED:28626822 A prospective randomized trial of azithromycin versus trimethoprim/sulfamethoxazole in treatment of toxoplasmic retinochoroiditis. Purpose: To assess the efficacy of oral azithromycin in the treatment of toxoplasmic retinochoroiditis. Methods: A randomized interventional comparative study was conducted on 14 patients with ocular toxoplasmosis who were treated with oral azithromycin and 13 patients who were treated with oral trimethoprim/sulfamethoxazole for 6-12 weeks. The achievement of treatment criteria in the two groups and lesion size reduction were considered as primary outcome measures. Results: The resolution of inflammatory activity, decrease in the size of retinochoroidal lesions, and final best corrected visual acuity (BCVA) did not differ between the two treatment groups. The lesion size declined significantly in all patients (P = 0.001). There was no significant difference in the reduction of the size of retinal lesions between the two treatment groups (P = 0.17). Within each group, there was a significant improvement in BCVA after treatment; BCVA increased by 0.24 logMAR in the azithromycin group (P = 0.001) and by 0.3 logMAR in the trimethoprim/sulfamethoxazole group (P = 0.001). Conclusions: Drug efficacy in terms of reducing the size of retinal lesions and visual improvement was similar in a regimen of trimethoprim/sulfamethoxazole or azithromycin treatment. Therefore, if confirmed with further studies, therapy with azithromycin seems to be an acceptable alternative for the treatment of ocular toxoplasmosis. abstract_id: PUBMED:30509327 Swept-source optical coherence tomography and optical coherence tomography angiography in acquired toxoplasmic chorioretinitis: a case report. Purpose: To describe swept-source optical coherence tomography and optical coherence tomography angiography retinal changes in a case of acute toxoplasmic chorioretinitis both at the time of diagnosis and after healing. Case Presentation: A 57-year-old white woman suffering from acquired toxoplasmic chorioretinitis underwent swept-source optical coherence tomography and optical coherence tomography angiography both at the time of diagnosis and after healing. In the acute phase of the disease, swept-source optical coherence tomography clearly showed retinal and choroidal involvement in the superficial retina and in the choroidal swelling. Optical coherence tomography angiography showed a complete loss of deep and superficial capillary networks and of choroidal vessels in the area of the inflammation. After healing, swept-source optical coherence tomography showed a retinal thinning of the area involved, with a subversion of retinal layers and no visible change at the choroid level. On the other hand, optical coherence tomography angiography showed the persistence of a vascular occlusion at the retina and choroid level. Conclusion: This is the first case in the optical coherence tomography angiography literature that shows the imaging of toxoplasmic chorioretinal lesions. This case confirms the involvement of the retina and choroid in toxoplasmic uveitis and the disruptive potential of such inflammation. The optical coherence tomography angiography performed after healing showed a persistent ablation of the retina, choriocapillaris, and choroidal vessels. The non-invasive optical coherence tomography angiography imaging technique may have diagnostic and prognostic value in regard to toxoplasmic uveitis. abstract_id: PUBMED:29859747 Toxoplasmic retinochoroiditis caused by Toxoplasma gondii strain ToxoDB#65. Ocular toxoplasmosis, caused by Toxoplasma gondii, is the most common cause of inflammation in the back of the eye. Analysis of the infecting strain may provide information regarding disease behavior and recurrence. Here, we describe clinical and epidemiological data for toxoplasmic retinochoroiditis in two Brazilian women infected by T. gondii strain ToxoDB#65, living in an urban region of São Paulo State, Brazil. Answer: Yes, reactivation of toxoplasmic retinochoroiditis is associated with increased annual rainfall. A study conducted in Misiones, Argentina, where fluctuations in the number of patients with toxoplasmic retinochoroiditis were observed over a decade, found that the frequency of reactivation episodes increases when precipitation increases, particularly in the second and fourth trimesters of each year. Precipitation was identified as a significant predictor of the frequency of reactivation episodes. The study concluded that prolonged and intense rainfall periods were significantly associated with the reactivation of toxoplasmic retinochoroiditis, suggesting that climatic conditions may affect both parasite survival in the soil and potentially the host immune response due to other comorbidities (PUBMED:24225023).
Instruction: Do different mattresses affect the quality of cardiopulmonary resuscitation? Abstracts: abstract_id: PUBMED:14504728 Do different mattresses affect the quality of cardiopulmonary resuscitation? Objective: To determine the effect of different mattresses on cardiopulmonary resuscitation performance and establish whether emergency deflation of an inflatable mattress improves the quality of resuscitation. Design And Setting: Randomised controlled cross-over trial performed in a general ICU PARTICIPANTS: Critical care staff from a general ICU. Interventions: Cardiopulmonary resuscitation on a manikin on the floor or on a bed with a standard foam mattress and inflated and deflated pressure redistributing mattresses. Maximal compression force was measured at different bed heights. Measurements And Results: Compression depth, duty cycle and rate and percentage correct expired air ventilation were recorded on a manikin. Compression depth was significantly lower on the foam (35.2 mm), inflated (37.2 mm) and deflated mattress (39.1 mm) than the floor (44.2 mm). There were no clinically important differences in duty cycle or compression rate. The quality of ventilation was poor on all surfaces. Maximal compression force declined as bed height increased. Conclusions: Resuscitation performance is adversely affected when performed on a bed (irrespective of mattress type) compared to the floor. There were no differences between the inflated and deflated mattresses, although the deflation process did not adversely affect performance. This study does not support the routine deflation of an inflated mattress during resuscitation and questions the potential benefits from using a backboard. The finding that bed height affects maximal compression forces, challenges the recommendation that cardiopulmonary resuscitation be performed with the bed at middle-thigh level and requires further investigation. abstract_id: PUBMED:18467890 Improving cardiopulmonary resuscitation quality to ensure survival. Purpose Of Review: There is correlation between quality of bystander cardiopulmonary resuscitation and patient survival. Recent developments in defibrillator technology enable recording of cardiopulmonary resuscitation quality, and have shown quality of professional cardiopulmonary resuscitation far from guidelines' levels for factors such as chest compression depth and rate, ventilation rate, and pauses in chest compressions. The effects of cardiopulmonary resuscitation quality factors on patient survival are presently under scrutiny. Recent Findings: Factors such as depth of and pauses in chest compressions immediately before defibrillation attempts affect outcome. Both immediate automated feedback on cardiopulmonary resuscitation quality and use of the same quality data during postevent debriefing improve cardiopulmonary resuscitation quality, and the combination appears to improve outcome. The increased awareness of quality problems, particularly unwanted pauses in chest compressions, has caused more emphasis on chest compressions in cardiopulmonary resuscitation protocols including the 2005 Guidelines. There is a growing number of reports of increased survival with these new protocols. Summary: Cardiopulmonary resuscitation quality affects survival after cardiac arrest. Reporting cardiopulmonary resuscitation quality data should be standard in all studies of cardiac arrest as effects of studied interventions can depend on or influence cardiopulmonary resuscitation quality. These data are also valuable in quality improvement processes both in-hospital and out-of-hospital. abstract_id: PUBMED:23399985 Use of backboard and deflation improve quality of chest compression when cardiopulmonary resuscitation is performed on a typical air inflated mattress configuration. No study has examined the effectiveness of backboards and air deflation for achieving adequate chest compression (CC) depth on air mattresses with the typical configurations seen in intensive care units. To determine this efficacy, we measured mattress compression depth (MCD, mm) on these surfaces using dual accelerometers. Eight cardiopulmonary resuscitation providers performed CCs on manikins lying on 4 different surfaces using a visual feedback system. The surfaces were as follows: A, a bed frame; B, a deflated air mattress placed on top of a foam mattress laid on a bed frame; C, a typical air mattress configuration with an inflated air mattress placed on a foam mattress laid on a bed frame; and D, C with a backboard. Deflation of the air mattress decreased MCD significantly (B; 14.74 ± 1.36 vs C; 30.16 ± 3.96, P &lt; 0.001). The use of a backboard also decreased MCD (C; 30.16 ± 3.96 vs D; 25.46 ± 2.89, P = 0.002). However, deflation of the air mattress decreased MCD more than use of a backboard (B; 14.74 ± 1.36 vs D; 25.46 ± 2.89, P = 0.002). The use of a both a backboard and a deflated air mattress in this configuration reduces MCD and thus helps achieve accurate CC depth during cardiopulmonary resuscitation. abstract_id: PUBMED:28565967 Quality of chest compressions by healthcare professionals using real-time audiovisual feedback during in-hospital cardiopulmonary resuscitation. Introduction: A high quality of chest compressions, e.g. sufficient depth (5-6 cm) and rate (100-120 per min), has been associated with survival. The patient's underlay affects chest compression depth. Depth and rate can be assessed by feedback systems to guide rescuers during cardiopulmonary resuscitation. Aim: The purpose of this study was to describe the quality of chest compressions by healthcare professionals using real-time audiovisual feedback during in-hospital cardiopulmonary resuscitation. Method: An observational descriptive study was performed including 63 cardiac arrest events with a resuscitation attempt. Data files were recorded by Zoll AED Pro, and reviewed by RescueNet Code Review software. The events were analysed according to depth, rate, quality of chest compressions and underlay. Results: Across events, 12.7% (median) of the compressions had a depth of 5-6 cm. Compression depth of &gt;6 cm was measured in 70.1% (median). The underlay could be identified from the electronic patient records in 54 events. The median compression depth was 4.5 cm (floor) and 6.7 cm (mattress). Across events, 57.5% (median) of the compressions were performed with a median frequency of 100-120 compressions/min and the most common problem was a compression rate of &lt;100 (median=22.3%). Conclusions: Chest compression quality was poor according to the feedback system. However, the distribution of compression depth with regard to underlay points towards overestimation of depth when treating patients on a mattress. Audiovisual feedback devices ought to be further developed. Healthcare professionals need to be aware of the strengths and weaknesses of their devices. abstract_id: PUBMED:17468556 Monitoring the quality of cardiopulmonary resuscitation. Purpose Of Review: Numerous recent reports have described limitations in the quality of cardiopulmonary resuscitation. Thus, there has been increasing interest in the techniques available to monitor quality. This review focuses on the major publications since the review published by the International Liaison Committee on Resuscitation in 2005. Some key articles published prior to this time period have also been included. Recent Findings: A number of devices can monitor various components of the quality of cardiopulmonary resuscitation. End-tidal CO2 measurement assists in confirming placement of endotracheal tubes, correlates with cardiac output and detects the return of spontaneous circulation. Turbine flow-meters monitor respiratory rate and tidal volume. Transthoracic impedance monitoring measures respiratory rate, and may assist in confirmation of endotracheal tube placement. A new mechanical device (CPREzy) and a new defibrillator/monitor allow estimation of depth (and rate) of compressions. Ventricular-fibrillation waveform analysis may facilitate better timing of defibrillation. Echocardiography detects conditions that may impair the quality of cardiopulmonary resuscitation. Summary: Many options are available to monitor the quality of cardiopulmonary resuscitation. Some have significant limitations, and others are only readily available in hospital. The use of the information from this more intensive monitoring promises to improve outcomes of cardiopulmonary resuscitation. abstract_id: PUBMED:30293601 Cardiopulmonary Resuscitation Quality Issues. Much of the current evidence and many of the recent treatment recommendations for increasing survival from cardiac arrest revolve around improving the quality of cardiopulmonary resuscitation during resuscitation. A focus on providing treatments proved beneficial and providing these treatments reliably, using measurement, monitoring, and implementation of quality-improvement strategies, will help eliminate variation in outcomes and provide a foundation from which future improvements in resuscitation care can be developed. Using the knowledge and tools available today will help reduce the ambiguity and variability that exists in resuscitation today and provide the ability to save more lives in communities. abstract_id: PUBMED:27440375 Survivors' quality of life after cardiopulmonary resuscitation: an integrative review of the literature. Background: The incidence of cardiac arrest and cardiopulmonary resuscitation continues to increase worldwide largely due to greater awareness of the symptoms of cardiac events and increased attention to cardiopulmonary resuscitation training in the community. Globally, predicted survival rates after cardiopulmonary resuscitation have remained at 10% for decades and although patient outcome remains unpredictable, there is a positive trend in life expectancy. For a resuscitation attempt to be classed as successful, not only survival but also quality of life has to be evaluated. Aim: The aim of this review was to examine literature that explores the quality of life (QOL) for survivors' after CPR and the influence cognitive impairment, anxiety, depression and post-traumatic stress disorder (PTSD) has had on their QOL. Review Methods: This review follows Whittemore and Knafl's framework for an integrative literature review. Electronic databases EBSCO, Ovid, PubMed and EMBASE were searched. After application of the inclusion and exclusion criteria, thirty-six papers published from January 2000 to June 2015 were included in this review. Results: These papers represent a broad spectrum of research evaluating quality of life for survivors of cardiopulmonary resuscitation. The heterogeneous research methods and vast number of different research tools make it challenging to compare the findings. The majority of papers concluded that quality of life for survivors of cardiac arrest and cardiopulmonary resuscitation was generally acceptable. However, studies also described survivors' experience of anxiety, depression, post-traumatic stress and cognitive dysfunction. Conclusion: A majority of papers reported an acceptable quality of life if the patient survived to hospital discharge. The heterogeneity in quantitative papers was noticeable and indicates a marked variance in patient outcomes. This review highlights the absence of specialized tools used to investigate survivors' experience of the event. Further exploration of the impact cardiopulmonary resuscitation has on the individual may improve ongoing rehabilitation and quality of life levels for survivors. abstract_id: PUBMED:30951004 Influence of Cardiopulmonary Resuscitation Coaching and Provider Role on Perception of Cardiopulmonary Resuscitation Quality During Simulated Pediatric Cardiac Arrest. Objectives: We aimed to describe the impact of a cardiopulmonary resuscitation coach on healthcare provider perception of cardiopulmonary resuscitation quality during simulated pediatric cardiac arrest. Design: Prospective, observational study. Setting: We conducted secondary analysis of data collected from a multicenter, randomized trial of providers who participated in a simulated pediatric cardiac arrest. Subjects: Two-hundred pediatric acute care providers. Interventions: Participants were randomized to having a cardiopulmonary resuscitation coach versus no cardiopulmonary resuscitation coach. Cardiopulmonary resuscitation coaches provided feedback on cardiopulmonary resuscitation performance and helped to coordinate key tasks. All teams used cardiopulmonary resuscitation feedback technology. Measurements And Main Results: Cardiopulmonary resuscitation quality was collected by the defibrillator, and perceived cardiopulmonary resuscitation quality was collected by surveying participants after the scenario. We calculated the difference between perceived and measured quality of cardiopulmonary resuscitation and defined accurate perception as no more than 10% deviation from measured quality of cardiopulmonary resuscitation. Teams with a cardiopulmonary resuscitation coach were more likely to accurately estimate chest compressions depth in comparison to teams without a cardiopulmonary resuscitation coach (odds ratio, 2.97; 95% CI, 1.61-5.46; p &lt; 0.001). There was no significant difference detected in accurate perception of chest compressions rate between groups (odds ratio, 1.33; 95% CI, 0.77-2.32; p = 0.32). Among teams with a cardiopulmonary resuscitation coach, the cardiopulmonary resuscitation coach had the best chest compressions depth perception (80%) compared with the rest of the team (team leader 40%, airway 55%, cardiopulmonary resuscitation provider 30%) (p = 0.003). No differences were found in perception of chest compressions rate between roles (p = 0.86). Conclusions: Healthcare providers improved their perception of cardiopulmonary resuscitation depth with a cardiopulmonary resuscitation coach present. The cardiopulmonary resuscitation coach had the best perception of chest compressions depth. abstract_id: PUBMED:17601847 Cardiopulmonary resuscitation in patients with cancer. Cancer is the second leading cause of death in the United States, with most of these deaths taking place in the hospital setting. Discussions on end-of-life care and on cardiopulmonary resuscitation in particular are an important component in the management of patients with cancer. Clinical decision making and respect for patient autonomy dictate that health care providers provide their patients with accurate information on the expected outcomes of cardiopulmonary resuscitation. This article reviews those factors that affect the outcome of cardiopulmonary resuscitation in patients with cancer and provides recommendations on obtaining do-not-resuscitate orders in these patients. abstract_id: PUBMED:10036483 The effects of pressure-reducing mattresses in CPR. In this article, the author summarises a small study examining the effects that support surface mattresses might have on perfusion in cardiopulmonary resuscitation. Answer: Yes, different mattresses do affect the quality of cardiopulmonary resuscitation (CPR). A study found that compression depth was significantly lower on a standard foam mattress, inflated pressure redistributing mattress, and deflated mattress compared to the floor, indicating that resuscitation performance is adversely affected when performed on a bed, irrespective of mattress type (PUBMED:14504728). Another study showed that deflation of an air mattress significantly decreased mattress compression depth (MCD) during CPR, and the use of a backboard also decreased MCD, suggesting that both deflation and the use of a backboard can improve the quality of chest compressions (PUBMED:23399985). Furthermore, the quality of CPR, including factors such as chest compression depth and rate, is correlated with patient survival, and recent developments in defibrillator technology have enabled the recording of CPR quality, highlighting that the quality of professional CPR often falls short of guideline levels (PUBMED:18467890). Real-time audiovisual feedback during in-hospital CPR has been shown to be a valuable tool for improving the quality of chest compressions, although the feedback system's accuracy may be affected by the underlay, such as a mattress, potentially leading to overestimation of compression depth (PUBMED:28565967). Monitoring devices and techniques are available to assess various components of CPR quality, and their use can help improve CPR outcomes (PUBMED:17468556). Emphasizing the importance of CPR quality, recent treatment recommendations for increasing survival from cardiac arrest focus on improving CPR quality through measurement, monitoring, and quality-improvement strategies (PUBMED:30293601). In summary, the type of mattress and the surface on which CPR is performed can significantly impact the quality of chest compressions, which in turn affects patient survival outcomes. Deflation of air mattresses and the use of backboards are strategies that can be employed to improve CPR quality on beds with mattresses. Monitoring and feedback systems are also crucial for ensuring high-quality CPR and improving patient outcomes.